Yet who prohibits? Who prevents it from happening?

Eliezer seems absurdly optimistic to me. He is relying on some unseen entity to reach in and keep the laws of physics stable in our universe. We already see plenty of evidence that they are not truly stable: for example, we believe in the electroweak transition and in earlier transitions of various natures, depending on your school of physics. We /just saw/ in 1998 that unknown laws of physics can creep up and spring out by surprise, suddenly 'taking over' 74 percent of the Universe's postulated energy supply.

Who is keeping the clockwork universe running? Or, why are the hardware and software (operating system, etc.) for the automata fishbowl so damned stable? Is it part of the Library of Babel? Well, that's a good explanation, but the Bible's universe is in there too, and is arguably a priori more probable than a computer system that runs many-worlds quantum mechanics sims on a panuniversal scale and never halts and/or catches fire. It is very hard to keep a perfectly stable web server going when demand keeps expanding, and that seems much simpler.

Look into the anthropic principle literature and search for 'Lee Smolin', or start somewhere like http://evodevouniverse.com/wiki/Cosmological_natural_selection_(fecund_universes) for some reasoned speculation on how we might have wound up in such a big, rich, diverse universe from simpler origins.

I don't think it is rational to accept the stability of natural law without some reasonable model: either an account of the origins of said law, or a timeless physics in which stable law with apparently consistent evolution is an emergent property, or a gift from entities with powers far beyond ours, or some such.

If the worst that can happen is annihilation, you're living in a fool's paradise. Many people have /begged/ for true death, and in the animal world things happen all the time that are truly horrifying to most human eyes. I will avoid going into sickening detail here, but let me refer you to H.P. Lovecraft and Charles Stross for some examples of how the universe might have been slightly less friendly.

Moreover, we know of examples where natural selection has caused drastic decreases in organismal complexity – for example, canine venereal sarcoma, which today is an infectious cancer, but was once a dog.

Or human selection. Henrietta Lacks (or rather her cancer) is now many tonnes of cultured cells; she has become an organism that reproduces by mitosis and thrives in the niche environment of medical research labs.

I love the idea of an intelligence explosion but I think you have hit on a very strong point here:

In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

In fact, we can see from both history and paleontology that whenever a new breakthrough was made in "biological technology" - like the homeobox genes, or whatever triggered the Precambrian explosion of diversity - and self-modification was made easier (here a 'self' isn't one meat body, it's a clade of genes that sail through time and configuration space together - think of a current of bloodlines in spacetime that we might call a species, genus, or family; and the development of modern-style morphogenesis is in some way like developing a toolkit for modifying the body plan at some level), there was apparently an explosion of explorers, of bloodlines, into the newly accessible areas of design space.

But the explosion eventually ended. After the Diaspora into over a hundred phyla of critters hard enough to leave fossils, the expansion into new phyla stopped. Some sort of frontier was reached within tens of millions of years, and the next six hundred million years or so were spent slowly whittling improvements within phyla. Most phyla died out, in fact, while a few like Arthropoda took over many roles and niches.

We see very similar incidents throughout human history: look at the way languages develop, or technologies. For an example perhaps familiar to many readers, look at the history of algorithms. For thousands of years we see slow development in this field, from Babylonian algorithms for finding the area of a triangle, to the Sieve of Eratosthenes, to - after a lot of development - medieval Italian merchants writing down how to do double-entry bookkeeping.
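
As an aside, the Sieve of Eratosthenes is a nice illustration of how compactly such an ancient procedure can be stated today. A minimal sketch in Python, purely for illustration (the function name and the limit of 30 are arbitrary choices of mine):

    def sieve_of_eratosthenes(limit):
        """Return all primes <= limit by crossing off multiples."""
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(limit ** 0.5) + 1):
            if is_prime[p]:
                # Cross off every multiple of p, starting at p*p.
                for multiple in range(p * p, limit + 1, p):
                    is_prime[multiple] = False
        return [n for n in range(limit + 1) if is_prime[n]]

    print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]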

Then in the later part of the Renaissance there is some kind of phase change, and the mathematical community begins compiling books of algorithms quite consciously. This has happened before, in Sumer and Egypt to start, in Babylon and Greece, in Asia several times, and most notably in the House of Wisdom in Baghdad in the ninth century. But always there are these rising and falling cycles where people compile knowledge, then it is lost, and others have to rebuild; often the new cycle is helped by the rediscovery or re-appreciation of a few surviving texts from a prior cycle.

But around 1350 there begins a new cycle (which of course draws on surviving data from prior cycles) in which people accumulate formally expressed algorithms, a cycle unique in that it has lasted to this day. Much of what we call the mathematical literature consists of these collections, and in the 1930s people (Church, Turing, many others) finally develop what we might now call the classical theory of algorithms. Judging by the progress of various other disciplines, you would expect little more progress in this field, relative to such a capstone achievement, for a long time.

(One might note that this seven-century surge of progress might well be due, not to human mathematicians somehow becoming more intelligent in some biological way, but to the development of printing and associated arts and customs that led to the widespread dissemination of information in the form of journals and books with many copies of each edition. The custom of open-sourcing your potentially extremely valuable algorithms was probably as important as the technology of printing here; remember that medieval and ancient bankers and the like all had little trade secrets for handling numbers and doing maths in a formulaic way, but we don't retain in the general body of algorithmic lore any of their secret tricks unless they published or chance preserved some record of their methods.)

Now, we'd have expected Turing's 1930s work to be the high point in this field for centuries to come (and maybe it was; let history be the judge), but between the development of the /theory/ of a general computing machine, progress in other fields such as electronics, and a leg up from the intellectual legacy left by predecessors such as George Boole, the 1940s somehow put together (under enormous pressure of circumstances) a new sort of engine that could run algorithmic calculations without direct human intervention. (Note that here I say 'run', not 'design' - I mean that the new engines could execute algorithms on demand.)

The new computing engines, electro-mechanical as well as purely electronic, were very fast compared to their human predecessors. This led to something in algorithm space that looks to me a lot like the Precambrian explosion, with many wonderful critters like LISP and FORTRAN and BASIC evolving to bridge the gap between human minds and assembly language, which itself was a bridge to the level of machine instructions, which... and so on. Layers and layers developed, and then in the 1960s giants wrought mighty texts of computer science that no modern professor can match; in some sense we can only stare in awe at their achievements.

And then... although Moore's law worked on and on tirelessly, relatively little fundamental progress in computer science happened for the next forty years. There was a huge explosion in available computing power, but just as jpaulson suspects, merely adding computing power didn't cause a vast change in our ability to 'do computer science'. Some problems may /just be exponentially hard/ and an exponential increase in capability starts to look like a 'linear increase' by 'the important measure'.
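
(To make that intuition concrete with a toy model of my own - both exponentials are simply assumed here: if capability grows as C(t) = C_0 e^{rt} while the cost of reaching 'level' n grows as c_0 e^{kn}, then the highest reachable level at time t satisfies C_0 e^{rt} = c_0 e^{k n(t)}, which gives

    n(t) = \frac{r}{k}\, t + \frac{1}{k} \ln\frac{C_0}{c_0}

so exponentially growing capability buys only linear growth in the measure that matters.)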

It may well be that people will just... adapt... to exponentially increasing intellectual capacity by dismissing the 'easy' problems as unimportant, and by treating things that go on beyond the capacity of the human mind to grasp as "nonexistent" or "also unimportant". Right now, computers are executing many, many algorithms too complex for any one human mind to follow - and maybe too tedious for any but the most dedicated humans to follow, even in teams - and we still don't think they are 'intelligent'. If we can't recognize an intelligence explosion when we see one under our noses, it is entirely possible we won't even /notice/ the Singularity when it comes.

If it comes - as jpaulson indicates, there might be a never-ending series of 'tiers' where we think "Oh, past here it's just clear sailing up to the level of the Infinite Mind of Omega, we'll be there soon!" but when we actually reach the next tier, we might always find that there is a new kind of problem, hyperexponentially difficult to solve, before we can ascend further.

If it were all that easy, I would expect that whatever gave us self-reproducing wet nanomachines four billion years ago would have solved it - the ocean has been full of protists and free-swimming viruses, exchanging genetic instructions and evolving freely, for a very long time. This system certainly has a great deal of raw computing power, perhaps even more than it would appear on the surface. If she (the living ocean system as a whole) isn't wiser than the average individual human, I would be very surprised, and she apparently either couldn't create such a runaway explosion of intelligence, or decided it would be unwise to do so any faster than the intelligence explosion we've been watching unfold around us.

Even if such worlds do 'exist', whether I believe in magic within them is unimportant, since they are so tiny;

Since there is a good deal of literature indicating that our own world has a surprisingly tiny probability (ref: any introduction to the Anthropic Principle), I try not to dismiss the fate of such "fringe worlds" as completely unimportant.

army1987's argument above seems very good, though; I suggest you take his comment very seriously.

Is there an underlying problem of crying wolf; too many warning messages obscure the ones that are really matters of life and death?

This is certainly an enormous problem for interface design in general for many systems where there is some element of danger. The classic "needle tipping into the red" is an old and brilliant solution for some kinds of gauges - an analogue meter where you can see the reading tipping toward a brightly marked "danger zone", usually with a 'safe' zone and an intermediate zone also marked, has surely prevented many accidents. If the pressure gauge on the door had such a meter where green meant "safe to open hatches" and red meant "terribly dangerous", that might have been a better design than just raw numbers.
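
A crude sketch of that mapping in Python, with entirely made-up threshold numbers just to illustrate the three-zone idea (real values would of course come from the system's actual safety limits):

    def pressure_zone(reading_kpa, safe_max=90.0, caution_max=110.0):
        """Map a raw gauge reading to a zone label; thresholds here are invented."""
        if reading_kpa <= safe_max:
            return "green: safe to open hatches"
        elif reading_kpa <= caution_max:
            return "amber: equalize before opening"
        else:
            return "red: terribly dangerous"

    print(pressure_zone(101.3))  # "amber: equalize before opening" at these made-up settings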

I haven't worked with pressure doors but I have worked with large vacuum systems, cryogenic systems, labs with lasers that could blind you or x-ray machines that can be dangerously intense, and so on. I can attest that the designers of physics lab equipment do indeed put a good deal of thought and effort into various displays that indicate when the equipment is in a dangerous state.

However, when there are /many/ things that can go dangerously wrong, it becomes very difficult to avoid cluttering the operator's sensorium with warnings. The classic examples are the control panels of vehicles like airplanes or spaceships; you can see a beautiful illustration of the 'indicator clutter problem' in the movie "Airplane!".

"There is an object one foot across in the asteroid belt composed entirely of chocolate cake."

This is a lovely example, which sounds quite delicious. It reminds me strongly of the famous example of Russell's Teapot (from his 1952 essay "Is There a God?"). Are you familiar with his writing?

You'll just subconsciously avoid any Devil's arguments that make you genuinely nervous, and then congratulate yourself for doing your duty.

Yes, I have noticed that many of my favorite people, myself included, do seem to spend a lot of time on self-congratulation that they could be spending on reasoning or other pursuits. I wonder if you know anyone who is immune to this foible :)

stay away from this community

I responded to this suggestion but deleted the response as unsuitable because it might embarrass you. I would be happy to email my reply if you are interested.

we'd probably convince you such perma-death would be the highly probable outcome

Try reading what I said in more detail in both the post I made that you quoted and my explanation of how there might be a set of worlds of very small measure. Then go read Eliezer Yudkowsky's posts on Many Worlds (or crack a book by Deutsch or someone, or check Wikipedia). Then reread the clause you published here which I just quoted above, and see if you still stand by it, or if you can see just how very silly it is. I don't want to bother to try to explain things again that have already been very well explained on this site.

I am trying to communicate using local community standards of courtesy; it is difficult. I am used to a very different tone of discourse.

Now, whether that distributed information is 'experiencing' anything is arguable,

As far as I know, the latter is what people are worrying about when they worry about ceasing to exist.

Ahhh... that never occurred to me. I was thinking entirely in terms of risk of data loss.

(Which is presumably a reason why your comment's been downvoted a bunch; most readers would see it as missing the point.)

I don't understand the voting rules or customs. Downvoting people who see things from a different perspective is... a custom designed to keep out the undesirables? I am sorry I missed the point but I learned nothing from the downvoting. I learned a great deal from your helpful comment - thank you.

I thought one of the points of the discussion was to promote learning among the readership.

Substitute "within almost any given branch" — I think my point still goes through.

Ah... see, that's where I think the 'lost' minds are likely hiding out: in branches of infinitesimal measure. Which might sound bad, unless you have read up on the anthropic principle and realize that /we/ seem to be residing on just such a branch. (Read up on the anthropic principle if our branch of the universal tree doesn't strike you as very improbable.)

I'm not worried that there won't be a future branch that what passes for my consciousness (I'm a P-zombie, I think, so I have to say "what passes for") will survive on. I'm worried that some consciousnesses, equivalent in awareness to 'me' or better, might be trapped in very unpleasant branches. If "I" am permanently trapped in an unpleasant branch, I absolutely do want my consciousness shut down if it's not serving some wonderful purpose that I'm unaware of. If my suffering does serve such a purpose, then I'm happy to think of myself as a utility mine, where external entities can come and mine for positive utilons as long as they get more positive utilons out of me than the negative utilons they leave me with.

My perceived utility function often goes negative. When that happens, I would be extremely tempted to kill my meat body if there were a guarantee it would extinguish my perceived consciousness permanently. That would be a huge reward to me in that frame of mind, not a loss. This may be why I don't see these questions the way most people here do.

P.S. Is there a place where the rating system is explained? I have looked casually and not found it with a few minutes of effort; it seems like it should be explained prominently somewhere. Are downvotes intended as a punitive training measure ("don't post this! bad monkey!") or just a guide to readers ("don't bother reading this, it's drivel, by our community standards")? I was assuming the latter.

People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life ... nothing CAN do this, because nothing HAS done it.

The grey goo scenario isn't really very silly. We seem to have had a green goo scenario around 1.5 to 2 billion years ago that killed off many or most critters around, due to the release of deadly, deadly oxygen; if the bacterial ecosystem were completely stable against goo scenarios this wouldn't have happened. We have had mini goo scenarios when, for example, microbiota pretty well adapted to one species made the jump to another and, oops, started reproducing rapidly and killing off their new host species, e.g. Yersinia pestis. Just because we haven't seen a more omnivorous goo sweep over the ecosphere recently (other than Homo sapiens, which is actually a pretty good example of a grey goo - think of the species as a crude mesoscale universal assembler, spreading pretty fast, killing off other species at a good clip, and chewing up resources quite rapidly) doesn't mean it couldn't happen at the microscale also. Ask the anaerobes, if you can find them; they are still hiding pretty well after the chlorophyll incident.

Since the downside is pretty far down, I don't think complacency is called for. A reasonable caution before deploying something that could perhaps eat everyone and everything in sight seems prudent.

Remember that the planet spent almost 4 billion years more or less covered in various kinds of goo before the Precambrian explosion. We know /very little/ of the true history of life in all that time; there could have been many, many, many apocalyptic scenarios in which a new goo was deployed that spread over the planet and ate almost everything, then either died wallowing in its own crapulence or formed the base layer for a new sort of evolution.

Multicellular life could have started to evolve /thousands of times/, only to be wiped out by goo. If multicellulars only rarely got as far as bones or shells, and were more vulnerable to being wiped out by a goo-plosion than single-celled critters that could rebuild their population from a few surviving pockets or spores, how would we even know? Maybe it took billions of years for the Great War of Goo to end in a Great Compromise that allowed mesoscopic life to begin to evolve; maybe there were great distributed networks of bacterial and viral biochemical computing engines that developed intelligence far beyond our own and eventually developed altruism and peace, deciding to let multicellular life develop.

Or we eukaryotes are the stupid runaway "wet"-technology grey goo of prior prokaryote/viral intelligent networks, and we /destroyed/ their networks and intelligence with our runaway reproduction. Maybe the reason we don't see disasters like forests and cities dissolving in swarms of Andromeda Strain-like universal gobblers is that such gobblers were outlawed, or safeguards against them engineered in, long ago. Or, more conventionally, such safeguards evolved.

What we /do/ think we know about the history of life is that the Earth evolved single-celled life (or inherited it via panspermia, etc.) within about half a billion years of the Earth's coalescence, and then some combination of goo more or less ruled the roost on the Earth's surface (as far as biology goes) for over three billion years, especially if you count colonies like stromatolites as gooey. In the middle of this long period was at least one event that looked like a goo apocalypse and remade the Earth profoundly enough that the traces are very obvious (e.g. huge beds of iron ore). But there could have been many more mass extinctions than we know of.

Then, less than a billion years ago, something changed profoundly and multicellulars started to flourish. This era is less than a sixth of the span of life on Earth. So the short history here is: five-sixths goo-dominated world, one-sixth non-goo-dominated world. This does not fill me with confidence that our world is very stable against a new kind of goo based on non-wet, non-biochemical assemblers.

I do think we are pretty likely not to deploy grey goo, though. Not because humans are not idiots - I am an idiot, and it's the kind of mistake I would make, and I'm demonstrably above average by many measures of intelligence. It's just that I think Eliezer and others will deploy a pre-nanotech Friendly AI before we get to the grey goo tipping point, and that it will be smart enough, altruistic enough, and capable enough to prevent humanity from bletching the planet as badly as the green microbes did back in the day :)

Yes, I am sorry for the mistakes; I am not sure if I can rectify them. I see now about protecting special characters, and I will try to comply.

I am sorry, I have some impairments and it is hard to make everything come out right.

Thank you for your help
