Nominull comments on The mind-killer - Less Wrong

23 points · Post author: ciphergoth 02 May 2009 04:49PM


Comment author: Nominull 02 May 2009 07:47:45PM *  2 points [-]

I will admit to an estimate higher than 95% that humanity or its uploads will survive the next hundred years. Many of the "apocalyptic" scenarios people are concerned about seem unlikely to wipe out all of humanity; so long as we have a breeding population, we can recover.

Comment author: Nick_Tarleton 03 May 2009 05:33:37AM *  1 point [-]

No significant risk of unFriendly AI (especially since you apparently consider uploading within 100 years plausible)? Nanotech war? Even engineered disease? I'm surprised.

Comment author: mattnewport 03 May 2009 05:45:13AM *  1 point [-]

The comment appears to me to be saying there is no significant risk of wiping out all of humanity, not that there is no significant risk of any of the dangers you describe causing significant harm.

I think an unfriendly AI is somewhat likely for example but put a very low probability on an unfriendly AI completely wiping out humanity. The consequences could be quite unpleasant and worth working to avoid but I don't think it's an existential threat with any significant probability.

Comment author: Vladimir_Nesov 03 May 2009 11:58:38AM *  8 points [-]

That's a very strange perspective. Other threats are good in that they are stupid, so they won't find you if you colonize space or live on an isolated island, or have a lucky combination of genes, or figure out a way to actively outsmart them, etc. Stupid existential risks won't methodically exterminate every human, and so there is a chance for recovery. Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet. (Indifference works this way too, it's the application of power indifferent to humankind that is methodical, e.g. Paperclip AI.)

Comment author: Nominull 03 May 2009 10:34:12PM 0 points [-]

Consider: humanity is an intelligence, one not particularly friendly to, say, the fieldmouse. Fieldmice are not yet extinct.

Comment author: MBlume 03 May 2009 11:00:48PM 5 points [-]

I think it is worth considering the number of species to which humanity is largely indifferent that are now extinct as a result of humanity optimizing for other criteria.

Comment author: Nick_Tarleton 04 May 2009 04:31:59AM 2 points [-]

Humans satisfice, and not very well at that compared to what an AGI could do. If we effectively optimized for... almost any goal not referring to fieldmice... fieldmice would be extinct.

Comment author: Vladimir_Nesov 03 May 2009 10:57:05PM 1 point [-]

Humanity is weak.

Comment author: Nominull 03 May 2009 11:58:12PM 0 points [-]

Humanity is pretty damn impressive from a fieldmouse's perspective, I dare say!

Comment author: MBlume 04 May 2009 12:04:49AM 0 points [-]

Yet humanity cannot create technology on the level of a fieldmouse.

Comment author: MichaelHoward 03 May 2009 11:00:04PM 0 points [-]

Fieldmice (outside of Douglas Adams fiction) aren't any particular threat to us in the way we might be to the Unfriendly AI. They're not likely to program another us to fight us for resources.

If fieldmice were in danger of extinction we'd probably move to protect them, not that that would necessarily help them.

Comment author: mattnewport 03 May 2009 06:14:45PM -2 points [-]

You are assuming that mere intelligence is sufficient to give an AI an overwhelming advantage in any conflict. While I concede that is possible in theory, I consider it much less likely than seems to be the norm here. This is partly because I am also skeptical about the existential dangers of self-replicating nanotech, bioengineered viruses, and other such technologies that an AI might attempt to use in a conflict.

As long as there is any reasonable probability that an AI would lose a conflict with humans or suffer serious damage to its capacity to achieve its goals, its best course of action is unlikely to be an attempt to wipe out humanity. A paperclip maximizer, for example, would seem to further its goals better by heading to the asteroid belt, where it could pursue them without needing to devote large amounts of computational capacity to winning a conflict with other goal-directed agents.

Comment author: mattnewport 03 May 2009 11:06:50PM 2 points [-]

For people who've voted this down, I'd be interested in your answers to the following questions:

1) Can you envisage a scenario in which a greater than human intelligence AI with goals not completely compatible with human goals would ever choose a course of action other than wiping out humanity?

2) If you answered yes to 1), what probability do you assign to such an outcome, rather than an outcome involving the complete annihilation of humanity?

3) If you answered no to 1), what makes you certain that such a scenario is not possible?

Comment author: loqi 04 May 2009 01:30:23AM -1 points [-]

Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet.

Not on another planet, no. But I wonder how practical a constantly accelerating seed ship will turn out to be.

Comment author: Mario 03 May 2009 06:48:25PM 0 points [-]

I agree generally, but I think when we talk about wiping out humanity we should include the idea that if we were to lose a significant portion of our accumulated information it would be essentially the same as extinction. I don't see a difference between a stone-age-tech group of humans surviving the apocalypse and slowly repopulating the world and a different species (whether dogs, squirrels, or porpoises) doing the same thing.

Comment author: Nick_Tarleton 04 May 2009 04:28:31AM *  1 point [-]

I don't see a difference between a stone-age-tech group of humans surviving the apocalypse and slowly repopulating the world and a different species (whether dogs, squirrels, or porpoises) doing the same thing.

See In Praise of Boredom and Sympathetic Minds: random evolved intelligent species are not guaranteed to be anything we would consider valuable.

Comment author: Nominull 03 May 2009 10:30:57PM 1 point [-]

I like humans. I think they're cute :3

Comment author: mattnewport 03 May 2009 07:21:47PM 0 points [-]

We have pretty solid evidence that a stone-age-tech group of humans can develop a technologically advanced society in a few tens of thousands of years. I imagine it would take considerably longer for squirrels to get there, and I would be much less confident they could do it at all. It may well be that human intelligence is an evolutionary accident that has only happened once in the universe.

Comment author: Mario 03 May 2009 07:57:25PM *  0 points [-]

The squirrel civilization would be a pretty impressive achievement, granted. The destruction of this particular species (humans) would seemingly be a tremendous loss universally, if intelligence is a rare thing. Nonetheless, I see it as only a certain vessel in which intelligence happened to arise. I see no particular reason why intelligence should be specific to it, or why we should prefer it over other containers should the opportunity present itself. We would share more in common with an intelligent squirrel civilization than a band of gorillas, even though we would share more genetically with the latter. If I were cryogenically frozen and thawed out a million years later by the world-dominating Squirrel Confederacy, I would certainly live with them rather than seek out my closest primate relatives.

EDIT: I want to expand on this slightly. Say our civilization were to be completely destroyed, and a group of humans that had no contact with us were to develop a new civilization of their own concurrent with a squirrel population doing the same on the other side of the world. If that squirrel civilization were to find some piece of our history, say the design schematics of an electric toothbrush, and adopt it as a part of their knowledge, I would say that for all intents and purposes, the squirrels are more "us" than the humans, and we would survive through the former, not the latter.

Comment author: mattnewport 03 May 2009 09:26:23PM 0 points [-]

I don't see any fundamental reason why intelligence should be restricted to humans. I do think it's quite possible that intelligence arising in the universe is an extremely rare event, though. If you value intelligence and think it might be an unlikely occurrence, then the survival of some humans rather than no humans should surely be a much preferred outcome?

I disagree that we would have more in common with the electric toothbrush wielding squirrels. I've elaborated more on that in another comment.

Comment author: Mario 03 May 2009 09:36:22PM 1 point [-]

Preferred, absolutely. I just think that the survival of our knowledge is more important than the survival of the species sans knowledge. If we are looking to save the world, I think an AI living on the moon pondering its existence should be a higher priority than a hunter-gatherer tribe stalking wildebeest. The former is our heritage, the latter just looks like us.

Comment author: Vladimir_Nesov 03 May 2009 07:14:20PM *  0 points [-]

Does this imply that you are OK with a Paperclip AI wiping out humanity, since it will be an intelligent life form much more developed than we are?

Comment author: Mario 03 May 2009 07:49:18PM 0 points [-]

If I implied that, it was unintentional. All I mean is that I see no reason why we should feel a kinship toward humans as humans, as opposed to any species of people as people. If our civilization were to collapse entirely and had to be rebuilt from scratch, I don't see why the species that is doing the rebuilding is all that important -- they aren't "us" in any real sense. We can die even if humanity survives. By that same token, if the paperclip AI contains none of our accumulated knowledge, we go extinct along with the species. If the AI contains some of our knowledge and a good degree of sentience, I would argue that part of us survives despite the loss of this particular species.

Comment author: ciphergoth 03 May 2009 08:06:07PM 3 points [-]

Bear in mind, the paperclip AI won't ever look up to the broader challenges of being a sentient being in the Universe; the only thing that will ever matter to it, until the end of time, is paperclips. I wouldn't feel in that instance that we had left behind a creature that represented our legacy, no matter how much it knows about the Beatles.

Comment author: Mario 03 May 2009 08:50:21PM 0 points [-]

OK, I can see that. In that case, maybe a better metric would be the instrumental use of our accumulated knowledge, rather than its mere possession. Living in a library doesn't mean you can read, after all.

Comment author: ciphergoth 03 May 2009 11:01:08PM 3 points [-]

What I think you're driving at is that you want it to value the Beatles in some way. Having some sort of useful crossover between our values and its is the entire project of FAI.

Comment author: Mario 03 May 2009 11:17:45PM 1 point [-]

I'm just trying to figure out under what circumstances we could consider a completely artificial entity a continuation of our existence. As you pointed out, merely containing our knowledge isn't enough. Human knowledge is a constantly growing edifice, where each generation adds to and builds upon the successes of the past. I wouldn't expect an AI to find value in everything we have produced, just as we don't. But if our species were wiped out, I would feel comfortable calling an AI which traveled the universe occasionally writing McCartney- or Lennon-inspired songs "us." That would be survival. (I could even deal with a Ringo Starr AI, in a pinch.)

Comment author: ciphergoth 03 May 2009 11:29:20PM 1 point [-]

I strongly suspect that that is the same thing as a Friendly AI, and therefore I still consider UFAI an existential risk.

Comment author: Vladimir_Nesov 03 May 2009 09:21:59PM 1 point [-]

The Paperclip AI will optimally use its knowledge about the Beatles to make more paperclips.

Comment author: mattnewport 03 May 2009 09:19:53PM 0 points [-]

How much of what it means to be human do you think is cultural conditioning versus innate biological tendency? I think the evidence points to a very large biologically determined element to humanity. I would expect to find more in common with a hunter gatherer in a previously undiscovered tribe, or even with a paleolithic tribesman, than with an alien intelligence or an evolved dolphin.

If you read ancient Greek literature, it is easy to empathize with most of the motivations and drives of the characters even though they lived in a very different world. You could argue that our culture's direct lineage from theirs is a factor but it seems that westerners can recognize as fellow humans the minds behind ancient Chinese or Indian texts with less shared cultural heritage with our own.

Comment author: Mario 03 May 2009 09:45:52PM 1 point [-]

I don't consider our innate biological tendencies the core of our being. We are an intelligence superimposed on a particular biological creature. It may be difficult to separate the aspects of one from the other (and I don't pretend to be fully able to do so), but I think it's important that we learn which is which so that we can slowly deemphasize and discard the biological in favor of the solely rational.

I'm not interested in what it means to be human, I want to know what it means to be a person. Humanity is just an accident as far as I'm concerned. It might as well have been anything else.

Comment author: loqi 04 May 2009 01:34:59AM *  0 points [-]

I'm curious as to what sorts of goals you think a "solely rational" creature possesses. Do you have a particular point of disagreement with Eliezer's take on the biological heritage of our values?

Comment author: Mario 04 May 2009 02:30:54AM 0 points [-]

Oh, I don't know that. What would remain of you if you could download your mind into a computer? Who would you be if you were no longer affected by the levels of serotonin or adrenaline you produce, or by pheromones? Once you subtract the biological from the human, I imagine what remains to be pure person. There should be no difference between that person and one who was created intentionally, or one that evolved in a different species, beyond their personal experiences (controlling for the effects of their physiology).

I don't have any disagreement with Eliezer's description of how our biology molded our growth, but I see no reason why we should hold on to that biology forever. I could be wrong, however. It may not be possible to be a person without certain biological-like reactions. I can certainly see how this would be the case for people in early learning stages of development, particularly if your goal is to mold that person into a friendly one. Even then, though, I think it would be beneficial to keep those parts to the bare minimum required to function.

Comment author: loqi 04 May 2009 03:45:18AM 1 point [-]

What would remain of you if you could download your mind into a computer?

That depends on the resolution of the simulation. Wouldn't you agree?

Once you subtract the biological from the human, I imagine what remains to be pure person.

I think you're using the word "biological" to denote some kind of unnatural category.

I don't have any disagreement with Eliezer's description of how our biology molded our growth, but I see no reason why we should hold on to that biology forever.

The reasons you see for why any of us "should" do anything almost certainly have biologically engineered goals behind them in some way or another. What of self-preservation?

Comment author: Mario 04 May 2009 07:06:04PM 1 point [-]

Not unnatural, obviously, but a contaminant to intelligence. Manure is a great fertilizer, but you wash it off before you use the vegetable.

Comment author: mattnewport 02 May 2009 09:27:52PM -1 points [-]

I take much the same position.