Vladimir_Nesov comments on The mind-killer - Less Wrong

23 Post author: ciphergoth 02 May 2009 04:49PM


Comments (151)


Comment author: Vladimir_Nesov 03 May 2009 11:58:38AM *  8 points [-]

That's a very strange perspective. Other threats are good in that they are stupid: they won't find you if you colonize space, live on an isolated island, have a lucky combination of genes, figure out a way to actively outsmart them, etc. Stupid existential risks won't methodically exterminate every human, and so there is a chance of recovery. Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet. (Indifference works this way too: it's the application of power indifferent to humankind that is methodical, e.g. a paperclip AI.)

Comment author: Nominull 03 May 2009 10:34:12PM 0 points [-]

Consider: humanity is an intelligence, one not particularly friendly to, say, the fieldmouse. Fieldmice are not yet extinct.

Comment author: MBlume 03 May 2009 11:00:48PM 5 points [-]

I think it is worth considering the number of species to which humanity is largely indifferent that have nonetheless gone extinct as a result of humanity optimizing for other criteria.

Comment author: Nick_Tarleton 04 May 2009 04:31:59AM 2 points [-]

Humans satisfice, and not very well at that compared to what an AGI could do. If we effectively optimized for... almost any goal not referring to fieldmice... fieldmice would be extinct.

Comment author: Vladimir_Nesov 03 May 2009 10:57:05PM 1 point [-]

Humanity is weak.

Comment author: Nominull 03 May 2009 11:58:12PM 0 points [-]

Humanity is pretty damn impressive from a fieldmouse's perspective, I dare say!

Comment author: MBlume 04 May 2009 12:04:49AM 0 points [-]

Yet humanity cannot create technology on the level of a fieldmouse.

Comment author: MichaelHoward 03 May 2009 11:00:04PM 0 points [-]

Fieldmice (outside of Douglas Adams's fiction) aren't any particular threat to us in the way we might be to the Unfriendly AI. They're not likely to program another us to fight us for resources.

If fieldmice were in danger of extinction we'd probably move to protect them, not that that would necessarily help them.

Comment author: mattnewport 03 May 2009 06:14:45PM -2 points [-]

You are assuming that mere intelligence is sufficient to give an AI an overwhelming advantage in any conflict. While I concede that this is possible in theory, I consider it much less likely than seems to be the norm here. This is partly because I am also skeptical about the existential dangers of self-replicating nanotech, bioengineered viruses, and other such technologies that an AI might attempt to use in a conflict.

As long as there is any reasonable probability that an AI would lose a conflict with humans, or suffer serious damage to its capacity to achieve its goals, its best course of action is unlikely to be an attempt to wipe out humanity. A paperclip maximizer, for example, would seem to further its goals better by heading to the asteroid belt, where it could advance them without needing to devote large amounts of computational capacity to winning a conflict with other goal-directed agents.

Comment author: mattnewport 03 May 2009 11:06:50PM 2 points [-]

For people who've voted this down, I'd be interested in your answers to the following questions:

1) Can you envisage a scenario in which an AI of greater-than-human intelligence, with goals not completely compatible with human goals, would ever choose a course of action other than wiping out humanity?

2) If you answered yes to 1), what probability do you assign to such an outcome, rather than an outcome involving the complete annihilation of humanity?

3) If you answered no to 1), what makes you certain that such a scenario is not possible?

Comment author: loqi 04 May 2009 01:30:23AM -1 points [-]

"Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet."

Not on another planet, no. But I wonder how practical a constantly accelerating seed ship will turn out to be.