Eliezer_Yudkowsky comments on Against easy superintelligence: the unforeseen friction argument - Less Wrong

Post author: Stuart_Armstrong 10 July 2013 01:47PM


Comment author: Eliezer_Yudkowsky 11 July 2013 05:20:46AM 6 points

A key point about intelligent agency is that it can produce positive noise as well as negative noise. Imagine someone who'd watched the slow evolution of life on Earth over the last billion years, looking at brainy hominids and thinking, "Well, these brains seem mighty efficient, maybe even a thousand times as efficient as evolution, but surely not everything will go as expected." They would be correct to widen their confidence intervals based on this "not everything will go as I currently expect" heuristic. They would be wrong to widen their confidence intervals only downward. Human intelligence selectively sought out and exploited the most positive opportunities.

Comment author: Stuart_Armstrong 11 July 2013 09:18:46AM 8 points

Yes, but extra positive noise in the strong FOOM scenarios doesn't make much difference. If a bad AI fooms in 30 minutes rather than an hour, or even in 5 minutes, we're still equally dead.

Comment author: Larks 11 July 2013 09:51:33AM 4 points

Positive noise might mean being able to FOOM from a lower starting base.

Comment author: Stuart_Armstrong 11 July 2013 10:00:35AM 0 points

Point taken, though I've already increased my probability of early FOOM (http://www.youtube.com/watch?v=ad4bHtSXiFE).

And I stand by the point that most noise will be negative. Changing random things in, say, the Earth's ecosystem may open up great new opportunities, but it is more likely to cost us than to benefit us.
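
That intuition is easy to check with a toy simulation. The sketch below is a minimal illustration, assuming a hypothetical quadratic fitness landscape (none of the names or numbers come from the thread): for a system sitting near an optimum, almost every random perturbation makes things worse.

    import random

    # Toy model (hypothetical, for illustration only): fitness is a
    # quadratic with its maximum at the origin, so a "tuned" system
    # sits near the top of a smooth hill.
    def fitness(state):
        return -sum(x * x for x in state)

    DIM = 50          # number of dimensions the system can vary in
    TRIALS = 10_000   # random perturbations to sample

    random.seed(0)

    # Start close to (but not exactly at) the optimum.
    state = [random.gauss(0, 0.1) for _ in range(DIM)]
    base = fitness(state)

    improved = sum(
        1
        for _ in range(TRIALS)
        if fitness([x + random.gauss(0, 0.5) for x in state]) > base
    )

    print(f"{improved / TRIALS:.1%} of random perturbations improved fitness")
    # Typically prints ~0.0%: random change to an already-tuned system
    # overwhelmingly costs rather than benefits it.

An optimizer, of course, does not sample perturbations at random; it selects the rare beneficial ones, which is Eliezer's point above about intelligence seeking out the positive tail.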