Shane_Legg comments on The Magnitude of His Own Folly - Less Wrong

Post author: Eliezer_Yudkowsky, 30 September 2008 11:31AM

Comment author: Shane_Legg, 30 September 2008 05:33:33PM, 1 point

Eli,

FAI problems are AGI problems; they are simply a particular kind and style of AGI problem in which large sections of the solution space have been crossed out as unstable.

OK, but this doesn't change my point: you're just one small group out of many around the world doing AI research, and you're trying to solve an even harder version of the problem while using fewer of the available methods. These factors alone make it unlikely that you'll be the ones to get there first. If this is correct, then your work is unlikely to affect the future of humanity.

Vladimir,

Outcompeting other risks only becomes relevant when you can provide a better outcome.

Yes, but that might not be all that hard. Most AI researchers I talk to about AGI safety think the idea is nuts -- even the ones who believe that superintelligent machines will exist in a few decades. If somebody is going to set off a superintelligent machine, I'd rather it was one that will only *probably* kill us, rather than one that almost certainly will kill us because issues of safety haven't even been considered.

If I had to sum up my position, it would be: maximise the safety of the first powerful AGI, because that's likely to be the one that matters. Provably safe theoretical AGI designs aren't going to matter much to us if we're already dead.