MichaelGR comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR 11 November 2009 03:00AM


Comments (682)


Comment author: MichaelGR 14 November 2009 01:20:26AM 1 point

I would ask the same question of other AGI organizations if I could, but this is a Q&A with only Eliezer (though I'm also curious to know whether he knows anything about what other groups are doing with regard to this).

Regardless of who gets to AGI first, that group could potentially run into the kind of problems I mentioned. I never said it was the most probable thing that could go wrong, but it should probably be looked into seriously, since if it does happen, it could be pretty catastrophic.

The way I see it, either AGI is developed in secret (Eliezer could be putting the finishing touches on the code right now without telling anyone), or it's developed fairly openly, with mathematical and algorithmic breakthroughs discussed at conferences, on the net, and in papers. In the latter case, a big breakthrough could attract the attention of powerful organizations, or of AGI researchers who understand it well enough to know they're too far behind to catch up, and who might conclude that their best path to getting there first is to convince an intelligence agency to steal the code, or something similar. Again, the specifics aren't important here; what matters is the general principle of what to do about security as we get closer to full AGI.