Alicorn comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong

Post author: AnnaSalamon 19 May 2010 08:00AM




Comment author: snarles 25 May 2010 06:18:04AM

"First, the existential threat [of AGI] may be low."

Let me trace back the argument tree for a second. I originally asked for a defense of the claim that "SIAI is tackling the world's most important task." Michael Porter responded, "The real question is, do you even believe that unfriendly AI is a threat to the human race, and if so, is there anyone else tackling the problem in even a semi-competent way?" So NOW in this argument tree, we're assuming that unfriendly AI IS an existential threat, enough that preventing it is the "world's most important task."

Now in this branch of the argument, I assumed (but did not state) the following: if unfriendly AI is an existential threat, then friendly AI is also an existential threat, so long as there is some chance of it being modified into unfriendly AI. Furthermore, I assert that the notion that any organization could protect a friendly AI from being subverted is naive.

Comment author: Alicorn 25 May 2010 06:22:57AM

AIs, including ones with Friendly goals, are apt to work to protect their goal systems from modification, as this will prevent their efforts from being directed towards things other than their (current) aims. There might be a window while the AI is mid-FOOM where it's vulnerable, but not a wide one.

Comment author: snarles 25 May 2010 10:39:33AM

How are you going to protect the source code before you run it?