Alicorn comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong

26 points · Post author: AnnaSalamon 19 May 2010 08:00AM




Comment author: Alicorn 25 May 2010 06:22:57AM 2 points [-]

AIs, including ones with Friendly goals, are apt to work to protect their goal systems from modification, since doing so prevents their efforts from being redirected towards things other than their (current) aims. There might be a window mid-FOOM where the AI is vulnerable to such modification, but not a wide one.

Comment author: snarles 25 May 2010 10:39:33AM 1 point [-]

How are you going to protect the source code before you run it?