Wei_Dai2 comments on Not Taking Over the World - Less Wrong

21 Post author: Eliezer_Yudkowsky 15 December 2008 10:18PM


Comment author: Wei_Dai2 16 December 2008 09:29:11PM 1 point

Anna Salamon wrote: Is it any safer to think ourselves about how to extend our adaptation-executer preferences than to program an AI to figure out what conclusions we would come to, if we did think a long time?

First, I don't know that "think about how to extend our adaptation-executer preferences" is the right thing to do. It's not clear why we should extend our adaptation-executer preferences, especially given the difficulties involved. I'd backtrack to "think about what we should want."

Putting that aside, the reason I prefer that we do it ourselves is that we don't know how to get an AI to do something like this except through opaque methods that can't be understood or debugged. I imagine the programmer telling the AI, "Stop, I think that's a bug," and the AI responding, "How would you know?"

g wrote: Wei Dai, singleton-to-competition is perfectly possible, if the singleton decides it would like company.

In that case the singleton might invent a game called "Competition," with rules decided by itself. Anti-prediction says it's pretty unlikely that those rules would happen to coincide with the rules of base-level reality, so base-level reality would still be controlled by the singleton.