roystgnr comments on Evidence for the orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong 03 April 2012 10:58AM

Comment author: roystgnr 03 April 2012 06:52:13PM 6 points [-]

Many of our tools are supposed to be web browsers, email clients, etc., but have a history of suddenly doing something completely nuts, like taking over the whole computer, which was obviously not the intended purpose. Programming is hard that way: the result will only ever follow your program, verbatim. Attempts to give programs a greater sense of context and implications aren't new - they're called "higher-level languages". They feel less like hand-holding a dumb machine and more like describing a thought process, and you can even design the language to make whole classes of lower-level bugs unwritable, but machines still end up doing what they're instructed, verbatim (where "what they're instructed" can now also include the output of compiler bugs).
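[Editor's note: a minimal sketch, not from the original comment, of what "making a class of lower-level bugs unwritable" looks like in practice. In C, an out-of-bounds read is undefined behavior and may silently return garbage; in a higher-level language like Python, the same verbatim mistake cannot corrupt memory - the runtime forces it to surface as an exception.]

```python
# The same off-by-one mistake that silently reads garbage in C
# is impossible to write "silently" in Python: it raises instead.
xs = [10, 20, 30]

try:
    value = xs[3]      # off-by-one: valid indices are 0..2
except IndexError:
    value = None       # the language makes the bug impossible to miss

print(value)           # prints None
```

The machine still did exactly what it was told - it just told us loudly when the instructions ran off the end of the list.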

The trouble is that you can't rule out every class of bugs. It's hard (impossible?) to distinguish a priori between what might be a bug and what might just be a different programmer's intention, even though we've been wishing for the ability to do so for over a century. "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?"
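[Editor's note: a small illustrative sketch, not from the original comment, of why bug and intention can be textually identical. A divisor of `n - 1` is a bug in a mean but deliberate in a sample variance (Bessel's correction), so no compiler can flag the pattern by itself.]

```python
def mean(xs):
    # Intended: arithmetic mean. The divisor is off by one - a bug here...
    return sum(xs) / (len(xs) - 1)

def sample_variance(xs):
    # ...but the identical divisor is correct here (Bessel's correction),
    # so the pattern "divide by len(xs) - 1" is not a bug per se.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(mean([2, 4, 6]))             # prints 6.0, not the intended 4.0
print(sample_variance([2, 4, 6]))  # prints 4.0, which is correct
```

Both functions do exactly what they say, verbatim; only the programmer's intent separates the wrong one from the right one.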

Comment author: NancyLebovitz 03 April 2012 07:00:49PM 2 points [-]

Thank you. I've been trying to argue that "the computer does what you tell it to" is a much more chaotic situation than those who want to build FAI seem to believe, and you lay it out better than I have.

Comment author: Eugine_Nier 04 April 2012 05:13:13AM 0 points [-]

"Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?"

Yet, people around here seem to believe that the AI will develop an accurate model of the world even if its input isn't all that accurate.

Comment author: JGWeissman 04 April 2012 05:28:31AM 0 points [-]

people around here seem to believe that the AI will develop an accurate model of the world even if its input isn't all that accurate.

Who believes what, exactly?