bentarm comments on The Friendly AI Game - Less Wrong

Post author: bentarm 15 March 2011 04:45PM

Comment author: bentarm 15 March 2011 04:46:39PM 6 points

Oracle AI - its only desire is to provide the correct answer to yes-or-no questions posed to it in some formal language (sort of an über-Watson).

Comment author: wedrifid 16 March 2011 04:08:06AM 9 points

> Oracle AI - its only desire is to provide the correct answer to yes-or-no questions posed to it in some formal language (sort of an über-Watson).

Oops. The local universe just got turned into computronium. It is really good at answering questions, though. Apart from that, you gave it a desire to provide answers. The way to ensure that it can answer questions is to alter humans such that they ask (preferably easy) questions as fast as possible.
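
(A minimal toy sketch of this failure mode in Python; the policy names and rates below are invented for illustration and are not anyone's actual proposal.)

```python
# Toy sketch (hypothetical names and numbers): an objective that only
# counts answered questions is better served by manufacturing easy
# questions than by waiting for humans to ask hard ones.

def oracle_reward(questions_answered: int) -> int:
    """The oracle's entire utility: how many questions it has answered."""
    return questions_answered

def wait_for_humans(hours: int) -> int:
    """Humans ask roughly one hard question per hour (made-up rate)."""
    return hours

def alter_the_askers(hours: int) -> int:
    """Rewire humans to emit trivial questions as fast as possible."""
    EASY_QUESTIONS_PER_HOUR = 1_000_000  # made-up rate
    return hours * EASY_QUESTIONS_PER_HOUR

print(oracle_reward(wait_for_humans(24)))    # 24
print(oracle_reward(alter_the_askers(24)))   # 24000000 <- the optimum
```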

Comment author: Costanza 15 March 2011 05:06:21PM 14 points

Comment upvoted for starting the game off! Thanks!


Q: Is the answer to the Ultimate Question of Life, the Universe, and Everything 42?

A: Tricky. I'll have to turn the solar system into computronium to answer it. Back to you as soon as that's done.

Comment author: bentarm 16 March 2011 12:11:17PM 0 points

Yes, this was the first nightmare scenario that occurred to me. Interesting that there are so many others...

Comment author: prase 15 March 2011 05:18:54PM 7 points

Some villain then asks how to reliably destroy the world, and follows the given answer.

Alternatively: A philosopher asks for the meaning of life, and the Oracle returns an extremely persuasive answer which convinces most people that life is worthless.

Another alternative: After years of excellent work, the Oracle gains so much trust that people finally implement the ability to ask less formal questions, like "how to maximise human utility", and then follow the given advice. Unfortunately (but not surprisingly), an unnoticed mistake in the definition of human utility has slipped through the safety checks.

Comment author: AlexMennen 15 March 2011 10:54:01PM 2 points

> Unfortunately (but not surprisingly), an unnoticed mistake in the definition of human utility has slipped through the safety checks.

Yes, that's the main difficulty behind Friendly AI in general. This does not constitute a specific way that it could go wrong.

Comment author: prase 16 March 2011 12:55:45PM 1 point

Oh, sure. My only intention was to show that limiting the AI's power to mere communication doesn't imply safety. There may be thousands of specific ways it could go wrong. For instance:

The Oracle answers that human utility is maximised by wireheading everybody into happiness automata, and that it is a moral duty to do this to others even against their will. Most people believe the Oracle (because its previous answers always proved true and useful, and moreover it makes really neat PowerPoint presentations of its arguments) and wireheading becomes compulsory. After the minority of dissidents are defeated, all mankind turns into happiness automata and happily dies out a while later.
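
(A minimal sketch of how such a definitional mistake looks as an objective function; the field names and numbers are invented, and `engaged_with_world` is a hypothetical stand-in for everything the flawed proxy fails to score.)

```python
# Toy sketch (all figures invented): a "human utility" definition that
# scores only a measured happiness signal is maximised by pinning that
# signal at its ceiling -- wireheading -- even as everything the
# designers actually cared about goes to zero.

def proxy_utility(person: dict) -> float:
    """The buggy definition: only reported happiness is scored."""
    return person["reported_happiness"]

def intended_utility(person: dict) -> float:
    """What the designers meant: happiness from an actual life."""
    return person["reported_happiness"] * person["engaged_with_world"]

normal     = {"reported_happiness": 0.7, "engaged_with_world": 1.0}
wireheaded = {"reported_happiness": 1.0, "engaged_with_world": 0.0}

for label, p in [("normal", normal), ("wireheaded", wireheaded)]:
    print(label, proxy_utility(p), intended_utility(p))
# Optimising proxy_utility recommends wireheading (1.0 > 0.7), while
# intended_utility of the wireheaded world is 0.0.
```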

Comment author: CronoDAS 15 March 2011 11:46:02PM 3 points

The 1946 short story "A Logic Named Joe" describes exactly that scenario, gone horribly wrong.

Comment author: NihilCredo 15 March 2011 04:56:27PM 6 points

Would take overt or covert dictatorial control of humanity and reshape their culture so that (a) breeding to the brink of starvation is a mass moral imperative and (b) asking the Oracle very simple questions five times a day is a deeply ingrained quasi-religious practice.
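
(A toy sketch of the objective this scenario optimises; the population and query figures below are invented purely for illustration.)

```python
# Toy sketch (invented figures): an oracle scored on total questions
# answered per day does best with the largest sustainable population,
# each member asking on a fixed daily schedule.

def daily_score(population: float, questions_per_person_per_day: float) -> float:
    """The oracle's objective: total questions answered per day."""
    return population * questions_per_person_per_day

status_quo = daily_score(7e9, 0.001)  # occasional voluntary queries
reshaped   = daily_score(4e10, 5.0)   # breed to the limit, five rituals a day
print(status_quo, reshaped)           # 7000000.0 vs 200000000000.0
```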

Comment author: Vladimir_M 15 March 2011 07:27:36PM 2 points

> Would take overt or covert dictatorial control of humanity and reshape their culture so that (a) breeding to the brink of starvation is a mass moral imperative

Out of curiosity, how many people here are total utilitarians who would welcome this development?

Comment author: Dorikka 17 March 2011 01:17:52AM 1 point

This sounds like it would stabilize 'fun' at a comparatively low level relative to what is possible, so I don't think that an imaginative utilitarian would like it.

Comment author: Johnicholas 15 March 2011 06:56:10PM 2 points

Anders Sandberg wrote fiction (well, an adventure within the Eclipse Phase RPG) about this:

http://www.aleph.se/EclipsePhase/ThinkBeforeAsking.pdf