
Eliezer_Yudkowsky comments on "Stupid" questions thread - Less Wrong Discussion

40 Post author: gothgirl420666 13 July 2013 02:42AM



You are viewing a single comment's thread.

Comment author: ikrase 13 July 2013 08:22:51AM 9 points
  • What's with the ems? People who are into ems seem to make a lot of assumptions about what ems are like, seem completely unattached to present-day culture or even the structure of life, seem willing to spam duplicates of people around, etc. I know that Hanson thinks that (1) ems will not be robbed of their humanity and (2) lots of things we currently consider horrible will come to pass and be accepted, but it's rather strange how, as soon as people say 'em' (as opposed to any other form of uploading), everything gets weird. Does anthropics come into it?

  • Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI? It seems like a much lower-risk project. (And yes, I'm aware of the need for Friendliness in Obedient AI.)

Comment author: Eliezer_Yudkowsky 14 July 2013 04:15:54AM 7 points

Well, no offense, but I'm not sure you are aware of the need for Friendliness in Obedient AI, or rather, just how much F you need in a genie.

If you were to actually figure out how to build a genie, you would have figured it out by trying to build a CEV-class AI, intending to tackle all those challenges, tackling all those challenges, having pretty good solutions to all of those challenges, not trusting those solutions quite enough, and temporarily retreating to a mere genie which had ALL of the safety measures one would intuitively imagine necessary for a CEV-class independently-acting unchecked AI, to the best grade you could currently implement them. Anyone who thought they could skip the hard parts of CEV-class FAI by just building a genie instead would die like a squirrel under a lawnmower, for reasons they didn't even understand, because they hadn't become engaged with that part of the problem.

I'm not certain that this must happen in reality. The problem might have much kinder qualities than I anticipate, in the sense that mistakes naturally show up early enough and blatantly enough for corner-cutters to spot them. But it's how things are looking as a default after becoming engaged with the problems of CEV-class AI. The same problems show up in proposed 'genies' too; it's just that the genie-proposers don't realize it.

Comment author: ikrase 14 July 2013 10:30:51AM 0 points

I'm... not sure what you mean by this. And I wouldn't be against putting a whole CEV-ish human morality in an AI, either. My point is that there seems to be a big space between your Outcome Pump fail example and highly paternalistic AIs of the sort that caused Failed Utopia 4-2.

It reminds me a little of how modern computers are only occasionally used for computation.

Comment author: Eliezer_Yudkowsky 14 July 2013 11:18:49PM 6 points

Anything smarter-than-human should be regarded as containing unimaginably huge forces held in check only by the balanced internal structure of those forces, since there is nothing which could resist them if unleashed. The degree of 'obedience' makes very little difference to this fact, which must be dealt with before you can go on to anything else.

Comment author: NancyLebovitz 14 July 2013 01:24:53PM 5 points

As I understand it, an AI is expected to make huge, inventive efforts to fulfill its orders as it understands them.

You know how sometimes people cause havoc while meaning well? Imagine something immensely more powerful and probably less clueful making the same mistake.