Dmytry comments on SotW: Be Specific - Less Wrong

Post author: Eliezer_Yudkowsky 03 April 2012 06:11AM


Comment author: Dmytry 07 April 2012 11:35:49AM, -2 points

> A dumb (i.e. about as smart as us but far more rational) AI would, I assume, think along the lines of:

Sounds like a lot of common sense that is very difficult to derive rationally.

"So... I'm kinda dumb. How about I make myself smart before I fuck with stuff? So for now I'll do a basic analysis of what my humans seem to want and make sure I don't do anything drastic to damage that while I'm in my recursive improvement stage. For example I'm definitely not going to turn them all into computation. It don't take a genius to figure out they probably don't want that."

> Have you got a better idea than that? If so, then probably the FAI would do that instead of what I just came up with after 2 seconds' thought.

Just a little more anthropomorphizing and we'll be speaking of an AI that just knows, innately, what the moral thing to do is, because he's such a good guy.

The 'basic analysis of what my humans seem to want' has fairly creepy overtones (hypothesis-testing style). On top of that, say you tell it, "okay, just do whatever you think I would do if I thought faster," and it obliges: you are vaporized, because you would have gotten bored into suicide if you thought faster; that is just how your simple value system works. What exactly is wrong with that course of action? I don't think 'extrapolating' is well defined.

re: volition of mankind. Yep.

Comment author: wedrifid 07 April 2012 12:27:07PM, 1 point

> Sounds like a lot of common sense

It doesn't sound like particularly common sense; I'd guess that significantly fewer than half of humans would arrive at that as a cached 'common sense' solution.

> that is very difficult to derive rationally.

It's an utterly trivial application of instrumental rationality. I can come up with it in 2 seconds. If the AI is as smart as I am (and has far fewer human biases), it can arrive at the solution as easily as I can. Especially after it reads every book on strategy that humans have written. Heck, it can read my comment and then decide whether it is a good strategy.

Artificial intelligences aren't stupid.

> Just a little more anthropomorphizing and we'll be speaking of an AI that just knows, innately, what the moral thing to do is, because he's such a good guy.

Or... not. That's utter nonsense. We have been explicitly describing AIs that have been programmed with terminal goals. The AI would then act to fulfil those terminal goals; no innate moral sense comes into it.

> The 'basic analysis of what my humans seem to want' has fairly creepy overtones (hypothesis-testing style). On top of that, say you tell it, "okay, just do whatever you think I would do if I thought faster," and it obliges: you are vaporized, because you would have gotten bored into suicide if you thought faster; that is just how your simple value system works. What exactly is wrong with that course of action? I don't think 'extrapolating' is well defined.

CEV is well enough defined that it just wouldn't do that unless you actually do want it - in which case you, well, want it to do that, so you have no cause to complain. Reading even the incomplete specification from 2004 is sufficient to tell us that a GAI that does that is not implementing something that can reasonably be called CEV. I must conclude that you are replying to a straw man (presumably due to not having actually read the materials you criticise).

Comment author: Dmytry 07 April 2012 01:52:07PM, -2 points

CEV is not defined to do what your as-is self actually wants, but to do what you would have wanted, even in circumstances where your as-is self actually wants something else, as the 2004 paper cheerfully explains.

In any case, once you assume such intent-understanding, interpretative powers in the AI, it's hard to demonstrate why instructing the AI in plain English to "Be a good guy. Don't do bad things" would not be a better shot.

Comment author: wedrifid 07 April 2012 02:02:26PM, 2 points

> In any case, once you assume such intent-understanding, interpretative powers in the AI

Programmed in with great effort, thousands of hours of research and development, and even then a great chance of failure. That isn't an "assumption".

> it's hard to demonstrate why instructing the AI in plain English to "Be a good guy. Don't do bad things" would not be a better shot

That would seem to be a failure of imagination. That exhortation tells even an FAI-complete AI that is designed to follow commands very little about what to do.