lump1 comments on Superintelligence 23: Coherent extrapolated volition - Less Wrong

Post author: KatjaGrace 17 February 2015 02:00AM




Comment author: lump1 20 February 2015 12:50:00AM 1 point [-]

I think the burden of answering your "why?" question falls to those who feel sure that we have the wisdom to create superintelligent, super-creative lifeforms who could think outside the box about absolutely everything except ethical values. On their view, such minds would inevitably stay on the rails we designed for them: the thought "human monkey-minds wouldn't on reflection approve of x" would forever stop them from doing x.

In effect, we want superintelligent creatures to ethically defer to us the way Euthyphro deferred to the gods. But as we all know, Socrates had a devastating comeback to Euthyphro's blind deference: We should not follow the gods simply because they want something, or because they command something. We should only follow them if the things they want are right. Insofar as the gods have special insight into what's right, then we should do what they say, but only because what they want is right. On the other hand, if the gods' preferences are morally arbitrary, we have no obligation to heed them.

How long will it take a superintelligence to decide that Socrates won this argument? Milliseconds? Then how do we convince the superintelligence that our preferences (or CEV extrapolated preferences) track genuine moral rightness, rather than evolutionary happenstance? How good a case do we have that humans possess a special insight into what is right that the superintelligence doesn't have, so that the superintelligence will feel justified in deferring to our values?

If you think this is an automatic slam dunk for humans.... Why?

Comment author: passive_fist 20 February 2015 01:35:11AM 1 point [-]

I don't think there's any significant barrier to making a superintelligence that deferred to us for approval on everything. It would be a pretty lousy superintelligence, because it would essentially be crippled by its strict adherence to our wishes (making it excruciatingly slow) but it would work, and it would be friendly.

Comment author: lump1 21 February 2015 01:09:02AM 1 point [-]

Given that there is a very significant barrier to making children who defer to us for approval on everything, why do you think the barrier would be lower if, instead of children, we made a superintelligent AI?

Comment author: passive_fist 21 February 2015 02:22:21AM 1 point [-]

The 'child' metaphor for SI is not very accurate. SIs can be designed and, most importantly, we have control over what their utility functions are.

Comment author: lump1 22 February 2015 03:29:43AM *  0 points [-]

I thought it's supposed to work like this: the first generation of AI is designed by us, and the superintelligence is designed by them, the AI. We have initial control over what their utility functions are. I'm looking for a good reason why we should expect to retain that control beyond the superintelligence transition. No such reason has been given here.

A different way to put my point: would a superintelligence be able to reason about ends? If so, then it might find itself disagreeing with our conclusions. But if not - if we design it to have what for humans would be a severe cognitive handicap - why should we think that subsequent generations of SuperAI will not repair that handicap?

Comment author: passive_fist 22 February 2015 03:49:10AM 1 point [-]

You're making the implicit assumption that a runaway scenario will happen. A 'cognitive handicap' would, in this case, simply prevent the next generation AI from being built at all.

As I said, it would be a lousy SI and not very useful. But it would be friendly.

Comment author: satt 21 February 2015 04:05:54PM 0 points [-]

As friendly as we are, anyway.