passive_fist comments on Superintelligence 23: Coherent extrapolated volition - Less Wrong

Post author: KatjaGrace 17 February 2015 02:00AM


Comments (97)


Comment author: passive_fist 21 February 2015 02:22:21AM 1 point

The 'child' metaphor for SI is not very accurate. SIs can be designed, and, most importantly, we have control over what their utility functions are.

Comment author: lump1 22 February 2015 03:29:43AM 0 points

I thought it was supposed to work like this: the first generation of AI is designed by us. The superintelligence is designed by them, the AI. We have initial control over what their utility functions are. I'm looking for a good reason why we should expect to retain that control beyond the superintelligence transition. No such reason has been given here.

A different way to put my point: would a superintelligence be able to reason about ends? If so, then it might find itself disagreeing with our conclusions. But if not - if we design it to have what for humans would be a severe cognitive handicap - why should we think that subsequent generations of SuperAI will not repair that handicap?

Comment author: passive_fist 22 February 2015 03:49:10AM 1 point

You're making the implicit assumption that a runaway scenario will happen. A 'cognitive handicap' would, in this case, simply prevent the next generation AI from being built at all.

As I said, it would be a lousy SI and not very useful. But it would be friendly.