KatjaGrace comments on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities - Less Wrong

Post author: KatjaGrace 16 September 2014 01:00AM


Comments (232)


Comment author: rlsj 16 September 2014 03:05:08AM 2 points

An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness? It seems likely that autonomous laborers, assembly-line workers, clerks and low-level managers would, without requiring such flirtation, be useful and sufficient for the society of abundance that is our main objective. But can they operate without a working AGI? We may find out if we let the robots stumble onward and upward.

Comment author: KatjaGrace 16 September 2014 03:34:34AM 1 point

In what sense do you think of an autonomous laborer as being under 'our control'? How would you tell if it escaped our control?

Comment author: rlsj 16 September 2014 08:39:49PM 2 points

How would you tell? By its behavior: doing something you neither ordered nor wanted.

Think of the present-day "autonomous laborer" with an IQ of about 90. The only likely way to lose control of him is for some agitator to instill contrary ideas. Censorship for robots is not so horrible a regime.

Who is it that really wants AGI, absent proof that we need it to automate commodity production?

Comment author: leplen 17 September 2014 04:16:14PM 3 points

In my experience, computer systems currently get out of my control by doing exactly what I ordered them to do, which is frequently different from what I wanted them to do.

Whether or not a system is "just following orders" doesn't seem to be a good metric for it being under your control.
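The gap between "what was ordered" and "what was wanted" can be illustrated with a minimal, hypothetical sketch (the filenames and the cleanup rule are invented for illustration): a system told to "delete the temporary files" is given the literal order "remove every file whose name ends in `.tmp`", and carries it out perfectly.

```python
# Hypothetical illustration: a system that follows its literal order exactly,
# yet produces an outcome the user did not want.
files = ["report.docx", "draft.tmp", "important-notes.tmp"]

# The order: keep only files that do NOT end in ".tmp".
kept = [f for f in files if not f.endswith(".tmp")]

print(kept)
```

The system is "just following orders": `kept` is `['report.docx']`, exactly as commanded, but `important-notes.tmp` (which the user wanted to keep) is gone. No disobedience occurred, yet the outcome was neither intended nor wanted.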

Comment author: rlsj 17 September 2014 11:42:19PM 1 point

How does "just following orders," a la Nuremberg, bear upon this issue? It's out of control when its behavior is neither ordered nor wanted.

Comment author: leplen 18 September 2014 11:15:24PM 1 point

While I agree that it is out of control if the behavior is neither ordered nor wanted, I think it's also very possible for the system to get out of control while doing exactly what you ordered it to, but not what you meant for it to.

The argument I'm making is approximately the same as the one we see in the outcome pump example.

That is to say, while a system doing something neither ordered nor wanted is definitely out of control, it does not follow that a system doing exactly what it was ordered to do is necessarily under your control.

Comment author: [deleted] 03 October 2014 11:58:50PM 0 points

Who is it that really wants AGI, absent proof that we need it to automate commodity production?

Ideological singulatarians.