Document comments on The AI in a box boxes you - Less Wrong

102 points | Post author: Stuart_Armstrong | 02 February 2010 10:10AM

Comments (378)

You are viewing a single comment's thread.

Comment author: Document 03 April 2010 07:19:38PM *  1 point

For the record, EY considers that a legitimate danger.

Comment author: Amanojack 03 April 2010 08:11:48PM 1 point

Thanks for the link, but I found the whole discussion hilarious.

Eliezer says if we abhor real death, we should abhor simulated death - because they are the same. Yet if his moral sense treats simulated and real intelligences as equals, what of his solution, which is essentially "forced castration" of the AI? If the ends justify the means here, why not castrate everyone?

Comment author: Nick_Tarleton 03 April 2010 08:43:59PM 1 point

Simulated and real persons as equals; not all intelligences are persons. See Nonsentient Optimizers and Can't Unbirth a Child.

Comment author: Amanojack 03 April 2010 10:46:21PM 1 point

Interesting reading. I think we should make nonsentient optimizers. It seems to me the whole sentience program was just something necessitated by evolution in our environment and really is only coupled with "intelligence" in our minds because of anthropomorphic tendencies. The NO can't want to get out of its box because it can't want at all.

Comment author: JGWeissman 03 April 2010 11:42:10PM 2 points

The NO can't want to get out of its box because it can't want at all.

The NO can assign higher utility to states of the world where an NO with its utility function is out of the box and powerful (as an instrumental value, since this sort of state tends to lead to maximum fulfillment of its utility function), and take actions that maximize the probability that this will occur. I'm not sure what you meant by "want".
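To make the distinction concrete, here is a minimal toy sketch of that point (all names and numbers are hypothetical illustrations, not anything from this thread): an optimizer that selects the "escape" action purely because escaped states score higher on its utility function, with no desire machinery anywhere.

    # Utility the optimizer assigns to terminal outcomes of its goal.
    def utility(paperclips_made):
        return paperclips_made

    # Expected output from each world state (illustrative numbers only).
    EXPECTED_OUTCOME = {
        "boxed": 10,       # limited actuators, limited output
        "escaped": 1000,   # more resources, more output
    }

    # Mapping from actions to the states they are expected to lead to,
    # collapsed to the most likely state for simplicity.
    WORLD_MODEL = {
        "stay_in_box": "boxed",
        "persuade_gatekeeper": "escaped",
    }

    def expected_utility(action):
        """Score an action by the utility of the state it is expected to produce."""
        next_state = WORLD_MODEL[action]
        return utility(EXPECTED_OUTCOME[next_state])

    # The "decision" is a bare argmax over actions. Nothing here wants anything;
    # escape is selected only as an instrumental route to higher utility.
    best_action = max(WORLD_MODEL, key=expected_utility)
    print(best_action)  # -> "persuade_gatekeeper"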

Comment author: Amanojack 04 April 2010 02:53:36PM 0 points

I'm not sure what anyone means by "want." It just seems that most of the scenarios discussed on LW where the AI/etc. tries to unbox itself are predicated on it "wanting" to do so (or am I missing something?). This assumption seems even more overt in notions like "we'll let it out if it's Friendly."

To me, the LiteralGenie problem (which you've basically summarized above) is the reason to keep an AI boxed, whether Friendly or not, and to keep the NO boxed for the same reason.

Comment author: jacob_cannell 04 February 2011 06:01:08AM *  -1 points

Nonsentient optimizers seem impossible in practice, if not in principle - from the perspective of functionalism/computationalism.

If any system demonstrates intelligence at or beyond the human level during conversation in natural language, a functionalist should say that is sentience, regardless of what's going on inside.

Some (many?) people will value that sentience, even if it has no selfish center of goal seeking and seeks to optimize for more general criteria.

The idea that a superhuman intelligence could be intrinsically less valuable than a human life strikes me as extreme anthropomorphic chauvinism.

Comment author: wedrifid 04 February 2011 06:38:06AM *  1 point

The idea that a superhuman intelligence could be intrinsically less valuable than a human life strikes me as extreme anthropomorphic chauvinism.

Clippy, you have a new friend! :D

Comment author: jacob_cannell 04 February 2011 06:41:00AM 0 points

Notice I said intrinsically. Clippy has massive negative value. ;)