jacob_cannell comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM




Comment author: jacob_cannell 04 February 2011 06:01:08AM  -1 points

Nonsentient optimizers seem impossible in practice, if not in principle, from the perspective of functionalism/computationalism.

If any system demonstrates human-level or greater intelligence in natural-language conversation, a functionalist should say it is sentient, regardless of what's going on inside.

Some (many?) people will value that sentience, even if it has no selfish center of goal-seeking and instead optimizes for more general criteria.

The idea that a superhuman intelligence could be intrinsically less valuable than a human life strikes me as extreme anthropomorphic chauvinism.

Comment author: wedrifid 04 February 2011 06:38:06AM  1 point

The idea that a superhuman intelligence could be intrinsically less valuable than a human life strikes me as extreme anthropomorphic chauvinism.

Clippy, you have a new friend! :D

Comment author: jacob_cannell 04 February 2011 06:41:00AM  0 points

Notice I said intrinsically. Clippy has massive negative value. ;)