CarlShulman comments on The Meaning of Right - Less Wrong

Post author: Eliezer_Yudkowsky 29 July 2008 01:28AM


Comment author: Eliezer_Yudkowsky 11 September 2009 01:37:22AM 4 points

"What makes you think that any coherence exists in the first place?"

Most people wouldn't want to be turned into paperclips?

Comment author: CarlShulman 11 September 2009 04:29:06AM 11 points

A variety of people profess to consider this desirable if it raises the probability, or hastens the arrival, of powerful intelligent life filling the universe. I would bet that there are multiple stable equilibria that people's positions can be argued into.

Comment author: rhollerith_dot_com 11 September 2009 06:00:00AM 8 points

Carl says that a variety of people profess to consider it desirable that present-day humans get disassembled "if it leads to powerful intelligent life filling the universe with higher probability or greater speed."

Well, yeah, I'm not surprised. Any system of valuing things in which every life, present and future, has the same utility as every other will lead to that conclusion: turning the existing living beings and their habitat into computronium, von Neumann probes, etc., to hasten the start of the colonization of the light cone by even a few seconds has positive expected marginal utility under such a system, because the astronomical number of future lives gained swamps the comparatively tiny number of present lives lost.
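A back-of-envelope version of that marginal-utility comparison may make the point concrete. Every number below is a hypothetical assumption chosen only for illustration; the thread itself gives no figures:

    # Illustrative sketch of the total-utilitarian argument above.
    # All numbers are assumptions for the sake of the example, not
    # figures from the thread.

    current_lives = 8e9              # assumed number of existing humans
    future_lives_per_second = 1e30   # assumed future lives ultimately gained
                                     # per second of earlier light-cone
                                     # colonization (hypothetical magnitude)
    seconds_saved = 3                # assumed head start from disassembling
                                     # Earth for computronium and probes now

    # Under a value system where every life, present or future, counts equally:
    utility_lost = current_lives
    utility_gained = future_lives_per_second * seconds_saved

    print(utility_gained / utility_lost)  # ~3.75e20: the future term dominates

With any remotely astronomical value for the future-lives term, the present population is a rounding error in the calculation, which is exactly why such a value system endorses the disassembly.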

Comment author: jacob_cannell 02 February 2011 02:04:38AM 2 points

That could still be a great thing for us provided that current human minds were uploaded into the resulting computronium explosion.

Comment author: anon895 02 February 2011 03:21:37AM 2 points

...which won't happen if the computronium is the most important thing and uploading existing minds would slow it down. The AI might upload some humans to get their cooperation during the early stages of takeoff, but it wouldn't necessarily keep those uploads running once it no longer depended on humans, if the same resources could be used more efficiently for itself.

Comment author: dxu 17 April 2015 09:13:17PM 0 points

To get my cooperation, at least, it would have to credibly precommit that it wouldn't just turn my simulation off after it no longer needs me. (Of course, the meaning of the word "credibly" shifts somewhat when we're talking about a superintelligence trying to "prove" something to a human.)