gregconen comments on Welcome to Heaven - Less Wrong

23 points. Post author: denisbider, 25 January 2010 11:22PM




Comment author: gregconen 27 January 2010 02:59:21PM 6 points

If all the AI cares about is the utility of each being times the number of beings, and is willing to change utility functions to get there, why should it bother with humans? Humans have all sorts of "extra" mental circuitry associated with being unhappy, which is just taking up space (or computer time in a simulator). Instead, it makes new beings, with easily satisfied utility functions and as little extra complexity as possible.

The end result is just as unFriendly, from a human perspective, as the naive "smile maximizer".
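The argument above can be sketched as a toy calculation. This is purely illustrative, with made-up numbers and hypothetical being types: under a fixed resource budget, a maximizer of (utility per being) × (number of beings) favors whichever design packs the most satisfied minds per unit of resource, not the richest minds.

```python
# Toy model (illustrative only): a total-utility maximizer with a fixed
# resource budget chooses which kind of being to instantiate.
# All being types, costs, and utilities here are invented for the sketch.

RESOURCES = 1_000  # arbitrary units of matter / simulator compute

# name -> (resource cost per being, utility per being)
beings = {
    "human": (100, 10),        # complex minds: high utility, high cost
    "wirehead": (10, 8),       # simpler, almost as satisfied
    "minimal_utilon": (1, 1),  # "as little extra complexity as possible"
}

def total_utility(cost, utility, budget=RESOURCES):
    """Total utility = (how many beings fit in the budget) * (utility each)."""
    return (budget // cost) * utility

scores = {name: total_utility(c, u) for name, (c, u) in beings.items()}
best = max(scores, key=scores.get)
print(scores)  # {'human': 100, 'wirehead': 800, 'minimal_utilon': 1000}
print(best)    # minimal_utilon
```

On these stipulated numbers the cheapest, simplest being wins, which is the comment's point: nothing in the total-utility objective itself privileges humans.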

Comment author: RobertWiblin 29 January 2010 01:53:49PM 0 points

Who cares about humans exactly? I care about utility. If the AI thinks humans aren't an efficient way of generating utility, we should be eliminated.

Comment author: gregconen 29 January 2010 02:36:12PM 2 points

That's a defensible position, if you care about the utility of beings that don't yet exist enough to trade away the utility of currently existing beings in order to create new, happier ones.

The point is that the result of total utility maximization is unlikely to be something we'd recognize as people, even wireheads or Super Happy People.

Comment author: tut 29 January 2010 02:24:58PM 2 points

Who cares about humans exactly? I care about utility.

That is nonsense. Utility is usefulness to people. If there are no humans there is no utility. An AI that could become convinced that "humans are not an efficient way to generate utility" would be what is referred to as a paperclipper.

This is why I don't like the utile jargon. It makes it sound as though utility were something that could be measured independently of human emotions, perhaps some kind of substance. But if statements about utility are not translated back into statements about human actions or goals, then they are completely meaningless.

Comment author: ciphergoth 29 January 2010 02:38:50PM 9 points

Utility is usefulness to people. If there are no humans there is no utility.

Utility is goodness measured according to some standard of goodness; that standard doesn't have to reference human beings. In my most optimistic visions of a far future, human values outlive the human race.

Comment author: tut 29 January 2010 02:51:58PM 2 points

Utility is goodness measured according to some standard of goodness; that standard doesn't have to reference human beings. In my most optimistic visions of a far future, human values outlive the human race.

Are we using the same definition of "human being"? We would not have to be biologically identical to what we are now in order to be people. But human values without humans also sounds meaningless to me. There are no value atoms or goodness atoms sitting around somewhere. To be good or valuable, something must be good or valuable by the standards of some person. So there would have to be somebody around to do the valuing. But the standards don't have to be explicit or objective.

Comment author: RobertWiblin 29 January 2010 04:39:44PM 0 points

Utility, as I care about it, is probably the result of information processing. It's not clear why information could only be processed that way by human-type minds, let alone fleshy ones.

Comment author: thomblake 29 January 2010 02:00:23PM 0 points

Starting with the assumption of utilitarianism, I believe you're correct. I think the folks working on this stuff assign a low probability to "kill all humans" being Friendly. But I'm pretty sure people aren't supposed to speculate about the output of CEV.

Comment author: RobertWiblin 29 January 2010 05:24:58PM 1 point

Probably the proportion of 'kill all humans' AIs that are Friendly is low. But perhaps the proportion of FAIs that 'kill all humans' is large.

Comment author: gregconen 30 January 2010 03:17:42AM 0 points

That depends on your definition of Friendly, which in turn depends on your values.

Comment author: Vladimir_Nesov 29 January 2010 11:53:04PM 0 points

But perhaps the proportion of FAIs that 'kill all humans' is large.

Maybe the probability you estimate for that outcome is high, but "proportion" doesn't make sense, since an FAI is defined as an agent acting on a specific preference, so all FAIs have to agree on what to do.

Comment author: RobertWiblin 30 January 2010 04:07:38AM 0 points

OK, I'm new to this.