
FAWS comments on SMBC comic: poorly programmed average-utility-maximizing AI - Less Wrong Discussion

Post author: Jonathan_Graehl 06 April 2012 07:18AM


Comment author: FAWS 06 April 2012 10:37:42AM * 12 points

The only similarity between those cases is that they involve utility calculations you disagree with. Otherwise every single detail is completely different (e.g. the sort of utility considered; two negative utilities being traded against each other vs. trading utility elsewhere, both positive and negative, for positive utility; which side of the trade the single person with the large individual utility difference is on; the presence of perverse incentives; etc., etc.).

If anything it would be more logical to equate Felix with the tortured person and treat this as a reductio ad absurdum of your position on the dust speck problem. (But that would be wrong too, since the numbers aren't actually the problem with Felix; the problem is that there's an incentive to manipulate your own utility function that way (among other things).)

Comment author: Dmytry 06 April 2012 11:22:26AM * 0 points

You aren't seeing the forest for the trees... the thing that is identical is that you are trading utilities across people, which is fundamentally problematic and leads to either a tortured child or a utility monster, or both.

Comment author: WrongBot 06 April 2012 05:22:09PM 4 points

Omelas is a goddamned paradise. Omelas without the tortured child would be better, yeah, but Omelas as described is still better than any human civilization that has ever existed. (For one thing, it only contains one miserable child.)

Comment author: Dmytry 06 April 2012 05:23:28PM * -2 points

Well, it seems to me they are trading N dust specks vs. torture in Omelas. edit: Actually, I don't like Omelas [as an example]. I think that miserable child would only make the society way worse, with people just opting to e.g. kill someone whenever it ever so slightly increases their personal expected utility. The child in Omelas puts them straight onto the slippery slope, and making everyone aware of the slippage makes people slide down it for fun and profit.

Our 'civilization', though, is of course a goddamn jungle, and so it's pretty damn bad. It's pretty hard to beat on the moral-wrongness scale from first principles; you have to take our current status quo and modify it to get to something worse (or take our earlier status quo).

Comment author: WrongBot 06 April 2012 07:26:06PM * 1 point

Your edit demonstrates that you really don't get consequentialism at all. Why would making a good tradeoff (one miserable child in exchange for paradise for everyone else) lead to making a terrible one (a tiny bit of happiness for one person in exchange for death for someone else)?

Comment author: FAWS 06 April 2012 11:41:37AM 0 points

the thing that is identical is that you are trading utilities across people,

This is either wrong (the utility functions of the people involved aren't queried in the dust speck problem) or so generic as to be encompassed in the concept of "utility calculation".

Aggregating utility functions across different people is an unsolved problem, but not necessarily an unsolvable one. One way of avoiding utility monsters would be to normalize utility functions. The obvious way to do that leads to problems such as arachnophobes getting less cake even if they like cake just as much, but IMO that's better than utility monsters.
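
To make the normalization idea concrete, here is a minimal sketch in Python (the people, outcomes, and numbers are invented for illustration and are not from the original discussion): each person's utilities are rescaled to [0, 1] over the outcomes before being summed, so nobody can dominate the aggregate by reporting enormous raw numbers, but an arachnophobe whose range is stretched by spider-dread ends up with a compressed preference for cake, which is exactly the trade-off described above.

```python
def normalize(utilities):
    """Rescale one person's utilities over all outcomes to [0, 1].

    Capping every person's range is one crude way to rule out utility
    monsters: no one can dominate by reporting enormous raw numbers.
    """
    lo, hi = min(utilities.values()), max(utilities.values())
    if hi == lo:  # indifferent to everything
        return {outcome: 0.0 for outcome in utilities}
    return {outcome: (u - lo) / (hi - lo) for outcome, u in utilities.items()}


def aggregate(people, outcomes):
    """Sum normalized utilities per outcome across people."""
    return {o: sum(normalize(p)[o] for p in people) for o in outcomes}


# Invented numbers: both people assign the same raw utility to cake,
# but the arachnophobe's range is stretched by the spider outcome.
arachnophobe = {"cake": 10, "no cake": 0, "spiders": -1000}
other = {"cake": 10, "no cake": 0, "spiders": -10}

print(normalize(arachnophobe))  # cake ~1.00, no cake ~0.99: nearly indifferent
print(normalize(other))         # cake 1.00, no cake 0.50: cares a lot
print(aggregate([arachnophobe, other], ["cake", "no cake"]))
# The arachnophobe's normalized preference for cake over no-cake is ~0.01
# vs. the other person's 0.5, so their equal liking of cake barely counts.
```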

Comment author: Dmytry 06 April 2012 11:49:13AM * 2 points

This is either wrong (the utility functions of the people involved aren't queried in the dust speck problem) or so generic as to be encompassed in the concept of "utility calculation".

The utilities of many people form a vector; you have to map it to a scalar value, which loses a lot of information in the process, and it seems to me that however you do it, it leads to some sort of objectionable outcome. edit: I have a feeling one could define the comparison reasonably with some sort of Kolmogorov-complexity-like metric that would grow incredibly slowly for the dust specks and would never equal whatever hideously clever thing our brain does to most of its neurons when we suffer; the suffering would beat the dust specks on complexity (you'd have to write down the largest number you can express in as many bits as are involved in the brain being tortured before that number of dust specks even starts approaching the torture level). We need to understand how pain works before we can start comparing pain vs. dust specks.
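
To make the "vector to a scalar" point concrete, here is an illustrative sketch in Python (all magnitudes are invented; they are not anyone's actual estimates of the disutility of torture or of a dust speck). Two common scalarizations of the same per-person utilities, total utility versus maximin, give opposite verdicts on a toy specks-vs-torture choice:

```python
# Invented magnitudes, purely for illustration:
N = 10**13       # hypothetical number of dust-speck victims
TORTURE = -1e6   # hypothetical disutility of torture for one person
SPECK = -1e-6    # hypothetical disutility of one dust speck

# Per-person utilities (people unaffected by an option sit at 0 and are
# omitted, since zeros change neither aggregator).
torture_world = [TORTURE]       # one person tortured
speck_world_total = SPECK * N   # total over N specked people, without
                                # materializing 10**13 list entries

# Scalarization 1: total utility. Here it prefers the torture world,
# because the summed specks add up to a larger loss than one torture.
total_prefers_specks = speck_world_total > sum(torture_world)

# Scalarization 2: maximin, i.e. judge each world by its worst-off person.
# It prefers the specks, since one speck is far milder than torture.
maximin_prefers_specks = SPECK > min(torture_world)

print(total_prefers_specks)    # False
print(maximin_prefers_specks)  # True

# Same per-person utilities, two mappings to a scalar, opposite answers:
# the information discarded by the mapping is doing the moral work.
```

Maximin here just stands in for "an aggregator that refuses to let many tiny harms add up to one huge one"; it isn't being proposed as the right answer.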

Comment author: billswift 06 April 2012 05:22:41PM 0 points

but not necessarily an unsolvable one.

Really? Every use of utilities I have seen either relies on a real-world measure (such as money) with a note that it isn't really utility, or goes directly for unfalsifiable handwaving. So far I haven't seen anything to suggest "aggregating utility functions" is even theoretically possible. For that matter, most of what I have read suggests that even an individual's "utility function" is usually unmanageably fuzzy, or even unfalsifiable, itself.