Comment author: utilitymonster 12 June 2011 07:10:01PM *  12 points [-]

Both you and Eliezer seem to be replying to this argument:

  • People only intrinsically desire pleasure.

  • An FAI should maximize whatever people intrinsically desire.

  • Therefore, an FAI should maximize pleasure.

I am convinced that this argument fails for the reasons you cite. But who is making that argument? Is this supposed to be the best argument for hedonistic utilitarianism?

Comment author: utilitymonster 12 June 2011 07:07:47PM 1 point [-]

We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

IAWYC, but would like to hear more about why you think the last sentence is supported by the previous sentence. I don't see an easy argument from "X is a terminal value for many people" to "X should be promoted by the FAI." Are you supposing a sort of idealized desire fulfilment view about value? That's fine--it's a sensible enough view. I just wouldn't have thought it so obvious that it would be a good idea to go around invisibly assuming it.

In response to comment by Wei_Dai on What we're losing
Comment author: lukeprog 16 May 2011 08:28:05PM 6 points [-]

I suspect that clearly defining open rationality problems would act as a focusing lens for action, not a demotivator. Please do publish your list of open rationality problems. Do for us what Hilbert did for mathematicians. But you don't have to talk about 'drowning.' :)

In response to comment by lukeprog on What we're losing
Comment author: utilitymonster 17 May 2011 10:24:29PM *  1 point [-]

Second the need for a list of the most important problems.

Comment author: utilitymonster 10 May 2011 02:07:57AM 4 points [-]

How do you record your findings for future use, and how do you make sure you don't forget the important parts?

In response to comment by sark on Consequentialism FAQ
Comment author: Vladimir_M 29 April 2011 09:06:40PM *  16 points [-]

This essay by David Friedman is probably the best treatment of the subject of Schelling points in human relations:
http://www.daviddfriedman.com/Academic/Property/Property.html

Applying these insights to the fat man/trolley problem, we see that the horrible thing about pushing the man is that it transgresses the gravest and most terrible Schelling point of all: the one that defines unprovoked deadly assault, whose violation is understood to give the other party the licence to kill the violator in self-defense. Normally, humans see such crucial Schelling points as sacrosanct. They are considered violable, if at all, only if the consequentialist scales are loaded to a far more extreme degree than in the common trolley problem formulations. Even in the latter case, the act will likely cause the agent serious psychological damage, probably an artifact of an additional commitment not to violate such points, which may also serve as a safeguard against rationalization.

Now, the utilitarian may reply that this is just human bias, an unfortunate artifact of evolutionary psychology, and we’d all be better off if people instead made decisions according to pure utilitarian calculus. However, even ignoring all the other fatal problems of utilitarianism, this view is utterly myopic. Humans are able to coordinate and cooperate because we pay respect to the Schelling points (almost) no matter what, and we can trust that others will also do so. If this were not so, you would have to be constantly alert that anyone might rob, kill, cheat, or injure you at any moment because their cost-benefit calculations have implied doing so, even if these calculations were in terms of the most idealistic altruistic utilitarianism. Clearly, no organized society could exist in that case: even if, with unlimited computational power and perfect strategic insight, you could compute that cooperation is viable, such computation would be wholly impractical.

It is, however, possible in practice for humans to evaluate each other’s personalities and figure out whether others’ decision algorithms, so to speak, respect these constraints. Think of how people react when they realize that someone has a criminal history or sociopathic tendencies. This person is immediately perceived as creepy and dangerous, and with good reason: people realize that his decision algorithm lacks respect for the conventional Schelling points, so that normal trust and relaxed cooperation with him is impossible, and one must be on the lookout for nasty surprises. Similarly, imagine meeting someone who was in the fat man/trolley situation and who mechanically made the utilitarian decision and pushed the man without a twitch of guilt. Even the most zealous utilitarian will in practice be creeped out by such a person, even though he should theoretically perceive him as an admirable hero. (As always when it comes to ideology, people may be big on words but usually know better when their own welfare is at stake.)

(This comment is also cursory and simplified, and an alert reader will likely catch multiple imprecisions and oversimplifications. This is unfortunately unavoidable because of the complexity of the topic. However, the main point stands regardless. In particular, I haven’t addressed the all too common cases where cooperation between people breaks down and all sorts of conflict ensue. But this analysis would just reinforce the main point that cooperation critically depends on mutual recognition of near-unconditional respect for Schelling points.)

Comment author: utilitymonster 30 April 2011 08:51:42AM 0 points [-]

Can you explain why this analysis renders directing away from the five and toward the one permissible?

Comment author: Eliezer_Yudkowsky 21 March 2011 12:36:36AM 5 points [-]

What, you mean in mainstream philosophy? I don't think mainstream philosophers think that way, even Quineans. The best ones would say gravely, "Yes, goals are important" and then have a big debate with the rest of the field about whether goals are important or not. Luke is welcome to prove me wrong about that.

Comment author: utilitymonster 21 March 2011 11:09:01AM 4 points [-]

I actually don't think this is right. Last time I asked a philosopher about this, they pointed to an article by someone (I.J. Good, I think) about how to choose the most valuable experiment (given your goals), using decision theory.
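For the curious, the decision-theoretic idea behind "choosing the most valuable experiment given your goals" can be made concrete as the expected value of sample information: how much running an experiment raises the expected utility of your best action. Here is a minimal sketch in Python; the hypotheses, actions, and numbers are purely illustrative, not taken from Good's paper.

```python
# Expected value of sample information (EVSI): how much is an
# experiment worth, given your goals?  All numbers are made up.

priors = {"H1": 0.5, "H2": 0.5}                       # P(hypothesis)
utility = {("act_a", "H1"): 10, ("act_a", "H2"): 0,   # U(action, hypothesis)
           ("act_b", "H1"): 0,  ("act_b", "H2"): 8}
likelihood = {("pos", "H1"): 0.8, ("pos", "H2"): 0.2, # P(outcome | hypothesis)
              ("neg", "H1"): 0.2, ("neg", "H2"): 0.8}

def best_eu(belief):
    """Expected utility of the best available action under a belief state."""
    return max(sum(belief[h] * utility[(a, h)] for h in belief)
               for a in ("act_a", "act_b"))

# Value of acting now, on the prior alone.
eu_prior = best_eu(priors)

# Value of running the experiment first: average the best achievable
# utility over the possible outcomes (preposterior analysis).
eu_with_experiment = 0.0
for outcome in ("pos", "neg"):
    p_outcome = sum(likelihood[(outcome, h)] * priors[h] for h in priors)
    posterior = {h: likelihood[(outcome, h)] * priors[h] / p_outcome
                 for h in priors}
    eu_with_experiment += p_outcome * best_eu(posterior)

evsi = eu_with_experiment - eu_prior
print(round(eu_prior, 3), round(eu_with_experiment, 3), round(evsi, 3))
# 5.0 7.2 2.2
```

The experiment with the highest EVSI (net of its cost) is the "most valuable" one in exactly the sense utilitymonster describes: value is measured relative to the agent's goals, not intrinsic interestingness.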

Comment author: JGWeissman 26 February 2011 06:47:03PM 7 points [-]

If GiveWell really does influence a substantial amount of philanthropy, then I would consider it as a public good charity with the multiplier that implies. Is there data on its influence and projected influence?

I recall a while back that Vasser was talking with GiveWell about rating SIAI. Has anything come of that?

Comment author: utilitymonster 27 February 2011 08:17:50PM *  1 point [-]

Is there data on its influence and projected influence?

Yes. They posted a bunch of self-evaluation stats. It is a start toward the information you seek.

Comment author: SilasBarta 15 February 2011 03:57:59PM 15 points [-]

Here's another one, which I call the layshadow heuristic: could an intelligent layperson produce passable, publishable work [1] in that field after a few days of self-study? It's named after the phenomenon in which someone with virtually no knowledge of a field sells the service of writing papers for others who don't want to do the work; the ghostwriter is never discovered, and the clients are granted degrees.

The heuristic works because passing it implies very low inferential distance and therefore very little knowledge accumulation.

[1] specifically, work that unsuspecting "experts" in the field cannot distinguish from that produced by "serious" researchers with real "experience" and "education" in that field.

Comment author: utilitymonster 16 February 2011 10:18:53PM 0 points [-]

For how many fields do you think this is possible?

Comment author: utilitymonster 11 February 2011 01:11:18PM 1 point [-]

Epic.

Comment author: lukeprog 03 February 2011 07:31:09PM 1 point [-]

Agreed. And I'm skeptical of both. You?

Comment author: utilitymonster 03 February 2011 09:25:16PM 1 point [-]

Hard to be confident about these things, but I don't see the problem with external reasons/oughts. Some people seem to have some kind of metaphysical worry...harder to reduce or something. I don't see it.
