Comment author: Bongo 26 September 2011 06:18:54PM 9 points

LW Minecraft server anyone?

Comment author: Bongo 16 September 2011 01:48:13AM *  3 points

If you really can predict your karma, you should post encrypted predictions* offsite at the same time as you make your post, or use some similar scheme so your predictions are verifiable.

Seems obviously worth the bragging rights.

* A prediction is made up of a post id, a time, and a karma score, and means that the post will have that karma score at that time.
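
One concrete "similar scheme" is a hash commitment rather than literal encryption: publish a salted hash of the prediction at posting time, then reveal the plaintext once the deadline passes. A minimal sketch in Python; the field names and example values are purely illustrative:

```python
import hashlib
import json
import secrets

def commit(post_id, check_time, karma):
    """Build a prediction and return (commitment, opening).
    Publish the commitment now; keep the opening private until check_time."""
    opening = {
        "post_id": post_id,      # which post the prediction is about
        "time": check_time,      # when the karma score should be checked
        "karma": karma,          # the predicted karma score at that time
        "salt": secrets.token_hex(16),  # stops anyone brute-forcing the hash
    }
    payload = json.dumps(opening, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest(), opening

def verify(commitment, opening):
    """Anyone can recompute the hash and confirm the prediction is genuine."""
    payload = json.dumps(opening, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == commitment

# Predict that post "abc123" will sit at 15 karma a week from now.
c, o = commit("abc123", "2011-09-23T18:00", 15)
print(c)             # post this hash offsite at the same time as the post
assert verify(c, o)  # later, reveal o and let anyone re-check it
```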

Comment author: nerzhin 17 August 2011 07:51:15PM 4 points

Another way of saying this (I think - Vladimir_M can correct me):

You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.

Comment author: Bongo 18 August 2011 09:17:57PM 3 points

You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.

This seems obviously false.

Comment author: Bongo 18 August 2011 06:48:44PM *  22 points

Thus, when aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas.

I love that you don't seem to argue against maximizing EV, but rather to argue that a certain method, EEV, is a bad way to maximize EV. If this had been stated at the beginning of the article, I would have been a lot less skeptical initially.

Comment author: Vladimir_M 17 August 2011 12:33:26AM *  30 points

I think this whole "utilitarian vs. deontological" setup is a misleading false dichotomy. In reality, the way people make moral judgments -- and I'd also say, any moral system that is really usable in practice -- is best modeled neither by utilitarianism nor by deontology, but by virtue ethics.

All of the puzzles listed in this article are clarified once we realize that when people judge whether an act is moral, they ask primarily what sort of person would act that way, and consequently, whether they want to be (or be seen as) this sort of person and how people of this sort should be dealt with. Of course, this judgment is only partly (and sometimes not at all) in the form of conscious deliberation, but from an evolutionary and game-theoretical perspective, it's clear why the unconscious processes would have evolved to judge things from that viewpoint. (And also why their judgment is often covered in additional rationalizations at the conscious level.)

The "fat man" variant of the trolley problem is a good illustration. Try to imagine someone who actually acts that way in practice, i.e., who really goes ahead and kills in cold blood when convinced by utilitarian arithmetic that it's right to do so. Would you be comfortable working or socializing with this person, or even just being in their company? Of course, being scared and creeped out by such a person is perfectly rational: among the actually existing decision algorithms implemented by human brains, there are none (or at least very few) that would make the utilitarian decision in the fat-man trolley problem and otherwise produce reasonably predictable, cooperative, and non-threatening behavior.

It's similar with the less dramatic examples discussed by Haidt. In all of these, the negative judgment, even if not explicitly expressed that way, is ultimately about judging what kind of person would act like that. (And again, except perhaps for the ideologically polarized flag example, it is true that such behaviors signal that the person in question is likely to be otherwise weird, unpredictable, and threatening.)

I'd also add that when it comes to rationalizations, utilitarians should be the last ones to throw stones. In practice, utilitarianism has never been much more than a sophisticated framework for constructing rationalizations for ideological positions on questions where correct utilitarian answers are at worst just undefined, and at best wildly intractable to calculate. (As is the case for pretty much all questions of practical interest.)

Comment author: Bongo 17 August 2011 08:30:49AM *  7 points

So I guess the takeaway is that if you care more about your status as a predictable, cooperative, and non-threatening person than about four innocent lives, don't push the fat man.

Comment author: Bongo 14 August 2011 01:11:42PM *  3 points

I don't think it's that bad. Anything at an inferential distance sounds ridiculous if you just matter-of-factly assert it, but that just means that if you want to tell someone about something at an inferential distance, you shouldn't just matter-of-factly assert it. The framing probably matters at least as much as the content.

Comment author: Bongo 14 August 2011 12:14:28PM *  2 points

science is wrong

No. Something like "Bayesian reasoning is better than science" would work.

Every fraction of a second you split into thousands of copies of yourself.

Not "thousands". "Astronomically many" would work.

Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that's smarter than any human

That's the accelerating-change school of singularity, not the intelligence-explosion school. Only the latter is popular around here.

Also, we sometimes prefer torture to dust specks.

Add "for sufficiently many dust specks".

I also agree with lessdazed's first three criticisms.

--

Other than these, it's not a half-bad summary!

Comment author: David_Gerard 07 August 2011 04:17:06PM 1 point

Controversy scores would indeed be useful things - e.g., are the scores on the QM sequence so low because the posts are controversial, or because few people read them?

Comment author: Bongo 07 August 2011 10:17:26PM *  0 points

A little UI idea to avoid number clutter: represent the controversy score by having the green oval be darker (or lighter) green the more controversial the post is.
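
For what it's worth, here's a sketch of that mapping. LW doesn't expose per-post vote splits, so the controversy formula and the particular shades below are invented for illustration:

```python
def controversy(upvotes, downvotes):
    """Balanced votes mean high controversy; lopsided votes mean low.
    Returns a score in [0, 1]."""
    total = upvotes + downvotes
    if total == 0:
        return 0.0
    return 2 * min(upvotes, downvotes) / total

def oval_color(upvotes, downvotes):
    """Darken the oval's green channel as controversy rises."""
    c = controversy(upvotes, downvotes)
    green = round(0xCC - c * (0xCC - 0x33))  # light green down to dark green
    return f"#00{green:02x}00"

print(oval_color(30, 2))   # near-consensus -> light green
print(oval_color(16, 14))  # split vote     -> dark green
```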

Comment author: cousin_it 05 August 2011 10:52:33AM *  9 points

You're relying on the fact that you have uncertainty about Omega's prediction, which is really an accidental feature of the problem, not shared by other problems in the same vein.

Imagine a variant where both boxes are transparent and you can see what's inside, but the contents of the boxes were still determined by Omega's prediction of your future decision. (I think this formulation is due to Gary Drescher.) I'm a one-boxer in that variant too, how about you? Also see Parfit's Hitchhiker, where the predictor's decision depends on what you would do if you already knew the predictor decided in your favor, and Counterfactual Mugging, where you already know that your decision cannot help the current version of you (but you'd precommit to it nonetheless).

The most general solution to such problems that we currently know is Wei Dai's UDT. Informally it goes something like this: "choose your action so that the fact of your choosing it in your current situation logically implies the highest expected utility (weighted over all a priori possible worlds, before you learned your current situation) compared to all other actions you could take in your current situation".
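
To make the recipe a bit more concrete, here's a toy rendering (my own simplification, not Wei Dai's actual formalism) applied to the transparent-boxes variant above. The "a priori possible worlds" collapse to a single deterministic one because the predictor is assumed perfect:

```python
from itertools import product

# A policy says what you do in each situation you might observe:
# seeing the big box full, or seeing it empty.
situations = ("full", "empty")
actions = ("one-box", "two-box")

def payoff(policy):
    """Omega (assumed to be a perfect predictor) fills the big box iff
    the policy one-boxes upon seeing it full. UDT scores whole policies,
    so the box contents depend on the policy being evaluated."""
    filled = policy["full"] == "one-box"
    seen = "full" if filled else "empty"
    big = 1_000_000 if filled else 0
    small = 1_000
    return big if policy[seen] == "one-box" else big + small

for acts in product(actions, repeat=len(situations)):
    policy = dict(zip(situations, acts))
    print(policy, "->", payoff(policy))
# Policies that one-box on "full" walk away with the million;
# any policy that two-boxes on "full" never sees a full box at all.
```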

Comment author: Bongo 05 August 2011 01:07:12PM *  6 points

Extremely counterfactual mugging is the simplest such variation IMO. Though it has the same structure as Parfit's Hitchhiker, it's better because issues of trust and keeping promises don't come into it. Here it is:

Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.

Omega asks you to pay him $100. Do you pay?
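
A quick payoff table for the two dispositions, assuming Omega predicts perfectly (a sketch of the policy-level scoring, not a general solution):

```python
def net_dollars(pays_if_asked):
    """Assume Omega predicts your disposition perfectly."""
    if pays_if_asked:
        return 1000  # Omega foresees you'd pay, so it simply awards $1000
    return 0         # Omega asks for $100; you refuse, so nothing changes

for policy in (True, False):
    print("pays if asked:", policy, "->", net_dollars(policy))

# Once you've actually been asked, handing over $100 looks like a pure
# loss, but only agents disposed to pay ever land in the $1000 branch,
# which is exactly why scoring the disposition says: pay.
```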

Comment author: peter_hurford 04 August 2011 01:32:16AM *  5 points

What if the problem were phrased like this?

Set Four:

1.) Save 24,000 lives, with certainty.

2.) 0.0001% chance of saving 27 billion lives; 99.9999% chance of saving no lives.

Comment author: Bongo 04 August 2011 06:33:41PM *  3 points

You mean this?

1.) 26,999,976,000 people die, with certainty.

2.) 0.0001% chance that nobody dies; 99.9999% chance that 27,000,000,000 people die.

And of course the answer is obvious. Given a population of 40 billion, you'd have to be a monster to not pick 2. :)
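
For the record, the two framings really do describe the same gamble; a quick expected-value check, using the numbers from the problem as stated:

```python
population = 27_000_000_000      # lives at stake, per the problem as stated

# "Save" framing: expected lives saved.
ev_certain = 24_000
ev_gamble = population // 1_000_000   # 0.0001% is one in a million: 27,000

# "Die" framing: expected deaths. Same gamble, inverted wording.
deaths_certain = population - ev_certain  # 26,999,976,000
deaths_gamble = population - ev_gamble    # 26,999,973,000

print(ev_gamble - ev_certain)          # the gamble saves 3,000 more lives
print(deaths_certain - deaths_gamble)  # ...and kills 3,000 fewer: also 3000
```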
