Open Thread: June 2010

5 Post author: Morendil 01 June 2010 06:04PM

To whom it may concern:

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)

Comments (651)

Comment author: roland 01 June 2010 06:23:10PM *  3 points [-]

Should we buy insurance at all?

There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making about insurance saying that all insurance has negative expected utility: we pay too high a price for too little a risk; otherwise insurance companies would go bankrupt. If this is the case, should we get rid of all our insurance? If not, why not?

Comment author: RobinZ 01 June 2010 06:51:42PM 5 points [-]

Risk is more expensive when you have a smaller bankroll. Many slot machines actually offer positive expected value payouts - they make their return on people plowing their winnings back in until they go broke.

Comment author: roland 01 June 2010 06:59:29PM 3 points [-]

Ahh, Kelly criterion, correct?

Comment author: RobinZ 01 June 2010 08:18:38PM 1 point [-]

...

*looks up Kelly criterion*

That's definitely a related result. (So related, in fact, that thinking about the +EV slots the other day got me wondering what the optimal fraction of your wealth was to bid on an arbitrary bet - which, of course, is just the Kelly criterion.)
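For reference, the Kelly criterion itself is simple to state: on a bet that pays b-to-1 with win probability p, the log-wealth-maximizing fraction of your bankroll to stake is f* = p - (1 - p)/b. A minimal sketch, with made-up numbers:

```python
def kelly_fraction(p, b):
    """Optimal fraction of bankroll to stake on a bet paying b-to-1
    with win probability p. A non-positive result means: don't bet."""
    return p - (1 - p) / b

# Even-money bet (b = 1) at a 55% win rate: stake 10% of bankroll.
print(kelly_fraction(0.55, 1.0))

# A fair (or -EV) bet yields f* <= 0, i.e. bet nothing.
print(kelly_fraction(0.50, 1.0))
```

Note the connection to RobinZ's point: betting your whole bankroll (f = 1) on anything short of a sure thing eventually ruins you, even on a +EV wager.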

Comment author: Douglas_Knight 01 June 2010 08:39:44PM 3 points [-]

Citation please? A cursory search suggests that machines go through +EV phases, just like blackjack, but that individual machines are -EV. It's not just that they expect people to plow the money back in, but that pros have to wait for fish to plow money in to get to the +EV situation.

The difference with blackjack is that you can (in theory) adjust your bet to take advantage of the different phases of blackjack. Your first sentence seems to match Roland's comment about the Kelly criterion (you lose betting against snake eyes if you bet your whole bankroll every time), but that doesn't make sense with fixed-bet slots. There, if it made sense to make the first bet, it makes sense to continue betting after a jackpot.

Comment author: RobinZ 01 June 2010 10:08:08PM 1 point [-]

On the scale from "saw it in The Da Vinci Code" to "saw it in Nature", I'd have to say all I have is an anecdote from a respectable blogger:

Because slot machines are designed to hook you in, you're going to get some return on investment from them if you hold yourself to a specific amount. At the Casino de Lac Leamy, up in Canada (run, I would add, by the Quebec provincial government. Now that's a lottery system), the slots are 'loose.' They pay out relatively often. In fact, when Weds and I have played twenty dollars worth of slots together, we've never failed to leave the casino floor with more money than we had entering the floor. That twenty dollars has been anything from thirty to sixty-five dollars, the three or four times we've done this.

I'll give you that "many" is almost certainly flat wrong, on reflection, but such machines are (were?) probably out there.

Comment author: SilasBarta 01 June 2010 10:13:40PM 6 points [-]

On the scale from "saw it in The Da Vinci Code"

That movie was full of falsehoods. For example, people named Silas are actually no more or less likely than the general population to be tall homicidal albino monks -- but you wouldn't guess that from seeing the movie, now, would you?

Comment author: RobinZ 02 June 2010 02:28:04AM 2 points [-]

That's why it represents the bottom end of my "source-reliability" scale.

Comment author: bentarm 01 June 2010 11:02:55PM *  4 points [-]

The only relevant part of the quote seems to be:

That twenty dollars has been anything from thirty to sixty-five dollars, the three or four times we've done this.

I'm pretty sure it's not that unlikely to come out ahead 'three or four' times when playing slot machines (if it weren't so late I'd actually do the sums). It seems much more plausible that the blog author was just lucky than that the machines were actually set to regularly pay out positive amounts.
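The sums can be sketched with a quick simulation. The payout distribution below is entirely hypothetical (a $1 spin paying $9 with probability 10%, i.e. a 90% return rate), but it illustrates the point that leaving a -EV machine ahead is not rare:

```python
import random

def session_profit(rng, spins=20):
    """Play `spins` hypothetical $1 slot spins that each pay $9 with
    probability 10% (expected return: 90 cents per dollar staked)."""
    return sum(9 for _ in range(spins) if rng.random() < 0.10) - spins

def p_leave_ahead(trials=100_000, seed=0):
    """Estimate the chance a $20 session ends with a profit."""
    rng = random.Random(seed)
    return sum(session_profit(rng) > 0 for _ in range(trials)) / trials

# Despite the negative EV, roughly a third of sessions end ahead
# (3+ wins in 20 spins), so a few winning visits in a row is
# unlikely but far from shocking.
print(p_leave_ahead())
```

The true number depends entirely on the machine's payout distribution, which is exactly why an anecdote of a few winning sessions tells us little about its EV.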

Comment author: Dagon 01 June 2010 10:29:07PM 2 points [-]

This comes up frequently in gambling and statistics circles. "Citation please" is the correct response - casinos do NOT expect to make a profit by offering losing (for them) bets and letting "gambler's ruin" pay them off. It just doesn't work that way.

The fact that a +moneyEV bet can be -utilityEV for a gambler does NOT imply that a -moneyEV bet can be +utilityEV for the casino. It's -utility for both participants.

The only reason casinos offer such bets ever is for promotional reasons, and they hope to make the money back on different wagers the gambler will make while there.

The Kelly calculations work just fine for all these bets - for cyclic bets, it turns out you should bet 0 when -EV. When +EV, bet the fraction of your bankroll that maximizes mean log outcome for each wager.

Comment author: CronoDAS 01 June 2010 10:30:12PM 1 point [-]

Some casinos advertise that they have slots with "up to" a 101% rate of return. Good luck finding the one machine in the casino that actually has a positive EV, though!

Comment author: Jonathan_Graehl 01 June 2010 07:09:52PM *  1 point [-]

Obviously if you know your utility function and the true distribution of possible risks, it's easy to decide whether to take a particular insurance deal.

The standard advice is that if you can afford to self-insure, you should, for the reason you cite (that insurance companies make a profit, on average).

That's a heuristic that holds up fine except when you know (for reasons you will keep secret from insurers) that your own risk is higher than they could expect; then, depending on how competitive insurers are, even if you're not too risk-averse, you might find a good deal, even to the extent that you turn an expected (discounted) profit, and so should buy it even if you have zero risk aversion. Apparently in California, auto insurers are required to publish the algorithm by which they assign premiums (and are possibly prohibited from using certain types of information).

Conversely, you may choose to have no insurance (or extremely high deductible) in cases where you believe your personal risk is far below what the insurer appears to believe, even when you're actually averse to that risk.

Of course, it's not sufficient to know how wrong the insurer's estimate of your risk is; they insist on a pretty wide vig - not just to survive both uncertainties in their estimation of risk and the market returns on the float, but also to compensate for the observed amount of successful adverse selection that results from people applying the above heuristic.

I suppose it may also be possible that the insurer won't pay. I don't know exactly what guarantees we have in the U.S.

Comment author: Douglas_Knight 01 June 2010 09:46:27PM *  1 point [-]

to compensate for the observed amount of successful adverse selection that results from people applying the above heuristic.

Actually, I think that for voluntary insurance, the observed adverse selection is negative, but I can't find the cite. People simply don't do cost-benefit calculations. People who buy insurance are those who are terribly risk-averse or see it as part of their role. Such people tend to be more careful than the general population. In a competitive market, the price of insurance would be bid down to reflect this, but it isn't.

Comment author: SilasBarta 01 June 2010 07:19:06PM *  21 points [-]

There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making about insurance saying that all insurance has negative expected utility, we pay too high a price for too little a risk, otherwise insurance companies would go bankrupt.

No -- Insurance has negative expected monetary return, which is not the same as expected utility. If your utility function obeys the law of diminishing marginal utility, then it also obeys the law of increasing marginal disutility. So, for example, losing 10x will be more than ten times as bad as losing x. (Just as gaining 10x is less than ten times as good as gaining x.)

Therefore, on your utility curve, a guaranteed loss of x can be better than a 1/1000 chance of losing 1000x.

ETA: If it helps, look at a logarithmic curve and treat it as your utility as a function of some quantity. Such a curve obeys diminishing marginal utility. At any given point, your utility increases less than proportionally going up, but more than proportionally going down.
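This can be checked directly with a log utility function. The numbers below are hypothetical: a $100 premium against a 1-in-1000 chance of losing $90,000 is a losing bet in dollars (the expected loss is only $90) but a winning one in log utility:

```python
import math

def eu_uninsured(wealth, loss, p):
    """Expected log-utility when bearing the risk yourself."""
    return p * math.log(wealth - loss) + (1 - p) * math.log(wealth)

def eu_insured(wealth, premium):
    """Log-utility after paying a guaranteed premium."""
    return math.log(wealth - premium)

w, loss, p, premium = 100_000, 90_000, 0.001, 100

# The premium exceeds the expected dollar loss (100 > 90), yet the
# insured position has higher expected log-utility, so a log-utility
# agent buys the insurance.
print(eu_insured(w, premium) > eu_uninsured(w, loss, p))  # True
```

Shrink the loss relative to wealth (or inflate the premium enough) and the comparison flips, which recovers the "self-insure against small risks" heuristic mentioned elsewhere in this thread.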

(Incidentally, I actually wrote an embarrassing article arguing in favor of the thesis roland presents, and you can probably still find it on the internet. That exchange is also an example of someone being bad at explaining. If my opponent had simply stated the equivalence between DMU and IMD, I would have understood why that argument about insurance is wrong. Instead, he just resorted to lots of examples of when people buy insurance, which are totally unconvincing if you accept the quoted argument.)

Comment author: mkehrt 01 June 2010 08:25:51PM 1 point [-]

I voted this up, but I want to comment to point out that this is a really important point. Don't be tricked into not getting insurance just because it has a negative expected monetary value.

Comment author: mattnewport 01 June 2010 08:49:06PM *  3 points [-]

I voted Silas up as well because it's an important point but it shouldn't be taken as a general reason to buy as much insurance as possible (I doubt Silas intended it that way either). Jonathan_Graehl's point that you should self-insure if you can afford to and only take insurance for risks you cannot afford to self-insure is probably the right balance.

Personally I don't directly pay for any insurance. I live in Canada (universal health coverage) and have extended health insurance through work (much to my dismay I cannot decline it in favor of cash) which means I have far more health insurance than I would purchase with my own money. Given my aversion to paperwork I don't even fully use what I have. I do not own a house or a car which are the other two areas arguably worth insuring. I don't have dependents so have no need for life or disability coverage. All other forms of insurance fall into the 'self-insure' category for me given my relatively low risk aversion.

Comment author: cousin_it 01 June 2010 06:23:36PM *  5 points [-]

This might be old news to everyone "in", or just plain obvious, but a couple days ago I got Vladimir Nesov to admit he doesn't actually know what he would do if faced with his Counterfactual Mugging scenario in real life. The reason: if today (before having seen any supernatural creatures) we intend to reward Omegas, we will lose for certain in the No-mega scenario, and vice versa. But we don't know whether Omegas outnumber No-megas in our universe, so the question "do you intend to reward Omega if/when it appears" is a bead jar guess.

Comment author: Nisan 01 June 2010 06:50:20PM 2 points [-]

Whatever our prior for encountering No-mega, it should be counterbalanced by our prior for encountering Yes-mega (who rewards you if you are counterfactually-muggable).

Comment author: cousin_it 01 June 2010 06:53:16PM *  0 points [-]

You haven't considered the full extent of the damage. What is your prior over all crazy mind-reading agents that can reward or punish you for arbitrary counterfactual scenarios? How can you be so sure that it will balance in favor of Omega in the end?

Comment author: Nisan 01 June 2010 07:29:46PM *  1 point [-]

In fact, I can consider all crazy mind-reading reward/punishment agents at once: For every such hypothetical agent, there is its hypothetical dual, with the opposite behavior with respect to my status as being counterfactually-muggable (the one rewarding what the other punishes, and vice versa). Every such agent is the dual of its own dual; in the universal prior, being approached by an agent is about as likely as being approached by its dual; and I don't think I have any evidence that one agent will be more likely to appear than its dual. Thus, my total expected payoff from these agents is 0.

Omega itself does not belong to this class of agent; it has no dual. (ETA: It has a dual, but the dual is a deceptive Omega, which is much less probable than Omega. See below.) So Omega is the only one I should worry about.

I should add that I feel a little uneasy because I can't prove that these infinitesimal priors don't dominate everything when the symmetry is broken, especially when the stakes are high.

Comment author: cousin_it 01 June 2010 08:01:05PM *  2 points [-]

Omega itself does not belong to this class of agent; it has no dual.

Why? Can't your definition of dual be applied to Omega? I admit I don't completely understand the argument.

Comment author: Nisan 01 June 2010 08:23:14PM *  2 points [-]

Okay, I'll be more explicit: I am considering the class of agents who behave one way if they predict you're muggable and behave another way if they predict you're unmuggable. The dual of an agent behaves exactly the same as the original agent, except the behaviors are reversed. In symbols:

  • An agent A has two behaviors.
  • If it predicts you'd give Omega $5, it will exhibit behavior X; otherwise, it will exhibit behavior Y.
  • The dual agent A* exhibits behavior Y if it predicts you'd give Omega $5, and X otherwise.
  • A and A* are equally likely in my prior.

What about Omega?

  • Omega has two behaviors.
  • If it predicts you'd give Omega $5, it will flip a coin and give you $100 on heads; otherwise, nothing. In either case, it will tell you the rules of the game.

What would Omega* be?

  • If Omega* predicts you'd give Omega $5, it will do nothing. Otherwise, it will flip a coin and give you $100 on heads. In either case, it will assure you that it is Omega, not Omega*.

So the dual of Omega is something that looks like Omega but is in fact deceptive. By hypothesis, Omega is trustworthy, so my prior probability of encountering Omega* is negligible compared to meeting Omega.

(So yeah, there is a dual of Omega, but it's much less probable than Omega.)

Then, when I calculate expected utility, each agent A is balanced by its dual A* , but Omega is not balanced by Omega*.

Comment author: cousin_it 01 June 2010 08:42:46PM *  0 points [-]

If we assume you can tell "deceptive" agents from "non-deceptive" ones and shift probability weight accordingly, then not every agent is balanced by its dual, because some "deceptive" agents probably have "non-deceptive" duals and vice versa. No?

(Apologies if I'm misunderstanding - this stuff is slowly getting too complex for me to grasp.)

Comment author: Nisan 02 June 2010 12:05:46AM *  1 point [-]

The reason we shift probability weight away from the deceptive Omega* is that, in the original problem, we are told that we believe Omega to be non-deceptive. The reasoning goes like this: If it looks like Omega and talks like Omega, then it might be Omega or Omega* . But if it were Omega* , then it would be deceiving us, so it's most probably Omega.

In the original problem, we have no reason to believe that No-mega and friends are non-deceptive.

(But if we did, then yes, the dual of a non-deceptive agent would be deceptive, and so have lower prior probability. This would be a different problem, but it would still have a symmetry: We would have to define a different notion of dual, where the dual of an agent has the reversed behavior and also reverses its claims about its own behavior.

What would Omega* be in that case? It would not claim to be Omega. It would truthfully tell you that if it predicted you would not give it $5 on tails, then it would flip a coin and give you $100 on heads; and otherwise it would not give you anything. This has no bearing on your decision in the Omega problem.)

Edit: Formatting.

Comment author: cousin_it 02 June 2010 10:10:05AM *  0 points [-]

By your definitions, Omega* would condition its decision on you being counterfactually muggable by the original Omega, not on you giving money to Omega* itself. Or am I losing the plot again? This notion of "duality" seems to be getting more and more complex.

Comment author: Nisan 02 June 2010 03:16:39PM 0 points [-]

"Duality" has become more complex because we're now talking about a more complex problem — a version of Counterfactual Mugging where you believe that all superintelligent agents are trustworthy. The old version of duality suffices for the ordinary Counterfactual Mugging problem.

My thesis is that there's always a symmetry in the space of black swans like No-mega.

In the case currently under consideration, I'm assuming Omega's spiel goes something like "I just flipped a coin. If it had been heads, I would have predicted what you would do if I had approached you and given my spiel...." Notice the use of first-person pronouns. Omega* would have almost the same spiel verbatim, also using first-person pronouns, and make no reference to Omega. And, being non-deceptive, it would behave the way it says it does. So it wouldn't condition on your being muggable by Omega.

You could object to this by claiming that Omega actually says "I am Omega. If Omega had come up to you and said....", in which case I can come up with a third notion of duality.

Comment author: Jonathan_Graehl 01 June 2010 06:58:27PM 2 points [-]

Surely the last thing on anyone's mind, having been persuaded they're in the presence of Omega in real life, is whether or not to give $100 :)

I like the No-mega idea (it's similar to a refutation of Pascal's wager by invoking contrary gods), but I wouldn't raise my expectation for the number of No-mega encounters I'll have by very much upon encountering a solitary Omega.

Generalizing No-mega to include all sorts of variants that reward stupid or perverse behavior (are there more possible God-likes that reward things strange and alien to us?), I'm not in the least bit concerned.

I suppose it's just a good argument not to make plans for your life on the basis of imagined God-like beings. There should be as many gods who, when pleased with your action, intervene in your life in a way you would not consider pleasant, and are pleased at things you'd consider arbitrary, as those who have similar values they'd like us to express, and/or actually reward us copacetically.

Comment author: cousin_it 01 June 2010 07:03:11PM *  2 points [-]

I wouldn't raise my expectation for the number of No-mega encounters I'll have by very much upon encountering a solitary Omega.

You don't have to. Both Omega and No-mega decide based on what your intentions were before seeing any supernatural creatures. If right now you say "I would give money to Omega if I met one" - factoring in all belief adjustments you would make upon seeing it - then you should say the reverse about No-mega, and vice versa.

ETA: Listen, I just had a funny idea. Now that we have this nifty weapon of "exploding counterfactuals", why not apply it to Newcomb's Problem too? It's an improbable enough scenario that we can make up a similarly improbable No-mega that would reward you for counterfactual two-boxing. Damn, this technique is too powerful!

Comment author: Jonathan_Graehl 01 June 2010 07:21:33PM *  0 points [-]

By not believing No-mega is probable just because I saw an Omega, I mean that I plan on considering such situations as they arise on the basis that only the types of godlike beings I've seen to date (so far, none) exist. I'm inclined to say that I'll decide in the way that makes me happiest, provided I believe that the godlike being is honest and really can know my precommitment.

I realize this leaves me vulnerable to the first godlike huckster offering me a decent exclusive deal; I guess this implies that I think I'm much more likely to encounter 1 godlike being than many.

Comment author: Vladimir_Nesov 01 June 2010 07:08:29PM 3 points [-]

The caveat is of course that Counterfactual Mugging or Newcomb Problem are not to be analyzed as situations you encounter in real life: the artificial elements that get introduced are specified explicitly, not by an update from surprising observation. For example, the condition that Omega is trustworthy can't be credibly expected to be observed.

The thought experiments explicitly describe the environment you play your part in, and your knowledge about it, the state of things that is much harder to achieve through a sequence of real-life observations, by updating your current knowledge.

Comment author: cousin_it 02 June 2010 11:45:00AM *  0 points [-]

I dunno, Newcomb's Problem is often presented as a situation you'd encounter in real life. You're supposed to believe Omega because it played the same game with many other people and didn't make mistakes.

In any case I want a decision theory that works on real life scenarios. For example, CDT doesn't get confused by such explosions of counterfactuals, it works perfectly fine "locally".

ETA: My argument shows that modifying yourself to never "regret your rationality" (as Eliezer puts it) is impossible, and modifying yourself to "regret your rationality" less rather than more requires elicitation of your prior with humanly impossible accuracy (as you put it). I think this is a big deal, and now we need way more convincing problems that would motivate research into new decision theories.

Comment author: Vladimir_Nesov 02 June 2010 11:49:45AM *  0 points [-]

If you do present observations that move the beliefs to represent the thought experiment, it'll work just as well as the magically contrived thought experiment. But the absence of relevant No-megas is part of the setting, so it too should be a conclusion one draws from those observations.

Comment author: cousin_it 02 June 2010 11:58:23AM *  0 points [-]

Yes, but you must make the precommitment to love Omegas and hate No-megas (or vice versa) before you receive those observations, because that precommitment of yours is exactly what they're judging. (I think you see that point already, and we're probably arguing about some minor misunderstanding of mine.)

Comment author: JamesAndrix 01 June 2010 06:27:14PM 4 points [-]
Comment author: NancyLebovitz 01 June 2010 08:16:17PM 1 point [-]

Delightful, and has a nice breakdown of the sort of questions to ask yourself (what exactly is the problem, how much precision is actually needed, what is the condition of the tools, etc.) if you want to get things done efficiently.

Comment author: Spurlock 01 June 2010 07:46:09PM 5 points [-]

I've been reading the Quantum Mechanics sequence, and I have a question about Many-Worlds. My understanding of MWI and the rest of QM is pretty much limited to the LW sequence and a bit of Wikipedia, so I'm sure there will be no shortage of people here who have a better knowledge of it and can help me.

My question is this: why are the Born Probabilites a problem for MWI?

I'm sure it's a very difficult problem, I think I just fail to understand the implications of some step along the way. FWIW, my understanding of the Born Probabilities mainly clicks here:

If a whole gigantic human experimenter made up of quintillions of particles,

Interacts with one teensy little atom whose amplitude factor has a big bulge on the left and a small bulge on the right,

Then the resulting amplitude distribution, in the joint configuration space,

Has a big amplitude blob for "human sees atom on the left", and a small amplitude blob of "human sees atom on the right".

And what that means, is that the Born probabilities seem to be about finding yourself in a particular blob, not the particle being in a particular place.

Firstly, I know probability is the wrong word, but I'm going to use it here, insufficiently, in the same way that it's normally insufficiently used to talk about QM. I sure hope that's okay because it is a pain to nail down in English.

So... If a quantum event has a 30% chance of going LEFT and a 70% chance of going RIGHT (which you could observe without entangling yourself, for example by blasting a whole bunch of photons through slits and seeing the overall density pattern without measuring individual photons) (I think), then if you entangle yourself with a single instance of it, you'll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.

So why is this surprising? Obviously if we're just counting observers then we would expect a 50/50 probability spread, but I assume the problem isn't that naive. Obviously if the particles themselves exhibit a 30/70 preference, then we, being made of particles, should expect to do the same. Or... if the particles themselves can exist along a (pseudo)probability continuum, then why should we, the entangled, not expect to do the same? If those quarks are 70/30, then why aren't yours? Why should MWI necessarily imply the sudden creation of exactly 2 worlds with equal weight, as opposed to just dividing experience, locally and where necessary, into a weighted continuum?

I think I'll try this from another angle. MWI gets points for treating people/observers as particles, governed by the same laws as everything else. But are we really treating ourselves equally if we don't assume that we too follow this 30/70 split? It seems like this should be the default assumption, the one requiring no extra postulates: that we divide up not into discrete worlds but along a weighted continuum. Obviously it's easier on our typical conception of consciousness if we can just have the whole universe split neatly in two, but that feels to me like putting the weirdness where it logically belongs (on our comparatively weak understanding of conscious experience).

Hope this makes at least some sense to someone who can steer me in the right direction. I'd appreciate responses as to where specifically I've erred, as this will continue to bug me until I see where exactly I went wrong. Thanks in advance.

Comment author: [deleted] 01 June 2010 10:07:21PM 7 points [-]

So... If a quantum event has a 30% chance of going LEFT and a 70% chance of going right . . . you'll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.

So why is this surprising?

The surprising (or confusing, mysterious, what have you) thing is that quantum theory doesn't talk about a 30% probability of LEFT and a 70% probability of RIGHT; what it talks about is how LEFT ends up with an "amplitude" of 0.548 and RIGHT with an "amplitude" of 0.837. We know that the observed probability ends up being the square of the absolute value of the amplitude, but we don't know why, or how this even makes sense as a law of physics.
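In code, the squaring is a one-liner (using the rounded amplitudes quoted above):

```python
amp_left, amp_right = 0.548, 0.837

# Born rule: observed probability is the squared magnitude of the amplitude.
p_left, p_right = amp_left ** 2, amp_right ** 2
print(round(p_left, 2), round(p_right, 2))  # 0.3 0.7

# For a properly normalized state, |a|^2 + |b|^2 = 1 (up to rounding here).
print(round(p_left + p_right, 2))  # 1.0
```

The mystery, as stated above, isn't the arithmetic; it's why squared magnitudes of amplitudes should correspond to observed frequencies at all.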

Comment author: Spurlock 01 June 2010 10:19:51PM 2 points [-]

Ah. So it's not the idea that it's weighted so much as the specific act of squaring the amplitude. "Why squaring the amplitude, why not something else?".

I suppose the way I had been reading, I thought that the problem came from expecting a different result given the squared amplitude probability thing, not from the thing itself.

That is helpful, many thanks.

Comment author: [deleted] 01 June 2010 10:25:21PM 0 points [-]

Yes, precisely.

Comment author: Douglas_Knight 01 June 2010 11:20:48PM 3 points [-]

"Why squaring the amplitude, why not something else?"

That's one issue, but as Warrigal said, the other issue is "how this even makes sense." It seems to say that the amplitude is a measure of how real the configuration is.

Comment author: NancyLebovitz 01 June 2010 08:17:42PM 1 point [-]

Any recommendations for how much redundancy is needed to make ideas more likely to be comprehensible?

Comment author: [deleted] 01 June 2010 09:52:08PM *  3 points [-]

It really depends upon the topic and upon how much inferential distance there is between your ideas and the reader's understanding of the topic. Eliezer's earlier posts are easily understandable to someone with no prior experience in statistics, cognitive science, etc. because he uses a number of examples and metaphors to clearly illustrate his point. In fact, it might be helpful to use his posts as a metric to help answer your question. In general, though, it's probably best to repeat yourself by summarizing your point at both the beginning and end of your essay/post/whatever, and by using several examples to illustrate whatever you are talking about, especially if writing for non-experts.

Comment author: Eliezer_Yudkowsky 02 June 2010 07:44:59AM 8 points [-]

There's a general rule in writing that if you don't know how many items to put in a list, you use three. So if you're giving examples and you don't know how many to use, use three. Don't know if that helps, but it's the main heuristic I know that's actually concrete.

Comment author: hegemonicon 02 June 2010 02:39:04PM 6 points [-]

The only guideline I'm familiar with is "Tell me three times - tell me what you're going to explain, then explain it, then tell me what you just explained." This seems to work on multiple scales - from complete books to shorter essays (though I'm not sure if it works on the level of individual paragraphs).

Comment author: dclayh 02 June 2010 06:05:44PM 0 points [-]

I believe that's called the Bellman's Rule.

Comment author: James_K 01 June 2010 08:46:54PM 8 points [-]

This post is about the distinctions between Traditional and Bayesian Rationality, specifically the difference between refusing to hold a position on an idea until a burden of proof is met versus Bayesian updating.

Good quality government policy is an important issue to me (it's my Something to Protect, or the closest I have to one), and I tend to approach rationality from that perspective. This gives me a different perspective from many of my fellow aspiring rationalists here at Less Wrong.

There are two major epistemological challenges in policy advice, in addition to the normal difficulties we all have to deal with:

1) Policy questions fall almost entirely within the social sciences. That means the quality of evidence is much lower than it is in the physical sciences. Uncontrolled observations, analysed with statistical techniques, are generally the strongest possible evidence, and sometimes you have nothing but theory or professional instinct to work with.

2) You have a very limited time in which to find an answer. Cabinet Ministers often want an answer within weeks; a timeframe measured in months is luxurious. And often a policy proposal is too sensitive to discuss with the general public, or sometimes with anyone outside your team.

By the standards of Traditional Rationality, policy advice is often made without meeting a burden of proof. Best guesses and theoretical considerations are too weak to reach conclusions. A proper practitioner of Traditional Rationality wouldn't be able to make any kind of recommendation; one could identify some promising initial hypotheses, but that's it.

But just because you didn't have time to come up with a good answer doesn't mean that Ministers don't expect an answer. And a practitioner of Bayesian Rationality always has a best guess as to what is true; even if the evidence base is non-existent, you can fall back on your prior. You don't want to be overconfident in stating your position; assumptions must be outlined and sensitivities should be explored. But you still need to give an answer, and that's what attracts me to Bayesian approaches: you don't have to be officially agnostic until being presented with a level of evidence that is unrealistically high for policy work.

It seems to me that if you have very good quality evidence then Bayesian and Traditional Rationality are very similar. Good evidence either proves or disproves a proposition for a Traditional Rationalist, and for a Bayesian Rationalist it will shift their probability estimate, as well as increasing their confidence a lot. The biggest difference seems to me to be that Bayesian Rationality is able to make use of weak evidence in a way Traditional Rationality can't.
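The way weak evidence accumulates can be sketched with Bayes' rule in odds form (the likelihood ratios below are made up): each piece of evidence multiplies the prior odds by its likelihood ratio, so even individually feeble evidence moves a best guess.

```python
def update(prior, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = 0.5  # agnostic prior that some policy will work
# Three independent, individually weak pieces of evidence (LR = 1.5 each):
for _ in range(3):
    p = update(p, 1.5)
print(round(p, 3))  # 0.771 -- a usable best guess, not certainty
```

A Traditional Rationalist would call each LR-1.5 observation inconclusive and hold no position; the Bayesian ends up with a defensible lean either way, which is exactly what a Minister's deadline demands.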

Comment author: realitygrill 02 June 2010 03:05:53AM 0 points [-]

I am not at all like you. I don't have much interest in policy at all, and I do tend to refuse to hold a position, being very mindful of how easy it is to be completely off course (probably from reading too much history of science; it's "the graveyard of dead ideas", after all). I'm likely to tell the Cabinet Ministers to get off my back or they'll get absolutely useless recommendations.

However, I think you have hit upon the point that makes Bayesianism attractive to me: it's rationality you can use to act in real-time, under uncertainty, in normal life. Traditional Rationality is slow.

Comment author: James_K 02 June 2010 03:38:38AM 0 points [-]

I see your point; the trouble is that a recommendation that comes too late is often absolutely useless. A lot of policy is time-dependent: if you don't act within a certain time frame, you might as well do nothing. While sometimes doing nothing is the right thing to do, a late recommendation is often no better than no recommendation.

Comment author: realitygrill 02 June 2010 05:02:12AM 0 points [-]

Yeah, I forgot to add that you've budged me slightly from my staunch positivist attitude for social science. Thanks. Reading up on complex adaptive systems has made me just that much more skeptical about our ability to predict policy's effects, and perhaps biased me.

Comment author: James_K 02 June 2010 06:05:33AM 1 point [-]

It's nice to know I've had an influence :)

As it happens, I'm pretty sceptical as to how much we can know as well. There's nothing like doing policy to gain an understanding of how messy it can be. The social sciences have a less than wonderful record in developing knowledge (look at the record of development economics, as one example), and economic forecasting is still not much better than voodoo, but it's not like there's another group out there with all the answers. We don't have all of the answers, or even most of them, but we're better than nothing, which is the only alternative.

Comment author: matt 02 June 2010 09:48:18PM *  5 points [-]

Nothing is often a pretty good alternative. Government action always comes at a cost, even if only the deadweight loss of taxation (keyphrase "public choice" for reasons you might expect the cost to be higher than that). I'm not trying to turn this into a political debate, but you should consider that doing nothing is not necessarily a bad thing, and that what you do instead is not necessarily better.

Comment author: mattnewport 02 June 2010 10:01:50PM 1 point [-]

Politicians' logic: “Something must be done. This is something. Therefore we must do it.”

Comment author: xamdam 02 June 2010 04:15:26PM 1 point [-]

Reminded me of one of my favorite movie dialogues - from Sunshine. The context was actually physics, but the complexity of the situation and the time frame put the characters in the same situation you're in with the Cabinet Ministers.

Capa: It's the problem right there. Between the boosters and the gravity of the sun the velocity of the payload will get so great that space and time will become smeared together and everything will distort. Everything will be unquantifiable.

Kaneda: You have to come down on one side or the other. I need a decision.

Capa: It's not a decision, it's a guess. It's like flipping a coin and asking me to decide whether it will be heads or tails.

Kaneda: And?

Capa: Heads... We harvested all Earth's resources to make this payload. This is humanity's last chance... our last, best chance... Searle's argument is sound. Two last chances are better than one.

http://www.imdb.com/title/tt0448134/quotes?qt0386955

Comment author: James_K 02 June 2010 10:06:19PM 1 point [-]

Yes, that's a good example. There are times when a decision has to be made, and saying you don't know isn't very useful. Even if you have very little to go on, you still have to decide one way or the other.

Comment author: dclayh 01 June 2010 08:52:46PM 3 points [-]

William Saletan at Slate is writing a series of articles on the history and uses of memory falsification, dealing mainly with Elizabeth Loftus and the ethics of her work. Quote from the latest article:

Loftus didn't flinch at this step. "A therapist isn't supposed to lie to clients," she conceded. "But there's nothing to stop a parent from trying something like [memory modification] with an overweight child or teen." Parents already lied to kids about Santa Claus and the tooth fairy, she observed. To her, it was a no-brainer: "A white lie that might get them to eat broccoli and asparagus vs. a lifetime of obesity and diabetes: Which would you rather have for your kid?"

(This topic has, of course, been done to death around these parts.)

Comment author: billswift 02 June 2010 05:01:41PM 0 points [-]

Interesting. I have read several of Loftus's books, but the last one was The Myth of Repressed Memory: False Memories and Allegations of Sexual Abuse over ten years ago. I think I'll go see what she has written since. Thanks for reminding me of her work.

Comment author: cousin_it 01 June 2010 09:52:35PM *  3 points [-]

The blog of Scott Adams (author of Dilbert) is generally quite awesome from a rationalist perspective, but one recent post really stood out for me: Happiness Button.

Suppose humans were born with magical buttons on their foreheads. When someone else pushes your button, it makes you very happy. But like tickling, it only works when someone else presses it. Imagine it's easy to use. You just reach over, press it once, and the other person becomes wildly happy for a few minutes.

What would happen in such a world?

...

Comment author: Vladimir_Nesov 01 June 2010 10:11:03PM *  4 points [-]

What would happen in such a world?

Classical game theorists establish a scientific consensus that the only rational course of action is not to push the buttons. Anyone who does is regarded with contempt or pity and gets lowered in the social stratum, before finally managing to rationalize the idea out of conscious attention, with the help of the instinct to conformity. A few free-riders smugly teach the remaining naive pushers a bitter lesson, only to stop receiving the benefit. Everyone gets back to business as usual, crazy people spinning the wheels of a mad world.

Comment author: Houshalter 01 June 2010 10:31:08PM 0 points [-]

How does that work? I suppose it makes sense a little, considering that the world has to go on and can't stop because everyone's on the ground being "happy", but that wouldn't mean people wouldn't do it, or even that it wouldn't be the "rational" thing to do.

Comment author: mattnewport 01 June 2010 10:33:25PM *  10 points [-]

Is everyone missing the obvious subtext in the original article - that we already live in just such a world but the button is located not on the forehead but in the crotch?

Perhaps some people would give their button-pushing services away for free, to anyone who asked. Let's call those people generous, or as they would become known in this hypothetical world: crazy sluts.

Comment author: Vladimir_Nesov 01 June 2010 10:42:11PM 0 points [-]

That would not model the True Prisoner's Dilemma.

Comment author: mattnewport 01 June 2010 10:57:15PM 0 points [-]

What's that got to do with the price of eggs?

Comment author: Blueberry 01 June 2010 10:48:54PM 2 points [-]

Except that sex, unlike the button in the story, doesn't always make people happy. Sometimes, for some people, it comes with complications that decrease net utility. (Also, it is possible to push your own button with sex.)

Comment author: mattnewport 01 June 2010 10:58:07PM *  4 points [-]

Sure, but it's not my comparison - I'm just saying it appears to be the obvious subtext of the original article.

Button pushing would become an issue of power and politics within relationships and within business. The rich and famous would get their buttons pushed all day long, while the lonely would fantasize about how great that would be.

Comment author: Houshalter 01 June 2010 11:10:08PM 1 point [-]

The rich and famous would get their buttons pushed all day long, while the lonely would fantasize about how great that would be.

But two poor, "lonely" people could just get together and push each other's buttons. That's the problem with this: any two people who can cooperate with each other can get the advantage. There was once an experiment to evolve different programs with a genetic algorithm to play the prisoner's dilemma. I'm not sure exactly how it was organized, which would really make or break different strategies, but the result was a program which always cooperated except when the other didn't, and it refused to cooperate with the other until it believed they were "even".
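
The setup described here sounds like an Axelrod-style iterated prisoner's dilemma tournament. Here is a minimal sketch, with tit-for-tat as the cooperative-but-retaliating strategy and the textbook payoffs assumed (not necessarily the payoffs or strategy of the experiment being remembered):

```python
# A minimal Axelrod-style iterated prisoner's dilemma with textbook payoffs.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def play(s1, s2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFFS[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): two cooperators thrive
print(play(tit_for_tat, always_defect))  # (9, 14): loses round 1, then mutual defection
```

Tit-for-tat loses only the opening round to a pure defector and cooperates fully with itself, which is roughly why it did so well in Axelrod's tournaments.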

Comment author: mattnewport 01 June 2010 11:14:41PM 1 point [-]

Are you thinking of tit for tat?

I'm not trying to argue for or against the comparison. Would you agree that the subtext exists in the original article or do you think I'm over-interpreting?

Comment author: bentarm 02 June 2010 09:31:30AM 1 point [-]

No, the subtext is definitely there in the original article. At least, I saw it immediately, as did most of the commenters:

My invisible friend says that having your happiness button pushed will cause you to spend eternity boiling in a lava pit.

Comment author: Houshalter 01 June 2010 10:58:51PM 0 points [-]

I think the best analogy would be drugs, but those have bad things associated with them that the button example doesn't: they cost money, they cause health problems, etc.

Comment author: CronoDAS 01 June 2010 11:27:43PM 4 points [-]

But you can touch that button yourself...

Comment author: SilasBarta 02 June 2010 12:45:27AM 4 points [-]

How does that compare to when someone else touches your button with their button?

Comment author: CronoDAS 02 June 2010 01:46:21AM 5 points [-]

I've never done that, so I don't know.

Comment author: RichardKennaway 02 June 2010 10:16:01AM 2 points [-]

I see that subtext, but I also see a subtext of geeks blaming the obvious irrationality of everyone else for them not getting any, like, it's just poking a button, right?

Comment author: Wei_Dai 02 June 2010 04:15:17AM 6 points [-]

Are you saying that classical game theorists would model the button-pushing game as one-shot PD? Why would they fail to notice the repetitive nature of the game?

Comment author: Vladimir_Nesov 02 June 2010 08:58:19AM *  1 point [-]

The theory says to defect in the iterated dilemma as well (under some assumptions).

Comment author: cousin_it 02 June 2010 12:18:05PM *  3 points [-]

Here's what the theory actually says: if you know the number of iterations exactly, it's a Nash equilibrium for both to defect on all iterations. But if you know the chance that this iteration will be the last, and this chance isn't too high (e.g. below 1/3, can't be bothered to give an exact value right now), it's a Nash equilibrium for both to cooperate as long as the opponent has cooperated on previous iterations.
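
For anyone curious where a threshold like that comes from, here is the standard algebra, sketched in code. The payoffs below are the textbook ones (T=5, R=3, P=1, S=0), an assumption on my part, which is why the cutoff comes out at 1/2 rather than the 1/3 mentioned above; the exact value depends on the payoff matrix.

```python
# Grim trigger: cooperate until the opponent defects, then defect forever.
# With continuation probability d per round, mutual cooperation is worth
# R/(1-d), while a one-shot deviation is worth T + d*P/(1-d).
T, R, P, S = 5, 3, 1, 0   # textbook payoffs (assumed)

def cooperation_is_equilibrium(d):
    coop = R / (1 - d)
    deviate = T + d * P / (1 - d)
    return coop >= deviate

# Solving R/(1-d) >= T + d*P/(1-d) gives d >= (T-R)/(T-P).
threshold = (T - R) / (T - P)
print(threshold)  # 0.5: cooperation survives if the stopping chance is <= 1/2
```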

Comment author: khafra 02 June 2010 01:37:51PM 2 points [-]

I'd be far more willing to believe in game theorists calling for defection on the iterated PD than in mathematicians steering mainstream culture.

However, with the positive-sum nature of this game, I'd expect theorists to go with Schelling instead of Nash; and then be completely disregarded by the general public who categorize it under "physical ways of causing pleasure" and put sexual taboos on it.

Comment author: AlephNeil 02 June 2010 07:17:42PM 0 points [-]

This comment was very entertaining... but...

I actually do think people in such a world ought not to press buttons. But not very strongly... only about the same "oughtnotness" as people ought not to waste time looking at porn.

The argument is the same: Aren't there better things we could be doing?

Ideally, in button-world, people will devise a way to remove their buttons.

But if that couldn't be done, and we're seriously asking "what would happen?" I suppose it might end up being treated like sex. Having one's button publicly visible is "indecent" - buttons are only pushed in private. Etc. etc.

Comment author: Mass_Driver 02 June 2010 07:53:56PM 4 points [-]

I dunno, this strikes me as a somewhat sex-negative attitude. Responding seriously to your question about the better things we could be doing, it strikes me that people spend most of their time doing worthless things. We seldom really know whether we are happy, what it means to be happy, or how what we are doing might connect to somebody's future happiness.

If the buttons actually made people happy from time to time, it could be quite useful as a 'reality check.' People suspecting that X led to happiness could test and falsify their claim by seeing whether X produced the same mental/emotional state that the button did.

Obviously we shouldn't spend all our time pressing buttons, having sex, or looking at porn. But I sometimes wonder whether we wouldn't be better off if most people, especially in the developed world where labor seems to be over-supplied and the opportunity cost of not working is low, spent a couple hours a day doing things like that.

Comment author: AlephNeil 02 June 2010 09:28:06PM *  2 points [-]

If the buttons actually made people happy from time to time, it could be quite useful as a 'reality check.' People suspecting that X led to happiness could test and falsify their claim by seeing whether X produced the same mental/emotional state that the button did.

Isn't that a bit like snorting some coke (or perhaps just masturbating) after a happy experience (say, proving a particularly interesting theorem) to test whether it was really 'happy'?

There are many different kinds of 'happiness', and what makes an experience a happy or an unhappy one is not at all simple to pin down. A kind of happiness that one can obtain at will, as often as desired, and which is unrelated to any "objective improvement" in oneself or the things one cares about, isn't really happiness at all.

Pretend it's new year's eve and you're planning some goals for next year - some things that, if you achieve them, you will look back on with pride and a sense of accomplishment. Is 'looking at lots of porn' on your list (even assuming that it's free and no-one was harmed in producing it)?

I don't mean to imply anything about sex, because sex has a whole lot of things associated with it that make it extremely complicated. But the 'pleasure button' scenario gives us a clean slate to work from, and to me it seems an obvious reductio ad absurdum of the idea that pleasure = utility.

Comment author: Blueberry 02 June 2010 09:36:15PM 2 points [-]

You seem to be confusing happiness with accomplishment:

A kind of happiness that one can obtain at will, as often as desired, and which is unrelated to any "objective improvement" in oneself or the things one cares about, isn't really happiness at all.

Sure it is. It may not be accomplishment, or meaningfulness, but it is happiness, by definition. I think the confusion comes because you seem to value many other things more than happiness, such as pride and accomplishment. Happiness is just a feeling; it's not defined as something that you need to value most, or gain the most utility from.

Comment author: RomanDavis 02 June 2010 09:43:04PM *  0 points [-]

Depends on how you define happiness. If you define it as "how much dopamine is in my system", "joy" or "these are the neat brainwaves my brain is giving off", then yes, you could achieve happiness by pressing a button (in theory).

A lot of people seem to assume happiness = utility measured in utilons, which is a whole different thing altogether.

Sort of like seeing some one writhe in ecstasy after jamming a needle in their arm and saying, "I'm so happy I'm not a heroin addict."

Comment author: SilasBarta 02 June 2010 09:49:34PM 1 point [-]

Depends on how you define happiness. If you define it as "how much dopamine is in my system", "joy" or "these are the neat brainwaves my brain is giving off", then yes, you can achieve happiness by pressing a button.

Oh, really? How can I get a cheap, legal, repeatable dopamine rush to my brain?

Comment author: RomanDavis 02 June 2010 09:53:46PM *  2 points [-]

Edited my post to reflect your point. Although, I'm a young male and can achieve orgasm multiple times in under ten minutes with the aid of some lube and free porn. You probably didn't want to know that.

Comment author: Blueberry 02 June 2010 09:59:51PM 0 points [-]

That's amazing. A drug that could eliminate refractory period like that would sell better than Viagra.

Comment author: Blueberry 02 June 2010 09:51:16PM 0 points [-]

A lot of people seem to assume happiness = utility measured in utilons, which is a whole different thing altogether.

Yes, I've noticed that assumption, and I think even Jeremy Bentham talked about pleasure in utility terms. I don't think it's accurate for everyone; for instance, someone who values accomplishment more than happiness will assign higher utility to choices that lead to unhappy accomplishment than to unproductive leisure.

Comment author: RomanDavis 02 June 2010 09:56:31PM -1 points [-]

...and then they're happier working. By definition. Welcome to semantics.

Comment author: Blueberry 02 June 2010 10:02:18PM *  0 points [-]

That's a strange definition of "happier". They're happier with a choice just because they prefer that choice? Even if they appear frustrated and tired and grumpy all the time? Even if they tell you they're not happy and they prefer this unhappiness to not accomplishing anything?

(In real life, I suspect happy people actually accomplish more, but consider a hypothetical where you have to choose between unhappy accomplishment and unproductive leisure.)

Comment author: AlephNeil 02 June 2010 09:49:14PM 0 points [-]

How do you distinguish a degenerate case of 'happiness' from 'satiation of a need'? Is the smoker or heroin addict made 'happy' by their fix? Does a glass of water make you 'happy' if you're dying from thirst, or does it just satiate the thirst?

And can't the same sensation be either 'happy' or 'unhappy' depending on the circumstances? A person with persistent sexual arousal syndrome isn't made 'happy' by the orgasms they can't help but 'endure'.

The idea that there's a "raw happiness feeling" detachable from the information content that goes with it is intuitively appealing but fatally flawed.

Comment author: Blueberry 02 June 2010 09:57:33PM 1 point [-]

And can't the same sensation be either 'happy' or 'unhappy' depending on the circumstances? A person with persistent sexual arousal syndrome isn't made 'happy' by the orgasms they can't help but 'endure'.

Yes, this is true. We will need to assume that the button can analyze the context to determine how to provide happiness for the particular brain it's attached to.

My point is that happiness is not necessarily associated with accomplishment or objective improvement in oneself (though it can be). In such a situation, some people might not value this kind of detached happiness, but that doesn't mean it's not happiness.

Comment author: Blueberry 02 June 2010 09:38:50PM 3 points [-]

I suppose it might end up being treated like sex. Having one's button publicly visible is "indecent" - buttons are only pushed in private.

The analogy to sex is rough. From a historical and evolutionary perspective, sex is treated the way it is because it leads to gene replication and parenthood, not because it leads to pleasure. The lack of side effects from the buttons makes them more comparable to rubbing someone's back, smiling, or saying something nice to someone.

Comment author: AlephNeil 02 June 2010 09:59:49PM 2 points [-]

OK - well that's one possibility. But in discussing either of these analogies, aren't we just showing (a) that the pleasure-button scenario is underdetermined, because there are many different kinds of pleasure and (b) that it's redundant, because people can actually give each other pats on the back, or hand-jobs or whatever.

Comment author: Alicorn 01 June 2010 11:27:34PM 5 points [-]

A social custom would be established that buttons are only to be pressed by knocking foreheads together. Offering to press a button in a fashion that doesn't ensure mutuality is seen as a pathetic display of low status.

Comment author: Wei_Dai 02 June 2010 04:15:06AM 11 points [-]

Pushing someone's happiness button is like doing them a favor, or giving them a gift. Do we have social customs that demand favors and gifts always be exchanged simultaneously? Well, there are some customs like that, but in general no, because we have memory and can keep mental score.

Comment author: cousin_it 02 June 2010 09:21:28AM 3 points [-]

Hah. Status is relative, remember? Your setup just ensures that "dodging" at the last moment, getting your button pressed without pressing theirs, is seen as a glorious display of high status.

Comment author: Christian_Szegedy 02 June 2010 09:46:23PM *  9 points [-]

We already have these buttons on LessWrong... ;)

Comment author: cousin_it 02 June 2010 09:59:04PM *  3 points [-]

Karma does make me feel important, but when it comes to happiness karma can't hold a candle to loud music, alcohol and girls (preferably in combination). I wish more people recognized these for the eternal universal values they are. If only someone invented a button to send me some loud music, alcohol and girls, that would be the ultimate startup ever.

Comment author: NaN 01 June 2010 10:14:47PM *  20 points [-]

Why is LessWrong not an Amazon affiliate? I recall buying at least one book due to it being mentioned on LessWrong, and I haven't been around here long. I can't find any reliable data on the number of active LessWrong users, but I'd guess it would number in the 1000s. Even if only 500 are active, and assuming only 1/4 buy at least one book mentioned on LessWrong, assuming a mean purchase value of $20 (books mentioned on LessWrong probably tend towards the academic, expensive side), that would work out at $375/year.

IIRC, it only took me a few minutes to sign up as an Amazon affiliate. They (stupidly) require a different account for each Amazon website, so 5*4 minutes (.com, .co.uk, .de, .fr), +20 for GeoIP database, +3-90 (wide range since coding often takes far longer than anticipated) to set up URL rewriting (and I'd be happy to code this) would give a 'worst case' scenario of $173 annualized returns per hour of work.

Now, the math is somewhat questionable, but the idea seems like a low-risk, low-investment and potentially high-return one, and I note that Metafilter and StackOverflow do this, though sadly I could not find any information on the returns they see from this. So, is there any reason why nobody has done this, or did nobody just think of it/get around to it?
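
For what it's worth, here is the arithmetic behind the $375 and $173 figures made explicit. The 15% referral fee is my inference from the $375 figure, not something stated above; Amazon's actual rates vary by category and volume.

```python
# Reproducing the back-of-envelope estimate above. The referral fee is an
# assumption: it's the rate implied by the $375 figure, not a quoted one.
active_users = 500
buy_fraction = 1 / 4
mean_purchase = 20.0       # dollars
referral_fee = 0.15        # assumed

annual_revenue = active_users * buy_fraction * mean_purchase * referral_fee
print(annual_revenue)      # 375.0

# Worst case: 5*4 min of signups + 20 min GeoIP + 90 min URL rewriting.
worst_case_hours = (5 * 4 + 20 + 90) / 60
print(round(annual_revenue / worst_case_hours, 2))  # 173.08 dollars per hour
```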

Comment author: Douglas_Knight 02 June 2010 01:28:25AM 2 points [-]

From your link, a further link doesn't make it sound great at SO - 2-4x the utter failure. But they are very positive about it because the cost of implementation was very low. Just top-level posts or no geolocating would be even cheaper.

You may be amused (or something) by this search

Comment author: mattnewport 02 June 2010 01:47:23AM 4 points [-]

A possibly relevant data point: I usually post any links to books I put online with my amazon affiliate link and in the last 3 months I've had around 25 clicks from links to books I believe I posted in Less Wrong comments and no conversions.

Comment author: steven0461 01 June 2010 10:35:58PM *  14 points [-]

Marginal Revolution linked to A Fine Theorem, which has summaries of papers in decision theory and other relevant econ, including the classic "agreeing to disagree" results. A paper linked there claims that the probability settled on by Aumann-agreers isn't necessarily the same one as the one they'd reach if they shared their information, which is something I'd been wondering about. In retrospect this seems obvious: if Mars and Venus only both appear in the sky when the apocalypse is near, and one agent sees Mars and the other sees Venus, then they conclude the apocalypse is near if they exchange info, but if the probabilities for Mars and Venus are symmetrical, then no matter how long they exchange probabilities they'll both conclude the other one probably saw the same planet they did. The same thing should happen in practice when two agents figure out different halves of a chain of reasoning. Do I have that right?

ETA: it seems, then, that if you're actually presented with a situation where you can communicate only by repeatedly sharing probabilities, you're better off just conveying all your info by using probabilities of 0 and 1 as Morse code or whatever.

ETA: the paper works out an example in section 4.
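
A standard toy construction makes the gap concrete (this is a generic example, not necessarily the one in the paper's section 4): each agent's posterior is 1/2 no matter what they privately observe, so exchanging posteriors conveys nothing and they 'agree' at 1/2, while pooling the raw observations yields certainty.

```python
from fractions import Fraction

# Four equally likely states; the event of interest holds in states 1 and 4.
E = {1, 4}

# Each agent's private signal is the cell of their partition containing
# the true state (think: which planet each agent managed to observe).
partition1 = [{1, 2}, {3, 4}]
partition2 = [{1, 3}, {2, 4}]

def posterior(cell):
    return Fraction(len(cell & E), len(cell))

# Whatever they privately observe, both agents announce 1/2...
assert all(posterior(c) == Fraction(1, 2) for c in partition1 + partition2)

# ...so exchanging posteriors reveals nothing and they 'agree' at 1/2.
# But pooling the raw observations pins the state down exactly:
true_state = 1
cell1 = next(c for c in partition1 if true_state in c)
cell2 = next(c for c in partition2 if true_state in c)
print(posterior(cell1 & cell2))  # 1: certainty, unlike the agreed-upon 1/2
```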

Comment author: bentarm 01 June 2010 10:53:33PM *  17 points [-]

The entire world media seems to have had a mass rationality failure about the recent suicides at Foxconn. There have been 10 suicides there so far this year, at a company which employs more than 400,000 people. This is significantly lower than the base rate of suicide in China. However, everyone is up in arms about the 'rash', 'spate', 'wave'/whatever of suicides going on there.

When I first read the story I was reading a plausible explanation of what causes these suicides by a guy who's usually pretty on the ball. Partly due to the neatness of the explanation, it took me a while to realise that there was nothing to explain.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. It's even harder to achieve this when the fiction comes ready-packaged with a plausible explanation (especially one which fits neatly with your political views).
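
To put rough numbers on this, here is a Poisson sanity check. The national rate of about 20 suicides per 100,000 per year is my assumption, based on commonly cited figures for China around 2010; the true figure is disputed.

```python
import math

# Expected suicides among 400,000 people over Jan-May at the assumed rate.
rate_per_100k_year = 20.0   # assumption; estimates for China vary widely
employees = 400_000
months = 5

lam = rate_per_100k_year / 100_000 * employees * months / 12
print(round(lam, 1))         # ~33.3 expected, versus 10 observed

def poisson_cdf(k, lam):
    # P(X <= k) for X ~ Poisson(lam)
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

print(poisson_cdf(10, lam))  # chance of 10 or fewer by chance: vanishingly small
```

If anything, under these assumptions the observed count is improbably low, which is the opposite of a 'spate'.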

Comment author: mattnewport 01 June 2010 11:01:05PM *  2 points [-]

The first question that came to mind when I heard about this story was 'what's the base rate?'. I didn't investigate further but a quick mental estimate made me doubt that this represented a statistically significant increase above the base rate. It's disappointing yet unsurprising that few if any media reports even consider this point.

Comment author: Bo102010 02 June 2010 12:53:16AM *  1 point [-]

Wasn't there a somewhat well-publicized "spate" of suicides at a large French telecom a while back? I remember the explanation being the same - the number observed was just about what you'd expect for an employer of that size.

ETA: http://en.wikipedia.org/wiki/France_Telecom

Comment author: mattnewport 02 June 2010 01:04:10AM 2 points [-]

Even if the suicide rate was somewhat higher than average it still doesn't necessarily tell you much. You should really be looking at the probability of that number of suicides occurring in some distinct subset of the population - given all the subsets of a population that you can identify, you would expect some to have higher suicide rates than the population as a whole. The relevant question is 'what is the probability that you would observe this number of suicides by chance in some randomly selected subset of this size?'

Incidentally the rate appears to be below that of Cambridge University students:

RESULTS: We identified 157 student deaths during academic years 1970-1996, of which 36 appeared to be suicides. The overall suicide rate was 11.3/100,000 person years at risk. Suicide rates were similar to those seen amongst 15- to 24-year-olds in the general population. There were non-significant trends for male postgraduates to be over-represented and first-year undergraduates under-represented. Examination times were not associated with excess suicide. CONCLUSIONS: Suicide rates in University of Cambridge students do not appear to be unduly high.

Comment author: gwern 02 June 2010 05:52:25PM 1 point [-]

Yes, this is my counter-counter-criticism as well. 'Sure, the overall China rate may be the same, but what's the suicide rate for young, employed workers employed by a technical company with bright prospects? I'll bet it's lower than the overall rate...'

Comment author: SilasBarta 02 June 2010 05:57:50PM 2 points [-]

Agreed. Also, I think what got the suicides in China into the news was that the victims attributed their suicides specifically to some weird policy or rule the company adhered to. It could be that the "normal" suicides at the company are being ignored, and the ones being reported are the suicides on top of this, justifying the concern that this is abnormal.

Comment author: mattnewport 02 June 2010 06:11:56PM 0 points [-]

This was why I went looking for stats on suicides amongst university students. I remembered some talk when I was at Cambridge of a high suicide rate, which you might see as somewhat similarly counter-intuitive to a high suicide rate for 'young, employed workers employed by a technical company with bright prospects'.

Actually, there are a number of reasons to expect a somewhat elevated suicide rate in a relatively high pressure environment where large numbers of young people have left home for the first time and are living in close proximity to large numbers of strangers their own age. Stories about high suicide rates at elite universities tend to take a very different tack to stories about Chinese workers however.

Comment author: kodos96 02 June 2010 04:47:53AM *  11 points [-]

That's what I thought as well, until I read this post from "Fake Steve Jobs". Not the most reliable source, obviously, but he does seem to have a point:

But, see, arguments about national averages are a smokescreen. Sure, people kill themselves all the time. But the Foxconn people all work for the same company, in the same place, and they’re all doing it in the same way, and that way happens to be a gruesome, public way that makes a spectacle of their death. They’re not pill-takers or wrist-slitters or hangers. ... They’re jumpers. And jumpers, my friends, are a different breed. Ask any cop or shrink who deals with this stuff. Jumpers want to make a statement. Jumpers are trying to tell you something.

Now I'm not entirely sure of the details, but if it's true that all the suicides in the recent cluster consisted of jumping off the Foxconn factory roof, that does seem to be more significant than just 15 employees committing suicide in unrelated incidents. In fact, it seems like it might even be the case that there are a lot more suicides than the ones we've heard about, and the cluster of 15 are just those who've killed themselves via this particular, highly visible, method (I'm just speculating here).

I'm not sure what to make of this - without knowing more of the details its probably impossible to say what's going on. But the basic point seems sound: that the argument about being below national average suicide rates doesn't really hold up if there's something specific about a particular group of incidents that makes them non-independent. As an example, if the members of some cult commit suicide en masse, you can't look at the region the event happened in and say "well the overall suicide rate for the region is still below the national average, so there's nothing to see here"

Comment author: Douglas_Knight 02 June 2010 05:04:28AM 10 points [-]

Suicide and methods of suicide are contagious, FWIW.

Comment author: wedrifid 02 June 2010 05:33:48AM 3 points [-]

I was surprised when I read a statistical analysis on national death rates. Whenever there was a suicide by a particular method published in newspapers or on television, deaths of that form spiked in the following weeks. This is despite the copycat deaths often being called 'accidents' (examples included crashed cars and aeroplanes). Scary stuff (or very impressive statistics-fu).

Comment author: JoshuaZ 02 June 2010 05:44:34AM 1 point [-]

Yes, this is connected to the existence of suicide epidemics. The most famous example is the ongoing suicide epidemic over the last fifty years in Micronesia, where both the causes and methods of suicide have been the same (hanging). See for example this discussion.

Comment author: Eliezer_Yudkowsky 02 June 2010 07:40:17AM 9 points [-]

keyword = "werther effect"

Comment author: CannibalSmith 02 June 2010 01:13:19PM 7 points [-]
Comment author: Torben 02 June 2010 05:14:09AM 5 points [-]

If all the members of a cult committed suicide then the local rate is 100%.

The most local rate that we so far know of is 15/400,000 which is 4x below baseline. If these 15 people worked at, say, the same plant of 1,000 workers you may have a point. But we don't know.

At this point there is nothing to explain.

Comment author: kodos96 02 June 2010 06:23:18AM 3 points [-]

If all the members of a cult committed suicide then the local rate is 100%.

Fair enough - my example was poorly thought out in retrospect.

But I don't think it's correct that there's nothing to explain. If it's true that all 15 committed suicide by the same method - a fairly rare method frequently used by people who are trying to make a public statement with their death - then there seems to be something needing to be explained. As Fake Steve Jobs points out later in the cited article, if 15 employees of Walmart committed suicide within the span of a few months, all of them by way of jumping off the roof of their Walmart, wouldn't you think that was odd? Don't you think that would be more significant, and more deserving of an explanation, than the same 15 Walmart employees committing suicide in a variety of locations, by a variety of different methods?

I'm not committing to any particular explanation here (Douglas Knight's suggestion, for one, sounds like a plausible explanation which doesn't involve any wrongdoing on Foxconn's part), I'm just saying that I do think there's "something to explain".

Comment author: kodos96 02 June 2010 08:30:10PM 0 points [-]

Just curious: why the downvote? Was this just a case of downvote = disagree? If so, what do you disagree with specifically?

Comment author: SilasBarta 02 June 2010 08:58:08PM 1 point [-]

Strange. I thought it made a good point, so I just upvoted it.

Comment author: university_student 01 June 2010 11:13:48PM *  3 points [-]

(Wherein I seek advice on what may be a fairly important decision.)

Within the next week, I'll most likely be offered a summer job where the primary project will be porting a space weather modeling group's simulation code to the GPU platform. (This would enable them to start doing predictive modeling of solar storms, which are increasingly having a big economic impact via disruptions to power grids and communications systems.) If I don't take the job, the group's efforts to take advantage of GPU computing will likely be delayed by another year or two. This would be a valuable educational opportunity for me in terms of learning about scientific computing and gaining general programming/design skill; as I hope to start contributing to FAI research within 5-10 years, this has potentially big instrumental value.

In "Why We Need Friendly AI", Eliezer discussed Moore's Law as a source of existential risk:

Moore’s Law does make it easier to develop AI without understanding what you’re doing, but that’s not a good thing. Moore’s Law gradually lowers the difficulty of building AI, but it doesn’t make Friendly AI any easier. Friendly AI has nothing to do with hardware; it is a question of understanding. Once you have just enough computing power that someone can build AI if they know exactly what they’re doing, Moore’s Law is no longer your friend. Moore’s Law is slowly weakening the shield that prevents us from messing around with AI before we really understand intelligence. Eventually that barrier will go down, and if we haven’t mastered the art of Friendly AI by that time, we’re in very serious trouble. Moore’s Law is the countdown and it is ticking away. Moore’s Law is the enemy.

Due to the quality of the models used by the aforementioned research group and the prevailing level of interest in more accurate models of solar weather, successful completion of this summer project will probably result in a nontrivial increase in demand for GPUs. It seems that the next best use of my time this summer would be to work full time on the expression-simplification abilities of a computer algebra system.

Given all this information and the goal of reducing existential risk from unFriendly AI, should I take the job with the space weather research group, or not? (To avoid anchoring on other people's opinions, I'm hoping to get input from at least a couple of LW readers before mentioning the tentative conclusion I've reached.)

ETA: I finally got an e-mail response from the research group's point of contact and she said all their student slots have been taken up for this summer, so that basically takes care of the decision problem. But I might be faced with a similar choice next summer, so I'd still like to hear thoughts on this.

Comment author: NaN 01 June 2010 11:21:03PM 5 points [-]

Uninformed opinion: space weather modelling doesn't seem like a huge market, especially when you compare it to the truly massive gaming market. I doubt the increase in demand would be significant, and if what you're worried about is rate of growth, it seems like delaying it a couple of years would be wholly insignificant.

Comment author: Kaj_Sotala 02 June 2010 09:18:46PM 3 points [-]

I would say that there seem to be a lot of companies that are in one way or another trying to advance Moore's law. For as long as it doesn't seem like the one you're working on has a truly revolutionary advantage as compared to the other companies, just taking the money but donating a large portion of it to existential risk reduction is probably an okay move.

(Full disclosure: I'm an SIAI Visiting Fellow so they're paying my upkeep right now.)

Comment author: Yvain 01 June 2010 11:17:06PM *  43 points [-]

Cleaning out my computer I found some old LW-related stuff I made for graphic editing practice. Now that we have a store and all, maybe someone here will find it useful:

Comment author: Unnamed 02 June 2010 12:36:29AM 3 points [-]

We have a store? Where?

Comment author: arundelo 02 June 2010 12:45:28AM *  4 points [-]
Comment author: pjeby 02 June 2010 03:06:15AM 6 points [-]
Comment author: cousin_it 02 June 2010 09:42:34AM 3 points [-]

Yep, it was probably the first rationalist joke ever that made me laugh.

Comment author: Houshalter 02 June 2010 10:18:02PM 0 points [-]

"Aliens ate my baby!"

Lol, although, what does astrology have to do with anything Less Wrong-ish?

Comment author: cousin_it 02 June 2010 10:21:28PM 1 point [-]

That's a reference to Three Worlds Collide.

Comment author: steven0461 01 June 2010 11:25:25PM 7 points [-]

New papers from Nick Bostrom's site.

Comment author: timtyler 02 June 2010 01:08:44PM 1 point [-]

2nd one "ANTHROPIC SHADOW: OBSERVATION SELECTION EFFECTS AND HUMAN EXTINCTION RISKS" - is good reading.

Comment author: roland 02 June 2010 01:21:24AM *  8 points [-]

LW too focused on verbalizable rationality

This comment got me thinking about it. Of course LW, being a website, can only deal with verbalizable information (rationality). So what are we missing? Skillsets that are not verbalizable and have to be learned in other (practical) ways: interpersonal relationships being just one of many. I also think the emotional brain is part of it. There might be people here who are brilliant thinkers yet emotionally miserable because of their personal context or upbringing, and I think dealing with that would be important. I think a holistic approach is required. Eliezer had already suggested the idea of a rationality dojo. What do you think?

Comment author: RomanDavis 02 June 2010 02:04:27AM 4 points [-]

I'm a draftsman and it always struck me how absolutely terrible the English language is for talking about ludicrously simple visual concepts precisely. Words like parallel and perpendicular should be one syllable long.

I wonder if there's a way to apply rationality/mathematical thinking beyond geometry, to the world of art.

Comment author: realitygrill 02 June 2010 02:53:30AM 0 points [-]

I think it would be great to systematically explore and develop useful skillsets, perhaps in a modular fashion. We do have sequences. I would join a rationality dojo immediately.

What do you mean practical ways? I understand the difficulty of transferring kinesthetic or social understanding, but how can we overcome that in nonverbalized fashion?

Comment author: roland 02 June 2010 03:01:09AM 1 point [-]

What do you mean practical ways? I understand the difficulty of transferring kinesthetic or social understanding, but how can we overcome that in nonverbalized fashion?

Some things have to be shown, you have to sometimes take part in an activity to "get" it, learn by trial and error, get feedback pointing out mistakes that you are unaware of, etc...

Comment author: CannibalSmith 02 June 2010 01:10:16PM 1 point [-]

Some things

For example?

Comment author: RomanDavis 02 June 2010 05:04:23PM 2 points [-]

Do you think you could describe this image to an arbitrarily talented artist and end up with an image that even looked like it was based on it?

http://smithandgosling.files.wordpress.com/2009/05/the-reader.jpg

It's not so much, "Such insolence, our ideas are so awesome they can not be broken down by mere reductionism" as "Wow, words are really bad at describing things that are very different from what most of the people speaking the language do."

I think you could make an elaborate set of equations on a cartesian graph and come up with a drawing that looked like it and say fill up RGB values #zzzzzz at coordinates x,y or whatever, but that seems like a copout since that doesn't tell you anything about how Fragonard did it.

Comment author: Will_Newsome 02 June 2010 07:35:57AM 5 points [-]

I've been talking to various people about the idea of a Rationality Foundation (working title) which might end up sponsoring or facilitating something like rationality dojos. Needless to say this idea is in its infancy.

Comment author: Morendil 02 June 2010 02:29:33PM 2 points [-]

The example of coding dojos for programmers might be relevant, and not just for the coincidence in metaphors.

Comment author: Kaj_Sotala 02 June 2010 07:18:45AM 10 points [-]

My theory of happiness.

In my experience, happy people tend to be more optimistic and more willing to take risks than sad people. This makes sense, because we tend to be more happy when things are generally going well for us: that is when we can afford to take risks. I speculate that the emotion of happiness has evolved for this very purpose, as a mechanism that regulates our risk aversion and makes us more willing to risk things when we have the resources to spare.

Incidentally, this would also explain why people falling in love tend to be intensely happy at first. In order to get and keep a mate, you need to be ready to take risks. Also, if happiness is correlated with resources, then being happy signals having lots of resources, increasing your prospective mate's chances of accepting you. [...]

I was previously talking with Will about the degree to which people's happiness might affect their tendency to lean towards negative or positive utilitarianism. We came to the conclusion that people who are naturally happy might favor positive utilitarianism, while naturally unhappy people might favor negative utilitarianism. If this theory of happiness is true, then that makes perfect sense: risk aversion and a desire to avoid pain corresponds to negative utilitarianism, and willingness to tolerate pain corresponds to positive utilitarianism.

Note that most Western humans have a far greater access to resources than our ancestors did, so we are likely all far more risk-averse than would be optimal given the environment.

Comment author: Will_Newsome 02 June 2010 07:21:44AM *  0 points [-]

And a very condensed note I wrote to myself (in brainstormish mode, without regard for feasibility or testability):

Emotions are filters on the brain, brain subsystems activated for different reasons in response to different cognitive stimuli. This would explain why those who are happy have a hard time remembering things that are saddening or vice versa (possibly causing cascades). It seems that flow is the opposite of suffering, as both are responses to difficult problems such as the ones the brain evolved to solve. Pain asymbolia may be the opposite of something like bipolar disorder or multiple personality disorder, and the difference may be strength of emotion or the cognitive subsystems similar to emotion. It is odd that people who suffer are more often negative utilitarians: this is probably because the suffering filter is affecting what sorts of memories of experience they have access to, and biasing their thoughts in that direction.

Comment author: Alexandros 02 June 2010 08:46:35AM *  7 points [-]

Hi Kaj, I really liked the article. I had a relevant theory to explain the perceived difference of attitudes of north Europeans versus south Europeans. I guess you could call it a theory of unhappiness. Here goes:

I take it as granted that mildly depressed people tend to form more accurate depictions of reality, that north Europeans have a higher incidence of depression, and that they also have much better functioning economies and democracies. Given a low-resource environment, one needs to plan further ahead and make more rational projections of the future. If being on the depressive side makes one more introspective and thoughtful, then it would be conducive to having better long-term plans. In a sense, happiness could be greed-inducing, in a greedy-algorithm sense. This more or less agrees with Kaj's theory. OTOH, not-happiness would encourage long-term planning and even more co-operative behaviour.

In the current environment, resources may not be scarce, but our world has become much more complex, with actions having much deeper consequences than in the ancestral environment (Nassim Nicholas Taleb makes this point in The Black Swan), therefore also needing better thought-out courses of action. So northern Europeans have lucked out, in that their adaptation to climate has been useful for the current reality. If one sees corruption as locally-greedy behaviour, as opposed to lawfulness as globally-cooperative behaviour, this would also explain why, going closer to the equator, you generally see an increase in corruption and failures of democratic government. Taken further, it would imply that near-equator peoples are simply not well-adapted to democratic rule, which demands a certain limiting of short-term individual freedom for the longer-term common good, and that a more distributed/localised form of governance would do much better. I think this (rambling) theory can more or less be pieced together with Kaj's, adding long-term planning as a second dimension.

Disclaimer: Before anyone accuses me of discrimination, I am in fact a south European (Greek), living in north Europe (the UK), and while this does not absolve me of all possibility of racism against my own, this theory has formed from my effort to explain the cultural differences I experience on a daily basis. Take it for what it's worth.

Comment author: Houshalter 02 June 2010 01:40:50PM 5 points [-]

How does this make sense exactly? A happy person, with more resources, would be better off not taking risks that could result in him losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results. If you told a rich person, jump off that cliff and I'll give you a million dollars, they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it as long as there was a chance they could survive.

My idea of why people were happy wasn't a static value of how many resources they had, but a comparative value. A rich person thrown into poverty would be very unhappy, but the poor person might be happy.

Comment author: pjeby 02 June 2010 04:19:25PM 6 points [-]

How does this make sense exactly? A happy person, with more resources, would be better off not taking risks that could result in him losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results.

Kaj's hypothesis is a bit off: what he's actually talking about is the explore/exploit tradeoff. An animal in a bad (but not-yet catastrophic) situation is better off exploiting available resources than scouting new ones, since in the EEA, any "bad" situation is likely to be temporary (winter, immediate presence of a predator, etc.) and it's better to ride out the situation.

OTOH, when resources are widely available, exploring is more likely to be fruitful and worthwhile.
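A minimal sketch of the explore/exploit tradeoff: an epsilon-greedy agent on a toy two-armed bandit. All the numbers here are made up for illustration; the point is just that an agent that never explores can stay stuck on a bad option forever.

```python
import random

# Toy two-armed bandit with deterministic payoffs (hypothetical numbers):
# arm 0 pays 0.3 per pull, arm 1 pays 0.7 per pull.
PAYOFF = [0.3, 0.7]

def run(epsilon, steps=1000, seed=0):
    """Average reward of an epsilon-greedy agent that starts out
    (wrongly) believing arm 0 is the better one."""
    rng = random.Random(seed)
    estimate = [0.5, 0.0]  # misleading prior: arm 0 looks better
    counts = [0, 0]
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)  # explore: try a random arm
        else:
            arm = 0 if estimate[0] >= estimate[1] else 1  # exploit
        reward = PAYOFF[arm]
        counts[arm] += 1
        # incremental mean update of this arm's estimated payoff
        estimate[arm] += (reward - estimate[arm]) / counts[arm]
        total += reward
    return total / steps

pure_exploit = run(epsilon=0.0)  # never explores, stays stuck on arm 0
explorer = run(epsilon=0.1)      # occasionally samples arm 1, then switches
```

The pure exploiter earns 0.3 forever; the explorer pays a small ongoing cost for exploration but discovers the better arm almost immediately.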

The connection to happiness and risk-taking is more tenuous.

If you told a rich person, jump off that cliff and I'll give you a million dollars, they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it as long as there was a chance they could survive.

I'd be interested in seeing the results of that experiment. But "rich" and "poor" are even more loosely correlated with the variables in question - there are unhappy "rich" people and unhappy "poor" people, after all.

(In other words, this is all about internal, intuitive perceptions of resource availability, not rational assessments of actual resource availability.)

Comment author: RobinZ 02 June 2010 04:41:01PM 2 points [-]

If I were to wager a guess, the people who would accept the deal are those who feel they are in a catastrophic situation.

Speaking of catastrophic situations, have you seen The Wages of Fear or any of the remakes? I've only seen Sorcerer, but it was quite good. It's a rather more realistic situation than jumping off a cliff, but the structure is the same: a group of desperate people driving cases of nitroglycerin-sweating dynamite across rough terrain to get enough money that they can escape.

Comment author: Houshalter 02 June 2010 10:11:20PM 0 points [-]

It's a rather more realistic situation than jumping off a cliff

Or maybe not...

Driving in teams of two, they meet various hazards on their journey, including a dilapidated rope-suspension bridge swinging violently in a huge storm over a flood-swollen river, a massive tree blocking the road, and a number of desperate, dangerous bandits.

Comment author: RobinZ 02 June 2010 10:33:47PM 1 point [-]

I'd buy "main road incorporating rope suspension bridges" over "millionaire hiring people to throw themselves off cliffs", but I see what you mean.

Comment author: Kaj_Sotala 02 June 2010 09:11:40PM 0 points [-]

Kaj's hypothesis is a bit off: what he's actually talking about is the explore/exploit tradeoff.

I believe you're right, now that I think about that.

Comment author: Kaj_Sotala 02 June 2010 09:11:12PM 1 point [-]

I was kind of thinking expected value. In principle, if you always go by expected value, in the long run you will end up maximizing your value. But this may not be the best move to make if you're low on resources, because with bad luck you'll run out of them and die even though you made the moves with the highest expected value.
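That intuition can be made concrete with a toy calculation (hypothetical numbers; this is essentially the Kelly point from upthread): a bet can have exploding expected value while almost guaranteeing ruin for anyone who stakes everything on it.

```python
import math

# Even-money bet you win with probability p = 0.6: positive expected value.
p = 0.6

# Strategy A: bet your whole bankroll every round.
# Expected bankroll multiplier per round is p * 2 = 1.2, so expected value
# explodes -- but you survive n rounds only with probability p**n.
n = 20
expected_value = 1.2 ** n  # ~38x your starting bankroll, on average
survival_prob = p ** n     # ~0.004% chance you haven't gone broke

# Strategy B: bet the Kelly fraction f* = 2p - 1 of your bankroll.
f = 2 * p - 1
# Expected log-growth per round is positive, and the bankroll never hits zero.
log_growth = p * math.log(1 + f) + (1 - p) * math.log(1 - f)
```

Strategy A maximizes expected value per round yet goes broke almost surely; Strategy B maximizes the long-run growth rate instead.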

However, your objection does make sense and Eby's reformulation of my theory is probably the superior one, now that I think about it.

Comment author: Will_Newsome 02 June 2010 07:28:20AM 10 points [-]

So I've started drafting the very beginnings of a business plan for a Less Wrong (book) store-ish type thingy. If anybody else is already working on something like this and is advanced enough that I should not spend my time on this mini-project, please reply to this comment or PM me. However, I would rather not be inundated with ideas as to how to operate such a store yet: I may make a Less Wrong post in the future to gather ideas. Thanks!

Comment author: Alexandros 02 June 2010 08:50:49AM *  13 points [-]

Observation: The May open thread, part 2, had very few posts in its last days, whereas this one has exploded within the first 24 hours of opening. I know I deliberately withheld content from it, since once a thread is superseded by a new one, few go back and look at the posts in the previous one. This predicts a slowing down of content in the open threads as the month draws to a close, and a sudden burst at the start of the next month: a distortion that is an artifact of the way we organise discussion. Does anybody else follow the same rule for their open thread postings? Is there something that should be done to solve this artificial throttling of discussion?

Comment author: billswift 02 June 2010 04:16:12PM *  9 points [-]

Some sites have gone to an every Friday open thread; maybe we should do it weekly instead of monthly, too.

Comment author: Blueberry 02 June 2010 08:12:30PM 1 point [-]

I would support that.

Comment author: RobinZ 02 June 2010 08:50:17PM 0 points [-]

From observations even of previous "Part 2"s, it would seem that there is enough content to support that frequency of open thread.

Comment author: Kaj_Sotala 02 June 2010 09:07:41PM 3 points [-]

I don't post in the open threads much, but if I run into a good rationality quote I tend to wait until the next rationality quotes thread is opened unless the current one is less than a week or so old.

Comment author: Alexandros 02 June 2010 08:53:14AM *  10 points [-]

To the powers that be: Is there a way for the community to have some insight into the analytics of LW? That could range from periodic reports, to selective access, to open access. There may be a good reason why not, but I can't think of it. Beyond generic transparency brownie points, since we are a community interested in popularising the website, access to analytics may produce good, unforeseen insights. Also, authors would be able to see viewership of their articles and related keyword searches, and so be better able to adapt their writing to the audience. For me, a downside of posting here instead of my own blog is the inability to access analytics. Obviously I still post here, but this is a downside that may not have to exist.

Comment author: CronoDAS 02 June 2010 03:45:44PM 2 points [-]
Comment author: Antisuji 02 June 2010 04:58:38PM 0 points [-]

An engaging video, thanks. The study sounded familiar, so I looked for it... turns out I'd seen the guy's TED talk a while back: http://www.ted.com/talks/dan_pink_on_motivation.html

Comment author: Seth_Goldin 02 June 2010 05:52:19PM *  2 points [-]

In A Technical Explanation of Technical Explanation, Eliezer writes,

You should only assign a calibrated confidence of 98% if you're confident enough that you think you could answer a hundred similar questions, of equal difficulty, one after the other, each independent from the others, and be wrong, on average, about twice. We'll keep track of how often you're right, over time, and if it turns out that when you say "90% sure" you're right about 7 times out of 10, then we'll say you're poorly calibrated.

...

What we mean by "probability" is that if you utter the words "two percent probability" on fifty independent occasions, it better not happen more than once

...

If you say "98% probable" a thousand times, and you are surprised only five times, we still ding you for poor calibration. You're allocating too much probability mass to the possibility that you're wrong. You should say "99.5% probable" to maximize your score. The scoring rule rewards accurate calibration, encouraging neither humility nor arrogance.

So I have a question. Is this not an endorsement of frequentism? I don't think I understand fully, but isn't counting the instances of the event exactly frequentist methodology? How could this be Bayesian?
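A sketch of the scoring claim itself, assuming the logarithmic scoring rule the essay describes (under which the score-maximizing announcement equals the true frequency, which is what makes the rule "proper"):

```python
import math

def expected_log_score(stated_p, true_rate):
    # Average log score per question when you announce probability stated_p
    # for events that actually come true with frequency true_rate.
    return true_rate * math.log(stated_p) + (1 - true_rate) * math.log(1 - stated_p)

# Right 995 times out of 1000: saying "99.5%" beats saying "98%",
# exactly as the quoted passage claims.
score_98 = expected_log_score(0.98, 0.995)
score_995 = expected_log_score(0.995, 0.995)
```

Note that the frequencies here are only used to score announcements after the fact; nothing about the rule requires interpreting the probability itself as a long-run frequency.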

Comment author: Morendil 02 June 2010 06:21:25PM *  4 points [-]

As I understand it, frequentism requires large numbers of events for its interpretation of probability, whereas the Bayesian interpretation allows the convergence of relative frequencies with probabilities but claims that probability is a meaningful concept even when applied to unique events, as a "degree of plausibility".
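A toy illustration of "degree of plausibility" applied to a single event: a conjugate beta-binomial update (the uniform prior and the data are assumptions chosen for the example).

```python
# Beta(1, 1) prior over a coin's bias, updated on 7 heads out of 10 flips.
# The posterior assigns a probability to the single next flip --
# no long-run ensemble of trials required.
prior_a, prior_b = 1, 1  # uniform prior (an assumption)
heads, tails = 7, 3
post_a, post_b = prior_a + heads, prior_b + tails
p_next_head = post_a / (post_a + post_b)  # posterior mean = 8/12
```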

Comment author: Seth_Goldin 02 June 2010 05:57:48PM 5 points [-]
Comment author: Daniel_Burfoot 02 June 2010 09:01:01PM 2 points [-]

I would like to see a top-level link post and discussion of this article (and maybe other related papers).

Comment author: CronoDAS 02 June 2010 06:37:27PM 0 points [-]

Anyone here live in California? Specifically, San Diego county?

The judicial election on June 8th has been subject to a campaign by a Christian conservative group. You probably don't want them to win, and this election is traditionally a low turnout one, so you might want to put a higher priority on this judicial election than you normally would. In other words, get out there and vote!

Comment author: Eneasz 02 June 2010 09:07:05PM 4 points [-]

Are there any rationalist psychologists?

Also, more specifically but less generally relevant to LW; as a person being pressured to make use of psychological services, are there any rationalist psychologists in the Denver, CO area?

Comment author: SilasBarta 02 June 2010 10:11:50PM *  4 points [-]

Thought I might pass this along and file it under "failure of rationality". Sadly, this kind of thing is increasingly common -- getting deep in education debt, but not having increased earning power to service the debt, even with a degree from a respected university.

Summary: Cortney Munna, 26, went $100K into debt to get worthless degrees and is deferring payment even longer, making interest pile up further. She works in an unrelated area (photography) for $22/hour, and it doesn't sound like she has a lot of job security.

We don't find out until the end of the article that her degrees are in women's studies and religious studies.

There are much better ways to spend $100K. Twentysomethings like her are filling up the workforce. I'm worried about the future implications.

I thank my lucky stars I'm not in such a position (in the respects listed in the article -- Munna's probably better off in other respects). I didn't handle college planning as well as I could have, and I regret it to this day. But at least I didn't go deep into debt for a worthless degree.

Comment author: mkehrt 02 June 2010 10:19:15PM 7 points [-]

Forgive me if this is beating a dead horse, or if someone brought up an equivalent problem before; I didn't see such a thing.

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems to be equivalent to DSvsT, is easily understandable via my moral intuition, and gives the "wrong" (i.e., not purely utilitarian) answer.

Suppose I have ten people and a stick. The appropriate infinitely powerful theoretical being offers me a choice. I can hit all ten of them with a stick, or I can hit one of them nine times. "Hitting with a stick" has some constant negative utility for all the people. What do I do?

This seems to me to be exactly dust specks vs. torture scaled down to humanly intuitable scales. I think the obvious answer is to hit all the people once. Examining my intuition tells me that this is because I think the aggregation function for utility is different across different people than across one person's possible futures. Specifically, my intuition tells me to maximize, across people, the minimum expected utility across an individual's future.
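The disagreement between the two aggregation functions can be written out directly for this example (one hit = one unit of negative utility, as stipulated):

```python
# Option A: hit all ten people once; Option B: hit one person nine times.
option_a = [-1] * 10
option_b = [-9] + [0] * 9

# The two aggregation rules disagree about which option is better:
utilitarian_a, utilitarian_b = sum(option_a), sum(option_b)  # B wins: -9 > -10
maximin_a, maximin_b = min(option_a), min(option_b)          # A wins: -1 > -9
```

Simple summation prefers concentrating the harm (fewer total hits), while maximizing the minimum prefers spreading it.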

So, is there a name for this position?

Do people think my example is equivalent to DSvsT?

Do people get the same or different answer with this question as they do with DSvsT?

Comment author: Blueberry 02 June 2010 10:24:58PM 2 points [-]

There's one difference, which is that the inequality of the distribution is much more apparent in your example, because one of the options distributes the pain perfectly evenly. If you value equality of distribution as worth more than one unit of pain, it makes sense to choose the equal distribution of pain. This is similar to economic discussions about policies that lead to greater wealth, but greater economic inequality.

Comment author: Blueberry 02 June 2010 10:28:32PM 3 points [-]

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.)

Oh, and I'd love to hear what you mean about this.

Comment author: RomanDavis 02 June 2010 10:30:00PM *  1 point [-]

I think the point of Dust Specks vs. Torture was scope failure. Even allowing for some sort of "negative marginal utility", once you hit a wacky number like 3^^^3, it doesn't matter: .000001 negative utility points multiplied by 3^^^3 is worse than anything, because 3^^^3 is wacky huge.
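For scale: 3^^^3 is Knuth's up-arrow notation. A sketch of the recursion, which only terminates for toy inputs; 3^^^3 itself is arrow(3, 3, 3) and is far too large to ever compute:

```python
def arrow(a, n, b):
    """Knuth up-arrow: arrow(a, 1, b) = a**b, and each additional
    arrow iterates the previous operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))
```

Even two arrows grow absurdly fast: 3^^2 = 3^3 = 27, but 3^^3 = 3^27 is already over seven trillion, and 3^^^3 = 3^^(3^^3) towers far beyond that.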

For the stick example, I'd say it would have to depend on a lot of factors about human psychology and such, but I think I'd hit the one. Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people.

I think your opinion basically is an appeal to egalitarianism, since you expect negative utility to yourself from an unfair world where one person gets something that ten other people did not, for no good or fair reason.

Comment author: Blueberry 02 June 2010 10:46:43PM 1 point [-]

Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people.

Part of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.

Comment author: Khoth 02 June 2010 10:36:03PM 5 points [-]

I don't think maximising the minima is what you want. Suppose your choice is to hit one person 20 times, or five people 19 times each. Unless your intuition is different from mine, you'll prefer the first option.