Open Thread: June 2010
To whom it may concern:
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)
Should we buy insurance at all?
There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making saying that all insurance has negative expected utility: we pay too high a price for too small a risk, since otherwise insurance companies would go bankrupt. If this is the case, should we get rid of all our insurance? If not, why not?
Risk is more expensive when you have a smaller bankroll. Many slot machines actually offer positive expected value payouts - they make their return on people plowing their winnings back in until they go broke.
Ahh, Kelly criterion, correct?
...
*looks up Kelly criterion*
That's definitely a related result. (So related, in fact, that thinking about the +EV slots the other day got me wondering what the optimal fraction of your wealth was to bid on an arbitrary bet - which, of course, is just the Kelly criterion.)
Citation please? A cursory search suggests that machines go through +EV phases, just like blackjack, but that individual machines are -EV. It's not just that they expect people to plow the money back in, but that pros have to wait for fish to plow money in to get to the +EV situation.
The difference with blackjack is that you can (in theory) adjust your bet to take advantage of the different phases of blackjack. Your first sentence seems to match Roland's comment about the Kelly criterion (you lose betting against snake eyes if you bet your whole bankroll every time), but that doesn't make sense with fixed-bet slots. There, if it made sense to make the first bet, it makes sense to continue betting after a jackpot.
On the scale from "saw it in The Da Vinci Code" to "saw it in Nature", I'd have to say all I have is an anecdote from a respectable blogger:
I'll give you that "many" is almost certainly flat wrong, on reflection, but such machines are (were?) probably out there.
That movie was full of falsehoods. For example, people named Silas are actually no more or less likely than the general population to be tall homicidal albino monks -- but you wouldn't guess that from seeing the movie, now, would you?
That's why it represents the bottom end of my "source-reliability" scale.
The only relevant part of the quote seems to be:
I'm pretty sure it's not that unlikely to come up ahead 'three or four' times when playing slot machines (if it weren't so late I'd actually do the sums). It seems much more plausible that the blog author was just lucky than that the machines were actually set to regularly pay out positive amounts.
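For what it's worth, the sums are easy to simulate. Here's a quick Monte Carlo sketch; the machine parameters (spin cost, win probability, payout) are invented, chosen to give roughly an 88% return, i.e. a clearly -EV machine:

```python
import random

def session_profit(spins=200, win_prob=0.08, payout=11, seed=None):
    """Profit from one session: each spin costs 1 unit, and a win
    pays back `payout` units. All parameters are hypothetical."""
    rng = random.Random(seed)
    profit = 0
    for _ in range(spins):
        profit -= 1                  # pay for the spin
        if rng.random() < win_prob:
            profit += payout         # hit a payout
    return profit

def fraction_of_sessions_ahead(trials=10_000, **kwargs):
    """Fraction of independent sessions that end with positive profit."""
    return sum(session_profit(seed=i, **kwargs) > 0
               for i in range(trials)) / trials
```

With these numbers each spin loses 0.12 units in expectation, yet the variance is large enough that a substantial minority of sessions still ends ahead, so a blogger coming out ahead "three or four" times is unremarkable evidence.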
This comes up frequently in gambling and statistics circles. "Citation please" is the correct response - casinos do NOT expect to make a profit by offering losing (for them) bets and letting "gambler's ruin" pay them off. It just doesn't work that way.
The fact that a +moneyEV bet can be -utilityEV for a gambler does NOT imply that a -moneyEV bet can be +utilityEV for the casino. It's -utility for both participants.
The only reason casinos ever offer such bets is for promotional purposes, and they hope to make the money back on other wagers the gambler will make while there.
The Kelly calculations work just fine for all these bets - for cyclic bets, it ends up you should bet 0 when -EV. When +EV, bet some fraction of your bankroll that maximizes mean-log-outcome for each wager.
Some casinos advertise that they have slots with "up to" a 101% rate of return. Good luck finding the one machine in the casino that actually has a positive EV, though!
Obviously if you know your utility function and the true distribution of possible risks, it's easy to decide whether to take a particular insurance deal.
The standard advice is that if you can afford to self-insure, you should, for the reason you cite (that insurance companies make a profit, on average).
That's a heuristic that holds up fine except when you know (for reasons you will keep secret from insurers) that your own risk is higher than they could expect; then, depending on how competitive insurers are, you might find a good deal even if you're not very risk-averse - possibly even one with a positive expected (discounted) return, which you should buy even with zero risk aversion. Apparently in California, auto insurers are required to publish the algorithm by which they assign premiums (and are possibly prohibited from using certain types of information).
Conversely, you may choose to have no insurance (or extremely high deductible) in cases where you believe your personal risk is far below what the insurer appears to believe, even when you're actually averse to that risk.
Of course, it's not sufficient to know how wrong the insurer's estimate of your risk is; they insist on a pretty wide vig - not just to survive both uncertainties in their estimation of risk and the market returns on the float, but also to compensate for the observed amount of successful adverse selection that results from people applying the above heuristic.
I suppose it may also be possible that the insurer won't pay. I don't know exactly what guarantees we have in the U.S.
Actually, I think that for voluntary insurance, the observed adverse selection is negative, but I can't find the cite. People simply don't do cost-benefit calculations. People who buy insurance are those who are terribly risk-averse or see it as part of their role. Such people tend to be more careful than the general population. In a competitive market, the price of insurance would be bid down to reflect this, but it isn't.
No -- Insurance has negative expected monetary return, which is not the same as expected utility. If your utility function obeys the law of diminishing marginal utility, then it also obeys the law of increasing marginal disutility. So, for example, losing 10x will be more than ten times as bad as losing x. (Just as gaining 10x is less than ten times as good as gaining x.)
Therefore, on your utility curve, a guaranteed loss of x can be better than a 1/1000 chance of losing 1000x.
ETA: If it helps, look at a logarithmic curve and treat it as your utility as a function of some quantity. Such a curve obeys diminishing marginal utility. At any given point, your utility increases less than proportionally going up, but more than proportionally going down.
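To make that concrete, here is a small sketch with log utility and invented numbers (the wealth, loss, and premium figures are all hypothetical): the premium is more than the expected monetary loss, yet buying the insurance still has higher expected utility.

```python
import math

wealth = 100_000    # hypothetical starting wealth
loss = 90_000       # size of the rare catastrophe
p_loss = 1 / 1000   # chance of the catastrophe
premium = 120       # insurer charges more than the 90-unit expected loss

u = math.log        # log utility: diminishing marginal utility

eu_insured = u(wealth - premium)
eu_uninsured = (1 - p_loss) * u(wealth) + p_loss * u(wealth - loss)

# Monetary EV favors skipping insurance (pay 120 for certain vs. an
# expected loss of 90), but expected *utility* favors buying it,
# because losing 90% of your wealth is catastrophically bad on a
# log curve.
```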
(Incidentally, I actually wrote an embarrassing article arguing in favor of the thesis Roland presents, and you can probably still find it on the internet. That exchange is also an example of someone being bad at explaining: if my opponent had simply stated the equivalence between DMU and IMD, I would have understood why that argument about insurance is wrong. Instead, he just resorted to lots of examples of when people buy insurance, which are totally unconvincing if you accept the quoted argument.)
I voted this up, but I want to comment to point out that this is a really important point. Don't be tricked into not getting insurance just because it has a negative expected monetary value.
I voted Silas up as well because it's an important point but it shouldn't be taken as a general reason to buy as much insurance as possible (I doubt Silas intended it that way either). Jonathan_Graehl's point that you should self-insure if you can afford to and only take insurance for risks you cannot afford to self-insure is probably the right balance.
Personally I don't directly pay for any insurance. I live in Canada (universal health coverage) and have extended health insurance through work (much to my dismay I cannot decline it in favor of cash) which means I have far more health insurance than I would purchase with my own money. Given my aversion to paperwork I don't even fully use what I have. I do not own a house or a car which are the other two areas arguably worth insuring. I don't have dependents so have no need for life or disability coverage. All other forms of insurance fall into the 'self-insure' category for me given my relatively low risk aversion.
This might be old news to everyone "in", or just plain obvious, but a couple days ago I got Vladimir Nesov to admit he doesn't actually know what he would do if faced with his Counterfactual Mugging scenario in real life. The reason: if today (before having seen any supernatural creatures) we intend to reward Omegas, we will lose for certain in the No-mega scenario, and vice versa. But we don't know whether Omegas outnumber No-megas in our universe, so the question "do you intend to reward Omega if/when it appears" is a bead jar guess.
Whatever our prior for encountering No-mega, it should be counterbalanced by our prior for encountering Yes-mega (who rewards you if you are counterfactually-muggable).
You haven't considered the full extent of the damage. What is your prior over all crazy mind-reading agents that can reward or punish you for arbitrary counterfactual scenarios? How can you be so sure that it will balance in favor of Omega in the end?
In fact, I can consider all crazy mind-reading reward/punishment agents at once: For every such hypothetical agent, there is its hypothetical dual, with the opposite behavior with respect to my status as being counterfactually-muggable (the one rewarding what the other punishes, and vice versa). Every such agent is the dual of its own dual; in the universal prior, being approached by an agent is about as likely as being approached by its dual; and I don't think I have any evidence that one agent will be more likely to appear than its dual. Thus, my total expected payoff from these agents is 0.
Omega itself does not belong to this class of agent; it has no dual. (ETA: It has a dual, but the dual is a deceptive Omega, which is much less probable than Omega. See below.) So Omega is the only one I should worry about.
I should add that I feel a little uneasy because I can't prove that these infinitesimal priors don't dominate everything when the symmetry is broken, especially when the stakes are high.
Why? Can't your definition of dual be applied to Omega? I admit I don't completely understand the argument.
Okay, I'll be more explicit: I am considering the class of agents who behave one way if they predict you're muggable and behave another way if they predict you're unmuggable. The dual of an agent behaves exactly the same as the original agent, except the behaviors are reversed. In symbols:
What about Omega?
What would Omega* be?
So the dual of Omega is something that looks like Omega but is in fact deceptive. By hypothesis, Omega is trustworthy, so my prior probability of encountering Omega* is negligible compared to meeting Omega.
(So yeah, there is a dual of Omega, but it's much less probable than Omega.)
Then, when I calculate expected utility, each agent A is balanced by its dual A*, but Omega is not balanced by Omega*.
If we assume you can tell "deceptive" agents from "non-deceptive" ones and shift probability weight accordingly, then not every agent is balanced by its dual, because some "deceptive" agents probably have "non-deceptive" duals and vice versa. No?
(Apologies if I'm misunderstanding - this stuff is slowly getting too complex for me to grasp.)
The reason we shift probability weight away from the deceptive Omega* is that, in the original problem, we are told that we believe Omega to be non-deceptive. The reasoning goes like this: If it looks like Omega and talks like Omega, then it might be Omega or Omega*. But if it were Omega*, then it would be deceiving us, so it's most probably Omega.
In the original problem, we have no reason to believe that No-mega and friends are non-deceptive.
(But if we did, then yes, the dual of a non-deceptive agent would be deceptive, and so have lower prior probability. This would be a different problem, but it would still have a symmetry: We would have to define a different notion of dual, where the dual of an agent has the reversed behavior and also reverses its claims about its own behavior.
What would Omega* be in that case? It would not claim to be Omega. It would truthfully tell you that if it predicted you would not give it $5 on tails, then it would flip a coin and give you $100 on heads; and otherwise it would not give you anything. This has no bearing on your decision in the Omega problem.)
By your definitions, Omega* would condition its decision on you being counterfactually muggable by the original Omega, not on you giving money to Omega* itself. Or am I losing the plot again? This notion of "duality" seems to be getting more and more complex.
"Duality" has become more complex because we're now talking about a more complex problem — a version of Counterfactual Mugging where you believe that all superintelligent agents are trustworthy. The old version of duality suffices for the ordinary Counterfactual Mugging problem.
My thesis is that there's always a symmetry in the space of black swans like No-mega.
In the case currently under consideration, I'm assuming Omega's spiel goes something like "I just flipped a coin. If it had been heads, I would have predicted what you would do if I had approached you and given my spiel...." Notice the use of first-person pronouns. Omega* would have almost the same spiel verbatim, also using first-person pronouns, and make no reference to Omega. And, being non-deceptive, it would behave the way it says it does. So it wouldn't condition on your being muggable by Omega.
You could object to this by claiming that Omega actually says "I am Omega. If Omega had come up to you and said....", in which case I can come up with a third notion of duality.
Surely the last thing on anyone's mind, having been persuaded they're in the presence of Omega in real life, is whether or not to give $100 :)
I like the No-mega idea (it's similar to a refutation of Pascal's wager by invoking contrary gods), but I wouldn't raise my expectation for the number of No-mega encounters I'll have by very much upon encountering a solitary Omega.
Generalizing No-mega to include all sorts of variants that reward stupid or perverse behavior (are there more possible God-likes that reward things strange and alien to us?), I'm not in the least bit concerned.
I suppose it's just a good argument not to make plans for your life on the basis of imagined God-like beings. There should be as many gods who, when pleased with your action, intervene in your life in a way you would not consider pleasant, and are pleased at things you'd consider arbitrary, as those who have similar values they'd like us to express, and/or actually reward us copacetically.
You don't have to. Both Omega and No-mega decide based on what your intentions were before seeing any supernatural creatures. If right now you say "I would give money to Omega if I met one" - factoring in all belief adjustments you would make upon seeing it - then you should say the reverse about No-mega, and vice versa.
ETA: Listen, I just had a funny idea. Now that we have this nifty weapon of "exploding counterfactuals", why not apply it to Newcomb's Problem too? It's an improbable enough scenario that we can make up a similarly improbable No-mega that would reward you for counterfactual two-boxing. Damn, this technique is too powerful!
By not believing No-mega is probable just because I saw an Omega, I mean that I plan on considering such situations as they arise on the basis that only the types of godlike beings I've seen to date (so far, none) exist. I'm inclined to say that I'll decide in the way that makes me happiest, provided I believe that the godlike being is honest and really can know my precommitment.
I realize this leaves me vulnerable to the first godlike huckster offering me a decent exclusive deal; I guess this implies that I think I'm much more likely to encounter 1 godlike being than many.
The caveat is of course that Counterfactual Mugging or Newcomb Problem are not to be analyzed as situations you encounter in real life: the artificial elements that get introduced are specified explicitly, not by an update from surprising observation. For example, the condition that Omega is trustworthy can't be credibly expected to be observed.
The thought experiments explicitly describe the environment you play your part in, and your knowledge about it, the state of things that is much harder to achieve through a sequence of real-life observations, by updating your current knowledge.
I dunno, Newcomb's Problem is often presented as a situation you'd encounter in real life. You're supposed to believe Omega because it played the same game with many other people and didn't make mistakes.
In any case I want a decision theory that works on real life scenarios. For example, CDT doesn't get confused by such explosions of counterfactuals, it works perfectly fine "locally".
ETA: My argument shows that modifying yourself to never "regret your rationality" (as Eliezer puts it) is impossible, and modifying yourself to "regret your rationality" less rather than more requires elicitation of your prior with humanly impossible accuracy (as you put it). I think this is a big deal, and now we need way more convincing problems that would motivate research into new decision theories.
If you do present observations that move the beliefs to represent the thought experiment, it'll work just as well as the magically contrived thought experiment. But the absence of relevant No-megas is part of the setting, so it too should be a conclusion one draws from those observations.
Yes, but you must make the precommitment to love Omegas and hate No-megas (or vice versa) before you receive those observations, because that precommitment of yours is exactly what they're judging. (I think you see that point already, and we're probably arguing about some minor misunderstanding of mine.)
http://fora.tv/2010/05/22/Adam_Savage_Presents_Problem_Solving_How_I_Do_It
Delightful, and has a nice breakdown of the sort of questions to ask yourself (what exactly is the problem, how much precision is actually needed, what is the condition of the tools, etc.) if you want to get things done efficiently.
I've been reading the Quantum Mechanics sequence, and I have a question about Many-Worlds. My understanding of MWI and the rest of QM is pretty much limited to the LW sequence and a bit of Wikipedia, so I'm sure there will be no shortage of people here who have a better knowledge of it and can help me.
My question is this: why are the Born Probabilites a problem for MWI?
I'm sure it's a very difficult problem, I think I just fail to understand the implications of some step along the way. FWIW, my understanding of the Born Probabilities mainly clicks here:
Firstly, I know probability is the wrong word, but I'm going to use it here, insufficiently, in the same way that it's normally insufficiently used to talk about QM. I sure hope that's okay because it is a pain to nail down in English.
So... If a quantum event has a 30% chance of going LEFT and a 70% chance of going RIGHT (which you could observe without entangling yourself, for example by blasting a whole bunch of photons through slits and seeing the overall density pattern without measuring individual photons) (I think), then if you entangle yourself with a single instance of it, you'll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.
So why is this surprising? Obviously if we're just counting observers then we would expect a 50/50 probability spread, but I assume the problem isn't that naive. Obviously if the particles themselves exhibit a 30/70 preference, then we, being made of particles, should expect to do the same. Or... if the particles themselves can exist along a (pseudo)probability continuum, then why should we, the entangled, not expect to do the same? If those quarks are 70/30, then why aren't yours? Why should MWI necessarily imply the sudden creation of exactly 2 worlds with equal weight, as opposed to just dividing experience, locally and where necessary, into a weighted continuum?
I think I'll try this from another angle. MWI gets points for treating people/observers as particles, governed by the same laws as everything else. But are we really treating ourselves equally if we don't assume that we too follow this 30/70 split? It seems like this should be the default assumption, the one requiring no extra postulates: that we divide up not into discrete worlds but along a weighted continuum. Obviously it's easier on our typical conception of consciousness if we can just have the whole universe split neatly in two, but the continuum view feels to me like putting the weirdness where it logically belongs (on our comparatively weak understanding of conscious experience).
Hope this makes at least some sense to someone who can steer me in the right direction. I'd appreciate responses as to where specifically I've erred, as this will continue to bug me until I see where exactly I went wrong. Thanks in advance.
The surprising (or confusing, mysterious, what have you) thing is that quantum theory doesn't talk about a 30% probability of LEFT and a 70% probability of RIGHT; what it talks about is how LEFT ends up with an "amplitude" of 0.548 and RIGHT with an "amplitude" of 0.837. We know that the observed probability ends up being the square of the absolute value of the amplitude, but we don't know why, or how this even makes sense as a law of physics.
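Numerically, with the amplitudes from the example above (in general, amplitudes are complex numbers and abs() gives the modulus):

```python
amp_left = 0.548     # amplitude for LEFT
amp_right = 0.837    # amplitude for RIGHT

# Born rule: observed probability is the squared modulus of the amplitude.
p_left = abs(amp_left) ** 2    # about 0.30
p_right = abs(amp_right) ** 2  # about 0.70

# For a normalized state the squared moduli sum to 1
# (here only approximately, because the amplitudes are rounded).
total = p_left + p_right

# Works the same for a genuinely complex amplitude:
amp = complex(0.3, 0.447)
p = abs(amp) ** 2    # 0.3**2 + 0.447**2, about 0.29
```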
Ah. So it's not the idea that it's weighted so much as the specific act of squaring the amplitude. "Why squaring the amplitude, why not something else?".
I suppose the way I had been reading, I thought that the problem came from expecting a different result given the squared amplitude probability thing, not from the thing itself.
That is helpful, many thanks.
Yes, precisely.
That's one issue, but as Warrigal said, the other issue is "how this even makes sense." It seems to say that the amplitude is a measure of how real the configuration is.
Any recommendations for how much redundancy is needed to make ideas more likely to be comprehensible?
It really depends upon the topic and upon how much inferential difference there is between your ideas and the reader's understanding of the topic. Eliezer's earlier posts are easily understandable to someone with no prior experience in statistics, cognitive science, etc. because he uses a number of examples and metaphors to clearly illustrate his point. In fact, it might be helpful to use his posts as a metric to help answer your question. In general, though, it's probably best to repeat yourself by summarizing your point at both the beginning and end of your essay/post/whatever and by using several examples to illustrate whatever you are talking about, especially if writing for non-experts.
There's a general rule in writing that if you don't know how many items to put in a list, you use three. So if you're giving examples and you don't know how many to use, use three. Don't know if that helps, but it's the main heuristic I know that's actually concrete.
The only guideline I'm familiar with is "Tell me three times - tell me what you're going to explain, then explain it, then tell me what you just explained." This seems to work on multiple scales - from complete books to shorter essays (though I'm not sure if it works on the level of individual paragraphs).
I believe that's called the Bellman's Rule.
This post is about the distinctions between Traditional and Bayesian Rationality, specifically the difference between refusing to hold a position on an idea until a burden of proof is met versus Bayesian updating.
Good quality government policy is an important issue to me (it's my Something to Protect, or the closest I have to one), and I tend to approach rationality from that perspective. This gives me a different perspective from many of my fellow aspiring rationalists here at Less Wrong.
There are two major epistemological challenges in policy advice, in addition to the normal difficulties we all have to deal with: 1) Policy questions fall almost entirely within the social sciences. That means the quality of evidence is much lower than it is in the physical sciences. Uncontrolled observations, analysed with statistical techniques, are generally the strongest possible evidence, and sometimes you have nothing but theory or professional instinct to work with.
2) You have a very limited time in which to find an answer. Cabinet Ministers often want an answer within weeks, a timeframe measured in months is luxurious. And often a policy proposal is too sensitive to discuss with the general public, or sometimes with anyone outside your team.
By the standards of Traditional Rationality, policy advice is often made without meeting a burden of proof. Best guesses and theoretical considerations are too weak to reach conclusions. A proper practitioner of Traditional Rationality wouldn't be able to make any kind of recommendation; one could identify some promising initial hypotheses, but that's it.
But just because you didn't have time to come up with a good answer doesn't mean that Ministers don't expect an answer. And a practitioner of Bayesian Rationality always has a best guess as to what is true; even if the evidence base is non-existent, you can fall back on your prior. You don't want to be overconfident in stating your position: assumptions must be outlined and sensitivities should be explored. But you still need to give an answer, and that's what attracts me to Bayesian approaches: you don't have to be officially agnostic until being presented with a level of evidence that is unrealistically high for policy work.
It seems to me that if you have very good quality evidence then Bayesian and Traditional Rationality are very similar. Good evidence either proves or disproves a proposition for a Traditional Rationalist, and for a Bayesian Rationalist it will shift their probability estimate, as well as increasing their confidence a lot. The biggest difference seems to me to be that Bayesian Rationality is able to make use of weak evidence in a way Traditional Rationality can't.
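The "weak evidence" point can be made precise: in odds form, Bayes' rule just multiplies the prior odds by the likelihood ratio, so evidence with a ratio barely above 1 nudges the estimate instead of being thrown away. A minimal sketch:

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior probability of a hypothesis after seeing evidence E,
    where likelihood_ratio = P(E | H) / P(E | not-H).
    Uses the odds form of Bayes' rule."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)
```

Weak evidence (a likelihood ratio of, say, 1.2) moves a 50% prior to about 54.5% rather than being discarded for failing a burden-of-proof threshold; strong evidence (ratio 20) moves the same prior to about 95%.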
I am not at all like you. I don't have much interest in policy at all, and I do tend to refuse to hold a position, being very mindful of how easy it is to be completely off course (Probably from reading too much history of science. It's "the graveyard of dead ideas", after all.). I'm likely to tell the Cabinet Ministers to get off my back or they'll have absolutely useless recommendations.
However, I think you have hit upon the point that makes Bayesianism attractive to me: it's rationality you can use to act in real-time, under uncertainty, in normal life. Traditional Rationality is slow.
I see your point; the trouble is that a recommendation that comes too late is often absolutely useless. A lot of policy is time-dependent: if you don't act within a certain time frame then you might as well do nothing. While sometimes doing nothing is the right thing to do, a late recommendation is often no better than no recommendation.
Yeah, I forgot to add that you've budged me slightly from my staunch positivist attitude for social science. Thanks. Reading up on complex adaptive systems has made me just that much more skeptical about our ability to predict policy's effects, and perhaps biased me.
It's nice to know I've had an influence :)
As it happens, I'm pretty sceptical as to how much we can know as well. There's nothing like doing policy to gain an understanding of how messy it can be. The social sciences have a less than wonderful record in developing knowledge (look at development economics, as one example), and economic forecasting is still not much better than voodoo, but it's not like there's another group out there with all the answers. We don't have all of the answers, or even most of them, but we're better than nothing, which is the only alternative.
Nothing is often a pretty good alternative. Government action always comes at a cost, even if only the deadweight loss of taxation (keyphrase "public choice" for reasons you might expect the cost to be higher than that). I'm not trying to turn this into a political debate, but you should consider doing nothing not necessarily a bad thing, and what you do not necessarily better.
Politicians' logic: “Something must be done. This is something. Therefore we must do it.”
Reminded me of one of my favorite movie dialogues - from Sunshine. The context was actually physics, but the complexity of the situation and the time frame put the characters in the same position as you with the Cabinet Ministers.
Capa: It's the problem right there. Between the boosters and the gravity of the sun the velocity of the payload will get so great that space and time will become smeared together and everything will distort. Everything will be unquantifiable.
Kaneda: You have to come down on one side or the other. I need a decision.
Capa: It's not a decision, it's a guess. It's like flipping a coin and asking me to decide whether it will be heads or tails.
Kaneda: And?
Capa: Heads... We harvested all Earth's resources to make this payload. This is humanity's last chance... our last, best chance... Searle's argument is sound. Two last chances are better than one.
http://www.imdb.com/title/tt0448134/quotes?qt0386955
Yes, that's a good example. There are times when a decision has to be made, and saying you don't know isn't very useful. Even if you have very little to go on, you still have to decide one way or the other.
William Saletan at Slate is writing a series of articles on the history and uses of memory falsification, dealing mainly with Elizabeth Loftus and the ethics of her work. Quote from the latest article:
(This topic has, of course, been done to death around these parts.)
Interesting. I have read several of Loftus's books, but the last one was The Myth of Repressed Memory: False Memories and Allegations of Sexual Abuse over ten years ago. I think I'll go see what she has written since. Thanks for reminding me of her work.
The blog of Scott Adams (author of Dilbert) is generally quite awesome from a rationalist perspective, but one recent post really stood out for me: Happiness Button.
Classical game theorists establish a scientific consensus that the only rational course of action is not to push the buttons. Anyone who does is regarded with contempt or pity and gets lowered in the social stratum, before finally managing to rationalize the idea out of conscious attention, with the help of the instinct to conformity. A few free-riders smugly teach the remaining naive pushers a bitter lesson, only to stop receiving the benefit. Everyone gets back to business as usual, crazy people spinning the wheels of a mad world.
How does that work? I suppose it makes a little sense considering that the world has to go on and can't stop because everyone's on the ground being "happy", but that wouldn't mean that people wouldn't do it, or even that it wouldn't be the "rational" thing to do.
Is everyone missing the obvious subtext in the original article - that we already live in just such a world but the button is located not on the forehead but in the crotch?
That would not model the True Prisoner's Dilemma.
What's that got to do with the price of eggs?
Except that sex, unlike the button in the story, doesn't always make people happy. Sometimes, for some people, it comes with complications that decrease net utility. (Also, it is possible to push your own button with sex.)
Sure, but it's not my comparison - I'm just saying it appears to be the obvious subtext of the original article.
But two poor, "lonely" people could just get together and push each other's buttons. That's the problem with this: any two people who can cooperate with each other can get the advantage. There was once an experiment to evolve different programs in a genetic algorithm that could play the prisoner's dilemma. I'm not sure exactly how it was organized, which would really make or break different strategies, but the result was a program which always cooperated except when the other didn't, and it continued refusing to cooperate with the other until it believed they were "even".
Are you thinking of tit for tat?
I'm not trying to argue for or against the comparison. Would you agree that the subtext exists in the original article or do you think I'm over-interpreting?
No, the subtext is definitely there in the original article. At least, I saw it immediately, as did most of the commenters:
I think the best analogy would be drugs, but those have bad things associated with them that the button example doesn't. They take up money, they cause health problems, etc.
But you can touch that button yourself...
How does that compare to when someone else touches your button with their button?
I've never done that, so I don't know.
I see that subtext, but I also see a subtext of geeks blaming the obvious irrationality of everyone else for them not getting any, like, it's just poking a button, right?
Are you saying that classical game theorists would model the button-pushing game as one-shot PD? Why would they fail to notice the repetitive nature of the game?
The theory says to defect in the iterated dilemma as well (under some assumptions).
Here's what the theory actually says: if you know the number of iterations exactly, it's a Nash equilibrium for both to defect on all iterations. But if you know the chance that this iteration will be the last, and this chance isn't too high (e.g. below 1/3, can't be bothered to give an exact value right now), it's a Nash equilibrium for both to cooperate as long as the opponent has cooperated on previous iterations.
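The threshold falls out of a one-line calculation. Here is a sketch with the textbook payoff values T=5 (temptation), R=3 (reward), P=1 (punishment) — these numbers are my assumption, the comment doesn't fix them, and different payoffs give a different threshold:

```python
# Sketch: grim-trigger cooperation in the iterated PD with a random stopping time.
# T, R, P are the usual payoff labels (T > R > P); the values below are
# illustrative, not from the comment above.

def cooperation_sustainable(T, R, P, delta):
    """delta = probability the game continues after each round.

    Under grim trigger, cooperating forever is worth R / (1 - delta);
    defecting once yields T now and P per round thereafter.
    Cooperation is an equilibrium iff R/(1-delta) >= T + delta*P/(1-delta).
    """
    coop_value = R / (1 - delta)
    defect_value = T + delta * P / (1 - delta)
    return coop_value >= defect_value

# Rearranging gives the algebraic threshold: delta >= (T - R) / (T - P).
T, R, P = 5, 3, 1
threshold = (T - R) / (T - P)  # = 0.5 for these payoffs
```

With these payoffs the chance that each round is the last must be below 1/2 for mutual cooperation to be a Nash equilibrium; other payoff matrices move the threshold around, which is presumably where a figure like 1/3 could come from.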
I'd be far more willing to believe in game theorists calling for defection on the iterated PD than in mathematicians steering mainstream culture.
However, with the positive-sum nature of this game, I'd expect theorists to go with Schelling instead of Nash; and then be completely disregarded by the general public who categorize it under "physical ways of causing pleasure" and put sexual taboos on it.
This comment was very entertaining... but...
I actually do think people in such a world ought not to press buttons. But not very strongly... only about the same "oughtnotness" as people ought not to waste time looking at porn.
The argument is the same: Aren't there better things we could be doing?
Ideally, in button-world, people will devise a way to remove their buttons.
But if that couldn't be done, and we're seriously asking "what would happen?" I suppose it might end up being treated like sex. Having one's button publicly visible is "indecent" - buttons are only pushed in private. Etc. etc.
I dunno, this strikes me as a somewhat sex-negative attitude. Responding seriously to your question about the better things we could be doing: it strikes me that we spend most of our time doing worthless things. We seldom really know whether we are happy, what it means to be happy, or how what we are doing might connect to somebody's future happiness.
If the buttons actually made people happy from time to time, it could be quite useful as a 'reality check.' People suspecting that X led to happiness could test and falsify their claim by seeing whether X produced the same mental/emotional state that the button did.
Obviously we shouldn't spend all our time pressing buttons, having sex, or looking at porn. But I sometimes wonder whether we wouldn't be better off if most people, especially in the developed world where labor seems to be over-supplied and the opportunity cost of not working is low, spent a couple hours a day doing things like that.
Isn't that a bit like snorting some coke (or perhaps just masturbating) after a happy experience (say, proving a particularly interesting theorem) to test whether it was really 'happy'?
There are many different kinds of 'happiness', and what makes an experience a happy or an unhappy one is not at all simple to pin down. A kind of happiness that one can obtain at will, as often as desired, and which is unrelated to any "objective improvement" in oneself or the things one cares about, isn't really happiness at all.
Pretend it's new year's eve and you're planning some goals for next year - some things that, if you achieve them, you will look back with pride and a sense of accomplishment. Is 'looking at lots of porn' on your list (even assuming that it's free and no-one was harmed in producing it)?
I don't mean to imply anything about sex, because sex has a whole lot of things associated with it that make it extremely complicated. But the 'pleasure button' scenario gives us a clean slate to work from, and to me it seems an obvious reductio ad absurdum of the idea that pleasure = utility.
You seem to be confusing happiness with accomplishment:
Sure it is. It may not be accomplishment, or meaningfulness, but it is happiness, by definition. I think the confusion comes because you seem to value many other things more than happiness, such as pride and accomplishment. Happiness is just a feeling; it's not defined as something that you need to value most, or gain the most utility from.
Depends on how you define happiness. If you define it as "how much dopamine is in my system", "joy", or "these are the neat brainwaves my brain is giving off", then yes, you could achieve happiness by pressing a button (in theory).
A lot of people seem to assume happiness = utility measured in utilons, which is a whole different thing altogether.
Sort of like seeing someone writhe in ecstasy after jamming a needle in their arm and saying, "I'm so happy I'm not a heroin addict."
Oh, really? How can I get a cheap, legal, repeatable dopamine rush to my brain?
Edited my post to reflect your point. Although, I'm a young male and can achieve orgasm multiple times in under ten minutes with the aid of some lube and free porn. You probably didn't want to know that.
That's amazing. A drug that could eliminate refractory period like that would sell better than Viagra.
Yes, I've noticed that assumption, and I think even Jeremy Bentham talked about pleasure in utility terms. I don't think it's accurate for everyone, for instance, someone who values accomplishment more than happiness will assign higher utility to choices that lead to unhappy accomplishment than to unproductive leisure.
...and then they're happier working. By definition. Welcome to semantics.
That's a strange definition of "happier". They're happier with a choice just because they prefer that choice? Even if they appear frustrated and tired and grumpy all the time? Even if they tell you they're not happy and they prefer this unhappiness to not accomplishing anything?
(In real life, I suspect happy people actually accomplish more, but consider a hypothetical where you have to choose between unhappy accomplishment and unproductive leisure.)
How do you distinguish a degenerate case of 'happiness' from 'satiation of a need'? Is the smoker or heroin addict made 'happy' by their fix? Does a glass of water make you 'happy' if you're dying of thirst, or does it just satiate the thirst?
And can't the same sensation be either 'happy' or 'unhappy' depending on the circumstances? A person with persistent sexual arousal syndrome isn't made 'happy' by the orgasms they can't help but 'endure'.
The idea that there's a "raw happiness feeling" detachable from the information content that goes with it is intuitively appealing but fatally flawed.
Yes, this is true. We will need to assume that the button can analyze the context to determine how to provide happiness for the particular brain it's attached to.
My point is that happiness is not necessarily associated with accomplishment or objective improvement in oneself (though it can be). In such a situation, some people might not value this kind of detached happiness, but that doesn't mean it's not happiness.
The analogy to sex is rough. From a historical and evolutionary perspective, sex is treated the way it is because it leads to gene replication and parenthood, not because it leads to pleasure. The lack of side effects from the buttons makes them more comparable to rubbing someone's back, smiling, or saying something nice to someone.
OK - well that's one possibility. But in discussing either of these analogies, aren't we just showing (a) that the pleasure-button scenario is underdetermined, because there are many different kinds of pleasure and (b) that it's redundant, because people can actually give each other pats on the back, or hand-jobs or whatever.
A social custom would be established that buttons are only to be pressed by knocking foreheads together. Offering to press a button in a fashion that doesn't ensure mutuality is seen as a pathetic display of low status.
Pushing someone's happiness button is like doing them a favor, or giving them a gift. Do we have social customs that demand favors and gifts always be exchanged simultaneously? Well, there are some customs like that, but in general no, because we have memory and can keep mental score.
Hah. Status is relative, remember? Your setup just ensures that "dodging" at the last moment, getting your button pressed without pressing theirs, is seen as a glorious display of high status.
We already have these buttons on LessWrong... ;)
Karma does make me feel important, but when it comes to happiness karma can't hold a candle to loud music, alcohol and girls (preferably in combination). I wish more people recognized these for the eternal universal values they are. If only someone invented a button to send me some loud music, alcohol and girls, that would be the ultimate startup ever.
Why is LessWrong not an Amazon affiliate? I recall buying at least one book due to it being mentioned on LessWrong, and I haven't been around here long. I can't find any reliable data on the number of active LessWrong users, but I'd guess it would number in the 1000s. Even if only 500 are active, and assuming only 1/4 buy at least one book mentioned on LessWrong, assuming a mean purchase value of $20 (books mentioned on LessWrong probably tend towards the academic, expensive side), that would work out at $375/year.
IIRC, it only took me a few minutes to sign up as an Amazon affiliate. They (stupidly) require a different account for each Amazon website, so 5*4 minutes (.com, .co.uk, .de, .fr), +20 for GeoIP database, +3-90 (wide range since coding often takes far longer than anticipated) to set up URL rewriting (and I'd be happy to code this) would give a 'worst case' scenario of $173 annualized returns per hour of work.
Now, the math is somewhat questionable, but the idea seems like a low-risk, low-investment and potentially high-return one, and I note that Metafilter and StackOverflow do this, though sadly I could not find any information on the returns they see from this. So, is there any reason why nobody has done this, or did nobody just think of it/get around to it?
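For what it's worth, the arithmetic above can be checked in a few lines. The 15% commission rate below is inferred from the $375 figure, not stated anywhere; Amazon's actual referral rates are typically lower, so treat the result as an upper bound:

```python
# Back-of-the-envelope check of the affiliate-revenue estimate above.
# The commission rate is my inference, not a figure from the comment.

active_users = 500
buy_fraction = 0.25
mean_purchase = 20.00          # dollars
commission = 0.15              # implied by $375 on $2,500 of sales

annual_sales = active_users * buy_fraction * mean_purchase   # $2,500
annual_returns = annual_sales * commission                   # $375

# Worst-case setup time: 4 accounts x 5 min, +20 min GeoIP, +90 min coding
setup_hours = (4 * 5 + 20 + 90) / 60                         # ~2.17 hours
returns_per_hour = annual_returns / setup_hours              # ~$173/hour
```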
From your link, a further link doesn't make it sound great at SO - 2-4x the utter failure. But they are very positive about it because the cost of implementation was very low. Just top-level posts or no geolocating would be even cheaper.
You may be amused (or something) by this search
A possibly relevant data point: I usually post any links to books I put online with my amazon affiliate link and in the last 3 months I've had around 25 clicks from links to books I believe I posted in Less Wrong comments and no conversions.
Marginal Revolution linked to A Fine Theorem, which has summaries of papers in decision theory and other relevant econ, including the classic "agreeing to disagree" results. A paper linked there claims that the probability settled on by Aumann-agreers isn't necessarily the same one as the one they'd reach if they shared their information, which is something I'd been wondering about. In retrospect this seems obvious: if Mars and Venus only both appear in the sky when the apocalypse is near, and one agent sees Mars and the other sees Venus, then they conclude the apocalypse is near if they exchange info, but if the probabilities for Mars and Venus are symmetrical, then no matter how long they exchange probabilities they'll both conclude the other one probably saw the same planet they did. The same thing should happen in practice when two agents figure out different halves of a chain of reasoning. Do I have that right?
ETA: it seems, then, that if you're actually presented with a situation where you can communicate only by repeatedly sharing probabilities, you're better off just conveying all your info by using probabilities of 0 and 1 as Morse code or whatever.
ETA: the paper works out an example in section 4.
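The Mars/Venus structure can also be made concrete with a toy joint distribution. The numbers and the "each agent spots one planet at random when both are up" mechanism are mine, chosen for symmetry, not taken from the paper:

```python
# Toy model: apocalypse iff both planets are up. If only one planet is up,
# both agents see it; if both are up, each agent happens to spot one of the
# two at random (independently). All numbers are illustrative.
from fractions import Fraction

prior = {"mars_only": Fraction(49, 100),
         "venus_only": Fraction(49, 100),
         "both": Fraction(2, 100)}

# P(an agent sees Mars | state)
p_see_mars = {"mars_only": Fraction(1), "venus_only": Fraction(0),
              "both": Fraction(1, 2)}
p_see_venus = {s: 1 - p for s, p in p_see_mars.items()}

def posterior_apocalypse(likelihood):
    joint = {s: prior[s] * likelihood[s] for s in prior}
    return joint["both"] / sum(joint.values())

# Each agent's private posterior is the same number whichever planet they saw...
p_given_mars = posterior_apocalypse(p_see_mars)    # agent saw Mars
p_given_venus = posterior_apocalypse(p_see_venus)  # agent saw Venus
assert p_given_mars == p_given_venus == Fraction(1, 50)

# ...so exchanging posteriors reveals nothing, and both agents stay at 2%,
# each believing the other probably saw the same planet. Pooling the raw
# observations (one saw Mars, one saw Venus) instead pins down the state:
joint_both_obs = {s: prior[s] * p_see_mars[s] * p_see_venus[s] for s in prior}
p_pooled = joint_both_obs["both"] / sum(joint_both_obs.values())
assert p_pooled == 1  # only "both" is consistent with both observations
```

So under symmetry the announced probabilities carry no information about which planet was seen, while sharing the observations themselves makes the apocalypse certain, which matches the claim above.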
The entire world media seems to have had a mass rationality failure about the recent suicides at Foxconn. There have been 10 suicides there so far this year, at a company which employs more than 400,000 people. This is significantly lower than the base rate of suicide in China. However, everyone is up in arms about the 'rash', 'spate', 'wave'/whatever of suicides going on there.
When I first read the story I was reading a plausible explanation of what causes these suicides by a guy who's usually pretty on the ball. Partly due to the neatness of the explanation, it took me a while to realise that there was nothing to explain.
Your strength as a rationalist is your ability to be more confused by fiction than by reality. It's even harder to achieve this when the fiction comes ready-packaged with a plausible explanation (especially one which fits neatly with your political views).
The first question that came to mind when I heard about this story was 'what's the base rate?'. I didn't investigate further but a quick mental estimate made me doubt that this represented a statistically significant increase above the base rate. It's disappointing yet unsurprising that few if any media reports even consider this point.
Wasn't there a somewhat well-publicized "spate" of suicides at a large French telecom a while back? I remember the explanation being the same - the number observed was just about what you'd expect for an employer of that size.
ETA: http://en.wikipedia.org/wiki/France_Telecom
Even if the suicide rate was somewhat higher than average, it still doesn't necessarily tell you much. You should really be looking at the probability of that number of suicides occurring in some distinct subset of the population - given all the subsets of a population that you can identify, you will expect some to have higher suicide rates than the population as a whole. The relevant question is 'what is the probability that you would observe this number of suicides by chance in some randomly selected subset of this size?'
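As a first step, the base-rate comparison itself can be made concrete. The ~22-per-100,000 annual rate for China and the five-month window are my assumptions (published figures from that period vary by source), so take the exact numbers loosely:

```python
# Rough Poisson check of the base-rate argument. Both the national rate
# and the time window are assumptions, not figures from the comments above.
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), summed directly."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

employees = 400_000
annual_rate = 22 / 100_000   # assumed national base rate
months = 5                   # "so far this year", as of a June thread

expected = employees * annual_rate * months / 12   # ~36.7 expected suicides
p_ten_or_fewer = poisson_cdf(10, expected)         # P(observing <= 10)
```

At the assumed base rate you'd expect roughly 37 suicides among 400,000 people over five months, and seeing 10 or fewer would itself be astonishingly unlikely, which only sharpens the question of what the right reference class is.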
Incidentally the rate appears to be below that of Cambridge University students:
Yes, this is my counter-counter-criticism as well. 'Sure, the overall China rate may be the same, but what's the suicide rate for young, employed workers employed by a technical company with bright prospects? I'll bet it's lower than the overall rate...'
Agreed. Also, I think what got the suicides in China in the news was that the victim attributed the suicide specifically to some weird policy or rule the company adhered to. It could be that the "normal" suicides at the company are being ignored, and the ones being reported are the suicides on top of this, justifying that concern that this is abnormal.
This was why I went looking for stats on suicides amongst university students. I remembered some talk when I was at Cambridge of a high suicide rate, which you might see as somewhat similarly counter-intuitive to a high suicide rate for 'young, employed workers employed by a technical company with bright prospects'.
Actually, there are a number of reasons to expect a somewhat elevated suicide rate in a relatively high pressure environment where large numbers of young people have left home for the first time and are living in close proximity to large numbers of strangers their own age. Stories about high suicide rates at elite universities tend to take a very different tack to stories about Chinese workers however.
That's what I thought as well, until I read this post from "Fake Steve Jobs". Not the most reliable source, obviously, but he does seem to have a point:
Now I'm not entirely sure of the details, but if it's true that all the suicides in the recent cluster consisted of jumping off the Foxconn factory roof, that does seem to be more significant than just 15 employees committing suicide in unrelated incidents. In fact, it seems like it might even be the case that there are a lot more suicides than the ones we've heard about, and the cluster of 15 are just those who've killed themselves via this particular, highly visible, method (I'm just speculating here).
I'm not sure what to make of this - without knowing more of the details its probably impossible to say what's going on. But the basic point seems sound: that the argument about being below national average suicide rates doesn't really hold up if there's something specific about a particular group of incidents that makes them non-independent. As an example, if the members of some cult commit suicide en masse, you can't look at the region the event happened in and say "well the overall suicide rate for the region is still below the national average, so there's nothing to see here"
Suicide and methods of suicide are contagious, FWIW.
I was surprised when I read a statistical analysis on national death rates. Whenever there was a suicide by a particular method published in newspapers or on television, deaths of that form spiked in the following weeks. This is despite the copycat deaths often being called 'accidents' (examples included crashed cars and aeroplanes). Scary stuff (or very impressive statistics-fu).
Yes, this is connected to the existence of suicide epidemics. The most famous example is the ongoing suicide epidemic over the last fifty years in Micronesia, where both the causes and methods of suicide have been the same (hanging). See for example this discussion.
keyword = "werther effect"
http://en.wikipedia.org/wiki/Werther_effect
If all the members of a cult committed suicide then the local rate is 100%.
The most local rate that we so far know of is 15/400,000 which is 4x below baseline. If these 15 people worked at, say, the same plant of 1,000 workers you may have a point. But we don't know.
At this point there is nothing to explain.
Fair enough - my example was poorly thought out in retrospect.
But I don't think it's correct that there's nothing to explain. If it's true that all 15 committed suicide by the same method - a fairly rare method frequently used by people who are trying to make a public statement with their death - then there seems to be something needing to be explained. As Fake Steve Jobs points out later in the cited article, if 15 employees of Walmart committed suicide within the span of a few months, all of them by way of jumping off the roof of their Walmart, wouldn't you think that was odd? Don't you think that would be more significant, and more deserving of an explanation, than the same 15 Walmart employees committing suicide in a variety of locations, by a variety of different methods?
I'm not committing to any particular explanation here (Douglas Knight's suggestion, for one, sounds like a plausible explanation which doesn't involve any wrongdoing on Foxconn's part), I'm just saying that I do think there's "something to explain".
Just curious: why the downvote? Was this just a case of downvote = disagree? If so, what do you disagree with specifically?
Strange. I thought it made a good point, so I just upvoted it.
(Wherein I seek advice on what may be a fairly important decision.)
Within the next week, I'll most likely be offered a summer job where the primary project will be porting a space weather modeling group's simulation code to the GPU platform. (This would enable them to start doing predictive modeling of solar storms, which are increasingly having a big economic impact via disruptions to power grids and communications systems.) If I don't take the job, the group's efforts to take advantage of GPU computing will likely be delayed by another year or two. This would be a valuable educational opportunity for me in terms of learning about scientific computing and gaining general programming/design skill; as I hope to start contributing to FAI research within 5-10 years, this has potentially big instrumental value.
In "Why We Need Friendly AI", Eliezer discussed Moore's Law as a source of existential risk:
Due to the quality of the models used by the aforementioned research group and the prevailing level of interest in more accurate models of solar weather, successful completion of this summer project will probably result in a nontrivial increase in demand for GPUs. It seems that the next best use of my time this summer would be to work full time on the expression-simplification abilities of a computer algebra system.
Given all this information and the goal of reducing existential risk from unFriendly AI, should I take the job with the space weather research group, or not? (To avoid anchoring on other people's opinions, I'm hoping to get input from at least a couple of LW readers before mentioning the tentative conclusion I've reached.)
ETA: I finally got an e-mail response from the research group's point of contact and she said all their student slots have been taken up for this summer, so that basically takes care of the decision problem. But I might be faced with a similar choice next summer, so I'd still like to hear thoughts on this.
Uninformed opinion: space weather modelling doesn't seem like a huge market, especially when you compare it to the truly massive gaming market. I doubt the increase in demand would be significant, and if what you're worried about is rate of growth, it seems like delaying it a couple of years would be wholly insignificant.
I would say that there seem to be a lot of companies that are in one way or another trying to advance Moore's law. For as long as it doesn't seem like the one you're working on has a truly revolutionary advantage as compared to the other companies, just taking the money but donating a large portion of it to existential risk reduction is probably an okay move.
(Full disclosure: I'm an SIAI Visiting Fellow so they're paying my upkeep right now.)
Cleaning out my computer I found some old LW-related stuff I made for graphic editing practice. Now that we have a store and all, maybe someone here will find it useful:
We have a store? Where?
Roko Mijic has a Zazzle store. (See also.)
Sweet!
Yep, it was probably the first rationalist joke ever that made me laugh.
Lol, although, what does astrology have to do with anything Less Wrong-ish?
That's a reference to Three Worlds Collide.
New papers from Nick Bostrom's site.
The 2nd one, "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks", is good reading.
LW too focused on verbalizable rationality
This comment got me thinking about it. Of course, LW being a website, it can only deal with verbalizable information (rationality). So what are we missing? Skillsets that are not verbalizable and have to be learned in other, practical ways: interpersonal relationships being just one of many. I also think the emotional brain is part of it. There might be people here who are brilliant thinkers yet emotionally miserable because of their personal context or upbringing, and I think dealing with that would be important. I think a holistic approach is required. Eliezer had already suggested the idea of a rationality dojo. What do you think?
I'm a draftsman and it always struck me how absolutely terrible the English language is for talking about ludicrously simple visual concepts precisely. Words like parallel and perpendicular should be one syllable long.
I wonder if there's a way to apply rational/mathematical thinking beyond geometry and to the world of art.
I think it would be great to systematically explore and develop useful skillsets, perhaps in a modular fashion. We do have sequences. I would join a rationality dojo immediately.
What do you mean practical ways? I understand the difficulty of transferring kinesthetic or social understanding, but how can we overcome that in nonverbalized fashion?
Some things have to be shown, you have to sometimes take part in an activity to "get" it, learn by trial and error, get feedback pointing out mistakes that you are unaware of, etc...
For example?
Do you think you could describe this image to an arbitrarily talented artist and end up with an image that even looked like it was based on it?
http://smithandgosling.files.wordpress.com/2009/05/the-reader.jpg
It's not so much, "Such insolence, our ideas are so awesome they can not be broken down by mere reductionism" as "Wow, words are really bad at describing things that are very different from what most of the people speaking the language do."
I think you could make an elaborate set of equations on a cartesian graph and come up with a drawing that looked like it and say fill up RGB values #zzzzzz at coordinates x,y or whatever, but that seems like a copout since that doesn't tell you anything about how Fragonard did it.
I've been talking to various people about the idea of a Rationality Foundation (working title) which might end up sponsoring or facilitating something like rationality dojos. Needless to say this idea is in its infancy.
The example of coding dojos for programmers might be relevant, and not just for the coincidence in metaphors.
My theory of happiness.
And a very condensed note I wrote to myself (in brainstormish mode, without regard for feasibility or testability):
Hi Kaj, I really liked the article. I had a relevant theory to explain the perceived difference of attitudes of north Europeans versus south Europeans. I guess you could call it a theory of unhappiness. Here goes:
I take as granted that mildly depressed people tend to make more accurate assessments of reality, that north Europeans have a higher incidence of depression, and also much better-functioning economies and democracies. Given a low-resource environment, one needs to plan further ahead and make more rational projections of the future. If being on the depressive side makes one more introspective and thoughtful, then it would be conducive to having better long-term plans. In a sense, happiness could be greed-inducing, in a greedy-algorithm sense. This more or less agrees with Kaj's theory. OTOH, unhappiness would encourage long-term planning and even more co-operative behaviour.
In the current environment, resources may not be scarce, but our world has become much more complex, actions having much deeper consequences than in the ancestral environment (Nassim Nicholas Taleb makes this point in The Black Swan), therefore also needing better-thought-out courses of action. So northern Europeans have lucked out, in that their adaptation to climate has been useful for the current reality. If one sees corruption as local-greedy behaviour, as opposed to lawfulness as global-cooperative behaviour, this would also explain why, going closer to the equator, you generally see an increase in corruption and also failures in democratic government. Taken further, it would imply that near-equator peoples are simply not well-adapted to democratic rule, which demands a certain limiting of short-term individual freedom for the longer-term common good, and that a more distributed/localised form of governance would do much better. I think this (rambling) theory can more or less be pieced together with Kaj's, adding long-term planning as a second dimension.
Disclaimer: Before anyone accuses me of discrimination, I am in fact a south European (Greek), living in north Europe (the UK), and while this does not absolve me of all possibility of racism against my own, this theory has formed from my effort to explain the cultural differences I experience on a daily basis. Take it for what it's worth.
How does this make sense, exactly? A happy person, with more resources, would be better off not taking risks that could result in him losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results. If you told a rich person, "jump off that cliff and I'll give you a million dollars," they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it as long as there was a chance they could survive.
My idea of why people were happy wasn't a static value of how many resources they had, but a comparative value. A rich person thrown into poverty would be very unhappy, but the poor person might be happy.
Kaj's hypothesis is a bit off: what he's actually talking about is the explore/exploit tradeoff. An animal in a bad (but not-yet catastrophic) situation is better off exploiting available resources than scouting new ones, since in the EEA, any "bad" situation is likely to be temporary (winter, immediate presence of a predator, etc.) and it's better to ride out the situation.
OTOH, when resources are widely available, exploring is more likely to be fruitful and worthwhile.
The connection to happiness and risk-taking is more tenuous.
I'd be interested in seeing the results of that experiment. But "rich" and "poor" are even more loosely correlated with the variables in question - there are unhappy "rich" people and unhappy "poor" people, after all.
(In other words, this is all about internal, intuitive perceptions of resource availability, not rational assessments of actual resource availability.)
If I were to wager a guess, the people who would accept the deal are those who feel they are in a catastrophic situation.
Speaking of catastrophic situations, have you seen The Wages of Fear or any of the remakes? I've only seen Sorcerer, but it was quite good. It's a rather more realistic situation that jumping off a cliff, but the structure is the same: a group of desperate people driving cases of nitroglycerin-sweating dynamite across rough terrain to get enough money that they can escape.
Or maybe not...
I'd buy "main road incorporating rope suspension bridges" over "millionaire hiring people to throw themselves off cliffs", but I see what you mean.
I believe you're right, now that I think about that.
I was kind of thinking expected value. In principle, if you always go by expected value, in the long run you will end up maximizing your value. But this may not be the best move to make if you're low on resources, because with bad luck you'll run out of them and die even though you made the moves with the highest expected value.
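The bankroll point can be sketched with a toy simulation (all parameters are mine): a bet can have positive expected value on every round and still ruin anyone who stakes everything, while a fractional bettor survives.

```python
# Toy simulation: a +EV bet (win 60% of the time at even odds). Staking the
# whole bankroll maximizes per-round expected value but goes bust on the
# first loss; the Kelly fraction for this bet (f = 2p - 1 = 0.2) never does.
import random

def play(fraction, rounds=50, p_win=0.6, seed=0):
    rng = random.Random(seed)
    bankroll = 100.0
    for _ in range(rounds):
        stake = bankroll * fraction
        bankroll += stake if rng.random() < p_win else -stake
    return bankroll

all_in = play(fraction=1.0)   # one loss zeroes the bankroll forever
kelly = play(fraction=0.2)    # Kelly fraction for p=0.6 at even odds
```

The all-in bettor ends at zero almost surely (surviving 50 rounds has probability 0.6^50), while the fractional bettor's bankroll can shrink but never hits zero.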
However, your objection does make sense and Eby's reformulation of my theory is probably the superior one, now that I think about it.
So I've started drafting the very beginnings of a business plan for a Less Wrong (book) store-ish type thingy. If anybody else is already working on something like this and is advanced enough that I should not spend my time on this mini-project, please reply to this comment or PM me. However, I would rather not be inundated with ideas as to how to operate such a store yet: I may make a Less Wrong post in the future to gather ideas. Thanks!
Observation: The May open thread, part 2, had very few posts in its last days, whereas this one has exploded within the first 24 hours of its opening. I know I deliberately withheld content from it, since once it is superseded by a new thread, few would go back and look at the posts in the previous one. This would predict a slowing down of content in the open threads as the month draws to a close, and a sudden burst at the start of the next month, a distortion that is an artifact of the way we organise discussion. Does anybody else follow the same rule for their open thread postings? Is there something that should be done to solve this artificial throttling of discussion?
Some sites have gone to an every Friday open thread; maybe we should do it weekly instead of monthly, too.
I would support that.
From observations even of previous "Part 2"s, it would seem that there is enough content to support that frequency of open thread.
I don't post in the open threads much, but if I run into a good rationality quote I tend to wait until the next rationality quotes thread is opened unless the current one is less than a week or so old.
To the powers that be: Is there a way for the community to have some insight into the analytics of LW? That could range from periodic reports, to selective access, to open access. There may be a good reason why not, but I can't think of it. Beyond generic transparency brownie points, since we are a community interested in popularising the website, access to analytics may produce good, unforeseen insights. Also, authors would be able to see viewership of their articles, and related keyword searches, and so be better able to adapt their writing to the audience. For me, a downside of posting here instead of my own blog is the inability to access analytics. Obviously I still post here, but this is a downside that may not have to exist.
Here's an interesting video.
Drive: The Surprising Truth About What Motivates Us
An engaging video, thanks. The study sounded familiar, so I looked for it... turns out I'd seen the guy's TED talk a while back: http://www.ted.com/talks/dan_pink_on_motivation.html
In A Technical Explanation of Technical Explanation, Eliezer writes,
So I have a question. Is this not an endorsement of frequentism? I don't think I understand fully, but isn't counting the instances of the event exactly frequentist methodology? How could this be Bayesian?
As I understand it, frequentism requires large numbers of events for its interpretation of probability, whereas the bayesian interpretation allows the convergence of relative frequencies with probabilities but claims that probability is a meaningful concept when applied to unique events, as a "degree of plausibility".
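One minimal illustration of that Bayesian reading is Laplace's rule of succession under a uniform prior (my choice of example, not something from the thread): the posterior degree of plausibility is well-defined even before any events have occurred, and converges to the observed relative frequency as counts accumulate.

```python
from fractions import Fraction

def posterior_mean(successes, trials):
    """Posterior probability of the event under a uniform Beta(1,1)
    prior (Laplace's rule of succession): (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# A Bayesian can assign a degree of plausibility with zero data,
# where a frequency is undefined:
print(posterior_mean(0, 0))        # 1/2
# ...and that degree converges to the relative frequency as counts grow:
print(posterior_mean(7, 10))       # 2/3, near the observed 0.7
print(posterior_mean(700, 1000))   # 701/1002, much closer to 0.7
```

So counting instances is perfectly Bayesian; the difference is that the frequentist takes the long-run frequency to *define* probability, while the Bayesian treats it as evidence that updates a degree of plausibility that existed all along.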
Good article on the abuse of p-values: http://www.sciencenews.org/view/feature/id/57091/title/Odds_are,_its_wrong
I would like to see a top-level link post and discussion of this article (and maybe other related papers).
Anyone here live in California? Specifically, San Diego county?
The judicial election on June 8th has been subject to a campaign by a Christian conservative group. You probably don't want them to win, and this election is traditionally a low turnout one, so you might want to put a higher priority on this judicial election than you normally would. In other words, get out there and vote!
Are there any rationalist psychologists?
Also, more specifically but less generally relevant to LW; as a person being pressured to make use of psychological services, are there any rationalist psychologists in the Denver, CO area?
Thought I might pass this along and file it under "failure of rationality". Sadly, this kind of thing is increasingly common -- getting deep in education debt, but not having increased earning power to service the debt, even with a degree from a respected university.
Summary: Cortney Munna, 26, went $100K into debt to get worthless degrees and is deferring payment even longer, making interest pile up further. She works in an unrelated area (photography) for $22/hour, and it doesn't sound like she has a lot of job security.
We don't find out until the end of the article that her degrees are in women's studies and religious studies.
There are much better ways to spend $100K. Twentysomethings like her are filling up the workforce. I'm worried about the future implications.
I thank my lucky stars I'm not in such a position (in the respects listed in the article -- Munna's probably better off in other respects). I didn't handle college planning as well as I could have, and I regret it to this day. But at least I didn't go deep into debt for a worthless degree.
Forgive me if this is beating a dead horse, or if someone brought up an equivalent problem before; I didn't see such a thing.
I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems to be equivalent to DSvsT, is easily understandable via my moral intuition, and gives the "wrong" (i.e., not purely utilitarian) answer.
Suppose I have ten people and a stick. The appropriate infinitely powerful theoretical being offers me a choice: I can hit all ten of them with the stick once each, or I can hit one of them nine times. "Hitting with a stick" has some constant negative utility for all the people. What do I do?
This seems to me to be exactly dust specks vs. torture scaled down to humanly intuitable scales. I think the obvious answer is to hit all the people once. Examining my intuition tells me that this is because I think the aggregation function for utility is different across different people than across one person's possible futures. Specifically, my intuition tells me to maximize across people the minimum expected utility across an individual's future.
So, is there a name for this position?
Do people think my example is equivalent to DSvsT?
Do people get the same or different answer with this question as they do with DSvsT?
There's one difference, which is that the inequality of the distribution is much more apparent in your example, because one of the options distributes the pain perfectly evenly. If you value equality of distribution as worth more than one unit of pain, it makes sense to choose the equal distribution of pain. This is similar to economic discussions about policies that lead to greater wealth, but greater economic inequality.
Oh, and I'd love to hear what you mean about this.
I think the point of Dust Specks vs. Torture was scope failure. Even allowing for some sort of "negative marginal utility", once you hit a wacky number like 3^^^3, it doesn't matter: .000001 negative utility points multiplied by 3^^^3 is worse than anything, because 3^^^3 is wacky huge.
For the stick example, I'd say it would have to depend on a lot of factors about human psychology and such, but I think I'd hit the one. Marginal disutility tends to diminish, and I think the shock of repeated blows to one person would be less than the shock of one blow against each of ten separate people.
I think your opinion basically is an appeal to egalitarianism, since you expect negative utility to yourself from an unfair world where one person gets something that ten other people did not, for no good or fair reason.
Part of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.
I don't think maximising the minima is what you want. Suppose your choice is to hit one person 20 times, or five people 19 times each. Unless your intuition is different from mine, you'll prefer the first option.
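Spelling that counterexample out numerically (assuming ten people in total and one unit of disutility per blow, as in the original problem):

```python
# Option A: hit one person 20 times.
option_a = [-20] + [0] * 9
# Option B: hit five people 19 times each.
option_b = [-19] * 5 + [0] * 5

# Maximin judges by the worst-off person, so it prefers B
# (a minimum of -19 beats a minimum of -20)...
print(min(option_a), min(option_b))  # -20 -19
# ...even though B deals 95 blows in total versus A's 20.
print(sum(option_a), sum(option_b))  # -20 -95
```

Maximin is indifferent to everything except the single worst-off person, so it will trade arbitrarily large amounts of widely spread suffering for a one-unit improvement at the bottom, which is the force of the objection.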