Comment author: gattsuru 03 December 2012 11:44:09PM -1 points [-]

It turns out that facts, when viewed as a large body of knowledge, are just as predictable. Facts, in the aggregate, have half-lives: we can measure the amount of time for half of a subject's knowledge to be overturned.


Samuel Arbesman, The Half-Life of Facts: Why Everything We Know Has an Expiration Date.

Comment author: Mass_Driver 07 December 2012 02:13:15AM 3 points [-]

Also, this book was a horrible agglomeration of irrelevant and un-analyzed factoids. If you've already read any two Malcolm Gladwell books or Freakonomics, it'd be considerably more educational to skip this book and just read the cards in a Trivial Pursuit box.

Comment author: Mass_Driver 06 December 2012 11:36:40PM 1 point [-]

The undergrad majors at Yale University typically follow lukeprog's suggestion -- there will be 20 classes on stuff that is thought to constitute cutting-edge, useful "political science" or "history" or "biology," and then 1 or 2 classes per major on "history of political science" or "history of history" or "history of biology." I think that's a good system. It's very important not to confuse a catalog of previous mistakes with a recipe for future progress, but for the same reasons that general history is interesting and worthwhile for the general public to know something about, the history of a given discipline is interesting and worthwhile for students of that discipline to look into.

Comment author: lavalamp 06 December 2012 09:52:24PM 7 points [-]
Comment author: Mass_Driver 06 December 2012 11:33:22PM 4 points [-]

I honestly have no idea which, if any, of the reddit philosophers are trolling. It's highly entertaining reading, though.

Comment author: Mass_Driver 16 October 2012 06:49:50PM 0 points [-]

We could bemoan these legacies, but it makes more sense to confront them head on, to consider just how we should live not in light of the bodies we wish we had but instead with the ones we are born with, bodies that evolved in the wild, thanks to ancestors who only just barely got away.

http://www.slate.com/articles/health_and_science/human_evolution/2012/10/evolution_of_anxiety_humans_were_prey_for_predators_such_as_hyenas_snakes.2.html

Comment author: Bugmaster 16 October 2012 05:04:36PM 2 points [-]

If money doesn't buy you happiness, you don't have enough money.

For example, what would you do if you had ten billion dollars? Some people would answer, "I'd buy my own zoo!" or whatever, but the real answer is, "I would never work again; instead, I'd pursue whatever projects I found interesting." That kind of freedom could enable you to be quite happy.

I'm not sure if this kind of experience scales to lower amounts of money; there's probably a minimum threshold above which wealth becomes entirely self-sustaining, and below which you'd still have to work for a living. Still, even below the threshold, you can spend your money on automating and outsourcing smaller chunks of your daily drudgery, thus indirectly purchasing happiness.
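To make that threshold concrete, here's a rough sketch: assume wealth is "self-sustaining" once a safe withdrawal rate covers annual spending. The 4% rate and the spending levels below are illustrative assumptions, not numbers from the comment.

```python
# Rough sketch of the "minimum threshold" above. Assumes wealth is
# self-sustaining once a safe withdrawal rate covers annual spending.
# The 4% rate and the spending levels are illustrative assumptions.

def self_sustaining_threshold(annual_spending, withdrawal_rate=0.04):
    """Wealth at which investment returns plausibly cover spending indefinitely."""
    return annual_spending / withdrawal_rate

for spending in (40_000, 100_000, 250_000):
    print(f"${spending:,}/year of spending -> ${self_sustaining_threshold(spending):,.0f}")
# $40,000/year of spending -> $1,000,000
# $100,000/year of spending -> $2,500,000
# $250,000/year of spending -> $6,250,000
```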

Comment author: Mass_Driver 16 October 2012 05:58:28PM 7 points [-]

If money doesn't buy you happiness, you don't have enough money.

It's trivially true that multiplying the amount of money you have by 10,000 will probably make you much happier, but the interesting question is whether this is the easiest or most efficient route to increasing happiness. Since most people have no practical path to acquiring ten billion dollars, and most people could learn to enjoy their possessions more, Alicorn's piece is quite useful.

Comment author: carey 14 October 2012 10:47:09AM 0 points [-]

Note Carl Shulman's counterargument to the assumption of a normal prior here and the comments traded between Holden and Carl.

"If your prior was that charity cost-effectiveness levels were normally distributed, then no conceivable evidence could convince you that a charity could be 100x as good as the 90th percentile charity. The probability of systematic error or hoax would always be ludicrously larger than the chance of such an effective charity. One could not believe, even in hindsight, that paying for Norman Borlaug’s team to work on the Green Revolution, or administering smallpox vaccines (with all the knowledge of hindsight) actually did much more good than typical. The gains from resources like GiveWell would be small compared to acting like an index fund and distributing charitable dollars widely."

Comment author: Mass_Driver 16 October 2012 07:59:58AM 1 point [-]

The problem with this analysis is that it assumes that the prior should be given the same weight both ex ante and ex post. I might well decide to evenly weight my prior (intuitive) distribution showing a normal curve and my posterior (informed) distribution showing a huge peak for the Green Revolution, in which case I'd only think the Green Revolution was one of the best charitable options, and would accordingly give it moderate funding, rather than all available funding for all foreign aid. But then, ten years later, with the benefit of hindsight, I factor in a third distribution, showing the same huge peak for the Green Revolution. And because the third distribution is based not on intuition or abstract predictive analysis but on actual past results, it's entitled to much more weight. I might calculate a Bayesian update based on observing my intuition once, my analysis once, and the historical track record ten or twenty times. At that point, I would have no trouble believing that a charity was 100x as good as the 90th percentile. That's an extraordinary claim, but the extraordinary evidence to support it is well at hand.

By contrast, no amount of ex ante analysis would persuade me that your proposed favorite charity is 100x better than the current 90th percentile, and I have no problem with that level of cynicism. If your charity's so damn good, run a pilot study and show me. Then I'll believe you.
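A minimal sketch of that weighting argument, using a simple precision-weighted (normal-normal conjugate) update in which each source of evidence contributes pseudo-observations. All of the numbers and observation counts are illustrative assumptions:

```python
# Sketch of the weighting argument: a precision-weighted average in which
# the prior counts as `prior_weight` pseudo-observations and every piece of
# evidence counts as one. All numbers here are illustrative assumptions.

def posterior_mean(prior_mean, prior_weight, observations):
    """Normal-normal conjugate update with equal noise on every observation."""
    total = prior_mean * prior_weight + sum(observations)
    return total / (prior_weight + len(observations))

prior = 1.0                  # effectiveness of a typical charity, arbitrary units
ex_ante = [1.5, 2.0]         # intuition observed once, abstract analysis once
track_record = [100.0] * 15  # measured results, observed ~15 times over a decade

print(posterior_mean(prior, 1, ex_ante))                 # ~1.5: still looks ordinary
print(posterior_mean(prior, 1, ex_ante + track_record))  # ~83.6: hindsight dominates
```

With only the two ex ante observations, the posterior barely moves off the prior; once the track record is observed repeatedly, the posterior lands near the measured effectiveness, which is the sense in which the extraordinary evidence is "well at hand."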

Comment author: Andreas_Giger 23 August 2012 07:51:06PM 0 points [-]

I mean, "terribly horrible" on what scale?

On a scale from 0 to "a million people died because someone was being irrational", it would be around "two million people died because someone was being irrational."

On an unrelated note, the idea of precommitting in non-repeated IPD is silly: if both players precommit simultaneously (before learning of their opponent's precommitment), it's the same as no one precommitting, since they can't update their strategy with that knowledge; otherwise, it's an asymmetrical problem.

The solution to that asymmetrical problem, if you're the one who has to make the precommitment, is to precommit to simple TFT, I think. (~100% confidence)
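A minimal sketch of what "precommit to simple TFT" looks like in play, assuming the standard Axelrod payoff values (T=5, R=3, P=1, S=0), which the thread never actually specifies:

```python
# Minimal sketch of "precommit to simple TFT". Payoffs are the standard
# Axelrod values (T=5, R=3, P=1, S=0) -- an assumption, since the thread
# never states a payoff matrix.

C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else C

def always_defect(my_history, their_history):
    return D

def play_match(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play_match(tit_for_tat, always_defect))  # (99, 104): defection barely pays
```

Against a precommitted TFT player, all-out defection earns 104 points, far below the ~300 available from simply cooperating with it, which is the intuition behind the confidence above.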

Comment author: Mass_Driver 23 August 2012 10:54:16PM 1 point [-]

I'm not sure what's silly about it. Just because there's only one game of IPD doesn't mean there can't be multiple rounds of communication before, during, and after each iteration.

As for the asymmetrical problem, if you're really close to 100% confident, would you like to bet $500 against my $20 that I can't find hard experimental evidence that there's a better solution than simple TFT, where "better" means that the alternative solution gets a higher score in an arena with a wide variety of strategies? If I do find an arena like that, and you later claim that my strategy only outperformed simple TFT because of something funky about the distribution of strategies, I'll let you bid double or nothing to see if changing the distribution in any plausible way you care to suggest changes the result.
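For reference, here's a sketch of the kind of arena the bet contemplates: a round-robin over a small pool of classic strategies, with each strategy's total summed across all pairings. The pool, the 100-round match length, and the payoff values are all illustrative assumptions:

```python
# Sketch of an "arena": every strategy plays every pool member (itself
# included) for 100 rounds, and totals its own score across all matches.
# The pool and the payoff values are illustrative assumptions.

C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(me, them):      return them[-1] if them else C
def always_cooperate(me, them): return C
def always_defect(me, them):    return D
def grudger(me, them):          return D if D in them else C  # never forgives
def tit_for_two_tats(me, them): return D if them[-2:] == [D, D] else C

def play_match(a, b, rounds=100):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        sa, sb = sa + pa, sb + pb
        ha.append(ma)
        hb.append(mb)
    return sa, sb

pool = [tit_for_tat, always_cooperate, always_defect, grudger, tit_for_two_tats]
totals = {s.__name__: 0 for s in pool}
for a in pool:
    for b in pool:
        totals[a.__name__] += play_match(a, b)[0]

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

In pools like this one, simple TFT tends to finish at or near the top, which is what makes the bet interesting: the claim isn't that TFT is bad, but that some alternative can edge it out across a wide variety of opponents.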

In response to comment by [deleted] on The Ethical Status of Non-human Animals
Comment author: syllogism 10 January 2012 12:38:09PM 2 points [-]

Then it's more ethical to give the maximisers what they want.

Comment author: Mass_Driver 23 August 2012 10:42:27PM 3 points [-]

Even though there's no moral realism, it still seems wrong that such an important ethical question turns out to hinge on whether humans or paper-clip-maximizers started breeding first. One way of not biting that bullet is to say that we shouldn't be "voting" at all. The only good reason to vote is when there are scarce, poorly divisible resources. For example, it makes sense to vote on what audio tracks to put on the Pioneer satellite; we can only afford to launch, e.g., 100 short sound clips, and making the clips even shorter to accommodate everyone's preferred tracks would just ruin them for everyone. On the other hand, if five people want to play jump rope and two people want to play hopscotch, the solution isn't to hold a vote and make everyone play jump rope -- the solution is for five people to play jump rope and two people to play hopscotch. Similarly, if 999 billion Clippies want to make paperclips and a billion humans want to build underground volcano lairs, and they both need the same matter to do it, and Clippies experience roughly the same amount of pleasure and pain as humans, then let the Clippies use 99.9% of the galaxy's matter to build paperclips, and let the humans use 0.1% of the galaxy's matter to build underground volcano lairs. There's no need to hold a vote or even to attempt to compare the absolute value of human utility with the absolute value of Clippy utility.

The interesting question is what to do about so-called "utility monsters" -- people who, for whatever reason, experience pleasure and pain much more deeply than average. Should their preferences count more? What if they self-modified into utility monsters specifically in order to have their preferences count more? What if they did so in an overtly strategic way, e.g., +20 utility if all demands are met, and -1,000,000 utility if any demands are even slightly unmet? More mundanely, if I credibly pre-commit to being tortured unless I get to pick what kind of pizza we all order, should you give in?

Comment author: Andreas_Giger 23 August 2012 01:38:02PM 1 point [-]
  1. Any strategy that takes being publicly announced (and precommitted to) into account and still allows the opponent to get away with defecting on the first round is a terribly horrible strategy.

  2. Publicly announcing is not actually precommitting. If Clippy says it plays TFT-1D, for how long would you really cooperate?

It seems to me there is little benefit to actually playing the strategy you announced, or to announcing the strategy you actually intend to play.

Comment author: Mass_Driver 23 August 2012 07:19:43PM 0 points [-]

I mean, "terribly horrible" on what scale? If the criterion is "can it be strictly dominated by another strategy in terms of results if we ignore the cost of making the strategy more complicated," then, sure, a strategy that reliably allows opponents to costlessly defect on the first of 100 rounds fails that criterion. I'd argue that a more interesting set of criteria is "is the expected utility close to the maximum expected utility generated by any strategy," "is the standard deviation in expected utility acceptably low," and "is the strategy simple enough that it can be taught, shared, and implemented with little or no error?" Don't let the perfect become the enemy of the good.
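A rough sketch of how the first two of those criteria might be scored: measure the mean and standard deviation of a strategy's score over many matches against a noisy opponent. The random opponent, the 100-round match, and the payoff values are assumptions for illustration:

```python
# Rough scoring of the first two criteria: mean and spread of a strategy's
# score against a noisy opponent. The random opponent, the 100-round match,
# and the payoff values are assumptions for illustration.

import random
import statistics

C, D = "C", "D"
PAYOFF = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}  # my payoff only

def tit_for_tat(their_history):
    return their_history[-1] if their_history else C

def match_score(strategy, rounds=100):
    theirs, score = [], 0
    for _ in range(rounds):
        my_move = strategy(theirs)
        their_move = random.choice([C, D])  # noisy opponent
        score += PAYOFF[(my_move, their_move)]
        theirs.append(their_move)
    return score

scores = [match_score(tit_for_tat) for _ in range(1000)]
print("expected utility:", statistics.mean(scores))
print("std deviation:  ", statistics.stdev(scores))
```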

Comment author: MinibearRex 23 August 2012 02:37:25AM 0 points [-]

If I know this is your strategy, I don't have to be a perfect rationalist to know that I can defect on the first round.

Comment author: Mass_Driver 23 August 2012 08:43:46AM 0 points [-]

Are there strategies that, if publicly announced, will let a more sophisticated player defect on the first round and get away with it? Sure. There are also slightly better strategies that can be publicly announced without allowing for useful first-round defection. Either way, though, the gains from even shaky cooperation in a 100-round game are on the order of 70 or 80 million lives -- letting those gains slip by because you're worried about losing 1 million lives on the first round is a mistake. There's a tendency to worry about losing face, or, as Andreas puts it, not being defeated. But with real stakes on the table, you should only worry about maximizing points. Your pride isn't worth millions of lives.
