All of Alexander de Vries's Comments + Replies

Is there any reason we should trust Omega to be telling the truth in the XOR-trolley problem?

1Tapatakt
"if and only if this message is true"

I intended the karmic argument to be implicit in the negative space of the argument on ignoble origins of wealth :) To me it's a matter of course that if you fairly trade with someone, you have a moral claim to the wealth thereby earned!

Also, funny: I'm a compatibilist non-moral-realist as well (and inspired by the Hogfather quote, too)! Morality is a social construct, and so is desert; I just find it important to think about the interplay of the two.

Thank you for the kind words :)

I'll give a real-world example of a world ontologically prior to redistributive taxation (and, in fact, most taxation): the US, pre-1900. Minimal taxes were collected (90% of the federal government was funded by taxes on tobacco, beer, and liquor), and pretty much none of it went to redistribution. To me, it seems that there was an ideological change somewhere in the early 1900s, driven by the Progressive movement, such that Americans started to approve of redistributive taxation as a concept.

So basically what I'm saying is...

3AnthonyC
Ah, ok. True in the American context. I was thinking about things like the Roman bread dole, or religious tithing and charity laws when backed by state power. Though I guess America explicitly rejected the latter.

I believe it because physicists-as-a-whole seem quite sure about it, and I've never seen (modern) physicists be wrong about something they're this sure about, and frankly I know fuck-all about physics beyond what I learned in high school. 98-ish% confidence that it's as correct as any model of the universe reasonably can be.

ACX also links to this paper analyzing federal cancer research, which claims it is so effective that it costs only $326 in federal investment per life saved.

They claim $326 per life-year (specifically, DALY), not per life. Huge difference!

3DirectedEvolution
Yes, from the results section, that's about $20,000 for 60 life-years.
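
To spell out the arithmetic (taking the paper's $326-per-DALY figure as given, and ~60 DALYs per full life as an illustrative assumption):

$$\$326/\text{DALY} \times 60~\text{DALYs} \approx \$19{,}560 \approx \$20{,}000 \text{ per life saved}$$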

I agree with what you're saying; the reason I used trillions was exactly because it's an amount nobody has. Any being that can produce a trillion dollars on the spot is likely (more than 50%, I'd guess) powerful enough to produce two trillion dollars, while the same cannot be said for billions.

As for expected utility vs expected payoff, I agree that under conditions of diminishing marginal utility the offer is almost never worth taking. I am perhaps a bit too used to the more absurd versions of Pascal's Mugging, where the mugger promises to grant you ut...

3DirectedEvolution
My instant response is that this strongly suggests that lives saved and years of torture prevented do not in fact have constant marginal utility to you, or more specifically, to the part of you that is in control of your intuitive reactions. I share your lack of temptation to take the offer. My explanations are either or both of the following:

* My instinctive sense of "altruistic temptation" is badly designed and makes poor choices in these scenarios, or else I am not as altruistic as I like to think.
* My intuition for whether Pascalian Muggings are net positive expected value is correctly discerning that they are not, no matter the nature of the promised reward.

Even in the case of an offer of increasing amounts of utility (defined as "anything for which twice as much is always twice as good"), I can still think that the offer to produce it is less and less likely to pay off the more that is offered.

For smaller amounts of money (or utility), this works. But think of the scenario where the mugger promises you one trillion dollars and you say no, based on the expected value. He then offers you two trillion dollars (let's say your marginal utility of money is constant at this level, because you're an effective altruist and expect to save twice as many lives with twice the money). Do you really think that the mugger being willing to give you two trillion is less than half as likely as him being willing to give you one trillion? It seems to me that anyone willing and able to give a stranger one trillion for a bet is probably also able to give twice as much money.
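
A minimal sketch of that break-even, assuming constant marginal utility (so expected utility reduces to probability times dollars) and made-up payout probabilities:

```python
# Break-even check for the mugger's escalating offer, assuming constant
# marginal utility (so expected utility is just probability * dollars).
# The probabilities below are made-up numbers for illustration.

def expected_value(prob_pays: float, payoff: float) -> float:
    """Expected payoff of an offer paying `payoff` with probability `prob_pays`."""
    return prob_pays * payoff

p1 = 1e-9      # illustrative chance the mugger actually pays $1 trillion
offer = 1e12   # $1 trillion

# Under linear utility, the $2T offer beats the $1T offer exactly when
# p2 > p1 / 2. If "able to pay $1T" mostly implies "able to pay $2T",
# then p2 should sit near p1, comfortably above that threshold.
for p2 in (p1, 0.9 * p1, 0.5 * p1, 0.25 * p1):
    better = expected_value(p2, 2 * offer) > expected_value(p1, offer)
    print(f"p2/p1 = {p2 / p1:.2f}: take $2T over $1T? {better}")
```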

3DirectedEvolution
I do. You’re making a practical argument, so let’s put this in billions, since nobody has two trillion dollars. Today, according to Forbes, there is one person with over $200 billion in wealth, and 6 people (actually one is a family, but I’ll count them as unitary) with over $100 billion in wealth. So at a base rate, being offered a plausible $200 billion by a Pascalian mugger is about 17% as likely as being offered $100 billion.

This doesn’t preclude the possibility that in some real-world situation you may find some higher offers more plausible than some lower offers. But as I said in another comment, there are only two possibilities: your evaluation is that the mugger’s offer is likely enough that it has positive expected utility to you, or that it is too unlikely and therefore doesn’t. In the former case, you are a fool not to accept. In the latter case, you are a fool to take the offer.

To be clear, I am talking about expected utility, not the expected payoff. If $100 is not worth twice as much to you as $50 in terms of utility, then it’s worse, not neutral, to go from a 50% chance at a $50 payoff to a 25% chance of a $100 payoff. This also helps explain why people are hesitant to accept the mugger’s offers: not only might they become less likely, and perhaps even exponentially less likely, to receive the payoff, but the marginal utility per dollar may decrease at the same time.

This is a practical argument, though, and I don’t think it’s possible to give a conclusive account of what our likelihood or utility function ought to be in this contrived and hypothetical scenario.
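
To make the expected-utility comparison concrete, here is a small sketch using square-root utility as a stand-in for diminishing marginal utility (the specific curve is an illustrative assumption, not a claim about anyone's actual preferences):

```python
import math

# Square-root utility: an illustrative stand-in for diminishing marginal
# utility, under which doubling the dollars does not double the utility.
def utility(dollars: float) -> float:
    return math.sqrt(dollars)

def expected_utility(prob: float, payoff: float) -> float:
    return prob * utility(payoff)

# The $50 vs. $100 example above: halving the probability while doubling
# the payoff is a strict loss under a concave utility function.
print(expected_utility(0.50, 50))   # 0.5  * sqrt(50)  ~= 3.54
print(expected_utility(0.25, 100))  # 0.25 * sqrt(100)  = 2.50

# The Forbes base rate above put the $200B offer at ~17% the likelihood
# of the $100B offer; under sqrt utility the break-even relative
# likelihood is sqrt(1/2) ~= 0.71, so the doubled offer is a clear loss.
print(utility(100e9) / utility(200e9))  # ~0.71
```

Under linear utility the break-even relative likelihood would be 0.50 rather than 0.71, which is the constant-marginal-utility case sketched earlier in the thread.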

I fully support this initiative, and am excited to see how it works out!

Do you have a theory of where think tanks with seemingly similar methods and goals, like the Brookings Institution and the Niskanen Center, might be going wrong? Are there things Balsa could or would do differently to be more effective than they are?