What is the most anti-altruistic way to spend a million dollars?
Edit: The purpose of this question is not to make the world worse, but to see whether we actually have concrete ideas of what would make it worse, and my guess is that most of us don't, not in a really concrete way. From the downvotes I'm wondering if everyone else is thinking in much darker directions than I am. If so, please share.
There is a lot of discussion here about effective altruism. Organizations like GiveWell evaluate donations using criteria like quality-adjusted-life-years saved per dollar. People distinguish warm-and-fuzzy giving from the most effective use of dollars from various utilitarian perspectives.
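For concreteness, the kind of comparison such evaluators make can be sketched in a few lines. The charities and figures below are invented purely for illustration, not real GiveWell data:

```python
# Toy comparison in the GiveWell style. All names and figures here
# are invented for illustration -- not real charity data.
charities = {
    "bednets":      {"cost_usd": 5_000, "qalys_saved": 40},
    "deworming":    {"cost_usd": 5_000, "qalys_saved": 25},
    "scholarships": {"cost_usd": 5_000, "qalys_saved": 4},
}

# Rank by quality-adjusted life years saved per dollar donated.
for name, c in sorted(charities.items(),
                      key=lambda kv: kv[1]["qalys_saved"] / kv[1]["cost_usd"],
                      reverse=True):
    print(f"{name}: {c['qalys_saved'] / c['cost_usd']:.4f} QALYs/$")
```

The "effective" choice is whichever option maximizes that ratio, not whichever feels warmest.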
But I want to ask a different question: What would effective anti-altruism be?
To make it more concrete:
I am an eccentric multimillionaire, proposing a contest to all of you, who will for the purposes of this exercise play greedy and callous, yet honest and efficient, contest entrants.
Whoever can propose the most negative possible use for my money, in the sense that it causes the greatest amount of global misery (feel free to argue for your own interpretation of the details of what this means), will receive $1 million to carry out his or her proposal and $1 million to keep for him- or herself to do with as desired.
A few rules:
1) Everything must be 100% legal in whatever jurisdiction you propose. Edit: People had trouble with the old phrasing, so I'll add that it should be legal not only according to the letter of the law, but also under some reasonable interpretation of its spirit.
1a) In fact, I encourage you to think of things that aren't merely legal but that would also be legal under whatever your favorite hypothetical laws are. Maybe that means non-coercive, non-violent, or something else in that vein.
2) This money may be used as seed funding for a non-profit or for-profit anti-altruistic venture, but I will take into account both the risk and the marginal impact of only the first million dollars.
3) Risk and plausibility are factors, just as they would be in any investment in effective altruism.
4) If you're going to propose that you keep and embezzle the first million dollars, you should have an extremely good justification for why such a mundane plan would match my standards for anti-altruism.
I hope this pushes you all to think of truly anti-altruistic means of spending this money. I think you may find that effective anti-altruism is a good deal harder than you'd believe.
Can infinite quantities exist? A philosophical approach
Initially attracted to Less Wrong by Eliezer Yudkowsky's intellectual boldness in his "infinite-sets atheism," I've waited patiently to discover its rationale. Sometimes it's said that our "intuitions" speak for or against infinity, but how could one, in a Kahneman-appropriate manner, arrive at intuitions about whether the cosmos is infinite? Intuitions about infinite sets might arise from an analysis of the concept of actually realized infinities. This is a distinctively philosophical form of analysis, and one somewhat alien to Less Wrong, but it may be the only way to gain purchase on this neglected question. I'm by no means certain of my reasoning; I certainly don't think I've settled the issue. But for reasons I discuss in this skeletal argument, the conceptual—as opposed to the scientific or mathematical—analysis of "actually realized infinities" has been largely avoided, and I hope to help begin a necessary discussion.
1. The actuality of infinity is a paramount metaphysical issue.
2. The principle of the identity of indistinguishables applies to physics and to sets, not to everything conceivable.
3. Arguments against actually existing infinite sets.
A. Argument based on brute distinguishability.
B. Argument based on probability as limiting relative frequency.
4. The nonexistence of actually realized infinite sets and the principle of the identity of indistinguishable sets together imply the Gold model of the cosmos.
Stop Using LessWrong: A Practical Interpretation of the 2012 Survey Results
Link to those results: http://lesswrong.com/lw/fp5/2012_survey_results/
I've been basically lurking this site for more than a year now and it's incredible that I have actually taken anything at all on this site seriously, let alone that at least thousands of others have. I have never received evidence that I am less likely to be overconfident about things than people in general or that any other particular person on this site is.
Yet in spite of this, apparently 3.7% of those answering the survey have actually signed up for cryonics, surely a greater share than in the world at large. The entire idea seems to be taken especially seriously on this site: evidently 72.9% of people here are at least considering signing up. I think the chance of cryonics working is trivial, for all practical purposes indistinguishable from zero (the expected value of the benefit is certainly not worth several hundred thousand dollars in future-value terms). Other people here apparently disagree; but if the rest of the world is undervaluing cryonics at the moment, why do those here with privileged information not invest heavily in founding new for-profit cryonics organizations, or start them alone, or invest in the technology that will supposedly soon make the revival of cryonics patients possible? If the rest of the world is underconfident about these ideas, then these investments would surely have an enormous expected rate of return.
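To spell out the expected-value arithmetic behind that claim, here is a rough sketch. The probability and valuation figures are hypothetical placeholders chosen for illustration, since the post itself gives none:

```python
# Back-of-the-envelope expected value of a cryonics sign-up.
# Every number below is a hypothetical placeholder, not a real estimate.
p_revival = 0.02               # assumed probability that cryonics works
value_if_revived = 5_000_000   # assumed dollar-equivalent value of revival
cost_today = 300_000           # "several hundred thousand dollars" of cost

expected_benefit = p_revival * value_if_revived
print(f"expected benefit ${expected_benefit:,.0f} vs. cost ${cost_today:,.0f}")
# -> $100,000 vs. $300,000: negative expected value at these assumptions.
# Signing up only pencils out once p_revival exceeds
# cost_today / value_if_revived (here 0.06), which is exactly the
# probability estimate the two camps disagree about.
```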
There is also a question asking about the relative likelihood of different existential risks, which seems to imply that any of these risks is especially worth considering. This is not really a fault of the survey itself, as I have read significant discussion on this site related to these ideas. In my judgment this reflects a grand level of overconfidence in the probability of any of these occurring. How many people responding to this survey have actually made significant personal preparations for survival, like a fallout shelter stocked with food, which would actually be useful under most of the scenarios listed? I generously estimate that 5% have made any such preparations.
I also see mentioned in the survey, and have read on this site, material related to what are, in my view, meaningless counterfactuals. The questions on dust specks vs. torture and Newcomb's Problem are so unlikely ever to be relevant in reality that I view discussion of them as worthless.
My judgment of this site as of now is that far too much time is spent discussing subjects of such low expected value (usually because of their absurdly low probability of occurring) for using this site to be worthwhile. In fact I hypothesize that this discussion actually causes overconfidence about such things happening, and at a minimum I have seen insufficient evidence of this site's value to continue using it.
Miracle Mineral Supplement
We can always use more case studies of insanity that aren't religion, right?
Well, Miracle Mineral Supplement is my new go-to example for Bad Things happening to people with low epistemic standards. "MMS" is a supposed cure for everything ranging from the common cold to HIV to cancer. I just saw it recommended in another Facebook thread to someone who was worried about malaria symptoms.
It's industrial-strength bleach. Literally just bleach. Usually drunk, sometimes injected, and yes, it often kills you. It is every bit as bad as it sounds if not worse.
This is beyond Poe's Law. Medieval bloodletting with leeches was a far more excusable error than this; practitioners had far less evidence that it was a bad idea. I think if I were trying to guess the dumbest alternative medicine on the planet, I still would not have guessed this low. My brain is still not pessimistic enough about human stupidity.
[Link] "An OKCupid Profile of a Rationalist"
The rationalist in question, of course, is our very own EY.
Quotes giving a reasonable sample of the spectrum of reactions:
Epic Fail on the e-harmony profile. He’s over-signalling intelligence. There’s a good paper about how much to optimally signal, like when you have a PhD to put it on your business card or not. This guy is going around giving out business cards that read Prof. Dr. John Doe, PhD, MA, BA. He won’t be getting laid any time soon.
His profile is probably very effective for aspergery girls who like reading the kinds of things that appear on LessWrong. Yudkowsky is basically a celebrity within a small niche of hyper-nerdy rationalists, so I doubt he has much trouble getting laid by girls in that community.
You make it sound like a cult leader or something....And reading the profile again with that lens, it actually makes a lot of sense.
I was about to agree [that the profile is oversharing], but then come to think of it, I realize I have an orgasm denial fetish, too. It’s an aroused preference that never escaped to my non-aroused self-consciousness.
Why is this important to consider?
LessWrong as a community is dedicated to trying to "raise the sanity waterline," and its most respected members in particular put a lot of resources into outreach, via CFAR, HPMoR, and maintaining this site. But a big factor in how people perceive our brand of rationality is image. If we're serious about raising the sanity waterline, that means image management - or at least avoiding active image malpractice - is something we should enthusiastically embrace as a way to achieve our goals. [1]
This is also a valuable exercise in considering the outside view. Marginal Revolution is already a fairly WEIRD site, focused on abstract economic issues. If any major blog is likely to be sympathetic to our cultural quirks, this would be it. Yet a plurality of commenters reacted negatively.
To the extent that we didn't notice anything strange about LW's figurehead having this OKCupid profile, LW either failed at calibrating mainstream reaction, or failed at consequentialism by not realizing the drag this would have on our other recruitment efforts. In our last discussion, there were only a few commenters raising concerns, and the consensus of the thread was that it was harmless and had no PR consequences worth noting.
As one commenter cogently put it,
I’m not saying that he’s trying to make a statement with this, I’m saying that he is making a statement about this whether he’s trying to or not. Ideas have consequences for how we live our lives, and that Eliezer has a public, identifiable profile up where he talks about his sexual fetishes is not some sort of randomly occurring event with no relationship to his other ideas.
I'd argue the same reasoning applies to the community at large, not just EY specifically.
[1] From Anna's excellent article: 5. I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, "This point doesn't change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.")
The deeper solution to the mystery of moralism—Believing in morality and free will is hazardous to your mental health
[Crossposted.]
Sexual predator Jerry Sandusky will serve his time in a minimum-security prison, where he’s allowed groups of visitors five days a week.
What belief in moral realism and free will does is nothing less than change the architecture of decision-making. When we practice principles of integrity and internalize them, they and nonmoral considerations co-determine our System 1 judgments. According to moral realism and free will, by contrast, moral good is the product of conscious free choice, so System 2 contrasts its moral opinion with System 1’s intuition, for which System 2 compensates—and usually overcompensates. Consider a voter who must weigh the duty to vote against the duty to avoid “lowering the bar” when both candidates are ideologically and programmatically distasteful. System 2 can prime and program System 1 by studying the issues, but the multifaceted decision itself is best made by System 1. What happens when System 2 tries to decide? It makes the qualitative judgment that System 1 is biased one way or the other and corrects for it. This implicates the overcompensation bias, in which conscious attempts to counteract biases usually overcorrect. A voter who thinks correction is needed for a bias toward shirking duty will vote when not really wanting to, all things considered; a voter who thinks the bias runs toward "lowering the bar" will be excessively purist. Whatever standard the voter uses will be taken too far.
- It retards people's ability to adaptively change their principles of integrity.
- It prevents people from questioning their so-called foundations.
- It systematically exaggerates the compellingness of moral claims.
A possible solution to Pascal's mugging
Now, I tend not to follow this forum very much, so please excuse me if this has been suggested before. Still, I don't know that there's anyone else on this board who could actually carry out these threats.
If anyone accepts a Pascal's-mugging-style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds, or a suitably higher number if the trade they accept involves a higher (plausible, from my external viewpoint) number. Rest assured I can at least match their raw computing power from where I am. Good luck.
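For anyone unfamiliar with the notation, 3^^^^3 is written in Knuth's up-arrow notation. A minimal sketch of the recursion, safe to evaluate only for tiny inputs, is:

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a followed by n arrows followed by b.

    One arrow is ordinary exponentiation; each additional arrow
    iterates the previous operation, so values explode rapidly."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987
# 3^^^3 is a power tower of 3s roughly 7.6 trillion levels tall, and
# 3^^^^3 (the figure above) is far beyond anything a physical computer
# could represent -- do not actually call up_arrow(3, 4, 3).
```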
EDIT: I'm told that Eliezer proposed a similar solution over here, although more eloquently than I have.
Firewalling the Optimal from the Rational
Followup to: Rationality: Appreciating Cognitive Algorithms (minor post)
There's an old anecdote about Ayn Rand, which Michael Shermer recounts in his "The Unlikeliest Cult in History" (note: calling a fact unlikely is an insult to your prior model, not the fact itself), which went as follows:
Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. "When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks."
Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. And one of the ways we do that is by trying to deflate the word 'rational' out of sentences, especially in post titles or critical comments, which can live without the word. As you hopefully recall from the previous post, we're only forced to use the word 'rational' when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences. Otherwise the word can be deflated out of the sentence; e.g. "It's rational to believe in anthropogenic global warming" goes to "Human activities are causing global temperatures to rise"; or "It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".
If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting". A post about rational dieting is if you're writing about how the sunk cost fallacy causes people to eat food they've already purchased even if they're not hungry, or if you're writing about how the typical mind fallacy or law of small numbers leads people to overestimate how likely it is that a diet which worked for them will work for a friend. And even then, your title is 'Dieting and the Sunk Cost Fallacy', unless it's an overview of four different cognitive biases affecting dieting. In which case a better title would be 'Four Biases Screwing Up Your Diet', since 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing things to keep in mind.
Meta: LW Policy: When to prohibit Alice from replying to Bob's arguments?
In light of recent (and potential) events, I wanted to start a discussion here about a certain method of handling conflicts on this site's discussion threads, and hopefully form a consensus on when to use the measure described in the title. Even if the discussion has no impact on site policy ("executive veto"), I hope administrators will at least clarify when such a measure will be used, and for what reason.
I also don't want to taint or "anchor" the discussion by offering hypothetical situations or arguments for one position or another. Rather, I simply want to ask: Under what conditions should a specific poster, "Alice", be prohibited from replying directly to the arguments in a post/comment made by another poster, "Bob"? (Note: this refers specifically to replies to ideas and arguments Bob has advanced, not general comments about Bob the person, which should probably come under much closer scrutiny because of the risk of incivility.)
Please offer your ideas and thoughts here on when this measure should be used.
How To Lose 100 Karma In 6 Hours -- What Just Happened
- 7 weeks ago, I precommitted that censoring a post or comment on LessWrong would cause a 0.0001% increase in existential risk.
- Earlier today, Yudkowsky censored a post on LessWrong.
- 20 minutes later, existential risks increased 0.0001% (to the best of my estimation).