What is the most anti-altruistic way to spend a million dollars?

-4 Punoxysm 24 March 2014 09:50PM

Edit: The purpose of this question is not to make the world worse, but to see whether we actually have concrete ideas of what would do so; my guess is that most of us don't, not in a really concrete way. Judging from the downvotes, I wonder whether everyone else is thinking in far darker directions than I am. If so, please share.

There is a lot of discussion here about effective altruism: organizations like GiveWell evaluate donations using criteria like quality-adjusted life years saved per dollar. People distinguish warm-and-fuzzy giving from the most effective use of dollars from various utilitarian perspectives.

But I want to ask a different question: What would effective anti-altruism be?

To make it more concrete:

I am an eccentric multimillionaire, proposing a contest to all of you, who will for the purposes of this exercise play greedy and callous, yet honest and efficient, contest entrants.

Whoever can propose the most negative possible use for my money, in the sense that it causes the greatest amount of global misery (feel free to argue for your own interpretation of the details of what this means), will receive $1 million to carry out his or her proposal and $1 million to keep and do with as desired.

A few rules:

1) Everything must be 100% legal in whatever jurisdiction you propose. Edit: People had trouble with the old phrasing, so I'll add that it should not only follow the letter of the law but also some reasonable interpretation of its spirit.

1a) In fact, I encourage you to think of things that aren't merely legal but that would also be legal under whatever your favorite hypothetical laws are. Maybe that means non-coercive, non-violent, or something else in that vein.

2) This money may be used as seed funding for a non-profit or for-profit anti-altruistic venture, but I will take into account both the risk and the marginal impact of only the first million dollars.

3) Risk and plausibility are factors, just as they would be in any investment in effective altruism.

4) If you're going to propose that you keep and embezzle the first million dollars, you should have an extremely good justification for why such a mundane plan would match my standards for anti-altruism.

 

I hope this pushes you all to think of truly anti-altruistic means of spending this money. I think you may find that effective anti-altruism is a good deal harder than you'd believe.

Can infinite quantities exist? A philosophical approach

-9 metaphysicist 03 January 2013 10:52PM

 

[Crossposted]

Initially attracted to Less Wrong by Eliezer Yudkowsky's intellectual boldness in his "infinite-sets atheism," I've waited patiently to discover its rationale. Sometimes it's said that our "intuitions" speak for infinity or against, but how could one, in a Kahneman-appropriate manner, arrive at intuitions about whether the cosmos is infinite? Intuitions about infinite sets might arise from an analysis of the concept of actually realized infinities. This is a distinctively philosophical form of analysis and one somewhat alien to Less Wrong, but it may be the only way to gain purchase on this neglected question. I'm by no means certain of my reasoning; I certainly don't think I've settled the issue. But for reasons I discuss in this skeletal argument, the conceptual—as opposed to the scientific or mathematical—analysis of "actually realized infinities" has been largely avoided, and I hope to help begin a necessary discussion.

1. The actuality of infinity is a paramount metaphysical issue.

Some major issues in science and philosophy demand taking a position on whether there can be an infinite number of things or an infinite amount of something. Infinity's most obvious scientific relevance is to cosmology, where the question of whether the universe is finite or infinite looms large. But infinities are invoked in various physical theories, and they seem often to occur in dubious theories. In quantum mechanics, an (uncountable) infinity of worlds is invoked by the "many-worlds interpretation," and anthropic explanations often invoke an actual infinity of universes, which may themselves be infinite. These applications make real infinite sets a paramount metaphysical problem—if it indeed is metaphysical—but the orthodox view is that, being empirical, it isn't metaphysical at all. To view infinity as a purely empirical matter is the modern view; we've learned not to place excessive weight on purely conceptual reasoning. But whether conceptual reasoning can definitively settle the matter is a different question from whether the matter is fundamentally conceptual.

Two developments have discouraged the metaphysical exploration of actually existing infinities: the mathematical analysis of infinity and the proffer of crank arguments against infinity in the service of retrograde causes. Although some marginal schools of mathematics reject Cantor’s investigation of transfinite numbers, I will assume the concept of infinity itself is consistent. My analysis pertains not to the concept of infinity as such but to the actual realization of infinity. Actual infinity’s main detractor is a Christian fundamentalist crank named William Lane Craig, whose critique of infinity, serving theist first-cause arguments, has made infinity eliminativism intellectually disreputable. Craig’s arguments merely appeal to the strangeness of infinity’s manifestations, not to the incoherence of its realization. The standard arguments against infinity, which predate Cantor, have been well-refuted, and I leave the mathematical critique of infinity to the mathematicians, who are mostly satisfied. (See Graham Oppy, Philosophical perspectives on infinity (2006).) 

2. The principle of the identity of indistinguishables applies to physics and to sets, not to everything conceivable.

My novel arguments are based on a revision of a metaphysical principle called the identity of indistinguishables, which holds that two separate things can't have exactly the same properties. Things are constituted by their properties; if two things have exactly the same properties, nothing remains to make them different from one another. Physical objects do seem to conform to the identity of indistinguishables, because physical objects are individuated by their positions in space and time, which are properties; but this is a physical rather than a metaphysical principle. Conceptually, brute distinguishability, that is, differing from all other things simply in being different, is a property, although it provides us with no basis for identifying one thing rather than another. There may be no way to use such a property in any physical theory, and we may never learn of such a property and thus never have reason to believe it instantiated, but the property seems conceptually possible.

But the identity of indistinguishables does apply to sets: indistinguishable sets are identical. Properties determine sets, so you can’t define a proper subset of brutely distinguishable things.

3. Arguments against actually existing infinite sets.

A. Argument based on brute distinguishability.

To show that the existence of an actually existing infinite set leads to contradiction, assume the existence of an infinite set of brutely distinguishable points. Now another point pops into existence. The former and latter sets are indistinguishable, yet they aren’t identical. The proviso that the points themselves are indistinguishable allows the sets to be different yet indistinguishable when they’re infinite, proving they can’t be infinite.

B. Argument based on probability as limiting relative frequency.

The previous argument depends on the coherence of brute distinguishability. The following probability argument depends on different intuitions. Probabilities can be treated as idealizations at infinite limits: if you toss a fair coin, it lands heads roughly 50% of the time, and the relative frequency approaches exactly 50% as the number of tosses "approaches infinity." But if an infinite number of tosses can actually occur, contradiction arises. Consider the possibility that, in an infinite universe or an infinite number of universes, infinitely many coin tosses actually occur. The frequency of heads and of tails is then infinite, so the relative frequency is undefined. Furthermore, the frequency of rolling a 1 on a die equals the frequency of rolling 2 through 6: both are (countably) infinite. But if infinite quantities existed, relative frequency should equal probability. Therefore, infinite quantities don't exist.
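The finite side of the picture this argument leans on, relative frequency approaching the probability as tosses accumulate, is easy to illustrate with a quick simulation (a sketch of my own, purely illustrative; the function name is mine):

```python
import random

def heads_frequency(n_tosses, seed=0):
    """Relative frequency of heads in n_tosses fair coin tosses.

    A fixed seed makes the run reproducible; for any finite n the
    frequency is well-defined, which is exactly what the argument
    says fails at an actual infinity of tosses.
    """
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

for n in (100, 10_000, 1_000_000):
    print(n, heads_frequency(n))
```

For each finite n the ratio heads/n exists and drifts toward 0.5; the argument's point is that the limiting idealization has no counterpart if infinitely many tosses are actually realized, since ∞/∞ is undefined.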

4. The nonexistence of actually realized infinite sets and the principle of the identity of indistinguishable sets together imply the Gold model of the cosmos.

Before applying the conclusion that actually realized infinities can't exist, together with the principle of the identity of indistinguishables, to a fundamental problem of cosmology, some caveats are in order. The argument uses only the most general and well-established physical conclusions and is oblivious to physical detail; not being competent in physics, I must abstain even from assessing the weight the philosophical analysis that follows should carry, which may be very slight. While the cosmological model I propose isn't original, the argument is, as far as I can tell, novel. I am not proposing a physical theory so much as suggesting metaphysical considerations that might bear on physics; it is for physicists to say how weighty these considerations are in light of actual physical data and theory.

The cosmological theory is the Gold model of the universe, once favored by Albert Einstein, according to which the universe undergoes perpetual expansion, contraction, and re-expansion. I assume a deterministic universe, such that cycles are exactly identical: any contraction is thus indistinguishable from any other, and any expansion is indistinguishable from any other. Since there is no room in physics for brute distinguishability, the cycles are identical: no common spatio-temporal framework allows their distinction. Thus, although the expansion-and-contraction process is perpetual and eternal, it is also finite; in fact, its number is unity.

The Gold universe—alone, with the possible exception of the Hawking universe—avoids the dilemma of the realization of infinite sets or origination ex nihilo.

 

Stop Using LessWrong: A Practical Interpretation of the 2012 Survey Results

-37 aceofspades 30 December 2012 10:00PM

Link to those results: http://lesswrong.com/lw/fp5/2012_survey_results/

I've been basically lurking this site for more than a year now and it's incredible that I have actually taken anything at all on this site seriously, let alone that at least thousands of others have. I have never received evidence that I am less likely to be overconfident about things than people in general or that any other particular person on this site is.

Yet in spite of this apparently 3.7% of people answering the survey have actually signed up for cryonics which is surely greater than the percent of people in the entire world signed up for cryonics. The entire idea seems to be taken especially seriously on this site. Evidently 72.9% of people here are at least considering signing up. I think the chance of cryonics working is trivial, for all practical purposes indistinguishable from zero (the expected value of the benefit is certainly not worth several hundred thousand dollars in future value considerations). Other people here apparently disagree, but if the rest of the world is undervaluing cryonics at the moment then why do those here with privileged information not invest heavily in the formation of new for-profit cryonics organizations, or start them alone, or invest in technology which will soon develop to make the revival of cryonics patients possible? If the rest of the world is underconfident about these ideas, then these investments would surely have an enormous expected rate of return.
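The expected-value claim here can be made concrete with crude arithmetic. As a sketch (the function and all numbers below are mine and purely illustrative, not from the post or from any cryonics organization): with sign-up cost C, probability p that revival works, and value V assigned to a revived life, the naive expected net value is pV − C:

```python
def cryonics_ev(p_works, value_if_works, cost):
    """Naive one-shot expected net value of signing up: p*V - C.

    Illustrative model only; real analyses would discount over time
    and decompose p into many conjunctive probabilities.
    """
    return p_works * value_if_works - cost

# Hypothetical numbers: $200k lifetime cost, revived life valued at $10M.
print(cryonics_ev(0.0001, 10_000_000, 200_000))  # tiny p: deeply negative
print(cryonics_ev(0.05, 10_000_000, 200_000))    # larger p: positive
```

The disagreement in the post is thus entirely about p: at a probability "indistinguishable from zero" the sign-up is dominated, while at a few percent it is not.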

There is also a question asking about the relative likelihood of different existential risks, which seems to imply that any of these risks are especially worth considering. This is not really a fault of the survey itself, as I have read significant discussion on this site related to these ideas. In my judgment this reflects a grand level of overconfidence in the probabilities of any of these occurring. How many people responding to this survey have actually made significant personal preparations for survival, like a fallout shelter with food and so on which would actually be useful under most of the different scenarios listed? I generously estimate 5% have made any such preparations.

I also see mentioned in the survey, and have read on this site, material related to what I regard as meaningless counterfactuals. The questions on dust specks vs. torture and Newcomb's Problem are so unlikely ever to be relevant in reality that I view discussion of them as worthless.

My judgment of this site as of now is that way too much time is spent discussing subjects of such low expected value (usually because of absurdly low expected probability of occurring) for using this site to be worthwhile. In fact I hypothesize that this discussion actually causes overconfidence related to such things happening, and at a minimum I have seen insufficient evidence for the value of using this site to continue doing so.

Miracle Mineral Supplement

16 Eliezer_Yudkowsky 20 November 2012 09:17PM

We can always use more case studies of insanity that aren't religion, right?

Well, Miracle Mineral Supplement is my new go-to example for Bad Things happening to people with low epistemic standards. "MMS" is a supposed cure for everything ranging from the common cold to HIV to cancer. I just saw it recommended in another Facebook thread to someone who was worried about malaria symptoms.

It's industrial-strength bleach. Literally just bleach. Usually drunk, sometimes injected, and yes, it often kills you. It is every bit as bad as it sounds if not worse.

This is beyond Poe's Law. Medieval bloodletting via leeches was far more excusable an error than this; its practitioners had far less evidence that it was a bad idea. I think if I were trying to guess what was the dumbest alternative medicine on the planet, I still would not have guessed this low. My brain is still not pessimistic enough about human stupidity.

http://en.wikipedia.org/wiki/Miracle_Mineral_Supplement

[Link] "An OKCupid Profile of a Rationalist"

-16 Athrelon 14 November 2012 01:48AM

The rationalist in question, of course, is our very own EY.

Quotes giving a reasonable sample of the spectrum of reactions:

Epic Fail on the e-harmony profile. He’s over-signalling intelligence. There’s a good paper about how much to optimally signal, like when you have a PhD to put it on your business card or not. This guy is going around giving out business cards that read Prof. Dr. John Doe, PhD, MA, BA. He won’t be getting laid any time soon.

His profile is probably very effective for aspergery girls who like reading the kinds of things that appear on LessWrong. Yudkowsky is basically a celebrity within a small niche of hyper-nerdy rationalists, so I doubt he has much trouble getting laid by girls in that community.

You make it sound like a cult leader or something....And reading the profile again with that lens, it actually makes a lot of sense.

I was about to agree [that the profile is oversharing], but then come to think of it, I realize I have an orgasm denial fetish, too. It’s an aroused preference that never escaped to my non-aroused self-consciousness.

Why is this important to consider? 

LessWrong as a community is dedicated to trying to "raise the sanity waterline," and its most respected members in particular put a lot of resources into outreach, via CFAR, HPMoR, and maintaining this site.  But a big factor in how people perceive our brand of rationality is image.  If we're serious about raising the sanity waterline, then image management - or at least avoiding active image malpractice - is something we should enthusiastically embrace as a way to achieve our goals. [1]

This is also a valuable exercise in considering the outside view.  Marginal Revolution is already a fairly WEIRD site, focused on abstract economic issues.  If any major blog is likely to be sympathetic to our cultural quirks, this would be it.  Yet a plurality of commenters reacted negatively. 

To the extent that we didn't notice anything strange about LW's figurehead having this OKCupid profile, LW either failed at calibrating mainstream reaction, or failed at consequentialism and realizing the drag this would have on our other recruitment efforts.  In our last discussion, there were only a few commenters raising concerns, and the consensus of the thread was that it was harmless and had no PR consequences worth noting.

As one commenter cogently put it,

I’m not saying that he’s trying to make a statement with this, I’m saying that he is making a statement about this whether he’s trying to or not. Ideas have consequences for how we live our lives, and that Eliezer has a public, identifiable profile up where he talks about his sexual fetishes is not some sort of randomly occurring event with no relationship to his other ideas.

I'd argue the same reasoning applies to the community at large, not just EY specifically.

[1] From Anna's excellent article: 5. I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, "This point doesn't change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.")

The deeper solution to the mystery of moralism—Believing in morality and free will are hazardous to your mental health

-19 metaphysicist 14 October 2012 01:21PM

[Crossposted.]

The complex relationship between Systems 1 and 2 and construal level

The distinction between pre-attentive and focal-attentive mental processes has dominated cognitive psychology for some 35 years. In the past decade, another cognitive dichotomy has arisen, specific to social psychology: processes of abstract construal (far cognition) versus concrete construal (near cognition). This essay theorizes about the relationship between these dichotomies to clarify further how believing in the existence of free will and in the objective existence of morality can thwart reason by causing you to choose what you don't want.

The state of the art on pre-attentive and focal-attentive processes is Daniel Kahneman's book Thinking, Fast and Slow, where he calls pre-attentive processes System 1 and focal-attentive processes System 2. The reification of processes into fictional systems also resembles Freud's System Cs (Conscious) and System Pcs (Preconscious). I'll adopt the language of System 1 and System 2, but readers can apply their understanding of the conscious–preconscious, pre-attentive–focal-attentive, or automatic–controlled processes dichotomies. They name the same distinction, in which System 1 consists of processes occurring quickly and effortlessly in parallel, outside awareness, while System 2 consists of processes occurring slowly and effortfully in sequential awareness—where "awareness" in this context refers to the contents of working memory rather than raw experience, and accompanies System 2 activity.

To integrate Systems 1 and 2 with construal-level theory, we note that System 2—the conscious part of our minds—can perform any of three routines in making a decision about taking some action, such as whether to vote in an election, a good example not just for timeliness but also for linkages to our main concern with morality: voting is a clear example of an action without tangible benefit. The potential voter might:

Case 1. Make a conscious decision to vote based on applying the principle that citizens owe a duty to vote in elections.
Case 2. Decide to be open to the candidates’ substantive positions and vote only if either candidate seems worthy of support.
Case 3. Experience a change of mind between 1 and 2.

The preceding were examples of the three routines System 2 can perform:

Case 1. Make the choice.
Case 2. “Program” System 1 to make the choice based on automatic criteria that don’t require sequential thinking.
Case 3. Interrupt System 1 in the face of anomalies.

When System 2 initiates action, whether it retains the power to decide or passes it to System 1 is the difference between concrete and abstract construal. The second routine is key to understanding how Systems 1 and 2 work together to produce the effects construal-level theory predicts. Keep in mind that the unconscious, automatic System 1 includes not just hardwired patterns but also skilled habits. Meanwhile, System 2 is notoriously "lazy," unwilling to interrupt System 1 as in Case 3; yet despite the perennial biases that result from letting System 1 have its way, the highest levels of expertise also reside in System 1.

A delegate System 1 operates with the potentially complex holistic patterns typifying far cognition. This pattern is far because we offload distant matters to System 1 but exercise sequential control under System 2 as immediacy looms, although there are many exceptions. It is critical to distinguish far cognition from the lazy failure of System 2 to perform properly in Case 3; such failure isn't specific to mode. Far cognition, System 1 acting as delegate for System 2, is a narrower concept than automatic cognition, but far cognition is automatic cognition. Near cognition admits no easy cross-classification.

Belief in free will and moral realism undermine our “fast and frugal heuristics”

The two most important recent books on the cognitive psychology of judgment and decision are Thinking, Fast and Slow by Daniel Kahneman and Gut Feelings: The Intelligence of the Unconscious by Gerd Gigerenzer, and both insist on the contrast between their positions, although the conflicts aren't obvious. Kahneman explains System 1 biases as mechanisms employed outside their range of evolutionary usefulness; Gigerenzer describes "fast and frugal heuristics" that sometimes misfire to produce biases. Where these half-empty and half-full positions on heuristics and biases really differ is in their overall appraisal of near and far processes: Gigerenzer is a far thinker and Kahneman a near thinker, and each is naturally biased toward his preferred mode. Far thought shows more confidence in fast-and-frugal heuristics, since it offloads to System 1, whose province is to employ them.

The fast-and-frugal-heuristics way of thinking is particularly useful in understanding the effect of moral realism and free will: they cause System 2 to supplant System 1 in decision-making. When we apply principles of integrity to regulate our conduct, we sometimes do better in far mode, where System 2 offloads the task of determining compliance to System 1. If, to the contrary, you have a principle of integrity that includes an absolute obligation to vote, you act as in Case 1: on a conscious decision. But principles of integrity do not really take this absolute form; that is an illusion begotten by moral realism. A principle of integrity flexible enough for actual use might favor voting (based, say, on a general principle embracing an obligation to perform duties) but disfavor it for "lowering the bar" when there's only a choice between the lesser of evils. Objectively applying such principles depends on an honest appraisal of the strength of your commitment to each virtue. System 2 is incapable of this feat; when it can be accomplished, it's due to System 1's automatic skills, operating unconsciously. Principles of integrity are applied more accurately in far mode than in near mode. [Hat tip to Overcoming Bias for these convenient phrases.]

But belief in moral realism and free will impel moral actors to apply their principles in near-mode. Objective morality and moral realism imply that compliance with morality results from freely willed acts. I’m not going to defend this premise thoroughly here, but this thought experiment might carry some persuasive weight. Read the following in near mode, and introspect your emotions:

 

Sexual predator Jerry Sandusky will serve his time in a minimal security prison, where he’s allowed groups of visitors five days a week.

 


Some readers will experience a sense of outrage. Then remind yourself: there's no free will. If you believe the reminder, your outrage will subside; if you've long been a convinced and consistent determinist, you might not need the reminder. Morality inculpates based on acts of free will: morality and free will are inseparable.

A point I must emphasize because of its novelty: it’s System 1 that ordinarily determines what you want. System 2 doesn’t ordinarily deliberate about the subject directly; it deliberates about relevant facts, but in the end, you can only intuit your volition. You can’t deduce it.

What belief in moral realism and free will does is nothing less than change the architecture of decision-making. When we practice principles of integrity and internalize them, they and nonmoral considerations co-determine our System 1 judgments; according to moral realism and free will, by contrast, moral good is the product of conscious free choice, so System 2 contrasts its moral opinion with System 1's intuition, for which System 2 compensates, and usually overcompensates. The voter had to weigh the imperatives of the duty to vote and the duty to avoid "lowering the bar" when both candidates are ideologically and programmatically distasteful. System 2 can prime and program System 1 by studying the issues, but the multifaceted decision itself is best made by System 1. What happens when System 2 tries to decide these propositions? System 2 makes the qualitative judgment that System 1 is biased one way or the other and corrects it. This implicates the overcompensation bias, in which conscious attempts to counteract biases usually overcorrect. A voter who thinks correction is needed for a bias toward shirking duty will vote when not really wanting to, all things considered. A voter biased toward "lowering the bar" will be excessively purist. Whatever standard the voter uses will be taken too far.

Belief in moral realism and free will biases practical reasoning

This essay presents the third of three ways that belief in objective morality and free will can cause people to do what they don’t want to do:

 

  1. It retards people in adaptively changing their principles of integrity.
  2. It prevents people from questioning their so-called foundations.
  3. It systematically exaggerates the compellingness of moral claims.

 

Some will be tempted to think that the third either is contrary to experience or is socially desirable. It’s neither. In moralism, an exaggerated subjective sense of duty and excessive sense of guilt co-exist with unresponsiveness to morality’s practical demands.

A possible solution to Pascal's mugging

-19 staticIP 13 October 2012 12:00AM

Now I tend not to follow this forum very much, so please excuse me if this has been suggested before. Still, I don't know that there's anyone else on this board who could actually carry out these threats.

 

If anyone accepts a Pascal's-mugging-style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds. Or a suitably higher number, if they invoke a higher (plausible, from my external viewpoint) number. Rest assured I can at least match their raw computing power from where I am. Good luck.
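For readers unfamiliar with the notation: 3^^^^3 is written in Knuth's up-arrow notation, where one arrow is exponentiation and each additional arrow iterates the operation below it. A toy implementation (my own sketch; only safe for tiny inputs, since the actual 3^^^^3 is astronomically beyond any computation):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b for small inputs.

    n = 1 is ordinary exponentiation a**b; for n > 1,
    a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)), with a ↑^n 0 = 1.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# Small cases only; even 3 ↑↑↑ 3 already overwhelms memory.
print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^(3^3) = 3^27 = 7625597484987
```

Already at two arrows the values explode; 3^^^^3 (four arrows) is the number made famous by the dust-specks and Pascal's-mugging discussions precisely because it dwarfs anything physically representable.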

 

EDIT: I'm told that Eliezer proposed a similar solution over here, although more eloquently than I have.

Firewalling the Optimal from the Rational

86 Eliezer_Yudkowsky 08 October 2012 08:01AM

Followup to: Rationality: Appreciating Cognitive Algorithms  (minor post)

There's an old anecdote about Ayn Rand, which Michael Shermer recounts in his "The Unlikeliest Cult in History" (note: calling a fact unlikely is an insult to your prior model, not the fact itself), which went as follows:

Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. "When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks."

Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. And one of the ways we do that is by trying to deflate the word 'rational' out of sentences, especially in post titles or critical comments, which can live without the word.  As you hopefully recall from the previous post, we're only forced to use the word 'rational' when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences.  Otherwise the word can be deflated out of the sentence; e.g. "It's rational to believe in anthropogenic global warming" goes to "Human activities are causing global temperatures to rise"; or "It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".

If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting". A post about rational dieting is if you're writing about how the sunk cost fallacy causes people to eat food they've already purchased even if they're not hungry, or if you're writing about how the typical mind fallacy or law of small numbers leads people to overestimate how likely it is that a diet which worked for them will work for a friend. And even then, your title is 'Dieting and the Sunk Cost Fallacy', unless it's an overview of four different cognitive biases affecting dieting. In which case a better title would be 'Four Biases Screwing Up Your Diet', since 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing things to keep in mind.

continue reading »

Meta: LW Policy: When to prohibit Alice from replying to Bob's arguments?

-3 SilasBarta 12 September 2012 03:29AM

In light of recent (and potential) events, I wanted to start a discussion here about a certain method of handling conflicts on this site's discussion threads, and hopefully form a consensus on when to use the measure described in the title.  Even if the discussion has no impact on site policy ("executive veto"), I hope administrators will at least clarify when such a measure will be used, and for what reason.

I also don't want to taint or "anchor" the discussion by offering hypothetical situations or arguments for one position or another.  Rather, I simply want to ask: Under what conditions should a specific poster, "Alice" be prohibited from replying directly to the arguments in a post/comment made by another poster, "Bob"?  (Note: this is referring specifically to replies to ideas and arguments Bob has advanced, not general comments about Bob the person, which should probably go under much closer scrutiny because of the risk of incivility.)

Please offer your ideas and thoughts here on when this measure should be used.

How To Lose 100 Karma In 6 Hours -- What Just Happened

-31 waitingforgodel 10 December 2010 08:27AM
As with all good posts, we begin with a hypothetical:
Imagine that, in the country you are in, a law is passed saying that if you drive your car without your seat belt on, you will be fined $100.
Here's the question: Is this blackmail? Is this terrorism?
Certainly it's a zero-sum interaction (at least in the short term). You either have to endure the inconvenience of putting on a seat belt, or risk the chance of a $100 fine.
You may also want to consider that complying with the seat belt fine could teach lawmakers that you'll follow future laws as well.

If that one seems too obvious, here's another: A law is passed establishing a $500 fine for pirating an album on the internet.
Does this count as blackmail? Does this count as terrorism?

What if, instead of passing a law, the music companies declare that they will sue you for $500 every time you pirate an album?
Is it blackmail yet? Terrorism? Will complying teach the music companies that throwing their weight around works?

Enough with the hypotheticals; this one's real: the moderator of one of your favorite online forums declares that if you post things he feels are dangerous to read, he will censor them. He may or may not tell you when he does this. If you post such things repeatedly, you will be banned.
Does this count as blackmail? Does this count as terrorism? Should we not comply with him to prevent similar future abuses of power?

Two months ago, I found a third option to the comply/revolt dilemma: turn the force back on the forceful.
Imagine this: you're the moderator of an online forum, and you care primarily about one thing: reducing existential risks. One day, one of your forum members vows to ensure that censoring posts will cause a small increase in existential risk.
Does this count as blackmail? Does this count as terrorism? Would you not comply to prevent similar future abuses of power?


(Please pause here if you're feeling emotional -- what follows is important, and deserves a cool head)


It is my opinion that none of these are blackmail.
Blackmail is fundamentally a single-shot game.
Laws and rules are about structuring the world's payoffs to incentivize behavior.
Now it's fair to say that there are just laws and there are unjust laws, and perhaps we should refuse to follow unjust laws, but to call a law blackmail or terrorism seems incorrect.

Here's what happened:
  • 7 weeks ago, I precommitted that censoring a post or comment on LessWrong would cause a 0.0001% increase in existential risk.
  • Earlier today, Yudkowsky censored a post on LessWrong.
  • 20 minutes later, existential risk increased 0.0001% (to the best of my estimation).

This will continue for the foreseeable future. I'm not happy about it either. Basically, I think the sanest way to think about the situation is to assume that Yudkowsky's "delete" link also causes a 0.0001% increase in existential risk, and to hope that he uses it appropriately.
He doesn't feel this way. He feels that the only correct answer here is to ignore the 0.0001% increase. We are at an impasse.

FAQ:
Q: Will you reconsider?
A: Sadly, no. This situation is symmetric: just as I am not immune to Yudkowsky's laws (censorship on LW if I talk about "dangerous" ideas), he is not immune to mine.

Q: How can you be sure that a post was censored rather than deleted by the owner?
A: This is sometimes hard, and sometimes easy. In general I will err on the side of caution.

Q: How can you be sure that you haven't missed a deleted comment?
A: I use, and am improving, an automated solution.

Q: What is the nature of the existential risk increase?
A: Emails. (Yes, emails). Maybe some phone calls.
There is a simple law that I believe makes intuitive sense to the conservative right, one that will be easy for them to endorse. This law would be disastrous for the relative chance of our first AI being a FAI rather than a UFAI. Every time EY decides to take a 0.0001% step, an email or phone call will be made to raise awareness of this law.

Q: Is there any way for me to gain access to the censored content?
A: I am working on a website that will update in real time as posts are deleted from LessWrong. Stay tuned!

Q: Will you still post here under waitingforgodel?
A: Yes, but less. Replying to 100+ comments is very time consuming, and I have several projects in dire need of attention.

Thank you very much for your time and understanding,
-wfg

Edit: This post describes what happened, not why. For a discussion of why I believe the precommitment will result in a net reduction in existential risk, please see the "precommitment" thread, where it is discussed extensively.
