Comment author: Unnamed 10 December 2010 08:03:13PM 22 points [-]

All four examples involve threats - one party threatening to punish another unless the other party obeys some rule - but the last threat (threatening to increase existential risk contingent on acts of forum moderation) sticks out as different from the others in several ways:

  1. Proportionality. The punishment in the other examples seems roughly proportional to the offense ($500 may seem a bit high for one album, but is in the ballpark given the low chance of being caught), but over 6,000 deaths (in expectation) plus preventing who-knows-how-many people from ever living is disproportionate to the offense of deleting forum comments.
  2. Narrow targeting. Most of the punishments are narrowly targeted at the offender - the offender is the one who suffers the negative consequences of the punishment, as much as possible (although there are some broader consequences - e.g., the rest of the forum is deprived of a banned poster's comments). But the existential risk threat is not targeted at all - it's aimed at the whole world. Threats to third parties are usually frowned upon - think of hostage taking, or threats to harm someone's family.
  3. Legitimate authority. There are laws & conventions regarding who has authority over what, and these limit what threats are seen as acceptable. Threats can be dangerous and destructive, because of the possibility that they will actually be carried out and because of the risk of escalating threats and counter-threats as people try to influence each other's behavior, and these conventions about domains of authority help limit the damage. It's widely accepted that the government is allowed to regulate driving and intellectual property, and to use fines as punishment. The law grants IP-holders rights to sue for money. Forum moderators are understood to have control over what gets posted on their forum, and who posts. But a single forum user does not have the authority to dictate what gets posted on a forum.
  4. Accountability. Those with legitimate authority are usually accountable to a broader public. If citizens oppose a law they can replace the legislators with ones who will change the law, and since legislators know this and want to keep their jobs they pay attention to the citizens' views when passing laws. Members of an online forum can leave en masse to another forum if they disagree strongly with the moderation policy, and forums take this into account when they set their moderation policy. But one person who threatens to increase existential risk if his preferred forum policy isn't put into place is not accountable to anyone - it doesn't matter how many people disagree with his preferred forum policy, or with his proposed punishment.
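The "over 6,000 deaths" figure in point 1 can be checked with a quick sketch (assuming, for illustration, a 2010 world population of roughly 6.8 billion; the threatened increase was 0.0001%):

```python
# Rough world population in 2010 (an assumption for illustration)
world_population = 6.8e9

# The threatened existential-risk increase: 0.0001% expressed as a probability
risk_increase = 0.0001 / 100

# Expected deaths if that probability were realized across everyone alive
expected_deaths = world_population * risk_increase
print(round(expected_deaths))  # 6800
```

This ignores the future people who would never be born, which is the larger part of the harm under the original argument.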

I'm not entirely in agreement with the first three threats, but they're at least within the bounds of the kinds of threats that are commonly acceptable. The fourth is not.

Comment author: waitingforgodel 11 December 2010 05:15:14AM *  -1 points [-]

Re #1: EY claimed his censorship caused something like 0.0001% risk reduction at the time, hence the amount chosen -- it is there to balance his motivation out.

Re #2: Letting Christians/Republicans know that they should be interested in passing a law is not the same as hostage taking or harming someone's family. I agree that narrow targeting is preferable.

Re #3 and #4: I have a right to tell Christians/Republicans about a law they're likely to feel should be passed -- it's a right granted to me by the country I live in. I can tell them about that law for whatever reason I want. That's also a right granted to me by the country I live in. By definition this is legitimate authority, because a legitimate authority granted me these rights.

Comment author: waitingforgodel 10 December 2010 06:36:50PM 0 points [-]

The common thread in these misunderstandings is that commenters didn't click the "precommitment" link and read the reasons why the precommitment reduced existential risk.

If I ever do this again, I'll make the reasoning more explicit. In the meantime I'm not sure what to do except add this comment, and the edit at the bottom of the article for new readers.

Comment author: Lightwave 10 December 2010 02:46:18PM *  8 points [-]

In that case they should present their evidence and/or a strong argument for this, not attempt to blackmail moderators.

Comment author: waitingforgodel 10 December 2010 06:18:12PM 3 points [-]

I actually explicitly said what oscar said in the discussion of the precommitment.

I also posted my reasoning for it.

Those are both from the "precommitted" link in my article.

Comment author: Psychohistorian 10 December 2010 01:28:14PM 5 points [-]

Laws are not comparable to blackmail because they have process behind them. If one lone individual told me that if I didn't wear my seatbelt, he'd bust my kneecaps, that would be blackmail. It might even qualify as terrorism, since he would be trying to constrain my actions by threat of illegitimate force.

A lone individual making a threat against the main moderator of a site if he uses his discretion in a certain way is indeed blackmail/terrorism, particularly when the threat concerns something substantially outside the purview of the site and the act threatened is on its own clearly immoral (by contrast, it would be legitimate to threaten to leave the site, or to repost censored material on a separate site). As it stands, it's an attempt to force another's will without any semblance of legitimate authority, which seems to qualify as "clearly wrong."

Comment author: waitingforgodel 10 December 2010 06:02:37PM *  -2 points [-]

If one lone individual told me that if I didn't wear my seatbelt, he'd bust my kneecaps, then that would be blackmail.

I think this is closer to: if one lone individual said that every time he saw you not wearing a seatbelt (which, for some reason, couldn't be mandated by law), he'd nudge gun control legislation closer to being enacted (assuming he knew you'd hate gun control legislation).

Comment author: Eliezer_Yudkowsky 10 December 2010 08:29:08AM 4 points [-]

Moved post to Discussion section. Note that user's karma has now dropped below what's necessary to submit to the main site.

Comment author: waitingforgodel 10 December 2010 08:33:05AM 1 point [-]

Also note that it wasn't below that threshold when I submitted to the main site...

Comment author: CarlShulman 09 December 2010 09:53:14PM *  13 points [-]

Do you understand the math behind the Roko post deletion?

Yes, his post was based on (garbled versions of) some work I had been doing at FHI, which I had talked about with him while trying to figure out some knotty sub-problems.

What do you think about the Roko post deletion?

I think the intent behind it was benign, at least in that Eliezer had his views about the issue (which is more general, and not about screwed-up FAI attempts) previously, and that he was motivated to prevent harm to people hearing the idea and others generally. Indeed, he was explicitly motivated enough to take a PR hit for SIAI.

Regarding the substance, I think there are some pretty good reasons for thinking that the expected value (with a small probability of a high impact) of the info for the overwhelming majority of people exposed to it would be negative, although that estimate is unstable in the face of new info.

It's obvious that the deletion caused more freak-out and uncertainty than anticipated, leading to a net increase in people reading and thinking about the content compared to the counterfactual with no deletion. So regardless of the substance about the info, clearly it was a mistake to delete (which Eliezer also recognizes).

What do you think about future deletions?

Obviously, Eliezer is continuing to delete comments reposting on the topic of the deleted post. It seems fairly futile to me, but not entirely. I don't think that Less Wrong is made worse by the absence of that content as such, although the fear and uncertainty about it seem to be harmful. You said you were worried because it makes you uncertain about whether future deletions will occur and of what.

After about half an hour of trying, I can't think of another topic with the same sorts of features. There may be cases involving things like stalkers or bank PINs or 4chan attacks or planning illegal activities. Eliezer called on people not to discuss AI at the beginning of Less Wrong to help establish its rationality focus, and to back off from the gender warfare, but hasn't used deletion powers for such things.

Less Wrong has been around for 20 months. If we can rigorously carve out the stalker/PIN/illegality/spam/threats cases I would be happy to bet $500 against $50 that we won't see another topic banned over the next 20 months.

Comment author: waitingforgodel 10 December 2010 08:27:52AM 0 points [-]

I should have taken this bet

How To Lose 100 Karma In 6 Hours -- What Just Happened

-31 waitingforgodel 10 December 2010 08:27AM
As with all good posts, we begin with a hypothetical:
Imagine that, in the country you are in, a law is passed saying that if you drive your car without your seat belt on, you will be fined $100.
Here's the question: Is this blackmail? Is this terrorism?
Certainly it's a zero-sum interaction (at least in the short term): you either endure the inconvenience of putting on a seat belt, or risk a $100 fine.
You may also want to consider that complying with the seat belt fine could lead lawmakers to believe that you'll follow future laws as well.

If that one seems too obvious, here's another: A law is passed establishing a $500 fine for pirating an album on the internet.
Does this count as blackmail? Does this count as terrorism?

What if, instead of passing a law, the music companies declare that they will sue you for $500 every time you pirate an album?
Is it blackmail yet? Terrorism? Will complying teach the music companies that throwing their weight around works?

Enough with the hypothetical, this one's real: The moderator of one of your favorite online forums declares that if you post things he feels are dangerous to read, he will censor them. He may or may not tell you when he does this. If you post such things repeatedly, you will be banned.
Does this count as blackmail? Does this count as terrorism? Should we not comply with him to prevent similar future abuses of power?

Two months ago, I found a third option to the comply/revolt dilemma: turn the force back on the forceful.
Imagine this: you're the moderator of an online forum and care primarily about one thing: reducing existential risks. One day, one of your forum members vows to ensure that censoring posts will cause a small increase in existential risk.
Does this count as blackmail? Does this count as terrorism? Would you not comply to prevent similar future abuses of power?


(Please pause here if you're feeling emotional -- what follows is important, and deserves a cool head)


It is my opinion that none of these are blackmail.
Blackmail is fundamentally a single-shot game.
Laws and rules are about structuring the world's payoffs to incentivize behavior.
Now, it's fair to say that there are just laws and there are unjust laws... and perhaps we should refuse to follow unjust laws... but to call a law blackmail or terrorism seems incorrect.

Here's what happened:
  • 7 weeks ago, I precommitted that censoring a post or comment on LessWrong would cause a 0.0001% increase in existential risk.
  • Earlier today, Yudkowsky censored a post on LessWrong.
  • 20 minutes later, existential risks increased 0.0001% (to the best of my estimation).

This will continue for the foreseeable future. I'm not happy about it either. Basically I think the sanest way to think about the situation is to assume that Yudkowsky's "delete" link also causes a 0.0001% increase in existential risk, and hope that he uses it appropriately.
He doesn't feel this way. He feels that the only correct answer here is to ignore the 0.0001% increase. We are at an impasse.

FAQ:
Q: Will you reconsider?
A: Sadly no. This situation is symmetric -- just as I am not immune to Yudkowsky's laws (censorship on LW if I talk about "dangerous" ideas), he is not immune to my laws.

Q: How can you be sure that a post was censored rather than deleted by the owner?
A: This is sometimes hard, and sometimes easy. In general I will err on the side of caution.

Q: How can you be sure that you haven't missed a deleted comment?
A: I use, and am improving, an automated solution.

Q: What is the nature of the existential risk increase?
A: Emails. (Yes, emails). Maybe some phone calls.
There is a simple law that I believe makes intuitive sense to the conservative right, and that will be easy for them to endorse. This law would be disastrous for the relative chance of our first AI being an FAI rather than a UFAI. Every time EY decides to take a 0.0001% step, an email or phone call will be made to raise awareness of this law.

Q: Is there any way for me to gain access to the censored content?
A: I am working on a website that will update in real time as posts are deleted from LessWrong. Stay tuned!

Q: Will you still post here under waitingforgodel
A: Yes, but less. Replying to 100+ comments is very time consuming, and I have several projects in dire need of attention.

Thank you very much for your time and understanding,
-wfg

Edit: This post is describing what happened, not why. For a discussion about why I feel that the precommitment will result in an existential risk savings, please see the "precommitment" thread, where it is talked about extensively.
Comment author: FormallyknownasRoko 09 December 2010 10:23:37PM 0 points [-]

Yes. THIS IS NOT CENSORSHIP. Just in case anyone missed it.

Comment author: waitingforgodel 09 December 2010 10:28:29PM 6 points [-]

YES IT IS. In case anyone missed it. It isn't Roko's post we're talking about right now

Comment author: FormallyknownasRoko 09 December 2010 10:17:05PM 10 points [-]

May I at this point point out that I agree that the post in question should not appear in public. Therefore, it is a question of the author's right to retract material, not of censorship.

Comment author: waitingforgodel 09 December 2010 10:27:43PM 4 points [-]

In this case, the comment censored was not posted by you. Therefore you're not the author.

FYI the actual author didn't even know it was censored.

How Greedy Bastards Have Saved More Lives Than Mother Theresa Ever Did

14 waitingforgodel 03 December 2010 06:20AM

And how you can use the same techniques to save a stranger's life for under $600


It's a strange world we live in.

When I first heard of Optimal Philanthropy, it was in a news article about Bill Gates's plan for retirement. He'd decided to donate tens of billions of dollars to charity, but concluded that no existing charity was worth donating to.

Gates felt they weren't run properly.

You see, at the time most people thought that "efficient charities" were those that had little or no overhead. Everyone wanted as much money to go to the front lines as possible, with little or none for administration.

Gates didn't care about any of that.

No, what Gates wanted was measurable results... and if more administration would get better results, he was all for it.

In business, it all comes down to return on investment: how much money did you use (to rent buildings, buy supplies, hire employees), and how much money did you earn in return?

Gates felt that something similar was needed for charity.

If the charity saved lives, Gates reasoned, then it should be judged by how much money it used to save each life. If a charity could save twice as many lives on the same budget by using more administrators, then by all means it should do that.

As you may have heard, Bill Gates was appalled that he couldn't find a charity he could measure.

Here he was, trying to selflessly give away over ten billion dollars to any charity that could prove it would have the highest impact... and getting a bunch of nonsense answers about how that's not the way charity works, or about how little overhead there was.

And as you may have also heard, Mr. Gates turned that frustration into a revolution in the world of charity -- and inspired others to follow him. His foundation -- the Bill and Melinda Gates Foundation -- is now the biggest in the world, and makes a difference every day in the areas of world education, malaria, and sustainable energy.

 

But Enough About All That! This Isn't About Bill Gates, This Is About You

Although the billionaires of the world have gotten their heads screwed on right about charity (and are saving hundreds of millions more lives as a result), we non-billionaires didn't seem to get the memo.

And that means, if you are the sort of person who donates, you're not doing nearly the amount of good you could.

Here are 3 simple steps you can use right away that will at least double the impact your donations have.

Pause a second to think about what that would mean.

Why do you donate?

How would it feel to know that those donations now do twice as much good in this world? To know that at least twice as many people were helped?

Ready to hear the steps? Great!

 

Step 1: Make your reason for donating CONCRETE!

This step requires being very honest with yourself. It means not donating to the Haiti relief fund just because it was tragic (or because Bill Clinton said you should), but instead thinking about what that donation to Haiti would accomplish.

Something along the lines of: save lives and put good people back into homes. Whatever you hope your donation will accomplish.

What we're doing is moving from causes and goals (global warming, world peace, freedom from dictators) to concrete outcomes (reducing or negating carbon emissions, preventing wars, saving soldiers' lives, educating people about the benefits of democracy).

Once you've got a concrete outcome you'd like to see in the world, it's time to find out the best way to accomplish that goal.

 

Step 2: Use 3rd party charity evaluations that focus on outcomes, and donate where it will do the most good.

Go to givewell.com and see if your current charity is listed, and what kinds of results they can get per donated dollar.

Also, don't forget to look at similar outcomes your donation money can accomplish. It's not uncommon to find out that, for example, the cost of giving a blind child a seeing eye dog is three times more than the cost of preventing childhood blindness in the first place.

Yes it might seem tragic to think of a little blind girl without a dog to guide her, but it's even worse to think that we'd give that girl a seeing eye dog at the expense of three other children going blind.

If nothing else, visit givewell.com, it will change the way you think about donating for the rest of your life.

 

Step 3: Donate what you can, but don't donate time unless you earn less than $10 an hour.

The strange truth of the matter is that, unless you're donating your time as a professional (Doctors Without Borders, pro bono legal aid), it's often more cost-effective to simply work an extra hour and donate the money.

If you make $25/hr, your cause can probably get 150 minutes of work for every hour of income you donate.
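The arithmetic here is a simple ratio. A minimal sketch, assuming (per Step 3's threshold) that equivalent charity labor costs about $10/hour:

```python
your_hourly_wage = 25.0     # what you earn per hour, in dollars
charity_hourly_cost = 10.0  # assumed cost of an hour of equivalent charity labor

# Minutes of charity work bought by donating one hour of your income
minutes_of_work = your_hourly_wage / charity_hourly_cost * 60
print(round(minutes_of_work))  # 150
```

The general rule: whenever your hourly wage exceeds the hourly cost of the labor your charity needs, working and donating buys more than an hour of work per hour of your time.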

 


Okay! If you do those three steps you will get more good from your donation money than 90% of all the donors out there.

If you felt that this letter helped you, please consider forwarding it to your friends and family, or at least talking about these important issues with them.

Together, we can make a difference.

 
