(it's not obvious whether the change would be for the better)
It would certainly be for the worse if the banning were selectively enforced based on whether the mod in question liked the opinion being expressed.
I don't see a certainty in this. Policies have downsides. It's not clear how significant a bit of systematic injustice and bias would be compared to the other effects.
I agree with entirelyuseless in that I endorse banning advancedatheist because he had a long string of low-quality posts.
Do you have any idea how many LW users that would apply to? Come to think of it, looking through polymathwannabe's recent history, the highest-quality content appears to be the open threads he initiates.
Do you have any idea how many LW users that would apply to?
This illustrates the effect size of the action. It's one of a few things that seem to me to have the potential of changing the current situation, although it's likely useless on its own, and it's not obvious whether the change would be for the better. A few years ago I maintained a list of users whose comments I was subscribed to (via RSS), and two other lists, marked "toxic" and "clueless". Getting rid of those users might make LessWrong a better place, provided it doesn't scare away the rest.
I've banned them without prior notice because I'm not giving them more chances to downvote.
I think a "We've observed X. It appears to be bad behavior. Do you have an alternative explanation?" discussion should be started in any case. Otherwise there will be no justice for false positives.
Is the "because I'm not giving them more chances to downvote" a real argument? It won't be if it's technically possible to prohibit downvoting (maybe by temporarily taking away their Karma, so that the Karma-based voting limits kick in), or if it's possible to eventually retract their recent votes, so that current votes won't matter as much.
This is true in principle, but since I take disagreements pretty seriously I think it is normally false in practice. In other words there is actual harm and actual benefit in almost every real case.
Of course the last part of your comment is still true, namely that a mixed cause could still be better than a pure benefit case. However, this will not be true on average, especially if I am always acting on my own opinion, since I will not always be right.
... a mixed cause could still be better than a pure benefit case. However, this will not be true on average ...
That's the question: what is the base rate among the options you are likely to notice? If visible causes come in equivalent pairs, one with harm in it and another without, all other traits similar, that would be true. Similarly if pure-benefit causes tend to be stronger. But it could be the case that the best pure-benefit causes have less positive impact than the best mixed-benefit causes.
... since I take disagreements pretty seriously I think it is normally false in practice. In other words there is actual harm and actual benefit in almost every real case.
How does your taking disagreements seriously (what do you mean by that?) inform the question of whether most real (or just contentious?) causes involve actual harm as well as benefit? (Or do you mean to characterize your use of the term "disagreement", that is, which causes you count as involving disagreement? For example, global warming could be said to involve no disagreement that's to be taken seriously.)
"Do you believe that impersonal and accidental forces of history generate as much misery, which you can fight against, as the deliberate efforts of people who disagree with you? Wouldn't that be surprising if it were true?"
Yes, I believe that, and no, it is not surprising. Issues where people disagree are likely to be mixed issues, where making changes will do harm as well as benefit. That is exactly why people disagree. So working on those issues will tend to do less benefit than working on the issues everyone agrees on, which are likely to be much less mixed.
Issues where people disagree are likely to be mixed issues, where making changes will do harm as well as benefit. That is exactly why people disagree. So working on those issues will tend to do less benefit than working on the issues everyone agrees on, which are likely to be much less mixed.
A disagreement could resolve into one side being mostly right and another mostly wrong, so actual harm+benefit isn't necessary, only expected harm+benefit. All else equal, harm+benefit is worse than pure benefit, but usually there are other relevant distinctions, so that the effect of a harm+benefit cause could overwhelm available pure benefit causes.
I'd like to play the devil's advocate here for a moment. I'm not entirely sure how I should respond to the following argument.
The second thought – that we try to make things better – is shared by every plausible moral system and every decent person.
That begs the question: people often disagree on what counts as a better state of things. (And of course one can always say that those who disagree are not "decent".)
Don't ignore the fact that people agree on only a very small set of altruistic acts. And even then, many people are neutral about them, or almost so, or they support them only if they ignore the lost opportunities of, e.g., giving money to them rather than to other, less fortunate people.
The great majority of things people want, they don't want in common. Do you want to improve technology and medicine, or prevent unfriendly AI, or convert people to Christianity, or allow abortion, or free slaves, or prevent use of birth control, or give women equal legal rights, or make atheism legal, or prevent the disrespect and destruction of holy places, or remove speech restrictions, or allow free market contracts? Name any change you think a great historical moral advance, and you'll find people who fought against it.
Most great causes have people fighting for and against. This is unsurprising: when everyone is on the same side, the problem tends to be resolved quickly. The only things everyone agrees are bad, but which keep existing for decades, are those that people are apathetic about, not the greatest moral causes of the day.
Does selecting causes for the widest moral consensus mean selecting the most inoffensive ones? If not, why not? Do you believe that impersonal and accidental forces of history generate as much misery, which you can fight against, as the deliberate efforts of people who disagree with you? Wouldn't that be surprising if it were true?
Do you disagree with the point you are making, or merely with the pro-book/anti-book side where it fits? I think being a devil's advocate is about the former, not the latter. (There is also the move of steelmanning a flaw, looking for a story that paints it as clearly bad, to counteract the drive to excuse it, which might be closer to what you meant.)
Btw, Scott recently wrote a post about issues with admitting controversial causes in altruism.
This post seems better suited for the Discussion section.
Moved to Discussion.
Hi everyone.
I'm about to start my second year of college in Utah. My intent is to major in math and/or computer science, although more generally I'm interested in many of the subjects that LessWrongers seem to gravitate towards (philosophy, physics, psychology, economics, etc.).
I first noticed something that Eliezer Yudkowsky posted on Facebook several months ago, and have since been quietly exploring the rationality-sphere and surrounding digital territories (although I'm no longer on FB). Joining LessWrong seemed like the obvious next step given the time I had spent on adjacent sites. I'm here solely out of curiosity and philosophical interest.
Thanks to Sarunas and predecessors for the welcome page, and the LW community more generally. I look forward to being a part of it.
I'm here solely out of curiosity and philosophical interest.
And if you did in fact have a secret agenda, you wouldn't reveal it.
When I first worked through this book, it didn't result in long-term retention of the material (I'm sure some people will be able to manage, just not me, not without meditating on it much longer than it takes to work through, or setting up a spaced-repetition system). In that respect, Enderton's Elements of Set Theory worked much better. Enderton's book goes into more detail, giving enough time to exercise intuition about standard proofs. At the same time, it's an easier read, which might be helpful if Halmos's text seems difficult.
I don't think you get effective forum moderation by having public discussions about every moderation action.
Are you aware of a functioning online community which does things like that?
No, not public of course. The 122 comments currently on the present post illustrate how distracting it is to announce moderation actions in a way that invites public discussion.