Today we have banned two users, curi and Periergo, from LessWrong for two years each. The reasoning for the two bans is a bit entangled, but they are overall almost completely separate, so let me address each individually:
Periergo is an account that is pretty easily traceable to a person curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don't think there is anything fundamentally wrong with signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.
It also appears that he has done a number of things that go beyond merely warning others (like mailbombing curi, i.e. signing him up for large amounts of email spam without his consent, and extensive sockpuppeting on forums that curi frequents) and that seem better classified as harassment. Overall, it seemed to me that this isn't the right place for Periergo.
Curi has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history at -67...
Today we have banned two users, curi and Periergo, from LessWrong for two years each.
I wanted to reply to this because I don't think it's right to judge curi the way you have. I don't have an issue with Periergo (it's a sockpuppet account anyway).
I think your decision should not go unquestioned/uncriticized, which is why I'm posting. I also think you should reconsider curi's ban under a sort of appeals process.
Also, the LW moderation process is evidently transparent enough for me to make this criticism, and that is notable and good. I am grateful for that.
On his blog, he and others maintain a long list of people who engaged with the Critical Rationalist community but then stopped, in a way that is very hard to read as anything but a public attack.
You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI's standards. I think this is problematic.
I'd like to note that I am on that list (about halfway down). I am also a public figure in Australia, having founded a federal political party, based on epistemic principles, with nearly 9k members. I am okay with being on that list. Arguably, if there is something truly wrong with the list, I should h...
You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI's standards. I think this is problematic.
The above post explicitly says that the ban isn't a personal judgement of curi. It's rather a question of whether or not it's good to have curi around on LessWrong, and that's where LW standards matter.
Unpopularity is no reason for a ban
That seems like a sentiment indicative of ignoring the reason he was banned, which was a utilitarian one. The fact that someone gets heavily downvoted is Bayesian evidence that it's not valuable for people to interact with him on LessWrong.
How is this different to pre-crime?
If you imprison someone who murdered in the past because you are afraid they will murder again, that's not pre-crime in most common senses of the word.
Additionally, even if it were, LW is not a place with virtue-ethics standards but one with utilitarian standards. Taking action to prevent things that are likely to negatively affect LW from happening in the future is perfectly consistent with the idea of good gardening.
When you stand in your garden, you don't ask "what crimes did the plants commit and how should they be punished?"; you focus on the future.
This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.
The traditional guidance for up/downvotes has been "upvote what you would like to see more of, downvote what you would like to see less of". If this is how votes are interpreted, then heavy downvotes imply "the forum's users would on average prefer to see less content of this kind". Someone posting the kind of content that's unwanted on a forum seems like a reasonable reason to bar that person from the forum in question.
I agree with "being disliked is not a reason for punishment", but people also have the right to choose who they want to spend their time with, even if someone who they preferred not to spend time with viewed that as being punished. In my book, banning people from a private forum is more like "choosing not to invite someone to your party again, after they previously caused others to have a bad time" than it is like "punishing someone".
I googled the definition, and these are the two results (for define:threat):
- a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.
- a person or thing likely to cause damage or danger.
Neither of these apply.
I prefer this definition: "a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace". I think the word "retribution" implies deserved punishment, whereas a "threat" need only imply retaliation with hostile action, not retribution.
We have substantial disagreements about what constitutes a threat,
Evidently yes, as do dictionaries.
I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)
I agree that if we wanted to extend him more opportunities/resources/etc., we could, and that a ban is a decision to not do that. But it seems to me like you're focusing on the benefit to him / "is there any chance he would get better?", as opposed to the benefit to the community / "is it reasonable to expect that he would get better?".
As stewards of the community, we need to make decisions taking into account both the direct impact (on curi for being banned or not) and the indirect impact (on other people deciding whether or not to use the site, or their experience being better or worse).
I don't recall learning in school that most of "the bad guys" from history (e.g., Communists, Nazis) thought of themselves as "the good guys" fighting for important moral reasons. It seems like teaching that fact, and instilling moral uncertainty in general into children, would prevent a lot of serious man-made problems (including problems we're seeing play out today). So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off on expectation?
I wonder if anyone has ever written a manifesto for moral uncertainty, maybe something along the lines of:
We hold these truths to be self-evident, that we are very confused about morality. That these confusions should be properly reflected as high degrees of uncertainty in our moral epistemic states. That our moral uncertainties should inform our individual and collective actions, plans, and policies. ... That we are also very confused about normativity and meta-ethics and don't really know what we mean by "should", including in this document...
Yeah, I realize this would be a hard sell in today's environment, but what if building Friendly AI requires a civilization sane enough to consider this common sense? I mean, for example, how can it be a good idea to gift a super-powerful "corrigible" or "obedient" AI to a civilization full of people with crazy amounts of moral certainty?
The full-text version of the Embedded Agency sequence has colors! And it's not just in the form of an image, but they're actually embedded as text. Is there any way a normal LW user can do the same with any of the three editors? (I.e., LW docs, Draft-JS, or Markdown.)
Apparently OpenAI has sold Microsoft some sort of exclusive licence to GPT-3. I assume this is bad for the prospects of anyone else doing serious research on it.
I recently realized that I've been confused about an extremely basic concept: the difference between an Oracle and an autonomous agent.
This feels obvious in some sense. But actually, you can 'get' to any AI system via output behavior + robotics. If you can answer arbitrary questions, you can also answer the question 'what's the next move in this MDP', or less abstractly, 'what's the next steering action of the imaginary wheel' (for a self-driving car). And the difference can't be 'an autonomous agent has a robotic component'.
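To make that concrete, here is a minimal sketch in Python. The `Oracle`, `observe`, and `act` names are hypothetical illustrations, not from the original post; the point is only that wrapping a pure question-answerer in a trivial control loop yields something that behaves like an autonomous agent, so the distinction can't rest on output behavior plus robotics alone.

```python
from typing import Callable, Protocol


class Oracle(Protocol):
    """Anything that answers arbitrary natural-language questions."""

    def answer(self, question: str) -> str:
        ...


def run_as_agent(
    oracle: Oracle,
    observe: Callable[[], str],
    act: Callable[[str], None],
    steps: int = 100,
) -> None:
    """Wrap a question-answerer in a control loop, turning it into an agent.

    `observe` and `act` stand in for whatever sensors/actuators
    (the robotics layer) connect the system to its environment.
    """
    for _ in range(steps):
        observation = observe()
        # The "agent policy" is nothing but a question posed to the Oracle.
        action = oracle.answer(
            f"Given observation {observation!r}, what is the next action?"
        )
        act(action)
```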
The essential difference seems ...
I'm going on a 30-hour roadtrip this weekend, and I'm looking for math/science/hard sci-fi/world-modelling Audible recommendations. Anyone have anything?
Golden raises $14.5M. I wrote about Golden here as an example of the most common startup failure mode: lacking a single well-formed use case. I’m confused about why someone as savvy as Marc Andreessen is tripling down and joining their board. I think he’s making a mistake.
If anyone happens to be willing to privately discuss some potentially infohazardous stuff that's been on my mind (and not in a good way) involving acausal trade, I'd appreciate it - PM me. It'd be nice if I can figure out whether I'm going batshit.
So which simulacrum level are ants on when they are endlessly following each other in a circle?
Do those of you who live in America fear the scenarios discussed here? ("What If Trump Loses And Won’t Leave?")
I do not know whether this has already been mentioned on LessWrong, but 4-6 weeks ago German news websites reported that commercially available mouthwash had been tested and found to kill coronavirus in the lab, with the (positive) results published in the Journal of Infectious Diseases.
You can click through this article to see the ranked names of the mouthwash brands and their "reduction factor", though I found the sample sizes quite small. You can also find a list in this overview article. In an article I saw today on this topic, the...
I'm so bored with my job; I need a programming job that involves actual math/algorithms. I'm curious to hear about people here who have programming jobs that are more interesting. In college I competed at a high level in ICPC, but I got it into my head that there are so few programming jobs with actual advanced algorithms that if your name on topcoder isn't red, you might as well forget about it. I ended up just taking a boring job at a top tech company that pays well but does very little for society and is not intellectually stimulating at all.
If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post)
And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here.