Public transport systems should check if you're not paying for your tickets, and ban you.
That sounds like punishing people who fail to buy a ticket with an inability to buy tickets? That seems like a strange choice to me.
Amazon should check if you're producing fraudulent products and ban you. This is because they're unusually skilled and experienced with this kind of thing, and have good info about it.
Why do you believe that Amazon is unusually skilled or experienced with it? Louis Rossmann's investigation of fuses that Amazon sells suggests that Amazon is quite willing to sell fraudulent fuses and doesn't really do anything about it. Fraudulent fuses are an especially bad product for Amazon to sell because they are safety-critical. Houses might burn down because Amazon sells the fraudulent products.
Recent whistleblowing from Meta suggests that 10% of their ad revenue (or 25% of their profits) used to come from fraud, according to their own estimates. That leaves the question of what percentage of Facebook's revenue would need to come from defrauding customers for them to be on the level of SBF.
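For what it's worth, the two figures quoted above are mutually consistent under roughly a 40% profit margin; the following is just the arithmetic implied by the numbers as stated, not an additional estimate:

\[
F = 0.10 \cdot R = 0.25 \cdot P
\;\Rightarrow\;
\frac{P}{R} = \frac{0.10}{0.25} = 0.40
\]

where \(F\) is the fraud-linked ad revenue, \(R\) is total ad revenue, and \(P\) is profit.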
For Facebook the situation seems to be, "The fines we have to pay for facilitating our customers' fraud are much lower than the profit we make, so we do it."
For Amazon it's less clear to me why they aren't doing more about fraud. It seems to be more a matter of just not really caring. It isn't really the job of anyone with power at Amazon to reduce fraud, and many people have KPIs to hit that are about other priorities.
It's unclear to me why you think that SBF meets the threshold of being evil in the sense of "prefers bad outcomes, and is working to make them happen". I think he was certainly wrong in using customer funds, but I don't think he was in any way intending to get into a situation where he couldn't return the funds. To me that doesn't look like sadism but more like narcissism.
I remember one conversation at a LessWrong community weekend where I made a contrarian argument. The other person responded with something like "I don't know the subject matter well enough to judge your arguments; I'd rather stick with the status quo belief. The topic isn't really relevant enough for me to invest time into it."
That's the kind of answer you can get when speaking with rationalists but don't really get when talking to non-rationalists. That person wasn't "glad to learn that they were wrong", but they were far from irrational. They had a model of their own beliefs, and of when it makes sense to change them, that was the result of reasoning in a way that non-rationalists don't tend to do.
Adam sounds naive to me about what goes into actually changing your mind. He seems to take "learn that you were wrong" as a goal in itself. The person I was speaking about in the above example didn't have the goal of gaining a sophisticated understanding of the domain I was talking about, and that was probably completely in line with their utility function.
When it comes to issues where it's actually important to change your mind, it's complex in another way. Someone might give you a convincing rational argument but in the back of your mind there's a part of you that feels wary. While you could ignore that part at the back of your mind and just update your belief, it's not clear that this is always the best idea.
There are a few people who were faced with pretty convincing arguments about the central importance of AI safety and that it's important for them to do everything they can to fight for it. Then a year later, they burn out because they invested all their energy into AI safety. They ignored a part of themselves, and their ability to change their mind turned to their detriment. A lot of what CFAR did with Focusing and Internal Double Crux is about listening to more internal information instead of suppressing it.
Another problem when it comes to teaching rationality is that even if someone does the right thing 99% of the time, if the 1% where they do the wrong thing is when it actually matters, the result is still a failure. Just because someone can do it in the dojo where they train katas doesn't mean they can do it when it's actually important.
Julia Galef offered the Scout vs. Soldier mindset as one alternative to the paradigm of teaching individual skills. The idea is that the problem often isn't that people lack the skills but that they are in soldier mindset and thus don't use the skills they have.
So I see much of Vassarism as claiming: These protections against high-energy memes are harmful. We need to break them down so that we can properly hold people in power accountable, and freely discuss important risks.
It's been a while since this was written, but I don't think this summarizes what Vassar says well.
If anyone wants to get a good idea of what kind of arguments Vassar makes, his talk with Spencer Greenberg is a good source.
One of the reasons you might see him as dangerous is that he advocates that people should see a lot more interactions through the lens of conflict theory. Switching from assuming good intent in other people to using conflict theory can be quite disruptive to many interactions.
Getting someone to stop assuming that the people around them have good intent can be quite bad for their mental health even if it's true.
My comment was written in 2023, so I'm unsure what specific source I was referring to at the time, but there's a comment from Scott from 2024 saying:
But I wasn't able to find any direct causal link between Michael and the psychotic breaks - people in this group sometimes had breaks before encountering him, or after knowing him for long enough that it didn't seem triggered by meeting him, or triggered by obvious life events. I think there's more reverse causation (mentally fragile people who are interested in psychedelics join, or get targeted for recruitment into, his group) than direct causation (he convinces people to take psychedelics and drives them insane), though I do think there's a little minor direct causation in a few cases.
I got the impression that you asserted that Yudkowsky's work is somehow central to rationality in a religious way.
To me, the fact that CFAR started out with the idea that Bayesian reasoning might be important to teach and then found in their research that teaching Bayes' formula isn't, is a prime example of rationality working differently than religion.
Things aren't just taken as given and religiously believed.
Conscious phenomenology should only arise in systems whose internal states model both the world and their own internal dynamics as an observer within that world. Neural or artificial systems that lack such recursive architectures should not report or behave as though they experience an “inner glow.”
It's unclear to me what "inner glow" means here. My impression is that LLMs are trained on a lot of human text, some of which is humans reporting that they are conscious. It's possible for an LLM to just repeat what it read and report the same thing that humans do, even if it doesn't really have a recursive structure.
Experimental manipulations that increase transparency of underlying mechanisms should not increase but rather degrade phenomenology.
This suggests that as people learn to meditate and gain more transparency, on average they should become less convinced of consciousness being primary. While some people feel like the phenomenology gets degraded (and it sounds like you are one of them), I get the impression that there are more reports of long-term meditators whose meditation experience strengthens their belief that consciousness is an ontological primitive.
No. Bayes' theorem is a minor descriptive detail of Yudkowky's focus on being more rational.
Your post uses "notably and especially" to describe the role of Bayes' theorem, and now you say "minor".
Most of what you wrote is quite vague and thus hard to falsify. Your claim about Bayes' theorem isn't, and you seem to agree that I have falsified it when you now say 'minor'.
Is talking about p(doom), p(anything), or "updating" in a certain direction a cultish and religion-like use of the language of Bayesian probability?
Each knowledge community, whether scientific or otherwise, has its own terminology. Having its own terminology is not unique to religions.
If you believe "That people can become better at distinguishing true things, or if you prefer, become more rational, by a series of practices, notably and especially applying Bayes' Theorem from probability theory to evaluating facts" is one of the main things that rationalism is about in a religious way, then how do you explain that so few posts on LessWrong are about using Bayes' Theorem to evaluate facts?
When I look at the LessWrong.com frontpage, I don't see any post about that, and I think there are very few, if any, posts written on LessWrong in 2025 that are about that. How does that match your thesis?
I abstractly described a thing that would work
Why do you believe that it would work? Psychology is a field where abstract ideas that people think would work frequently turn out to not work. Why do you think you understand the underlying mechanics well enough to know what would work?
Don't you need to monitor compliance with the ban if you ban them? That sounds to me like a hassle as well.