Super upvoted.
With that said, why is the optimal amount of woo not zero?
Also, I think non-accommodationist vegans have tended to be among the crazier people, so maybe you want enough vegetables for the accommodationists, but also beef from moderately less tortured cows.
I just saw one recently on the EA Forum, to the effect that I'd said EAs who shortened their timelines only after ChatGPT had the intelligence of a houseplant.
Somebody asked if people got credit for <30-year timelines posted in 2025. I replied that this only demonstrated more intelligence than a potted plant.
If you do not understand how this is drastically different from the thing you said I said, ask an LLM to explain it to you; they're now okay at LSAT-style questions if provided sufficient context.
In reply to your larger question, being very polite about the house burning down wasn't working. Possibly being less polite doesn't work either, of course, but it takes less time. In any case, as several commenters have noted, the main plan is to have people who aren't me do the talking to those sorts of audiences. As several other commenters have noted, there's a plausible benefit to having one person say it straight. As further commenters have noted, I'm tired, so you don't really have an option of continuing to hear from a polite Eliezer; I'd just stop talking instead.
Noted as a possible error on my part.
I looked at "AI 2027" as a title and shook my head at how it sacrificed credibility come 2027 on the altar of pretending to be a prophet, picking up some short-term gains at the expense of more cooperative actors. I didn't bother pushing back because I didn't expect pushback to have any effect. For as long as it's been a practice, I have been yelling at people to shut up about trading their stupid little timelines as if they were astrological signs (a practice since replaced by trading made-up numbers for p(doom)).
When somebody at least pretending to humility says, "Well, I think this here estimator is the best thing we have for anchoring a median estimate", and I stroll over and proclaim, "Well I think that's invalid", I do think there is a certain justice in them demanding of me, "Well, would you at least like to say then in what direction my expectation seems to you to be predictably mistaken?"
If you can get that, or 2050, equally well by yelling "Biological Anchoring", why not admit that the intuition comes first and that you then hunt around for parameters you like? This doesn't sound like good methodology to me.
Is your take "Use these different parameters and you get AGI in 2028 with the current methods"?
I think OpenPhil was guided by Cotra's estimate and promoted that estimate. If they'd labeled it "Epistemic status: Obviously wrong, but maybe somebody builds on it someday," then it would have had a different impact, and probably not one I found objectionable.
Separately, I can't imagine how you could build something not-BS on that foundation, and if people are using it to advocate for short timelines, then I probably regard that argument as BS and invalid as well.
Will MacAskill could serve as an exemplar. More broadly, I'm thinking of people who might have called themselves 'longtermists', or who hybridized Bostrom with Peter Singer.
It cannot be answered that simply to the Earthlings, because if you answer, "Because I don't expect that to actually work or help," some of them, and especially the more evil ones, will pounce in reply, "Aha, so you're not replying 'I'd never do that because it would be wrong and against the law'; what a terrible person you must be!"