When somebody at least pretending to humility says, "Well, I think this here estimator is the best thing we have for anchoring a median estimate", and I stroll over and proclaim, "Well I think that's invalid", I do think there is a certain justice in them demanding of me, "Well, would you at least like to say then in what direction my expectation seems to you to be predictably mistaken?"
If you can get that, or 2050, equally well by yelling "Biological Anchoring", why not admit that the intuition comes first and that you then hunt around for parameters you like? This doesn't sound like good methodology to me.
Is your take "Use these different parameters and you get AGI in 2028 with the current methods"?
I think OpenPhil was guided by Cotra's estimate and promoted that estimate. If they'd labeled it: "Epistemic status: Obviously wrong but maybe somebody builds on it someday" then it would have had a different impact and probably not one I found objectionable.
Separately, I can't imagine how you could build something not-BS on that foundation, and if people are using it to advocate for short timelines, then I probably regard that argument as BS and invalid as well.
Will MacAskill could serve as exemplar. More broadly I'm thinking of people who might have called themselves 'longtermists' or who hybridized Bostrom with Peter Singer.
I again don't consider this a helpful thing to say on a sinking ship when somebody is trying to organize getting the passengers to the lifeboats.
Especially if your definition of "AI takeover" is such as to include lots of good possibilities as well as bad ones; maybe the iceberg rockets your ship to the destination sooner and provides all the passengers with free iced drinks, who can say?
You can do better by saying "I don't know" than by saying a bunch of wrong stuff. My long reply to Cotra was: "You don't know, I don't know, your premises are clearly false, and if you insist on my being Bayesian and providing a direction of predictable error when I claim predictable error, then fine: your timelines are too long."
People ask me questions. I answer them honestly, not least because I don't have the skill to say "I'm not answering that" without it sending some completely different set of messages. Saying a bunch of stuff in private, without giving anyone a chance to respond to what I'm guessing about them, is deontologically weighed against by my rules, though not forbidden depending on circumstances. I do not do this in hopes that any good thing results, but then acts with good consequences are few and far between in any case, these days.
Why, that's my job too! But it's a very different job depending on whether you consider it an indispensable requirement that people come away with a roughly accurate picture of reality, or whether your job is to be an entertainer.
I looked at "AI 2027" as a title and shook my head at how it was sacrificing credibility come 2027 on the altar of pretending to be a prophet, picking up some short-term gains at the expense of more cooperative actors. I didn't bother pushing back because I didn't expect that to have any effect. I have been yelling at people to shut up about trading their stupid little timelines as if they were astrological signs for as long as that's been a practice (it has now been replaced by trading made-up numbers for p(doom)).