I'm surprised that you're surprised. To me you've always been a go-to example of someone exceptionally good at both original seeing and taking weird ideas seriously, which isn't a well-trodden intersection.
We need an epistemic-clarity-win that’s stable at the level of a few dozen world/company leaders.
If you disagree with the premise of “we’re pretty likely to die unless the political situation changes A Lot”, well, it makes sense if you’re worried about the downside risks of the sort of thing I’m advocating for here. We might be political enemies some of the time, sorry about that.
These propositions seem in tension. I think that we're unlikely to die, but agree with you that without an "epistemic-clarity-win" your side won't get its desired policies implemented. Of course, the beauty of asymmetric weapons is that if I'm right and you're wrong, epistemic clarity would reveal that and force you to change your approaches. So, we don't appear to be political enemies, in ways that matter.
general public making bad arguments
My point is that "experts disagree with each other, therefore we're justified in not taking it seriously" is a good argument, and this is what people mainly believe. If they instead offer bad object-level arguments, then sure, dismissing those is fine and proper.
Yoshua Bengio or Geoffrey Hinton do take AI doom seriously, and I agree that their attitude is reasonable (though for different reasons than you would say)
I agree that their attitude is reasonable, conditional on superintelligence being achievable in the foreseeable future. I personally think this is unlikely, but I'm far from certain.
And I think AI is exactly such a case, where conditional on AI doom being wrong, it will be for reasons that the general public mostly won’t know/care to say, and will still give bad arguments against AI doom.
Most people are clueless about AI doom, but they have always been clueless about approximately everything throughout history, and get by through having alternative epistemic strategies of delegating sense-making and decision-making to supposed experts.
Supposed experts clearly don't take AI doom seriously, considering that many of them are doing their best to race as fast as possible, so people don't take it seriously either, an attitude that seems entirely reasonable to me.
Also, you haven’t linked to your comment properly; I notice the link goes to the post rather than your comments.
Thank you, fixed.
My core claim here is that most people, most of the time, are going to be terrible critics of your extreme idea. They will say confused, false, or morally awful things to you, no matter what idea you have.
I think that most unpopular extreme ideas have good simple counterarguments. E.g. for Marxism, it's that whenever people attempt it, it leads to famines and various extravagant atrocities. Of course, "real Marxism hasn't been tried" is the go-to counter-counterargument, but even if you are a true believer, it should give you pause that it has been very difficult to implement in practice, and it's reasonable for people to be critical by default because of those repeated horrible failures.
I addressed the clear AI implication elsewhere.
the only divided country left after Germany
China/Taiwan seem to be (slightly) more so these days, after Kim explicitly repudiated the idea of reunification.
publishing the evidence is prosocial, because it helps people make higher-quality decisions regarding friendship and trade opportunities with Mallory
And by the same token, subsequent punishment would be prosocial too. Why, then, would Alice want to disclaim it? Because, of course, in reality the facts of whether somebody deserves punishment are rarely unambiguous, so it makes sense for people to hedge. But that's basically wanting to have your cake and eat it too.
The honorable thing for Alice to do would be to weigh the reliability of the evidence that she possesses, and disclose it only if she thinks that it's sufficient to justify the likely punishment that would follow. No amount of nuance in wording and tone could replace this essential consideration.
Feels true to me, but what’s the distinction between theoretical and non-theoretical arguments?
Having decent grounding for the theory at hand would be a start. To take the ignition of the atmosphere example, they did have a solid enough grasp of the underlying physics, with validated equations to plug numbers into. Another example would be global warming, where even though nobody has great equations, the big picture is pretty clear, and there were periods when the Earth was much hotter in the past (but still supported rich ecosystems, which is why most people don't take the "existential risk" part seriously).
Whereas even the notion of "intelligence" remains very vague, straight out of philosophy's domain, let alone concepts like "ASI", so pretty much all argumentation relies on analogies and intuitions, also prime philosophy stuff.
Policy has also been guided by arguments with little related maths, for example, the MAKING FEDERAL ARCHITECTURE BEAUTIFUL AGAIN executive order.
I mean, sure, all sorts of random nonsense can sway national policy from time to time, but strictly-ish enforced global bans are in an entirely different league.
Maybe the problem with AI existential risk arguments is that they’re not very convincing.
Indeed, and I'm proposing an explanation why.
I think that the primary heuristic that prevents drastic anti-AI measures is the following: "A purely theoretical argument about a fundamentally novel threat couldn't seriously guide policy."
There are, of course, very good reasons for it. For one, philosophy's track record is extremely unimpressive, with profound, foundational disagreements between groups of purported subject matter experts continuing literally for millennia, and philosophy being the paradigmatic domain of purely theoretical arguments. For another, plenty of groups throughout history predicted an imminent catastrophic end of the world, yet the world stubbornly persists even so.
Certainly, it's not impossible that "this time it's different", but I'm highly skeptical that humanity will just up and significantly alter the way it does things. For the nuclear non-proliferation playbook to become applicable, I expect that truly spectacular warning shots will be necessary.
Nah, the weird idea is AI x-risk, something that almost nobody outside of the LW-sphere takes seriously, even if some labs pay lip service to it.