All of Patrice Ayme's Comments + Replies

Realistic Cynicism In Climate Science:

Apparent paradox: the more scientists worry about "climate change", the less they believe in geoengineering. Yet they become more inclined toward geoengineering when the impact gets personal. This actually makes rational sense.

People who are more realistic see the doom of the Greenhouse Gas (GHG) crisis: all-time high CO2 production, all-time high coal burning, the activation of tipping points such as collapsing ice shelves, accelerating ice streams, generalized burning of forests, peat and even permafrost combusting ...

Lao Mein
Epistemic status: ~30% sophistry

There's a difference between thinking something won't work in practice and opposing it. The paper examines opposition, as in taking steps to make geoengineering less likely. My personal suspicion is that the more plausible a climate scientist thinks geoengineering is, the more likely they are to oppose it, not the other way around.

Just as climate modeling for nuclear famine isn't actually about accurate climate modeling and finding ways to mitigate deaths from starvation (it's about opposition to nuclear war), I suspect that a lot of global-warming research is more about opposing capitalism/technology/greed/industrial development than it is about finding practical ways to mitigate the damage. This is because these fields are about utilizing how bad their catastrophe is to shape policy. Mitigation efforts hamper that. Therefore, they must prevent mitigation research.

There is a difference between treating a catastrophe as an ideological tool, in which case you actively avoid talking about mitigation measures and actively sabotage mitigation efforts, and treating it as a problem to be solved, in which case you absolutely invest in damage mitigation. Most nuclear famine and global warming research seems like the former, while AI safety still looks like the latter.

AI alignment orgs aren't trying to sabotage mitigation (prosaic alignment?). The people working on, for example, interpretability and value alignment might view prosaic alignment as ineffective, but they aren't trying to prevent it from taking place. Even those who want to slow down AI development aren't trying to stop prosaic alignment research. Despite the surface similarities, there are fundamental differences between the fields of climate change and AI alignment research.

I vaguely remember reading a comment on LessWrong about how the anti-geoengineering stance is actually 3d chess to force governments to reduce carbon emissions by removi