The Hanson-Yudkowsky AI-Foom Debate focused on whether AI progress is winner-take-all. But even if it isn't, humans might still fare badly.
Suppose Robin is right. Instead of one basement project going foom, AI progresses slowly as many organizations share ideas with each other, leading to peaceful economic growth worldwide - a rising tide of AI. (I'm including uploads in that.)
With time, keeping biological humans alive will become a less and less profitable use of resources compared to other uses. Robin says humans can still thrive by owning a lot of resources, as long as property rights prevent AIs from taking resources by force.
But how long is that? Recall the displacement of nomadic civilizations by farming ones (which happened by force, not by farmers buying land from nomads), or enclosure in England (which also happened by force). When the potential gains in efficiency become large enough, property rights get trampled.
Robin argues that this won't happen, because expropriating humans would put AIs on a slippery slope toward fighting each other for resources. But the potential gains from AIs expropriating each other are smaller, like one landowner trying to use enclosure on another landowner. And most of those gains can be achieved by AIs sharing improvements with each other, which is impossible with humans. So AIs won't be worried about that slippery slope, and will happily take our resources by force.
Maybe resource-owning humans could upload themselves and live off rent, instead of staying biological? But even uploaded humans might be very inefficient users of resources compared to optimized AIs (e.g. due to having too many neurons), so the result is the same.
Instead of hoping that institutions like property rights will protect us, we should assume that everything about the future, including institutions, will be determined by the values of AIs. To achieve our values, working on AI alignment is necessary, whether we face a "basement foom" or "rising tide" scenario.
I've got a bit more time now.
I agree "Things need to be done" in a rising tide scenario. However different things need to be done to the foom scenario. The distribution of AI safety knowledge is different in an important way.
Discovering AI alignment is not enough in the rising tide scenario. You also want to make sure the proportion of aligned AIs to misaligned AIs is sufficient to stop the misaligned AIs from outcompeting the aligned ones. There will be some misaligned AIs due to parts wearing out, experiments gone wrong, and AIs aligned with insane people who are not sufficiently aligned with the rest of humanity to allow negotiation/discussion.
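To make that proportion worry slightly more concrete, here is a toy sketch (mine, not from the comment above): two exponentially growing AI populations, where the misaligned side is assumed to have a small growth-rate edge from ignoring alignment constraints. All numbers are illustrative assumptions, not estimates about real systems.

```python
# Toy model of the aligned-vs-misaligned proportion worry.
# Assumption (not from the original comment): both populations grow
# exponentially, and misaligned AIs get a small growth edge by cutting
# alignment corners. Parameters are purely illustrative.

def misaligned_share(initial_fraction, aligned_growth, misaligned_growth, steps):
    """Fraction of total AI capacity that is misaligned after `steps` periods."""
    aligned = (1 - initial_fraction) * (1 + aligned_growth) ** steps
    misaligned = initial_fraction * (1 + misaligned_growth) ** steps
    return misaligned / (aligned + misaligned)

# Start with 1% misaligned AIs.
for steps in (10, 50, 100, 200):
    no_edge = misaligned_share(0.01, 0.10, 0.10, steps)     # equal growth rates
    small_edge = misaligned_share(0.01, 0.10, 0.15, steps)  # 5-point edge
    print(steps, round(no_edge, 3), round(small_edge, 3))
```

Under these toy assumptions the misaligned share stays at 1% with no growth edge, but with even a small persistent edge it dominates eventually regardless of the starting proportion, which is one way to read the worry that aligned AIs need enough of an advantage to outcompete (or police) the misaligned ones.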
The biggest risk is around the beginning. Everyone will be enthusiastic to play around with AGI. If they don't have good knowledge of alignment (because it has been a secret project), then they may not know how it should work and how it should be used safely. They may also buy AGI products from people who haven't done their due diligence in making sure their product is aligned.
It might be that alignment requires special hardware (e.g. there is an equivalent of Spectre that needs to be fixed in current architectures to enable safe AI). Then there is the risk of the software getting out and being run on emulators that don't fix the alignment problem, and you might get lots of misaligned AGIs.
In this scenario you need lots of things that are antithetical to the fooming-AGI strategy of keeping things secret and hoping that a single group brings it home. You need a well-educated populace/international community, and regulation of computer hardware and AGI vendors (preferably before AGI hits). All that kind of stuff.
Knowing whether we are fooming or not is pretty important. The same strategy does not work for both, IMO.
I think if you carefully read everything in these links and let it stew for a bit, you'll get something like my approach.
More generally, having ideas is great but don't stop there! Always take the next step, make things slightly more precise, push a little bit past the point where you have everything figured out. That way you're almost guaranteed to find new territory soon enough. I have an old post about that.