Some researchers at my university have in the past expressed extreme skepticism about AGI and the safety research field, and recently released a preprint taking a stab at the "inevitability of AGI". In this 'journal club' post I take a look at their article, and end up thinking that a) they have a point, AGI might be farther away than I previously thought, and b) they actually made a very AI-safety-like argument in the article, which I'm not sure they realised.
[Epistemic state: posting first drafts in order to produce better thoughts]
Some people argue that humanity is currently not co-opting everything, and that this is evidence that the AI would not necessarily co-opt everything. While the argument is logically true as stated ("there exist AI systems which would not co-opt everything"), in practice it is an abuse of probabilities and gross anthropomorphising of systems which are not necessarily like humans (and which we would have to work to make like ...
I think the frame of "trying to 'solve the whole future' is akin to gripping too hard" might be relevant to my changing my mind about research directions. But it still doesn't present a positive vision for why one should work on e.g. incremental prosaic methods, so it's even less clear to me which areas to focus on. I had been focusing on "actually solving the problem" in the 'safety at all scales' or 'permanent safety' senses of the term, but I think even those directions might need to be minimally developed in the minimal superintelligence-assisted...