At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.
EDIT: Thanks for all the contributions! Keep them coming...
I hadn't seen this before. Hanson's conception of intelligence actually seems much simpler and more plausible than how I had previously imagined it. I think 'intelligence' can easily act as a Semantic Stopsign because it feels like a singular entity in conscious experience, but may actually be quite modular, as Hanson suggests.
Intelligence must be very modular - that's what drives Moravec's paradox (problems like vision and locomotion that we have good modules for feel "easy", while problems we have to solve with "general" intelligence feel "hard"), the Wason selection task results (people don't always have a great "general logic" module even when they could easily solve an isomorphic problem applied to a specific context), etc.
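To make "isomorphic" concrete, here's a minimal sketch (hypothetical, in Python; the specific card and drinking-age framings are just the standard illustration, not anything from the original comment) of why the abstract Wason card task and the familiar drinking-age version are the same logical problem: in both, only the P cases and the not-Q cases can falsify "if P then Q".

```python
# Sketch: the abstract Wason card task and the "drinking age" version are
# logically isomorphic - both ask which cases must be checked to test
# a rule of the form "if P then Q".

def cases_to_check(cases, is_p, is_not_q):
    """Return the cases that must be inspected to test 'if P then Q'.

    Only visible-P cases (which might hide not-Q) and visible-not-Q cases
    (which might hide P) can falsify the rule, so only those need checking.
    """
    return [c for c in cases if is_p(c) or is_not_q(c)]

# Abstract version: cards show a letter on one side, a number on the other.
# Rule: "if a card shows a vowel (P), it has an even number on the back (Q)".
abstract_cards = ["A", "K", "4", "7"]
print(cases_to_check(
    abstract_cards,
    is_p=lambda c: c in "AEIOU",                          # visible vowel
    is_not_q=lambda c: c.isdigit() and int(c) % 2 == 1,   # visible odd number
))  # -> ['A', '7']

# Concrete version: rule "if someone is drinking beer (P), they are over 18 (Q)".
concrete_cases = ["drinking beer", "drinking coke", "age 25", "age 16"]
print(cases_to_check(
    concrete_cases,
    is_p=lambda c: c == "drinking beer",
    is_not_q=lambda c: c == "age 16",
))  # -> ['drinking beer', 'age 16']
```

Same checking rule, same answer structure - yet people reliably get the concrete version right and the abstract one wrong, which is the point about not having a single "general logic" module.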
Does this greatly affect the AGI takeoff debate, though? So long as we can't create a module which is itself capa...