If Strong AI turns out not to be possible, what are our best expectations today as to why?
I'm thinking of trying my hand at writing a sci-fi story; do you think exploring this idea has positive utility? I'm not sure myself: as it is, the idea that an intelligence explosion is possible looks like it could use more public exposure.
I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.
Then you wouldn't exist. Next question?
I think I understand the assumption you're implicitly asserting, and will try to outline it:
1. If Strong AI cannot exist, then there is an intelligence maximum somewhere along the scale of possible intelligence levels, and it is low enough that any AI which appeared Strong to us would exceed it.
2. There is no a priori reason for this maximum to lie above the normal human level yet close to it.
3. Therefore, the proposition "either the intelligence maximum is far above human levels or it is below them" has probability ~1. (Treating