If Strong AI turns out not to be possible, what are today's best guesses as to why?
I'm thinking of trying my hand at writing a sci-fi story; do you think exploring this idea would have positive utility? I'm not sure myself: as it stands, the idea that an intelligence explosion is possible could use more public exposure.
I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.
Depends on what you mean by strong AI. The best we know for sure we can do is much faster human intelligence minus the stupid parts, and with more memory. That's pretty danged smart, but if you don't count that as 'strong AI', then it isn't much of a stretch to suppose that it's the end of the road: we're close enough to optimal that once you've fixed the blatant flaws, you're well into diminishing-returns territory.