If Strong AI turns out not to be possible, what are our best expectations today as to why?
I'm thinking of trying my hand at writing a sci-fi story; do you think exploring this idea has positive utility? I'm not sure myself: if anything, it looks like the idea that an intelligence explosion is a possibility could use more public exposure as it is.
I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.
Do you mean "impossible in principle" or "will never be built by our civilization"?
If the first, then there is a well-known idea, widely accepted without much evidence, that the brain simply can't be simulated by any sort of Turing machine. As an in-story explanation for why there are no AIs in the future, that is enough.
If the second, there is a very real possibility that technological progress will slow to a halt, and we just never reach the technical capability to build an AI. On this topic, some people say that progress is accelerating right now, while others say it has been slowing down since the late 19th century, and of course the future is even less clear.
I didn't distinguish between the two; for me, either would be fine; thanks.