If Strong AI turns out not to be possible, what are our best expectations today as to why?
I'm thinking of trying my hand at writing a sci-fi story; do you think exploring this idea has positive utility? I'm not sure myself. It seems the idea that an intelligence explosion is possible could use more public exposure as it is.
I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.
As with all arguments against strong AI, there are a bunch of unintended consequences.
What prevents someone from, say, simulating a human brain on a computer, then simulating 1,000,000 human brains on a computer, then linking all their cortices with a high-bandwidth connection so that they effectively operate as a superpowered highly-integrated team?
Or carrying out the same feat with biological brains using nanotech?
In both cases, the natural limitations of the human brain have been transcended, and the chances of such objects engineering strong AI go up enormously. You would then have to explain, somehow, why no such extension of human brain capacity can break past the AI barrier.
Why do you think that linking brains together directly would be so much more effective than email?
It's a premise for a sci-fi story, where the topology is never to be discussed. But if you actually think it through in detail... how are you planning to connect your million brains?
Let's say you connect the brains as a 100×100×100 3D lattice, where each brain connects to its 6 nearest neighbours. Far from a closely cooperating team, you get a game of Chinese whispers from the brains on one side to the brains on the other: a message has to pass through hundreds of intermediaries.
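A quick back-of-the-envelope sketch of that lattice (function names are my own, purely illustrative): with only nearest-neighbour links, messages travel by Manhattan distance, so the worst case is (side−1) hops along each of the three axes.

```python
# Hop counts in an L x L x L lattice where each node links only to
# its 6 axis-aligned neighbours, so routes follow Manhattan distance.

def lattice_diameter(side: int) -> int:
    """Worst-case hops between opposite corners: (side - 1) per axis."""
    return 3 * (side - 1)

def average_hops(side: int) -> float:
    """Mean Manhattan distance between two uniformly random nodes.
    Per axis, the expected |i - j| over 0..side-1 is (side**2 - 1) / (3 * side).
    """
    per_axis = (side**2 - 1) / (3 * side)
    return 3 * per_axis

if __name__ == "__main__":
    side = 100  # 100 x 100 x 100 = 1,000,000 brains
    print(lattice_diameter(side))            # 297 hops corner to corner
    print(round(average_hops(side), 1))      # ~100 hops between random pairs
```

So even an *average* pair of brains in this million-brain "team" is separated by about a hundred relays, each of them a whole human brain re-encoding the signal. That's the Chinese-whispers problem in numbers.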