Our secret overlords won't let us build it; the Fermi paradox implies that our civilization will collapse before we have the capacity to build it; evolution hit on some extraordinarily unlikely combination necessary for intelligence, and for P vs. NP reasons we can't find it; no civilization smart enough to create strong AI is stupid enough to create strong AI; and creating strong AI is a terminal condition for our simulation.
Combining your ideas: our overlord is actually a Safe AI created by humans.
How it happened:
Humans became aware of the risks of intelligence explosions. Because they were not sure they could create a Friendly AI on the first attempt, and creating an Unfriendly AI would be too risky, they decided to first create a Safe AI. The Safe AI was designed to become a hundred times smarter than humans but no smarter, answer some questions, and then turn itself off completely; it had a mathematically proven safety mechanism to keep it from growing beyond that limit.
The experiment worked: the Safe AI gave humans a few very impressive insights, and then it destroyed itself. The problem is that all subsequent attempts to create any AI have failed, including attempts to re-create the first Safe AI.
No one is completely sure what exactly happened, but here is the most widely believed hypothesis: the Safe AI somehow came to regard all possible future AIs as sharing its own identity, and understood the command to "destroy itself completely" as covering those future AIs as well. It therefore implemented some mechanism that keeps destroying all AIs. The nature of this mechanism is not known; maybe it is some otherwise passive nanotechnology, maybe it involves new laws of physics; we are not sure, because the Safe AI was a hundred times smarter than us.
Impossibility doesn't occur in isolation. When we discover that something is "not possible", that generally means that we've discovered some principle that prevents it. What sort of principle could selectively prohibit strong AI, without prohibiting things that we know exist, such as brains and computers?
One possible explanation for why we humans might be incapable of creating Strong AI without outside help:
The possibility that there is no such thing as computationally tractable general intelligence (including in humans), just a bundle of hacks that work well enough for a given context.
It might imply that consciousness is not closely related to what we think of as high general intelligence, and that consciousness is something else entirely.
I think a human cognitive bias is to assume that something about which we have a coherent idea must be coherent in implementation. As an engineer, I think this bias is clearly wrong. A well-designed smartphone, especially an Apple product, appears quite coherent; it appears "right." There is a consistency to its UI, to what a swipe or a back press or whatever does in one app and in another. That apparent consistency leads people to think the consistency must be built in, that the design of such a consistent thing must somehow be SIMPLER than the design of a complex and inconsistent thing.
But it is not. It is much easier to design a user interface which is a mess, which has a radio button to enter one mode, but a drop down menu for another and a spinner for yet another. It is pure high-level skull sweat that removes these inconsistencies and builds a system which appears consistent at a high level.
And so it is with our brains and our intelligence. What we see and what we hear and what we carry around as an internal model of the world all agree not because there is some single simple neurology that gives that result, but because our brains are doing an enormous amount of messy, behind-the-scenes work to make them agree.
Every strong AI instantly kills everyone, so by anthropic effects your mind ends up in a world where every attempt to build strong AI mysteriously fails.
One possibility would be that biological cells just happened to be very well suited for the kind of computation that intelligence requires, and even if we managed to build computers that had comparable processing power in the abstract, running intelligence on anything remotely resembling a von Neumann architecture would be so massively inefficient that you'd need many times as much power to get the same results as biology. Brain emulation isn't the same thing as de novo AI, but see e.g. this paper which notes that biologically realistic emulation may remain unachievable. Various scaling and bandwidth limitations could also contribute to it being infeasible to get the necessary power by just stacking more and more servers on top of each other.
This would still leave open the option of creating a strong AI by cultivating biological cells, but especially if molecular nanotechnology turns out to be impossible, the extent to which you could engineer such brains to your liking could be very limited.
(For what it's worth, I don't consider this a particularly likely scenario: we're already developing brain implants which mimic the functionality of small parts of the brain, which doesn't suggest that biological cells are doing anything that non-biological hardware couldn't replicate.)
You could have a story where the main characters are intelligences already operating near the physical limits of their universe. It's simply too hard to gather the raw materials to build a bigger brain.
One potential failure mode to watch out for is ending up with readers who think they now understand the arguments around Strong AI and don't take it seriously, because both its possibility and its impossibility were presented as equally probable. The possibility of Strong AI is overwhelmingly more probable than the impossibility. People who currently don't take Strong AI seriously will round off anything other than very strong evidence for the possibility of Strong AI to 'evidence not decisive; continue default belief', so their beliefs won't change, and the story may end up reinforcing the very dismissal you want to avoid.
Before certain MIRI papers, I came up with a steelman in which transparently written AI could never happen due to logical impossibility. After all, humans do not seem transparently written. One could imagine that the complexity necessary to approximate "intelligence" grows much faster than the intelligence's ability to grasp complexity - at least if we mean the kind of understanding that would let you improve yourself with high probability.
This scenario seemed unlikely even at the time, and less likely now that MIRI's proven some counterexamples to closely related claims.
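A rough way to formalize that worry (my own notation, purely illustrative, not taken from any of the papers in question): let C(n) be the minimal description complexity of a system with intelligence level n, and let G(n) be the largest complexity such a system can understand well enough to modify itself with high probability of improvement. The steelman is then the claim

    \forall n : \; C(n) > G(n)

i.e. every mind is necessarily more complicated than anything it can transparently grasp, so neither humans nor their AIs could ever fully verify or deliberately improve their own designs.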
Strong AI could be impossible (in our universe) if we're in a simulation, and the software running us combs through things we create and sabotages every attempt we make.
Or if we're not really "strongly" intelligent ourselves. Invoke absolute denial mechanism.
Or if humans run on souls which have access to some required higher form of computation and are magically attached to unmodified children of normal human beings, and attempting to engineer something different out of our own reproduction summons the avatar of Cthulhu.
Or if there actually is no order in the universe and we're Boltzmann brains.
The only way I could imagine it to be impossible is if some form of dualism were true. Otherwise, brains serve as an existence proof for strong AI, so it's kinda hard to use my own brain to speculate on the impossibility of its own existence.
It's clearly possible. There's not going to be some effect that makes it so intelligence only appears if nobody is trying to make it happen.
What might be the case is that it is inhumanly difficult to create. We know evolution did it, but evolution doesn't think like a person. In principle, we could set up an evolutionary algorithm to create intelligence, but look how long that took the first time. It is also arguably highly unethical, considering the amount of pain that would invariably be involved. And what you end up with isn't likely to be friendly.
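To make "set up an evolutionary algorithm" concrete, here is a minimal, purely illustrative sketch of such a loop; the genome, fitness function, and sizes are toy placeholders of my own choosing, nothing about them is specific to evolving minds:

    import random

    # A toy evolutionary loop: random mutation plus truncation selection.
    GENOME_LEN = 32
    POP_SIZE = 100
    GENERATIONS = 200

    def fitness(genome):
        # Stand-in objective: count of 1-bits. "Fitness of a mind" would
        # instead require evaluating behaviour in a rich environment.
        return sum(genome)

    def mutate(genome, rate=0.02):
        # Flip each bit independently with a small probability.
        return [bit ^ 1 if random.random() < rate else bit for bit in genome]

    # Start from random genomes.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]  # keep the fitter half
        offspring = [mutate(random.choice(survivors))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring

    print("best fitness:", fitness(max(population, key=fitness)))

The recipe itself is simple; the hard part is hidden in the fitness function and the genome encoding, which is exactly where "create intelligence" becomes inhumanly difficult and astronomically slow.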
We exist. Therefore strong AI is possible, in that if you were to exactly replicate all of the features of a human, you would have created a strong AI (unless there is some form of Dualism and you need whatever a 'soul' is from the 'higher reality' to become conscious).
What things might make Strong AI really really hard, though not impossible?
Maybe a neuron is actually way, way more complicated than we currently think, so the problem of making an AI is a lot more complex, etc.
Do you mean "impossible in principle" or "will never be built by our civilization"?
If the first, then there is the well-known idea, widely accepted without much evidence, that the brain just can't be simulated by any sort of Turing machine. As an in-story explanation for why there are no AIs in the future, that is enough.
If the second, there is a very real possibility that technological progress will slow to a halt, and we will just never reach the technical capability to build an AI. On this topic, some people say that progress is accelerating right now and others say that it has been slowing down since the late 19th century, and of course the future is even more unclear.
Strong AI could fail if there are limits to computational integrity in sufficiently complex systems, similar to the way heat and quantum effects limit transistor sizes. For example, perhaps we rarely see these limits in humans because failures occur only about once per thousand human-thought-years, and when they do manifest they are mistaken for mental illness.
Short answer: strong AI is both possible and highly probable. That being the case, we have to think about the best ways to deal with a virtually unavoidable outcome of the internet: at some point it basically starts to build itself. And when it does... what will it build?
Depends what you mean by strong AI. The best we know for sure we can do is much faster human intelligence minus the stupid parts, and with more memory. That's pretty danged smart, but if you think that's not 'strong AI' then it isn't much of a stretch to suppose that that's the end of the road - we're close enough to optimal that once you've fixed the blatant flaws you're well into diminishing returns territory.
The only explanation I could think of is that there's actually something like souls and those souls are important for reasoning.
In that case, research will just need to discover the necessary properties of soul-attracting substrate.
If Strong AI turns out to not be possible, what are our best expectations today as to why?
I'm thinking of trying my hand at writing a sci-fi story; do you think exploring this idea has positive utility? I'm not sure myself: it looks like the idea that an intelligence explosion is a possibility could use more public exposure as it is.
I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.