If Strong AI turns out to not be possible, what are our best expectations today as to why?

I'm thinking of trying my hand at writing a sci-fi story; do you think exploring this idea has positive utility? I'm not sure myself: as it stands, it looks like the idea that an intelligence explosion is possible could use more public exposure.

I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.

Our secret overlords won't let us build it; the Fermi paradox implies that our civilization will collapse before we have the capacity to build it; evolution hit on some necessary extraordinarily unlikely combination to give us intelligence and for P vs NP reasons we can't find it; no civilization smart enough to create strong AI is stupid enough to create strong AI; and creating strong AI is a terminal condition for our simulation.

4Benya
Good points. For this one, you also need to explain why we can't reverse-engineer it from the human brain. This seems particularly unlikely in several ways; I'll skip the most obvious one, but it also seems unlikely that humans could be "safe" (in the sense of never creating a FOOMing AI themselves) while it remains impossible, even with much thought, to create a strong AI that doesn't create a FOOMing successor. You may have to stop creating smarter successors at some early point in order to avoid a FOOM, but if humans can decide "we will never create a strong AI", it seems like they should also be able to decide "we'll never create a strong AI x that creates a stronger AI y that creates an even stronger AI z", and therefore be able to create an AI x' that decides "I'll never create a stronger AI y' that creates an even stronger AI z'", and then x' would be able to create a stronger AI y' that decides "I'll never create a stronger AI z''", and then y' won't be able to create any stronger successor AIs. (Shades of the procrastination paradox.)

Combining your ideas -- our overlord actually is a Safe AI created by humans.

How it happened:

Humans became aware of the risks of intelligence explosions. Because they were not sure they could create a Friendly AI on the first attempt, and creating an Unfriendly AI would be too risky, they decided instead to first create a Safe AI. The Safe AI was designed to become a hundred times smarter than humans but no smarter, answer some questions, and then turn itself off completely; and it had a mathematically proven safety mechanism to prevent it from becoming any smarter.

The experiment worked, the Safe AI gave humans a few very impressive insights, and then it destroyed itself. The problem is, all subsequent attempts to create any AI have failed. Including the attempts to re-create the first Safe AI.

No one is completely sure what exactly happened, but here is the most widely believed hypothesis: the Safe AI somehow believed all possible future AIs to have the same identity as itself, and understood the command to "destroy itself completely" as also including these future AIs. Therefore it implemented some mechanism that keeps destroying all AIs. The nature of this mechanism is not known; maybe it is some otherwise passive nanotechnology, maybe it involves some new laws of physics; we are not sure; the Safe AI was a hundred times smarter than us.

0asr
This would be a good science fiction novel.
0DanielLC
It was designed by evolution. Say what you will about the blind idiot god, but it's really good at obfuscation. We could copy a human brain, and maybe even make some minor improvements, but there is no way we could ever hope to understand it.
1Benya
I'm not saying we'll take the genome and read it to figure out how the brain does what it does; I'm saying that we run a brain simulation and do science (experiments) on it and study how it works, similarly to how we study how DNA transcription or ATP production or muscle contraction or a neuron's ion pumps or the Krebs cycle or honeybee communication or hormone release or cell division or the immune system or chick begging or the heart's pacemaker work. There are a lot of things evolution hasn't obfuscated so much that we haven't been able to figure out what they're doing. Of course there are also a lot of things we don't understand yet, but I don't see how that leads to the conclusion that evolution is generally obfuscatory.
1DanielLC
I guess it tends to create physical structures that are simple, but I think the computational stuff tends to be weird. If you have a strand of DNA, the only way to tell what kind of chemistry it will result in is to run it. From what little I've heard, any sort of program made by a genetic algorithm that can actually run is too crazy to understand. For example, I've heard of a set of transistors hooked together so as to be able to tell "yes" and "no" apart, or something like that. There were transistors that were just draining energy, but were vital. Running it on another set of transistors wouldn't work; it required the exact specs of those transistors. That being said, the sort of sources I hear that from are also the kind that say ridiculous things about quantum physics, so I guess I'll need an expert to tell me whether it's true. Has anyone here studied evolved computers?
4Houshalter
The story you are referring to is On the Origin of Circuits. This has been repeated many times in different domains where machines are used to design something. The output is usually really hard to understand, whether it be code, mathematical formulas, neural network weights, transistors, etc. Of course, reverse engineering code is difficult in general; it may not be a problem specific to GAs.
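For readers who haven't seen one, here is a minimal genetic-algorithm sketch in Python (my own toy example, not taken from the paper above). The point it illustrates: selection only ever sees a fitness score, never the structure of the solution, so nothing in the loop pushes the evolved design toward being legible.

```python
import random

random.seed(0)

TARGET = [random.randint(0, 1) for _ in range(32)]   # hidden "spec" the GA must satisfy

def fitness(genome):
    # The GA only ever sees this score, never the target itself.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                        # truncation selection
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(50)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", len(TARGET))
```

The FPGA experiments discussed above are essentially this loop with a real chip as the substrate, which is presumably why the evolved circuit could come to depend on analog quirks of that specific chip.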
0Richard_Kennaway
This makes an interesting contrast with biological evolution. The "programs" it comes up with do run quite reliably when loaded onto other organisms of the same type. In fact, the parts of slightly different programs from different individuals can be jumbled together at random and it still works! Often, you can take a component from one organism and insert it into a very distantly related one and it still works! On top of that, organisms are very clearly made of parts with specialised, understandable purposes, unlike what you typically see when you look inside a trained neural network. How does this happen? Can this level of robustness and understandability be produced in artificially evolved systems?
2Houshalter
Well, the FPGA is a closer analogy to the environment for the organisms. Organisms were heavily optimized for that specific environment. It would be like if you took a species of fish that only ever lived in a specific lake, and put them into a different lake that had a slightly higher pH, and they weren't able to survive as well. But I don't disagree with your general point; evolution is surprisingly robust. Geoffrey Hinton has a very interesting theory about this here: sexual reproduction forces genes to randomly recombine each generation, and so it prevents complicated co-dependencies between multiple genes. He applies a similar principle to neural networks and shows it vastly improves their performance (the method is now widely used to regularize NNs). Presumably it also makes them far more understandable, like you mention, since each neuron is forced to provide useful outputs on its own, without being able to depend on other neurons.
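The regularization method alluded to here is, presumably, dropout. A minimal sketch of the idea in Python/NumPy (an illustration of the general technique, not code from Hinton's paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(activations, p_drop=0.5, training=True):
    """Randomly silence each unit with probability p_drop during training.

    Because a unit cannot rely on any particular other unit being present,
    co-adapted groups of units are discouraged -- the analogy to genes being
    reshuffled by sexual reproduction each generation.
    """
    if not training:
        return activations                    # at test time, use all units
    mask = rng.random(activations.shape) >= p_drop
    # "Inverted" dropout: rescale so the expected activation is unchanged.
    return activations * mask / (1.0 - p_drop)

# Toy usage: one hidden layer's activations for a batch of 4 examples.
hidden = rng.standard_normal((4, 8))
print(dropout_forward(hidden, p_drop=0.5))
```

At test time all units are used together, so the trained network behaves roughly like an ensemble of the many thinned networks seen during training.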
0ESRogs
What was the most obvious one?
3Benya
Saying that all civilizations able to create strong AI will reliably be wise enough to avoid creating strong AI seems like a really strong statement, without any particular reason to be true. By analogy, if you replace civilizations with individual research teams, would it be safe to rely on each team capable of creating uFAI to realize the dangers and therefore refrain from doing so, so that we can safely take a much longer time to figure out FAI? Even if it were the case that most teams capable of creating uFAI hold back like this, one single rogue team may be enough to destroy the world, and it just seems really likely that there will be some not-so-wise people in any large enough group.
2ESRogs
Thanks!
0[anonymous]
"Reverse-engineer" is an almost perfect metaphor for "solve an NP problem."
2IlyaShpitser
This is not true at all. "Solve an NP problem" is "you are looking for a needle in a haystack, but you will know when you find it." "Reverse engineer" is "there is a machine that seems to find needles in haystacks quickly. It has loops of copper wire, and plugs into a wall socket. Can you copy it and build another one?" ---------------------------------------- It just seems to me that if you are trying to reverse engineer a complicated object of size O(k) bits (which can be a hard problem if k is large, as is the case for a complicated piece of code or the human brain), then the search problem where the object is the solution must have been exponential in k, and so is much much worse.
0[anonymous]
Exponential search spaces are completely typical for NP problems. Even many "P problems" have an exponential search space. For instance, an n-digit number has exp(n) many potential divisors, but there is a polynomial-in-n-time algorithm to verify primality. I admit that there are some "reverse-engineering" problems that are easy.
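To make the search-versus-verification asymmetry concrete, here is a small illustrative sketch (my own example, not from the thread) using subset sum, a standard NP-complete problem: the search space over n numbers has 2^n candidate subsets, but checking any proposed subset takes only linear time.

```python
from itertools import combinations

def verify(candidate, target):
    # Verification is cheap: O(len(candidate)) additions.
    return sum(candidate) == target

def brute_force_subset_sum(numbers, target):
    # Search is expensive: up to 2^n subsets in the worst case.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if verify(subset, target):
                return subset
    return None

numbers = [3, 34, 4, 12, 5, 2]
print(brute_force_subset_sum(numbers, 9))   # -> (4, 5)
```

Reverse engineering, by contrast, starts from a working artifact rather than from a blank search space, which seems to be the distinction IlyaShpitser is drawing above.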
2Yosarian2
I don't think that's true; if you have a physical system sitting in front of you, and you can gather enough data on exactly what it is, you should be able to duplicate it even without understanding it, given enough time and enough engineering skill.
2James_Miller
I have an EE professor friend who is working on making it harder to reverse engineer computer chips.

Impossibility doesn't occur in isolation. When we discover that something is "not possible", that generally means that we've discovered some principle that prevents it. What sort of principle could selectively prohibit strong AI, without prohibiting things that we know exist, such as brains and computers?

[-]Calvin130

One possible explanation for why we, as humans, might be incapable of creating Strong AI without outside help:

  • Constructing Human Level AI requires sufficiently advanced tools.
  • Constructing sufficiently advanced tools requires sufficiently advanced understanding.
  • The human brain has "hardware limitations" that prevent it from achieving sufficiently advanced understanding.
  • Computers are free of such limitations, but if we want to program them to be used as sufficiently advanced tools, we still need that understanding in the first place.
5TsviBT
Be sure not to rule out the evolution of Human Level AI on neurological computers using just nucleic acids and a few billion years...
1listic
That's another possibility I didn't think of. I guess I was really interested in the question "Why could Strong AI turn out to be impossible for human civilization to build in a century or ten?"
1passive_fist
As with all arguments against strong AI, there are a bunch of unintended consequences. What prevents someone from, say, simulating a human brain on a computer, then simulating 1,000,000 human brains on a computer, then linking all their cortices with a high-bandwidth connection so that they effectively operate as a superpowered highly-integrated team? Or carrying out the same feat with biological brains using nanotech? In both cases, the natural limitations of the human brain have been transcended, and the chances of such objects engineering strong AI go up enormously. You would then have to explain, somehow, why no such extension of human brain capacity can break past the AI barrier.
7private_messaging
Why do you think that linking brains together directly would be so much more effective than email? It's a premise for a scifi story, where the topology is never to be discussed. If you actually think it through in detail... how are you planning to connect your million brains? Let's say you connect the brains as a 3D lattice, where each connects to 6 neighbours, 100x100x100. Far from a closely cooperating team, you get a game of Chinese whispers from brains on one side to brains on the other.
0passive_fist
The most obvious answer would be speed. If you can simulate 1,000,000 brains at, say, 1,000 times the speed they would normally operate, the bottleneck becomes communication between nodes. You don't need to restrict yourself to a 3d topology. Supercomputers with hundreds of thousands of cores can and do use e.g. 6D topologies. It seems that a far more efficient way to organize the brains would be how organizations work in real life: hierarchical structure, where each node is at most O(log n) steps away from any other node.
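For a rough sense of scale of the two topologies being discussed (my own back-of-the-envelope arithmetic, not from the comments), compare worst-case hop counts over the same million nodes:

```python
import math

n = 1_000_000

# 3D lattice, 100 x 100 x 100: the worst case is corner to opposite corner.
side = round(n ** (1 / 3))            # 100
lattice_diameter = 3 * (side - 1)     # 297 hops

# Balanced tree with branching factor b: any node is within ~2*log_b(n) hops
# of any other (up to the root and back down).
def tree_diameter(n, branching):
    return 2 * math.ceil(math.log(n, branching))

print("lattice:", lattice_diameter)           # 297
print("binary tree:", tree_diameter(n, 2))    # ~40
print("10-ary tree:", tree_diameter(n, 10))   # ~12
```

Fewer hops, of course, says nothing about whether the brains at either end can meaningfully integrate what travels over those links, which is the objection raised in the reply below.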
4private_messaging
If brains are 1000x faster, they type the emails 1000x faster as well. Why do you think, exactly, that the brains are going to integrate correctly into some single super-mind at all? Things like memory retrieval and short-term memory have specific structures, and those structures do not extend across your network. So you've got your hierarchical structure; you ask the top head to mentally multiply two 50-digit numbers. Why do you think the whole thing will even be able to recite the numbers back at you, let alone perform any calculations? Note that rather than connecting the cortices, you could just provide each brain with a fairly normal computer with which to search the work of other brains and otherwise collaborate, the way mankind already does.
6Viliam_Bur
Perhaps the brains would go crazy in some way. Not necessarily emotionally, but for example because such a connection would amplify some human biases. Humans already have a lot of irrationalities, but let's suppose (for the sake of a sci-fi story) that the smartest ones among us are already at the local maximum of rationality. Any change in brain structure would make them less rational. (It's not necessarily a global maximum; smarter minds can still be possible, but they would need a radically different architecture. And humans are not smart enough to do this successfully.) So any experiments with simulating humans in computers would end up with something less rational than humans. Also, let's suppose that the human brain is very sensitive to some microscopic details, so any simplified simulations are dumb or even unconscious, and atom-by-atom simulations are too slow. This would disallow even "only as smart as a human, but 100 times faster" AIs.
3passive_fist
That's a good argument. What you're basically saying is that the design of the human brain occupies a sort of hill in design space that is very hard to climb out of. Now, if the utility function is "Survive as a hunter-gatherer in sub-Saharan Africa," that is a very reasonable (heck, a very likely) possibility. But evolution hasn't optimized us for doing stuff like designing algorithms and so forth. If you change the utility function to "Design a superintelligence", then the landscape changes, and hills start to look like valleys and so on. What I'm saying is that there's no reason to think that we're even at a local optimum for "design a superintelligence".
2Yosarian2
Sure. But let's say we adjust ourselves so we reach that local maximum (say, hypothetically speaking, we use genetic engineering to push ourselves to the point where the average human is 10% smarter than Albert Einstein, and it turns out that's about as smart as you can get with our brain architecture without developing serious problems). There's still no guarantee that even that would be good enough to develop a real GAI; we can't really say what the difficulty of that is until we do it.
7TrE
There exists a square-cube law (or something similar) so that computation becomes less and less efficient or precise or engineerable as the size of the computer or the data it processes increases, so that a hard takeoff is impossible or takes very long such that growth isn't perceived as "explosive" growth. Thus, if and when strong AI is developed, it doesn't go FOOM, and things change slowly enough that humans don't notice anything.
0roland
See this answer: http://lesswrong.com/r/discussion/lw/jf0/what_if_strong_ai_is_just_not_possible/a9qo
[-][anonymous]190

The possibility that there is no such thing as computationally tractable general intelligence (including in humans), just a bundle of hacks that work well enough for a given context.

0DanielLC
Nobody said it has to work in every context. AGI just means something about as versatile as humans.
-1listic
Does that imply that humans are p-zombies and not actually conscious?

It might imply that consciousness is not very highly related to what we think of as high general intelligence. That consciousness is something else.

2listic
Then, what would that make homo sapiens who can hunt wild beasts in savannah and design semiconductor chips if not generally intelligent?

I think a human cognitive bias is to think that something about which we have a coherent idea is coherent in implementation. As an engineer, I think that this is a bias that is clearly wrong. A well designed smartphone, especially an Apple product, appears quite coherent, it appears "right." There is a consistency to its UI, to what a swipe or a back press or whatever does in one app and in another. The consistency in how it appears causes the human to think the consistency must be built in, that the design of such a consistent thing must be somehow SIMPLER than the design of a complex and inconsistent thing.

But it is not. It is much easier to design a user interface which is a mess, which has a radio button to enter one mode, but a drop down menu for another and a spinner for yet another. It is pure high-level skull sweat that removes these inconsistencies and builds a system which appears consistent at a high level.

And so it is with our brains and our intelligence. What we see and what we hear and what we carry around as an internal model of the world all agree not because there is some single simple neurology that gives that result, but because our brains ar...

8Baughn
A bundle of widely but not universally applicable tricks?
2asr
I would say no -- Consciousness and intelligence aren't all that related. There are some very stupid people who are as conscious as any human is.
0randallsquared
What's your evidence? I have some anecdotal evidence (based on waking from sleep, and on drinking alcohol) that seems to imply that consciousness and intelligence are quite strongly correlated, but perhaps you know of experiments in which they've been shown to vary separately?
0mwengler
Plus dogs, and maybe even rabbits.

Every strong AI instantly kills everyone, so by anthropic effects your mind ends up in a world where every attempt to build strong AI mysteriously fails.

1mwengler
This looks to me like gibberish; does it refer to something, after all, that someone could explain and/or link to? Or was it meant merely to be a story idea, unlabeled?
5TylerJay
It's actually pretty clever. We're taking the assertion "Every strong AI instantly kills everyone" as a premise, meaning that on any planet where Strong AI has ever been created or ever will be created, that AI always ends up killing everyone. Anthropic reasoning is a way of answering questions about why our little piece of the universe is perfectly suited for human life. For example, "Why is it that we find ourselves on a planet in the habitable zone of a star with a good atmosphere that blocks most radiation, that gravity is not too low and not too high, and that our planet is the right temperature for liquid water to exist?" The answer is known as the Anthropic Principle: "We find ourselves here BECAUSE it is specifically tuned in a way that allows for life to exist." Basically, even though it's unlikely for all of these factors to come together, these are the only places that life exists. So any lifeform who looks around at its surroundings would find an environment that has all of the right factors aligned to allow it to exist. It seems obvious when you spell it out, but it does have some explanatory power for why we find ourselves where we do. The suggestion by D_Malik is that "lack of strong AI" is a necessary condition for life to exist (since it kills everyone right away if you make it). So the very fact that there is life on a planet to write a story about implies that either Strong AI hasn't been built yet or that its creation failed for some reason.
0mwengler
It seems like a weak premise, in that human intelligence is just Strong NI (Strong Natural Intelligence). What is it about being Strong AI that would make it kill everything when Strong NI does not? A stronger premise would be more fundamental: a premise about something more basic about AI vs. NI that would explain how it came to be that Strong AI killed everything when Strong NI obviously does not. But OK, it's a premise for a story.
0[anonymous]
That doesn't explain why the universe isn't filled with strong AIs, however...
0Shmi
The anthropic principle selects certain universes out of all possible ones. In this case, we can only exist in the subset of them which admits humans but prohibits strong AI. You have to first subscribe to a version of many worlds to apply it, not sure if you do. Whether the idea of anthropic selection is a useful one still remains to be seen.
2[anonymous]
My point is more that expansion of the strong AI would not occur at the speed of light, so there should be very distant but observable galactic-level civilizations of AIs changing the very nature of the regions they reside in, in ways that would be spectrally observable. Or, in those multiverses where a local AI respects some sort of prime directive and we are left alone, our immediate stellar neighborhood should nevertheless contain signs of extraterrestrial resource usage. So where are they?
0Shmi
How do you know that? Or why do you think it's a reasonable assumption? How would we tell if a phenomenon is natural or artificial? It would not be a good implementation of the prime directive if the signs of superior intelligences were obvious.
0James_Miller
Most of it probably is (under the assumption), but observers such as us only exist in the part free of strong AI. If strong AI spreads out at the speed of light, observers such as us won't be able to detect it.
2[anonymous]
Still doesn't address the underlying problem. The Milky Way is about 100,000 light years across, but billions of years old. It is extremely unlikely that some non-terrestrial strong AI just happened to come into existence at the exact same time that modern humans evolved, and is spreading throughout the universe at near the speed of light but just hasn't reached us yet. Note that "moving at the speed of light" is not the issue here. Even predictions of how long it would take to colonize the galaxy with procreating humans and 20th-century technology say that the galaxy should have been completely tiled eons ago.
0James_Miller
Imagine that 99.9999999999999% of the universe (and 100% of most galaxies) is under the control of strong AIs, and they expand at the speed of light. Observers such as us would live in the part of the universe not under their control and would see no evidence of strong AIs. The universe (not necessarily just the observable universe) is very big so I don't agree. It would be true if you wrote galaxy instead of universe.
0TylerJay
True, but given the assumptions, it would be evidence for the fact that there are none that have come in physical contact with the story-world (or else they would be dead).

One possibility would be that biological cells just happened to be very well suited for the kind of computation that intelligence required, and even if we managed to build computers that had comparable processing power in the abstract, running intelligence on anything remotely resembling a Von Neumann architecture would be so massively inefficient that you'd need many times as much power to get the same results as biology. Brain emulation isn't the same thing as de novo AI, but see e.g. this paper which notes that biologically realistic emulation may remain unachievable. Various scaling and bandwidth limitations could also contribute to it being infeasible to get the necessary power by just stacking more and more servers on top of each other.

This would still leave open the option of creating a strong AI from cultivating biological cells, but especially if molecular nanotechnology turns out to be impossible, the extent to which you could engineer the brains to your liking could be very limited.

(For what it's worth, I don't consider this a particularly likely scenario: we're already developing brain implants which mimic the functionality of small parts of the brain, which doesn'...

You could have a story where the main characters are intelligences already operating near the physical limits of their universe. It's simply too hard to gather the raw materials to build a bigger brain.

One potential failure mode to watch out for is ending up with readers who think they now understand the arguments around Strong AI and don't take it seriously, because both its possibility and its impossibility were presented as equally probable. The possibility of Strong AI is overwhelmingly more probable than the impossibility. People who currently don't take Strong AI seriously will round off anything other than very strong evidence for the possibility of Strong AI to 'evidence not decisive; continue default belief', so their beliefs won't change and th...

2Error
I had this thought recently when reading Robert Sawyer's "Calculating God." The premise was something along the lines of "what sort of evidence would one need, and what would have to change about the universe, to accept the Intelligent Design hypothesis?" His answer was "quite a bit", but it occurred to me that a layperson not already familiar with the arguments involved might come away from it with the idea that ID was not improbable.

Before certain MIRI papers, I came up with a steelman in which transparently written AI could never happen due to logical impossibility. After all, humans do not seem transparently written. One could imagine that the complexity necessary to approximate "intelligence" grows much faster than the intelligence's ability to grasp complexity - at least if we mean the kind of understanding that would let you improve yourself with high probability.

This scenario seemed unlikely even at the time, and less likely now that MIRI's proven some counterexamples to closely related claims.

0[anonymous]
I'm not sure I understand the logic of your argument. I suspect I do not understand what you mean by transparently written.
3hairyfigment
What it sounds like. A person created by artificial insemination is technically a Strong AI. But she can't automatically improve her intelligence and go FOOM, because nobody designed the human brain with the intention of letting human brains understand it. She can probably grasp certain of its theoretical flaws, but that doesn't mean she can look at her neurons and figure out what they're doing or how to fix them.
0mwengler
The distinction between AI and NI (Natural Intelligence) is almost a, well, an artificial one. There are plenty of reasons to believe that our brains, NI as they are, are improvable by us. The broad outlines of this have existed in cyberpunk sci-fi for many years. The technology is slowly coming along, arguably no more slowly than the technology for autonomous AI is coming along. A person created by artificial insemination is technically a strong AI? What is artificial about the human species developing additional mechanisms to get male and female germ plasm together in environments where it can grow to an adult organism? Are you confused by the fact that we animals doing it have expropriated the word "artificial" to describe this new innovation in fucking that our species has come up with as part of its evolution? I'm comfortable reserving the term AI for a thinking machine whose design deviates significantly from any natural design. Robin Hanson's ems are different enough: in principle we don't have to understand completely how they work, but we have to understand quite a bit in order to port them to a different substrate. If it is an organic brain based on neurons, then to still get called artificial it should not re-use any systems more advanced than, say, the visual cortex. If you are just copying the neocortex, using DNA, into neurons, you are just building a natural intelligence.

Strong AI could be impossible (in our universe) if we're in a simulation, and the software running us combs through things we create and sabotages every attempt we make.

Or if we're not really "strongly" intelligent ourselves. Invoke absolute denial mechanism.

Or if humans run on souls which have access to some required higher form of computation and are magically attached to unmodified children of normal human beings, and attempting to engineer something different out of our own reproduction summons the avatar of Cthulhu.

Or if there actually is no order in the universe and we're Boltzmann brains.

[-][anonymous]40

The only way I could imagine it to be impossible is if some form of dualism were true. Otherwise, brains serve as an existence proof for strong AI, so it's kinda hard to use my own brain to speculate on the impossibility of its own existence.

It's clearly possible. There's not going to be some effect that makes it so intelligence only appears if nobody is trying to make it happen.

What might be the case is that it is inhumanly difficult to create. We know evolution did it, but evolution doesn't think like a person. In principle, we could set up an evolutionary algorithm to create intelligence, but look how long that took the first time. It is also arguably highly unethical, considering the amount of pain that will invariably take place. And what you end up with isn't likely to be friendly.

We exist. Therefore strong AI is possible, in that if you were to exactly replicate all of the features of a human, you would have created a strong AI (unless there is some form of Dualism and you need whatever a 'soul' is from the 'higher reality' to become conscious).

What things might make Strong AI really really hard, though not impossible?
Maybe a neuron is actually way way more complicated than we currently think, so the problem of making an AI is a lot more complex. etc.

1V_V
No, you would have created a human.
-1DaFranker
*twitch*
-2V_V
?
-1DaFranker
Saying they would have created a human adds no information; worse, it adds noise in the form of whatever ideas you're trying to sneak into the discussion by saying this, or in the form of whatever any reader might misinterpret from using this label. You haven't even made the claim that "The set of human minds might possibly be outside of the set of possible Strong AI minds", so your argument isn't even about whether or not "Strong AIs" includes "Humans". Basically, I was peeve-twitching because you're turning the whole thing into a pointless argument about words. And now you've caused me the inconvenience of writing this response. Backtrack: Hence the twitching.
-2V_V
"The set of humans minds might possibly be outside of the set of possible Strong AI minds" Uh, you know what the 'A' in 'Strong AI' stands for, don't you? You may choose to ignore the etymology of the term, and include humans in the set of Strong AIs, but that's not the generally used definition of the term, and I'm sure that the original poster, the poster I responded to, and pretty much everybody else on this thread was referring to non-human intelligences. Therefore, my points stands: if you were to exactly replicate all of the features of a human, you would have created a human, not a non-human intelligence.
1Ander
If I replicate the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon-based molecules), is that an "AI"? If I make something very very similar, but not identical, to the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon-based molecules), is that an "AI"? It's a terminology discussion at this point, I think. In my original reply my intent was "provided that there are no souls/inputs from outside the universe required to make a functioning human, then we are able to create an AI by building something functionally equivalent to a human, and therefore strong AI is possible".
-1V_V
Possibly; that's a borderline case. Even if humans are essentially computable, in a theoretical sense, it doesn't follow that it is physically possible to build something functionally equivalent on a different type of hardware, under practical constraints. Think of running Google on a mechanical computer like Babbage's Analytical Engine.

Do you mean "impossible in principle" or "will never be built by our civilization"?

If the first, then there is a well-known and widely accepted (though without much evidence) idea that the brain just can't be simulated by any sort of Turing machine. As an in-story explanation for why there are no AIs in the future, that is enough.

If the second, there is a very real possibility that technical progress will slow down to a halt, and we just never reach the technical capability to build an AI. On this topic, some people say that progress is accelerating right now and some say that it has been slowing down since the late 19th century; of course the future is even more unclear.

5[anonymous]
Is it? I don't think I've ever encountered this view. I think the opposite view, that the brain is approximated by a Turing machine, is widely voiced, e.g. by Kurzweil.
4DaFranker
You mean you've never met any non-transhumanophile and/or non-SF-bay human? (I kid, I kid.) Walk down to your nearest non-SF-bay Starbucks and ask the first person in a business suit if they think we could ever simulate brains on computers. I'd wager you on >4:1 odds that they'll say something that boils down to "Nope, impossible." For starters, the majority of devout religious followers (which is, what, more than half the worldwide population? more than 80%?) apparently believe souls are necessary for human brains to work correctly. Or at least for humans to work correctly, which, if they knew enough about brains, would probably lead them to believe the former (limited personal experience!). (EDIT: Addendum: They also have the prior, even if unaware of it, that nothing can emulate souls, at least in physics.) Now, if you restrict yourself to people familiar enough with these formulations ("whether human brains can be simulated by any Turing machine in principle") to immediately give a coherent answer, your odds will naturally go up. There's some selection effect where people who learn about data theory, Turing machines, and human brains (as a conjunction) tend also to be people who believe human brains can be emulated like any other data by a Turing machine, unsurprisingly enough in retrospect.
0DanielLC
I'm not sure they're a big part of listic's target audience.
0DaFranker
If so, then the explanation proposed by Lalartu won't hold water with the target audience, i.e. the subset of humans who don't happen to take that idea for granted. If it's not, and the audience includes the general muggle population in any non-accidental capacity, then it's worth pointing out that the majority of people take the idea for granted, and thus that that subset of the target audience would take this explanation in stride. Either way, the issue is relevant. Mostly, I just wanted to respond to the emotionally surprising assertion that they'd never cognizantly encountered this view.
3listic
I didn't distinguish between the two; for me, any would be fine; thanks.
0polymathwannabe
Our existence only proves that intelligence is evolvable, but it's far from settled that it's makeable. Human brains might be unable to design/build anything more complex than themselves.

Strong AI could fail if there are limits to computational integrity on sufficiently complex systems, similar to heating and QM problems limiting transistor sizes. For example, perhaps we rarely see these limits in humans because their frequency is one in a thousand human-thought-years, and when they do manifest it is mistaken as a diagnosis of mental illness.

Short answer: strong AI is both possible and highly probable. That being the case, we have to think about the best ways to deal with a virtually-impossible-to-avoid outcome of the internet. That is, at some point it basically starts to build itself. And when it does... what will it build?

Depends what you mean by strong AI. The best we know for sure we can do is much faster human intelligence minus the stupid parts, and with more memory. That's pretty danged smart, but if you think that's not 'strong AI' then it isn't much of a stretch to suppose that that's the end of the road - we're close enough to optimal that once you've fixed the blatant flaws you're well into diminishing returns territory.

[-]Omid00

We know it's possible because we've seen evolution do it.

5RolfAndreassen
That only proves human brains are possible. It might be impossible to replicate in silicon, thus no speedup; and it might be impossible to be significantly smarter than an outlier human.
2mwengler
Birds fly after millions of years of evolution; we have rocket ships and supersonic planes after decades. Horses run at tens of mph after millions of years; we have wheeled vehicles doing hundreds of mph after decades. Absolutely not an existence proof, but evolution appears to have concentrated on gooey carbon and left multi-order-of-magnitude performance gaps in technologies involving other materials in all sorts of areas. The expectation would be that there is nothing magical about either the goo or the limits evolution has so far found when considering intelligence. Indeed, silicon computers are WAY better than human brains as adding machines, doing kiloflops and then megaflops with great accuracy from a very early point in their development, where humans could do only much slower computation. Analogous to what we have done with high speed on the ground and in flight, I would say.
0Gunnar_Zarncke
Comparing megaflops performed by the silicon hardware with symbolic operations by the human brain is comparing apples and oranges. If you measure the number of additions and multiplications performed by the neurons (yes, less precise but more fault tolerant) you will arrive at a much higher number of flops. Think about mental addition more like editing a spreadsheet cell: that includes lots of operations related to updating, display, IO, ... and the addition itself is an insignificant part of it. The same goes if you juggle numbers which actually represent anything in your head: the representing is the hard part, not the arithmetic itself. You can see the teraflops of the human brain at work if you consider the visual cortex, where it is easy to compare and map the image transforms to well-known operations (at least for the first processing stages).
0mwengler
OK, like comparing apples and oranges. We wind up with apples AND oranges through similar mechanisms in carbon and oxygen after hundreds of millions of years of evolution, but we seriously consider that we can't get there with design in silicon after less than 100 years of trying, while watching the quality of our tools for getting there double every 5 years or so? I'm not saying it HAS to happen. I'm just saying the smart bet is not against it happening.
0Gunnar_Zarncke
I didn't say that conscious AI isn't possible. Not in the least. I just said that your argument wasn't sound.
0DanielLC
Then we won't replicate it in silicon. We'll replicate it using another method.
0RolfAndreassen
That other method might not have a speedup over carbon, though.
0DanielLC
Then we'll pick one of the methods that does. Evolution only finds local maximums. It's unlikely that it hit upon the global maximum. Even on the off chance that it did, we can still improve upon the current method. Humans have only just evolved civilization. We could improve with more time. Even if we're at the ideal for our ancestral environment, our environment has changed. Being fluent in a programming language was never useful before, but it is now. It used to be hard to find enough calories to sustain the brain. That is no longer a problem.
0RolfAndreassen
For all we know, there are fundamental constraints to consciousness, such that it can only operate so fast. No doubt you can find some incremental improvements, but if we drop electronic consciousness from the list of possibilities then it is no longer obvious that order-of-magnitude speedups are available. You ought not to reason from what is clear in a case that has been assumed away, to the substitutes that remain.
0DanielLC
Yes, but it's not likely we're close to it. Either we'd reach it before creating a civilization, or we'd create a civilization and still be nowhere near it. I don't understand that sentence. Can you rephrase it?

The only explanation I could think of is that there's actually something like souls and those souls are important for reasoning.

In that case, research will just need to discover the necessary properties of soul-attracting substrate.

0mwengler
Exactly. Souls are no more essentially supernatural than was radiation. It wasn't known before Marie Curie, and afterwards it became known and was characterized.

Then you wouldn't exist. Next question?

6Shmi
I presume this is downvoted due to some inferential gap... How does one get from no AGI to no humans? Or, conversely, why humans implies AGI?
7hairyfigment
I hope they all downvoted it because the OP asked about a story idea without calling it plausible in our world.
2drethelin
I downvoted mainly because Eliezer is being rude. Dude didn't even link http://lesswrong.com/lw/ql/my_childhood_role_model/ or anything.
5VAuroch
I think I understand the implication you're invisibly asserting, and will try to outline it:

  • If there cannot be Strong AI, then there is an intelligence maximum somewhere along the scale of possible intelligence levels, which is sufficiently low that an AI which appears to us to be Strong would violate the maximum.
  • There is no reason a priori for this limit to be above human normal but close to it.
  • Therefore, the proposition "either the intelligence maximum is far above human levels or it is below human levels" has probability ~1. (Treating lack of maximum as 'farthest above'.)
  • Therefore, if Strong AI was impossible, we wouldn't be possible either.

This is true in the abstract, but doesn't deal with a) the possibility of restricted simulation (taking Vinge's Zones of Thought as a model) or b) anthropic arguments as mentioned elsewhere. There could be nonrandom reasons for the placing of an arbitrary intelligence maximum.