I periodically get email from folks who, having read "Accelerando", assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. I find this mildly distressing, and so I think it's time to set the record straight and say what I really think.

Short version: Santa Claus doesn't exist.

- Charles Stross, Three arguments against the singularity, 2011-06-22

EDITED TO ADD: don't get your hopes up, this is pretty weak stuff.


I sometimes update in favor of a controversial claim when I see a smart opponent of that claim put forth their best argument against it, and the argument just isn't that good.

I'm not sure which parts of the essay are supposed to be more convincing than others, so I made a standing offer on HN to argue against any individual point.

FWIW, I hardly updated my views about the future at all - but I did update my views about Charles Stross - and it enlarged my list of strange things people actually think about intelligent machines a little.

As you should; otherwise you could never update the other way.

But he's not addressing issues with getting from here to superhuman intelligence, just with its theoretical possibility.

It seems like his best points are tangential to the supposed subject of argument.

He strangely seems to take embryonic personhood for granted while in the same paragraph noting that the relevant metric is consciousness. His assertion that general intelligence requires emotional instability or large amounts of sleep seems unsupported in principle. Also, his suggestion that all programs will become intelligent once any AI exists is absurd. The idea of reorienting an AI's sense of self is an interesting one, though.

He seems to have a very primitive concept of uploading - if we can upload a brain, we can simulate a body for it. If uploaded humans want to be able to compete with AI, they could participate in specialized training environments to rewire their senses toward the pure digital instead of analog-to-digital. Rewiring senses is something we already know how to do and in a digital upload, there are no biological limits on neuroplasticity. For those who would rather remain in a human-esque form, set up digital reservations, problem solved.

Wow, that Wired link is amazing. I barely believe some parts--balance sense staying after the prosthetics come off, and visual data to the tongue being subjectively perceived as images. I'd love a chance to try one of those devices out.

I remembered a previous conversation about the compass belt described in the article; it contains several links for building or ordering one.

By the way, I do not think that it is a coincidence that Robin Hanson also wrote a blog post on the Singularity yesterday evening. I wrote both of them at the weekend ;-)

Did they field your questions?

Did they field your questions?

Stross did; Hanson said he doesn't like email interviews. But Stross didn't explicitly give me permission to publish his answers. And since I was mainly interested in his opinion for myself, I didn't bother to ask him again. But I don't think he would mind at all if I provide an extract.

He seems to think that embodied cognition plays an important role. In other words, human intelligence is strongly dependent on physiology.

Further, as far as I understand him, he believes to some extent in the Kurzweilian picture of technology adoption: it will slowly creep into our environment in the form of expert systems capable of human-like information processing.

Other points include that risks from friendly AI are to be taken seriously as well. For example, having some superhuman intelligence around who has it all figured out will deprive much of our intellectual curiosity of its value.

He mostly commented on existential risks, e.g. that we don't need AI to wipe us out:

I'm much more concerned about cheap, out-of-control gene hacking and synthetic organisms. The Australian mousepox/interleukin-4 experiment demonstrates the probable existence of a short, cheap path to human extinction, and that's something they stumbled across pretty much by accident. There's a lot we don't know about human immunobiology that can bite us on the ass -- see the TGN1412 accident, for example, or SARS. I find the idea of some idiot accidentally releasing a strain of the common cold that triggers cytokine storm terrifying. Not because anyone would want to do that, but because it could happen by accident.

Grey goo seems to be thermodynamically impossible, and anyway, we've got an existence proof for nanotechnology replicators in the shape of carbon-based life -- but knowing grey goo is impossible is no damn consolation if you've got necrotising fasciitis or ebola zaire.

His comment on AI and FOOM:

Similarly, I suspect we're still a way from understanding the neurobiological underpinnings of human intelligence -- we don't even have a consensus definition of what it means to be human and conscious...it's like asking, during a debate on the possibility of heavier-than-air powered flight in 1901, what the possibility is of developing an ornithopter that will eventually lay eggs. The question relies on multiple sequential assumptions that I think are individually invalid, or at least highly unlikely...

Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?

A good functional simulation of a human neuron, and a good block level functional model of how the human brain and peripheral nervous system operates. There may well be unforeseen gotchas along the way: immune system modulation, hormone effects, other tissues found to conduct an action potential (e.g. glial cells) which radically modify the picture before we get there.

Alternatively, show me a household robot that can clean the toilet or sew a new hem on a pair of jeans and I'll start stocking up on canned goods and ammunition. (A lot of "trivial" tasks in the human environment are remarkably hard to handle by mechanised logic.)

(Oh, and he sometimes reads LW.)

Stross: +1 point for placing the scary roadblock below human-level intelligence.

I would have guessed that cleaning a toilet was much easier than sewing a new hem on a pair of jeans; anyone with expertise care to comment?


Depends on how thoroughly the toilet needs to be cleaned, of course, but here's some rough idea of the procedure:

- Look at the toilet and determine whether it needs cleaning, what kind, and where. You can't always assume a toilet is made of white porcelain, which is what the simple visual hacks for toilet-cleaning rely on.

In any environment where a lot of dust accumulates, the toilet will periodically get a layer of scum and dust (which, AFAIK, isn't a contamination risk the way leftover, ah, residue might be, but is definitely visually unpleasant and well outside the scope of "clean" for most people). This will mostly affect the lid and the tank top. In low-dust environments it may never come up at all -- better hope the toilet-cleaning-robot designer understands that geography makes a difference!

Residue inside the bowl can be effluents, or just mineralization (the pinkish-orange colour you sometimes see inside of porcelain bowls). The internal angles and shapes of toilet bowls vary; you have to adapt your basic "brushstroke/cleanser-application" procedure to the actual shape, which may not be plannable ahead of time.

There's the floor around the toilet and its base, as well! This can get pretty messy in some situations, whether from settled dust, spilled effluents or mineralization. There are usually bolt covers protecting the bolts that hold the toilet down, ones that mustn't be dislodged during cleaning (and won't respond the same way to pressure as the substrate they rest on; they're designed to be removable by unassisted human hands to make unscrewing the bolt easy).

There's the bit where the lid and seat attach -- this has hinges which may be made of another material yet, or they may be covered.

The toilet might not be porcelain -- and if it is, it might not be white. The seat color might not match the base, the toilet might have a shag rug over the lid or on the tank top. Some people stack things on top of their toilets; these must generally be removed and yet not thrown away, and placed back when they're done. And it's hard to make simple algorithms for this -- if your toilet-cleaning robot shortcuts by comparing against a template of "porcelain white" and inferring the type of mess by probabilistic colour-matching (effluents, mineralization and dust usually look different), what does it do when it encounters a white porcelain toilet, mineralized, with a hardwood seat and a black shag carpet on the tank top, with some magazines stacked there?

Basically there's a huge number of trivially-demonstrable, real possible variations on the basic abstract idea of "scrub toilet surface so it is clean in appearance and not covered with contaminants." A human faced with three wildly-different toilet designs can probably make this all work out using just some spray cleaner and a brush, clearing off the items as needed, and trivially vary the angle of their arm or whatever to get at difficult spots. You need to be flexible, both mentally and physically, to clean a toilet under the full range of possible conditions for that task...
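To make the brittleness concrete, here is a minimal sketch, purely hypothetical and not part of the original comment, of the "compare against a porcelain-white template and infer the mess by colour-matching" shortcut described above. Every name, reference colour, and test value is invented for illustration:

```python
# Hypothetical illustration only -- not from the original comment.
# A naive "porcelain-white template" classifier: label a surface patch
# by whichever hand-picked reference colour it is closest to.
# All names, reference colours, and test values below are invented.

REFERENCE_COLOURS = {
    "clean porcelain": (245, 245, 240),
    "mineralization": (215, 150, 110),   # pinkish-orange staining
    "dust/scum": (180, 180, 170),        # greyish film
}

def colour_distance(a, b):
    """Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify_surface(rgb):
    """Label a patch by its nearest reference colour."""
    return min(REFERENCE_COLOURS, key=lambda k: colour_distance(rgb, REFERENCE_COLOURS[k]))

# Works on the designer's assumed toilet...
print(classify_surface((250, 248, 242)))  # -> "clean porcelain"
# ...and confidently emits nonsense on everything else:
print(classify_surface((30, 30, 30)))     # black shag carpet -> "mineralization"
print(classify_surface((120, 80, 40)))    # hardwood seat    -> "mineralization"
```

The point is not that colour cues are useless, but that a fixed template forces every unanticipated material into one of the designer's categories, which is exactly the failure mode described above.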

Sewing a new hem, one that wasn't there before, is also more complex than it sounds -- or rather, there's a lot of embedded complexity that's not obvious from the surface, and that's trivial for a human but surprisingly easy to screw up when you try to design a flowchart or whatnot to build a program around.


Residue inside the bowl can be effluents, or just mineralization (the pinkish-orange colour you sometimes see inside of porcelain bowls).

Bad news about that.


I think by far the most important test of the toilet-cleaning AI is the following: what does it do when it encounters something that is not a toilet?


Attach brain slugs to it.

His argument seems much better to me; I tried(!) to make a point similar to "there is no grand unified theory of intelligence" here.

I wrote both of them at the weekend ;-)

It seems as though you set off quite a blog-quake here! The butterfly effect in action!

Summary:

  • Human-level or above AI is impossible because either they're going to be people, which would be bad (we don't want to have to give them rights, and it'd be wrong to kill them), or they're going to refuse to self-improve because they don't care about themselves.
  • Uploading is possible but will cause a religious war. Also, if there are sentient AIs around, they'll beat us.
  • It's unlikely we're in a simulation because why would anyone want to simulate us?

Pretty reasonable for someone who says "rapture of the nerds". The main problem is anthropomorphism; Stross should read up on optimization processes. There's no reason AIs have to care about themselves to value becoming smarter.

(I've never found a good argument for "AGI is unlikely in theory". It makes me sad, because Stross is looking at practical aspects of uploading, and I need more arguments for/against "AGI is unlikely in practice".)

In some sense, AIs will need to care about themselves-- otherwise they won't adequately keep from damaging themselves as they try to improve themselves, and they won't take measures to protect themselves from outside threats.

The alternative is that they care about their assigned goals, but unless there's some other agent which can achieve their goals better than they can, I don't see a practical difference between AIs taking care of themselves for the sake of the goal and taking care of themselves because that's an independent motivation.

Sounds like he doesn't believe in the possibility of nonperson predicates.

No, it seems to be a different mistake. He thinks nonperson AIs are possible, but they will model themselves as... roughly, body parts of humans. So they won't optimize for anything, just obey explicit orders.

Stross makes a few different claims, but they don't seem to add up to any kind of substantial critique of singularity-belief. The simulation argument is just a tangent: it doesn't actually have anything to do with the probability of a technological singularity. He basically says he doesn't think uploading is likely and then describes why it is undesirable anyway.

The argument that religious opposition to uploading will prevent it from being introduced is parochial, since there are huge countries like Korea, China, and Japan which are overwhelmingly secular, and where the technology could very plausibly be developed. In addition, secularism is increasingly dominant in North America and Europe, so that viewpoint is short-sighted as well. More to the point, I sincerely doubt religious people would really have a problem with (other people) uploading in the first place. Stross's view that people would launch holy wars against uploaders seems like paranoia to me.

More to the point, I sincerely doubt religious people would really have a problem with (other people) uploading in the first place.

I think you underestimate the ability of some religious people to have problems with things, and overestimate the ability of many other religious people to speak against their pushy, controlling brethren. See, e.g., gay marriage.

I'm honestly not sure that the social conservatism we associate with religion has all that much to do, in the average case, with religion per se. Fundamentalism is kind of a special case, but most of the time what we see is religious opinion reflecting whatever the standard conservative-leaning package of social beliefs is at that particular place and time. Change the standard and religious opinion follows, perhaps after some lag time -- it's just that religiosity is correlated with a bundle of factors that tend to imply conservatism, and that church events give you a ready-made venue for talking to other conservatives.

For that matter, most of the analysis of fundamentalism I've read treats it as a reaction against secular influences, not an endogenous religious phenomenon. Its phenomenology in the wild seems to be consistent with that -- fundamentalist strains first tend to pop up a decade or two after a culture's exposure to Enlightenment values.

it's just that religiosity is correlated with a bundle of factors that tend to imply conservatism, and that church events give you a ready-made venue for talking to other conservatives.

That is probably an important contributor to the phenomenon; there is a certain "social conservative" mindset. However, I think that religion leads to some social norms, such as the idea that it's acceptable to pick one's morality out of old books, which provide fuel to socially conservative movements and attitudes.

Gay marriage is actually a good example of the secular side winning, since gay marriage is spreading to more states and countries, and everyone thinks it will continue spreading in the future. That is in spite of the fact that their scriptures are totally opposed to homosexuality, whereas I don't believe brain uploading is mentioned.

It's a good example that the secular side wins eventually, but it's not a good example of religious people not having problems with something. Abortion is not explicitly mentioned in the Bible, either.

Abortion is not explicitly mentioned in the Bible, either.

Although God did allegedly slay someone for the act of spilling semen on the ground during sex rather than impregnating the mate. May be best to err on the side of caution!

The secular side only has to win "eventually" in order for it to argue against Stross' point. In any case, even in the case of abortion, which triggers powerful disgust reflexes, there isn't anything like a holy war against abortion, let alone a holy war with a good chance of succeeding everywhere in the world.

http://en.wikipedia.org/wiki/Anti-abortion_violence

I do agree that the "holy war to end holy wars" phrase was hyperbolic, but it's also hyperbolic to expect no objection from religious people. When Stross first addresses the issue, he says:

However, if it becomes plausible in the near future we can expect extensive theological arguments over it. If you thought the abortion debate was heated, wait until you have people trying to become immortal via the wire. Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death. People who believe in an afterlife will go to the mattresses to maintain a belief system that tells them their dead loved ones are in heaven rather than rotting in the ground.

That's not an extreme conclusion. It's only in the "in summary" section at the end that the rhetoric really picks up.


I wouldn't call Korea "overwhelmingly secular" in the same way Japan is. Christianity is very common in South Korea, and Japan is...more religious than you might think; the local mix of Shinto and Buddhism that prevails works rather differently than Christianity in the West.

I agree he had a number of arguments that were not related to his thesis concerning a hard take-off, and he didn't really provide very good support for that thesis.

On further reflection, this piece and the comments on his site make me skeptical about ever achieving human intelligence.

The post by Stross triggered a reaction by Alex Knapp, 'What’s the Likelihood of the Singularity? Part One: Artificial Intelligence', which in turn caused Michael Anissimov to respond, 'Responding to Alex Knapp at Forbes'.

I also think that human-level AI is unlikely for a number of reasons – more than a few of them related to the differences between biological and machine intelligence. For one thing, we’re approaching the end of Moore’s Law in about a decade and a half or so, and generalized quantum computing isn’t likely to be with us anytime soon yet. For example, D-Wave’s adiabatic quantum computer isn’t a general computer – it’s focused on optimization problems. But even with that, the differences between human, animal and machine intelligence are profound.

...

At the present time, a lot of the ways that human beings think is simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence.

...

...it’s definitely worth noting that computers don’t play “human-level” chess. Although computers are competitive with grandmasters, they aren’t truly intelligent in a general sense – they are, basically, chess-solving machines. And while they’re superior at tactics, they are woefully deficient at strategy, which is why grandmasters still win against/draw against computers.

...computers lose to humans because they are simply no match for humans at creating long-term chess strategy.

...

As for Watson’s ability to play Jeopardy!, it’s important to note that while Watson did win the Jeopardy tournament he played in, it’s worth noting that Watson’s primary advantage was being able to beat humans to the buzzer (electric relays are faster than chemical ones). Moreover, as many who watched the tournament (such as myself) noted, Watson got worse the more abstract the questions got.

...

What computers are smart at are brute-force calculations and memory retrieval. They’re not nearly as good at pattern recognition or the ability to parse meaning and ambiguity, nor are they good at learning.

...

...there are more than a few limitations on human-level AI, not the least of which are the actual physical limitations coming with the end of Moore’s Law and the simple fact that, in the realm of science, we’re only just beginning to understand what intelligence, consciousness, and sentience even are, and that’s going to be a fundamental limitation on artificial intelligence for a long time to come. Personally, I think that’s going to be the case for centuries.

Michael Anissimov wrote:

To say that we have to exactly copy a human brain to produce true intelligence, if that is what Knapp and Stross are thinking, is anthropocentric in the extreme. Did we need to copy a bird to produce flight? Did we need to copy a fish to produce a submarine? Did we need to copy a horse to produce a car? No, no, and no. Intelligence is not mystically different.

I think he doesn't understand what those people are saying. Nobody doubts that you don't need to imitate human intelligence to get artificial general intelligence; the point is that a useful approximation of AIXI is much harder to achieve than an understanding of human intelligence.

AIXI is as far from real world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn't get you anywhere in terms of real-world general intelligence. You won't be able to upload yourself into the Matrix just because you've shown that, in some abstract sense, you can simulate every physical process.
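For reference, here is roughly Hutter's AIXI action-selection rule, sketched from memory (notation may differ slightly from the original papers), which shows why it is an abstract notion rather than a buildable design: the agent runs expectimax over all future observation/reward sequences, weighting each by a Solomonoff-style sum over every program $q$ for a universal Turing machine $U$ consistent with the interaction history, and that inner sum over all programs is incomputable.

$$
a_k \;=\; \arg\max_{a_k}\,\sum_{o_k r_k}\cdots\,\max_{a_m}\sum_{o_m r_m}\bigl[r_k+\cdots+r_m\bigr]\sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1\ldots o_m r_m} 2^{-\ell(q)}
$$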

I think he doesn't understand what those people are saying. Nobody doubts that you don't need to imitate human intelligence to get artificial general intelligence; the point is that a useful approximation of AIXI is much harder to achieve than an understanding of human intelligence.

AIXI is as far from real world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn’t get you anywhere in terms of real-world general intelligence.

It seems to me that you are systematically underestimating the significance of this material. Solomonoff induction (which AIXI is based on) is of immense theoretical and practical significance.
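For readers who haven't met the term, Solomonoff's universal prior (again sketched roughly from memory; notation varies) weights a finite string $x$ by all programs that make a universal monotone Turing machine $U$ print something beginning with $x$:

$$
M(x) \;=\; \sum_{p\,:\,U(p)\,=\,x\ast} 2^{-\ell(p)}
$$

Predicting the next bit via $M(x1)/M(x)$ has provably bounded total error on any computable sequence, which is the sense in which it is theoretically significant; the catch, as the grandparent comment says, is that $M$ is incomputable and can only be approximated.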

The content of the arguments has been effectively refuted already, but I have one thing to add. The piece, and especially the comments section on Stross' site, really need to sort out their terms. They all seem to be equivocating between intelligence and consciousness. Something can be a really powerful optimization process without having humanlike emotions or thought processes.

He seems to be pretty optimistic that an artificial intelligence architecture will require lots of custom-crafted crunchy bits to go from being able to autonomously clean and fix your apartment, buy your groceries, make you lunch, drive you around town, and figure out that it needs to save your life (and then do so) when you have a cardiac arrest, to being able to do significant runaway learning and reasoning that will have serious and cumulative unintended consequences. And per Omohundro's Basic AI Drives paper, needing to have efficient real-world intelligence might count as a potentially dangerous motivation on its own, so not giving the things a goal system cribbed from humans won't help much.

I like how he goes from:

Uploading ... is not obviously impossible unless you are a crude mind/body dualist.

to:

But even if mind uploading is possible

To be sure, it continues as an "if ... and ...", but way to suggest two arguments where you've made just one.


Grammatically, what he's saying is "but that in-principle possibility aside..."

Given his (Stross's) fictional works, I was hoping for something similar to Nick Szabo's claim that specialized AI will pose existential threats before we ever have to worry about self-improvers.

I'm not sure exactly what arguments 2 and 3 are.

I think his three arguments are as follows:

Argument 1: superintelligent AI is unlikely, because human-level AI (a necessary step on the path) is unlikely, because that isn't really what people want (they want more tightly-focused problem solvers) and because there will be regulatory roadblocks on account of ethical concerns; and if we do make human-level AI we'll probably give it goal structures that make it not want to improve itself recursively.

Argument 2: mind-uploading is unlikely, because lots of people will argue against it for quasi-theological reasons, and because uploaded minds won't fare well without simulated bodies and simulated physics and so forth.

Argument 3: the simulation argument is unfalsifiable, so let's ignore it. Also, why would anyone bother simulating their ancestors anyway?

... and his tying together of the three goes like this: by argument 1, we are unlikely to get a singularity via conventional (non-uploading) AI because we won't be doing the kinds of things that would produce one; by argument 2, we won't want to make use of uploading in ways that would lead to superintelligent AI because we'd prefer to stay in the real world or something indistinguishable from it; by argument 3, the possibility that we're in a simulation gives us no reason to expect that we'll witness a singularity.

I think he's largely right about the simulation argument, but I don't think anyone thinks the simulation argument is any reason to expect a singularity soon so I'm not sure why he bothers with it. His other two arguments seem awfully unconvincing to me, for reasons that are probably as obvious to everyone else here as they are to me. (Examples: the existence of regulations doesn't guarantee that everyone will obey them; something with roughly-human-level intelligence of any sort might well figure out, whatever its goals are, that making itself smarter might be a good way to achieve those goals, so trying to control an AI's goals is no guarantee of avoiding recursive self-improvement; if we're able to make uploads at all, we will probably be able to make many of them, running very fast, which is exactly the sort of phenomenon that might lead to an explosion even if the early stages don't involve anything more intelligent than human beings.)

I don't think anyone thinks the simulation argument is any reason to expect a singularity soon so I'm not sure why he bothers with it.

Perhaps Stross is treating Singularitarianism as a package of beliefs. Since people who talk about the Singularity also tend to talk about the Simulation Argument, the package of beliefs must contain the belief that we are living in a simulation. Thus any critique of the belief package must address the question of whether we live in a simulation.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on its external "self" than you or I are to shoot ourselves in the head. And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.

Interesting take on friendly AI.

If by "interesting" you mean "charmingly anthropomorphic"...

don't get your hopes up, this is pretty weak stuff.

It is not on the same page - and not in a good way.

I'm disappointed. Essays like this are supposed to contain at least an off-hand reference to Ayn Rand somewhere.

ETA: This comment being at -1 is making me worried that people are interpreting it as something other than a hopefully uncontroversial gripe about a rhetorical style that takes cheap shots against complicated technical claims by vaguely associating them with disliked political/cultural currents.

He did attack libertarianism, though. It is hard to even see this post as an argument and not a plea to not be associated with "those crazy people".

This may simply be because he is European; I have the feeling that she is not so well known/influential on this side of the Atlantic. (My only evidence is that I first heard about her on Scott Aaronson's blog, which incidentally is also where I first heard about Overcoming Bias.)

He's perfectly familiar with the works of Ayn Rand - as knb says, I guess he felt that the reference to libertarians sufficed to ensure that the audience understands that singularitarians aren't the sort of people you want to be associated with.


The exponential function has no singularities (unlike the hyperbola).

[This comment is no longer endorsed by its author]
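(For what it's worth, the mathematical point the retracted comment was gesturing at: exponential growth remains finite at every finite time, whereas hyperbolic growth reaches a genuine singularity in finite time.)

$$
\dot{x}=kx \;\Rightarrow\; x(t)=x_0 e^{kt}\quad\text{(finite for all finite }t\text{)},\qquad
\dot{x}=x^{2} \;\Rightarrow\; x(t)=\frac{x_0}{1-x_0 t}\quad\text{(blows up as }t\to 1/x_0\text{)}.
$$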