I appear today on Bloggingheads.tv, in "Science Saturday: Singularity Edition", speaking with John Horgan about the Singularity.  I talked too much.  This episode needed to be around two hours longer.

One question I fumbled at 62:30 was "What's the strongest opposition you've seen to Singularity ideas?"  The basic problem is that nearly everyone who attacks the Singularity either is completely unacquainted with the existing thinking or attacks Kurzweil, and in either case it's more a collection of disconnected broadsides (often mostly ad hominem) than a coherent criticism.  There's no equivalent in Singularity studies of Richard Jones's critique of nanotechnology - which I don't agree with, but at least Jones has read Drexler.  People who don't buy the Singularity don't put in the time and hard work to criticize it properly.

What I should have done, though, was interpret the question more charitably as "What's the strongest opposition to strong AI or transhumanism?" in which case there's Sir Roger Penrose, Jaron Lanier, Leon Kass, and many others.  None of these are good arguments - or I would have to accept them! - but at least they are painstakingly crafted arguments, and something like organized opposition.

37 comments
Roko

That interview was quite funny. I really admire your patience, especially when Horgan made certain errors of reasoning that you'd carefully told him not to make earlier in the interview!

poke

Eliezer, serious question, why don't you re-brand your project as designing a Self-Improving Automated Science Machine rather than a Seed AI and generalize Friendly AI to Friendly Optimization (or similar)? It seems to me that: (a) this would be more accurate since it's not obvious (to me at least) that individual humans straightforwardly exhibit the traits you describe as "intelligence"; and (b) you'd avoid 90% of the criticism directed at you. You could, for example, avoid the usual "people have been promising AI for 60 years" line of argument.

Ben

I found the discussion really interesting, though I'm still fairly lost as to what this singularity stuff is. Don't worry about talking too much; as a regular bhtv viewer, I'd insist that John Horgan generally talks too much. He kinda likes to make forceful claims about admittedly non-scientific questions, generally of the form "branch of science X won't amount to anything useful." He'll give non-scientific reasons why he supports that position, but then responds to criticisms of his argument by either a) repeating his claim or b) shrugging off the objection as non-scientific.

I was quite surprised by your quoting Robert Pirsig, as I've taken him to be pretty marginalized by a lot of thinkers. Do his ideas play into the Singularity?

John Horgan is a sloppy thinker. But if this was a contest to strengthen vs. weaken the credibility of AI research -- a kind of status competition -- then I think he got the better of you.

Is it important to convince nonprofessionals that the singularity is plausible, in advance of it actually happening? If so, then you need to find a way to address the "this is just an apocalyptic religion" charge that Mr. Horgan brings here. It will not be the last time you hear it, and it is particularly devastating in its own somewhat illogical way.

1. All people dismiss most claims that their lives will be radically different in the near future, without giving them due consideration.
2. This behavior is rational! At least, it is useful, since nearly all such claims are bogus and "due consideration" is costly.
3. Your own claims can be easily caricatured as resembling millenarianist trash (singularity = rapture, etc. One of the bloggingheads commenters makes a crack about your "messianism" as a product of your Jewish upbringing.)

How do you get through the spam filter? I don't know, but "read my policy papers" sounds too much like "read my manifesto." It doesn't distinguish you from crazy people before they read it, so they won't. (Mr. Horgan didn't. Were you really surprised?) You need to find sound bites if you're going to appear on bloggingheads at all.

In the political blogosphere, this is called "concern trolling." Whatever.

Eli:

You presented so many intriguing ideas in that diavlog that I can't yet say anything meaningful in response, but I did want to drop by to tell you how much I enjoyed the diavlog overall. I hope to see you come back to do another one, whether with John or with somebody else.

I do think John interfered with your presenting your ideas with his excessive and somewhat kneejerk skepticism, but perhaps he helped you to make some points in reaction. Anyway, I could tell that you had a lot more to say, and in addition to reading your work, I look forward to hearing you talk about it some more.

Goodness me. "How will superintelligence help me make better flint arrows? Thag, who is clever, doesn't seem to be any good at making arrows. Oh and by the way let me interrupt you on some other tangent..."

I don't see how you keep your morale up.

Interesting. Would be even better if you did this with Robin, Nick, etc. and on a weekly basis.

John

@poke:

I imagine Eliezer is more interested in doing what works than avoiding criticism. And the real danger associated with creating a superhuman AI is that things would spiral out of control. That danger is still present if humanity is suddenly introduced to 24th century science.

Eli, I enjoyed your conversation with John today, though I suspect he would have tried to convince the Wright brothers to quit because so many had failed.

I read your essay on friendly AI, and think it is off the mark. If the singularity happens, there will be so many burdens the AI can throw off (such as anthropomorphism) that it will be orders of magnitude superior very quickly. I think the apt analogy isn't that we would be apes among men, but worms among men. Men need not, nor should they, be concerned with worms, and worms aren't all that important in a world with men.

Ed

John, I think a good case could be made that things are currently out of control, and that the danger of a superhuman AI gone wrong would be that we'd cease to exist as subjective conscious entities.

edbarbar, I think the apt analogy may be something like that we'd be silicon sand among IT companies.

Eliezer, it's possible for there to be a good argument for something without that implying that you should accept it. There might be one good argument for X, but 10 good arguments for not-X, so you shouldn't accept X despite the good argument.

This is an important point because if you think that a single good argument for something implies that you should accept it, and that therefore there can't be any good arguments for the opposite, this would suggest a highly overconfident attitude.

Hopefully anonymous: There are strong warnings against posting too much, but my personal suspicion is that the next generation of AI will not colonize other planets, convert stars, or do any of the things we see as huge and important, but will go in the opposite direction and become smaller and smaller. At least, should the thing decide that survival is ethical and desirable.

But as sand or worms or simply irrelevant, the result is the same. We shouldn't be worried that our children consume us: it's the nature of life, and that will continue even with the next super intelligent beings. To evolve, everything must die or be rendered insignificant, and there is no escape from death even for stagnant species. I think that will hold true for many generations.


He wasted 90% of the interview because Yudkowsky discussed how to be rational rather than addressing the implications of AGI being possible.

How does Yudkowsky's authority change our view of the feasibility of AGI being developed quickly, when most experts clearly disagree? We need to get from "the elders were wrong in their techniques" to an actual path to AGI.

And what about the claim that a billion-dollar project isn't needed? SingInst thinks they can do it alone, with a modest budget of a few million? Isn't this a political position?

I am glad Yudkowsky is trying so hard but it seems he is doing more politics and philosophy than research. Perhaps in the long term this will be more effective, as the goal is to win, not to be right.

It was OK until the interviewer started going on about his ridiculous communist utopia, and in almost the same breath he accused Eliezer of being pie in the sky!

By the way Eli, if you put an MP3 clip of how to pronounce your name somewhere on the web, maybe interviewers wouldn't have to ask all the time (don't you get sick of that?).

FrF

I'd like to read/hear an interview with Eliezer where he talks mainly about SF. Sure, we have his bookshelf page, but it is nearly ten years old and far from comprehensive enough to satisfy my curiosity!

Or how about an annotated general list from Eliezer titled "The 10/20/30/... most important books I read since 1999"?

"That's not how I experience my epiphanies, it's just sort of 'Oh, that's obviously correct.'"

I found that comment really resonated with me, but having been exposed to experimental psychology (which by a roundabout route is what led me to this blog in the first place), I've always struggled with how to distinguish that response from confirmation bias. It seems to me that I have in fact radically changed my opinions on certain issues on the basis of convincing evidence (convincingly argued), but that could just as well be revisionist memories.

Re: "Objections to the singularity" - if the singularity is defined as being an "intelligence explosion", then it's happening now - i.e. see my:

http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

"Strong AI" seems like a bit of an oxymoron. Maybe we should be using "Powerful AI" or "Smart AI" - rather than perpetuating an old misunderstanding of Searle's stupid game:

http://en.wikipedia.org/wiki/Strong_AI#Origin_of_the_term:_John_Searle.27s_strong_AI

I think Horgan's questions were good in that they were a straightforward expression of how many sceptics think. My own summary of this thinking goes something like this:

The singularity idea sounds kind of crazy, if not just plain ridiculous. Super intelligent machines and people living forever? I mean... come on! History is full of silly predictions about the future that turned out to be totally wrong. If you want me to take this seriously you're going to have to present some very strong arguments as to why this is going to happen.

Although I agree with most of what Eli said, rhetorically it sounded like he was avoiding this central question with a series of quibbles and tangents. This is not going to win over many sceptics' minds.

I think it's an important question to try to answer as directly and succinctly as possible -- a longish "elevator pitch" that forms a good starting point for discussion with a sceptic. I'll think about this and try to write a blog post.

Did you ever formulate anything good? I'd be interested to read it if so; I'm having trouble keeping the attention of my friends and family for long enough to explain...

FrF: Or how about an annotated general list from Eliezer titled "The 10/20/30/... most important books I read since 1999"?

That would be great, but in the meantime see these recommendations.

No offense to Horgan, but I can't help but feel that he made a bad career choice in becoming a science journalist ... should've picked sports or something.

Well... I liked the video, especially watching how all the concepts mentioned on OB before work in... real life. But showing how you should think to be effective (which Eliezer is writing about on OB) is a different goal from persuading people that the Singularity is not just another dull pseudo-religion. No, they haven't read OB, and they won't even have a reason to if they are told "you won't understand all this all of a sudden, see inferential distances, which is a concept I also can't explain now". To get through their spam filter, we'll need stories, even details, with a "this is very improbable, but if you're interested, read OB" disclaimer at the end. See the question "but how could we use AI to fight poverty etc."... Why is the Singularity still "that strange and scary prediction some weird people make without any reason"?

But all of the beliefs about what the world will do once it hits a Singularity ARE a dull religion, because the whole point of a Singularity is that we can't trust our ability to extrapolate and speculate beyond it.

Ian_C.

The interviewer accused Eliezer of being religious-like. But if the universe is deterministically moving from state to state then it's just like a computer, a machine that moves predictably from state to state. Therefore it's not religious at all to believe anything in the world (including intelligence) could eventually be reproduced in a computer.

But of course the universe is not like a computer. Everything a computer does until the end of time is implied in its initial state, the nature of its CPU, and subsequent inputs. It can never deviate from that course. It can never choose like a human, therefore it can never model a human.

And it's not possible to rationally argue that choice is an illusion, because reason uses choice in its operations. If you use something in the process of arguing against it, you fall into absurdity. E.g. your proof comes out something like: "I presumed P, pondered Q and R, chose R, reasoned thusly about R vs S, finally choosing S. Therefore choice isn't really choosing."

Eliezer_Yudkowsky: Considering your thrice-daily mention of Aumann (two-month running average), shouldn't you have been a little more prepared for a question like that?

Btw, I learned from that video that your first name has four syllables rather than three.

You need a chess clock next time. John talks way too much.

AI researchers of previous eras made predictions that were wildly wrong. Therefore, human-level AI (since it is a goal of some current AI researchers) cannot happen in the foreseeable future. They were wrong before, so they must be wrong now. And dawg-garn it, it seems like some kind of strange weirdo religious faith-based thingamajiggy to me, so it must be wrong.

Thanks for a good laugh, Mr. Horgan! Keep up the good work.

Ian: what makes you think the things humans do aren't implied by their initial state, nature, and inputs? The form of choice reason demands (different outputs given different inputs) is perfectly compatible with determinism; in fact it requires determinism, since nondeterministic factors would imply less entanglement between beliefs and reality. If your conclusion is not totally determined by priors and evidence, you're doing something wrong.

I thought you did an excellent job.

I would think the key line of attack in trying to describe why a singularity prediction is reasonable is to make clear what you are predicting and what you are not predicting.

Guys like Horgan hear a few sentences about the "singularity" and think humanoid robots, flying cars, phasers and force fields, that we'll be living in the star-trek universe.

Of course, as anyone with the Bayes-skilz of Eliezer knows, start making detailed predictions like that and you're sure to be wrong about most of it, even if the basic idea of a radically altered social structure and technology beyond our current imagination is highly probable. And that's the key: "beyond our current imagination". The specifics of what will happen aren't very predictable today. If they were, we'd already be in the singularity. The things that happen will seem strange and almost incomprehensible by today's standards, in the way that our world is strange and incomprehensible by the standards of the 19th century.

The last 200 years already are much like a singularity from the perspective of someone looking forward from 15th-century Europe and getting a vision of what happened between 1800 and 2000, even though the basic groundwork for that future was already being laid.

Ian_C.

Nick: "what makes you think the things humans do aren't implied by its initial state, nature, and inputs?"

What humans do is determined by their nature, just like with a computer. The difference is, human nature is to be able to choose, and computer nature is not.

"The form of choice reason demands (different outputs given different inputs) is perfectly compatible with determinism, in fact it requires determinism, since nondeterministic factors would imply less entanglement between beliefs and reality. If your conclusion is not totally determined by priors and evidence, you're doing something wrong."

You're not doing something wrong, because I don't think reason is pure discipline, pure modus-ponens. I think it's more like tempered creativity - utilizing mental actions such as choice, focus, imagination as well as pure logic. The computer just doesn't have what it takes.

But the point I was making is that the whole idea of reason wouldn't arise in the first place without prior acceptance of free will. It is only by accepting that we control our minds that the question of how best to do so arises, and ideas like reason, deduction etc. come to be.

All these ideas therefore presuppose free will in their very genesis, and can not validly be used to argue against it. It would be like trying to use the concept "stealing" in a proof against the validity of "property" - there is no such thing as stealing without property. Likewise there is no such thing as reason without free will.

From the BH comments:

I've been reading overcomingbias.com for a long time, more out of interest than because I agree with their world view. It's certainly one of the most pretentious and elitist blogs on the internet. They need to learn humility.

See also Michael Anissimov's dissection of this discussion.

If re-asked the question "What's the strongest criticism you've seen of Singularity ideas?" I would now be able to unhesitatingly answer "Robin Hanson's critique of hard takeoff."

My concern isn't with the interview per se (everything I would add would best be put in another thread). It's with the reaction here in the comments.

That 90% wasn't a waste any more than Overcoming Bias as a blog is a waste. Horgan is hardly alone in remembering the Fifth Generation Project, and it was worth it to get Yudkowsky to hammer out, once more, to a new audience, why what happened in the 80's was not representative of what is to come in the 10ky timeframe. Those of you who are hard on Horgan: he is not one of you. You cannot hold him to LW standards. Yudkowsky has spent a lot of time and effort trying to get other people to not make mistakes, for example mislabeling broad singularitarian thought onto him, as if he's Kurzweil, Vinge, the entirety of MIRI and whatnot personified, so it's understandable why he might be annoyed. But at the same time... the average person is not going to bother with the finer details. He probably put in about as much or more journalistic work as the average topic requires. This just goes to drive home how different intelligence is from other fields, how hard science journalism in a world with AI research can be.

It's frustrating because it's hard. It's hard for many reasons, but one reason is that the layman's priors are very wrong. This it shares (for good reason) with economics and psychology more generally: people who are not in the field bring to the table a lot of preconceptions that have to be dismantled. Dismantling them all is a lot of work for a one-hour podcast. Like those who answer Yahoo! Answers questions, Horgan is a critical point who needs to be convinced on his own terms, standing between Yudkowsky and a substantial chunk of the billion+ people who lived through the 80's and are not following where Science is being taken here.