
Bloggingheads: Yudkowsky and Horgan

Post author: Eliezer_Yudkowsky 07 June 2008 10:09PM

I appear today on Bloggingheads.tv, in "Science Saturday: Singularity Edition", speaking with John Horgan about the Singularity.  I talked too much.  This episode needed to be around two hours longer.

One question I fumbled at 62:30 was "What's the strongest opposition you've seen to Singularity ideas?"  The basic problem is that nearly everyone who attacks the Singularity is either completely unacquainted with the existing thinking, or they attack Kurzweil, and in any case it's more a collection of disconnected broadsides (often mostly ad hominem) than a coherent criticism.  There's no equivalent in Singularity studies of Richard Jones's critique of nanotechnology - which I don't agree with, but at least Jones has read Drexler.  People who don't buy the Singularity don't put in the time and hard work to criticize it properly.

What I should have done, though, was interpreted the question more charitably as "What's the strongest opposition to strong AI or transhumanism?" in which case there's Sir Roger Penrose, Jaron Lanier, Leon Kass, and many others.  None of these are good arguments - or I would have to accept them! - but at least they are painstakingly crafted arguments, and something like organized opposition.

Comments (35)

Comment author: poke 08 June 2008 01:25:31AM 3 points [-]

Eliezer, serious question, why don't you re-brand your project as designing a Self-Improving Automated Science Machine rather than a Seed AI and generalize Friendly AI to Friendly Optimization (or similar)? It seems to me that: (a) this would be more accurate since it's not obvious (to me at least) that individual humans straightforwardly exhibit the traits you describe as "intelligence"; and (b) you'd avoid 90% of the criticism directed at you. You could, for example, avoid the usual "people have been promising AI for 60 years" line of argument.

Comment author: Ben6 08 June 2008 01:29:46AM 0 points [-]

I found the discussion really interesting, though I'm still fairly lost as to what this singularity stuff is. Don't worry about talking too much; as a regular bhtv viewer, I'd insist that John Horgan generally talks too much. He kinda likes to make forceful claims about admittedly non-scientific questions, generally of the form "branch of science X won't amount to anything useful." He'll give non-scientific reasons why he supports that position, but then respond to criticisms of his argument by either a) repeating his claim or b) shrugging off the objection as non-scientific.

I was quite surprised by your quoting Robert Pirsig, as I've taken him to be pretty marginalized by a lot of thinkers. Do his ideas play into the singularity?

Comment author: cole_porter 08 June 2008 01:30:10AM 0 points [-]

John Horgan is a sloppy thinker. But if this was a contest to strengthen vs. weaken the credibility of AI research -- a kind of status competition -- then I think he got the better of you.

Is it important to convince nonprofessionals that the singularity is plausible, in advance of it actually happening? If so, then you need to find a way to address the "this is just an apocalyptic religion" charge that Mr. Horgan brings here. It will not be the last time you hear it, and it is particularly devastating in its own somewhat illogical way:

1. All people dismiss most claims that their lives will be radically different in the near future, without giving due consideration.
2. This behavior is rational! At least, it is useful, since nearly all such claims are bogus and "due consideration" is costly.
3. Your own claims can be easily caricatured as resembling millenarianist trash (singularity = rapture, etc. One of the bloggingheads commenters makes a crack about your "messianism" as a product of your Jewish upbringing.)

How do you get through the spam filter? I don't know, but "read my policy papers" sounds too much like "read my manifesto." It doesn't distinguish you from crazy people before they read it, so they won't. (Mr. Horgan didn't. Were you really surprised?) You need to find sound bites if you're going to appear on bloggingheads at all.

In the political blogosphere, this is called "concern trolling." Whatever.

Comment author: bjkeefe 08 June 2008 01:40:50AM 0 points [-]

Eli:

You presented so many intriguing ideas in that diavlog that I can't yet say anything meaningful in response, but I did want to drop by to tell how you how much I enjoyed the diavlog overall. I hope to see you come back to do another one, whether with John or with somebody else.

I do think John interfered with your presenting your ideas with his excessive and somewhat kneejerk skepticism, but perhaps he helped you to make some points in reaction. Anyway, I could tell that you had a lot more to say, and in addition to reading your work, I look forward to hearing you talk about it some more.

Comment author: JulianMorrison 08 June 2008 01:40:55AM 0 points [-]

Goodness me. "How will superintelligence help me make better flint arrows? Thag, who is clever, doesn't seem to be any good at making arrows. Oh and by the way let me interrupt you on some other tangent..."

I don't see how you keep your morale up.

Comment author: Hopefully_Anonymous 08 June 2008 01:56:13AM 1 point [-]

Interesting. Would be even better if you did this with Robin, Nick, etc. and on a weekly basis.

Comment author: John 08 June 2008 04:48:23AM 0 points [-]

@poke:

I imagine Eliezer is more interested in doing what works than avoiding criticism. And the real danger associated with creating a superhuman AI is that things would spiral out of control. That danger is still present if humanity is suddenly introduced to 24th century science.

Comment author: edbarbar 08 June 2008 05:04:30AM 0 points [-]

Eli, I enjoyed your conversation with John today, though I suspect he would have tried to convince the Wright brothers to quit because so many had failed.

I read your essay on friendly AI, and think this essay is off the mark. If the singularity happens, there will be so many burdens the AI can throw off (such as anthropomorphism) that it will be orders of magnitude superior very quickly. I think the apt analogy isn't that we would be apes among men, but more like worms among men. Men need not be, nor should they be, concerned with worms, and worms aren't all that important in a world with men.

Ed

Comment author: Hopefully_Anonymous 08 June 2008 05:14:16AM 0 points [-]

John, I think a good case could be made that things are currently out of control, and that the danger of a superhuman AI gone wrong would be that we'd cease to exist as subjective conscious entities.

edbarbar, I think the apt analogy may be something like that we'd be silicon sand among IT companies.

Comment author: Unknown 08 June 2008 05:52:48AM 0 points [-]

Eliezer, it's possible for there to be a good argument for something without that implying that you should accept it. There might be one good argument for X, but 10 good arguments for not-X, so you shouldn't accept X despite the good argument.

This is an important point because if you think that a single good argument for something implies that you should accept it, and that therefore there can't be any good arguments for the opposite, this would suggest a highly overconfident attitude.

Comment author: edbarbar 08 June 2008 06:42:50AM 0 points [-]

Hopefully anonymous: There are strong warnings against posting too much, but my personal suspicion is that the next generation of AI will not colonize other planets, convert stars, or any of the things we see as huge and important, but go in the opposite direction and become smaller and smaller. At least, should the thing decide that survival is ethical and desirable.

But as sand or worms or simply irrelevant, the result is the same. We shouldn't be worried that our children consume us: it's the nature of life, and that will continue even with the next super intelligent beings. To evolve, everything must die or be rendered insignificant, and there is no escape from death even for stagnant species. I think that will hold true for many generations.

Comment author: [deleted] 08 June 2008 07:12:03AM *  0 points [-]

deleted

Comment author: Matthew2 08 June 2008 07:19:52AM 0 points [-]

Ninety percent of the interview was wasted because Yudkowsky discussed how to be rational rather than addressing the implications of AGI being possible.

How does Yudkowsky's authority change our viewpoint of the feasibility of AGI being developed quickly when most experts clearly disagree? We need to go from the elders being wrong in technique to the path to AGI.

And what about the claim that a billion dollar project isn't needed? Singinst thinks they can do it alone, with a modest budget of a few million? Isn't this a political position?

I am glad Yudkowsky is trying so hard but it seems he is doing more politics and philosophy than research. Perhaps in the long term this will be more effective, as the goal is to win, not to be right.

Comment author: Ian_C. 08 June 2008 08:11:48AM 0 points [-]

It was OK until the interviewer started going on about his ridiculous communist utopia, and in almost the same breath he accused Eliezer of being pie in the sky!

By the way Eli, if you put an MP3 clip of how to pronounce your name somewhere on the web, maybe interviewers wouldn't have to ask all the time (don't you get sick of that?).

Comment author: FrF 08 June 2008 09:01:37AM 2 points [-]

I'd like to read/hear an interview with Eliezer where he talks mainly about SF. Sure, we have his bookshelf page but it is nearly ten years old and far from comprehensive enough to satisfy my curiosity!

Or how about an annotated general list from Eliezer titled "The 10/20/30/... most important books I read since 1999"?

Comment author: Matt4 08 June 2008 09:01:48AM 1 point [-]

"That's not how I experience my epiphanies, it's just sort of 'Oh, that's obviously correct.'"

I found that comment really resonated with me, but having been exposed to experimental psychology (which by a roundabout route is what led me to this blog in the first place), I've always struggled with how to distinguish that response from confirmation bias. It seems to me that I have in fact radically changed my opinions on certain issues on the basis of convincing evidence (convincingly argued), but that could just as well be revisionist memories.

Comment author: Tim_Tyler 08 June 2008 10:38:23AM 0 points [-]

Re: "Objections to the singularity" - if the singularity is defined as being an "intelligence explosion", then it's happening now - i.e. see my:

http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

Comment author: Tim_Tyler 08 June 2008 11:18:28AM 0 points [-]

"Strong AI" seems like a bit of an oxymoron. Maybe we should be using "Powerful AI" or "Smart AI" - rather than perpetuating an old misunderstanding of Searle's stupid game:

http://en.wikipedia.org/wiki/Strong_AI#Origin_of_the_term:_John_Searle.27s_strong_AI

Comment author: Shane_Legg 08 June 2008 01:11:05PM 2 points [-]

I think Horgan's questions were good in that they were a straightforward expression of how many sceptics think. My own summary of this thinking goes something like this:

The singularity idea sounds kind of crazy, if not plain out ridiculous. Super intelligent machines and people living forever? I mean... come on! History is full of silly predictions about the future that turned out to be totally wrong. If you want me to take this seriously you're going to have to present some very strong arguments as to why this is going to happen.

Although I agree with most of what Eli said, rhetorically it sounded like he was avoiding this central question with a series of quibbles and tangents. This is not going to win over many sceptics' minds.

I think it's an important question to try to answer as directly and succinctly as possible -- a longish "elevator pitch" that forms a good starting point for discussion with a sceptic. I'll think about this and try to write a blog post.

Comment author: Dojan 24 December 2011 01:42:52PM 0 points [-]

Did you ever formulate anything good? I'd be interested to read it if so, I'm having trouble keeping the attention of my friends and family for long enough to explain...

Comment author: Recovering_irrationalist 08 June 2008 01:57:20PM 0 points [-]

FrF: Or how about an annotated general list from Eliezer titled "The 10/20/30/... most important books I read since 1999"?

That would be great, but in the meantime see these recommendations.

Comment author: anonymous34 08 June 2008 04:28:12PM 1 point [-]

No offense to Horgan, but I can't help but feel that he made a bad career choice in becoming a science journalist ... should've picked sports or something.

Comment author: Latanius2 08 June 2008 04:30:18PM 0 points [-]

Well... I liked the video, especially to watch how all the concepts mentioned on OB before work in... real life. But showing how you should think to be effective (which Eliezer is writing about on OB) is a different goal from persuading people that the Singularity is not some other dull pseudo-religion. No, they haven't read OB, and they won't even have a reason to if they are told "you won't understand all this all of a sudden, see inferential distances, which is a concept I also can't explain now". To get through their spam filter, we'll need stories, even details, with a "this is very improbable, but if you're interested, read OB" disclaimer at the end. See the question "but how could we use AI to fight poverty etc."... Why is the Singularity still "that strange and scary prediction some weird people make without any reason"?

Comment author: Caledonian2 08 June 2008 05:01:48PM 0 points [-]

But all of the beliefs about what the world will do once it hits a Singularity ARE a dull religion, because the whole point of a Singularity is that we can't trust our ability to extrapolate and speculate beyond it.

Comment author: Ian_C. 08 June 2008 05:24:41PM -1 points [-]

The interviewer accused Eliezer of being religious-like. But if the universe is deterministically moving from state to state then it's just like a computer, a machine that moves predictably from state to state. Therefore it's not religious at all to believe anything in the world (including intelligence) could eventually be reproduced in a computer.

But of course the universe is not like a computer. Everything a computer does until the end of time is implied in its initial state, the nature of its CPU, and subsequent inputs. It can never deviate from that course. It can never choose like a human, therefore it can never model a human.

And it's not possible to rationally argue that choice is an illusion, because reason uses choice in its operations. If you use something in the process of arguing against it, you fall into absurdity. E.g. your proof comes out something like: "I presumed P, pondered Q and R, *chose* R, reasoned thusly about R vs S, finally *choosing* S. Therefore choice isn't really choosing."

Comment author: Silas 08 June 2008 07:09:13PM 0 points [-]

Eliezer_Yudkowsky: Considering your thrice-daily mention of Aumann (two month running average), shouldn't you have been a little more prepared for a question like that?

Btw, I learned from that video that your first name has four syllables rather than three.

Comment author: Michael_G.R. 08 June 2008 07:26:55PM 0 points [-]

You need a chess clock next time. John talks way too much.

Comment author: Joseph_Knecht 09 June 2008 06:08:33AM 0 points [-]

AI researchers of previous eras made predictions that were wildly wrong. Therefore, human-level AI (since it is a goal of some current AI researchers) cannot happen in the foreseeable future. They were wrong before, so they must be wrong now. And dawg-garn it, it seems like some kind of strange weirdo religious faith-based thingamajiggy to me, so it must be wrong.

Thanks for a good laugh, Mr. Horgan! Keep up the good work.

Comment author: Nick_Tarleton 09 June 2008 11:12:28PM 1 point [-]

Ian: what makes you think the things humans do aren't implied by their initial state, nature, and inputs? The form of choice reason demands (different outputs given different inputs) is perfectly compatible with determinism; in fact it requires determinism, since nondeterministic factors would imply less entanglement between beliefs and reality. If your conclusion is not totally determined by priors and evidence, you're doing something wrong.

Comment author: Nate4 10 June 2008 05:59:49AM 0 points [-]

I thought you did an excellent job.

Comment author: Michael_Sullivan 10 June 2008 03:53:28PM -1 points [-]

I would think the key line of attack in trying to describe why a singularity prediction is reasonable is in making clear what you *are* predicting and what you are *not* predicting.

Guys like Horgan hear a few sentences about the "singularity" and think humanoid robots, flying cars, phasers and force fields, that we'll be living in the star-trek universe.

Of course, as anyone with the Bayes-skilz of Eliezer knows, start making detailed predictions like that and you're sure to be wrong about most of it, even if the basic idea of a radically altered social structure and technology beyond our current imagination is highly probable. And that's the key: "beyond our current imagination". The specifics of what will happen aren't very predictable today. If they were, we'd already be *in* the singularity. The things that happen will seem strange and almost incomprehensible by today's standards, in the way that our world is strange and incomprehensible by the standards of the 19th century.

The last 200 years already are much like a singularity from the perspective of someone looking forward from 15th-century Europe and getting a vision of what happened between 1800 and 2000, even though the basic groundwork for that future was already being laid.

Comment author: Ian_C. 11 June 2008 08:00:26AM -1 points [-]

Nick: "what makes you think the things humans do aren't implied by its initial state, nature, and inputs?"

What humans do *is* determined by their nature, just like with a computer. The difference is, human nature is to be able to choose, and computer nature is not.

"The form of choice reason demands (different outputs given different inputs) is perfectly compatible with determinism, in fact it requires determinism, since nondeterministic factors would imply less entanglement between beliefs and reality. If your conclusion is not totally determined by priors and evidence, you're doing something wrong."

You're not doing something wrong, because I don't think reason is pure discipline, pure modus-ponens. I think it's more like tempered creativity - utilizing mental actions such as choice, focus, imagination as well as pure logic. The computer just doesn't have what it takes.

But the point I was making is that the whole idea of reason wouldn't arise in the first place without prior acceptance of free will. It is only by accepting that we control our minds that the question of how best to do so arises, and ideas like reason, deduction etc. come to be.

All these ideas therefore presuppose free will in their very genesis, and can not validly be used to argue against it. It would be like trying to use the concept "stealing" in a proof against the validity of "property" - there is no such thing as stealing without property. Likewise there is no such thing as reason without free will.

Comment author: Tim_Tyler 12 July 2008 08:57:58AM 0 points [-]

From the BH comments:

I've been reading overcomingbias.com for a long time, more out of interest than because I agree with their world view. It's certainly one of the most pretentious and eliteist blogs on the internet. They need to learn humility.

Comment author: Tim_Tyler 20 July 2008 02:22:21PM 0 points [-]

See also, Michael Anissimov's dissection of this discussion.

Comment author: Eliezer_Yudkowsky 13 December 2008 03:22:02PM 1 point [-]

If re-asked the question "What's the strongest criticism you've seen of Singularity ideas?" I would now be able to unhesitatingly answer "Robin Hanson's critique of hard takeoff."