Eliezer_Yudkowsky comments on Advice for AI makers - Less Wrong

Post author: Stuart_Armstrong 14 January 2010 11:32AM


Comment author: Eliezer_Yudkowsky 15 January 2010 03:43:34AM 11 points [-]

"And I heard a voice saying 'Give up! Give up!' And that really scared me 'cause it sounded like Ben Kenobi." (source)

Friendly AI is a humongous damn multi-genius-decade sized problem. The first step is to realize this, and the second step is to find some fellow geniuses and spend a decade or two solving it. If you're looking for a quick fix you're out of luck.

The same (albeit to a lesser degree) is fortunately also true of Artificial General Intelligence in general, which is why the hordes of would-be meddling dabblers haven't killed us all already.

Comment author: Wei_Dai 16 January 2010 04:35:13AM *  30 points [-]

This article (which I happened across today) written by Ben Goertzel should make interesting reading for a would-be AI maker. It details Ben's experience trying to build an AGI during the dot-com bubble. His startup company, Webmind, Inc., apparently had up to 130 (!) employees at its peak.

According to the article, the AGI was almost completed, and the main reason his effort failed was that the company ran out of money due to the bursting of the bubble. Together with the anthropic principle, this seems to imply that Ben is the person responsible for the stock market crash of 2000.

I was always puzzled why SIAI hired Ben Goertzel to be its research director, and this article only deepens the mystery. If Ben has done an Eliezer-style mind-change since writing that article, I think I've missed it.

ETA: Apparently Ben has recently been helping his friend Hugo de Garis build an AI at Xiamen University under a grant from the Chinese government. How do you convince someone to give up building an AGI when your own research director is essentially helping the Chinese government build one?

Comment author: timtyler 25 June 2011 12:06:47PM *  7 points [-]

I was always puzzled why SIAI hired Ben Goertzel to be its research director, and this article only deepens the mystery.

Ben has a PhD, can program, has written books on the subject, and has some credibility. Those kinds of things can help a little if you are trying to get people to give you money in the hope that you will build a superintelligent machine. For more, see here:

It has similarly been a general rule with the Singularity Institute that, whatever it is we're supposed to do to be more credible, when we actually do it, nothing much changes. "Do you do any sort of code development? I'm not interested in supporting an organization that doesn't develop code" -> OpenCog -> nothing changes. "Eliezer Yudkowsky lacks academic credentials" -> Professor Ben Goertzel installed as Director of Research -> nothing changes. The one thing that actually has seemed to raise credibility, is famous people associating with the organization, like Peter Thiel funding us, or Ray Kurzweil on the Board.

Comment author: Wei_Dai 20 January 2010 04:15:32AM 5 points [-]

I just came across an old post of mine that asked a similar question:

BTW, I still remember the arguments between Eliezer and Ben about Friendliness and Novamente. As late as January 2005, Eliezer wrote:

And if Novamente should ever cross the finish line, we all die. That is what I believe or I would be working for Ben this instant.

I'm curious how that debate was resolved?

From the reluctance of anyone at SIAI to answer this question, I conclude that Ben Goertzel being the Director of Research probably represents the outcome of some internal power struggle/compromise at SIAI, whose terms of resolution included the details of the conflict being kept secret.

What is the right thing to do here? Should we try to force an answer out of SIAI, for example by publicly accusing it of not taking existential risk seriously? That would almost certainly hurt SIAI as a whole, but might strengthen "our" side of this conflict. Does anyone have other suggestions for how to push SIAI in a direction that we would prefer?

Comment author: Eliezer_Yudkowsky 20 January 2010 04:25:05AM 9 points [-]

The short answer is that Ben and I are both convinced the other is mostly harmless.

Comment author: Wei_Dai 20 January 2010 04:36:07AM 3 points [-]

Have you updated that in light of the fact that Ben just convinced the Chinese government to start funding AGI? (See my article link earlier in this thread.)

Comment author: Eliezer_Yudkowsky 20 January 2010 04:39:01AM 7 points [-]

Hugo de Garis is around two orders of magnitude more harmless than Ben.

Comment author: Kevin 24 June 2010 08:28:27PM 12 points [-]

Update for anyone who comes across this comment: Ben Goertzel recently tweeted that he will be taking over Hugo de Garis's lab, pending paperwork approval.

http://twitter.com/bengoertzel/status/16646922609

http://twitter.com/bengoertzel/status/16647034503

Comment author: Wei_Dai 20 January 2010 05:11:33AM 4 points [-]

Hugo de Garis is around two orders of magnitude more harmless than Ben.

What about all the other people Ben might help obtain funding for, partly due to his position at SIAI?

And what about the public relations/education aspect? It's harmless that SIAI appears to not consider AI to be a serious existential risk?

Comment author: wedrifid 20 January 2010 12:26:58PM 6 points [-]

And what about the public relations/education aspect? It's harmless that SIAI appears to not consider AI to be a serious existential risk?

This part was not answered. It may be a question to ask someone other than Eliezer. Or just ask really loudly. That sometimes works too.

Comment author: Eliezer_Yudkowsky 20 January 2010 06:37:00AM 4 points [-]

What about all the other people Ben might help obtain funding for, partly due to his position at SIAI?

The reverse seems far more likely.

Comment author: Wei_Dai 20 January 2010 12:16:06PM 1 point [-]

What about all the other people Ben might help obtain funding for, partly due to his position at SIAI?

The reverse seems far more likely.

I don't know how to parse that. What do you mean by "the reverse"?

Comment author: wedrifid 20 January 2010 12:23:09PM 1 point [-]

I don't know how to parse that. What do you mean by "the reverse"?

Ben's position at SIAI may reduce the expected amount of funding he obtains for other existentially risky persons.

Comment author: wedrifid 20 January 2010 04:47:41AM 2 points [-]

How much of this harmlessness is perceived impotence, and how much is an approximately sane way of thinking?

Comment author: Eliezer_Yudkowsky 20 January 2010 05:01:11AM 6 points [-]

Wholly perceived impotence.

Comment author: XiXiDu 04 November 2010 06:53:23PM 3 points [-]

Do you believe the given answer? And if Ben is really that impotent, what do you think it reveals about SIAI, or about whoever put Ben into a position within SIAI?

Comment author: wedrifid 04 November 2010 07:00:43PM 5 points [-]

Do you believe the given answer?

I don't know enough about his capabilities when it comes to contributing to unfriendly AI research to answer that. Being unable to think sanely about friendliness or risks may have little bearing on your capabilities with respect to AGI research. The modes of thinking have very little bearing on each other.

And if Ben is really that impotent, what do you think it reveals about SIAI, or about whoever put Ben into a position within SIAI?

That they may be more rational and less idealistic than I may otherwise have guessed. There are many potential benefits the SIAI could gain from an affiliation with those inside the higher status AGI communities. Knowing who to know has many uses unrelated to knowing what to know.

Comment author: ata 04 November 2010 08:48:36PM *  6 points [-]

That they may be more rational and less idealistic than I may otherwise have guessed. There are many potential benefits the SIAI could gain from an affiliation with those inside the higher status AGI communities. Knowing who to know has many uses unrelated to knowing what to know.

Indeed. I read part of this post as implying that his position had at least a little bit to do with gaining status from affiliating with him ("It has similarly been a general rule with the Singularity Institute that, whatever it is we're supposed to do to be more credible, when we actually do it, nothing much changes. 'Do you do any sort of code development? I'm not interested in supporting an organization that doesn't develop code' -> OpenCog -> nothing changes. 'Eliezer Yudkowsky lacks academic credentials' -> Professor Ben Goertzel installed as Director of Research -> nothing changes.").

Comment author: XiXiDu 04 November 2010 07:41:15PM 7 points [-]

There are many potential benefits the SIAI could gain from an affiliation with those inside the higher status AGI communities. Knowing who to know has many uses unrelated to knowing what to know.

Does this suggest that founding a stealth AGI institute (to coordinate conferences and communication between researchers) might be a good way to oversee and influence potential undertakings that could lead to imminent high-risk situations?

By the way, I noticed from my server logs that the Institute for Defense Analyses seems to be reading LW. They visited my homepage, referred by my LW profile. So one should think about the consequences of discussing such matters in public, or of not doing so.

Comment author: wedrifid 20 January 2010 04:39:07AM 3 points [-]

There is one 'mostly harmless' for people who you think will fail at AGI. There is an entirely different 'mostly harmless' for actually having a research director who tries to make AIs that could kill us all. Why would I not think SIAI is itself an existential risk, if its criteria for director recruitment are so lax? Being absolutely terrified of disaster is the kind of thing that helps ensure appropriate mechanisms to prevent defection are kept in place.

What is the right thing to do here? Should we try to force an answer out of SIAI, for example by publicly accusing it of not taking existential risk seriously?

Yes. The SIAI has to convince us that they are mostly harmless.

Comment author: Furcas 20 January 2010 04:48:32AM 2 points [-]

Can we know how you came to that conclusion?

Comment author: XiXiDu 24 June 2011 09:35:07AM 3 points [-]

According to the article, the AGI was almost completed, and the main reason his effort failed was that the company ran out of money due to the bursting of the bubble. Together with the anthropic principle, this seems to imply that Ben is the person responsible for the stock market crash of 2000.

Phew... I was almost going to call bullshit on this, but that would be impolite.

Comment author: outlawpoet 16 January 2010 10:20:05PM 1 point [-]

That is an excellent question.

Comment author: Stuart_Armstrong 15 January 2010 10:20:19AM 2 points [-]

That's the justification he gave me: he won't be able to make much of a difference to the subject, so he won't be generating much risk.

Since he's going to do it anyway, I was wondering whether there were safer ways of doing so.

Comment author: Psy-Kosh 15 January 2010 05:16:06AM 3 points [-]

And now for a truly horrible thought:

which is why the hordes of would-be meddling dabblers haven't killed us all already.

I wonder to what extent we've been "saved" so far by anthropics. Okay, that's probably not the dominant effect. I mean, yeah, it's quite clear that AI is, as you note, REALLY hard.

But still, I can't help but wonder just how little or much that's there.

Comment author: cousin_it 18 January 2010 01:48:04PM *  6 points [-]

If you think anthropics has saved us from AI many times, you ought to believe we will likely die soon, because anthropics doesn't constrain the future, only the past. Each passing year without catastrophe should weaken your faith in the anthropic explanation.

Comment author: satt 30 June 2014 09:06:20PM *  1 point [-]

The first sentence seems obviously true to me, the second probably false.

My reasoning: to make observations and update on them, I must continue to exist. Hence I expect to make the same observations & updates whether or not the anthropic explanation is true (because I won't exist to observe and update on AI extinction if it occurs), so observing a "passing year without catastrophe" actually has a likelihood ratio of one, and is not Bayesian evidence for or against the anthropic explanation.

Comment author: Houshalter 01 October 2013 11:40:15PM 1 point [-]

Wouldn't the anthropic argument apply just as much in the future as it does now? The world not being destroyed is the only observable result.

Comment author: [deleted] 02 October 2013 12:50:06AM -1 points [-]

The future hasn't happened yet.

Comment author: Houshalter 02 October 2013 02:32:47AM 0 points [-]

Right. My point was that in the future you are still going to say "wow, the world hasn't been destroyed yet" even if in 99% of alternate realities it was. cousin_it said:

Each passing year without catastrophe should weaken your faith in the anthropic explanation.

Which shouldn't be true at all.

If you cannot observe a catastrophe happen, then not observing a catastrophe is not evidence for any hypothesis.

Comment author: nshepperd 02 October 2013 04:45:22AM 1 point [-]

"Not observing a catastrophe" != "observing a non-catastrophe". If I'm playing Russian roulette and I hear a click and survive, I see good reason to take that as extremely strong evidence that there was no bullet in the chamber.

Comment author: Houshalter 02 October 2013 06:19:51AM 1 point [-]

But doesn't the anthropic argument still apply? Worlds where you survive playing Russian roulette are going to be ones where there wasn't a bullet in the chamber. You should expect to hear a click when you pull the trigger.

Comment author: nshepperd 02 October 2013 06:24:32AM *  0 points [-]

As it stands, I expect to die (p = 1/6) if I play Russian roulette. I don't hear a click if I'm dead.

Comment author: Houshalter 02 October 2013 10:18:18PM 1 point [-]

That's the point. You can't observe anything if you are dead, therefore any observations you make are conditional on you being alive.
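[Editor's note: the Russian-roulette exchange above can be checked numerically. The following is a minimal sketch of my own, not anything posted in the thread; the six-chamber revolver, single bullet, single trigger pull, and the function name are all assumptions made for illustration. It shows both halves of the disagreement: unconditionally, about 5/6 of players survive, yet every surviving observer finds the chamber was empty — so conditional on being alive to observe anything, hearing a click is guaranteed.]

```python
import random

def russian_roulette(trials=100_000, chambers=6, seed=0):
    """Simulate single-pull Russian roulette with one bullet.

    Returns (unconditional survival rate,
             fraction of survivors whose chamber was empty).
    """
    rng = random.Random(seed)
    survivors = 0
    survivors_empty_chamber = 0
    for _ in range(trials):
        chamber = rng.randrange(chambers)  # bullet sits in chamber 0
        if chamber != 0:                   # empty chamber -> click, survive
            survivors += 1
            survivors_empty_chamber += 1
    return survivors / trials, survivors_empty_chamber / survivors

rate, cond = russian_roulette()
# `rate` comes out near 5/6: most players survive, so nshepperd's
# unconditional expectation of death is p = 1/6.
# `cond` is exactly 1.0: every surviving observer finds an empty
# chamber, which is Houshalter's point that any observation you
# make is conditional on being alive to make it.
```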