Wei_Dai comments on Reframing the Problem of AI Progress - Less Wrong

Post author: Wei_Dai 12 April 2012 07:31PM


Comment author: Wei_Dai 13 April 2012 01:21:10AM *  3 points [-]

This probably deserves a discussion post of its own, but here are some ideas that I came up with. We can:

  • persuade more AI researchers to lend credibility to the argument against AI progress, and to support whatever projects we decide upon to try to achieve a positive Singularity
  • convince the most promising AI researchers (especially promising young researchers) to seek different careers
  • hire the most promising AI researchers to do research in secret
  • use the argument on funding agencies and policy makers
  • publicize the argument enough so that the most promising researchers don't go into AI in the first place
Comment author: IlyaShpitser 13 April 2012 02:46:09PM 2 points [-]

You (as a group) need "street cred" to be persuasive. To a typical person you look like a modern-day version of a doomsday cult. Publishing recognized AI work would be a good place to start.

Comment author: Dmytry 13 April 2012 05:20:39PM *  0 points [-]

The issue is that it is a doomsday cult if one is expected to treat an extreme outlier (on doom beliefs), who has never done anything notable beyond being a popular blogger, as the best person to listen to. That is an incredibly unlikely situation for a genuine risk. Bonus cultism points for knowing Bayesian inference but not applying it here. Regardless of how real the AI risk is, and regardless of how truly qualified that one outlier may be, it is an incredibly unlikely world-state in which the best warning about AI risk comes from someone like that. No matter how fucked up the scientific review process is, it is incredibly unlikely that the world's best AI talk is someone's first notable contribution.

Comment author: Wei_Dai 13 April 2012 05:44:42PM *  1 point [-]

Publishing AI work would help increase credibility, but it's a costly way of doing so since it directly promotes AI progress. At least some mainstream AI researchers already take SIAI seriously. (Evidence: 1 2) So I suggest bringing better arguments to them and convincing them to lend further credibility.

Comment author: IlyaShpitser 14 April 2012 02:25:16AM *  2 points [-]

By the way, what counts as "AI progress"? Do you consider statistics and machine learning a part of "AI progress"? Is theoretical work okay? What about building self-driving cars or speech recognition software? Where is, as someone here would call it, the Schelling point?

Do you consider stopping "AI progress" important enough to put something on the line besides talking about it?

Comment author: Wei_Dai 14 April 2012 04:28:29AM 4 points [-]

You raise a very good question. There doesn't seem to be a natural Schelling point, and actually the argument can be generalized to cover other areas of technological development that wouldn't ordinarily be considered to fall under AI at all, for example computer hardware. So somebody can always say "Hey, all those other areas are just as dangerous. Why are you picking on me?" I'm not sure what to do about this.

Do you consider stopping "AI progress" important enough to put something on the line besides talking about it?

I'm not totally sure what you mean by "put something on the line" but for example I've turned down offers to co-author academic papers on UDT and argued against such papers being written/published, even though I'd like to see my name and ideas in print as much as anybody. BTW, realistically I don't expect to stop AI progress, but just hope to slow it down some.

Comment author: IlyaShpitser 29 April 2012 06:55:11PM *  -2 points [-]

My understanding of Schelling points is that there are, by definition, no natural Schelling points: you pick an arbitrary point to defend as a strategy against slippery slopes. In Yvain's post he picked an arbitrary percentage, I think 95.

There is a slippery slope here. Where will you defend?

Comment author: David_Gerard 13 April 2012 07:02:21PM 3 points [-]

So ... the name is misleading - it's actually the Singularity Institute against Artificial Intelligence.

Comment author: Wei_Dai 13 April 2012 07:09:48PM 1 point [-]
Comment author: David_Gerard 13 April 2012 07:11:42PM *  4 points [-]
Comment author: Incorrect 13 April 2012 07:05:44PM 1 point [-]

or for exclusively friendly AI.

Comment author: timtyler 13 April 2012 11:29:22AM *  1 point [-]

convince the most promising AI researchers (especially promising young researchers) to seek different careers

Relinquishment? My estimate of the effectiveness of that hovers around zero. I don't see any reason to think it has any hope of being effective.

Especially not if the pitch is: YOU guys all relinquish the technology - AND LET US DEVELOP IT!!!

That will just smack of complete hypocrisy.

Cosmetically splitting the organisation into the neo-Luddite activists and the actual development team might help to mitigate this potential PR problem.

hire the most promising AI researchers to do research in secret

Surely secret progress is the worst kind - most likely to lead to a disruptive and unpleasant outcome for the majority - and to uncaught mistakes.

Comment author: cousin_it 13 April 2012 12:02:46PM 2 points [-]

How do I tell whether a small group doing secret research will be better or worse at saving the world than the global science/military complex? Does anyone have strong arguments either way?

Comment author: XiXiDu 13 April 2012 12:39:31PM *  1 point [-]

How do I tell whether a small group doing secret research will be better or worse at saving the world than the global science/military complex? Does anyone have strong arguments either way?

I haven't heard of any justification for why it might only take "nine people and a brain in a box in a basement". I think some people are too convinced of the AIXI approximation route and therefore believe that it is just a math problem that only takes some thinking and one or two deep insights.

Every success in AI so far has relied on a huge team. Consider IBM Watson, Siri, BigDog, or the various self-driving cars:

1)

With Siri, Apple is using the results of over 40 years of research funded by DARPA via SRI International's Artificial Intelligence Center, through the Personalized Assistant that Learns program and the Cognitive Agent that Learns and Organizes (CALO) program.

2)

When a question is put to Watson, more than 100 algorithms analyze the question in different ways, and find many different plausible answers, all at the same time. Yet another set of algorithms ranks the answers and gives them a score. For each possible answer, Watson finds evidence that may support or refute that answer. So for each of hundreds of possible answers it finds hundreds of bits of evidence, and then with hundreds of algorithms scores the degree to which the evidence supports the answer. The answer with the best evidence assessment will earn the most confidence. The highest-ranking answer becomes the answer. However, during a Jeopardy! game, if the highest-ranking possible answer isn't rated high enough to give Watson enough confidence, Watson decides not to buzz in and risk losing money if it's wrong. The Watson computer does all of this in about three seconds.

It takes a company like IBM to design such a narrow AI. More than 100 algorithms. Could it have been done without a lot of computational and intellectual resources?

The basement approach seems ridiculous given the above.
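[Editor's note: the Watson description quoted above maps onto a simple pattern: generate candidate answers, score each with many independent evidence scorers, aggregate the scores into a confidence value, and abstain below a threshold. Below is a minimal, purely illustrative Python sketch of that pattern; the scorers, threshold, and all names are hypothetical stand-ins, not IBM's actual components.]

```python
# Toy sketch of the ensemble answer-scoring pattern described in the
# Watson quote above. All scorers and names are hypothetical stand-ins;
# Watson's real pipeline used over 100 far more sophisticated algorithms.

from statistics import mean

def keyword_overlap(question: str, answer: str) -> float:
    """Crude evidence scorer: fraction of question words found in the answer."""
    q, a = set(question.lower().split()), set(answer.lower().split())
    return len(q & a) / max(len(q), 1)

def brevity_prior(question: str, answer: str) -> float:
    """Crude prior: short answers are slightly favoured in a quiz setting."""
    return 1.0 / (1.0 + len(answer.split()))

SCORERS = [keyword_overlap, brevity_prior]  # Watson: >100 scoring algorithms
BUZZ_THRESHOLD = 0.5                        # don't buzz in unless confident

def answer(question: str, candidates: list):
    """Rank candidates by mean scorer output; abstain if confidence is low."""
    confidence = {c: mean(s(question, c) for s in SCORERS) for c in candidates}
    best = max(candidates, key=confidence.get)
    # Like Watson declining to buzz in, return nothing when unsure.
    return best if confidence[best] >= BUZZ_THRESHOLD else None

# With scorers this weak the system mostly abstains - which is exactly
# the point of the confidence threshold.
print(answer("capital of France", ["Paris", "France is a country"]))
```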

Comment author: Dr_Manhattan 13 April 2012 08:19:01PM 2 points [-]

IBM Watson started with a rather small team (2-3 people); IBM started dumping resources on them once they saw serious potential.

Comment author: Wei_Dai 13 April 2012 04:07:38PM 2 points [-]

I haven't heard of any justification for why it might only take "nine people and a brain in a box in a basement".

I didn't mean to endorse that. What I was thinking when I wrote "hire the most promising AI researchers to do research in secret" was that if there are any extremely promising AI researchers who are convinced by the argument but don't want to give up their life's work, we could hire them to continue in secret, just to keep the results out of the public domain and to activate suitable contingency plans as needed.

My thoughts on what the main effort should be are still described in Some Thoughts on Singularity Strategies.

Comment author: timtyler 13 April 2012 01:01:43PM 0 points [-]

I think some people are too convinced of the AIXI approximation route and therefore believe that it is just a math problem that only takes some thinking and one or two deep insights

Inductive inference is "just a math problem". That's the part that models the world - which is what our brain spends most of its time doing. However, it's probably not "one or two deep insights". Inductive inference systems seem to be complex and challenging to build.

Comment author: XiXiDu 13 April 2012 02:38:41PM 0 points [-]

Inductive inference is "just a math problem". That's the part that models the world - which is what our brain spends most of its time doing.

Everything is a math problem. But that doesn't mean you can build a brain by sitting in your basement and literally thinking it up.

Team Basement

Comment author: timtyler 13 April 2012 02:48:14PM 0 points [-]

A well-specified math problem, then. By contrast with fusion or space travel.

Comment author: Dmytry 13 April 2012 05:18:29PM *  0 points [-]

How is intelligence well specified compared to space travel? We know physics well enough. We know we want to get from point A to point B. With intelligence, we don't even quite know what exactly we want from it. We know of some ridiculously slow towers-of-exponents method, which means precisely nothing.

Comment author: timtyler 13 April 2012 09:53:21PM *  0 points [-]

The claim was: inductive inference is just a math problem. If we knew how to build a good-quality, general-purpose stream compressor, the problem would be solved.
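[Editor's note: to make the compressor-as-inductor claim above concrete, here is a minimal sketch using only the standard zlib module. The idea is that a compressor doubles as a predictor: the continuation that compresses best given the history is the one the compressor's implicit model finds most likely. zlib is, of course, a very weak stand-in for the "good quality, general-purpose stream compressor" the comment imagines.]

```python
# Minimal sketch of "induction as compression": predict the next symbol
# as the one that makes the whole history compress smallest. zlib is a
# deliberately weak stand-in for a good general-purpose compressor.

import zlib

def predict_next(history: bytes, alphabet: bytes) -> int:
    """Return the symbol whose appending minimizes compressed length."""
    return min(alphabet, key=lambda s: len(zlib.compress(history + bytes([s]), 9)))

history = b"abc" * 30 + b"a"   # a stream with an obvious repeating pattern
for sym in b"abc":             # show the compressed sizes the predictor compares
    print(chr(sym), len(zlib.compress(history + bytes([sym]), 9)))
print("prediction:", chr(predict_next(history, b"abc")))  # expected: 'b'
```

[Getting from this toy to a compressor whose implicit model captures the real world is, as noted upthread, the complex and challenging part.]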

Comment author: timtyler 13 April 2012 12:58:23PM 0 points [-]

A small group doing secret research sounds pretty screwed to me - with its main hope being an acquisition or a merger.

Comment author: quartz 13 April 2012 10:13:43PM 1 point [-]

These are interesting suggestions, but they don't exactly address the problem I was getting at: leaving a line of retreat for the typical AI researcher who comes to believe that his work likely contributes to harm.

My anecdotal impression is that the number of younger researchers who take arguments for AI risk seriously has grown substantially in recent years, but - apart from spreading the arguments and the option of career change - it is not clear how this knowledge should affect their actions.

If the risk of indifferent AI is to be averted, I expect that a gradual shift in what is considered important work is necessary in the minds of the AI community. The most viable path I see towards such a shift involves giving individual researchers an option to express their change in beliefs in their work - in a way that makes use of their existing skillset and doesn't kill their careers.

Comment author: Wei_Dai 13 April 2012 10:35:49PM 1 point [-]

Ok, I had completely missed what you were getting at, and instead interpreted your comment as saying that there's not much point in coming up with better arguments, since we can't expect AI researchers to change their behaviors anyway.

The most viable path I see towards such a shift involves giving individual researchers an option to express their change in beliefs in their work - in a way that makes use of their existing skillset and doesn't kill their careers.

This seems like a hard problem, but certainly worth thinking about.