
AlexMennen comments on [Link] Introducing OpenAI - Less Wrong Discussion

23 points | Post author: Baughn | 11 December 2015 09:54PM

Comments (48)

Comment author: AlexMennen 12 December 2015 12:26:44AM *  14 points

From their website, it looks like they'll be doing a lot of deep learning research and making the results freely available, which doesn't sound like it would accelerate Friendly AI relative to AI as a whole. I hope they've thought this through.

Edit: It continues to look like their strategy might be counterproductive. [Edited again in response to the reply below.]

Comment author: Kaj_Sotala 12 December 2015 05:45:14PM 18 points

Edit: It continues to look like they don't know what they're doing.

Please don't use "they don't know what they're doing" as a synonym for "I don't agree with their approach".

Comment author: Rain 12 December 2015 03:46:59AM 4 points

That interview is indeed worrying. I'm surprised by some of the answers.

Comment author: Viliam 14 December 2015 11:46:14AM *  4 points

Like this?

If I’m Dr. Evil and I use it, won’t you be empowering me?

Musk: I think that’s an excellent question and it’s something that we debated quite a bit.

Altman: There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it's far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails, or if Dr. Evil gets that one thing and there is nothing to counteract it, then we're really in a bad place.

The first one is a non-answer; the second one suggests that the proper response to Dr. Evil building a machine that transforms the planet into grey goo is Anonymous building another machine which... transforms the grey goo into a nicer color of goo, I guess?

Comment author: HungryHobo 15 December 2015 11:44:16AM 1 point

If you don't believe that a foom is the most likely outcome (a common and not unreasonable position), then it's probably better to have lots of weakly-superhuman AIs than a single weakly-superhuman AI.

Comment author: Rain 23 December 2015 02:36:04PM 0 points

Even in that case, whichever actor has the most processors would have the largest "AI farm", with commensurate power projection.

Comment author: Lumifer 14 December 2015 03:46:02PM 1 point

the second one suggests...

I think the second one suggests that they don't believe the future AI will be a singleton.

Comment author: SilentCal 14 December 2015 07:20:01PM 3 points

Their statement accords very well with the Hansonian vision of AI progress.

Comment author: Riothamus 12 December 2015 02:09:22PM 2 points

If I am reading that right, they plan to oppose Skynet by giving everyone a Jarvis.

Does anyone know their technical people, and whether they can be profitably exposed to the latest work on safety?

Comment author: devi 12 December 2015 05:06:11PM 1 point

They seem deeply invested in avoiding an AI arms race. This is a good thing, perhaps even if it speeds up research somewhat right now (avoiding larger speedups later might be the most important thing: exponential growth like e^x versus a one-time boost like 2+x, etc.).

Note that if the Deep Learning/ML field is talent-limited rather than funding-limited (which seems likely given how much funding it already has), the only acceleration effects we should expect are from connectedness and openness (i.e., better institutions). If some of that connectedness comes through collaboration with MIRI, this could well advance AI safety research relative to AI research as a whole, via tighter integration of the research programs and of choices about architecture and research direction; that integration seems especially important for how the endgame plays out.

In summary, this could actually be really good; it's just too early to tell.

Comment author: bogus 12 December 2015 04:24:44PM 0 points

Edit: It continues to look like they don't know what they're doing.

Heh. Keep in mind, we've been through this before.

Comment author: adamzerner 12 December 2015 04:46:36PM 0 points

Maybe the apparent incompetence is a publicity game, and they do actually know what they're doing?