AlexMennen comments on [Link] Introducing OpenAI - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (48)
From their website, it looks like they'll be doing a lot of deep learning research and making the results freely available, which doesn't sound like it would accelerate Friendly AI relative to AI as a whole. I hope they've thought this through.
Edit: It continues to look like their strategy might be counterproductive. [Edited again in response to this.]
Please don't use "they don't know what they're doing" as a synonym for "I don't agree with their approach".
That interview is indeed worrying. I'm surprised by some of the answers.
Like this?
The first one is a non-answer, the second one suggests that a proper response to Dr. Evil making a machine that transforms the planet into a grey goo is Anonymous creating another machine which... transforms the grey goo into a nicer color of goo, I guess?
If you don't believe that a foom is the most likely outcome (a common and not unreasonable position), then it's probably better to have lots of weakly-superhuman AIs than a single weakly-superhuman AI.
Even in that case, whichever actor has the most processors would have the largest "AI farm", with commensurate power projection.
I think the second one suggests that they don't believe the future AI will be a singleton.
Their statement accords very well with the Hansonian vision of AI progress.
If I am reading that right, they plan to oppose Skynet by giving everyone a Jarvis.
Does anyone know their technical people, and whether they can be profitably exposed to the latest work on safety?
They seem deeply invested in avoiding an AI arms race. This is a good thing, perhaps even if it speeds up research somewhat right now (avoiding increasing speedups later might be the most important thing: exponential growth like e^x vs. linear growth like 2+x, etc.).
Note that if the Deep Learning/ML field is talent-limited rather than funding-limited (which seems likely given how much funding it already has), the only acceleration effects we should expect come from connectedness and openness (i.e., better institutions). Since some of that connectedness might come through collaboration with MIRI, this could very well advance AI safety research relative to AI research (via tighter integration of the research programs and of choices about architecture and research direction; this seems especially important for how things play out in the endgame).
In summary, this could actually be really good, it's just too early to tell.
Heh. Keep in mind, we've been through this before.
Maybe the apparent incompetence is a publicity game, and they do actually know what they're doing?