From their website, it looks like they'll be doing a lot of deep learning research and making the results freely available, which doesn't sound like it would accelerate Friendly AI relative to AI as a whole. I hope they've thought this through.
Edit: It continues to look like their strategy might be counterproductive. [Edited again in response to this.]
Edit: It continues to look like they don't know what they're doing.
Please don't use "they don't know what they're doing" as a synonym for "I don't agree with their approach".
Like this?
If I’m Dr. Evil and I use it, won’t you be empowering me?
Musk: I think that’s an excellent question and it’s something that we debated quite a bit.
Altman: There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.
The first one is a non-answer, and the second one suggests that a proper response to Dr. Evil making a machine that transforms the planet into grey goo is Anonymous creating another machine which... transforms the grey goo into a nicer color of goo, I guess?
If you don't believe that a foom is the most likely outcome (a common and not unreasonable position), then it's probably better to have lots of weakly-superhuman AIs than a single weakly-superhuman AI.
Even in that case, whichever actor has the most processors would have the largest "AI farm", with commensurate power projection.
the second one suggests...
I think the second one suggests that they don't believe the future AI will be a singleton.
If I am reading that right, they plan to oppose Skynet by giving everyone a Jarvis.
Does anyone know their technical people, and whether they can be profitably exposed to the latest work on safety?
They seem deeply invested in avoiding an AI arms race. This is a good thing, perhaps even if it speeds up research somewhat right now (avoiding increasing speedups later might be the most important thing: e^x vs 2+x, etc.).
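A toy sketch of that e^x vs 2+x point (the rates and horizon below are my own illustrative numbers, not from the comment): a speedup folded into the growth rate compounds, while a one-off boost only adds a constant.

```python
import math

# Toy illustration (assumed rates, not from the comment): compare a one-off
# additive boost ("2 + x") against a compounding increase to the growth rate
# ("a larger exponent in e^x") of exponentially growing research progress.
for t in (0, 25, 50, 75, 100):
    baseline = math.exp(0.10 * t)        # unperturbed exponential progress
    additive = math.exp(0.10 * t) + 2    # one-off boost: stays a constant +2 ahead
    compounding = math.exp(0.11 * t)     # rate raised by 10%: the gap grows without bound
    print(f"t={t:3d}  baseline={baseline:9.1f}  additive={additive:9.1f}  compounding={compounding:9.1f}")
```

By t = 100 the additive boost is still only +2 over the baseline, while the compounded rate puts progress at about 2.7x the baseline, which is the sense in which avoiding later, compounding speedups may matter most.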
Note that if the Deep Learning/ML field is talent-limited rather than funding-limited (which seems likely given how much funding it already has), the only acceleration effects we should expect are from connectedness and openness (i.e. better institutions). Since some of this connectedness might come through collaboration with MIRI, this could very well advance AI safety research relative to AI research (via tighter integration of the research programs and choices of architecture and research direction; this seems especially important for how it will play out in the endgame).
In summary, this could actually be really good; it's just too early to tell.
Maybe the apparent incompetence is a publicity game, and they do actually know what they're doing?
Edit: It continues to look like they don't know what they're doing.
Heh. Keep in mind, we've been through this before.
As a result of this development, and assuming some level of collaboration between MIRI and OpenAI, do you believe the "discount rate" for MIRI donations has increased significantly (i.e. it is even more important to give now than later)?
Good question. I'm not sure. Given diminishing marginal returns, if MIRI and OpenAI are doing the same things, then the value of giving to MIRI goes way down. In contrast, if OpenAI is going to speed up the development of AI without putting too much thought into friendly AI, then MIRI and OpenAI are complements and it's more important than ever to give lots of money quickly to MIRI.
Another factor to consider: If AGI is 30+ years away, we're likely to have another "AI winter". Saving money to donate during that winter has some value.
With Sam Altman (CEO of YCombinator) talking so much about AI safety and risk over the last 2-3 months, I was so sure that he was working out a deal to fund MIRI. I wonder why they decided to create their own non-profit instead.
Although on second thought, they're aiming for different goals. While MIRI is focused on safety once strong AI occurs, OpenAI is trying to actually speed up research on strong AI.
Nate Soares says there will be some collaboration between OpenAI and MIRI:
It's interesting that their project is called OpenAI, while both Facebook and Google open-sourced AI algorithms in the last month and a half.
Neither Google nor Facebook seems to be in the OpenAI list but Amazon Web Services does.
Infosys, the second-largest Indian IT company, is also an interesting part of the list of funders. There's an article from yesterday about it forming a new company strategy that involves relying heavily on AI.
I expect OpenAI to actually develop software in a way that MIRI doesn't.
I was so sure that he was working out a deal to fund MIRI. I wonder why they decided to create their own non-profit instead.
In practice MIRI is more think tank than research organization. AFAIK MIRI doesn't yet even claim to have a clear research agenda that leads to practical safe AGI. Their research is more abstract/theoretical/pie-in-the-sky and much harder to measure. Given that numerous AI safety think tanks already exist, creating a new non-profit that is an actual research org makes sense - it fills an empty niche. Creating a fresh structure gives the organizers/founders more control and allows them to staff it with researchers they believe in.
I left this comment on Hacker News exploring whether "AI for everyone" will be a good thing or not. Interested to hear everyone's thoughts.
My concern is similar to your last sentence. I think a lot of choices are being made up front without "thinking them through", as you put it; I wish the resources were spent more evenly to enable answering those questions better, and that some were allocated to MIRI, which is ironically running a fundraiser right now and getting some 10s and 20s, while quite a pile of resources has been allocated, under the safety umbrella, to something I don't (yet) have confidence in.
Good thing I hear MIRI is actively in touch with those guys, so I hope the end will be better than the beginning.
It's important to remember the scale we're talking about here. A $1B project (even when considered over its lifetime) in such an explosive field with such prominent backers would be interpreted as nothing other than a power grab unless it included a lot of talk about openness (it will still be, but as a less threatening one). Read the interview with Musk and Altman and note how they're talking about sharing data and collaborations. This will include some noticeable short-term benefits for the contributors, and pushing for safety, either by including someone from our circles or by a more safety-focused mission statement, would impede your efforts at gathering such a strong coalition.
It's easy to moan over civilizational inadequacy and moodily conclude that the above shows us how (as a species) we're so obsessed with appropriateness and politics that we will avoid our one opportunity to save ourselves. Sure, do some of that, and then think about the actual effects for a few minutes:
If the Value Alignment research program is solvable in the way we all hope it is (complete with a human-universal CEV, stable reasoning under self-modification, and reasoning about other instances of our algorithm), then having lots of implementations running around will be basically the same as distributing the code over lots of computers. If the only problem is that human values won't quite converge, this gives us a physical implementation of the merging algorithm of everyone just doing their own thing and (acausally?) trading with each other.
If we can't quite solve everything that we're hoping for, this does change the strategic picture somewhat. Mainly it seems to push us away from a lot of quick fixes that will likely seem tempting as we approach the explosion: we can't have a sovereign just run the world like some kind of OS that keeps everyone separate, and we'll also be much less likely to make the mistake of creating CelestAI from Friendship is Optimal, something that optimizes most of our goals but has some undesired lock-ins. There are a bunch of variations here, but we seem locked out of strategies that try to secure some minimum level of the cosmic endowment while possibly failing to get a substantial constant fraction of our potential, because they achieve that minimum at the cost of important values or freedoms.
Whether this is a bad thing or not really depends on how one evaluates two types of risk: (1) the risk of undesired lock-ins from an almost-perfect superintelligence getting too much relative power, and (2) the risk of bad multi-polar traps. Much of (2) seems solvable by robust cooperation, which we seem to be making good progress on. What keeps spooking me are risks due to consciousness: either mistakenly endowing algorithms with it and creating suffering, or evolving to the point that we lose it. These aren't as easily solved by robust cooperation, especially if we don't notice them until it's too late. The real strategic problem right now is that there isn't really anyone we can trust to be unbiased in analyzing the relative dangers of (1) and (2), especially because they pattern-match so well onto the ideological split between left and right.
It's important to remember the scale we're talking about here. A $1B project (...) in such an explosive field
I was sure this sentence was going to complete with something along the lines of "is not such a big deal". Silicon Valley is awash with cash. Mark Zuckerberg paid $22B for a company with 70 employees. Apple has $200B sitting in the bank.
(2) the risk of bad multi-polar traps. Much of (2) seems solvable by robust cooperation, which we seem to be making good progress on.
Not necessarily. In a multi-polar scenario consisting entirely of Unfriendly AIs, getting them to cooperate with each other doesn't help us.
Yes, robust cooperation is not worth much to us if it's cooperation between the paperclip maximizer and the pencilhead minimizer. But if there are a hundred shards that make up human values, and tens of thousands of people running AIs trying to maximize whichever values they see fit, it's actually not unreasonable to assume that the outcome, while not exactly what we hoped for, is comparable to incomplete solutions that err on the side of (1) instead.
After having written this I notice that I'm confused and conflating: (a) incomplete solutions in the sense of there not being enough time to do what should be done, and (b) incomplete solutions in the sense of it being actually (provably?) impossible to implement what we right now consider essential parts of the solution. Has anyone got thoughts on (a) vs (b)?
If value alignment is sufficiently harder than general intelligence, then we should expect that given a large population of strong AIs created at roughly the same time, none of them should be remotely close to Friendly.
I would argue that this is a terrible, terrible, terrible idea. Once you've got an AI, you could just ask it how to make chemical or biological weapons. Or to hack into various computer systems. Or how to create self-replicating nano-bots. The problem is that not every attack necessarily has a defense; or, even if such a defense exists, it is typically much more resource-consuming. For example, if you want to protect your buildings against bomb attacks, you have to defend all of them, while an attacker can just target the buildings with minimal defenses. There are also major problems with how good AIs could be at running scams or manipulating people.
"Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower. Altman: "I expect that [OpenAI] will [create superintelligent AI], but it will just be open source and useable by everyone <...> Anything the group develops will be available to everyone", "this is probably a multi-decade project <...> there’s all the science fiction stuff, which I think is years off, like The Terminator or something like that. I’m not worried about that any time in the short term"
It's like giving everybody a nuclear reactor and open-source knowledge about how to make a bomb. That looks likely to result in disaster.
I would like to call this type of thinking "billionaire arrogance" bias. A billionaire thinks that the fact that he is rich is evidence that he is the most clever person in the world. But in fact it is evidence that he was lucky in the past.
Being a billionaire is evidence more of determination than of luck. I also don't think billionaires believe they are the smartest people in the world. But like everyone else, they have too much faith in their own opinions when it comes to areas in which they're not experts. They just get listened to more.
The whole point of open source is distributed oversight. It also sounds like they will use the Apache license or the MIT license, so there's nothing forcing them to publish everything should they decide that isn't wise in two decades.
It also sounds like they will use the Apache license or the MIT license
Good. I was worried for a moment that our new artificial overlords would transform the whole universe into a zillion tiny copies of the GNU GPL.
My first thought was: "the way to avoid bad outcomes from bioweapons is to give everyone equal access to bioweapons. Oh, wait..." (Not entirely fair, I know.)
Still, since I heard of this, my quarterly donation to MIRI increased by 5% of my income.
You can tell pretty easily how good research in math or physics is. But in AI safety research, you can fund people working on the wrong things for years and never know, which is exactly the problem MIRI is currently crippled by. I think OpenAI plans to get around this problem by avoiding AI safety research altogether and just building AIs instead. That initial approach seems like the best option. Even if they contribute nothing to AI safety in the near term, they can produce enough solid, measurable results to keep the organization alive and attract the best researchers, which is half the battle.
What troubles me is that OpenAI could set a precedent for AI safety as a political issue, like global warming. You just have to read the comments on the HN article to find that people don't think they need any expertise in AI safety to have strong opinions about it. In particular, if Sam Altman and Elon Musk have some false belief about AI safety, who is going to prove it to them? You can't just do an experiment like you can in physics. That may explain why they have gotten this far without being able to give well-thought-out answers on some important questions. What MIRI got right is that AI safety is a research problem, so only the opinions of the experts matter. While OpenAI is still working on ML/AI and producing measurable results, it might work to have the people who happen to be wealthy and influential in charge. But if they hope to contribute to AI safety, they will have to hand over control to the people with the correct opinions, and they can't tell who those people are.
That may explain why they have gotten this far without being able to give well-thought-out answers on some important questions.
Which of the answers do you consider not well-thought-out?
In fact we have too many good AI projects, which may result in incompatible versions of AI friendliness and wars between AIs. This has often happened in human history before, most typically when two versions of one religion fight each other (like Shi'ah against Sunni, or different versions of Buddhism).
I think it would be much better to concentrate all friendly AI efforts under control of one person or organisation.
Basically we are moving from an underinvestment stage to overinvestment.
OK, we have many nuclear powers in the world, but only one main non-proliferation agency, the IAEA, and it somehow works. In the same way we could have many AI projects in the world, but one agency which provides safety guidelines (and it would be logical for that to be MIRI+Bostrom, as they did the best-known research on the topic). But if we have many agencies which provide different guidelines, or even several AIs with slightly different friendliness, we are doomed.
Strongly disagree that our current nuclear weapons situation "works". At this very moment a large number of hydrogen bombs sit atop missiles ready at a moment's notice to kill hundreds of millions of people. Letting North Korea get atomic weapons required major civilization-level incompetence.
Moreover, the nuclear weapons situation is much simpler than the AI situation. Pretty much everyone agrees that a nuclear weapon going off in an inhabited area is a big deal that can quickly make life worse for all involved. It is not the case that everyone agrees that general AI is such a big deal. All the official nuclear powers know that there will be a significant negative response directed at them if they bomb anyone else. They do not know this about AI.
It will probably be much easier to use the AI against someone secretly.
You could try to drop an atomic bomb on someone without them knowing who dropped the bomb on them. But you cannot drop an atomic bomb on them without them knowing that someone dropped the bomb on them.
But you could give your AI a task to invent ways to move things closer to your desired outcome without creating suspicion. The obvious options would be to make it happen as a "natural" outcome, or to cast the suspicion on someone else, or maybe to reach the goal in a way that will make people believe it didn't happen or that it wasn't your goal at all. (A superhuman AI could find yet more options; some of them could be incomprehensible to humans. Also options like: the whole world turns into utter chaos; by the way, your original goal is completed, but everyone is now too busy and too confused to even notice it or care about it.) How is anyone going to punish that?
I agree, it works only in a limited sense: there has been no nuclear war for 70 years, but proliferation and risks still exist and even grow.
From their site:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
The money quote is at the end, literally—$1B in committed funding from some of the usual suspects.