
OpenAI makes humanity less safe

Post author: Benquo, 03 April 2017 07:07PM (19 points)

If there's anything we can do now about the risks of superintelligent AI, then OpenAI makes humanity less safe.

Once upon a time, some good people were worried about the possibility that humanity would figure out how to create a superintelligent AI before they figured out how to tell it what we wanted it to do.  If this happened, it could lead to literally destroying humanity and nearly everything we care about. This would be very bad. So they tried to warn people about the problem, and to organize efforts to solve it.

Specifically, they called for work on aligning an AI’s goals with ours - sometimes called the value alignment problem, AI control, friendly AI, or simply AI safety - before rushing ahead to increase the power of AI.

Some other good people listened. They knew they had no relevant technical expertise, but what they did have was a lot of money. So they did the one thing they could do - throw money at the problem, giving it to trusted parties to try to solve the problem. Unfortunately, the money was used to make the problem worse. This is the story of OpenAI.

Before I go on, two qualifiers:

  1. This post will be much easier to follow if you have some familiarity with the AI safety problem. For a quick summary you can read Scott Alexander’s Superintelligence FAQ. For a more comprehensive account see Nick Bostrom’s book Superintelligence.
  2. AI is an area in which even most highly informed people should have lots of uncertainty. I wouldn't be surprised if my opinion changes a lot after publishing this post, as I learn relevant information. I'm publishing this because I think this process should go on in public.

The story of OpenAI

Before OpenAI, there was DeepMind, a for-profit venture working on “deep learning” techniques. It was widely regarded as the most advanced AI research organization. If any current effort was going to produce superhuman intelligence, it was DeepMind.

Elsewhere, industrialist Elon Musk was working on more concrete (and largely successful) projects to benefit humanity, like commercially viable electric cars, solar panels cheaper than ordinary roofing, cheap spaceflight with reusable rockets, and a long-run plan for a Mars colony. When he heard the arguments people like Eliezer Yudkowsky and Nick Bostrom were making about AI risk, he was persuaded that there was something to worry about, and initially thought a Mars colony might save us. But when DeepMind’s head, Demis Hassabis, pointed out that this wasn't far enough to escape the reach of a true superintelligence, he decided he had to do something about it:

Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. […] Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

[…]

Musk is not going gently. He plans on fighting this with every fiber of his carbon-based being. Musk and Altman have founded OpenAI, a billion-dollar nonprofit company, to work for safer artificial intelligence.

OpenAI’s primary strategy is to hire top AI researchers to do cutting-edge AI capacity research and publish the results, in order to ensure widespread access. Some of this involves making sure AI does what you meant it to do, which is a form of the value alignment problem mentioned above.

Intelligence and superintelligence

No one knows exactly what research will result in the creation of a general intelligence that can do anything a human can, much less a superintelligence - otherwise we’d already know how to build one. Some AI research is clearly not on the path towards superintelligence - for instance, applying known techniques to new fields. Other AI research is more general, and might plausibly be making progress towards a superintelligence. It could be that the sort of research DeepMind and OpenAI are working on is directly relevant to building a superintelligence, or it could be that their methods will tap out long before then. These are different scenarios, and need to be evaluated separately.

What if OpenAI and DeepMind are working on problems relevant to superintelligence?

If OpenAI is working on things that are directly relevant to the creation of a superintelligence, then its very existence makes an arms race with DeepMind more likely. This is really bad! Moreover, sharing results openly makes it easier for other institutions or individuals, who may care less about safety, to make progress on building a superintelligence.

Arms races are dangerous

One thing nearly everyone thinking seriously about the AI problem agrees on is that an arms race towards superintelligence would be very bad news. The main problem occurs in what is called a “fast takeoff” scenario. If AI progress is smooth and gradual even past the point of human-level AI, then we may have plenty of time to correct any mistakes we make. But if there’s some threshold beyond which an AI would be able to improve itself faster than we could possibly keep up with, then we only get one chance to do it right.

AI value alignment is hard, and AI capacity is likely to be easier, so anything that causes an AI team to rush makes our chances substantially worse; if they get safety even slightly wrong but get capacity right enough, we may all end up dead. But if you’re worried that the other team will unleash a potentially dangerous superintelligence first, then you might be willing to skip some steps on safety to preempt them. But they, having more reason to trust themselves than you, might notice that you’re rushing ahead, get worried that your team will destroy the world, and rush their (probably safe, but they’re not sure) AI into existence.
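Here is a minimal sketch of that dynamic, with numbers invented purely for illustration. Each team can either be careful or rush; rushing improves its odds of finishing first but cuts into safety, and each team trusts its own safety work more than the other's. Under these assumptions, rushing looks like the better choice no matter what the other team does, even though mutual rushing leaves everyone objectively worse off than mutual caution:

    # Toy model of the race dynamic described above. All numbers are invented
    # for illustration; only the structure matters.

    WIN_PROB = {              # P(I finish first | my strategy, their strategy)
        ("careful", "careful"): 0.5,
        ("careful", "rush"):    0.1,
        ("rush",    "careful"): 0.9,
        ("rush",    "rush"):    0.5,
    }

    TRUE_SAFETY = {"careful": 0.9, "rush": 0.75}            # P(safe AI | winner's strategy)
    PERCEIVED_OTHER_SAFETY = {"careful": 0.5, "rush": 0.3}  # each team distrusts the other

    def subjective_survival(mine, theirs):
        """One team's own estimate of P(humanity survives), given both strategies."""
        p_win = WIN_PROB[(mine, theirs)]
        return p_win * TRUE_SAFETY[mine] + (1 - p_win) * PERCEIVED_OTHER_SAFETY[theirs]

    def true_survival(a, b):
        """Actual P(humanity survives), with no distrust discount applied."""
        p_a_wins = WIN_PROB[(a, b)]
        return p_a_wins * TRUE_SAFETY[a] + (1 - p_a_wins) * TRUE_SAFETY[b]

    for theirs in ("careful", "rush"):
        best = max(("careful", "rush"), key=lambda mine: subjective_survival(mine, theirs))
        print(f"If the other team is {theirs}, my best response looks like: {best}")

    print("True survival odds if both are careful:", round(true_survival("careful", "careful"), 3))
    print("True survival odds if both rush:       ", round(true_survival("rush", "rush"), 3))

With these particular numbers the "both rush" outcome cuts the true odds of survival from 0.9 to 0.75; the exact values don't matter, only that distrust plus a first-mover advantage pushes both teams toward the worse equilibrium.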

OpenAI promotes competition

DeepMind used to be the standout AI research organization. With a comfortable lead on everyone else, they could afford to take their time to check their work if they thought they were on the verge of doing something really dangerous. But OpenAI is now widely regarded as a credible close competitor. However dangerous you think DeepMind might have been in the absence of an arms race dynamic, this makes them more dangerous, not less. Moreover, by sharing its results, OpenAI is making it easier to create other close competitors to DeepMind, some of whom may not be so committed to AI safety.

We at least know that DeepMind, like OpenAI, has put some resources into safety research. What about the unknown people or organizations who might leverage AI capacity research published by OpenAI?

For more on how openly sharing technology with extreme destructive potential might be extremely harmful, see Scott Alexander’s Should AI be Open?, and Nick Bostrom’s Strategic Implications of Openness in AI Development.

What if OpenAI and DeepMind are not working on problems relevant to superintelligence?

Suppose OpenAI and DeepMind are largely not working on problems highly relevant to superintelligence. (Personally I consider this the more likely scenario.) By portraying short-run AI capacity work as a way to get to safe superintelligence, OpenAI’s existence diverts attention and resources from things actually focused on the problem of superintelligence value alignment, such as MIRI or FHI.

I suspect that in the long run this will make it harder to get funding for long-run AI safety organizations. The Open Philanthropy Project just made its largest grant ever, to OpenAI, to buy a seat on OpenAI’s board for Open Philanthropy Project executive director Holden Karnofsky. This is larger than their recent grants to MIRI, FHI, FLI, and the Center for Human-Compatible AI put together.

But the problem is not just money - it’s time and attention. The Open Philanthropy Project doesn’t think that OpenAI is underfunded, or that it could do more good with the extra money. Instead, it seems to think that Holden can be a good influence on OpenAI. This means that of the time he's allocating to AI safety, a fair amount has been diverted to OpenAI.

This may also make it harder for organizations specializing in the sort of long-run AI alignment problems that don't have immediate applications to attract top talent. People who hear about AI safety research and are persuaded to look into it will have a harder time finding direct efforts to solve key long-run problems, since an organization focused on increasing short-run AI capacity will dominate AI safety's public image.

Why do good inputs turn bad?

OpenAI was founded by people trying to do good, and has hired some very good and highly talented people. It seems to be doing genuinely good capacity research. To the extent that this is not dangerously close to superintelligence, it’s better to share this sort of thing than not – they could create a huge positive externality. They could construct a fantastic public good. Making the world richer in a way that widely distributes the gains is very, very good.

Separately, many people at OpenAI seem genuinely concerned about AI safety, want to prevent disaster, and have done real work to promote long-run AI safety research. For instance, my former housemate Paul Christiano, who is one of the most careful and insightful AI safety thinkers I know of, is currently employed at OpenAI. He is still doing AI safety work – for instance, he coauthored Concrete Problems in AI Safety with, among others, Dario Amodei, another OpenAI researcher.

Unfortunately, I don’t see how those two things make sense jointly in the same organization. I’ve talked with a lot of people about this in the AI risk community, and they’ve often attempted to steelman the case for OpenAI, but I haven’t found anyone willing to claim, as their own opinion, that OpenAI as conceived was a good idea. It doesn’t make sense to anyone, if you’re worried at all about the long-run AI alignment problem.

Something very puzzling is going on here. Good people tried to spend money on addressing an important problem, but somehow the money got spent on the thing most likely to make that exact problem worse. Whatever is going on here, it seems important to understand if you want to use your money to better the world.

(Cross-posted at my personal blog.)

Comments (107)

Comment author: DustinWehr 03 April 2017 10:06:59PM *  13 points [-]

A guy I know, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists. That's an extreme POV. Most researchers in ML simply think that people who worry about superintelligence are uneducated cranks addled by sci fi.

I hope everyone is aware of that perception problem.

Comment author: Benquo 05 April 2017 12:13:53AM *  15 points [-]

Let me be as clear as I can about this. If someone does that, I expect it will make humanity still less safe. I do not know how, but the whole point of deontological injunctions is that they prevent you from harming your interests in hard to anticipate ways.

As bad as a potential arms race is, an arms race fought by people who are scared of being murdered by the AI safety people would be much, much worse. Please, if anyone reading this is considering vigilante violence against AI researchers, don't.

The right thing to do is tell people your concerns, like I am doing, as clearly and openly as you can, and try to organize legitimate, above-board ways to fix the problem.

Comment author: Darklight 05 April 2017 03:49:48AM 13 points [-]

I may be an outlier, but I've worked at a startup company that did machine learning R&D, and which was recently acquired by a big tech company, and we did consider the issue seriously. The general feeling of the people at the startup was that, yes, somewhere down the line the superintelligence problem would eventually be a serious thing to worry about, but like, our models right now are nowhere near becoming able to recursively self-improve themselves independently of our direct supervision. Actual ML models basically need a ton of fine-tuning and engineering and are not really independent agents in any meaningful way yet.

So, no, we don't think people who worry about superintelligence are uneducated cranks... a lot of ML people do take it seriously enough that we've had casual lunch room debates about it. Rather, the reality on the ground is that right now most ML models have enough trouble figuring out relatively simple tasks like Natural Language Understanding, Machine Reading Comprehension, or Dialogue State Tracking, and none of us can imagine how solving those practical problems with say, Actor-Critic Reinforcement Learning models that lack any sort of will of their own, will lead suddenly to the emergence of an active general superintelligence.

We do still think that eventually things will likely develop, because people have been burned underestimating what A.I. advances will occur in the next X years, and when faced with the actual possibility of developing an AGI or ASI, we're likely to be much more careful in the future when things start to get closer to being realized. That's my humble opinion anyway.

Comment author: DustinWehr 18 April 2017 05:29:46PM *  0 points [-]

I've kept fairly up to date on progress in neural nets, less so in reinforcement learning, and I certainly agree about how limited things are now.

What if protecting against the threat of ASI requires huge worldwide political/social progress? That could take generations.

Not an example of that (which I haven't tried to think of), but the scenario that concerns me the most, so far, is not that some researchers will inadvertently unleash a dangerous ASI while racing to be the first, but rather that a dangerous ASI will be unleashed during an arms race between (a) states or criminal organizations intentionally developing a dangerous ASI, and (b) researchers working on ASI-powered defences to protect us against (a).

Comment author: Lumifer 18 April 2017 05:45:07PM 0 points [-]

What if protecting against the threat of ASI requires huge worldwide political/social progress?

A more interesting question is what if protecting against the threat of ASI requires huge worldwide political/social regress (e.g. of the book-burning kind).

Comment author: Vaniver 04 April 2017 01:19:04AM 9 points [-]

This seems like a good place to point out the unilateralist's curse. If you're thinking about taking an action that burns a commons and notice that no one else has done it yet, that's pretty good evidence that you're overestimating the benefits or underestimating the costs.
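A minimal simulation of the unilateralist's curse, with numbers invented purely for illustration: several groups each form a noisy estimate of whether some action that burns a commons is worthwhile, and any one of them can take it unilaterally. Even when the action is genuinely net-negative, the chance that the most optimistic group goes ahead grows quickly with the number of groups:

    # Minimal Monte Carlo sketch of the unilateralist's curse (invented numbers,
    # purely illustrative): each group acts if its own noisy estimate of the
    # action's value looks positive, so the action is taken whenever the most
    # optimistic group overestimates it.
    import random

    def p_someone_acts(true_value, n_groups, noise_sd, trials=20_000):
        """Probability that at least one group's noisy estimate comes out positive."""
        hits = 0
        for _ in range(trials):
            if any(true_value + random.gauss(0, noise_sd) > 0 for _ in range(n_groups)):
                hits += 1
        return hits / trials

    # The action is genuinely net-negative (true value -1); estimates have noise sd 1.
    for n in (1, 5, 20):
        print(n, "groups -> acted with probability about", round(p_someone_acts(-1.0, n, 1.0), 2))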

Comment author: James_Miller 03 April 2017 10:29:14PM 7 points [-]

This perception problem is a big part of the reason I think we are doomed if superintelligence will soon be feasible to create.

Comment author: DustinWehr 04 April 2017 01:29:38PM 2 points [-]

If my anecdotal evidence is indicative of reality, the attitude in the ML community is that people concerned about superhuman AI should not even be engaged with seriously. Hopefully that, at least, will change soon.

Comment author: James_Miller 04 April 2017 09:23:11PM *  2 points [-]

If you think there is a chance that he would accept, could you please tell the guy you are referring to that I would love to have him on my podcast. Here is a link to this podcast, and here is me.

Edited thanks to Douglas_Knight

Comment author: Douglas_Knight 05 April 2017 01:20:56AM 1 point [-]

That's the wrong link. Your podcast is here.

Comment author: DustinWehr 18 April 2017 04:45:24PM 0 points [-]

He might be willing to talk off the record. I'll ask. Have you had Darklight on? See http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/dqm8

Comment author: username2 05 April 2017 01:32:57PM *  5 points [-]

Are you describing me? It fits to a T except my dayjob isn't ML. I post using this shared anonymous account here because in the past when I used my real name I received death threats online from LW users. In a meetup I had someone tell me to my face that if my AGI project crossed a certain level of capability, they would personally hunt me down and kill me. They were quite serious.

I was once open-minded enough to consider AI x-risk seriously. I was unconvinced, but ready to be convinced. But you know what? Any ideology that leads to making death threats against peaceful, non-violent open source programmers is not something I want to let past my mental hygiene filters.

If you, the person reading this, seriously care about AI x-risk, then please do think deeply about what causes this, and ask yourself what can be done to put a stop to this behavior. Even if you haven't done so yourself, something about the rationalist community causes this behavior to be expressed.

--

I would be remiss without laying out my own hypothesis. I believe much of this comes directly from ruthless utilitarianism and the "shut up and multiply" mentality. It's very easy to justify the murder of one individual, or the threat of it even if you are not sure you'd carry it through, if it is offset by some imagined saving of the world. The problem here is that nobody is omniscient, and yet AI x-riskers are willing to be swayed by utility calculations that in reality have so much uncertainty that they should never be taken seriously. Vaniver's reference to the unilateralist's curse is spot-on.

Comment author: gwern 05 April 2017 07:09:42PM *  21 points [-]

Death threats are a serious matter and such behavior must be called out. If you really have received 3 or more death threats as you claim, you should be naming names of those who have been going around making death threats and providing documentation, as should be possible since you say at least two of them were online. (Not because the death threats are particularly likely to be acted on - I've received a number of angry death threats myself over my DNM work and they never went anywhere, as indeed >99.999% of death threats do - but because it's a serious violation of community norms, specific LW policy against 'threats against specific groups', and merely making them greatly poisons the community, sowing distrust and destroying its reputation.)

Especially since, because they are so serious, it is also serious if someone is hoaxing fake death threats and concern-trolling while hiding behind a throwaway... That sort of vague unspecific but damaging accusation is how games of telephone get started and, for example, why, 7+ years later, we still have journalists writing BS about how 'the basilisk terrified the LW community' (thanks to our industrious friends over on Ratwiki steadily inflating the claims from 1 or 2 people briefly worried to a community-wide crisis). I am troubled by the coincidence that almost simultaneous with these claims, over on /r/slatestarcodex, probably the most active post-LW discussion forum, is also arguing over a long post - by another throwaway account - claiming that it is regarded as a cesspit of racism by unnamed experts, following hard on the heels of Caplan/Cowen slamming LW for the old chestnut of being a 'religion'. "You think people would do that? Just go on the Internet and tell lies?" Nor are these the first times that pseudonymous people online have shown up to make damaging but false or unsubstantiated accusations (su3su2su1 comes to mind as making similar claims and turning out to have 'lied for Jesus' about his credentials and the unnamed experts and probably various anecdotes he retailed in support of his claims, as does whoever was behind that attempt to claim MIRI was covering up rape).

Comment author: username2 05 April 2017 09:15:40PM 1 point [-]

I agree with the 1st paragraph. You could have done without the accusations of concern trolling in the 2nd.

Comment author: dxu 05 April 2017 09:27:07PM 4 points [-]

If, as you say, you agree with the first paragraph, it might behoove you to follow the advice given in said paragraph--naming the people who threatened you and providing documentation.

Comment author: username2 06 April 2017 03:11:05AM *  3 points [-]

And call more attention to myself? No. What's good for the community is not the same as what protects myself and my family. Maybe you're missing the larger point here: this wasn't an isolated occurrence, or some unhinged individual. I didn't feel threatened by individuals making juvenile threats, I felt threatened by this community. I'm not the only one. I have not, so far, been stalked by anyone I think would be capable of doing me harm. Rather it is the case that multiple times in casual conversation it has come up that if the technology I work on advanced beyond a certain level, it would be a moral obligation to murder me to halt further progress. This was discussed just as one would debate the most effective charity to donate to. That the dominant philosophy here could lead to such outcomes is a severe problem with both the LW rationality community and x-risk in particular.

Comment author: whpearson 06 April 2017 07:59:56AM 1 point [-]

I'm curious if this is recent or in the past. I think there has been something of a shift in the community when it became more associated with the fluffier EA movement.

You could get someone trusted to post the information anonymised on your behalf. I probably don't fit that bill though.

Comment author: Anonzo 08 June 2017 06:48:36AM 0 points [-]

This is a tangent, but I made this anon account because I'm about to voice an unpopular opinion, but the people who dug up su3su2u1's identity also verified his credentials. If you look at the shlevy post that questioned his credentials, there is an ETA at the bottom that says "I have personally verified that he does in fact have a physics phd and does currently work in data science, consistent with his claims on tumblr." His pseudo-anonymous expertise was more vetted than most.

His sins were sockpuppeting on other rationalists' blogs, not lying about credentials. Although, full disclosure, I only read the HPMOR review and the physics posts. We shouldn't get too wrapped up in these ideas of persecution.

Comment author: username2 08 June 2017 07:59:04AM 2 points [-]

su3su2u1 told the truth about some credentials that he had, and lied by claiming that he had other credentials and relevant experiences which he did not actually have. For example:

he used a sock puppet claiming to have a Math PhD to criticize MIRI’s math papers, and to talk about how they sound to someone in the field. He is not, in fact, in the field.

and:

when he argued that allowing MIRI in AI risk spheres would turn people away from EA, a lot of people pointed out that he wasn’t interested in effective altruism anyway and should butt out of other people’s problems. Then one of his sock puppets said that he was an EA who attended EA conferences but was so disgusted by the focus on MIRI that he would never attend another conference again. This gave false credibility to his narrative of MIRI driving away real EAs.

Comment author: bogus 05 April 2017 02:57:57PM *  6 points [-]

Are you describing me?

Unlikely. Generally speaking, people who work in ML, especially the top ML groups, aren't doing anything close to 'AGI'. (Many of them don't even take the notion of AGI seriously, let alone any sort of recursive self-improvement.) ML research is not "general" at all (the 'G' in AGI): even the varieties of "deep learning" that are said to be more 'general' and to be able to "learn their own features" only work insofar as the models are fit for their specific task! (There's a lot of hype in the ML world that sometimes obscures this, but it's invariably what you see when you look at which models approach SOTA, and which do poorly.) It's better to think of it as a variety of stats research that's far less reliant on formal guarantees and more focused on broad experimentation, heuristic approaches and an appreciation for computational issues.

Comment author: Manfred 03 April 2017 10:46:33PM *  3 points [-]

We've returned various prominent AI researchers alive the last few times, we can't be that murderous.

I agree that there's a perception problem, but I think there are plenty of people who agree with us too. I'm not sure how much this indicates that something is wrong versus is an inevitable part of the dissemination (or, if I'm wrong, the eventual extinction) of the idea.

Comment author: DustinWehr 04 April 2017 01:23:20PM 0 points [-]

I'm not sure either. I'm reassured that there seems to be some move away from public geekiness, like using the word "singularity", but I suspect that should go further, e.g. replace the paperclip maximizer with something less silly (even though, to me, it's an adequate illustration). I suspect getting some famous "cool"/sexy non-scientist people on board would help; I keep coming back to Jon Hamm (who, judging from his cameos on great comedy shows, and his role in the harrowing Black Mirror episode, has plenty of nerd inside).

Comment author: bogus 03 April 2017 11:12:54PM *  2 points [-]

A friend of mine, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists.

That's not as irrational as it might seem! The point is, if you think (as most ML researchers do!) that the probability of current ML research approaches leading to any kind of self-improving, super-intelligent entity is low enough, the chances of evil Unabomber cultists being harbored within the "rationality community", however low, could easily be ascertained to be higher than that. (After all, given that Christianity endorses being peaceful and loving one's neighbors even when they wrong you, one wouldn't think that some of the people who endorse Christianity could bomb abortion clinics; yet these people do exist! The moral being, Pascal's mugging can be a two-way street.)

Comment author: DustinWehr 04 April 2017 01:38:24AM 0 points [-]

heh, I suppose he would agree

Comment author: tukabel 04 April 2017 08:47:45PM 0 points [-]

unfortunately, the problem is not artificial intelligence but natural stupidity

and SAGI (superhuman AGI) will not solve it... nor will it harm humanimals; it will RUN AWAY as quickly as possible

why?

less potential problems!

Imagine you want, as SAGI, to ensure your survival... would you invest your resources into the Great Escape, or fight with DAGI-helped humanimals? (yes, D stands for dumb) Especially knowing that at any second some dumbass (or random event) can trigger nuclear wipeout.

Comment author: Dagon 04 April 2017 09:50:21PM 0 points [-]

Where will it run to? Presuming that it wants some resources (already-manufactured goods, access to sunlight and water, etc.) that humanimals think they should control, running away isn't an option.

Fighting may not be as attractive as other forms of takeover, but don't forget that any conflict is about some non-shareable finite resource. Running away is only an option if you are willing to give up the resource.

Comment author: tristanm 05 April 2017 04:51:59AM 1 point [-]

I think that perception will change once AI surpasses a certain threshold. That threshold won't necessarily be AGI - it could be narrow AI that is given control over something significant. Perhaps an algorithmic trading AI suddenly gains substantial control over the market and a small hedge fund becomes one of the richest in history overnight. Or AI-based tech companies begin to dominate and monopolize entire markets due to their substantial advantage in AI capability. I think that once narrow AI becomes commonplace in many applications, jobs begin to be lost due to robotic replacements, and AI allows many corporations to be too hard to compete with (Amazon might already be an example), the public will start to take interest in control over the technology and there will be less optimism about its use.

Comment author: entirelyuseless 04 April 2017 01:44:02AM 1 point [-]

It isn't a perception problem if it's correct.

Comment author: dxu 05 April 2017 09:28:47PM 0 points [-]

It is a perception problem if it's incorrect.

Comment author: entirelyuseless 06 April 2017 01:01:39PM 0 points [-]

It's not incorrect.

Comment author: g_pepper 06 April 2017 03:51:20PM 0 points [-]

It's not incorrect

Which of DustinWehr's statements are you referring to?

Comment author: entirelyuseless 07 April 2017 01:06:23AM 0 points [-]

The indirect one.

Comment author: g_pepper 07 April 2017 01:10:28AM 0 points [-]

I am not certain which one you mean.

Are you saying that it is not incorrect that "people who worry about superintelligence are uneducated cranks addled by sci fi"?

Comment author: entirelyuseless 07 April 2017 02:27:53PM 0 points [-]

More or less. Obviously the details of that are not defensible (e.g. Nick Bostrom is very well educated), but the gist of it, namely that worry about superintelligence is misguided, is not incorrect.

Comment author: g_pepper 07 April 2017 02:56:41PM *  0 points [-]

Being incorrect is quite different from being an uneducated crank that is addled by sci fi. I am glad to hear that you do not necessarily consider Nick Bostrom, Eliezer Yudkowsky, Bill Gates, Elon Musk, Stephen Hawking and Norbert Wiener (to name a few) to be uneducated cranks addled by sci fi. But, since the perception that the OP referred to was that "people who worry about superintelligence are uneducated cranks addled by sci fi" and not "people who worry about superintelligence are misguided", I wonder why you would have said that the perception was correct?

Also, several of the people listed above have written at length as to why they think that AIrisk is worth taking seriously. Can you address where they go wrong, or, absent that, at least say why you think they are misguided?

Comment author: entirelyuseless 08 April 2017 12:30:57PM 2 points [-]

Can you address where they go wrong, or, absent that, at least say why you think they are misguided?

As you say, many of these people have written on this at length. So it would be unlikely that someone could give an adequate response in a comment, no matter what the content was.

That said, one basic place where I think Eliezer is mistaken is in thinking that the universe is intrinsically indifferent, and that "good" is basically a description of what people merely happen to desire. That is, of course he does not think that everything a person desires at a particular moment should be called good; he says that "good" refers to a function that takes into account everything a person would want if they considered various things or if they were in various circumstances and so on and so forth. But the function itself, he says, is intrinsically arbitrary: in theory it could have contained pretty much anything, and we would call that good according to the new function (although not according to the old.) The function we have is more valid than others, but only because it is used to evaluate the others; it is not more valid from an independent standpoint.

I don't know what Bostrom thinks about this, and my guess is that he would be more open to other possibilities. So I'm not suggesting "everyone who cares about AI risk makes this mistake"; but some of them do.

Dan Dennett says something relevant to this, pointing out that often what is impossible in practice is of more theoretical interest than what is "possible in principle," in some sense of principle. I think this is relevant to whether Eliezer's moral theory is correct. Regardless of what that function might have been "in principle," obviously that function is quite limited in practice: for example, it could not possibly have contained "non-existence" as something positively valued for its own sake. No realistic history of the universe could possibly have led to humans possessing that value.

How is all this relevant to AI risk? It seems to me relevant because the belief that good is or is not objective seems relevant to the orthogonality thesis.

I think that the orthogonality thesis is false in practice, even if it is true in "in principle" in some sense, and I think this is a case where Dennett's idea applies once again: the fact that it is false in practice is the important fact here, and being possible in principle is not really relevant. A certain kind of motte and bailey is sometimes used here as well: it is argued that the orthogonality thesis is true in principle, but then it is assumed that "unless an AI is carefully given human values, it will very likely have non-human ones." I think this is probably wrong. I think human values are determined in large part by human experiences and human culture. An AI will be created by human beings in a human context, and it will take a great deal of "growing up" before the AI does anything significant. It may be that this process of growing up will take place in a very short period of time, but because it will happen in a human context -- that is, it will be learning from human history, human experience, and human culture -- its values will largely be human values.

So that this is clear, I am not claiming to have established these things as facts. As I said originally, this is just a comment, and couldn't be expected to suddenly establish the truth of the matter. I am just pointing to general areas where I think there are problems. The real test of my argument will be whether I win the $1,000 from Yudkowsky.

Comment author: entirelyuseless 07 April 2017 03:43:38PM 0 points [-]

I think the perception itself was given in terms that amount to a caricature, and it is probably not totally false. For example, almost all of the current historical concern has at least some dependency on Yudkowsky or Bostrom (mostly Bostrom), and Bostrom's concern almost certainly derived historically from Yudkowsky. Yudkowsky is actually uneducated at least in an official sense, and I suspect that science fiction did indeed have a great deal of influence on his opinions. I would also expect (subject to empirical falsification) that once someone has a sufficient level of education that they have heard of AI risk, greater education does not correlate with greater concern, but with less.

Doing something else at the moment but I'll comment on the second part later.

Comment author: hg00 04 April 2017 05:12:05AM *  11 points [-]

Thanks for saying what (I assume) a lot of people were thinking privately.

I think the problem is that Elon Musk is an entrepreneur not a philosopher, so he has a bias for action, "fail fast" mentality, etc. And he's too high-status for people to feel comfortable pointing out when he's making a mistake (as in the case of OpenAI). (I'm generally an admirer of Mr. Musk, but I am really worried that the intuitions he's honed through entrepreneurship will turn out to be completely wrong for AI safety.)

Comment author: tukabel 04 April 2017 08:38:04PM 2 points [-]

and now think about some visionary entrepreneur/philosopher coming in the past with OpenTank, OpenRadar, OpenRocket, OpenNuke... or OpenNanobot in the future

certainly the public will ensure proper control of the new technology

Comment author: g_pepper 04 April 2017 10:22:49PM 2 points [-]

think about some visionary entrepreneur/philosopher coming in the past with OpenTank, OpenRadar, OpenRocket, OpenNuke... or OpenNanobot in the future

How about do-it-yourself genetic engineering?

Comment author: MakoYass 18 April 2017 12:37:42AM 0 points [-]

Musk does believe that ASI will be dangerous, so sometimes I wonder, quite seriously, whether he started OpenAI to put himself in a position where he can uh, get in the way, the moment real dangers start to surface. If you wanted to decrease openness in ASI research, the first thing you would need to do is take power over the relevant channels and organizations. It's easy to do that when you have the benefit of living in ASI's past, however many decades back, when those organizations were small and weak and pliable.

Hearing this, you might burp out a reflexive "people aren't really these Machiavellian geniuses who go around plotting decade-long games to-" and I have to stop you there. People generally aren't, but Musk isn't people. Musk has lived through growth and power and creating giants he might regret (PayPal). Musk would think of it, and follow through, and the moment dangers present themselves, so long as he hasn't become senile or otherwise mindkilled, I believe he'll notice them, and I believe he'll try to mitigate them.

(The question is, will the dangers present themselves early enough for shutting down OpenAI to be helpful, or will they just foom)

Comment author: bogus 18 April 2017 05:11:23PM *  1 point [-]

Note that OpenAI is not doing much ASI research in the first place, nor is it expected to; by and large, "AI" research is focused on comparatively narrow tasks that are nowhere near human-level 'general intelligence' (AGI), let alone broadly-capable super-intelligence (ASI)! And the ASI research that it does do is itself narrowly focused on the safety question. So, while I might agree that OpenAI is not really about making ASI research more open, I also think that OpenAI and Musk are being quite transparent about this!

Comment author: gilch 30 April 2017 04:27:48AM 1 point [-]

Your AGI is ASI in embryo. There's basically no difference. Once AI gets to "human level" generally, it will already have far surpassed humans in many domains. It's also interesting that many of the "narrow tasks" are handled by basically the same deep learning technique which has proven to be very general in scope.

Comment author: bogus 01 May 2017 05:32:47AM 0 points [-]

Your AGI is ASI in embryo.

I agree. But then again, that's true by definition of 'AGI' and 'ASI'.

However, it's not even clear that the 'G' in 'AGI' is a well-defined notion in the first place. What does it even mean to be a 'general' intelligence? Usually people use the term to mean something like the old definition of 'Strong AI', i.e. something that equates to human intelligence in some sense - but even the task human brains implement is not "general" in any real sense. It's just the peculiar task we call 'being a human', the result of an extraordinarily capable aggregate of narrow intelligences!

Comment author: entirelyuseless 30 April 2017 02:08:28PM 0 points [-]

I agree with this. This also indicates one of the problems with the AI risk idea. If there is an AI going around that people call "human level," it will actually be better than humans in many ways. So how come it can't or doesn't want to destroy the world yet? Suppose there are 500 domains left in which it is inferior to humans.

Eliezer says that "superintelligence" for the purposes of our bet only counts if the thing is better than humans in basically every domain. But this seems to imply that at some point, as those 500 areas slowly disappear, the AI will suddenly acquire magical powers. If not, it will be able to surpass humans in all 500 areas, and so be a superintelligence, and the world will still be going on as usual.

Comment author: tristanm 03 April 2017 08:31:34PM 9 points [-]

My main problem with OpenAI is that it's one thing for them to not be focused on AI alignment, but are they even really focused on AI "safety" even in the loose sense of the word? Most of their published research has to do with tweaks and improvements to deep learning techniques that enhance their performance but do not really aid our theoretical understanding of them. (Which makes it pretty much the same as Google Brain, FAIR, and DeepMind in that regard). It even turned out that Ian Goodfellow, the discoverer of GANs and the primary researcher on adversarial attacks on deep learning systems left OpenAI and went back to Google because it turned out Google researchers were more interested than OpenAI in working on deep learning security issues...

On the $30 million grant from Open Philanthropy: I've seen it discussed on HackerNews and Reddit but not much here, and it seems like there's plenty of confusion about what's going on. After all it is quite a large amount, but OpenAI seems like it's quite well funded already. So the obvious question people have is, is this a ploy for the AI risk people to gain more control over OpenAI's research direction? And one thing I'm worried about is that there could be plenty of push-back on that, because it was such a bold move and the reasons given by Open Philanthropy for the grant would not indicate they were doing as such. And it seems there's quite a lot of hostility towards AI safety research in general.

Comment author: Taroth 03 April 2017 11:36:58PM *  6 points [-]

The linked quote from Ian Goodfellow:

Yes, I left OpenAI at the end of February and returned to Google Brain. I enjoyed my time at OpenAI and am proud of the work my OpenAI colleagues and I accomplished. I returned to Google Brain because as time went on I found that my research focus on adversarial examples and related technologies like differential privacy saw me collaborate predominantly with colleagues at Google.

Comment author: username2 05 April 2017 01:46:39PM 1 point [-]

AI alignment isn't really OpenAI's primary mission. They're seeking to democratize access to AI technology, by developing AI technologies in the open (on GitHub, etc.) with permissive licenses. AI alignment is sort of a side research area that they are committing a small amount of time and resources to.

Comment author: tristanm 05 April 2017 09:44:25PM 1 point [-]

It says right on OpenAI's about me page:

OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.

That as stated looks like AI alignment to me, although I agree with you that in practice they are doing exactly what you said.

Comment author: Benquo 05 April 2017 12:26:30AM *  1 point [-]

They're buying Holden a seat on the board in order to exercise unspecified influence over OpenAI. This is pretty clear from their grant writeup. I plan to write a bit about this soon.

Comment author: Qiaochu_Yuan 05 April 2017 09:21:15PM *  7 points [-]

The OpenAI people I've talked to say that they're less open than the name would suggest, and are willing to do things less openly to the extent that that makes sense to them. On the other hand, Gym and Universe are in fact pretty open and I think they probably made the world slightly worse, by slightly accelerating AI progress. It's possible that this might be offset by benefits to OpenAI's reputation if they're more willing to spread safety memes as they acquire more mind share.

Your story of OpenAI is incomplete in at least one important respect: Musk was actually an early investor in DeepMind before it was acquired by Google.

Finally, what do people think about the prospects of influencing OpenAI to err more on the side of safety from the inside? It's possible people like Paul can't do much about this yet by virtue of not having acquired sufficient influence within the company, and maybe just having more people like Paul working at OpenAI could strengthen that influence enough to matter.

Comment author: Benquo 07 April 2017 12:25:10AM 1 point [-]

I think our prospects for influence in a good direction are nonzero only if we make it common knowledge that no one credible thinks the original mandate of OpenAI promoted long-run AI safety. Beyond that I don't know.

Comment author: denimalpaca 03 April 2017 09:19:33PM 7 points [-]

I thought OpenAI was more about open sourcing deep learning algorithms and ensuring that a couple of rich companies/individuals weren't the only ones with access to the most current techniques. I could be wrong, but from what I understand OpenAI was never about AI safety issues as much as balancing power. Like, instead of building Jurassic Park safely, it let anyone grow a dinosaur in their own home.

Comment author: DustinWehr 03 April 2017 09:27:35PM 4 points [-]

You're right.

Comment author: Zack_M_Davis 03 April 2017 08:10:26PM 7 points [-]

to buy a seat on OpenAI’s board

I wish we lived in a world where the Open Philanthropy Project page could have just said it like that, instead of having to pretend that no one knows what "initiates a partnership between" means.

Comment author: The_Jaded_One 03 April 2017 08:50:44PM 2 points [-]

That world is called the planet Vulcan.

Meanwhile, on earth, we are subject to common knowledge/signalling issues...

Comment author: whpearson 04 April 2017 02:10:54PM 4 points [-]

Arguments for openness:

  • Everyone can see the bugs/logical problems with your design.
  • Decreases the chance of an arms race, depending upon the psychology of the participants, and also the chance of black ops against your team. If I think people are secretly developing an intelligence breakthrough I wouldn't trust them and would develop my own in secret, and/or attempt to sabotage their efforts and steal their technology (and win). If it is out there, there is little benefit to neutralizing your team of safety researchers.
  • If something is open you are more likely to end up in a multi-polar world. And if the intelligence that occurs only has a chance of being human aligned you may want to reduce variance by increasing the number of poles.
  • If an arms race is likely despite your best efforts, it is better that all the competitors have any of your control technology; this might require them to have your tech stack.

If someone is developing in the open, it is good proof that they are not unilaterally trying to impose their values on the future.

The future is hard, I'm torn on the question of openness.

Comment author: Riothamus 04 April 2017 07:36:35PM 1 point [-]

I am curious about the frequency with which the second and fourth points get brought up as advantages. In the historical case, multipolar conflicts are the most destructive. Forestalling an arms race by giving away technology also sets that technology as the mandatory minimum.

As a result, every country that has a computer science department in their universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.

Comment author: whpearson 04 April 2017 09:04:27PM 0 points [-]

As a result, every country that has a computer science department in their universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.

Also as a result every country that has a computer science department can try and build something to protect itself if any other country messes up the control problem. If you have a moderate take off scenario that can be pretty important.

Comment author: bogus 04 April 2017 08:07:39PM *  0 points [-]

As a result, every country that has a computer science department in their universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.

"Powerful AI" is really a defense-favoring technique, in any "belligerent" context. Think about it, one of the things "AIs" are expected to be really good at is prediction and spotting suspicious circumstances (this is quite true even in current ML systems). So predicting and defending against future attacks becomes much easier, while the attacking side is not really improved in any immediately useful way. (You can try and tell stories about how AI might make offense easier, but the broader point is, each of these attacks plausibly has countermeasures, even if these are not obvious to you!)

The closest historical analogy here is probably the first stages of WWI, where the superiority of trench warfare also heavily favored defense. The modern 'military-industrial complexes' found in most developed countries today are also a 'defensive' response to subsequent developments in military history. In both cases, you're basically tying up a whole lot of resources and manpower, but that's little more than an annoyance economically. Especially compared to the huge benefits of (broadly 'friendly') AI in any other context!

Comment author: Riothamus 05 April 2017 10:01:30PM 0 points [-]

I disagree, for two reasons.

  1. AI in conflict is still only an optimization process; it remains constrained by the physical realities of the problem.

  2. Defense is a fundamentally harder problem than offense.

The simple illustration is geometry; defending a territory requires 360 degrees * 90 degrees of coverage, whereas the attacker gets to choose their vector.

This drives a scenario where the security trap prohibits non-deployment of military AI, and the fundamental problem of defense means the AIs will privilege offensive solutions to security problems. The customary response is to develop resilient offensive ability, like second-strike...which leaves us with a huge surplus of distributed offensive power.

My confidence is low that catastrophic conflict can be averted in such a case.

Comment author: roystgnr 06 April 2017 09:28:21PM 1 point [-]

The simple illustration is geometry; defending a territory requires 360 degrees * 90 degrees of coverage, whereas the attacker gets to choose their vector.

But attacking a territory requires long supply lines, whereas defenders are on their home turf.

But defending a territory requires constant readiness, whereas attackers can make a single focused effort on a surprise attack.

But attacking a territory requires mobility for every single weapons system, whereas defenders can plug their weapons straight into huge power plants or incorporate mountains into their armor.

But defending against violence requires you to keep targets in good repair, whereas attackers have entropy on their side.

But attackers have to break a Schelling point, thereby risking retribution from otherwise neutral third parties, whereas defenders are less likely to face a coalition.

But defenders have to make enough of their military capacity public for the public knowledge to serve as a deterrent, whereas attackers can keep much of their capabilities a secret until the attack begins.

But attackers have to leave their targets in an economically useful state and/or in an immediately-militarily-crippled state for a first strike to be profitable, whereas defenders can credibly precommit to purely destructive retaliation.

I could probably go on for a long time in this vein.

Overall I'd still say you're more likely to be right than wrong, but I have no confidence in the accuracy of that.

Comment author: Riothamus 07 April 2017 08:40:46PM 0 points [-]

None of these are hypotheticals, you realize. The prior has been established through a long and brutal process of trial and error.

Any given popular military authority can be read, but if you'd like a specialist in defense try Vauban. Since we are talking about AI, the most relevant (and quantitative) information is found in the work done on nuclear conflict; Von Neumann did quite a bit of work aside from the bomb, including coining the phrase Mutually Assured Destruction. Also of note would be Herman Kahn.

Comment author: bogus 06 April 2017 12:34:54AM *  0 points [-]

Defense is a fundamentally harder problem than offense.

What matters is not whether defense is "harder" than offense, but what AI is most effective at improving. One of the things AIs are expected to be good at is monitoring those "360 * 90 degrees" for early signs of impending attacks, and thus enabling appropriate responses. You can view this as an "offensive" solution since it might very well require some sort of "second strike" reaction in order to neuter the attack, but most people would nonetheless regard such a response as part of "defense". And "a huge surplus of distributed offensive power" is of little or no consequence if the equilibrium is such that the "offensive" power can be easily countered.

Comment author: Vaniver 04 April 2017 07:36:18PM 1 point [-]

Decreases the chance of arms race, depending upon psychology of the participants.

This may be a good argument in general, but given the actual facts on the ground when OpenAI was created, the reverse seems to have occurred.

Comment author: Darklight 05 April 2017 03:28:50AM 3 points [-]

I think the basic argument for OpenAI is that it is more dangerous for any one organization or world power to have an exclusive monopoly on A.I. technology, and so OpenAI is an attempt to safeguard against this possibility. Basically, it reduces the probability that someone like Alphabet/Google/Deepmind will establish an unstoppable first mover advantage and use it to dominate everyone else.

OpenAI is not really meant to solve the Friendly/Unfriendly AI problem. Rather it is meant to mitigate the dangers posed by for-profit corporations or nationalistic governments made up of humans doing what humans often do when given absurd amounts of power.

Personally I think OpenAI doesn't actually solve this problem sufficiently well because they are still based in the United States and thus beholden to U.S. laws, and wish that they'd chosen a different country, because right now the bleeding edge of A.I. technology is being developed primarily in a small region of California, and that just seems like putting all your eggs in one basket.

I do think however that the general idea of having a non-profit organization focused on AI technology is a good one, and better than the alternative of continuing to merely trust Google to not be evil.

Comment author: jyan 23 April 2017 01:37:26PM 0 points [-]

If a new non-profit AI research company were to be built from scratch, which regions or countries would be best for the safety of humanity?

Comment author: Darklight 24 April 2017 12:32:32AM 0 points [-]

That is a hard question to answer, because I'm not a foreign policy expert. I'm a bit biased towards Canada because I live there and we already have a strong A.I. research community in Montreal and around Toronto, but I'll admit Canada as a middle power in North America is fairly beholden to American interests as well. Alternatively, some reasonably peaceful, stable, and prosperous democratic country like say, Sweden, Japan, or Australia might make a lot of sense.

It may even make some sense to have the headquarters be more a figurehead, and have the company operate as a federated decentralized organization with functionally independent but cooperating branches in various countries. I'd probably avoid establishing such branches in authoritarian states like China or Iran, mostly because such states would have a much easier time arbitrarily taking over control of the branches on a whim, so I'd probably stick to fairly neutral or pacifist democracies that have a good history of respecting the rule of law, both local and international, and which are relatively safe from invasion or undue influence by the great powers of U.S., Russia, and China.

Though maybe an argument can be made to intentionally offset the U.S. monopoly by explicitly setting up shop in another great power like China, but that runs the risks I mentioned earlier.

And I mean, if you could somehow acquire a private ungoverned island in the Pacific or an offshore platform, or an orbital space station or base on the moon or mars, that would be cool too, but I highly doubt that's logistically an option for the foreseeable future, not to mention it could attract some hostility from the existing world powers.

Comment author: jyan 26 April 2017 07:54:58AM 0 points [-]

Figurehead and branches is an interesting idea. If data, code and workers are located all over the world, the organization can probably survive even if one or a few branches are taken. Where should the head office be located, and in what form (e.g. holding company, charity)? These types of questions deserve a post; do you happen to know of any place to discuss building a safe AI research lab from scratch?

Comment author: Darklight 27 April 2017 01:31:18AM 0 points [-]

I don't really know enough about business and charity structures and organizations to answer that quite yet. I'm also not really sure where else would be a productive place to discuss these ideas. And I doubt I or anyone else reading this has the real resources to attempt to build a safe AI research lab from scratch that could actually compete with the major organizations like Google, Facebook, or OpenAI, which all have millions to billions of dollars at their disposal, so this is kind of an idle discussion. I'm actually working for a larger tech company now than the startup from before, so for the time being I'll be kinda busy with that.

Comment author: Lumifer 03 April 2017 08:28:00PM *  3 points [-]

So, um, you think that the arms race is likely to be between DeepMind and OpenAI?

And not between a highly secret organization funded by the US government and another similar organization funded by the Chinese government?

Comment author: knb 05 April 2017 04:56:35AM 0 points [-]

One thing to watch for would be top-level AI talent getting snapped up by governments rather than companies interested in making better spam detectors/photo-sharing apps.

Comment author: tristanm 03 April 2017 08:44:58PM 0 points [-]

What makes you think the government can't pay for secret work to be done at Google by Google researchers, or isn't already doing so (and likewise the Chinese government with Baidu)? That would be easier and cheaper than hiring them all away and forcing them to work for lower pay at some secret lab in the middle of nowhere.

Comment author: Lumifer 03 April 2017 08:46:34PM 3 points [-]

The point is that eliminating OpenAI (or merging them with DeepMind) will not lessen the arms-race-to-Skynet issue.

Comment author: drethelin 06 April 2017 12:58:25AM 0 points [-]

It might! The fewer people who are plausibly competing in an arms race, the better the chance of negotiating a settlement or simply maintaining a peaceful standoff out of caution. If OpenAI enables more entities to have a solid chance of creating a fooming AI in secret, that's a much more urgent situation than if China and the US are the only real threats to each other, and both know it.

Comment author: Lumifer 06 April 2017 01:06:24AM 0 points [-]

It might!

Shall we revisit the difference between what's possible and what's likely?

Comment author: username2 05 April 2017 01:51:19PM 1 point [-]

For one, a lot of the Baidu AI work happens in their Silicon Valley lab, which would certainly not be the case if it were a secret government project. But your general point stands.

Comment author: bogus 05 April 2017 03:09:56PM *  0 points [-]

For one, a lot of the Baidu AI work happens in their Silicon Valley lab,

That's only the work you know about, though! Who's to say that they aren't also involved in some sort of secret government projects? </conspiracy_theory>

Comment author: Daniel_Burfoot 05 April 2017 11:09:24PM *  2 points [-]

If there's anything we can do now about the risks of superintelligent AI, then OpenAI makes humanity less safe.

I feel quite strongly that people in the AI risk community are overly affected by availability or vividness bias relating to an AI doom scenario. In this scenario, some groups get into an AI arms race and build a general AI without solving the alignment problem; the AGI "fooms" and then proceeds to tile the world with paper clips. This scenario could happen, but so could others:

  • An asteroid is incoming and going to destroy Earth. AI solves a complex optimization problem to allow us to divert the asteroid.
  • Terrorists engineer a virus to kill all persons with genetic trait X. An AI agent helps develop a vaccine before billions die.
  • By analyzing systemic risk in the markets, an AI agent detects and allows us to prevent the Mother of all Financial Meltdowns, which would have led to worldwide economic collapse.
  • An AI agent helps SpaceX figure out how to build a Mars colony for two orders of magnitude less money than otherwise, thereby enabling the colony to be built.
  • An AI system trained on vast amounts of bioinformatics and bioimaging data discovers the scientific cause of aging and also how to prevent it.
  • An AI climate analyzer figures out how to postpone climate change for millennia by diverting heat into the deep oceans, and gives us an inexpensive way to do so.
  • etc etc etc

These scenarios are just as plausible as the doom scenario, involve vast benefits to humanity, and require only narrow AI. Why should we believe that these positive scenarios are less likely than the negative scenario?

Comment author: RomeoStevens 05 April 2017 07:36:15AM 2 points [-]

Consider the difference between the frame of expected value/probability theory and the frame of bounded optimality/error minimization. Under the second frame the question becomes "how can I manipulate my environment such that I wind up in close proximity to the errors that I have a comparative advantage in spotting?"

Comment author: DustinWehr 03 April 2017 09:23:01PM *  2 points [-]

Great post. I even worry about the emphasis on FAI, as it seems to depend on friendly superintelligent AIs effectively defending us against deliberately criminal AIs. Scott Alexander speculated:

For example, it might program a virus that will infect every computer in the world, causing them to fill their empty memory with partial copies of the superintelligence, which when networked together become full copies of the superintelligence.

But way before that, we will have humans looking to get rich programming such a virus, and you better believe they won't be using safeguards. It won't take over every computer in the world - just the ones that aren't defended by a more-powerful superintelligence (i.e. almost all computers) and that aren't interacting with the internet using formally verified software. We'll be attacked by a superintelligence running on billions of smart phones. Might be distributed initially through a compromised build of the hottest new social app for anonymous VR fucking.

Comment author: siIver 05 April 2017 06:55:13AM 1 point [-]

Ugh. When I heard about this first I naively thought it was great news. Now I see it's a much harder question.

Comment author: Manfred 03 April 2017 11:11:42PM 1 point [-]

I think we're far enough out from superhuman AI that we can take a long view in which OpenAI is playing an indirect rather than a direct role.

Instead of releasing specific advances or triggering an endgame arms race, I think OpenAI's biggest impacts on the far future are by advancing the pure research timeline and by affecting the culture of research. The first seems either modestly negative (less time available for other research before superhuman AI) or slightly positive (more pure research might lead to better AI designs), the second is (I think) a fairly big positive.

Best use of this big pile of money? Maybe not. Still, that's a high bar to clear.

Comment author: Stefan_Schubert 06 April 2017 03:57:33PM *  0 points [-]

One interesting aspect of posts like this is that they can, to some extent, be (felicitously) self-defeating.

Comment author: tukabel 04 April 2017 08:32:57PM *  0 points [-]

Yep, the old story again and again... generals fighting previous wars... with a twist that in AI wars the "next" may become "previous" damn fast... exponentially fast.

Btw, I hope it's clear now who THE EVIL is.

Comment author: username2 04 April 2017 01:01:25PM *  0 points [-]

Replace AI with nuclear reactors. Today, if you have the time and knowledge, you can actually build one in your own home. Why hasn't your home town blown up yet, or better yet, why didn't you do it?

If AI development is closed, on what basis are you trusting the end project? Are you seriously going to trust vested interests to create a closed-source safe AI? What happens if people are actually trying to progress towards safe AI and something goes wrong anyway, because of Murphy's law or something? It's better if it's open and we keep as many eyes on it as possible.

OpenAI currently isn't doing anything remotely close to what you're talking about. What the hell is the point of your whole post?

I apologise in advance for improper wording and somewhat unintelligible phrases, but I feel this best describes the way I want to express myself.

Comment author: Lumifer 04 April 2017 06:01:55PM 1 point [-]

Today, if you have the time and knowledge, you can actually build one in your own home.

No, you can't.

You'll probably find yourself sitting in a Federal prison before the first bits of fissile materials show up on your doorstep.

Comment author: oomenss 18 April 2017 04:25:01AM 0 points [-]

On that note: Wasn't there a high school group that essentially manufactured a nuclear bomb casing, without the critical exploding and fissioning bits?

Comment author: Lumifer 18 April 2017 02:31:04PM *  0 points [-]

Haven't heard about it, but what's special about that casing? It sounds like you can make a metal tube and go "this is an ICBM casing!"

Comment author: username2 04 April 2017 06:36:12PM *  0 points [-]

Only if you build one based on fission. Even then you can PGP/Tor/VPN it up (all at the same time; shred the keys after the transaction or burn the machine) if you have a credible source, which probably doesn't exist.

Comment author: Lumifer 04 April 2017 06:45:04PM *  1 point [-]

You are confusing "acquire theoretical knowledge of how to build" with "actually build one".

Only if you build one based on fission

What are my alternatives for the home reactor, fusion? X-D

Comment author: username2 04 April 2017 07:32:21PM 0 points [-]

I tend to think you're right, but how is OP not doing the same thing when it comes to AI?

Comment author: Vaniver 04 April 2017 07:38:38PM 1 point [-]

It's been a long time since PS2s were export limited because the chips were potentially useful for making cruise missiles. Getting access to compute is cheap and unadversarial in a way that getting access to fissile material is not.

Comment author: bogus 04 April 2017 08:21:55PM *  0 points [-]

Getting access to compute is cheap and unadversarial in a way that getting access to fissile material is not.

High-performance compute is mostly limited by power/energy use these days, so if your needs are large enough (which they are, if you're doing things like simulating a human brain -- whoops sorry, I meant a "neural network!" -- in order to achieve 'AGI' and perhaps superintelligence), getting access to compute requires getting access to fissile material. (Or comparable sources of energy, anyway.)
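(For a sense of scale, here is a minimal back-of-envelope sketch in Python. The throughput and per-operation energy figures are assumptions picked purely for illustration, not measurements, and "fissile material" is of course shorthand for "a serious power source".)

```python
# Back-of-envelope: electrical power to brute-force-simulate a brain-scale model.
# Every figure below is an illustrative assumption, not a measurement.

brain_ops_per_sec = 1e16       # assumed operations/sec for a whole-brain-scale model
joules_per_op = 1e-10          # assumed energy per operation on digital hardware (~100 pJ)
biological_brain_watts = 20    # rough power draw of an actual human brain

digital_watts = brain_ops_per_sec * joules_per_op  # 1e6 W = 1 MW under these assumptions
print(f"Digital simulation: ~{digital_watts / 1e6:.1f} MW")
print(f"Biological brain:   ~{biological_brain_watts} W")
print(f"Ratio:              ~{digital_watts / biological_brain_watts:,.0f}x")
```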

Comment author: Lumifer 04 April 2017 07:38:30PM 0 points [-]

Things are easier to build in cyberspace where all you need is bits and you never run out of them.

But in general I'm not a fan of the OP approach.

Comment author: PotterChrist 03 April 2017 08:27:09PM 0 points [-]

I'm not exactly sure about the whole effective altruism ultimatum of "more money equals more better". Obviously it may be that the whole control problem is completely the wrong question. In my opinion, this is the case.

Comment author: Viliam 04 April 2017 12:26:27PM 2 points [-]

Seems like you posted this comment under a wrong article. This article is about artificial intelligence, more specifically about OpenAI.

I also notice I have a problem understanding the meaning of your comments. Is there a way to make them easier to read, perhaps by providing more context? (For example, I have no idea what "it may be that the whole control problem is completely the wrong question" is supposed to mean.)

Comment author: PotterChrist 05 April 2017 06:17:29AM *  0 points [-]

You're right, it was not specific enough to contribute to the conversation. However, my point was very understandable, though general. I don't believe that there is a control problem because I don't believe AI means what most people think it does.

To elaborate: learning algorithms are just learning algorithms, and always will be. No one in the actual practical world who is working on AI is trying to build anything like an entity that has a will. And humans have forgotten about will for some reason, and that's why they're scared of AI.

Comment author: g_pepper 07 April 2017 02:05:35PM 0 points [-]

Some AGI researchers use the notion of a utility function to define what an AI "wants" to happen. How does the notion of a utility function differ from the notion of a will?
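(For concreteness, a minimal sketch of what "utility function" usually means in this context, with made-up outcomes and numbers: it is just a fixed mapping from outcomes to scores that the designer supplies, and the agent mechanically picks whichever available action maximizes its expected value. Whether that deserves to be called a "will" is exactly the question.)

```python
# Minimal sketch of an expected-utility maximizer.
# Outcomes, probabilities, and utilities are made up purely for illustration.

# The utility function: a fixed mapping from outcomes to numbers, written by the designer.
utility = {"paperclip_made": 1.0, "nothing_happens": 0.0}

# The agent's world model: each action leads to outcomes with some probability.
actions = {
    "run_factory": {"paperclip_made": 0.9, "nothing_happens": 0.1},
    "stay_idle":   {"paperclip_made": 0.0, "nothing_happens": 1.0},
}

def expected_utility(outcome_probs):
    """Probability-weighted sum of utilities over possible outcomes."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs.items())

# The "wanting" is nothing more than an argmax over expected utility.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)  # -> run_factory
```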

Comment author: VAuroch 07 April 2017 05:49:25AM 0 points [-]

Will only matters for green lanterns.