"I'm increasingly inclined to thing there should be some regulatory oversight, maybe at the national and international level just to make sure that we don't do something very foolish."

http://www.cnet.com/news/elon-musk-we-are-summoning-the-demon-with-artificial-intelligence/#ftag=CAD590a51e

Large tech companies would capture the oversight agency and use it to hinder their potential competitors.

You say that like it's a bad thing :-P

It's wildly premature. We wouldn't have wanted to apply today's commercial aircraft standards to the Wright Brothers.

We might have wanted to apply today's anti-proliferation standards to early nuclear weapons (assuming this would have been possible).

Dale Carrico mocks Musk:

http://amormundi.blogspot.com/2014/10/summoning-demon-robot-cultist-elon-musk.html

Of course, Elon Musk has built real companies which make real stuff. Even The Atlantic magazine admits that:

http://www.theatlantic.com/national/archive/2014/10/what-it-took-for-spacex-to-become-a-serious-space-company/381724/?single_page=true

Musk's accomplishments don't necessarily make him an expert on the demonology of AIs. But his track record suggests that he has a better-informed and better-organized way of thinking about the potential of technology than Carrico does.

[-]knb90

Why even link to such a stupid and insubstantial article? (I'm referring to the first one of course).

Dale's blog apparently doesn't have many readers, judging by how few comments his posts get. But I find it interesting because he has uncritically bought into the Enlightenment's wishful thinking about democracy, equality, human fungibility and so forth, while dismissing "robot cultism" as a competing utopianism built on fantasies he doesn't share.

[-]Sysice100

I find it very useful to have posts like these as an emotional counter to the echo chamber effect. Obviously this has little or no effect on the average LW reader's factual standpoint, but it reminds us both of the heuristic absurdity of our ideas and of how much we have left to accomplish.

I don't think LW's ideas are heuristically absurd. If you look at the comments on the CNET article, people seem pretty evenly divided for and against.

(Criticism is still very valuable though.)

[-][anonymous]-10

I, for one, love that guy's blog.

[-]knb10

Because you're a connoisseur of insipid name-calling and delirious political grandstanding on non-political issues?

[-][anonymous]-10

More because I think his assessments of the effects and motivations of libertarianism-in-practice, and of the ideological and mythological underpinnings of singularitarianism, are more often than not spot on, and the name-calling based on that is just funny. Other posts on the blog I tend not to notice.

Dale's blog may have low readership, but it's worth noting that H+ magazine, a prominent transhumanist news source which has reported favorably on Superintelligence in the past, ran a very similar story recently.

It takes years of study to write as poorly as he does.

Musk's accomplishments don't necessarily make him an expert on the demonology of AIs. But his track record suggests that he has a better-informed and better-organized way of thinking about the potential of technology than Carrico does.

Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks? If I did that, then I would end up believing that I was living in a simulation, in a mathematical universe, and that within my lifetime, thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars. Or something along these lines...

The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the sequences has a hard time taking seriously.

Thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars

You’re confusing people’s goals with their expectations.

The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the sequences has a hard time taking seriously.

Have you read Basic AI Drives? I remember reading it when it got posted on boingboing.net way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me as starkly true. Even if MIRI turned out to be a cynical cult, I wouldn’t take this to be evidence against the claims in that paper. Do you have some convincing counterarguments?

Have you read Basic AI Drives? I remember reading it when it got posted on boingboing.net way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me as starkly true.

I don't know what you are trying to communicate here. Do you think that mere arguments, pertaining to something that not even the relevant experts understand at all, entitle someone to demonize a whole field?

The problem is that armchair theorizing can at best yield very weak decision-relevant evidence. You don't just tell the general public that certain vaccines cause autism, that genetically modified food is dangerous, or scare them about nuclear power...you don't do that if all you've got are arguments that you personally find convincing. What you do is hard empirical science, in order to verify your hunches and eventually reach a consensus among experts that your fears are warranted.

I am aware of many of the tactics that the sequences employ to dismiss the above paragraph. Tactics such as reversing the burden of proof, conjecturing arbitrary amounts of expected utility, etc. All of these tactics are suspect.

Do you have some convincing counterarguments?

Yes, and they are convincing enough to me that I dismiss the claim that with artificial intelligence we are summoning the demon.

Mostly the arguments made by AI risk advocates suffer from being detached from an actual grounding in reality. You can come up with arguments that make sense in the context of your hypothetical model of the world, in which all the implicit assumptions you make turn out to be true, but which might actually be irrelevant in the real world. AI drives are an example here. If you conjecture the sudden invention of an expected utility maximizer that quickly makes huge jumps in capability, then AI drives are much more of a concern than, for example, in the context of the gradual development of tools that become more autonomous due to their increased ability to understand and do what humans mean.
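(For concreteness: "expected utility maximizer" here just means an agent that scores each candidate action by the probability-weighted utility of its outcomes and picks the highest-scoring one. Below is a minimal toy sketch of that idea; the actions, outcome probabilities, and utilities are made up purely for illustration and assume nothing about how a real AI would be built.)

```python
# Toy illustration of an "expected utility maximizer": score each action by
# sum over outcomes of P(outcome | action) * U(outcome), then pick the best.
# The actions, probabilities, and utilities below are made-up placeholders.

def expected_utility(action, outcome_probs, utility):
    return sum(p * utility[outcome] for outcome, p in outcome_probs[action].items())

def choose_action(actions, outcome_probs, utility):
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

outcome_probs = {
    "cautious": {"small_gain": 0.9, "loss": 0.1},   # EU = 0.9*1.0 + 0.1*(-2.0) = 0.7
    "reckless": {"large_gain": 0.5, "loss": 0.5},   # EU = 0.5*10.0 + 0.5*(-2.0) = 4.0
}
utility = {"small_gain": 1.0, "large_gain": 10.0, "loss": -2.0}

print(choose_action(["cautious", "reckless"], outcome_probs, utility))  # -> "reckless"
```

The AI-drives argument concerns what agents of this general shape would do if scaled up; the disagreement in this thread is over whether real systems will ever actually look like this.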

You criticize mere arguments and then respond with some of your own. Of all the non-normal LessWrong memes, the orthogonality thesis doesn’t strike me as particularly out there.

The basic arithmetic of AI risk is: [orthogonality thesis] + [agents more powerful than us seem feasible with near-future technology] + [the large space of possible goals] = [we have to be very careful building the first AIs].

These seem like conservative conclusions derived from conservative assumptions. You don’t even have to buy recursive self improvement at all.

Ironically, I think the blog you posted was an example of rank scientism. I mean, sure, induction is great. But by his reasoning, we really shouldn’t worry about global warming until we’ve tested our models on several identical copies of Earth. He thinks that if it’s not physics, then it’s tarot.

I agree with many of your criticisms of MIRI. It was (as far as I can tell) extremely poorly run for a very long time, but don’t go throwing out the apocalypse with the bathwater. Isn’t it possible both that MIRI is a dishonest cult and that AI is extremely likely to kill us all?

I feel like citing Malthus as striking you as starkly true is a poor argument.

Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks?

Yes, assuming we're speaking about their actual beliefs, and not whatever mockery you make of them.

If I did that then I would end up believing that I was living in a simulation, in a mathematical universe, and that within my lifetime, thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars. Or something along these lines.

I understand you've said your occupations have been "road builder, baker and gardener". As long as we're playing the status game, I think I'll trust Elon Musk and Stephen Hawking to have a better epistemic understanding of reality with regard to cosmology or the far possibilities of technology than your average road builder, baker or gardener does.

You're answering mockery with an ad hominem, for which there is no need. Refuting something just because it sounds strange is like "checkmate in 1" for the opponent. By going personal, it's like snatching rhetorical defeat from the jaws of victory. It makes you look like you have no strong argument when in fact you do. It's even contained in the ad hominem ("good understanding" etc.), but by making the matter personal you're invalidating it.

Also, I very much doubt XiXiDu belongs in the reference class of "average road builder, baker or gardener", just as you don't belong in the "average Greek" reference class. I know you guys are strongly at odds, but do you think the average road builder uses "epistemically speaking" in their common parlance? Proof by active vocabulary.

Yes, assuming we're speaking about their actual beliefs, and not whatever mockery you make of them.

Or alternatively the mockery created by dumbing their beliefs down to a level where a reporter can understand them enough to write about them.

[-]satt10

Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks? If I did that, then I would end up believing that I was living in a simulation, in a mathematical universe, [...]

Do "all those who have recently voiced their worries about AI risks" actually believe we live in a simulation in a mathematical universe? ("Or something along these lines..."?)

Do "all those who have recently voiced their worries about AI risks" actually believe we live in a simulation in a mathematical universe? ("Or something along these lines..."?)

Although I don't know enough about Stuart Russell to be sure, he seems rather down to earth. Shane Legg also seems reasonable. So does Laurent Orseau. With the caveat that these people also seem much less extreme in their views on AI risks.

I certainly do not want to discourage researchers from being cautious about AI. But what currently happens seems to be the formation of a loose movement of people who reinforce their extreme beliefs about AI by mutual reassurance.

There are whole books now about this topic. What's missing are the empirical or mathematical foundations. It just consists of non-rigorous arguments that are at best internally consistent.

So even if we were only talking about sane domain experts, if they solely engage in unfalsifiable philosophical musings then the whole endeavour is suspect. And currently I don't see them making any predictions that are less vague and more useful than the second coming of Jesus Christ. There will be an intelligence explosion by a singleton with a handful of known characteristics revealed to us by Omohundro and repeated by Bostrom. That's not enough!

[-]satt20

I don't understand how that answers my specific question. Your system 1 may have done a switcheroo on you.

I'm pretty sure that you can't give the sequences credit for all of that. Most people here were already some breed of transhumanists, futurists, or singularitarians before they found LessWrong and read the sequences, and were probably already interested in things like life extension, space travel and colonization, and so on.

Supposing you did want to regulate AI research, how could you tell whether a program was getting close enough to AI to be dangerous?

[-][anonymous]20

One idea for a first pass could be: suppose you had a computer with 1000 times the computing power of the best current supercomputer. Would running your algorithm on that machine be dangerous on its own?

For example, I think even with 1000x computing power the deep-learning-type approach would be OK; it would just give you really good image/voice/action recognizers. On the other hand, consider DeepMind's general game-playing program, which plays a variety of simple video games near-optimally, including exploiting bugs. A system like this at 1000x power, given decent models of parts of the world and robotics, may be hard to contain. So in summary, I would say a panel of experts rating the danger of the program running with 1000x computing power would be an OK first pass.

I know the architecture of DeepMind's system (it's reinforcement learning + deep learning, basically) and can guarantee you that 1000x computing power would have a hard time getting you to NES games, let alone anything dangerous.
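(For readers who want to see what "reinforcement learning + deep learning" means concretely, here is a minimal illustrative sketch of a DQN-style value-learning update in PyTorch. The network sizes and hyperparameters are placeholders chosen for brevity, not a description of DeepMind's actual system.)

```python
# Minimal sketch of the "reinforcement learning + deep learning" pattern:
# a small neural network estimates action values, and a temporal-difference
# update pulls Q(s, a) toward r + gamma * max_a' Q(s', a').
# Sizes and hyperparameters are illustrative placeholders only.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps an observation vector to one Q-value per discrete action."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def td_update(q_net, optimizer, obs, actions, rewards, next_obs, done, gamma=0.99):
    """One Q-learning step on a batch.

    obs, next_obs: float tensors [batch, obs_dim]
    actions: int64 tensor [batch] of action indices
    rewards, done: float tensors [batch] (done is 0.0 or 1.0)
    """
    q_sa = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * (1.0 - done) * q_net(next_obs).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The actual Atari-playing agent adds convolutional layers over raw pixels, an experience replay buffer, and a periodically updated target network, but the core learning signal is essentially this update applied many millions of times.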

As long as the computer is in its own simulated world, with no input from the outside world, we're almost certainly safe. It cannot model the real world.

But hook it up to some cameras and microphones, and then you have the potential for something that could wind up being dangerous.

So I'd say there's no reason to speculate about 1000x computing power. Just stick it in a virtual world with no human communication, let it run for a while, and see if it shows signs of the kind of intelligence that would be worrying.

(The AI Box argument does not apply here)

The challenge, of course, is coming up with a virtual world that is complex enough to be able to discern high intelligence while being different enough from the real world that it could not apply knowledge gained in the simulation to the real world.

As long as the computer is in its own simulated world, with no input from the outside world, we're almost certainly safe. It cannot model the real world.

Note: given really, really large computational resources, an AI can always "break out by breaking in": generate some sets of physical laws ordered by complexity, look at what sort of intelligent life arises in those cosmologies, craft an attack that works against it on the assumption that it's running the AI in a box, and repeat for the hundred simplest cosmologies. This potentially needs a lot of computing power, but it might take very little, depending on how strongly our minds are determined by our physics.

I'd say that if it started running a huge number of simulations of physical realities, and analyzing the intelligence of beings that resulted, that would fall squarely into the 'worrying level of intelligence' category.

In fact if it started attempting to alter the physics of the virtual world it's in at any level - either by finding some in-game way to hack the virtual world, or by running simulations of alternate physics - that would be incredibly worrying.

[-][anonymous]-10

Is there any way we could contact Musk? I believe there is a nonzero chance that he could have an idea about what to do that has not yet been considered on LW, if for no other reason than that he is a very intelligent outsider who seems to reason about everything from first principles.

I don't think that LW is a good vehicle for this. As part of MIRI's expert-interview series they could ask Musk. People at the head of MIRI or FHI might also reach out directly to Musk.

EY has tweeted to Elon; they're aware of each other. Also, Peter Thiel is a major MIRI donor and was a co-founder of PayPal with Elon (and an investor in many of his ventures). I'm pretty sure that if EY wanted a backchannel to (or a meeting with) Elon, it would be easily obtained.

There is a Google+ profile and an @elonmusk Twitter account.

[-][anonymous]-20

Nice, thanks. I'm willing to do it (I've actually never used G+ or Twitter before...). I would think, though, that if there is an LW user whom Musk might have actually heard of, say through the Superintelligence book, then that would be more likely to get a response. In other words, I think if Eliezer Yudkowsky wrote to Elon, that could be a good thing.

Do not spam high-status people. That's a recipe for an ugh field. I'm pretty confident that Elon Musk is capable of navigating this terrain, including finding a competent guide if needed. He's obviously read extensively on the topic, something that’s not possible to do without discovering MIRI and its proponents.

[-][anonymous]-40

Who is talking about spamming anyone? You are completely missing my point. The goal is not to help Elon navigate the terrain. I know he can do that. The point is to humbly ask for his advice as to what we could be doing given his track record of good ideas in the past.

[-]khafra120

Do not spam high-status people, and do not communicate with high-status people in a transparent attempt to affiliate with them and claim some of their status for yourself.

[-][anonymous]00

No... I am interested in not dying. How you could read that into my comment, especially when I suggested Eliezer do the contacting, I have no idea.

Anyway, judging by the upvotes on your comment and Halfwitz's comment, it is clear that my values don't align with LW's, so I guess I will be on my way...

[-]janos130

Musk has joined the advisory board of FLI and CSER, which are younger sibling orgs of FHI and MIRI. He's aware of the AI xrisk community.