Comment author: artemium 27 November 2014 05:49:39PM *  2 points [-]

Nice blog post about AI and existential risk by my friend and occasional LW poster. He was inspired by the disappointingly bad debate on Edge.org. Feel free to share it if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.

"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."

http://nthlook.wordpress.com/2014/11/26/why-fear-ai/

Comment author: TRIZ-Ingenieur 25 November 2014 11:23:48PM 2 points [-]

No. Openly available knowledge is not enough to obtain a decisive advantage. For that, close cooperation with humans and human-led organizations is absolutely necessary. Trust building will take years, even for AGIs. In the meantime, competing AGIs will appear.

Ben Goertzel does not want to waste time debating any more - he pushes open AGI development to prevent any hardware overhang. Other readers of Bostrom's book might start other projects to counter singleton AI development. We do not have a ceteris paribus condition - we can shape what the default outcome will be.

Comment author: artemium 26 November 2014 07:31:07PM 0 points [-]

we can shape what the default outcome will be.

But who are "we"? There are many agents with different motivations doing AI development. I'm afraid that it will be difficult to control each of these agents (companies, governments, militaries, universities, terrorist groups) in the future, and the decreasing cost of technology will only worsen the problem over time.

Comment author: artemium 26 November 2014 07:00:23AM 0 points [-]

This is really worrying. Hubris and irrational geopolitical competition may create existential risks sooner than expected. http://motherboard.vice.com/read/how-the-pentagons-skynet-would-automate-war

Comment author: Artaxerxes 25 November 2014 07:54:23AM *  18 points [-]

Stuart Russell contributes a response to the Edge.org article from earlier this month.

Of Myths And Moonshine

"We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."

So wrote Leo Szilard, describing the events of March 3, 1939, when he demonstrated a neutron-induced uranium fission reaction. According to the historian Richard Rhodes, Szilard had the idea for a neutron-induced chain reaction on September 12, 1933, while crossing the road next to Russell Square in London. The previous day, Ernest Rutherford, a world authority on radioactivity, had given a "warning…to those who seek a source of power in the transmutation of atoms – such expectations are the merest moonshine."

Thus, the gap between authoritative statements of technological impossibility and the "miracle of understanding" (to borrow a phrase from Nathan Myhrvold) that renders the impossible possible may sometimes be measured not in centuries, as Rod Brooks suggests, but in hours.

None of this proves that AI, or gray goo, or strangelets, will be the end of the world. But there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility. There have been many unconvincing arguments – especially those involving blunt applications of Moore's law or the spontaneous emergence of consciousness and evil intent. Many of the contributors to this conversation seem to be responding to those arguments and ignoring the more substantial arguments proposed by Omohundro, Bostrom, and others.

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

  1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.

  2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.

This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research – the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius. AI research has been accelerating rapidly as pieces of the conceptual framework fall into place, the building blocks gain in size and strength, and commercial investment outstrips academic research activity. Senior AI researchers express noticeably more optimism about the field's prospects than was the case even a few years ago, and correspondingly greater concern about the potential risks.

No one in the field is calling for regulation of basic research; given the potential benefits of AI for humanity, that seems both infeasible and misdirected. The right response seems to be to change the goals of the field itself; instead of pure intelligence, we need to build intelligence that is provably aligned with human values. For practical reasons, we will need to solve the value alignment problem even for relatively unintelligent AI systems that operate in the human environment. There is cause for optimism, if we understand that this issue is an intrinsic part of AI, much as containment is an intrinsic part of modern nuclear fusion research. The world need not be headed for grief.
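(An illustrative aside, not from Russell's essay: his point about a system optimizing a function of n variables while pushing the unconstrained ones to extremes is easy to reproduce in a few lines. The toy objective, the incidental coupling term, and the bounds below are all invented for this sketch.)

```python
import numpy as np
from scipy.optimize import minimize

# The designer's objective cares only about x[0] (target value 3.0), plus a
# tiny incidental coupling to x[1], e.g. a resource the system can consume.
def negative_utility(x):
    return (x[0] - 3.0) ** 2 - 1e-3 * x[1]

# x[1] is something we care about but did not constrain in the objective;
# the optimizer is free to push it anywhere inside its feasible range.
res = minimize(negative_utility, x0=np.array([0.0, 0.0]),
               bounds=[(-10.0, 10.0), (-1000.0, 1000.0)])
print(res.x)  # x[0] converges to ~3.0, while x[1] is driven to 1000.0
```

The solver gets x[0] exactly right and, because even a vanishing incentive on x[1] goes unopposed, pins it to the edge of its allowed range: you get exactly what you asked for, not what you wanted.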

Comment author: artemium 25 November 2014 08:01:40PM 4 points [-]

Finally, some common sense. I was seriously disappointed by the statements made by people I usually admire (Pinker, Shermer). It just shows how far we still have to go in communicating AI risk to the general public when even the smartest intellectuals dismiss the idea before any rational analysis.

I'm really looking forward to Elon Musk's comment.

Comment author: Punoxysm 20 November 2014 04:46:17PM 5 points [-]

There are good existing words for this: the internet troll trolled you. Don't sweat it.

Comment author: artemium 25 November 2014 08:29:21AM *  0 points [-]

I was actually planning to dress up as a Pascal's Mugger for Halloween. The plan was to go to the bartender during a Halloween party, ask him to give me an expensive cocktail for free, and tell him: "If you give me that for free, I will spend the rest of my life trying to build an AI that will put you in a Utopia simulation for eternity. I know it sounds unlikely, but the price of the cocktail is immensely smaller than the monstrous utility you will potentially gain by counting on this small probability."

In the end I decided that the probability of being kicked out of the party was far greater than that of being a successful Pascal's Mugger, so I gave up :D.
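(For the record, the mugger's pitch is just a naive expected-value comparison; here is a toy version, with all numbers invented.)

```python
# Toy expected-value check behind the mugger's pitch (all numbers invented).
cocktail_cost = 15.0      # the bartender's cost of handing over the drink
p_utopia = 1e-12          # probability the promise actually pays off
utopia_utility = 1e20     # the "monstrous" utility of eternal Utopia

# Naive expected utility of complying: tiny probability times huge payoff.
expected_gain = p_utopia * utopia_utility   # = 1e8
print(expected_gain > cocktail_cost)        # True: naive EV says comply
```

A sufficiently large claimed payoff swamps any finite cost, which is exactly why unbounded utilities make naive expected-value reasoning exploitable.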

Comment author: artemium 25 November 2014 08:05:14AM *  1 point [-]

I think we can all agree that, for better or for worse, this stuff has already entered the public arena. I mean, Slate magazine is as mainstream as you can get, and that article was pretty brutal in its attempt to convince people of the viability of the idea.

I wouldn't be surprised if "The Basilisk" the movie is already in the works ;-). (I hope it gets directed by Uwe Boll... hehe)

In light of these developments, I think it is time to end the formal censorship and focus on the best way to inform the general public that the entire thing was a stupid overreaction, and to clear LW's name of any slander.

There are real issues in AI safety and this is an unnecessary distraction.

Comment author: Capla 24 November 2014 11:11:14PM -1 points [-]

From here.

9. Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend on arriving at the actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.

Definitely actual blueprint, but, on the way to an actual blueprint, you probably have to, as an intermediate step, construct intractable theories that tell you what you’re trying to do, and enable you to understand what’s going on when you’re trying to do something. If you want a precise, practical AI, you don’t get there by starting with an imprecise, impractical AI and going to a precise, practical AI. You start with a precise, impractical AI and go to a precise, practical AI. I probably should write that down somewhere else because it’s extremely important, and as(?) various people who will try to dispute it, and at the same time hopefully ought to be fairly obvious if you’re not motivated to arrive at a particular answer there. You don’t just run out and construct something imprecise because, yeah, sure, you’ll get some experimental observations out of that, but what are your experimental observations telling you? And one might say along the lines of ‘well, I won’t know that until I see it,’ and suppose that has been known to happen a certain number of times in history; just inventing the math has also happened a certain number of times in history.

We already have a very large body of experimental observations of various forms of imprecise AIs, both the domain specific types we have now, and the sort of imprecise AI constituted by human beings, and we already have a large body of experimental data, and eyeballing it... well, I’m not going to say it doesn’t help, but on the other hand, we already have this data and now there is this sort of math step in which we understand what exactly is going on; and then the further step of translating the math back into reality. It is the goal of the Singularity Institute to build a Friendly AI. That’s how the world gets saved, someone has to do it. A lot of people tend to think that this is going to require, like, a country’s worth of computing power or something like that, but that’s because the problem seems very difficult because they don’t understand it, so they imagine throwing something at it that seems very large and powerful and gives this big impression of force, which might be a country-size computing grid, or it might be a Manhattan Project where some computer scientists... but size matters not, as Yoda says.

What matters is understanding, and if the understanding is widespread enough, then someone is going to grab the understanding and use it to throw together the much simpler AI that does destroy the world, the one that's built to much lower standards, so the model of 'yes, you need the understanding, the understanding has to be concentrated within a group of people small enough that there is not one defector in the group who goes off and destroys the world, and then those people have to build an AI.' If you condition on the world having been saved, and look back within history, I expect that that is what happened in the majority of cases where a world anything like this one gets saved, and working back from there, they will have needed a precise theory, because otherwise they're doomed. You can make mistakes and pull yourself up, even if you think you have a precise theory, but if you don't have a precise theory then you're completely doomed, or if you don't think you have a precise theory then you're completely doomed.

Also,

Aside from that, though, I think that saving the human species eventually comes down to, metaphorically speaking, nine people and a brain in a box in a basement, and everything else feeds into that. Publishing papers in academia feeds into either attracting attention that gets funding, or attracting people who read about the topic, not necessarily reading the papers directly even but just sort of raising the profile of the issue so that intelligent people wondering what they can do with their lives think artificial intelligence...

I get the sense that Eliezer wants to be one of the nine people in that basement, if he can be, but I might be stretching the evidence a little to say "Eliezer has expressed that ultimately, the goal of MIRI is not just to research how to make FAI, but to be the ones to make it."

Comment author: artemium 25 November 2014 07:49:11AM 0 points [-]

Thanks! I hadn't seen that before. I still think it would be better to specialize in the ethics issue and then apply its results to an AGI system developed by another (hopefully friendly) party. But it would be awesome if someone who is genuinely ethical developed AGI first. I'm really hoping that some big org that has gone furthest in AI research, like Google, decides to cooperate with MIRI on that issue when they reach the critical point in AGI buildup.

Comment author: artemium 24 November 2014 09:23:16PM *  0 points [-]

"We're sorry but this video is not available in your country." We'll I guess I'm safe. Living in a shitty country has some advantages.

Comment author: artemium 24 November 2014 09:52:30PM 0 points [-]

"We're sorry but this video is not available in your country." We'll I guess I'm safe :-).

Comment author: Artaxerxes 17 November 2014 01:12:30PM *  2 points [-]

What are you worried he might do?

If he believes what he's said, he should really throw lots of money at FHI and MIRI. Such an action would be helpful at best, harmless at worst.

Comment author: artemium 24 November 2014 09:42:01PM *  0 points [-]

He will probably try to buy influence in every AI company he can find. There are limits to this strategy, though. I think raising public awareness of this problem and donating money to MIRI and FHI would also help.

BTW, someone should make a movie where Elon Musk becomes Iron Man and then accidentally develops uFAI... oh wait

Comment author: XiXiDu 19 November 2014 04:09:42PM *  1 point [-]

So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?

What I meant is that he and others will cause the general public to adopt a perception of the field of AI comparable to the public perception of GMOs, vaccination, nuclear power, etc.: a non-evidence-backed fear of something that is generally benign and positive.

He could have used his influence and reputation to directly contact AI researchers, or e.g. hold a quarterly conference about risks from AI. He could have talked to policymakers about how to ensure safety while promoting the positive aspects. There is a lot he could do. But making crazy statements in public about summoning demons and comparing AI to nukes is just completely unwarranted given the current state of evidence about AI risks, and will probably upset lots of AI people.

You believe he's calling for the execution, imprisonment or other punishment of AI researchers?

I doubt that he is that stupid. But I do believe that certain people, if they were to seriously believe in doom by AI, would consider violence to be an option. John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann was still around and thought that Google would within 5-10 years launch a doomsday device he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration was highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.

The problem here is not that it would be wrong to deactivate a doomsday device forcefully, if necessary, but rather that there are people out there who are stupid enough to use force unnecessarily or decide to use force based on insufficient evidence (evidence such as claims made by Musk).

ETA: Just take those people who destroy GMO test fields. Musk won't do something like that. But other people, who would commit such acts, might be inspired by his remarks.

Comment author: artemium 24 November 2014 09:37:00PM *  0 points [-]

John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann was still around and thought that Google would within 5-10 years launch a doomsday device he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration was highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.

There is some truth to that, especially regarding how crazy von Neumann was. But I'm not sure anyone would launch a pre-emptive nuclear attack on another country because of AGI research. I mean, these countries already have nukes, a pretty solid doomsday weapon, so I don't think that adding another superweapon to the arsenal would change the situation. Whether you are blown to bits by a Chinese nuke or turned into paperclips by a Chinese-built AGI doesn't make much difference.
