Roko comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong

Post author: AnnaSalamon 19 May 2010 08:00AM




Comment author: snarles 24 May 2010 09:58:03AM *  0 points [-]

Indeed, the truth of the matter is that I would be interested in contributing to SIAI, but at the moment I am still not convinced that it would be a good use of my resources. My other objections still haven't been satisfied, but here's another argument. As usual, I don't personally commit to what I claim, since I don't have enough knowledge to discuss anything in this area with certainty.

The main thing this community seems to lack when discussing the Singularity is political savvy. The primary forces that shape history are, and quite likely always will be, economic and political motives rather than technology. Technology and innovation are expensive, and innovators require financial and social motivation to create. This applies superlinearly to projects large enough to require collaboration.

General AI is exactly that sort of project. There is no magic mathematical insight that will let us write a hundred-line program capable of improving itself in any reasonable amount of time. I'm sure Eliezer is aware of the literature on optimization processes, but the no free lunch principle and the practical randomness of innovation mean that an AI seeking to self-improve can only do so with an (optimized) random search. Humans essentially do the same thing, except we have knowledge and certain built-in processes to help us constrain the search space (though this also makes us miss certain obvious innovations). To make GAI a real threat, you have to give it enough knowledge to understand the basics of human behavior, or enough knowledge to learn more on its own from human-created resources. This is highly specific information that would take a fully general learning agent a great many cycles to infer unless it were fed the information in a machine-friendly form.
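To make the "optimized random search" framing concrete, here is a minimal sketch (my own illustration, not anything from SIAI's literature) of an agent that improves only by proposing random perturbations and keeping whichever ones score better. The objective function and step size are arbitrary choices for demonstration; the point is that without domain knowledge to constrain the search space, this trial-and-retain loop is all the optimizer has.

```python
import random

def random_search(objective, dim, steps, step_size=0.1, seed=0):
    """Minimize `objective` by randomly perturbing the current best point
    and keeping only the perturbations that improve it. No gradients,
    no domain insight: pure trial, evaluation, and retention."""
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    best_val = objective(best)
    for _ in range(steps):
        candidate = [x + rng.gauss(0.0, step_size) for x in best]
        val = objective(candidate)
        if val < best_val:  # retain improvements, discard the rest
            best, best_val = candidate, val
    return best, best_val

# Toy objective: sum of squares, minimized at the origin.
sphere = lambda xs: sum(x * x for x in xs)
point, value = random_search(sphere, dim=3, steps=5000)
```

A smarter agent would constrain where candidates are drawn from, which is exactly the knowledge-dependent shortcut the paragraph above attributes to humans.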

Now consider the political and economic aspects of GAI. Support for general artificial intelligence is a political impossibility, because general AI, by definition, is a threat to the jobs of voters. By the time GAI becomes remotely viable, a candidate supporting a ban on GAI will have nearly universal support. It is impossible even to defend GAI on the grounds that the research it produces could save lives, because no medical researcher will welcome a technology that does their job for them. The same applies to any professional. There is a worry on this site that people underestimate GAI, but it is far more likely that GAI, or anything remotely like it, is vastly overestimated as a threat.

The economic aspects are similar. GAI is vastly more costly to develop (for reasons I've outlined), and doesn't provide many advantages over expert systems. Besides, no company is going to produce a self-improving tool in the first place, because nobody, in theory, would ever have to buy an upgraded version.

These political and economic forces are a powerful retardant against the possibility of a general AI catastrophe, and they have more heft than any focused organization like SIAI could ever have. Yet much like Nader spoiling the election for Al Gore, the minor influence of SIAI might actually weaken rather than reinforce these protective forces. By claiming to have the tools in place to implement the strategically named "friendly AI," SIAI might in fact assuage public worries about AI. Even if the organization itself does not take actions to do so, GAI advocates will be able to exaggerate the safety of friendly AI and point out in press releases that "experts have already developed Friendly AI guidelines." And by developing the framework to teach machines about human behavior, SIAI lowers the cost for any enterprise that, for some reason, is interested in developing GAI.

At this point, I conclude my hypothetical argument. But I have realized that it is now my true position that SIAI should take the clear position that, if tenable, NO general AI is preferable to friendly AI. (Back to no-accountability mode: it may be that general AI will eventually come, but by the point it has become an eventuality, the human race will be vastly better prepared than it is now to deal with such an agent on an equal footing.)

Comment deleted 24 May 2010 11:28:38AM *  [-]
Comment author: snarles 24 May 2010 02:51:02PM 0 points [-]

"You might want to go back to basics and think about how politics, public opinion and the media operate, for example that they had little opinion on the hugely important probabilistic revolution in AI over the last 15 years, but spilled loads of ink over stem cells."

And why is that?

Comment deleted 24 May 2010 03:43:59PM [-]
Comment author: JoshuaZ 24 May 2010 10:37:51PM 0 points [-]

But stem cell research is much more prominent in that it is producing, or is very close to producing, notable direct applications. It also isn't just a yuck factor (although that's certainly one part): stem cell research raises serious moral qualms under many different moral systems. AI may very well trigger similar issues if it becomes more viable.

Comment deleted 25 May 2010 01:18:22AM *  [-]
Comment author: Clippy 25 May 2010 02:24:18AM -1 points [-]

The ideal dangerous technology for people to not give a shit about banning would involve a theoretical threat which is hard to understand, has never happened before, involves only nonphysical hazards like information, and has nothing to do with flesh, sex, or anything disgusting, or with fire, sharp objects, or natural disasters.

Yes, but these are precisely the dangers humans should certainly not worry about to begin with.

Comment author: snarles 25 May 2010 07:03:06AM 0 points [-]

"The ideal dangerous technology for people to not give a shit about banning would involve a theoretical threat which is hard to understand"

I don't think The Terminator was hard to understand. The second you get some credible people saying that AI is a threat, the media reaction is going to be excessive, as it always is.

Comment deleted 25 May 2010 11:40:30AM *  [-]
Comment author: snarles 25 May 2010 11:48:24AM *  1 point [-]

Thanks; I was mistaken. Would you say, then, that mainstream scientists are similarly irrational? (The main comparison I have in mind throughout this section, by the way, is global warming.)

Comment author: JoshuaZ 25 May 2010 01:25:50AM 0 points [-]

There may be an issue here about what we define as AI. For example, I would not see what Google does as AI but rather as harvesting human intelligence. The lines here may be blurry and hard to define.

You make a good point about older taboos.

Comment author: JoshuaZ 25 May 2010 02:28:35PM 0 points [-]

Could someone explain why this comment got modded down? I don't see any errors in reasoning or other issues. (Was the content level too low for the desired signal/noise ratio?)

Comment deleted 25 May 2010 02:42:43PM [-]
Comment author: JoshuaZ 25 May 2010 03:15:20PM *  1 point [-]

Do you have a citation for Google using machine learning in any substantial scale? The most basic of the Google algorithms is PageRank which isn't a machine learning algorithm by most definitions of that term.