JoshuaZ comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong

26 Post author: AnnaSalamon 19 May 2010 08:00AM


Comment author: snarles 24 May 2010 09:58:03AM *  0 points [-]

Indeed, the truth of the matter is that I would be interested in contributing to SIAI, but at the moment I am still not convinced that it would be a good use of my resources. My other objections still haven't been satisfied, but here's another argument. As usual, I don't personally commit to what I claim, since I don't have enough knowledge to discuss anything in this area with certainty.

The main thing this community lacks when discussing the Singularity is political savvy. The primary forces that shape history are, and quite likely always will be, economic and political motives rather than technology. Technology and innovation are expensive, and innovators require financial and social motivation to create. This applies superlinearly to projects so large as to require collaboration.

General AI is exactly that sort of project. There is no magic mathematical insight that will enable us to write a hundred-line program that can improve itself in any reasonable amount of time. I'm sure Eliezer is aware of the literature on optimization processes, but the no-free-lunch principle and the practical randomness of innovation mean that an AI seeking to self-improve can only do so with an (optimized) random search. Humans essentially do the same thing, except that we have knowledge and certain built-in processes to help us constrain the search space (though this also makes us miss certain obvious innovations). To make GAI a real threat, you have to give it enough knowledge to understand the basics of human behavior, or enough knowledge to learn more on its own from human-created resources. This is highly specific information which would take a fully general learning agent a great many cycles to infer unless it were fed the information in a machine-friendly form.
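The "(optimized) random search" picture can be made concrete. Here is a minimal sketch, assuming nothing beyond the Python standard library: a stochastic hill climber that proposes random mutations and keeps only those that score better. The objective and names are purely illustrative toys, not anything specific to AI research.

```python
import random

def stochastic_hill_climb(score, candidate, mutate, steps=1000, seed=0):
    """Optimized random search: propose random mutations, keep improvements."""
    rng = random.Random(seed)
    best, best_score = candidate, score(candidate)
    for _ in range(steps):
        trial = mutate(best, rng)
        trial_score = score(trial)
        if trial_score > best_score:  # the 'optimization': constrain the walk
            best, best_score = trial, trial_score
    return best, best_score

# Toy objective: maximize the number of 1-bits in a 20-bit string.
score = lambda bits: sum(bits)
mutate = lambda bits, rng: [b ^ 1 if rng.random() < 0.1 else b for b in bits]
best, best_score = stochastic_hill_climb(score, [0] * 20, mutate)
```

A real self-improving agent would face an astronomically larger search space, which is exactly the point above: without built-in knowledge to constrain the proposals, this kind of search is slow.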

Now consider the political and economic aspects of GAI. Support for general artificial intelligence is a political impossibility, because general AI, by definition, is a threat to the jobs of voters. By the time GAI becomes remotely viable, a candidate supporting a ban on GAI will have nearly universal support. It is impossible even to defend GAI on the grounds that the research it produces could save lives, because no medical researcher will welcome a technology that does their job for them. The same applies to any professional. There is a worry on this site that people underestimate GAI, but it is far more likely that GAI, or anything remotely like it, is vastly overestimated as a threat.

The economic aspects are similar. GAI is vastly more costly to develop (for reasons I've outlined), and doesn't provide many advantages over expert systems. Besides, no company is going to produce a self-improving tool in the first place, because nobody, in theory, would ever have to buy an upgraded version.

These political and economic forces are a powerful retardant against the possibility of a General AI catastrophe, and they have more heft than any focused organization like SIAI could ever have. Yet, much as Nader's candidacy spoiled Al Gore's vote, the minor influence of SIAI might actually weaken rather than reinforce these protective forces. By claiming to have the tools in place to implement the strategically named 'friendly AI', SIAI might in fact assuage public worries about AI. Even if the organization itself takes no such action, GAI advocates will be able to exaggerate the safety of friendly AI and point out in press releases that 'experts have already developed Friendly AI guidelines'. And by developing the framework to teach machines about human behavior, SIAI lowers the cost for any enterprise that, for some reason, is interested in developing GAI.

At this point, I conclude my hypothetical argument. But I have realized that it is now my true position that SIAI should adopt the clear position that, if tenable, NO general AI is preferable to friendly AI. (Back in no-accountability mode: it may be that general AI will eventually come, but by the point at which it has become an eventuality, the human race will be vastly more prepared than it is now to deal with such an agent on an equal footing.)

Comment author: JoshuaZ 24 May 2010 02:15:50PM 2 points [-]

You make some good points about economic and political realities. However, I'm deeply puzzled by some of your other remarks. For example, you claim that general AI wouldn't provide many benefits over expert systems. This puzzles me, since expert systems are by nature highly limited. Expert systems cannot construct new ideas, nor can they handle anything that's even vaguely cross-disciplinary. No number of expert systems will be able to match the scientific productivity of a single bright scientist.

You also claim that no general AI is better than friendly AI. This is deeply puzzling. It makes sense only if one is fantastically paranoid about the loss of jobs. But new technologies are often economically disruptive. There are all sorts of jobs that were around a hundred years ago, or even fifty years ago, that don't exist now. And yes, people lost jobs. But overall, we are better for it. You would need to make a much stronger case if you are trying to establish that no general AI is somehow better than friendly AI.

Comment author: snarles 24 May 2010 02:49:06PM *  -2 points [-]

Why do you think expert systems cannot handle anything cross-disciplinary? I would even say that expert systems can generate new ideas, by more or less the same process that humans do. An expert system only needs an understanding of manufacturing, physics, and chemistry to design better computer chips, for instance. If you're talking about revolutionary, paradigm-shifting ideas, we are probably already saturated with them. The main bottleneck inhibiting paradigm shifts is not the ideas but the infrastructure and the economic need for the shift. A company that can produce a 10% better product can already take over the market; a 200% better product is overkill, and especially unnecessary if there are substantial costs in overhauling the production line.

The reason why NO general AI is better than friendly (general) AI is very simple. IF general AI is an existential threat, then no organization claiming to put humans first could justify being pro-AGI (friendly or not), since no possible benefit* can justify the risk of destroying humanity.

*save for mitigating an even larger risk of annihilation, of course

Comment author: JoshuaZ 24 May 2010 10:30:51PM 2 points [-]

Why do you think expert systems cannot handle anything cross-disciplinary? I even say that expert systems can generate new ideas, by more or less the same process that humans do. An expert system only needs an understanding of manufacturing, physics, and chemistry to design better computer chips, for instance.

Expert systems generally need very narrow problem domains to function. I'm not sure how you would expect an expert system to have an understanding of three very broad topics. Moreover, I don't know exactly how humans come up with new ideas. (Sometimes when people ask me, I tell them that I bang my head against the wall. That's not quite true, but it does reflect that I understand only at a very gross level how I construct new ideas. I'm bright but not very bright, and I can see that much smarter people have the same trouble.) So it is not at all clear to me how you are convinced that expert systems could construct new ideas.

To be sure, there has been some limited work on computer systems coming up with new, interesting ideas. There's been some limited success with computers in my own field; see for example Simon Colton's work. There's also been similar work in geometry and group theory. But none of these systems were expert systems as that term is normally used. Moreover, none of the ideas they've come up with have been that impressive. The only exception I'm aware of is the proof of the Robbins conjecture. So even in narrow areas we've had very little success using specialized AIs. Are you using a more general definition of expert system than is standard?

The reason why NO general AI is better than friendly (general) AI is very simple. IF general AI is an existential threat, then no organization claiming to put humans first could justify being pro-AGI (friendly or not), since no possible benefit* can justify the risk of destroying humanity.

There are multiple problems with that claim. First, the existential threat may be low. There's some tiny risk, for example, that the LHC will destroy the Earth in some very fun way. There's also some risk that work with genetic engineering might give fanatics the skill to make a humanity-destroying pathogen. And there's a chance that nanotech might turn everything into purple-with-green-stripes goo (this is much more likely than gray goo, of course). There's even some risk that proving the wrong theorem might summon Lovecraftian horrors. All events have some degree of risk. Moreover, general AI might actually help mitigate some serious threats, for instance by making it easier to track and deal with rogue asteroids or other catastrophic dangers.

Also, even if one accepted the general outline of your argument, one would conclude only that organizations shouldn't try to make friendly general AI. It isn't a reason to think that actually having no AI is better than having friendly AI.

Comment author: snarles 25 May 2010 06:18:04AM *  1 point [-]

"First, the existential threat [of AGI] may be low."

Let me trace back the argument tree for a second. I originally asked for a defense of the claim that "SIAI is tackling the world's most important task." Michael Porter responded, "The real question is, do you even believe that unfriendly AI is a threat to the human race, and if so, is there anyone else tackling the problem in even a semi-competent way?" So NOW in this argument tree, we're assuming that unfriendly AI IS an existential threat, enough that preventing it is the "world's most important task."

Now in this branch of the argument, I assumed (but did not state) the following: If unfriendly AI is an existential threat, friendly AI is an existential threat, as long as there is some chance of it being modified into unfriendly AI. Furthermore, I assert that it's a naive notion that any organization could protect friendly AI from being subverted.

Comment author: Alicorn 25 May 2010 06:22:57AM 2 points [-]

AIs, including ones with Friendly goals, are apt to work to protect their goal systems from modification, as this will prevent their efforts from being directed towards things other than their (current) aims. There might be a window while the AI is mid-FOOM where it's vulnerable, but not a wide one.

Comment author: snarles 25 May 2010 10:39:33AM 1 point [-]

How are you going to protect the source code before you run it?

Comment author: JGWeissman 25 May 2010 06:24:49AM 0 points [-]

A Friendly AI ought to protect itself from being subverted into an unfriendly AI.

Comment author: snarles 25 May 2010 10:53:51AM 1 point [-]

Let me posit that FAI may be much less capable than unfriendly AI. The power of unfriendly AI is that it can increase its growth rate by taking resources by force. An FAI would be limited to what resources it could ethically obtain. Therefore, a low-grade FAI might be quite vulnerable to human antagonists, while its unrestricted version could be orders of magnitude more dangerous. In short, FAI could be low-reward, high-risk.

Comment author: JGWeissman 25 May 2010 05:30:41PM 1 point [-]

There are plenty of resources that an FAI could ethically obtain, and with a lead time of less than a day, it could grow enough to be vastly more powerful than an unfriendly seed AI.

Really, asking which AI wins going head to head is the wrong question. The goal is to get an FAI running before unfriendly AGI is implemented.

Comment author: Vladimir_Nesov 27 May 2010 09:52:08AM *  0 points [-]

The power of unfriendly AI is that it can increase its growth rate by taking resources by force. An FAI would be limited to what resources it could ethically obtain.

Wrong. An FAI will take whatever unethical steps it must, as long as that is on net the best path it can see, taking into account both the (ethically harmful) instrumental actions and their expected outcome. There is no such general disadvantage to an AI being Friendly. Not that I expect any need for such drastic measures (in an apparent way), especially considering the first-mover advantage it will likely have.

Comment author: Blueberry 24 May 2010 04:45:00PM 1 point [-]

An expert system only needs an understanding of manufacturing, physics, and chemistry to design better computer chips, for instance.

If a program can take an understanding of those subjects and design a better computer chip, I don't think it's just an "expert system" anymore. I would think it would take a full AI to do that; it's an AI-complete problem.

If you're talking about revolutionary, paradigm shifting ideas--we are probably already saturated with such ideas. The main bottleneck inhibiting paradigm shifts is not the ideas but the infrastructure and economic need for the paradigm shift.

Are you serious? I would think the exact opposite is true: we have an infrastructure starving for paradigm-shifting ideas. I'd love to hear some of these revolutionary ideas we're saturated with. I think we have some insights, but those insights need to be fleshed out and implemented, and figuring out how to do that is the paradigm shift that needs to occur.

no organization claiming to put humans first could justify being pro-AGI (friendly or not), since no possible benefit* can justify the risk of destroying humanity.

Wait a minute. If I could press a button now with a 10% chance of destroying humanity and a 90% chance of solving the world's problems, I'd do it. Everything we do has some risks. Even the LHC had an (extremely minuscule) risk of destroying the universe, but a cost-benefit analysis should reveal that some things are worth minor chances of destroying humanity.
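That cost-benefit analysis can be made explicit as an expected-utility calculation. All of the utility values below are pure assumptions for illustration; only the 10%/90% split comes from the button as stated.

```python
# All utility values are assumed for illustration only.
u_extinction = 0.0    # humanity destroyed
u_solved = 100.0      # the world's problems are solved
u_status_quo = 50.0   # things continue as they are

p_destroy = 0.10      # the button's stated risk

# Expected utility of pressing the button vs. leaving it alone.
ev_press = p_destroy * u_extinction + (1 - p_destroy) * u_solved
ev_wait = u_status_quo

worth_pressing = ev_press > ev_wait
```

Under these assumed numbers pressing wins; with a smaller gap between "solved" and "status quo", or a higher extinction probability, the inequality flips, which is exactly the disagreement in this thread.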

Comment author: snarles 25 May 2010 07:13:36AM *  0 points [-]

"If a program can take an understanding of those subjects and design a better computer chip, I don't think it's just an "expert system" anymore. I would think it would take an AI to do that. That's an AI complete problem."

What I had in mind was some sort of combinatorial approach to designing chips, i.e. take these materials and randomly generate a design, test it, and then start altering the search space based on the results. I didn't mean "understanding" in the human sense of the word, sorry.
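That generate-test-and-narrow loop can be sketched as a simple distribution-based search, in the spirit of the cross-entropy method. Everything here is a toy assumption: the "design" is a bitstring and the "test" is a made-up scoring function standing in for a real fabricate-and-benchmark step.

```python
import random

def generate_test_narrow(score, n_bits=16, pop=50, elite=10, iters=30, seed=1):
    """Randomly generate designs, test them, then alter the search space
    by re-estimating sampling probabilities from the best designs."""
    rng = random.Random(seed)
    p = [0.5] * n_bits          # per-bit sampling probabilities
    best = None
    for _ in range(iters):
        designs = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
                   for _ in range(pop)]
        designs.sort(key=score, reverse=True)
        if best is None or score(designs[0]) > score(best):
            best = designs[0]
        for i in range(n_bits):  # narrow the search space toward the elite
            p[i] = sum(d[i] for d in designs[:elite]) / elite
    return best

# Toy 'chip test': score a design by agreement with a hidden target layout.
target = [1, 0] * 8
score = lambda d: sum(1 for a, b in zip(d, target) if a == b)
best = generate_test_narrow(score)
```

With cheap simulated tests this converges quickly; when each test means fabricating a prototype, the loop becomes far more expensive.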

"I'd love to hear some of these revolutionary ideas that we're saturated with. I think we have some insights, but these insights need to be fleshed out and implemented, and figuring out how to do that is the paradigm shift that needs to occur"

Example: many aspects of the legal and political systems could be reformed, and it's not difficult to come up with ideas on how they could be reformed. The benefit is simply insufficient to justify spending much of the limited resources we have on solving those problems.

"Wait a minute. If I could press a button now with a 10% chance of destroying humanity and a 90% chance of solving the world's problems, I'd do it. "

So you think there's a >10% chance that the world's problems are going to destroy humanity in the near future?

Comment author: JoshuaZ 26 May 2010 11:10:48PM 0 points [-]

What I had in mind was some sort of combinatorial approach to designing chips, i.e. take these materials and randomly generate a design, test it, and then start altering the search space based on the results. I didn't mean "understanding" in the human sense of the word, sorry.

Given the very large number of possibilities and the difficulty of making prototypes, this seems like an extremely inefficient process without more thought going into it.

Comment author: Blueberry 26 May 2010 05:17:45AM 0 points [-]

What I had in mind was some sort of combinatorial approach to designing chips

Oh, okay, fair enough, though I'm still not sure I would call that an "expert system" (this time for the opposite reason that it seems too stupid).

many aspects of the legal and political systems could be reformed, and it's not difficult to come up with ideas on how they could be reformed. The benefit is simply insufficient to justify spending much of the limited resources we have on solving those problems.

Ah. I was thinking of designing an AI, probably because I was primed by your expert system comment. Well, in those cases, I think the issue is that our legal and political systems were purposely set up to be difficult to change: change requires overturning precedents, obtaining majority or 3/5 or 2/3 votes in various legislative bodies, passing constitutional amendments, and so forth. And I can guarantee you that for any of these reforms, there are powerful interests who would be harmed by the reforms, and many people who don't want reform: this is more of a persuasion problem than an infrastructure problem. But yes, you're right that there are plenty of revolutionary ideas about how to reform, say, the education system: they're just not widely accepted enough to happen.

So you think there's a >10% chance that the world's problems are going to destroy humanity in the near future?

I'm confused by this sentence. I'm not sure if I think that, but what does it have to do with the hypothetical button that has a 10% chance of destroying humanity? My point was that it's worth taking a small risk of destroying humanity if the benefits are great enough.