BaconServ comments on MIRI strategy - Less Wrong

5 Post author: ColonelMustard 28 October 2013 03:33PM

Comments (94)

Comment author: BaconServ 28 October 2013 07:12:11PM 0 points [-]

Is "bad publicity" worse than "good publicity" here? If strong AI became a hot political topic, it would raise awareness considerably. The fiction surrounding strong AI should bias the population towards understanding it as a legitimate threat. Each political party in turn will have their own agenda, trying to attach whatever connotations they want to the issue, but if the public at large started really worrying about uFAI, that's kind of the goal here.

Comment author: ChristianKl 28 October 2013 07:27:05PM 4 points [-]

Politically people who fear AI might go after companies like google.

but if the public at large started really worrying about uFAI, that's kind of the goal here.

I don't think that the public at large is the target audience. The important thing is that the people who could potentially build an AGI understand that they are not smart enough to contain it.

If you have a lot of people making bad arguments for why uFAI is a danger, smart MIT people might just say, "Hey, those people are wrong; I'm smart enough to program an AGI that does what I want."

I mean, take a topic like genetic engineering. There are valid dangers involved in genetic engineering. On the other hand, the people who think that all genetically modified food is poisonous are wrong. As a result, a lot of self-professed skeptics and atheists see it as their duty to defend genetic engineering.

Comment author: BaconServ 28 October 2013 07:37:04PM 0 points [-]

Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers? If uFAI is popularized, academia will pretty much be forced to address the issue seriously. Ideally, this is something we'll only need to do once; after it's known and taken seriously, the people who work on AI will be under intense pressure to ensure they're avoiding the dangers here.

Google probably already has an AI (and AI-risk) team internally that they've simply had no reason to publicize. If uFAI becomes a widespread worry, you can bet they'd make it known they were taking precautions of their own.

Comment author: ChristianKl 28 October 2013 08:09:43PM 2 points [-]

Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers?

Letting plants grow their own pesticides to kill off the things that eat them sounds to me like a bad strategy if you want healthy food. It makes things much easier for the farmer, but it doesn't sound like a road we should go down.

I wouldn't want to buy such food in the supermarket, but I have no problem buying genetically modified food that adds extra vitamins.

Then there are various issues with introducing new species, with monocultures, and with bioweapons.

after it's known and taken seriously, the people who work on AI will be under intense pressure to ensure they're avoiding the dangers here.

The whole work is dangerous. Safety is really hard.

Comment author: Desrtopa 28 October 2013 09:07:46PM 3 points [-]

Letting plants grow their own pesticides to kill off the things that eat them sounds to me like a bad strategy if you want healthy food. It makes things much easier for the farmer, but it doesn't sound like a road we should go down.

This is more or less the opposite of what we actually use genetic engineering of crops for. Production of pesticides isn't something that plants were incapable of until we started tinkering with their genes; it's something they've been doing for hundreds of millions of years. Plants in nature have to deal with tradeoffs between producing their own natural pesticides and using their biological resources for other things, such as more rapid growth, greater drought resistance, etc. In general, genetically engineered plants actually have less innate pest resistance, which farmers then compensate for by spraying pesticides onto them, because it allows them to trade off that natural pesticide production for faster growth.

Comment author: fubarobfusco 29 October 2013 04:54:02PM *  1 point [-]

In general, genetically engineered plants actually have less innate pest resistance, which farmers then compensate for by spraying pesticides onto them, because it allows them to trade off that natural pesticide production for faster growth.

ChristianKl may be thinking of Bt corn (maize) and, for instance, the Starlink corn recall. Bt corn certainly does express a pesticide, namely Bacillus thuringiensis toxin.

Comment author: Lumifer 28 October 2013 08:42:52PM 1 point [-]

Letting plants grow their own pesticides to kill off the things that eat them sounds to me like a bad strategy if you want healthy food.

As a matter of evolutionary biology, plants have been doing this for many millions of years and are quite good at making poisons.

Comment author: TheOtherDave 28 October 2013 09:07:31PM 0 points [-]

Letting plants grow their own pesticides to kill off the things that eat them sounds to me like a bad strategy if you want healthy food.

Somewhat tangentially: does it sound like a better or a worse strategy than not letting plants do this, and growing the plants in an environment where external pesticides are regularly applied to them?

(This really is a question about GMOs, not some kind of oblique analogical question about AIs.)

Comment author: BaconServ 28 October 2013 08:53:46PM -2 points [-]

Letting plants grow their own pesticides to kill off the things that eat them sounds to me like a bad strategy if you want healthy food.

Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn't someone in the field be more aware of that and other potential dangers, despite the GE FUD they've no doubt encountered outside academia? It seems like the FUD should just motivate them to understand the risks even more, if for no other reason than to correct people's misconceptions on the issue.

Your reasoning for why the "bad" publicity would have severe (or even notable) repercussions isn't apparent.

If you have a lot of people making bad arguments for why uFAI is a danger, smart MIT people might just say, "Hey, those people are wrong; I'm smart enough to program an AGI that does what I want."

This just doesn't seem very realistic when you consider all the variables.

Comment author: ChristianKl 28 October 2013 09:23:36PM -1 points [-]

Is there reason to believe someone in the field of genetic engineering would make such a mistake?

Because those people do engineer plants to produce pesticides? The Bt potato was the first, approved by the FDA in 1995.

The commercial incentives that exist encourage the development of such products. A customer in a store doesn't see whether a potato is engineered to have more vitamins, and he doesn't see whether it's engineered to produce pesticides.

He just buys a potato. It's cheaper to grow potatoes that produce their own pesticides than to grow potatoes that don't.

In the case of potatoes it might be harmless. We don't eat the greens of the potato anyway, so why worry if the greens carry additional poison? But you can slip up. Biology is complicated. You could have changed something that also causes the poison to be produced in the edible parts.

It seems like the FUD should just motivate them to understand the risks even more

It's not a question of motivation. Politics is the mind-killer: if a topic gets political, people on all sides of the debate get stupid.

This just doesn't seem very realistic when you consider all the variables.

According to Eliezer, it takes strong math skills to see how an AGI can modify its own utility function and is therefore dangerous. Eliezer made the point that it's very difficult to explain to people who are invested in their AGI design that it's dangerous, because that part requires complicated math.

It's easy to say in the abstract that some AGI might become uFAI, but it's hard to do that assessment for any individual proposal.