
Incorrect comments on against "AI risk" - Less Wrong Discussion

24 Post author: Wei_Dai 11 April 2012 10:46PM


Comments (89)


Comment author: Incorrect 12 April 2012 09:09:42PM *  0 points

It isn't an amazing novel philosophical insight that type-1 agents 'love' to solve problems in the wrong way. It is a fact of life, apparent even in the simplest automated software of that kind.

Of course it isn't.

Let's just assume that Mister President sits on the nuclear launch button by accident, shall we?

There are machine learning techniques like genetic programming that can result in black-box models. As I stated earlier, I'm not sure humans will ever combine black-box problem solving techniques with self-optimization and attempt to use the product to solve practical problems; I just think it is dangerous to do so once the techniques become powerful enough.
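The "black box" point above can be illustrated with a minimal genetic-programming sketch. Everything here is hypothetical and illustrative, not from the discussion: the target function (f(x) = x² + x), the operator set, and the population/generation parameters are arbitrary choices, and "mutation" is simplified to refilling with fresh random trees. The evolved winner solves the fitness problem, but it arrives as a nested tuple of operations rather than anything a human designed or documented.

```python
import random

# Primitive operations the evolved trees may use.
OPS = {'add': lambda a, b: a + b,
       'mul': lambda a, b: a * b,
       'sub': lambda a, b: a - b}

def random_tree(depth=3):
    """Build a random expression tree; leaves are 'x' or a small constant."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(-2, 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate an expression tree at a given x."""
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Lower is better: squared error against the target f(x) = x*x + x."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

random.seed(0)
population = [random_tree() for _ in range(200)]
for _ in range(30):
    population.sort(key=fitness)
    # Keep the best half; refill with fresh random trees
    # (a crude stand-in for mutation/crossover).
    population = population[:100] + [random_tree() for _ in range(100)]

best = min(population, key=fitness)
# 'best' is an opaque nested tuple, e.g. ('mul', 'x', ('add', 'x', 1)):
# it optimizes the fitness score without offering any human-readable rationale.
print(best, fitness(best))
```

The point of the sketch is that the optimizer only ever sees the fitness number; nothing in the loop rewards solutions that are inspectable or that solve the problem the way its author intended.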

Comment author: Dmytry 12 April 2012 09:16:20PM *  0 points

There are machine learning techniques like genetic programming that can result in black-box models.

Which are even more prone to outputting crap solutions, even without being superintelligent.

Comment author: Incorrect 12 April 2012 09:18:12PM *  1 point

Yup, we seem safe for the moment because we simply lack the ability to create anything dangerous.

Sorry you're being downvoted. It's not me.

Comment author: Dmytry 12 April 2012 09:25:56PM *  1 point

Yup, we seem safe for the moment because we simply lack the ability to create anything dangerous.

Actually, your scenario already happened... the Fukushima reactor failure: they used computer modelling to simulate the tsunami. It was the 1960s, computers were science woo, and if the computer said so, then it was true.

For more subtle cases though - see, the problem is the substitution of an 'intellectually omnipotent, omniscient entity' for the AI. If the AI tells people to assassinate a foreign official, nobody's going to do that; it would have to start the nuclear war via the butterfly effect, and that's pretty much intractable.

Comment author: Incorrect 12 April 2012 09:29:34PM *  0 points

For more subtle cases though - see, the problem is the substitution of an 'intellectually omnipotent, omniscient entity' for the AI. If the AI tells people to assassinate a foreign official, nobody's going to do that; it would have to start the nuclear war via the butterfly effect, and that's pretty much intractable.

I would prefer our only line of defense not be "most stupid solutions are going to look stupid". It's harder to recognize stupid solutions in, say, medicine (although there we can verify with empirical data).

Comment author: Dmytry 12 April 2012 09:46:20PM *  0 points

It is unclear to me, though, that artificial intelligence adds any risk there that isn't already present from natural stupidity.

Right now, look: so many plastics around us, food additives, and other novel substances. Rising cancer rates even after controlling for age. With all the testing, when you have a hundred random things, a few bad ones will slip through. Or obesity. This (idiotic solutions) is a problem with technological progress in general.

edit: actually, our all-natural intelligence is very prone to quite odd solutions. Say: reproductive drive, secondary sex characteristics, yadda yadda; end result, cosmetic implants. Desire to sell more product; end result, overconsumption. Etc etc.