timtyler comments on Open Thread, August 2010 - Less Wrong

4 Post author: NancyLebovitz 01 August 2010 01:27PM




Comment author: timtyler 03 August 2010 06:23:09PM  -1 points

Re: "The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there's an AI that's at all capable of such improvement, the AI will rapidly move outside our control."

If its creators are incompetent. Those who think this are essentially betting on the incompetence of the creators.

There are numerous counter-arguments - the shifting moral zeitgeist, the downward trend in deliberate death, the safety record of previous risky tech enterprises.

A stop button seems like a relatively simple and effective safety feature. If you can get the machine to do anything at all, then you can probably get it to turn itself off.

See: http://alife.co.uk/essays/stopping_superintelligence/

The creators will likely be very smart humans assisted by very smart machines. Betting on their incompetence is not a particularly obvious thing to do.

Comment author: JoshuaZ 03 August 2010 11:58:50PM 1 point

Missing the point. I wasn't arguing that there aren't reasons to think the bad-AI-goes-FOOM scenario won't happen. Indeed, I said explicitly that I didn't think it would occur. My point was that if one is going to make an argument that relies on that premise here, one needs to be aware that the premise is controversial and be clear about that (say, by giving basic reasoning for it, or even just saying "If one accepts that X, then..." etc.).