timtyler comments on A Primer On Risks From AI - Less Wrong

15 Post author: XiXiDu 24 March 2012 02:32PM




Comment author: timtyler 25 March 2012 12:45:27AM *  1 point [-]

The information-theoretic complexity of our values is very high, which means that it is highly unlikely for similar values to arise automatically in agents that are the product of intelligent design, agents that never underwent the millions of years of competition with other agents that equipped humans with altruism and general compassion.

But that does not mean that an artificial intelligence won't have any goals, just that those goals will be simple and their realization remorseless.

New York City is complex, yet it exists. Linux is complex, yet it exists. Something's lying in a tiny corner of a search space doesn't mean it isn't going to be hit.

Nobody argues that complex values will "automatically arise" in machines. They will be built in, much as car air bags were built in, or as safety features on blenders were built in.

Comment author: John_Maxwell_IV 25 March 2012 04:46:39AM 6 points [-]

NYC and Linux were built incrementally. We can't easily test a superintelligent AI's morality in advance of deploying it. And success is conjunctive: getting just one thing wrong means failure.