kingmaker comments on Are there really no ghosts in the machine? - Less Wrong

Post author: kingmaker 13 April 2015 07:54PM




Comment author: Normal_Anomaly 13 April 2015 10:24:31PM 6 points

I never claimed that evolution did a good job, but I would argue that it gave us a primary directive: to further the human species.

No, it didn't. That's why I linked "Adaptation Executers, not Fitness Maximizers". Evolution didn't even "try to" give us a primary directive; it just increased the frequency of anything that worked on the margin. But I agree that we shouldn't rely on machine learning to find the right utility function.

Comment author: kingmaker 13 April 2015 10:44:49PM * 1 point

Only a pantheist would claim that evolution is a personal being, so it can't "try to" do anything. It is, however, a directed process, one that favors individuals who can better further the species.

But I agree that we shouldn't rely on machine learning to find the right utility function.

How would you suggest we find the right utility function without using machine learning?

Comment author: [deleted] 14 April 2015 10:26:45PM 1 point

How would you suggest we find the right utility function without using machine learning?

How would you find the right utility function using machine learning? With machine learning you have to have some way of classifying examples as good vs. bad, and that classifier is itself equivalent to the FAI problem.
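The point can be made concrete with a minimal sketch (all names here are hypothetical, invented for illustration): any supervised approach to learning a utility function needs a labeling function for its training data, and writing that labeling function correctly is the original value-specification problem all over again.

```python
def label_outcome(outcome: str) -> bool:
    """Classify an outcome as good or bad.

    Specifying this function correctly for all possible outcomes IS the
    hard part of the FAI problem -- a learner trained on its labels can
    only generalize whatever judgments are encoded here.
    """
    # Placeholder heuristic; any real implementation would have to
    # smuggle in a full specification of human values.
    return "everyone is happy" in outcome


def learn_utility_function(examples):
    """Fit a trivial 'utility function' from labeled examples.

    A real learner would generalize beyond the training set, but either
    way the result can be no safer than label_outcome itself.
    """
    return {o: (1.0 if label_outcome(o) else 0.0) for o in examples}


utilities = learn_utility_function([
    "everyone is happy and free",
    "the universe is converted to paperclips",
])
print(utilities["the universe is converted to paperclips"])  # 0.0
```

The sketch is circular by design: the "learned" utility function is only as trustworthy as the classifier it was trained against, which is the deleted commenter's point.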

Comment author: Normal_Anomaly 13 April 2015 10:52:25PM 0 points

How would you suggest we find the right utility function without using machine learning?

If I find out, you'll be one of the first to know.

Comment author: kingmaker 13 April 2015 11:05:40PM 2 points

The point I am making is that machine learning, though not provably safe, is the most effective way we can imagine of producing the utility function. It's very likely that many AIs are going to be created by this method, and if the failure rate is anywhere near as high as it is for humans, this could be very serious indeed. Some misguided person may attempt to create an FAI using machine learning, and then we may have the situation described in the H+ article.

Comment author: SolveIt 14 April 2015 02:59:53AM 6 points

Congratulations! You've figured out that UFAI is a threat!

Comment author: kingmaker 14 April 2015 05:02:04PM 1 point

That wasn't what I claimed; I proposed that the current, most promising methods of producing an FAI are far too likely to produce a UFAI to be considered safe.

Comment author: SolveIt 15 April 2015 01:12:06AM 2 points

Why do you think the whole website is obsessed with provably-friendly AI? The whole point of MIRI is that pretty much every superintelligence that is anything other than provably safe is going to be unfriendly! This site is littered with examples of how terribly almost-friendly AI would go wrong! We don't consider current methods "too likely" to produce a UFAI, we think they're almost certainly going to produce UFAI! (Conditional on creating a superintelligence at all, of course).

So as much as I hate asking this question because it's alienating, have you read the sequences?