Unknowns comments on Are there really no ghosts in the machine? - Less Wrong

Post author: kingmaker 13 April 2015 07:54PM




Comment author: Normal_Anomaly 13 April 2015 08:10:42PM  6 points

I think this is, at bottom, a restatement of "determining the right goals with sufficient rigor to program them into an AI is hard; ensuring that these goals are stable under recursive self-modification is also hard." If I'm right, then don't worry; we already know it's hard. Worry, if you like, about how to do it anyway.

In a bit more detail:

the most promising developments have been through imitating the human brain, and we have no reason to believe that the human brain (or any other brain for that matter) can be guaranteed to have a primary directive. One could argue that evolution has given us our prime directives: to ensure our own continued existence, to reproduce, and to cooperate with each other; but there are many people who are suicidal, who have no interest in reproducing, and who violently rebel against society (for example, psychopaths).

Evolution did a bad job. Humans were never given a single primary drive; we have many. If our desires were simple, AI would be easier, but they are not. So evolution isn't a good example here. Also, I'm not sure about your assertion that the best advances in AI so far came from mimicking the brain. The brain can tell us useful things as an example of various kinds of program (belief-former, decision-maker, etc.), but I don't think we've been mimicking it directly. As for machine learning, yes, there are pitfalls in using it to come up with the goal function, at least if you can't inspect the resulting goal function before you make it the goal of an optimizer. And making a potential superintelligence whose goal is to find [the thing you want to use as a goal function] might not be a good idea either.

Comment author: Unknowns 14 April 2015 02:54:18AM  0 points

It can easily be argued that evolution did a good job, not a bad one, by not giving us a "primary directive." AI is dangerous precisely because it might have such a directive; being an "optimizer" is exactly why one fears that an AI might destroy the world. So if anything, kingmaker is correct to think that, since human beings are like this, it is at least theoretically possible that AIs will be like this too, and that they will not destroy the world, for similar reasons.

Comment author: Luke_A_Somers 14 April 2015 03:36:50PM 0 points

If we had a simple primary directive, we would be fully satisfied by having a machine accomplish it for us, and it would be much easier to build a machine that would do it.