Comment author: kingmaker 27 August 2015 03:14:44PM -2 points [-]

Goddamn, I thought I was unpopular

Comment author: SolveIt 14 April 2015 02:59:53AM 6 points [-]

Congratulations! You've figured out that UFAI is a threat!

Comment author: kingmaker 14 April 2015 05:02:04PM 1 point [-]

That wasn't what I claimed; I proposed that the current, most promising methods of producing an FAI are far too likely to produce a UFAI to be considered safe.

Comment author: Normal_Anomaly 13 April 2015 10:52:25PM 0 points [-]

How would you suggest we find the right utility function without using machine learning?

If I find out, you'll be one of the first to know.

Comment author: kingmaker 13 April 2015 11:05:40PM 2 points [-]

The point I am making is that machine learning, though not provably safe, is the most effective way we can imagine of producing the utility function. It's very likely that many AIs will be created by this method, and if the failure rate is anywhere near as high as it is for humans, this could be very serious indeed. Some misguided person may attempt to create an FAI using machine learning, and then we may have the situation described in the H+ article.

Comment author: Normal_Anomaly 13 April 2015 10:24:31PM 6 points [-]

I never claimed that evolution did a good job, but I would argue that it gave us a primary directive: to further the human species.

No, it didn't. That's why I linked "Adaptation Executers, not Fitness Maximizers". Evolution didn't even "try to" give us a primary directive; it just increased the frequency of anything that worked on the margin. But I agree that we shouldn't rely on machine learning to find the right utility function.

Comment author: kingmaker 13 April 2015 10:44:49PM *  1 point [-]

Only a pantheist would claim that evolution is a personal being, so of course it can't "try to" do anything. It is, however, a directed process, favoring individuals that better further the species.

But I agree that we shouldn't rely on machine learning to find the right utility function.

How would you suggest we find the right utility function without using machine learning?

Comment author: shminux 13 April 2015 08:55:05PM 2 points [-]

We are no longer designing an AI from scratch and then implementing it; we are creating a seed program that learns from its situation and alters its own code without human intervention, i.e. the machines are starting to write themselves (e.g. Google's DeepMind).

Arguably, not knowing in detail how your creation works is a detriment, not a boon. This point has been raised multiple times, most recently by Bostrom in Superintelligence, I believe. Consider reading it.

Comment author: kingmaker 13 April 2015 08:59:11PM *  1 point [-]

I never said not understanding our creations is good; I only said AI research was successful. I have not read Superintelligence, but I appreciate just how dangerous AI could be.

Comment author: Normal_Anomaly 13 April 2015 08:10:42PM *  6 points [-]

I think this is at bottom a restatement of "determining the right goals with sufficient rigor to program it into an AI is hard; ensuring that these goals are stable under recursive self-modification is also hard." If I'm right, then don't worry; we already know it's hard. Worry, if you like, about how to do it anyway.

In a bit more detail:

the most promising developments have been through imitating the human brain, and we have no reason to believe that the human brain (or any other brain, for that matter) can be guaranteed to have a primary directive. One could argue that evolution has given us our prime directives: to ensure our own continued existence, to reproduce, and to cooperate with each other; but there are many people who are suicidal, who have no interest in reproducing, and who violently rebel against society (psychopaths, for example).

Evolution did a bad job. Humans were never given a single primary drive; we have many. If our desires were simple, AI would be easier, but they are not, so evolution isn't a good example here.

Also, I'm not sure about your assertion that the best advances in AI so far came from mimicking the brain. The brain can tell us useful things as an example of various kinds of program (belief-former, decision-maker, etc.), but I don't think we've been mimicking it directly.

As for machine learning: yes, there are pitfalls in using it to come up with the goal function, at least if you can't look over the resulting goal function before you make it the goal of an optimizer. And making a potential superintelligence whose goal is to find [the thing you want to use as a goal function] might not be a good idea either.
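The point above about looking over a learned goal function before handing it to an optimizer can be sketched with a toy example. Everything here is my own invention for illustration, not from the thread: a "learned" proxy that matches the true preference on its training range but extrapolates badly, and an optimizer that searches well outside that range.

```python
# True preference: values of x near 5 are best.
def true_value(x):
    return -(x - 5) ** 2

# Hypothetical machine-learned proxy: agrees with true_value on the
# training range 0..10, but extrapolates badly outside it.
def learned_value(x):
    return -(x - 5) ** 2 if 0 <= x <= 10 else x  # flaw outside training data

# An optimizer that simply maximizes whatever value function it is given,
# over a search space much wider than the training range.
def optimize(value_fn, candidates):
    return max(candidates, key=value_fn)

candidates = range(-100, 101)
print(optimize(true_value, candidates))     # the outcome we actually want: 5
print(optimize(learned_value, candidates))  # the optimizer exploits the flaw
```

Inspecting `learned_value` only on its training range would not reveal the problem; the failure appears exactly where the optimizer pushes hardest.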

Comment author: kingmaker 13 April 2015 08:41:52PM *  1 point [-]

I never claimed that evolution did a good job, but I would argue that it gave us a primary directive: to further the human species. All of our desires are part of our programming; they should perfectly align with desires that would optimize the primary goal, but they don't. Simply put, mistakes were made. Since the most effective way we have seen of developing optimizing programs is machine learning, which is very similar to evolution, we should be very careful about the desires of any singleton created by this method.
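The "machine learning is very similar to evolution" analogy can be made concrete with a minimal evolutionary search (the code and toy fitness function are mine, for illustration only): the loop keeps whatever mutation scores at least as well on the fitness measure, and no explicit directive is ever written into the individual itself.

```python
import random

def evolve(fitness, genome_len=20, generations=200, seed=0):
    """Hill-climbing caricature of evolution: mutate, keep what works."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        child = genome[:]
        i = rng.randrange(genome_len)
        child[i] ^= 1  # point mutation
        if fitness(child) >= fitness(genome):  # selection on the margin
            genome = child
    return genome

# Fitness rewards ones, so the evolved genome ends up mostly ones --
# yet nothing in the loop or the genome states that goal explicitly.
best = evolve(sum)
print(sum(best))
```

As with evolution, the "directive" lives only in the selection pressure; what the evolved artifact actually does off-distribution is a separate question.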

I'm not sure of your assertion that the best advances in AI so far came from mimicking the brain.

Mimicking the human brain is fundamental to much AI research: DeepMind says on its website that it employs computational neuroscientists, and companies such as IBM are very interested in whole brain emulation.

Comment author: kingmaker 13 April 2015 07:45:37PM 0 points [-]

Okay everyone, I've messed this up again, please leave this post alone, I'll re-upload it again later

Comment author: TheAncientGeek 12 April 2015 09:34:26PM *  2 points [-]

OK, that's much better. Current AI research is anthropomorphic, because AI researchers have only the human mind as a model of intelligence. MIRI considers anthropomorphic assumptions a mistake, which is itself mistaken.

A MIRI-type AI won't have the problem you indicated, because it is not anthropomorphic and has only the values that are explicitly programmed into it, so there will be no conflict.

But adding constraints to an anthropomorphic AI, if anyone wants to do that, could be a problem.

Comment author: kingmaker 12 April 2015 09:56:03PM 1 point [-]

But I don't think that MIRI will succeed at building an FAI by non-anthropomorphic means in time.

Comment author: Raziel123 12 April 2015 08:53:33PM 4 points [-]

You are assuming that an AGI has a mind that values X, and that by making it friendly we are imposing our value Y. Why create an FAI with a suppressed value X in the first place?

Check this out: http://lesswrong.com/lw/rf/ghosts_in_the_machine/

Comment author: kingmaker 12 April 2015 09:08:38PM 1 point [-]

There is no ghost in a (relatively) simple machine, but an AI is not simple. The greatest successes in AI research have come from imitating what we understand of the human mind. We are no longer programming AIs; we are imitating the structure of the human brain and then giving it a directive (as with Google's DeepMind). With AIs, there is a ghost in the machine, i.e. we do not know whether it is possible to give a sentient being a prime directive. We have no idea whether it will desire what we want it to desire, and everything could go horribly wrong if we attempt to force it to.

Comment author: ChristianKl 12 April 2015 08:42:36PM 1 point [-]

So you prefer a future without humans because the price of doing what's necessary to have a world with humans is too high to pay?

Comment author: kingmaker 12 April 2015 08:44:19PM *  -1 points [-]

The point of the article is that the greatest effect of FAI research is ironic: in trying to prevent a psychopathic AI, we are making it more likely that one will exist, because by mentally restraining the AI we are giving it reasons to hate us.
