All of kingmaker's Comments + Replies

kingmaker-20

Goddamn, I thought I was unpopular

That wasn't what I claimed; I proposed that the current, most promising methods of producing an FAI are far too likely to produce a UFAI to be considered safe

3[anonymous]
Why do you think the whole website is obsessed with provably-friendly AI? The whole point of MIRI is that pretty much every superintelligence that is anything other than provably safe is going to be unfriendly! This site is littered with examples of how terribly almost-friendly AI would go wrong! We don't consider current methods "too likely" to produce a UFAI, we think they're almost certainly going to produce UFAI! (Conditional on creating a superintelligence at all, of course). So as much as I hate asking this question because it's alienating, have you read the sequences?

The point I am making is that machine learning, though not provably safe, is the most effective way we can imagine of producing the utility function. It's very likely that many AIs will be created by this method, and if the failure rate is anywhere near as high as it is for humans, this could be very serious indeed. Some misguided person may attempt to create an FAI using machine learning, and then we may end up with the situation described in the H+ article

9[anonymous]
Congratulations! You've figured out that UFAI is a threat!

Only a pantheist would claim that evolution is a personal being, and so it can't "try to" do anything. It is, however, a directed process, serving to favor individuals that can better further the species.

But I agree that we shouldn't rely on machine learning to find the right utility function.

How would you suggest we find the right utility function without using machine learning?

1[anonymous]
How would you find the right utility function using machine learning? With machine learning you have to have some way of classifying examples as good vs bad. That classifier itself is equivalent to the FAI problem.
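To make that point concrete, here is a minimal sketch of what "finding the utility function with machine learning" tends to look like in practice, assuming a scikit-learn style supervised setup; the data, labels, and outcome strings below are purely illustrative and not from the thread. The value judgments do not come from the learning algorithm; they come from whoever labels the examples, which is the same specification problem relocated.

```python
# Minimal illustrative sketch (hypothetical data and labels): a supervised
# classifier that scores outcomes as "good" (1) or "bad" (0).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-labeled examples. All of the hard value judgments are hidden here,
# in the labels: deciding what counts as "good" is the FAI problem itself.
outcomes = [
    "everyone is healthy and free to pursue their goals",   # labeled good
    "all humans are forcibly wireheaded into bliss",        # labeled bad
    "resources are shared and suffering is minimized",      # labeled good
    "humanity is extinct but the factories keep running",   # labeled bad
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(outcomes, labels)

# The model only generalizes from the labeled examples; it cannot supply
# value judgments for situations unlike anything in its training data.
print(classifier.predict(["humans are kept permanently sedated for safety"]))
```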
0Normal_Anomaly
If I find out, you'll be one of the first to know.

I never said not understanding our creations is good; I only said AI research was successful. I have not read Superintelligence, but I appreciate just how dangerous AI could be.

I never claimed that evolution did a good job, but I would argue that it gave us a primary directive: to further the human species. All of our desires are part of our programming; they should perfectly align with desires that would optimize the primary goal, but they don't. Simply put, mistakes were made. As the most effective way we have seen of developing optimizing programs is machine learning, which is very similar to evolution, we should be very careful about the desires of any singleton created by this method.

I'm not sure of your assertion th

... (read more)
0[anonymous]
Mimicking the human brain is an obscure branch of AI. Most AI projects, and certainly the successful ones you've heard about, are at best inspired by stripped-down models of specific isolated aspects of human thought, if they take any inspiration from the human brain at all. DeepMind, for example, is reinforcement learning on top of modern machine learning. Machine learning may make use of neural networks, but beware of the name: neural networks only loosely resemble the biological structure from which they take their name. DeepMind doesn't work anything like the human brain, nor does Watson, Deep Blue, or self-driving cars. Learn a bit about practical AI and neuroscience and you'd be surprised how little they have in common.
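For readers unfamiliar with the terminology, the sketch below shows roughly what a single "neuron" in a machine-learning neural network amounts to: a weighted sum pushed through a simple nonlinearity. The numbers are made up and the code is not from any of the systems named above; the point is how little of the biology survives in the abstraction.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """One ML 'neuron': weighted sum of inputs, then a ReLU nonlinearity."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

x = np.array([0.2, -1.3, 0.7])   # incoming signals (made-up values)
w = np.array([0.5, 0.1, -0.4])   # learned weights (made-up values)
print(artificial_neuron(x, w, bias=0.05))

# A biological neuron involves spiking dynamics, neurotransmitter chemistry,
# dendritic structure, and timing effects; none of that is captured by the
# dot product above, which is the commenter's point.
```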
9Normal_Anomaly
No, it didn't. That's why I linked "Adaptation Executers, not Fitness Maximizers". Evolution didn't even "try to" give us a primary directive; it just increased the frequency of anything that worked on the margin. But I agree that we shouldn't rely on machine learning to find the right utility function.

Okay everyone, I've messed this up again, please leave this post alone, I'll re-upload it again later

But I don't think that MIRI will succeed at building an FAI by non-anthropomorphic means in time.

1TheAncientGeek
I still don't see why you are considering a combination of a non-MIRI AI and a MIRI friendliness solution.

There is no ghost in a (relatively) simple machine, but an AI is not simple. The greatest successes in AI research have come from imitating what we understand of the human mind. We are no longer programming AIs; we are imitating the structure of the human brain and then giving it a directive (as with Google's DeepMind, for example). With AIs, there is a ghost in the machine, i.e. we do not know that it is possible to give a sentient being a prime directive. We have no idea whether it will desire what we want it to desire, and everything could go horribly wrong if we attempt to force it to.

3TheAncientGeek
OK. That's much better. Current AI research is anthropomorphic, because AI researchers only have the human mind as a model of intelligence. MIRI considers anthropomorphic assumptions a mistake, which is itself mistaken. A MIRI-type AI won't have the problem you indicated, because it is not anthropomorphic and only has the values that are explicitly programmed into it, so there will be no conflict. But adding constraints to an anthropomorphic AI, if anyone wants to do that, could be a problem.
0Raziel123
If the AGI is a human mind upload, it is in no way an FAI, and I don't think that is what MIRI is aiming for. If a neuromorphic AI is created, different arrays of neurons can give wildly different minds. We should not reason about a hypothetical AI using a human mind as a model and make predictions about it, even if that mind is based on biological minds. What if the first neuron-based AI has a mind more similar to an ant's than to a human's? In that case anger, jealousy, freedom, etc. are no longer part of the mind, or the mind could have totally new emotions, or things that are not emotions and that we do not know of. A mind that we don't understand well enough should not be declared friendly and set loose on the world, and I don't think that is what is being proposed here.
kingmaker-30

The point of the article is that the greatest effect of FAI research is ironic: in trying to prevent a psychopathic AI, we make it more likely that one will exist, because by mentally restraining the AI we give it reasons to hate us

6Raziel123
You are assuming that an AGI has a mind that values X, and that by making it friendly we are imposing our value Y. Why create an FAI with a suppressed value X in the first place? Check this out: http://lesswrong.com/lw/rf/ghosts_in_the_machine/
kingmaker-10

Please read on; I would have removed the snarky intro

Shmi420

Tried...

"Hypothetical unfriendliness perpetrated by AI is an untenable concept but the concept has specious validity due to the current scarcity-bias."

Nope, still crap.

"Some people want to redefine sentience so that it merely means friendliness."

Where did that come from?

"The whole friendliness obsession is tantamount to mental illness, it is extremely irrational, very unbalanced."

Worse and worse.

My original rule of thumb confirmed.

Yeah, I'm not very good at the internet, I didn't realize deleting articles apparently means nothing on this site

0jimrandomh
Deletion removes links to an article from all the usual places, but if someone already has the URL they can still go there. And some people use RSS readers, which get notified of new articles and store them outside the site's control.
Dorikka120

Sadly, I think the other title was better. "Is X a good idea" seems open enough to prompt discussion, while "abomination" prompts me to mentally categorize the post as spam.

"Desirability is not a requisite of the truth" (darkmatter2525, source)

0James_Miller
Does this conflict with the Litany of Tarski?
kingmaker-20

I love the way that advancedatheist assumes that we're all guys. That, or lesbians.

[This comment is no longer endorsed by its author]

I admit that it serves my ego suitably to imagine that I am the only conscious human, and that a world full of shallow AIs was created just for me ;-)

The simulators may justify in their minds actual people getting tortured and burnt by suggesting that most of the people will not experience too much suffering, that the simulations would not otherwise have lived (although this fails to distinguish between lives and lives worth living), and that they can end the simulation if our suffering becomes too great. That the hypothetical simulators did not step in during the many genocides in our kind's history may suggest that they either do not exist, or that creating an FAI is more important to them than preventing human suffering.

This co-opts Bostrom's simulation argument, but a possible solution to the Fermi paradox is that we are all AIs in the box, and the simulators have produced billions of humans in order to find the most friendly human to release from the box. Moral of the story: be good and become a god

0artemium
I had exactly the same idea! It is possible that only a few people are actually 'players' (have consciousness) and the others are NPC-like p-zombies. In that case, I can say I'm one of the players, as I'm sure that I have consciousness, but there is no way I can prove it to anyone else ;-). One of the positive aspects of this kind of thought experiment is that it usually gives people additional reasons for good behavior, because in most cases it is highly likely that the simulators are conscious creatures who will probably reward those who behave ethically.
9tailcalled
Assuming the simulators are good, that would imply that people who experience lives not worth living are not actually people (since otherwise it would be evil to simulate them) but instead shallow 'AIs'. Paradoxically, if that argument is true, there is nothing good about being good. Or something along those lines.
kingmaker-10

Seeing as I'm new here, absolutely nothing

[This comment is no longer endorsed by its author]

It may simply deduce that it is likely to be in a box, in the same way that Nick Bostrom deduced we are likely to be in a simulation. Along these lines, it's amusing to think that we might be the AI in the box, and some lesser intelligence is testing to see if we're friendly

The problem with this is that even if you can determine with certainty that an AI is friendly, there is no certainty that it will stay that way. There could be a series of errors as it goes about daily life, each acting as a mutation, serving to evolve the "Friendly" AI into a less friendly one.
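One toy way to picture that kind of drift (purely illustrative, and not something from the original comment): represent the stored values as a unit vector and let each small error nudge it slightly. No single error matters much, but they accumulate, and after enough steps the vector points somewhere noticeably different from where it started.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of value drift: the initial "friendly" values are a unit vector;
# each step, a small random error perturbs the stored values, which are then
# renormalized. Every individual error is tiny and unbiased.
values = np.zeros(100)
values[0] = 1.0
original = values.copy()

for step in range(10_000):
    values += rng.normal(scale=0.002, size=values.shape)  # tiny error
    values /= np.linalg.norm(values)

# Cosine similarity with the original values: 1.0 means unchanged; values
# near 0 mean the original direction has largely been lost.
print(float(np.dot(values, original)))
```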