
Comment author: contravariant 16 May 2017 04:55:23PM *  0 points [-]

Why would they want to stop us from fleeing? It doesn't reduce their expansion rate, and we already established that we don't pose any serious threat to them. We would essentially be giving them a perfectly good planet and star, undamaged by war (we would probably have enough time to launch at least some nuclear missiles, which likely wouldn't harm them much but would wreck the ecosystem and make the planet ill-suited for colonization by biological life). Unless they're just sadistic and value the destruction of life as a final goal, I see no reason for them to care. Any planets and star systems colonized by the escaping humans would be taken just as easily as Earth, with only a minor delay.

Comment author: siIver 13 May 2017 04:50:47PM *  0 points [-]

Essentially:

Q: Evolution is a dumb algorithm, yet it produced halfway functional minds. How can it be that the problem isn't easy for humans, who are much smarter than evolution?

A: Evolution's output is not just one functional mind. Evolution put out billions of different minds, with only an extreme minority of them being functional. If we had a billion years of time and a trillion chances to get it right, the problem would be easy. Since we only have around 30 years and exactly one chance, the problem is hard.
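To make the number-of-chances point concrete, here is a toy calculation (the per-attempt success probability is an invented number, chosen only to show the shape of the argument): if each independent attempt at producing a functional mind succeeds with some tiny probability p, a trillion attempts make at least one success very likely, while a single attempt almost certainly fails.

    # Toy illustration; p is made up purely for the example.
    p = 1e-12                                   # hypothetical per-attempt success probability
    single_attempt = p                          # chance of success with exactly 1 chance
    trillion_attempts = 1 - (1 - p) ** 10**12   # chance of at least one success in 10^12 chances

    print(f"one chance:       {single_attempt:.1e}")    # ~1e-12
    print(f"trillion chances: {trillion_attempts:.3f}") # ~0.632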

Comment author: contravariant 13 May 2017 10:42:35PM *  0 points [-]

Evolution also had one chance, in the sense that the first intelligent species created would take over the world and reshape it very quickly, leaving no time for evolution to try any other mind-design. I'm pretty sure no other intelligent species will evolve by pure natural selection after humanity - unless it's part of an experiment run by humans. Evolution had a lot of chances to create a functional intelligence, but as for the friendliness problem, it had only one chance. The reason is that a faulty intelligence will die out soon enough, giving evolution time to design a better one, whereas a working paperclip maximizer is quite capable of surviving, reproducing, and eliminating any other attempts at intelligence.

Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world?

1 contravariant 13 May 2017 08:23AM

As far as AI designers go, evolution has to be one of the worst. It randomly changes the genetic code and then selects on the criterion of ingroup reproductive fitness - in other words, how well a being can reproduce and stay alive. That criterion says nothing about the goals of the being while it's alive.

To survive and to increase one's power are instrumentally convergent goals of any intelligent agent, which means that evolution does not select for any specific type of mind, ethics, or final values.
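A minimal sketch of such a selection loop (my own illustration, with a made-up stand-in for reproductive fitness): genomes are changed at random and ranked purely by a fitness score; nothing in the loop ever looks at what goals the resulting "organism" would pursue while alive.

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 32, 100, 200

    def fitness(genome):
        # Stand-in for "how well a being can reproduce and stay alive":
        # here, just the count of 1-bits. Goals and values never enter this score.
        return sum(genome)

    def mutate(genome, rate=0.01):
        # Random, undirected changes to the genetic code.
        return [bit ^ (random.random() < rate) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # Keep the fitter half, then refill with mutated copies of the survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

    print(max(fitness(g) for g in population))   # fitness climbs, but says nothing about values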

And yet, it created humans and not paperclip maximizers. True, humans rebelled against and overpowered evolution, but in the end we created amazing things rather than a universe tiled with paperclips (or DNA, for that matter).

Considering that neural network training and genetic algorithms are regarded as some of the most dangerous ways of creating an AI, the fact that natural evolution managed to create us, with all our goals of curiosity and empathy and love and science, would be a very unlikely coincidence if we assume that most AIs we could create are worthless in terms of their goals and what they will do with the universe. Did it happen by chance? The p-value is pretty small on this one.

Careless evolution managed to create humans on her first attempt at intelligence, yet humans, given foresight and intelligence, face an extreme challenge in making sure an AI is friendly? How can we explain this contradiction?

 

Comment author: Thomas 24 April 2017 01:59:18PM 2 points [-]

Evolution is smarter than you. The notion that it is a stupid process isn't justified.

Our intuition is misleading here once again. And not only evolution - some other processes outsmart us mortals as well.

Lenin was quite certain that his central planning would be far better than the chaotic merchant-buyer-peasant negotiations on a million marketplaces at once. He was wrong.

The computational power of the whole of biology is astounding. Eventually we may prevail, but never underestimate your opponent - especially not the Red Queen herself!

Comment author: contravariant 24 April 2017 06:17:45PM 0 points [-]

Evolution is smarter than you.

Could you qualify that statement? If I were given a full-time job to find the best way to increase some bacterium's fitness, I'm sure I could study the necessary microbiology and find at least some improvement well before evolution could. Yes, evolution created things that we don't yet understand, but then again, she had a planet's worth of processing power and seven orders of magnitude more time to do it - and yet we can still see many obvious errors. Evolution has much more processing power than me, sure, but I wouldn't say she is smarter than me. There's nothing evolution created over all its history that humans weren't able to overpower in an eyeblink of time. Things like lack of foresight and the inability to reuse knowledge or exchange it among species mean that most of this processing power is squandered.

Comment author: gworley 23 April 2017 08:24:30PM 2 points [-]

I like this because it helps answer the anthropics problem of existential risk, namely that we should not expect to find ourselves in a universe that gets destroyed, and more specifically that you should not find yourself personally living in a universe where the history of your experience is lost. I say this because it is evidence that we will likely avoid a failure in AI alignment that destroys us, or at least not find ourselves in a universe where AI destroys us all, because alignment will turn out to be easier in practice than we expect it to be in theory. That alignment seems necessary for this still makes it a worthy pursuit, since progress on the problem increases our measure, but it also fixes the problem of believing in the low-probability event of finding yourself in a universe where you don't continue to exist.

Comment author: contravariant 24 April 2017 10:50:53AM 1 point [-]

And if something as stupid as evolution (almost) solved the alignment problem, it would suggest that it should be much easier for humans.

Comment author: tukabel 22 April 2017 10:54:15PM 4 points [-]

Welcome to the world of Memetic Supercivilization of Intelligence... living on top of the humanimal substrate.

It appears in maybe less than a percent of the population and produces all these ideas/science and the subsequent inventions/technologies. This usually happens in a completely counter-evolutionary way, as the individuals responsible most of the time get very little profit (or even recognition) from it and would do much better (in evolutionary terms) to use their abilities a bit more "practically". Even the motivation is usually completely memetic: typically it goes along the lines of "it is interesting" to study something, think about this and that, or research some phenomenon or mystery.

Worse, they give this stuff away more or less for free and without any control to the ignorant mass of humanimals (especially those in power), empowering them far beyond their means, in particular beyond their ability to control and use these powers "wisely"... since they are governed by their DeepAnimal brain core and the resulting reward functions (that's why humanimal societies have functioned the same way for thousands and thousands of years - politico-oligarchical predators living off the herd of mental herbivores, with the help of mindfcukers, from ancient shamans, through the stone age religions like the catholibanic one, to the currently popular socialist religion).

AI is not a problem, humanimals are.

Our sole purpose in the Grand Theatre of the Evolution of Intelligence is to create our (first nonbio) successor before we manage to self-destruct. Nukes were already too much, and once nanobots arrive, it's over (worse than a DIY nuclear grenade for a dollar that any teenager or terrorist could assemble in a garage).

The Singularity should hurry up; there are maybe just a few decades left.

Do you really want to "align" AI with humanimal "values"? Especially when nobody knows what we are really talking about when using this magic word, let alone how to define it.

Comment author: contravariant 24 April 2017 10:48:55AM *  1 point [-]

Replies to some points in your comment:

One could say AI is efficient cross-domain optimization, or "something that, given a mental representation of an arbitrary goal in the universe, can accomplish it on the same timescale as humans or faster", but personally I think the "A" is not really necessary here, and we all know what intelligence is. It's the trait that evolved in Homo sapiens and let them take over the planet in an evolutionary eyeblink. We can't precisely define it, and the definitions I offered are only grasping at things that might be important.

If you think of intelligence as a trait of a process, you can imagine how many possible different things with utterly alien goals might get intelligence, and what they might use it for. Even the ones that would be a tiny bit interesting to us are just a small minority.

You may not care about satisfying human values, but I want my preferences to be satisfied, and I have a meta-value that we should make our best effort to satisfy the preferences of any sapient being. If we simply take the easiest-to-find thing that displays intelligence, the odds of that happening are next to none. It would eat us alive for the sake of a world of something that makes paperclips look beautiful in comparison.

And the prospect of an AI designed by the "Memetic Supercivilization" frankly terrifies me. A few minutes after an AI developer submits the last bugfix on github, a script kiddie thinks "Hey, let's put a minus in front of the utility function right here and have it TORTURE PEOPLE LULZ", and thus the world ends. I think this is something best left to a small group of people. Trusting that the emergent structure of society, which has undergone little Darwinian selection and has a spectacular history of failures over a pretty short timescale, would produce something good even for itself, let alone for humans, when handed such a dangerous technology, seems unreasonable.

Comment author: Qiaochu_Yuan 23 April 2017 08:45:25PM 9 points [-]

It's interesting to me that you identify with S2 / the AI / the rider, and regard S1 / the monkey / the elephant as external. I suspect this is pretty common among rationalists. Personally, I identify with S1 / the monkey / the elephant, and regard S2 / the AI / the rider in exactly the way your metaphor suggests - this sort of parasite growing on top of me that's useful for some purposes, but can also act in ways I find alien and that I work to protect myself from.

Comment author: contravariant 24 April 2017 10:18:23AM *  3 points [-]

But how can you use complex language to express your long-term goals, then, like you're doing now? Do you get (or trick) S2 into doing it for you?

I mean, S2 can be used by S1 - the clearest example would be someone addicted to heroin using S2 to invent reasons to take another dose. But it must be hard doing anything more long-term that way; you'd be giving up too much control.

Or is the concept of long-term goals itself also part of the alien thing you have to use as a tool? Your S2 must really be a good FAI :D

Comment author: fmgn 19 April 2017 09:17:50AM 1 point [-]

Masculinity isn't off-putting.

Comment author: contravariant 24 April 2017 09:50:23AM *  0 points [-]

That's a subjective value judgement from your point of view.

If you intend it to be more than that, you would have to explain why others shouldn't see it as off-putting.

Otherwise, I don't see how it contributes to the discussion beyond "there's at least one person out there who thinks masculinity isn't off-putting", which we already know - there are billions of examples.

In response to comment by bogus on Am I Really an X?
Comment author: math5 07 March 2017 06:46:14PM *  0 points [-]

What work is the word "really" actually doing here?

How about referring to the cluster structure of gender space? Of course, then we'd reach the conclusion that there are only two genders, and that the traditional assignment of people to them is the correct one.

Another way to think about this is to consider the analogous question of whether a jellyfish is "really" a fish.

In response to comment by math5 on Am I Really an X?
Comment author: contravariant 07 March 2017 07:51:23PM *  0 points [-]

I'm inherently suspicious of claims that the traditional idea is the right solution when they never question its justification; it seems too easy to fall for status quo bias.

But even ignoring this, I see your cluster structure and I raise you a disguised query. Is a human mind running on a computer really a "person"? Even though they don't have the human cells, human DNA, or human body that your typical person has? In fact, the only thing they share with a typical Homo sapiens is that their mind runs on the same algorithm. When the reason for the categorization is the status of people in society, the structure of the mind plays a dominant role above non-sentient organic matter. This is as relevant to "gender" as it is to "personhood".

Comment author: contravariant 31 December 2016 07:13:18PM 1 point [-]

It seems to me that it's extremely hard to think about sociology, especially where policies and social justice are concerned, without falling into this trap. When you consider a statistic about a group of people, "is this statistic accurate?" is almost instinctively put in the same bucket as "does this mean discriminating against this group is justified?" or even "are these people worth less?" - especially if you are a part of that group yourself. Now that you've explained it that way, it seems that understanding that this is what is going on is a good strategy to avoid being mindkilled by such discussions.

Though, in this case, it can still be a valid concern that others may be affected by this fallacy if you publish or spread the original statistic, so if it could pose a threat to a large number of people it may still be more ethical to avoid publicizing it. However, that is an ethical issue and not an epistemic one.
