Why would they want to stop us from fleeing? It doesn't reduce their expansion rate, and we already established that we don't pose any serious threat to them. We would essentially be handing them a perfectly good planet and star, undamaged by war (whereas if we stayed and fought, we would probably have enough time to launch at least some nuclear missiles, which likely wouldn't harm them much but would wreck the ecosystem and leave the planet ill-suited for colonization by biological life). Unless they're simply sadistic and value the destruction of life as a final goal, I see no reason for them to care. Any planets and star systems colonized by the escaping humans would be taken just as easily as Earth, with only a minor delay.
Evolution also had one chance, in the sense that the first intelligent species it created would take over the world and reshape it very quickly, leaving no time for evolution to try any other mind-design. I'm pretty sure no other intelligent species will evolve by pure natural selection after humanity - unless it's part of an experiment run by humans. Evolution had a lot of chances to create a functional intelligence, but on the friendliness problem it had only one chance. The reason being, a faulty intelligence will die out soon enough...
Evolution is smarter than you.
Could you qualify that statement? If I were given a full-time job of finding the best way to increase some bacterium's fitness, I'm sure I could study the necessary microbiology and find at least some improvement well before evolution could. Yes, evolution created things that we don't yet understand, but then again, she had a planet's worth of processing power and seven orders of magnitude more time to do it - and yet we can still see many obvious errors. Evolution has much more processing power than me, sure, but I wouldn't say she...
And if something as stupid as evolution (almost) solved the alignment problem, that suggests it should be much easier for humans.
Replies to some points in your comment:
One could say AI is efficient cross-domain optimization, or "something that, given a mental representation of an arbitrary goal in the universe, can accomplish it on the same timescale as humans or faster", but personally I think the "A" is not really necessary here, and we all know what intelligence is. It's the trait that evolved in Homo sapiens and let them take over the planet in an evolutionary eyeblink. We can't precisely define it, and the definitions I offered are only grasping at things t...
But how can you use complex language to express your long-term goals, then, like you're doing now? Do you get/trick S2 into doing it for you?
I mean, S2 can be used by S1 - the clearest example would be someone addicted to heroin using S2 to invent reasons to take another dose. But it must be hard to do anything more long-term that way; you'd be giving up too much control.
Or is the concept of long-term goals itself also part of the alien thing you have to use as a tool? Your S2 must really be a good FAI :D
That's a subjective value judgement from your point of view.
If you intend it to be more than that, you would have to explain why others shouldn't see it as off-putting.
Otherwise, I don't see how it contributes to the discussion beyond "there's at least one person out there who thinks masculinity isn't off-putting", which we already know; there are billions of examples.
It seems to me that it's extremely hard to think about sociology, especially where policy and social justice are concerned, without falling into this trap. When you consider a statistic about a group of people, "is this statistic accurate?" gets put in the same bucket as "does this mean discriminating against this group is justified?" or even "are these people worth less?" almost instinctively - especially if you are a part of that group yourself. Now that you've explained it that way, it seems that understanding that this is what's going...
People who require help can be divided into those who are capable of helping themselves and those who are not. A position like yours expresses the value preference that sacrificing the good of the latter group is better than letting the former group get unearned rewards - in all cases. For me it's not that simple: the choice depends on the relative sizes of the two groups, the cost to me and society, and just how much good is being sacrificed. To take an extreme example, I would save someone's life even if doing so encourages other people to be less careful in protecting theirs.
While you can't fool your logical brain, you don't need to in order to hold a false belief that makes you happy. The brain is compartmentalized, and learning a fact often doesn't update what feels intuitively true to you, or what you base your actions on. The sentence "You can't know the consequences of being biased, until you have already debiased yourself" strikes me as the hardest to believe. Reading about a bias and considering its consequences, especially in an academic frame of mind, does NOT debias you. That requires applying it t...
The Curse of Downregulation: its sufferers can never live "happily ever after", for anything that gives them joy, done often enough, becomes mundane and boring. Someone afflicted could have the great luck to earn a million a day, and after a year they will be filled with despair and envy of their neighbor who makes two million, no happier than they would be in poverty.
What are you implying here? It's clear that *we*, or at least *you*, exist, in the sense that the computation of our minds is being performed and inputs are being given to it. We can also say (with slightly less certainty) that observable external physical objects such as atoms exist, because the evolution of their states from one Planck instant to the next is being performed even when we're not observing it - if the easiest way to get from observation t1 to observation t2 is by computing all the intermediate...