It believes that capturing the resources of the galaxy would neither increase its security nor further its objectives. It doesn't mind interfering with us, which it is implicitly doing by hiding its presence and presenting us with the Fermi paradox, which in turn influences our beliefs about the Great Filter.
For example, maybe they figured out how to convince it to accept some threshold of certainty (so it doesn't eat the universe to prove that it produced exactly 1,000,000 paperclips), in which case it could have achieved its terminal goal with a tiny amount of energy (less than one star's worth) and halted.
If (1) is true, the aliens should fear any type of life developing on other planets, because that life would greatly increase the complexity of the galaxy. My guess is that life on Earth has, for a very long time, done things to our atmosphere that would allow an advanced civilization to be aware that our planet harbors life.
This is actually a fairly healthy field of study. See, for example, Nonphotosynthetic Pigments as Potential Biosignatures.
The superintelligence could have been written to value-load based on its calculations about an alien (to its creators) superintelligence (what Bostrom refers to as the "Hail Mary" approach). This could cause it to value the natural development of alien biology enough to actively hide its activities from us.
...Think of the Federation's "Prime Directive" in Star Trek.
It may have discovered some property of physics which enabled it to expand more efficiently across alternate universes, rather than across space in any given universe. Thus it would be unlikely to colonize much of any universe (specifically, ours).
The most obvious and least likely possibility is that the superintelligence hasn't had enough time to colonize the galaxy (i.e. it was created very recently).
To take the obvious approach, let's calculate the expected value of each strategy. To start, let's try two-boxing:
(80/8000 × $1,000) + (7920/8000 × $1,001,000) = $991,000
Not bad. OK, how about one-boxing?
(3996/4000 × $1,000,000) + (4/4000 × $0) = $999,000
So one-boxing is the rational strategy (assuming you're seeking to maximize the amount of money you get).
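The arithmetic above can be checked in a few lines (a minimal sketch; the payoff frequencies are exactly the ones quoted in the calculations):

```python
# Expected value of two-boxing: 80 of 8000 two-boxers got only the $1,000;
# 7920 of 8000 got $1,001,000. Integer numerators keep the division exact.
ev_two_box = (80 * 1_000 + 7920 * 1_001_000) / 8000

# Expected value of one-boxing: 3996 of 4000 one-boxers got the $1,000,000;
# 4 of 4000 got nothing.
ev_one_box = (3996 * 1_000_000 + 4 * 0) / 4000

print(ev_two_box)               # 991000.0
print(ev_one_box)               # 999000.0
print(ev_one_box - ev_two_box)  # 8000.0
```

So the gap between the strategies is exactly $8,000 in expectation, which matters for the next point.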
However, this game has two interesting properties which, together, would make me consider two-boxing under certain exogenous circumstances. The first is that the difference between the two strategies' expected values is very small: only $8,000. If I have $990-odd thousand, I'm not going to be hung up on the last $8,000; in other words, money has diminishing marginal utility. The second, a corollary of the first, is that two-boxing guarantees the player at least $1,000, whereas one-boxing could leave the player with nothing. Again, because money has diminishing marginal utility, securing that first $1,000 may be worth the risk of not winning the million. If, for example, I needed some sum of money less than $1,000 to keep myself alive (with certainty), I would two-box in a heartbeat.
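The survival case can be made concrete with a sketch. Assume (hypothetically) a hard need of $1,000, below which any payoff means ruin; utility is then a step function, and the expected-utility ranking flips:

```python
def utility(payout, need=1_000):
    """All-or-nothing utility: 1 if the payout covers the need, else 0.
    The $1,000 need is a hypothetical threshold, not part of the game."""
    return 1 if payout >= need else 0

# Two-boxing guarantees at least $1,000, so every outcome covers the need.
eu_two_box = (80 * utility(1_000) + 7920 * utility(1_001_000)) / 8000

# One-boxing risks walking away with nothing (4 times in 4000).
eu_one_box = (3996 * utility(1_000_000) + 4 * utility(0)) / 4000

print(eu_two_box)  # 1.0
print(eu_one_box)  # 0.999
```

Under this step utility, two-boxing strictly dominates, even though it loses on expected dollars; that is exactly the "keep myself alive" case above.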
All that said, I would almost certainly one-box.
Nobody wants to hear that you will try your best. It is the wrong thing to say. It is like saying "I probably won't hit you with a shovel." Suddenly everyone is afraid you will do the opposite.
--Lemony Snicket, All the Wrong Questions
That's very unlikely. The universe is billions of years old, yet it would take only a few million years to colonize the galaxy, perhaps tens of millions if they aren't optimally efficient, but still a short time in the history of Earth.
I'm aware. Note that I did call it the "least likely possibility."