For the hypothesis to hold, AI needs to: 1. Kill its creators efficiently. 2. Not spread. 3. Do both of these things every time any AI is created, with a near-100% success rate.
That seems like a lot of presumptions, with no good arguments for any of them?
On the other hand, I don't see why an AI that does spread can't be a great filter. Let's assume: 1. Every advanced civilization creates AI soon after inventing radio. 2. Every AI spreads immediately (hard takeoff) at near the speed of light. 3. Every AI that reaches us immediately kills us. 4. We have not seen any AI, and we are still alive. This can only be explained by the anthropic principle: every advanced civilization with even slightly more advanced neighbors is already dead, while every advanced civilization with slightly less advanced neighbors has not seen them, because those neighbors have not yet invented radio. This solves the Fermi paradox, and we can still hope to see some primitive life forms on other planets. (Also, an AI may be approaching us at the speed of light and could wipe us out any moment now.)
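The anthropic argument above can be made concrete with a toy Monte Carlo sketch. All parameters here (number of civilizations, size of the line, the shared "present" time) are made up for illustration; the model assumes points 1-3: each civilization launches a light-speed killer wavefront the moment it invents radio. The punchline is that radio signals travel at the same speed as the wavefront, so any observer still alive "now" necessarily sees an empty sky.

```python
import random

random.seed(42)

def simulate(n_civs=10, size=10_000.0, t_max=1_000.0, c=1.0):
    """Toy 1D model of assumptions 1-3: civilization i appears at
    position x and invents radio at time t, immediately launching a
    killer-AI wavefront expanding at speed c in both directions.
    A civilization is alive at the 'present' t_max only if no other
    civilization's wavefront has reached it by then."""
    civs = [(random.uniform(0, size), random.uniform(0, t_max))
            for _ in range(n_civs)]
    survivors = [
        i for i, (xi, ti) in enumerate(civs)
        if not any(tj + abs(xi - xj) / c <= t_max
                   for j, (xj, tj) in enumerate(civs) if j != i)
    ]
    return civs, survivors

civs, survivors = simulate()

# Radio signals travel at the same speed c as the wavefront, so every
# surviving observer's sky is silent: any neighbor whose signal could
# have arrived by t_max would also have killed them by t_max.
for i in survivors:
    xi, _ = civs[i]
    silent = all(tj + abs(xi - xj) > 1_000.0
                 for j, (xj, tj) in enumerate(civs) if j != i)
    print(f"civilization {i}: alive and sees an empty sky = {silent}")
```

Because signal and wavefront share the same speed, the "sees an empty sky" check is true for every survivor by construction, which is exactly the observation-selection effect being claimed.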
The major falsifiable prediction is that reminding people of their own mortality will cause them to increase the strength of their psychological terror defense mechanisms, which may include culture, religion, or social ties. Here is a literature review of the subject from 2010. According to the review, the theory hasn't been falsified:
"MS [mortality salience, i.e. death reminders] yielded moderate effects (r=0.35) on a range of worldview- and self-esteem-related dependent variables"
Can it predict something real/measurable?
Maybe an AI existed in the Galaxy and halted for some internal reason, but left behind self-replicating remnants which are only partly intelligent and so unable to fall into the same trap. Their behavior would look absurd to us, which is why we can't find them.
Adding unneeded assumptions does not make a hypothesis more likely. Simply halting, without leaving any half-intelligent offspring, explains the observations just as well, if not better.
Crazy idea: what if we are an isolated people, and the solution to the Fermi paradox is that aliens have made contact with Earth, but our fellow humans have decided to keep this information from us? Yes, this seems extremely unlikely, but so do all other solutions to the Fermi paradox.
Then why would they even contact those few people?
But it is unethical to allow all the suffering that occurs on our planet.
Compared to what alternative?
An alien AI is using a SETI-attack strategy, but to convince us to bite, and to be sure that we have computers powerful enough to run its code, it makes its signal very subtle and complex, so it is not easy to find. We haven't found it yet, but soon will. I wrote about SETI attacks here: http://lesswrong.com/lw/gzv/risks_of_downloading_alien_ai_via_seti_search/
An alien AI exists in the form of nanobots everywhere (including my room and body), but they do not interact with us and hide from microscopy.
They are berserkers and will be triggered to kill us if we cross some unknown threshold, most likely the creation of AI or nanotech.
2 may include 3.
This looks far-fetched, but it's an interesting strategy. Does it ever occur in nature? I.e., do any predators wait for their prey to become stronger/smarter before luring them into a trap?
I guess they could, but to what end?
Why wait?
What do you guys think of the theories of Ernest Becker, and of the more modern terror management theory?
The basic argument is that many, if not all, human behaviors are the result of our knowledge of our own mortality and our instinct to deny and forget it in order to seek some kind of literal or symbolic immortality (by living forever, writing a famous book, etc.). Religion exists to tell us that it's okay to die; culture exists to make us forget about religion, so we don't examine it too closely or think about death in general (it can still be scary because the evolutionary instinct to survive is still there). This definition of "culture" applies to nearly every domain of human achievement, including this very website. Less Wrong exists to raise users' self-esteem or self-concept so that they feel some security in a symbolic kind of immortality (I'm rational, I'm a transhumanist; of course I'll survive!)
I suggest you read the links I gave above for further explanation. There have also been scientific studies on mortality salience (here's a review) which seem to support the theory.
Freud said it's all because we want to f* our mothers.
Big difference.
You don't know how much money is in my wallet. I do. You have no evidence, and you don't have a means to detect it, but it doesn't mean there is no evidence to be had.
That third little star off the end of the Milky Way may be a gigantic alien beacon transmitting a spread-spectrum welcome message, but we just haven't identified it as such, or spent time trying to reconstruct the message from the spread-spectrum signal.
We see it. We record it at observatories every night. But we haven't identified it as a signal, nor decoded it.
It seems you have an uncommon understanding of what the word "evidence" means. Evidence is a piece of information, not some physical thing.
If this is true, we should find ourselves surprisingly early in the history of the Universe. But consider that the frequency of gamma-ray bursts is quickly diminishing, so we could not be very early: earlier, there were too many planet-killing gamma-ray bursts. These two tendencies may cancel each other out, and we are just in time.
Also, what if intelligent life is just a rare event? Not rare enough to explain the Fermi paradox by itself, but rare enough that we could be considered among the earliest, and therefore surprisingly early in the history of the universe? Given how long the universe will last, we actually are quite early: https://en.wikipedia.org/wiki/Timeline_of_the_far_future