Many Worlds against Simulation?

2 Coacher 07 March 2016 01:50PM

Let's assume a few things:

1. Many Worlds is real.

2. All identical consciousnesses measure as 1 in anthropics. So if we have a set of consciousnesses consisting of 1xA, 1xB, and 1000000xC, there is still a 1/3 chance of perceiving being C.

Now say some intelligent being (e.g. a human) starts another human-brain simulation on a silicon chip. The operations it performs are all discrete, so even though the chip splits into many chips across many worlds, the simulated consciousness itself remains just 1 (because of assumption #2).

But that is not true for the human who started the simulation, as he differs somehow in every Everett branch and quickly diverges into billions of different consciousnesses.

Is there some mistake in the reasoning that, given these assumptions, real persons should heavily outweigh simulations, no matter how many simulations are running?
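Assumption #2 above can be sketched as a toy calculation (a hypothetical illustration, not from the original post): identical copies collapse to a single unit of anthropic measure, so a million copies of C carry no more weight than the single A.

```python
def anthropic_probability(observers, target):
    """Probability of perceiving being `target` under assumption #2:
    identical consciousnesses count as a single unit of measure,
    no matter how many copies of each exist."""
    distinct = set(observers)  # identical copies collapse to one
    return 1 / len(distinct) if target in distinct else 0.0

# 1 copy of A, 1 copy of B, 1,000,000 copies of C
population = ["A"] + ["B"] + ["C"] * 1_000_000
print(anthropic_probability(population, "C"))  # 1/3, not 1000000/1000002
```

Under naive copy-counting the answer would instead be 1000000/1000002; the entire argument of the post turns on which of these two counting rules is right.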

Comment author: turchin 04 March 2016 10:07:07AM 1 point [-]

If this is true, we should find ourselves surprisingly early in the history of the Universe. But consider that the frequency of gamma-ray bursts is quickly diminishing, so we also could not be very early, because there were so many planet-killing gamma-ray bursts. These two tendencies may cancel each other out, leaving us just in time.

Comment author: Coacher 04 March 2016 12:45:16PM 0 points [-]

Also, what if intelligent life is just a rare event? Not rare enough to explain the Fermi paradox by itself, but rare enough that we could be considered among the earliest, and therefore surprisingly early in the history of the universe? Given how long the universe will last, we actually are quite early: https://en.wikipedia.org/wiki/Timeline_of_the_far_future

Comment author: Coacher 04 March 2016 08:49:55AM 1 point [-]

For the hypothesis to hold, AIs need to: 1. Kill their creators efficiently. 2. Not spread. 3. Do both of these things every time any AI is created, with a near-100% success rate.

Doesn't that seem like a lot of presumptions, with no good arguments for any of them?

Comment author: Coacher 04 March 2016 09:13:11AM 1 point [-]

On the other hand, I don't see why an AI that does spread can't be a great filter. Let's assume:

1. Every advanced civilization creates AI soon after inventing radio.

2. Every AI spreads immediately (hard takeoff) and does so at near light speed.

3. Every AI that reaches us immediately kills us.

4. We have not seen any AI, and we are still alive.

This can only be explained by the anthropic principle: every advanced civilization with even slightly more advanced neighbors is already dead, and every advanced civilization with slightly less advanced neighbors has not seen them, since those neighbors have not yet invented radio. This solves the Fermi paradox, and we can still hope to see some primitive life forms on other planets. (Also, an AI may be approaching us at the speed of light and could wipe us out at any moment.)
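The assumptions above can be turned into a toy Monte Carlo sketch (a hypothetical model with made-up numbers, purely illustrative): if the earlier AI always kills the later civilization, then conditioning on our survival guarantees we observe an empty sky.

```python
import random

random.seed(42)

def sample_survivors(n=100_000):
    """Toy model of the comment's assumptions: each civilization has one
    neighbor; both invent radio at a uniformly random time, build AI
    immediately (hard takeoff), and the earlier AI kills the later civ."""
    survivors = 0
    empty_sky = 0
    for _ in range(n):
        t_us = random.random()    # when we invent radio
        t_them = random.random()  # when the neighbor invents radio
        if t_them < t_us:
            continue  # their AI reached us first: we are filtered out
        survivors += 1
        # We survived, so t_us < t_them: when we started listening,
        # the neighbor had not yet invented radio. The sky looks empty.
        empty_sky += 1
    return survivors, empty_sky

s, e = sample_survivors()
print(s, e)  # every surviving observer sees an empty sky (s == e)
```

The anthropic twist is in the conditioning: roughly half of all civilizations die, but 100% of the ones still around to ask the question see nothing, which matches assumption 4.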

Comment author: argella42 01 March 2016 10:18:41PM 2 points [-]

The major falsifiable prediction is that reminding people of their own mortality will cause them to increase the strength of their psychological terror defense mechanisms, which may include culture, religion, or social ties. Here is a literature review of the subject from 2010. According to the review, the theory hasn't been falsified:

"MS [mortality salience, i.e. death reminders] yielded moderate effects (r=0.35) on a range of worldview- and self-esteem-related dependent variables"

Comment author: Coacher 02 March 2016 09:50:21AM 0 points [-]

Can it predict something real/measurable?

Comment author: turchin 01 March 2016 11:28:00AM 0 points [-]

Maybe the AI existed in the Galaxy and halted for some internal reason, but left behind self-replicating remnants which are only partly intelligent and so unable to fall into the same trap. Their behavior would look absurd to us, and that is why we can't find them.

Comment author: Coacher 01 March 2016 06:23:49PM 0 points [-]

Adding extra unneeded assumptions does not make a hypothesis more likely. Just halting, without leaving any partly-intelligent offspring, explains the observations just as well, if not better.

Comment author: James_Miller 27 February 2016 03:03:34PM *  0 points [-]

Crazy idea: what if we are an isolated people, and the solution to the Fermi paradox is that aliens have made contact with Earth, but our fellow humans have decided to keep this information from us? Yes, this seems extremely unlikely, but so do all other solutions to the Fermi paradox.

Comment author: Coacher 01 March 2016 06:18:06PM 0 points [-]

Then why would they even contact those few people?

Comment author: RedErin 01 March 2016 03:26:20PM -1 points [-]

But it is unethical to allow all the suffering that occurs on our planet.

Comment author: Coacher 01 March 2016 06:06:55PM 1 point [-]

Compared to what alternative?

Comment author: turchin 27 February 2016 07:01:23PM *  2 points [-]
  1. Alien AI is using a SETI-attack strategy, but to convince us to bite, and also to be sure that we have very powerful computers able to run its code, it makes its signal very subtle and complex, so it is not easy to find. We haven't found it yet, but we soon will. I wrote about SETI attack here: http://lesswrong.com/lw/gzv/risks_of_downloading_alien_ai_via_seti_search/

  2. Alien AI exists in the form of alien nanobots everywhere (including my room and body), but they do not interact with us and try to hide from microscopy.

  3. They are berserkers and will be triggered to kill us if we cross some unknown threshold, most likely the creation of AI or nanotech.

2 may include 3.

Comment author: Coacher 01 March 2016 06:03:13PM 0 points [-]
  1. This looks far-fetched, but it is an interesting strategy. Does it ever occur in nature? I.e., do any predators wait for their prey to become stronger/smarter before luring them into a trap?

  2. I guess they could, but to what end?

  3. Why wait?

Comment author: argella42 01 March 2016 03:04:08PM 2 points [-]

What do you guys think of the theories of Ernest Becker, and of the more modern terror management theory?

The basic argument is that many, if not all, human behaviors are the result of our knowledge of our own mortality and our instinct to deny and forget it in order to seek some kind of literal or symbolic immortality (by living forever, or writing a famous book, etc.). Religion exists to tell us that it's okay to die, and culture exists to make us forget about religion, so we don't examine it too closely or think about death in general (it can still be scary because the evolutionary instinct to survive is still there). This definition of "culture" applies to nearly every domain of human achievement, including this very website. Less Wrong exists to raise users' self-esteem or self-concept so that they feel some security in a symbolic kind of immortality (I'm rational, I'm a transhumanist--of course I'll survive!)

I suggest you read the links I gave above for further explanation. There have also been scientific studies on mortality salience (here's a review) which seem to support the theory.

Comment author: Coacher 01 March 2016 05:31:26PM -1 points [-]

Freud said it's all because we want to f* our mothers.
