If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[-][anonymous]340

Would anybody listen to an almost-almost-molested woman? It's not awful or anything, and I do not need it; I would probably feel down regardless of that thing, but still. Please.

Edit: thank you a lot. I have written to some of the people who agreed to listen, and if anything rough happens to anybody here, I could listen, too.

[-]Elo210

I volunteer. But some of the people on the slack would be better suited than me.

[-][anonymous]180

Thank you, I'll PM you soon.

(FYI, this was almost certainly downvoted by Eugene_Nier's sockpuppets rather than actual people. I upvoted; hopefully others will as well to counteract the trolling.)

I'll listen, too.

jsteinhardt is right about the resident troll who downvotes maliciously.

Yeah, I volunteer too, although it's not clear to me how I could be of help.

[-][anonymous]160

Thank you. Two people have already reached out, and I am very grateful for that. (Will edit the post now.) May I ask you for advice on what to tell relatives who ask "And what would you have done if..."? Their ifs are usually so bad that I cannot come up with anything other than "Well, in that case, I'd probably be drifting down the Dnipro river right now". I mean, I feel like I'm supposed to either give them some totally cool rational answer or burst into sobs, and I can do neither on the fly. If you could invent something really wise, that would help a lot.

Well, I guess it depends very much on the situation you are asked about.
Generally speaking, I think those are idle questions: unless you're trained like a Special Forces operator, in a crisis situation you will usually behave instinctively. And if the situation is bad enough, then you'll die, yes, like anyone else in that situation.
But why do they ask the question? Is this something you are particularly likely to face? Like a job-related risk or a well-known danger in the place you live? Because in that case, I would put much more effort into thinking and training on how to cope.
If, on the other hand, those are idle questions, then I think they deserve an idle answer: "I don't know exactly, but I'm a grown woman, somehow I'll manage."

[-][anonymous]20

As far as I can tell, it's just healthy sportsmanlike interest. Like, "I went to the shop and a drunkard threw his bottle at me, so I waited for a bus on my way back." - "AHA! And what would you have done if there were THREE of them, and they had KNIVES?!" - "..."

That doesn't seem very healthy, unless it's said in humour. What's the point of drilling on impossible circumstances?

[-][anonymous]00

I don't know, but my relatives and their guests do it all the time.

PSA:

I just realized that /u/Elo's posts haven't been showing up in /r/Discussion because of all the downvoting from Eugene_Nier's sockpuppet accounts. So, I've gone back to read through the sequence of posts they're in the middle of. You may wish to do the same.

Meta:

I was going to leave this as a comment on Filter on the way in, Filter on the way out..., but I figured it's different enough to stand on its own. It's also mostly a corollary though, and just links Elo's post to existing ideas without saying much new, and so probably isn't worth its own top-level post. This isn't likely to be actionable either, since I basically come to the conclusion that it's ok to take down the Chesterton Fence that LW has already long ago taken down.

This might be a good comment to skim rather than read, since the examples are mostly there to pin down precisely what I'm getting at, and you're likely already familiar with them. I've divided this into sections for easy skimming. I'm posting only because I thought the connections were small but interesting insights.

Also meta: this took about 2.5 hrs to write and edit.

TL;DR of Elo’s “Filter on the way in, Filter on the way out...” post:

Elo proposes that nerd culture encourages people to apply tact to anything they hear, and so it becomes less necessary to tiptoe around sensitive issues for fear of being misunderstood. Nerds have a tact filter between their ears and brain, to soften incoming ideas.

"Normal" culture, on the other hand, encourages people to apply tact to anything they say, and so it becomes less necessary to constantly look for charitable interpretations, for fear of a misunderstanding. Non-nerds have a tact filter between their brain and mouth, to soften outgoing ideas.

They made several pretty diagrams, but they all look something like this:

speaker’s brain -> [filter] -> speaker’s mouth -> listener’s ears -> [filter] -> listener’s brain

The thing I want to expand Elo’s idea to cover:

What's going on in someone's head when they encounter something like the trolley problem, and say "you can't just place a value on a human life"? EAs sometimes get backlash for even weighing the alternatives. Why would anyone refuse to even engage with the problem, and merely empathize with the victims? After all, the analytic half of our brains, not the emotional parts, is what solves such problems.

I propose that this can be thought of as a tact filter for one's own thoughts. If that's not clear, let me give a couple of rationalist examples of the sort of thing I think is going on in people's heads, to help triangulate meaning:

  • HPMOR touches on this a couple times with McGonagall. She avoids even thinking of disturbing topics.

  • Some curiosity stoppers/semantic stopsigns are due to avoiding asking one’s self unpleasant questions.

  • The idea of separate magisteria comes from an aversion to thinking critically about religion.

  • Several biases and fallacies. The just-world fallacy is a result of an aversion to more accurate mental models.

  • Politics is the mindkiller, so I'll leave you to come up with your own examples from that domain. Identity politics is especially rife with examples.

Filter on the way in, Filter on the way out, Filter while in, Filter while out:

So, I propose that Elo’s model can be expanded by adding this:

Some subcultures encourage people to apply tact to anything they think, and so it becomes less necessary to constantly filter what we say, for fear of a misunderstanding. Such people have a tact filter between different parts of their brain, to filter the internal monologue.

That corollary doesn't add much that hasn't already been discussed to death on LW. However, we can phrase things in such a way as to put people at ease, and encourage them to relax their internal and/or outgoing filters, while maintaining their incoming filter. Adapting Elo's model to capture this, we get:

future speaker's thought -> [filter] -> speaker's cached thoughts -> [filter] -> speaker's mouth -> listener's ears -> [filter] -> listener's thoughts -> [filter] -> past listener's cached thoughts

Note that both the speaker and the listener have internal filters. We can think or hear something, and then immediately reject it for being horrible, even if it’s true.
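As a toy illustration of that expanded model, here is a minimal sketch of the four filters as composed functions. The filter behaviour and example strings are purely illustrative assumptions on my part, not anything taken from Elo's post:

```python
# Toy sketch of the four-filter pipeline above. Each stage may soften or reject
# a message; all names and example transformations here are illustrative assumptions.

def speaker_internal_filter(thought: str) -> str:
    # "Filter while in": the speaker softens their own raw thought before caching it.
    return thought.replace("is obviously wrong", "might be wrong")

def outgoing_filter(cached_thought: str) -> str:
    # "Filter on the way out": tact applied between brain and mouth.
    return "I could be misreading this, but " + cached_thought

def incoming_filter(utterance: str) -> str:
    # "Filter on the way in": the listener interprets charitably.
    return utterance.replace("I could be misreading this, but ", "")

def listener_internal_filter(heard: str) -> str:
    # "Filter while in" on the listener's side: reject ideas that feel too harsh.
    return heard if "wrong" not in heard else "[thought rejected before caching]"

message = "your plan is obviously wrong about the base rates"
for stage in (speaker_internal_filter, outgoing_filter,
              incoming_filter, listener_internal_filter):
    message = stage(message)
print(message)  # shows what finally reaches the listener's cached thoughts
```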

Ideally, everyone would avoid filtering their own ideas internally, but apply tact when speaking and listening, and then strip any filters from memes they encounter while unpacking them. Without this model, perhaps our endorsing the removal of the two internal filters was tearing down a bit of a Chesterton's Fence.

However, with the other two filters firmly in place, we should be able to safely remove the internal filters in both the speaker's and the listener's thoughts. If the listener believes the filter between the speaker's brain and their mouth is clouding information transfer, they might even ask for Crocker's rules. This is dangerous though, since removing the redundant backup leaves only their own ear->brain filter as a single point of failure.

Practical applications:

To encourage unconstrained thinking in others, perhaps we can vocally strip memes passed to us of obfuscating tact if there is a backup filter in place and if we’ve already shown that we agree with the ideas. (If we don’t agree, obviously this would look like an attack on their argument, and would backfire.)

That sounds like something out of the boring advice repository, but providing social proof is probably much more powerful than merely telling people that they shouldn’t filter their internal monologue. It probably doesn’t feel like censorship from the inside. If we want to raise the sanity waterline, we’ll have to foster cultures where we all provide positive reinforcement for each other’s good epistemic hygiene.

[-]Elo80

I have now added:

Later edit for clarification: I don't like the Nerd|Normal dichotomy because those words have various histories and baggage associated with them, so I renamed them (Stater, listener, Launch filter, Landing filter). "Normal" is pretty unhelpful when trying to convey a clear decision about what's good or bad.

to the post.

future speaker's thought -> [filter] -> speaker's cached thoughts -> [filter] -> speaker's mouth -> listener's ears -> [filter] -> listener's thoughts -> [filter] -> past listener's cached thoughts

I can agree with this model, but the potential implications get more confusing the more moving parts there are. And not only more confusing, but harder to predict, and harder to imagine someone who is missing one or another filter. In reality there are lots of filters, even different ones for different people we talk to (the grandma filter). What can or should be done about it? Iunno.

What’s going on in someone’s head when they encounter something like the trolley problem, and say “you can’t just place a value on a human life”?

Maybe: "Here is someone who is practicing excuses for killing people, using fictional scenarios. Is this some kind of wannabe killer, exploring the terrain to find out under which circumstances would his actions be socially acceptable? I'd better explain him that this approach wouldn't work here."

This seems plausible to me. Also compare "torture vs. dust specks" (intended as a thought experiment about aggregating disutility over hypothetical people) with "the ticking bomb scenario" (intended as an actual justification for actual societies developing torture practices for actually torturing actual people).

I'm not sure that it is so much a cultural thing as it is a personal deal. Popular dudes who can always get more friends don't need to filter other people's talky-talky for tact. Less cool bros have to put up with a lot more, and with "Your daddy loves us and he means well..." kind of stuff. Not just filtering, but positively translating.

I would say this is about status. People filter what they say to high-status individuals, but don't bother filtering what they say to low-status individuals.

Nerd culture is traditionally low-status in the context of the whole society, and meritocratic inside. That means that nerds are used to hearing non-filtered things from outsiders, and don't have strong reasons to learn filtering when speaking with insiders. Also, it is more complicated for aspies to understand when and why exactly the filters should be used, so it is easier to have a norm of not having filters.

(And I suspect that the people complaining most about the lack of filters would often be those who want to be treated as high-status by the nerd community, without having the necessary skills and achievements.)

Good point. It definitely does vary person-to-person, so I probably should have used individual terminology instead of "culture".

I haven't updated all the way in that direction, though. I wouldn't be surprised if certain cliques, subcultures, and cultures show significant variance in how strong each of their 4 filters is on average. I'd bet that LW is outside the mean, but we could easily be an outlier. We're too small to push the average for "nerd culture" or anything else very far, and it's certainly possible that, after correcting for confounders, it would turn out not to be a cultural thing at all.

What’s going on in someone’s head when they encounter something like the trolley problem, and say “you can’t just place a value on a human life”?

Deontological ethics. Also see Sophie's Choice.

Can someone sketch me the Many-Worlds version of what happens in the delayed choice quantum eraser experiment? Does a last-minute choice to preserve or erase the which-path information affect which "worlds" decohere "away from" the experimenter? If so, how does that go, in broad outline? If not, what?

The key to understanding this kind of experiment under the MWI is to remember that if you erase (or never register) information specific to one of the branches of the superposition, then you can effectively merge two branches just as easily as you split them.

The delayed-choice quantum eraser is too complex for me to analyze now (it has a lot of branches), but the point is that when you make two branches interact, if those branches have not acquired information about their path, then they can be constructively re-merged.

Even though it's been quite a few years since I attended any quantum mechanics courses, I did do a talk as an undergraduate on this very experiment, so I'm hoping that what I write below will not be complete rubbish. I'll quickly go through the double slit experiment, and then try to explain what's happening in the delayed choice quantum eraser and why it happens. Disclaimer: I know (or knew) the maths, but our professors did not go to great lengths explaining what 'really' happens, let alone what happens according to the MWI, so my explanation comes from my understanding of the maths and my admittedly more shoddy understanding of the MWI. So take the following with a grain of salt, and I would welcome comments and corrections from better informed people! (Also, the names for the different detectors in the delayed choice explanation are taken from the wikipedia article)

In the normal double slit experiment, letting through one photon at a time, the slit through which the photon went cannot be determined, as the world-state when the photon has landed could have come from either trajectory (so it's still within the same Everett branch), and so both paths of the photon were able to interfere, affecting where it landed. As more photons are sent through, we see evidence of this through the interference pattern created. However, if we measure which slit the photon goes through, the world states when the photon lands are different for each slit the photon went through (in one branch, a measurement exists which says it went through slit A, and in the other, through slit B). Because the end world states are different, the two branch-versions of the photon did not interfere with each other. I think of it like this: starting at a world state at point A, and ending at a world state at point B, if multiple paths of a photon could have led from A to B, then the different paths could interfere with each other. In the case where the slit the photon went through is known, the different paths could not both lead to the same world state (B), and so existed in separate Everett branches, unable to interfere with each other.

Now, with the delayed choice: the key is to resist the temptation to take the state "signal photon has landed, but idler photon has yet to land" as point B in my above analogy. If you did, you'd see that the world state can be reached by the photon going through either slit, and so interference inside this single branch must have occurred. But time doesn't work that way, it turns out: the true final world states are those that take into account where the idler photon went. And so we see that the world state where the idler photon landed in D1 or D2 could have been reached whichever slit the photon went through, and so both on D0 (for those photons) and on D1/D2, we end up seeing interference patterns, as we're still within a single branch, so to speak (when it comes to this limited interaction, that is). Whereas in the case where the idler photon reaches D3, that world state could only have been reached by the photon going through one particular slit, and so the trajectory of the photon did not interfere with any other trajectory (since the other trajectory led to a world state where the idler photon was detected at D4, a separate branch).

So going back to my point A/B analogy: imagine three world states A, B, and C as points on a page, with straight lines representing different hypothetical paths a photon could take. You can see that if two paths lead from point A to point B, the lines would lie on top of each other, meaning a single branch, and the paths would interfere. But if one path led to point B and the other to point C, they would not lie on top of each other; they go into different branches, and so the paths would not interfere.
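To make the "same world state means interference" point concrete, here is a minimal numerical sketch. The amplitudes and the phase are made-up illustrative values, not numbers from the actual experiment:

```python
import numpy as np

# Made-up, equal-magnitude amplitudes for "went through slit A" and "went through slit B"
# arriving at one point on the screen; the relative phase is what produces fringes.
phase = 2.0  # illustrative phase difference between the two paths
amp_A = (1 / np.sqrt(2)) * np.exp(1j * 0.0)
amp_B = (1 / np.sqrt(2)) * np.exp(1j * phase)

# Both paths end in the same final world state (no which-path record):
# add amplitudes first, then square -> the cross term (interference) survives.
p_same_world_state = abs(amp_A + amp_B) ** 2

# Which-path information recorded (the final world states differ, separate branches):
# each branch contributes its own probability, and the cross term is gone.
p_separate_branches = abs(amp_A) ** 2 + abs(amp_B) ** 2

print(p_same_world_state)   # varies with the phase (fringes)
print(p_separate_branches)  # always 1.0 here, independent of the phase (no fringes)
```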

Belated thanks to you and MrMind, these answers were very helpful.

*Longtime lurker; I've managed to fight akrasia and a genuine shortage of time to put my thoughts down into a post. I think it does deserve a post, but I don't have the karma or the confidence to create a top-level post.

Comments and feedback are really welcome and desired: I've gotten tired of being intellectually essentially alone.*

There are many urgent problems in the world: increasing drug resistance in pathogens, declining populations of endangered species, a rise in fundamentalism, a rapid increase in lifestyle diseases, increasing inequality, and so on. Yet Anthropogenic Global Warming (AGW) should be considered the defining crisis for humanity.

The key difference is that solutions to the other problems can be either technological or economic. Even solutions to fundamentalism usually involve development and infrastructure deployment, which is a difficult, but known, process. Declining populations of endangered species look harder to address, but it's rarely land itself that's in short supply. Better monitoring and surveillance can greatly reduce poaching.

AGW, however, requires, in addition to technological and economic solutions, solutions at the social, psychological, and, fundamentally, human level. Now this sounds true only in a wishy-washy way, so I'm going to back it up point by point.

AGW of course requires technological solutions. The main emitters of GHGs are the electricity generation, industry, and transport sectors. All of these use fossil fuels for energy, so we need low-carbon sources of energy. We also waste a lot of energy in various ways, and it's a lot cheaper to not consume a unit of energy than to generate a green unit of energy, so energy efficiency is needed.

Low-carbon energy sources are relatively mature with well-known costs. Aside from certain rare metal shortages, a worldwide rollout of low-carbon energy sources is eminently possible.

Energy-efficient generation, transmission, and consumption devices can allow us to reduce the energy consumed by half, at most. There are hard limits to the effectiveness of energy efficiency, so deployment of new low-carbon energy is quite necessary.

Basically, there are technological solutions, but they require a gigantic capital outlay, which passes the problem on to the economic sector. AGW of course also requires economic solutions, primarily because of that massive capital outlay. Since low-carbon energy is more expensive than fossil-fuel energy, end-consumers will have to pay the costs in some form or another. This would have to be accounted for by economic growth.

More insidious, however, is the economy's requirement for continuous expansion in order to remain stable. Also, economic growth and energy-consumption growth are tightly linked – more energy made available creates economic value, and more affluent consumers demand more energy-intensive goods and services.

So then, we would need a way to have economic growth without carbon-intensive energy growth – a process called decoupling. One fundamental problem with substituting low-carbon energy sources for fossil fuels is the sheer scale of the operation. Indeed, "even if solar panels are as cheap as dinner plates and 100% efficient, it would still take a few decades to replace current energy sources with solar"[citation needed]. So clearly, we need to reduce energy consumption in the meantime.

Relative decoupling means the emissions intensity of the economy decreases – higher economic value generated per joule of energy consumed. This kind of decoupling means nothing for AGW if economic growth continues. Absolute decoupling means decreasing absolute energy consumed and increasing economic value. So far, this stage is proving elusive.
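A toy numerical illustration of the difference (the growth rates below are assumed purely for illustration): with relative decoupling the energy intensity of the economy falls, but total energy use can still grow as long as GDP grows faster.

```python
# Assumed, purely illustrative rates: 3% annual GDP growth, 2% annual fall in energy intensity.
gdp_growth = 0.03
intensity_decline_relative = 0.02  # energy per unit of GDP falls by this much per year
intensity_decline_absolute = 0.04  # a faster decline, for comparison

# Energy use = GDP * (energy per unit of GDP), so its growth rate combines the two:
energy_growth_relative = (1 + gdp_growth) * (1 - intensity_decline_relative) - 1
energy_growth_absolute = (1 + gdp_growth) * (1 - intensity_decline_absolute) - 1

print(f"Relative decoupling only: energy use still grows {energy_growth_relative:+.1%}/yr")
print(f"Absolute decoupling:      energy use changes by  {energy_growth_absolute:+.1%}/yr")
```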

Unfortunately, economic growth is the only reason countries around the world can have stable societies. The moment economic growth stops, basically chaos breaks out.

This is essentially an allocation problem. We have all the technology and resources to ensure basic necessities for everyone, but

  1. We have already allocated resources in a highly imbalanced manner
  2. We don’t know how to deliver resources to only the resource-poor

Compound this with the fact that when more resources enter the economy, those who already have resources tend to accumulate more of them. Notice, however, that I am taking no moral position on inequality, except that today we do not want poor people to die from lack of resources, and the cost of basic resources keeps rising (again due to economic growth).

The problem should be clear by now: we require an increasing amount of resources to simply keep the economy running.

Rectifying this requires us to get mentally used to the fact that things don't just keep getting better every year. And that's neither a technological problem, nor an economic one. It's psychological.

Anthropogenic Global Warming (AGW) should be considered the defining crisis to humanity

Why? It's not an existential threat. Even at the projected levels of warming by, say, 2100, it's not much more than an inconvenience.

I assume you're talking about around 4 degrees of warming under business-as-usual conditions?

To pick the most important effect, it's going to impact agriculture severely. Even if irrigation can be managed, untimely heavy rain will still damage crops. And such rains can't be prevented from affecting crops, unless you build giant roofs.

If you are saying that all these effects can be defended against, I agree. But the key point is that our entire economy is built on a lot of things being extremely cheap. Erecting a giant roof over all sensitive cropland is far less technically challenging than launching a geostationary satellite, but we do the latter and not the former because of the sheer size. It's basically an anchoring effect: now that we expect basic necessities to cost so little, we are unable to cope with a world where they are even slightly more expensive.

If we could overcome these issues and embark on such large-scale projects, then AGW would be an easily solved problem! As you said, it's just an inconvenience.

What's with the downvoting?

[-][anonymous]00

Edit: sorry, I assumed you're new here. Apologies.

To start with, I don't agree that land shortage is not one of the worst threats to endangered species. Your reasoning is mostly suited to megafauna, but won't exactly fit colonial birds, rainforest dwellers, lots of plants, etc. (Your argument doesn't actually rely on AGW being the worst problem, so you can jettison the beginning.)

Am I right that you consider the first obstacle to solving AGW to be the lack of coordination between nations? In this case, what direction should the solvers choose?

I argue that AGW is the worst because it is the only one that hits at very deep-seated human assumptions that may well be genetic/inherent.

The first obstacle to solving AGW, even before coordination, is anchoring: we assume that everything must only get better, and that nothing ever gets worse. Further, a lot of systems are built on the assumption that there will always be a continuously expanding material economy. This is like the case where becoming slightly more rational, starting from a point of relatively complete irrationality, is likely to make one less effective: a cluster of irrational beliefs support each other, and removing one exposes the others. Similarly, AGW directly impinges on several hidden irrationalities of both humans and economies, and that's why everyone is dragging their feet -- not because they're all totally evil.

[-][anonymous]10

Just a note, posting here because some people might have participated in something similar (if so, what were your impressions?):

...Unfortunately, at the all-Soviet [mathematics] Olympiads they failed to implement another idea, which A. N. Kolmogorov put forward more than once: to begin one of the days of the contest with a lecture on a previously unfamiliar, to the participants, topic and then offer several problems which would build on the ones considered during the lecture. (Such an experiment was successfully carried out only once […] in 1985 […] the lecture was on geometric probabilities.)

N.Vasilyev, A.A.Yegorov. Problems of All-Soviet mathematical olympiads. - Moscow, 1988. - p. 14 of 286.

Applying probabilistic thinking to fears about terrorism in this piece for the 16th-largest newspaper in the US, which reaches over 320K readers with its printed version and gets over 5 million hits on its website per month. The title was chosen by the newspaper, and somewhat occludes the points. The article is written from a liberal perspective to play into the newspaper's general bent, and its main point was to convey the benefits of applying probabilistic thinking to evaluating political reality.

Edit: Updated somewhat based on a conversation with James Miller here.

Is anyone in a position to offer some criticism (or endorsement) of the work produced at Gerwin Schalk's lab?

http://www.schalklab.org/

I attended a talk given by Dr. Schalk in April 2015, where he described a new method of imaging the brain, which appeared to be a better-resolution fMRI (the image in the talk was a more precise image of motor control of the arm, showing the path of neural activity over time). I was reminded of it because Dr. Schalk spent quite a bit of time emphasizing doing the probability correctly and optimizing the code, which seemed relevant when the recent criticism of fMRI software was published.

[-][anonymous]00

If nomads united into large hordes to go to war, shouldn't the change in the number of men living together have had some noticeable psychological effect on the warriors? I mean, Wikipedia says that "Dunbar's number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships", and surely they had to co-work with far more people than during peacetime?

You don't "maintain stable social relationships" with the whole horde, you maintain them with your small unit. And the military hierarchy exists so that people have to coordinate only a limited number of entities: if you're a grunt, you need to coordinate only with people immediately around you; if you're a mid-level officer you coordinate a limited number of platoons, if you're a general you coordinate a limited number of regiments.

A soldier does not meaningfully "co-work" with the whole army.
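A toy calculation of how a hierarchy keeps everyone's direct contacts far below Dunbar's number regardless of the horde's size; the horde size and span of control are assumed numbers for illustration:

```python
import math

horde_size = 10_000      # assumed total size of the horde
span_of_control = 10     # assumed direct subordinates per leader

# In a notional flat group, everyone would in principle deal with everyone else:
flat_contacts = horde_size - 1

# In a hierarchy, each person deals with roughly one superior, their direct
# subordinates, and their immediate peers, regardless of the total size:
hierarchy_contacts = 1 + span_of_control + (span_of_control - 1)

# Levels of hierarchy needed to cover the whole horde:
levels = math.ceil(math.log(horde_size, span_of_control))

print(flat_contacts)       # 9999 - hopeless
print(hierarchy_contacts)  # 20 - comfortably below Dunbar's ~150
print(levels)              # 4
```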

Not necessarily, and after all, military hierarchies are a way to cope with the complexity of managing thousands of people at a time.

[-][anonymous]00

Yes, but how? Are there different Dunbar's numbers for peace and war?

No, I don't think so, but first, Dunbar's number is not an exact quantity, and second: if you only need to relate to a handful of comrades in your platoon and one higher-ranking officer, then you can effectively restrict the number of people with whom you have to interact.

[-][anonymous]00

I am asking mostly because I have trouble imagining strict segregation in, say, the Mongol hordes; and intuitively, an advance (where you have an army of able-bodied men) should be different from a retreat (where you also have women, children, and infirm men).

Elon Musk almost terminated our simulation.

Simulation is a simulation only if everybody is convinced that they are living real life. Bostrom proved that we most likely live in a simulation, but not many people know about it. Elon Musk tweeted that the odds are 1,000,000 to 1 that we live in a simulation. Now everybody knows. I think there was a 1 per cent chance that our simulation would be terminated after that. It has not happened this time, but there may be some other threshold after which it will be terminated, like finding more proofs that we are in a simulation, or the creation of an AI.

Simulation is a simulation only if everybody is convinced that they are living real life.

Is this a widely held belief?

On the other hand, the more "actions that would get the simulation terminated" we perform and survive, the higher the chance that we are actually not living in a simulation.

Unfortunately anthropic bias prevents us from meaningful probability calculations in this case.

It seems possible to me that after passing some threshold of metaphysics insight, beings in basement reality would come to believe that basement reality simulations have high measure.

Past a certain point, maybe original basement reality beings actually believe they are simulated. Then accurately simulated basement reality beings would mean simulating beings who think (correctly) that they are in a simulation.

I don't know how to balance such possibilities to figure out what's likely.

One idea I had for how to measure the measure of simulations is that it is proportional to the energy of the calculation. That is because a large computer could be "sliced" into two parallel ones if we could make slices in 4 dimensions. We could keep making such slices until we reach the Planck level. So any simulation is equal to a finite number of Planck-level simulations.

Base-reality-level computers will use more energy for calculations, and any sub-level will use only part of this energy, so we have a smaller measure for lower-level simulations.

But this is just a preliminary idea, as we need to reconcile it with the probability of branches in MWI and also find ways to prove it.
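A minimal sketch of one way that preliminary idea could be written down; the fraction of a parent's computational energy available to each nested level is an assumed parameter, not something argued for above:

```python
# Assumption for illustration: each nested simulation can devote only a fixed
# fraction of its parent's computational energy to running simulations of its own.
base_energy = 1.0   # energy of calculation available at base reality (arbitrary units)
fraction = 0.1      # assumed fraction passed down to each deeper level

# Measure proportional to the energy of calculation, per the idea above:
measures = [round(base_energy * fraction ** level, 6) for level in range(5)]
print(measures)  # [1.0, 0.1, 0.01, 0.001, 0.0001] - deeper levels get smaller measure
```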

Interesting idea. So I guess that approach is focused on measure across universes with physics similar to ours? I wonder what fraction of simulations have physics similar to one level up. Presumably ancestor simulations would.

Actually, Bostrom did not argue that we are most likely living in a simulation. Instead, he argued that at least one of three propositions must be true, and one of those propositions is that "almost all people with our sorts of experiences live in computer simulations". But, since it is possible that one of the other two propositions could be true, it does not necessarily follow that we most likely live in a simulation.

In fact, Bostrom has stated that he believes that we are probably not simulated (see the second paragraph of this paper).

Of course, per your comments, it is possible that Bostrom only said that we are probably not simulated so as not to terminate the simulation :).

In fact, the first two propositions in Bostrom's article are very improbable, especially if we include all other civilizations in the consideration.

1) "All possible civilizations will never create simulations" - seems to be very implausible, we are already good in creating movies, dreams and games.

2) "All possible civilizations will go extinct before they create simulations - it also seems implausible.

"All possible civilizations will never create simulations" - seems to be very implausible... "All possible civilizations will go extinct before they create simulations" - it also seems implausible

We only need to be concerned about actual civilizations, not all possible civilizations. We don't know how many actual civilizations there are, so there could be very few (we know of only one). We also don't know how difficult creating a sufficiently realistic simulation will be - obviously the more difficult it is to achieve such a simulation the more likely it is that civilizations will tend to go extinct before they create simulations. Finally, Bostrom's propositions use "almost all" rather than "all", e.g. "almost all civilizations at our current level of development go extinct before reaching technological maturity".

In light of these considerations, Bostrom's first two propositions do not seem implausible to me.

It all depends on our understanding of actuality. If modal realism is true, there is no difference between actual and possible. If our universe is really very large because of MWI, inflation, and other universes, there should be many civilizations. But all of this raises some difficult philosophical questions, so it is better to use a simple model with a cut-off. (I think that modal realism is true and everything possible actually exists somewhere in the universe, if actuality does not depend on human consciousness, but that is a long story for another post, so I will not concentrate on proving it here.)

Imagine that in our Galaxy (or any other sufficiently large part of the Universe) there exist 1000 civilizations at our technological level. Suppose 990 of them go extinct from x-risks, 9 decide not to create simulations, and 1 decides to run 100,000,000 simulations of civilizations like those in the galaxy in order to solve the Fermi paradox numerically.

That is why I didn't use the word "almost": because in this example almost all go extinct, and almost all will not make simulations, but that doesn't prevent one civilization from creating so many simulations that they outweigh everything else.

The only condition under which we are not in a simulation is that ALL possible civilizations decline to make them.

In this case we have 100,001,000 civilizations in total.

In this example we see that even if most civilizations go extinct, and most of the surviving civilizations decide not to run simulations, 1 will do it based on its practical needs, and the proportion of real to simulated civilizations will be 1 to 100,000.

This means that the odds are 100,000 to 1 that we are in a simulated civilization.

This example also predicts the future of our simulation: it will simulate an extinction event with 99 per cent probability, a simulation-less civilization with 0.9 per cent probability, and a two-level "matryoshka simulation" with 0.1 per cent probability.

It also demonstrates that Bostrom's propositions are not exclusive alternatives: all 3 conditions are true in this case (and Bostrom said that "at least one of the three conditions is true"). We will go extinct, we will not run simulations, and we are in a simulation.
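A quick sanity check of the arithmetic in this example, using exactly the numbers above:

```python
# Numbers from the example above.
real_civilizations = 1000
extinct = 990
decline_to_simulate = 9
simulators = 1
simulated_civilizations = 100_000_000  # total simulated by the one simulating civilization

total = real_civilizations + simulated_civilizations
print(total)                                          # 100001000
print(simulated_civilizations // real_civilizations)  # 100000 -> 1 : 100,000 real to simulated
print(simulated_civilizations / total)                # ~0.99999 -> chance that we are simulated

# Predicted future of a typical (simulated) civilization, following the same frequencies:
print(extinct / real_civilizations)              # 0.99  -> extinction, 99%
print(decline_to_simulate / real_civilizations)  # 0.009 -> simulation-less civilization, 0.9%
print(simulators / real_civilizations)           # 0.001 -> two-level "matryoshka", 0.1%
```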

With those assumptions (especially modal realism), I don't think your original statement that our simulation was not terminated this time quite makes sense; there could be a bajillion simulations identical to this one, and even if most of them were shut down, we wouldn't notice anything.

In fact, I'm not sure what saying "we are in a simulation" or "we are not in a simulation" exactly means.

What you say is like quantum immortality for a many-simulations world. Let's name it "simulation immortality", as there is nothing quantum about it. I think that it may be true. But it requires two conditions: many simulations, and a solution to the identity problem (is a copy of me in a remote part of the universe me?). I wrote about it here: http://lesswrong.com/lw/n7u/the_map_of_quantum_big_world_immortality/

Simulation immortality precisely neutralizes the risk of the simulation being shut down. But if we accept the quantum immortality logic, it works even more broadly, preventing any other x-risk, because in any case one person (an observer in his timeline) will survive.

In the case of simulation shutdown, it works nicely if the shutdown is instantaneous and uniform. But if the servers are shut down one by one, we will see stars disappear, and for some period of time we will find ourselves in a strange and unpleasant world. The shutdown may take only a millisecond in the base universe, but it could take a long time in the simulated one, maybe days.

A slow shutdown is an especially unpleasant scenario, for two reasons connected with "simulation immortality":

  1. Because of simulation immortality, its probability rises dramatically: if, for example, 1000 simulations are shutting down and one of them is shutting down slowly, I (my copy) will find myself only in the one that is in slow shutdown.

  2. If I find myself in a slow shutdown, there are infinitely many identical simulations which are also experiencing a slow shutdown. This means that my slow shutdown will never end from my observer point of view. Most probably, after all this adventure, I will find myself in a simulation whose shutdown was stopped or reversed, or stuck somewhere in the middle.

TL;DR: So the shutdown of a simulation may be observable and unpleasant, and this is especially likely if there are infinitely many simulations. It will look like a very strange global catastrophe from the observer's point of view. I also wrote a lot of such things in my simulations map, but I only got the idea of a slow shutdown just now:

http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

I have been asked about something like career advice in the field of x-risk prevention deep in an EA Forum thread, so I will repost my answer here and would like to get comments or more suggestions for the list.

Q: "... it seems like it would be helpful to accompany some maps with a scheme for prioritizing the important areas. e.g. if people could know that safe ai engineering is a useful area for reducing gcrs.." http://effective-altruism.com/ea/10h/the_map_of_global_warming_prevention/#comments

A: So, some ideas for further research, that is, fields which a person could take up if they want to make an impact on x-risks. So this is career advice. For many of these I don't myself have the special background or the needed personal qualities.

1. Legal research in international law, including work with the UN and governments. Goal: prepare an international law and a panel for x-risk prevention. (Legal education is needed.)

2. Convert all information about x-risks (including my maps) into a large Wikipedia-style database. A master of communication is needed to attract many contributors and balance their actions.

3. Create a computer model of all global risks, which will be able to calculate their probabilities depending on different assumptions. Evolve this model into a world model with elements of AI, and connect it to monitoring and control systems.

4. A large research effort on the safety of bio-risks, which will attract professional biologists.

5. A promoter, who could attract funding for different research without oversimplifying the risks or overhyping the solutions. They may also be a political activist.

  6. In AI safety I think we already have many people, so some work to integrate their results is needed.

  7. A teacher: a professor who will be able to teach a course on x-risk research for students and prepare many new researchers. Maybe YouTube lectures.

  8. An artist, who will be able to attract attention to the topic without sensationalism or bad memes.

Applying probabilistic thinking to fears about terrorism in this piece for the 16th-largest newspaper in the US, which reaches over 320K readers with its printed version and gets over 5 million hits on its website per month. The title was chosen by the newspaper, and somewhat occludes the points. The article is written from a liberal perspective to play into the newspaper's general bent, and its main point was to convey the benefits of applying probabilistic thinking to evaluating political reality.

[Edit] Updated somewhat based on a conversation with James Miller here.

[This comment is no longer endorsed by its author]