Comment author: turchin 18 August 2016 02:11:51PM *  0 points [-]

It all depends on our understanding of actuality. If modal realism is true, there is no difference between the actual and the possible. If our universe is really very large because of MWI, inflation and other universes, there should be many civilizations. But this raises some difficult philosophical questions, so it is better to use a simple model with a cut-off. (I think that modal realism is true and everything possible actually exists somewhere in the universe, provided that actuality does not depend on human consciousness, but that is a long story for another post, so I will not try to prove it here.)

Imagine that in our Galaxy (or any other sufficiently large part of the Universe) there exist 1000 civilizations at our tech level. Suppose 990 of them go extinct in x-risks, 9 decide not to create simulations, and 1 decides to simulate the civilizations in the galaxy, running 100 000 000 simulations in total, in order to solve the Fermi paradox numerically.

That is why I didn't use the word "almost". In this example almost all civilizations go extinct, and almost all of the survivors will not make simulations, but that doesn't prevent the one remaining civilization from creating so many simulations that they outweigh everything else.

The only condition under which we are not in a simulation is that ALL possible civilizations refrain from making them.

In this case the total number of civilizations is 100 001 000.

So even if most civilizations go extinct, and most of the surviving civilizations decide not to run simulations, one will do it based on its practical needs, and the proportion of real to simulated civilizations will be 1 to 100 000.

It means that the odds that we are in a simulated civilization are about 100 000 to 1.

This example also predicts the future of our simulation: with 99 per cent probability it will simulate an extinction event, with 0.9 per cent probability it will simulate a simulation-less civilization, and with 0.1 per cent probability it will result in a two-level "matryoshka" simulation.

It also demonstrates that Bostrom's propositions are not exclusive alternatives: all three conditions are true in this case (and Bostrom only said that "at least one of the three conditions is true"). We will go extinct, we will not run simulations, and we are in a simulation.
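The counting argument above can be sketched as a toy calculation, using the comment's illustrative numbers (the split of 1000 civilizations into 990 / 9 / 1, and the 100 000 000 simulations, are taken directly from the example, not from any real data):

```python
# Toy counting argument from the comment above.
real_civs = 1000           # base-level civilizations reaching our tech level
extinct = 990              # die in x-risks
non_simulators = 9         # survive but decide not to simulate
simulators = real_civs - extinct - non_simulators  # 1 civilization simulates

simulated_civs = 100_000_000   # total simulations run by that one civilization

total = real_civs + simulated_civs
p_simulated = simulated_civs / total

print(f"real : simulated = 1 : {simulated_civs // real_civs}")
print(f"P(we are simulated) = {p_simulated:.5f}")

# Predicted future of a typical simulation, if it mirrors base-level statistics:
print(f"P(extinction)     = {extinct / real_civs:.3f}")
print(f"P(no simulations) = {non_simulators / real_civs:.3f}")
print(f"P(matryoshka)     = {simulators / real_civs:.3f}")
```

The ratio of 1 : 100 000 and the 99% / 0.9% / 0.1% predictions in the comment fall directly out of these counts.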

Comment author: qmotus 18 August 2016 05:20:57PM 0 points [-]

With those assumptions (especially modal realism), I don't think your original statement that our simulation was not terminated this time quite makes sense; there could be a bajillion simulations identical to this one, and even if most of them were shut down, we wouldn't notice anything.

In fact, I'm not sure what saying "we are in a simulation" or "we are not in a simulation" exactly means.

Comment author: turchin 02 August 2016 11:02:46AM 0 points [-]

In the current political situation in the world, cutting emissions can't be implemented. Period.

It may happen naturally in 20 years, after electric transportation takes over.

Plan B should be implemented if the situation suddenly changes for the worse, for example if the temperature jumps 3-5 C in one year. In that case the only option we would have is to bomb the Pinatubo volcano to make it erupt again.

But if we have prepared and tested measures for Sun shielding, we could start them if the situation worsens.

It all looks like a political fight between Plan A and Plan B. You suggest not implementing Plan B, as it would undermine the perceived need to implement Plan A (cutting emissions). But the same logic works in the opposite direction: they will not cut emissions, in order to pressure policymakers into implementing Plan B. ))) It looks like a prisoner's dilemma between the two plans.

Comment author: qmotus 02 August 2016 11:56:00AM 0 points [-]

It all looks like a political fight between Plan A and Plan B. You suggest not implementing Plan B, as it would undermine the perceived need to implement Plan A (cutting emissions).

That's one thing. But also, let's say that we choose Plan B, and this is taken as a sign that reducing emissions is unnecessary and global emissions soar. We then start pumping aerosols into the atmosphere to cool the climate.

Then something happens and this process stops: we face unexpected technical hurdles, or maybe the implementation of this plan has been largely left to a smallish number of nations and they are incapable or unwilling to implement it anymore, perhaps a large-scale war occurs, or something like that. Because of the extra CO2, we'd probably be worse off than if we had even partially succeeded with Plan A. So what's the expected payoff of choosing A or B?

As I said, I'm a bit wary of this, but I also think that it's important to research climate engineering technologies and make plans so that they can be implemented if (and probably when) necessary. The best option would probably be a mixture of plans A and B, but as you said, it looks like a bit of a prisoner's dilemma.

Comment author: turchin 01 August 2016 10:23:17PM 0 points [-]

I think that climate change is a situation where we should go directly to Plan B. Plan A here is cutting emissions. It is not working, because it is very expensive and requires cooperation from all sides. It also will not have immediate results, and the temperature will keep growing for many reasons.

Plan B for climate change prevention is changing the opacity of the Earth's atmosphere. It could be surprisingly cheap and local. There are suggestions to put something as simple as sulfuric acid into the upper atmosphere to raise its reflectivity.

"According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft." https://www.technologyreview.com/s/511016/a-cheap-and-easy-plan-to-stop-global-warming/
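As a rough sanity check on the quoted 2040 figures, the implied cost per ton and delivery load per jet can be worked out with simple division (these derived numbers are my own arithmetic, not from the article):

```python
# Back-of-the-envelope check on the figures quoted from Keith's 2040 estimate.
tons_per_year = 250_000       # metric tons of sulfuric acid per year by 2040
annual_cost = 700_000_000     # USD per year
jets = 11                     # aircraft in the delivery fleet

cost_per_ton = annual_cost / tons_per_year   # USD per delivered ton
tons_per_jet = tons_per_year / jets          # annual load per aircraft

print(f"Implied cost: ~${cost_per_ton:,.0f} per ton delivered")
print(f"Delivery load: ~{tons_per_jet:,.0f} tons per jet per year")
```

At roughly $2 800 per delivered ton, the program is indeed cheap relative to emissions cuts, which is the point the comment is making.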

The problem with that approach is that it can't be stopped. As Seth Baum wrote, a smaller catastrophe could disrupt such engineering, and global warming would immediately return with a vengeance.

There are other ways to prevent global warming. Plan C is creating an artificial nuclear winter through a volcanic explosion or by starting large-scale forest fires with nukes.

There are also ideas to recapture CO2 using genetically modified organisms, iron seeding of the ocean, or dispersing the carbon-capturing mineral olivine.

So we are not even close to being doomed by global warming - but we may have to change the way we react to it. We must accept that cutting emissions will not work on a 10-20 year horizon.

Comment author: qmotus 02 August 2016 10:15:14AM 0 points [-]

I would still be a bit reluctant to advocate climate engineering, though. The main worry, of course, is that if we choose that route, we need to commit to it in the long term, like you said. Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough about such measures to say that they're safe? Of course, if we believe that history will end anyway within decades or centuries because of a singularity, the long-term effects of such measures may not matter so much.

Also, many people, whether or not they're environmentalists strictly speaking, care about keeping our ecosystems at least somewhat undisrupted, and large-scale climate engineering doesn't fit too well with that view.

But I agree that we're not progressing fast enough with emissions reductions (we're not progressing with them at all, actually), so we'll probably have to resort to some kind of plan B eventually.

Comment author: Artaxerxes 01 August 2016 04:03:50AM *  1 point [-]

What's the worst case scenario involving climate change given that for some reason no large scale wars occur due to its contributing instability?

Climate change is very mainstream, with plenty of people and dollars working on the issue. LW and LW-adjacent groups discuss many causes that are thought to be higher impact and have more room for attention.

But I realised recently that my understanding of climate change related risks could probably be better, and I'm not easily able to compare the scale of climate change related risks to other causes. In particular I'm interested in estimations of metrics such as lives lost, economic cost, and similar.

If anyone can give me a rundown or point me in the right direction that would be appreciated.

Comment author: qmotus 02 August 2016 10:04:41AM 0 points [-]

I think many EAs consider climate change to be very important, but often just think that it already receives a lot of attention and solving it is difficult, and that there are therefore better things to focus on. 80,000 Hours, for example.

Comment author: Kaj_Sotala 19 July 2016 12:19:39PM *  5 points [-]

I'm currently working on an AI strategy project for the Foundational Research Institute; they are hiring and do not require extensive experience:

Requirements

  • Language requirement is research proficiency in English.
  • We anticipate that an applicant is dedicated to alleviating and preventing suffering, and considers it the top global priority.
  • A successful applicant will probably have a background in quantitative topics such as game theory, decision theory, computer science, physics, or math. But we welcome applicants regardless of background.
  • Peer-reviewed publications or a track record of completed comparable research output is not required, but a plus.
  • There is no degree requirement, although a PhD is an advantage, all else equal.

Their open research questions include a number of AI-related ones, and I expect many of them to still have plenty of low-hanging fruit. I'm working on getting a better handle on hard takeoff scenarios in general; most of my results so far can be found on my website under the "fri-funded" tag. (Haven't posted anything new in a while, because I'm working on a larger article that's been taking some time.)

Comment author: qmotus 21 July 2016 05:14:13PM 0 points [-]

Will your results ultimately take the form of blog posts such as those, or peer-reviewed publications, or something else?

I think FRI's research agenda is interesting, and they may very well work on important questions that hardly anyone else does. But I haven't yet supported them, as I'm not certain about their ability to deliver actual results or about the impact of their research, and I find it a tad odd that they're supported by effective altruism organizations, since I don't see any demonstration of effectiveness so far. (No offence though; it looks promising.)

Comment author: turchin 03 July 2016 09:48:03PM *  1 point [-]

Yes ... (( Ben Goertzel suggested another term: "potentially indefinite life extension". It is almost the same as immortality, but with it people may feel free on an ontological level.

Nobody wants to be trapped forever in unpleasant conditions, and becoming immortal is the first step toward such a situation. )) So we should address this concern.

Comment author: qmotus 05 July 2016 08:50:36AM 0 points [-]

I wouldn't call cryonics life extension; sounds more like resurrection to me. And, well, "potentially indefinite life extension" after that, sure.

Comment author: Houshalter 23 May 2016 01:50:12PM 2 points [-]

Interesting that LessWrongers are 50,000 times more likely to sign up for cryonics than the general population. I had previously heard criticism of LessWrong that if we really believe in cryonics, it's irrational that so few are signed up.

Also surprising that vegetarianism correlates with cryonics interest.

Comment author: qmotus 28 May 2016 03:18:58PM 1 point [-]

I bet many LessWrongers are just not interested in signing up. That's not irrational, or rational, it's just a matter of preferences.

Comment author: woodchopper 03 May 2016 03:10:00PM *  0 points [-]

I am taking issue with the conclusion that we are living in a simulation even given premise (1) and (2) being true.

So I am struggling to understand his reply to my argument. In some ways it simply looks like he's saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which either are not reliable if we are in a simulation, or obviously are wrong if we aren't in a simulation.

If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.

If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.

He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.

Comment author: qmotus 03 May 2016 04:37:39PM 1 point [-]

either we are in a simulation or we are not, which is obviously true

Just wanted to point out that this is not necessarily true; in a large enough multiverse, there would be many identical copies of a mind, some of which would probably be "real minds" dwelling in "real brains", and some would be simulated.

Comment author: woodchopper 27 April 2016 01:30:58PM 0 points [-]

Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?

If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.

So go back to the scenario - you're killed, there are some exact copies made of your brain and some inexact copies. It has been shown that it is possible to torture an exact copy of your brain while not torturing 'you', so surely you could torture one or all of these reconstructed brains and you would have no reason to fear?

Comment author: qmotus 27 April 2016 03:29:41PM 0 points [-]

If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.

Well... Let's say I make a copy of you at time t. I can also make both of you forget which one is which. Then, at time t + 1, I will tickle the copy a lot. After that, I go back in time to t - 1, tell you of my intentions, and ask you whether you expect to get tickled. What do you reply?

Does it make any sense to you to say that you expect to experience both being and not being tickled?

Comment author: woodchopper 25 April 2016 02:59:17PM 0 points [-]

So, let's say you die, but a super intelligence reconstructs your brain (using new atoms, but almost exactly to specification), but misplaces a couple of atoms. Is that 'you'?

If it is, let's say the computer then realises what it did wrong and reconstructs your brain again (leaving its first prototype intact), this time exactly. Which one is 'you'?

Let's say the second one is 'you', and the first one isn't. What happens when the computer reconstructs yet another exact copy of your brain?

If the computer told you it was going to torture the slightly-wrong copy of you (the one with a few atoms missing), would that scare you?

What if it was going to torture the exact copy of you, but only one of the exact copies? There's a version of you not being tortured, what's to say that won't be the real 'you'?

Comment author: qmotus 25 April 2016 05:17:05PM 0 points [-]
  1. Maybe; it would probably think so, at least if it wasn't told otherwise.

  2. Both would probably think so.

  3. All three might think so.

  4. I find that a bit scary.

  5. Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?
