
Comment author: turchin 30 August 2017 01:10:06PM 2 points [-]

By the way, it is surprisingly hard to demonstrate that death is bad from a utilitarian perspective - but it is.

Comment author: philosophytorres 30 August 2017 02:08:44PM 0 points [-]

It's amazing how many people on FB answered this question, "Annihilation, no question." Really, I'm pretty shocked!

Is life worth living?

5 philosophytorres 30 August 2017 10:42AM

Genuinely curious how folks on this website would answer the following question:

 

First, imagine the improbable: God exists. Now pretend that he descends from the clouds and visits you one night, saying the following: "I'm going to give you exactly two choices. (1) I'll murder you right now and annihilate your soul, meaning that you'll have no more conscious experiences ever again. [Theologians call this "annihilationism."] Alternatively, (2) I'll allow you to relive your life up to this moment exactly as it unfolded the first time -- that is, all the exact same experiences, life decisions, outcomes, etc. If you choose the second, once you reach the present moment -- this moment right now -- I'll then annihilate your soul."

 

Which would you choose, if you were forced to pick one or the other?

Comment author: WalterL 29 August 2017 06:13:14PM 0 points [-]

I'm not sure what you mean by 'it is a metaphysical issue', and I'm getting kind of despairing at breaking through here, but one more time.

Just to be clear, every sim who says 'real' in this example is wrong, yeah? They have been deceived by the partial information they are being given, and the answer they give does not accurately represent reality. The 'right' call for the sims is that they are sims.

In a future like you are positing, if our universe is analogous to a sim, the 'right' call is that we are a sim. If, unfortunately, our designers decide to mislead us into guessing wrong by giving us numbers instead of just telling us which we are...that still wouldn't make us real.

This is my last on the subject, but I hope you get it at this point.

Comment author: philosophytorres 29 August 2017 06:31:32PM 0 points [-]

“I'm getting kind of despairing at breaking through here, but one more time.” Same here. Because you still haven’t addressed the relevant issue, and yet appear to be getting pissy, which is no bueno.

By analogy: in Scenario 2, everyone who has wandered through room Y but says that they’re in room X is wrong, yeah? The answer they give does not accurately represent reality. The “right” call for those who’ve passed through room Y is that they actually passed through room Y. I hope we can at least agree on this.

Yet it remains 100% true that, at any given timeslice, when the question “Which room are you in right now, this very moment?” is posed, nearly everyone will win some money if they say room X.

Not sure how to make this point clearer to you: if you want to make the argument that you’re making, then you’ll have to say that the new, additional historical/diachronic information of Scenario 2 changes your mind about which room you’re in, from room X to room Y.

The point: diachronic information is irrelevant to winning a bet about whether you're in room X or room Y at Tx. If this logic holds with rooms, it should also hold with simulations: diachronic information is irrelevant to winning a bet about whether you're in a simulation or not, if the number of non-sims far exceeds the number of sims when you answer the question, "Where are you right now?"

Comment author: WalterL 29 August 2017 05:59:15PM 0 points [-]

So, like, a thing we generally do in these kinds of deals is ignore trivial cases, yeah? Like, if we were talking about the trolley problem, no one brings up the possibility that you are too weak to pull the lever, or posits telepathy in a prisoner's dilemma.

To simplify everything, let's stick with your first example. We (a thousand folks) make one sim. We tell him that there are a thousand and one humans in existence, one of whom is a sim and the rest are real. We ask him to guess. He guesses real. We delete him and do this again and again, millions of times. Every sim guesses real. Everyone is wrong.

This isn't an example that proves that, if we are using our experience as analogous to the sim, we should guess 'real'. It isn't a future that presents an argument against the simulation argument. It is just a weird special case of a universe where most things are sims.

The fact that there are more 'real' at any given time isn't relevant to the fact of whether any of these mayfly sims are, themselves, real. If there are more simulated universes, then it is more likely that our universe is simulated.

Comment author: philosophytorres 29 August 2017 06:07:52PM 0 points [-]

"The fact that there are more 'real' at any given time isn't relevant to the fact of whether any of these mayfly sims are, themselves, real." You're right about this, because it's a metaphysical issue. The question, though, is epistemology: what does one have reason to believe at any given moment. If you want to say that one should bet on being a sim, then you should also say that one is in room Y in Scenario 2, which seems implausible.

Comment author: WalterL 29 August 2017 03:03:19PM 0 points [-]

I'm confused by why you are constraining the argument to future humanity as simulators, and further by why you care what order the experimenters turn them on.

Like, it seems perverse to make up an example where we turn on one sim at a time, a trillion trillion times in a row. Yeah, each one is gonna get told that there are 6 billion real humans and one sim, so when asked to guess real or sim they might get tricked into guessing real. Who cares? No reason to think that's our future.

The (iv) disjunct you are posing isn't one we lack familiarity with. How many instances of Mario Kart did we spin up? How bout Warcraft? The idea that our future versions are gonna be super careful with sims isn't super interesting. Sentience will increase forever, resources will increase forever, eventually someone is gonna press the button.

Comment author: philosophytorres 29 August 2017 04:49:58PM 1 point [-]

"Like, it seems perverse to make up an example where we turn on one sim at a time, a trillion trillion times in a row. ... Who cares? No reason to think that's our future." The point is to imagine a possible future -- and that's all it needs to be -- that instantiates none of the three disjuncts of the simulation argument. If one can show that, then the simulation argument is flawed. So far as I can tell, I've identified a possible future that is neither (i), (ii), nor (iii).

Is there a flaw in the simulation argument?

2 philosophytorres 29 August 2017 02:34PM

Can anyone tell me what's wrong with the following "refutation" of the simulation argument? (I know this is a bit long -- my apologies! I also posted an earlier draft several months ago and got some excellent feedback. I don't see a flaw, but perhaps I'm missing something!)

Consider the following three scenarios:

Scenario 1: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then escorted into one of two rooms, either X or Y, but you don’t know which one. While in the unknown room, you are told that there are exactly 1,000 people in room X and only a single person in room Y. There is no way of communicating with anyone else, so you must use the information given to guess which room you’re in. If you guess correctly, you win 1 million dollars. Using the principle of indifference as your guide, you guess that you’re in room X—and consequently, you almost certainly win 1 million dollars. After all, since betting odds are a guide to rationality, if everyone in rooms X and Y were to bet that they’re in room X, just about everyone would win.

Scenario 2: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then escorted into one of two rooms, either X or Y, but you don’t know which one. While in the unknown room, you are told that there are exactly 1,000 people in room X and only a single person in room Y. You are also told that over the past year, a total of 1 billion people have been in room Y at one time or another whereas only 10,000 people have been in room X. There is no way of communicating with anyone else, so you must use the information given to guess which room you’re in. If you guess correctly, you win 1 million dollars. The question here is: Does the extra information about the past histories of rooms X and Y change your mind about which room you’re in? It shouldn’t. After all, if everyone currently in rooms X and Y were to bet that they’re in room X, just about everyone would win.
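A minimal sketch in Python of the synchronic bet in Scenarios 1 and 2 (the code and variable names are my own and purely illustrative; the counts come from the scenarios above): at the moment the question is asked there are 1,000 people in room X and 1 in room Y, so betting "X" wins for 1,000 of 1,001 bettors, and the historical throughput figures added in Scenario 2 never enter the calculation.

```python
# Synchronic bet in Scenarios 1 and 2 (counts from the scenarios; code is illustrative).
people_in_x = 1_000          # people currently in room X
people_in_y = 1              # people currently in room Y
currently_present = people_in_x + people_in_y

# Everyone currently present bets "I am in room X."
fraction_winning = people_in_x / currently_present
print(f"Fraction who win by betting X: {fraction_winning:.4f}")   # ~0.9990

# Scenario 2 adds historical throughput (1 billion through Y, 10,000 through X
# over the past year), but those people are not in the rooms *now*, so the
# synchronic calculation above is unchanged.
past_through_y = 1_000_000_000
past_through_x = 10_000      # neither number appears in fraction_winning
```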

Scenario 3: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then told that you’ll be escorted into room Z through one of two rooms, either X or Y, but you won’t know which one. At any given moment, or timeslice, there will always be exactly 1,000 people in room X and only a single person in room Y. (Thus, as one person enters each room another one exits into room Z.) Once you arrive in room Z at time T2, you are told that between T1 and T2 a total of 1 billion people passed through room Y whereas only 10,000 people in total passed through room X, where all of these people are now in room Z with you. There is no way of communicating with anyone else, so you must use the information given to guess which room, X or Y, you passed through on your way from Location A to room Z. If you guess correctly, you win 1 million dollars. Using the principle of indifference as your guide, you now guess that you passed through room Y—and consequently, you almost certainly win 1 million dollars. After all, if everyone in room Z at T2 were to bet that they passed through room Y rather than room X, the large majority would win.
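And a matching sketch of the diachronic bet in Scenario 3 (again, the code and names are my own illustration): everyone now standing in room Z is asked which room they passed through, so the relevant population is everyone who passed through either room between T1 and T2.

```python
# Diachronic bet in Scenario 3 (counts from the scenario; code is illustrative).
passed_through_y = 1_000_000_000   # people who went A -> Y -> Z between T1 and T2
passed_through_x = 10_000          # people who went A -> X -> Z between T1 and T2
everyone_in_z = passed_through_y + passed_through_x

# Everyone in room Z bets "I passed through room Y."
fraction_winning = passed_through_y / everyone_in_z
print(f"Fraction who win by betting Y: {fraction_winning:.6f}")    # ~0.999990
```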

Let’s analyze these scenarios. In the first two, the only relevant information is synchronic information about the current distribution of people when you answer the question, “Which room am I in, X or Y?” (Thus, the historical knowledge offered in Scenario 2 doesn’t change your answer.) In contrast, the only relevant information in the third scenario is diachronic information about which of the two rooms had more people pass through them from T1 to T2. If these claims are correct, then the simulation argument proposed by Nick Bostrom (2003) is flawed. The remainder of this paper will (a) outline this argument, and (b) show how the ideas above falsify the argument’s conclusion.

According to the simulation argument, one or more of the following disjuncts must be true: (i) humanity goes extinct before reaching a stage of technological development that would enable us to run a large number of ancestral simulations; (ii) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations but we decide not to; and (iii) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations and we do, in fact, run a large number of ancestral simulations. The third disjunct entails that we would almost certainly live in a computer simulation because (a) a sufficiently high-resolution simulation would be sensorily and phenomenologically indistinguishable from the “real” world, and (b) the indifference principle tells us to distribute our probabilities evenly among all the possibilities if we have no special reason to favor one over another. Since the population of sims would, ex hypothesi, far outnumber the population of non-sims in scenario (iii), we would almost certainly be sims. This is the simulation hypothesis.
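As a quick illustration of step (b) (my own sketch, not from Bostrom's paper; the specific counts are assumptions): under the indifference principle, your credence that you are a sim is just the fraction of relevantly similar observers who are sims, so if sims vastly outnumber non-sims that credence approaches 1.

```python
# Indifference-principle credence (illustrative sketch; the counts below are assumptions).
def credence_sim(num_sims: int, num_non_sims: int) -> float:
    """Credence in 'I am a sim' = fraction of observers who are sims."""
    return num_sims / (num_sims + num_non_sims)

# If ancestral simulations vastly outnumber non-sims, as in disjunct (iii):
print(credence_sim(num_sims=10**12, num_non_sims=10**10))   # ~0.99
```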

But consider the following possible Posthuman Future: instead of running a huge number of ancestral simulations in parallel, as Bostrom seems to assume we would, future humans run a huge number of simulations sequentially, one after another. This could be done such that at any given moment the total number of extant non-sims far exceeds the total number of extant sims, yet over time the total number of sims who have existed far exceeds the total number of non-sims who also have existed. (This could be accomplished by running simulations at speeds much faster than realtime.) If the question is, “Where am I right now, in a simulation or not?,” then the principle of indifference instructs you to answer, “I am not a sim.” After all, if everyone were to bet at some timeslice Tx that they are not a sim, nearly everyone would win.
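Here is a minimal sketch of the sequential Posthuman Future just described (all the specific counts are my own assumptions, chosen only to make the structure visible): at every timeslice the non-sims outnumber the sims, so the synchronic bet "I am not a sim" wins for nearly everyone, even though the cumulative number of sims ever run ends up dwarfing the cumulative number of non-sims.

```python
# Sequential-simulation future (all counts are illustrative assumptions).
non_sims_at_any_time = 10_000_000_000   # a persisting posthuman population
sims_at_any_time = 10_000_000           # one modest batch of sims running at any moment
batches = 1_000_000                     # batches run sequentially, each deleted before the next

# Synchronic bet at any single timeslice Tx: "I am not a sim."
present = non_sims_at_any_time + sims_at_any_time
print(f"Fraction who win by betting 'not a sim' at Tx: {non_sims_at_any_time / present:.4f}")  # ~0.9990

# Diachronic totals over the whole future:
total_sims_ever = sims_at_any_time * batches    # 10**13 sims have existed in total
total_non_sims_ever = non_sims_at_any_time      # the same non-sim population persists throughout
print(f"Sims ever / non-sims ever: {total_sims_ever / total_non_sims_ever:.0f}")  # 1000
```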

Here the only information that matters is synchronic information; diachronic information about how many sims, non-sims, or “observer-moments” there have been has no bearing on one’s credence about one’s present ontological status (sim or non-sim?)—that is, no more than historical knowledge about rooms X and Y in Scenario 2 has any bearing on one’s response to the question, “Which room am I currently in?” This is problematic for the simulation argument because the Posthuman Future outlined above satisfies the condition of disjunct (iii) yet doesn’t entail that one is almost certainly living in a simulation. Thus, Bostrom’s assertion that “at least one of the following propositions is true” is false.

One might wonder: but what if we run a huge number of simulations sequentially and then stop? Wouldn’t this be analogous to Scenario 3, in which we would have reason to believe that we passed through room Y rather than room X, i.e., that we were (and thus still are) in a simulation rather than the “real” world? The answer is no, it’s not analogous to Scenario 3 because in our case we would have some additional relevant information about our actual history—that is, we would know that we were in “room X,” which held more people at any given moment, since we would have control over the ratio of sims to non-sims (always making sure that the latter far outnumbers the former). What’s more, if we were to stop all simulations, then the ratio of sims to non-sims would be zero to whatever the human population is at the time, thus making a bet that we are non-sims virtually certain. So far as I can tell, these conclusions follow whether one accepts the self-sampling assumption (SSA), the strong self-sampling assumption (SSSA), or the self-indication assumption (SIA) (Bostrom 2002).

In sum, the simulation argument is missing a fourth disjunct: (iv) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations and we do run a large number of ancestral simulations, yet the principle of indifference leads us to believe that we are not in a simulation. It will, of course, be up to future generations to decide whether to run a large number of ancestral simulations, and if so whether to run these sequentially or in parallel, given the ontological-epistemic implications of each.

Comment author: turchin 25 August 2017 10:24:51AM 3 points [-]

First article TL;DR: space colonisation will produce star wars and result in enormous suffering, that is, s-risk.

My 5 dollars: Maxipok is mostly not about space colonisation, but about preventing total extinction. I also hold the unshared opinion that death is the worst form of suffering, as it is really bad. Pain and suffering are part of life and are OK if they are diluted by much larger pleasure. Surely space wars are possible (without a singleton), but life is intrinsically good, and most of the time there will be no wars but some form of very sophisticated space pleasures. These will dilute the suffering from wars.

But I also don't share the Maxipok interpretation that we should start space colonisation as soon as possible to get the maximum number of possible people into existence. Firstly, all possible people exist somewhere else in the infinite multiverse. Also, it is better to be slow but sure.

Comment author: philosophytorres 29 August 2017 01:12:59PM 1 point [-]

"My 5 dollars: maxipoc is mostly not about space colonisation, but prevention of total extinction." But the goal of avoiding an x-catastrophe is to reach technological maturity, and reaching technological maturity would require space colonization (to satisfy the requirement that we have "total control" over nature). Right?

Comment author: turchin 25 August 2017 10:44:18AM *  2 points [-]

Third article TL;DR: It is clear that a superintelligence singleton is the most obvious solution for preventing all non-AI risks.

However, the main problem is that there are risks in creating such a singleton (risks of unfriendly AI), risks in implementing it (the AI would have to fight a war for global domination, probably against other AIs, nuclear nation states, etc.), and risks of singleton failure (if it halts, it halts forever).

As a result, we only move risks from one side of the equation to the other, and even replace known risks with unknown risks.

I think that other possible solutions exist, where many agents unite in some kind of police force to monitor each other, as David Brin suggested in his transparent society. Such a police force may consist not of citizens but of AIs.

Comment author: philosophytorres 29 August 2017 01:10:38PM 0 points [-]

Yes, good points. As for "As a result, we only move risks from one side of the equation to the other, and even replace known risks with unknown risks," another way to put the paper's thesis is this: insofar as the threat of unilateralism becomes widespread, thus requiring a centralized surveillance apparatus, solving the control problem is that much more important! I.e., it's an argument for why MIRI's work matters.

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

6 philosophytorres 25 August 2017 10:00AM

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)

Comment author: SithLord13 15 October 2016 11:25:12PM 5 points [-]

Furthermore, implementing stricter regulations on CO2 emissions could decrease the probability of extreme ecoterrorism and/or apocalyptic terrorism, since environmental degradation is a “trigger” for both.

Disregarding any discussion of legitimate climate concerns, isn't this a really bad decision? Isn't it better to be unblackmailable, to disincentivize blackmail?

Comment author: philosophytorres 20 October 2016 08:08:23PM -1 points [-]

What do you mean? How is mitigating climate change related to blackmail?
