Comment author: J_Thomas_Moros 05 October 2017 07:04:43PM 2 points [-]

You should probably clarify that your solution assumes the variant where the god's head explodes when given an unanswerable question. If I understand correctly, you are also assuming that the god will act to prevent their head from exploding if possible. That doesn't have to be the case: the god could be suicidal but unable to die in any other way, so when you give them the opportunity to have their head explode, they will take it.

Additionally, I think it would be clearer if you offered a final English-sentence statement of the complete question that doesn't involve self-referential variables. The variable formulation is helpful for seeing the structure, but confusing in other ways.

Comment author: wMattDodd 20 September 2017 07:29:08PM *  0 points [-]

I've finally been able to put words to some things I've been pondering for a while, and a Google search on the most sensible terms (to me) for them turned up nothing. I'm looking to see whether there's already a body of writing on these topics under different terms, since my ignorance of it would lead me to just re-invent the wheel in my ponderings. If these are NOT discussed topics for some reason, I'll post my thoughts, because I think they could be critically important to the development of Friendly AI.

implicit utility function ('survive' is an implicit utility function because regardless of what your explicit utility function is, you can't progress it if you're dead)

conflicted utility function (a utility function that requires your death for optimal value is conflicted, as in the famous Pig That Wants to be Eaten)

dynamic utility function (a static utility function is a major effectiveness handicap, probably a fatal one on a long enough time scale)

meta utility function (a utility function that takes the existence of itself into account)

Comment author: J_Thomas_Moros 20 September 2017 07:56:28PM 1 point [-]

What you label "implicit utility function" sounds like instrumental goals to me. Some of that is also covered under Basic AI Drives.

I'm not familiar with the pig that wants to be eaten, but I'm not sure I would describe that as a conflicted utility function. If one has a utility function that places maximum utility on an outcome that requires one's death, then there is no conflict; that is the optimal choice. I think humans who believe they have such a utility function are usually mistaken, but that is a much more involved discussion.

I'm not sure what the point of a dynamic utility function is. Your values really shouldn't change. I suspect you are focused on instrumental goals, which can and should change, and are treating those as part of the utility function when they are not.

Comment author: NancyLebovitz 17 September 2017 07:32:02PM 2 points [-]

How about the boring simplicity of having downvote limits? Maybe something around one downvote/24 hours-- not cumulative.

If you're feeling generous, maybe add a downvote/24 hours per 1000 karma, with a maximum of 5 downvotes/24 hours.
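
Just to spell out the arithmetic of what I'm proposing, here's a tiny sketch; the cap and the per-1000-karma bonus are the numbers above, and the function name is only for illustration.

```python
# Rough sketch of the proposed limit (my reading, not a spec): everyone gets
# one downvote per 24 hours, plus one more per 1000 karma, capped at five;
# unused downvotes don't accumulate.

def daily_downvote_limit(karma: int) -> int:
    return min(5, 1 + karma // 1000)

for k in (0, 999, 1000, 3500, 10_000):
    print(k, "karma ->", daily_downvote_limit(k), "downvotes per 24 hours")
```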

Comment author: J_Thomas_Moros 18 September 2017 05:41:56PM 1 point [-]

I'm not opposed to downvote limits, but I think they need to not be too low. There are situations where I am more likely to downvote many things simply because I am moderating more heavily. For example, on comments on my own post I care more and am more likely to both upvote and downvote, whereas other times I might just not care that much.

Comment author: pepe_prime 13 September 2017 01:20:21PM 10 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: J_Thomas_Moros 17 September 2017 04:26:32AM 17 points [-]

I have completed the survey and upvoted everyone else on this thread.

Comment author: J_Thomas_Moros 01 September 2017 10:59:40PM 2 points [-]

There is a flaw in your argument. I'm going to try to be very precise here and spell out exactly what I agree with and disagree with in the hope that this leads to more fruitful discussion.

Your conclusions about scenarios 1, 2 and 3 are correct.

You state that Bostrom's disjunction is missing a fourth case. The way you state (iv) is problematic because you phrase it in terms of a logical conclusion, that "the principle of indifference leads us to believe that we are not in a simulation," which, as I'll argue below, is incorrect. Your disjunct should properly be stated as something like: (iv) humanity reaches a stage of technological development that enables us to run a large number of ancestor simulations, and we do run a large number of them, but in a way that keeps the number of simulated people well below the number of real people at any given moment. Stated that way, it is clear that Bostrom's (iii) is meant to include that outcome. Bostrom's argument is predicated only on the number of ancestor simulations, not on whether they are run in parallel or sequentially, or over how much time. The reason Bostrom includes your (iv) in (iii) is that it doesn't change the logic of the argument. Let me now explain why.

For the sake of argument, let's split (iii) into two cases, (iii.a) and (iii.b). Let (iii.a) be all the futures in (iii) not covered by your (iv). For convenience, I'll refer to this as "parallel" even though there are cases in (iv) where some simulations could be run in parallel. Then (iii.b) is equivalent to your (iv). For convenience, I'll refer to this as "serial" even though, again, it might not be strictly serial. I think we agree that if the future were guaranteed to be (iii.a), then we should bet we are in a simulation.

First, even if you were right about (iii.b), I don't think it invalidates the argument. Essentially, you have just added another case similar to (ii); it would still be the case that there are many more simulated people than real people because of (iii.a), and we should bet that we are in a simulation.

Second, if the future is actually (iii.b), we should still bet we are in a simulation, just as with (iii.a). At several points you appeal to the principle of indifference, but you are vague about how it should be applied. Let me give a framework for thinking about this. What is happening here is that we are reasoning under indexical uncertainty. In each of your three scenarios, and in the simulation argument, there is uncertainty about which observer we are. Your statement that by the principle of indifference we should conclude something is really an appeal to the SSA (self-sampling assumption), which says we should reason as if we were a randomly chosen observer from our reference class. In Bostrom's terms, you are uncertain which observer in your reference class you are. To make sure we are on the same page, let me go through your scenarios using this approach.

Scenario 1: You are not sure whether you are in room X or room Y; the set of all people currently in rooms X and Y is your reference class. You reason as if you could be any randomly selected one of them, so the odds are 1000 to 1 that you are in room X.

Scenario 2: You are told about the many people who have been in room Y in the past. However, they are in your past. You have no uncertainty about your temporal index relative to them, so you do not add them to your reference class, and you reason the same as in scenario 1. Bostrom's book is weak here in that he doesn't give very good rules for selecting your reference class. I'm arguing that one of the criteria is that you have to be uncertain whether you could be that person. For example, you know you are not one of the many people not currently in room X or Y, so you don't include them in your reference class. Your reference class is the set of people relative to whom you are unsure of your index.

Scenario 3: This one is trickier to reason correctly about. I think you are wrong when you say that the only relevant information here is diachronic information. You know you are now in room Z, which contains 1 billion people who passed through room Y and 10,000 people who passed through room X. Your reference class is the people in room Z. You don't have to reason about the temporal information, or about the fact that at any given moment there was only one person in room Y but 1,000 people in room X. Having passed through room X or Y is now just a property of the people in room Z. It is as if I told you that you are blindfolded in a room with 1 billion people wearing red hats and 10,000 people wearing blue hats: which hat color should you bet you are wearing? Reasoning with the people in room Z as your reference class, you correctly give yourself 1 billion to 10,000 odds of having passed through room Y.
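
To make the bookkeeping concrete, here is a rough sketch of the three scenarios; the counts are the ones given above, and the "odds" are just ratios of reference-class sizes under SSA, nothing more.

```python
# Rough sketch of the reference-class bookkeeping above. Under SSA you reason
# as if you were a randomly chosen member of your reference class, so the
# betting odds are simply ratios of reference-class counts.

# Scenario 1: reference class = everyone currently in rooms X and Y.
in_x_now, in_y_now = 1_000, 1
print("Scenario 1: odds of being in room X =", in_x_now, "to", in_y_now)

# Scenario 2: the past occupants of room Y are in your past, so they are not
# in your reference class; the odds are unchanged.
print("Scenario 2: odds of being in room X =", in_x_now, "to", in_y_now)

# Scenario 3: reference class = everyone now in room Z; having passed through
# X or Y is just a property of those people.
through_y, through_x = 1_000_000_000, 10_000
print("Scenario 3: odds of having passed through room Y =", through_y, "to", through_x)
```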

In (iii.b), you are uncertain whether you are in a simulation or in reality. But if you are in a simulation, you are also uncertain where you are chronologically relative to reality. Thus, if a pair of simulations were run in sequence, you would be unsure whether you were in the first or the second. You have both spatial and temporal uncertainty; you aren't sure what the proper "now" is. Your reference class includes everyone in historical reality as well as everyone in all the simulations. Given that reference class, you should reason that you are in a simulation (assuming many simulations are run). It doesn't matter that those simulations are run serially, only that many of them are run. Your reference class isn't limited to the current simulation and the current reality, because you aren't sure where you are chronologically relative to reality.
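
Here is a minimal numeric sketch of that point, with made-up counts purely for illustration: once the reference class spans historical reality plus every simulation ever run, the credence that you are simulated depends only on the totals, not on the scheduling.

```python
# Illustrative numbers only; the answer depends solely on total head counts,
# not on whether the simulations run in parallel or one after another.
real_people = 100_000_000_000            # assumed size of historical reality
people_per_simulation = 100_000_000_000  # assume each ancestor simulation is similar in size
num_simulations = 1_000                  # serial, parallel, or any mix

simulated = people_per_simulation * num_simulations
p_simulated = simulated / (simulated + real_people)
print(f"P(I am simulated) = {p_simulated:.4f}")  # about 0.999 for these counts
```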

With regard to SIA versus SSA: I can't say that they make any difference to your position, because the problem is that you have chosen the wrong reference class. In the original simulation argument, SIA vs. SSA makes little or no difference because, presumably, the number of people living in historical reality is roughly equal to the number of people living in any given simulation. SIA only changes the conclusion when one outcome contains many more observers than the other; here, treating each simulation as a different possible outcome, the two agree.

Comment author: gworley 29 August 2017 05:32:31PM 1 point [-]

This seems to me to be failing to account for the fact that we are not in fact totally blindfolded: we know that we live in what appears to be a time prior to simulation. Your alternative scenario that contradicts (iii) seems to be betting on information directly contradicted by what we know at the current time (namely, that there are no simulations we know about yet). The problem isn't purely one of numbers, but one of where we perceive ourselves to be living now.

I do happen to agree that indifference is probably the most useful response to the simulation argument, though it sounds like probably for different reasons.

Comment author: J_Thomas_Moros 01 September 2017 09:38:25PM 0 points [-]

We are totally blindfolded. He specified that they would be "ancestor simulations"; thus, in all those simulations, the inhabitants would appear to be living in a time prior to simulation.

Comment author: Slider 31 August 2017 10:25:50AM *  1 point [-]

It shouldn’t. After all, if everyone currently in rooms X and Y were to bet that they’re in room X, just about everyone would win.

(Edit: separated the wrongly quoted part above.) Yet if everyone bet that they are in room Y, the vast majority would win (1,000 / 1 vs. 1,000,000,000 / 10,000). In the scenario you can deduce that far fewer questions will be posed in room X.

You are trying to invoke "right now" as an always-relevant indifference breaker. It might be that you are imagining that people in room X will be posed a question NOW. But what if every Xer was asked the question only once, when they entered? Then what the contents of the room are NOW becomes irrelevant to the distribution of questions. We can keep the number of questions the same and keep more people in. In the limit, we can have the whole 10,000 stay for the whole duration while single occupants are driven through the other room. Still, more questions will be asked in total in the single-person room. But perhaps crucially, a new person entering the single-person room doesn't mean that everyone in the big room will be re-asked. What is proper to focus on is the first time each person is asked, and this happens only once for everyone in the big room (I guess we need to assume you would remember if asked a second time).

Comment author: J_Thomas_Moros 01 September 2017 09:36:08PM 1 point [-]

It looks like the poster has edited the post since you took this quote; the last two sentences have been removed. Though they might not have explained it well, the OP is correct on this point. I think the two removed sentences confused it, though.

Crucially you are "told that over the past year, a total of 1 billion people have been in room Y at one time or another whereas only 10,000 people have been in room X." You are given information about your temporal position relative to all of those people. So regardless whether they were asked the question when they were in the room, you know you are not them. You know that your reference class is the 1000 in room X and 1 in room Y right now. I'm not sure why you're bringing up asking people repeatedly. I'm pretty sure the poster was assuming everyone was asked only once.

The answer would change if you were told that at some point in the current year (past or future) a total of 1 billion people would pass through room Y at one time or another whereas only 10,000 people would pass through room X. Then you would not know your temporal position and should bet that you are in room Y.

Comment author: ImmortalRationalist 14 July 2017 01:02:58AM 3 points [-]

Does it make more sense to sign up for cryonics at Alcor or the Cryonics Institute?

Comment author: J_Thomas_Moros 01 August 2017 01:09:14AM 0 points [-]

If you can afford it, it makes more sense to sign up at Alcor. Alcor's Patient Care Trust improves the chances that you will be cared for indefinitely after cryopreservation. CI touts its all-volunteer status as a benefit, but the cryonics community has not been growing and has been aging, so it is not unlikely that there will be problems with the availability of volunteers over the next 50 years.

Comment author: Jiro 21 June 2017 03:37:17PM 3 points [-]

I am skeptical of this whole thing, because calling someone else's side of a debate a "folk ontology" assumes that their side is the wrong side. So the whole article is basically saying "now that I've determined that my opponent is wrong, how should I deal with it?"--it sounds like a recipe for skipping that pesky debate stuff and prematurely assuming that one's opponent is wrong.

Comment author: J_Thomas_Moros 21 June 2017 11:19:47PM 1 point [-]

This post was meant to apply either when you find that your own folk ontology is incorrect, or to assist people who agree that a folk ontology is incorrect but find themselves disagreeing because they have chosen different responses to it. Establishing the folk ontology to be incorrect is a prerequisite and, like all beliefs, should be subject to revision based on new evidence.

This is in no way meant to dismiss genuine debate. As a moral nihilist, I might put moral realism in the category of incorrect "folk ontology". However, if I'm discussing or debating with a moral realist, I will have to engage their arguments, not just dismiss them because I have already labeled their view a folk ontology. In such a debate, it can be helpful to recognize which response I have taken and to be clear when other participants may be adopting a different one.

Comment author: J_Thomas_Moros 21 June 2017 02:26:38PM 1 point [-]

When we find that the concepts typically held by people, termed folk ontologies, don't correspond to the territory, what should we do with those terms/words? This post discusses three possible ways of handling them. Each is described and discussed with examples from science and philosophy.
