Kevin comments on Less Wrong Q&A with Eliezer Yudkowsky: Video Answers - Less Wrong

Post author: MichaelGR 07 January 2010 04:40AM


Comment author: Kevin 08 January 2010 02:09:26AM *  6 points [-]

20: What is the probability that this is the ultimate base layer of reality?

Eliezer gave the joke answer to this question, because this is something that seems impossible to know.

However, I myself assign a significant probability that this is not the base level of reality. Theuncertainfuture.com tells me that I assign a 99% probability of AI by 2070, with the cumulative probability approaching .99 before then. So why would I be likely to be living as an original human circa 2000 when transhumans will be running ancestor simulations? I suppose it's possible that transhumans won't run ancestor simulations, but I would want to run them, so that my merged transhuman mind could assimilate the knowledge gained from running a human consciousness of myself through interesting points in human history.

The zero one infinity rule also makes it seem more unlikely this is the base level of reality. http://catb.org/jargon/html/Z/Zero-One-Infinity-Rule.html

It seems rather convenient that I am living in the most interesting period in human history. Not to mention I have a lifestyle in the top 1% of all humans living today.

I believe this is a minority viewpoint here, so my rationalist calculus is probably wrong. Why?

Comment author: Wei_Dai 08 January 2010 02:50:29AM 26 points [-]

In my posts, I've argued that indexical uncertainty like this shouldn't be represented using probabilities. Instead, I suggest that you consider yourself to be all of the many copies of you, i.e., both the ones in the ancestor simulations and the one in 2010, making decisions for all of them. Depending on your preferences, you might consider the consequences of the decisions of the copy in 2010 to be the most important and far-reaching, and therefore act mostly as if that was the only copy.

Comment author: Eliezer_Yudkowsky 18 February 2010 07:05:46AM 5 points [-]

BTW, I agree with this.

Comment author: cousin_it 19 April 2011 10:28:57AM *  1 point [-]

Coming back to this comment, it seems to be another example of UDT giving a technically correct but incomplete answer.

Imagine you have a device that will tell you, tomorrow at 12am, whether you are in a simulation or in the base layer. (It turns out that all simulations are required by multiverse law to have such devices.) There's probably not much you can do before 12am tomorrow that can cause important and far-reaching consequences. But fortunately you also have another device that you can hook up to the first. The second device generates moments of pleasure or pain for the user. More precisely, it gives you X pleasure/pain if you turn out to be in a sim, and Y pleasure/pain if you are in the base layer (presumably X and Y have different signs). Depending on X and Y, how do you decide whether to turn the second device on?
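A naive expected-utility treatment of this choice makes the gap explicit: deciding whether to turn the device on seems to require a credence q in being simulated, which is exactly the number the grandparent suggests not assigning. A minimal sketch (q, X = +10, Y = -1 are invented illustration values, not from the comment):

```python
# Naive expected-utility rule for the second device: turn it on iff
# expected pleasure/pain is positive, given a credence q that you are
# in a simulation (payoff x if simulated, y if in the base layer).
def should_turn_on(q: float, x: float, y: float) -> bool:
    expected_utility = q * x + (1 - q) * y
    return expected_utility > 0

# With x = +10 (pleasure if simulated) and y = -1 (pain if base layer),
# the rule says "on" only when q exceeds 1/11.
print(should_turn_on(0.5, 10, -1))   # credence well above the threshold
print(should_turn_on(0.05, 10, -1))  # credence below the threshold
```

The point of the sketch is only that the threshold depends on q; a theory that declines to supply q must say something else about this choice.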

Comment author: gwern 18 February 2010 03:39:40AM 1 point [-]

Have you pulled it all together anywhere? I've sometimes seen & thought this Pascal's wager-like logic before (act as if your choices matter because if they don't...), but I've always been suspicious precisely because it looks too much to me like Pascal's wager.

Comment author: Wei_Dai 18 February 2010 11:01:54PM 2 points [-]

I've thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn't think of much to say. But to expand a bit more on what I wrote in the grandparent, in the Simulation Argument, the decision of the original you interacts with the decisions of the simulations. If you make the wrong decision, your simulations might end up not existing at all, so it doesn't make sense to put a probability on "being in a simulation". (This is like in the absent-minded driver problem, where your decision at the first exit determines whether you get to the second exit.)
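For readers unfamiliar with the reference: in the standard absent-minded driver problem the driver cannot distinguish two exits and must pick one continue-probability p for both. A quick numerical sketch, using the usual textbook payoffs (0 for the first exit, 4 for the second, 1 for driving past both; these numbers are not from the comment):

```python
# Absent-minded driver: at each of two indistinguishable exits the
# driver continues with probability p. Exiting at the first exit pays 0,
# exiting at the second pays 4, and driving past both pays 1.
def expected_payoff(p: float) -> float:
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Scan a grid for the planning-optimal p. Analytically,
# d/dp (4p - 3p^2) = 0 gives p = 2/3 with expected payoff 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(round(best_p, 3), round(expected_payoff(best_p), 3))
```

The structural analogy is that the driver's policy at the first exit determines whether the second exit is ever reached, just as the original's decision determines whether the simulations exist.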

I'm not sure I see what you mean by "Pascal's wager-like logic". Can you explain a bit more?

Comment author: Kevin 10 March 2010 06:44:13AM 3 points [-]

A top-level post on the application of TDT/UDT to the Simulation Argument would be worthwhile even if it was just a paragraph or two long.

Comment author: wedrifid 10 March 2010 09:48:53AM 1 point [-]

A top level post telling me whether TDT and UDT are supposed to be identical or different (or whether they are the same but at different levels of development) would also be handy!

Comment author: gwern 19 February 2010 03:02:37AM 2 points [-]

I've thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn't think of much to say.

I think that's enough. I feel I understand the SA very well, but not TDT or UDT much at all; approaching the latter from the former might make things click for me.

I'm not sure I see what you mean by "Pascal's wager-like logic". Can you explain a bit more?

I mean that I read Pascal's Wager as basically 'p implies x reward for believing in p, and ~p implies no reward (either positive or negative); thus, best to believe in p regardless of the evidence for p'. (Clumsy phrasing, I'm afraid.)

Your example sounds like that: 'believing you-are-not-being-simulated implies x utility (motivation for one's actions & efforts), and if ~you-are-not-being-simulated then your utility to the real world is just 0; so believe you-are-not-being-simulated.' This seems to be a substitution of 'not-being-simulated' into the PW schema.

Comment author: Thomas 08 January 2010 07:05:01PM 4 points [-]

If the probability that you are inside a simulation is p, what's the probability that your master simulator is also simulated?

How tall is this tower, most likely?

Comment author: Cyan 08 January 2010 07:54:47PM *  1 point [-]

Being in a simulation within a simulation (nested to any level) implies being in a simulation. The proper decomposition is p = Σ over all positive N of P(simulation nested to exactly level N).
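As a toy illustration of that decomposition (the geometric per-level probabilities below are invented for the example, not claimed by anyone in the thread):

```python
# Decomposition of P(simulated) as a sum over nesting depths:
# p = sum over N >= 1 of P(nested to exactly level N).
# Toy model: per-level probability falls off geometrically,
# P(level N) = 0.4 * 0.5 ** (N - 1).
def p_level(n: int) -> float:
    return 0.4 * 0.5 ** (n - 1)

# Truncated sum over the first 199 levels; the geometric series
# converges to 0.4 / (1 - 0.5) = 0.8.
p_simulated = sum(p_level(n) for n in range(1, 200))
print(p_simulated)
```

Any per-level distribution would do; the decomposition only requires that the levels be mutually exclusive.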

Comment author: Thomas 08 January 2010 10:15:47PM 3 points [-]

The top simulator has N operations to execute before his free enthalpy basin is empty.

Every level down, this number is smaller. Before long, it becomes impossible to create a nontrivial simulation inside the current one. That is the bottom level.

This simulation tower is just a great way to squander all the free enthalpy you have. Is the top simulation master that stupid?

I doubt it.

Comment author: Kevin 09 January 2010 06:13:32AM *  -1 points [-]

In that sense, there's actually a significant risk to the singularity. Why should the simulation master (I usually facetiously use the phrase "our overlords" when referring to this entity) let us ever run a simulation that is likely to result in an infinitely nested simulation? Maybe that's why the LHC keeps blowing up.

Comment author: DanArmak 08 January 2010 11:49:51PM *  1 point [-]

You also need to include scenarios for infinitely-high towers, or closed-loop towers, or branching and merging networks, or one simulation being run in several (perhaps infinitely many) simulating worlds, or the other way around...

I don't think we can assign a meaningful prior to any of these, and so we can't calculate the probability of being in a simulation.

Comment author: Kevin 09 January 2010 06:15:19AM 0 points [-]

I don't think the probability calculation is meaningful, because the infinities mess it up. But you still need to ask: are you in the original 2010, or in one of infinitely many possible simulated 2010s? I can't assign a probability, but I have a strong intuition when comparing one to infinity.

Comment author: ArisKatsaris 19 April 2011 11:28:48AM 2 points [-]

The zero one infinity rule also makes it seem more unlikely this is the base level of reality.

The Zero-One-Infinity Rule hasn't been shown to apply to our reality, and even if it applied to our reality it would also permit "One".

It seems rather convenient that I am living in the most interesting period in human history.

Can you give us a list of most-to-least interesting periods in human history? You have an Anglo name, and I think you're living in a particularly boring period of Anglo-American history. (If you had an Arab name, this might be an interesting period, though not as interesting as if you were an Arab in the period of Mohammed or the first few Caliphs.)

but I would want to run ancestor simulations, for my merged transhuman mind to be able to assimilate the knowledge of running a human consciousness of myself through interesting points in human history.

You don't actually know what you would want with a transhuman mind. If simulations are fully conscious (the only sort of simulation relevant to our argument), I think that would be a particularly cruel thing for a transhuman mind to want.

Comment author: rortian 09 January 2010 09:06:39AM 0 points [-]

You are suggesting a world with much more energy than the one that we know. It seems you should assign a lower probability to there being a much higher-energy universe.

Comment author: Kevin 10 January 2010 10:32:50AM -1 points [-]

By the zero one infinity rule, I also think it likely that there are infinitely many spatial dimensions. Just a few extra spatial dimensions should give you plenty of computing power to run a lower-dimensional universe.

Comment author: rortian 11 January 2010 10:53:38PM 0 points [-]

Wow, I really am curious why you think this would apply to spatial dimensions.