
Comment author: denimalpaca 01 April 2017 05:03:53PM 0 points [-]

I 100% agree that a "perfect simulation" and a non-simulation are essentially the same, noting Lumifer's comment that our programmer(s) are gods by another name in the case of simulation.

My comment is really about your second paragraph: how likely are we to see an imperfection? My reasoning about error propagation in an imperfect simulation would imply a fairly high probability of us eventually seeing an error. This assumes that we are a near-perfect simulation of the universe "above" ours, with "perfect" simulation being done only at small scales around conscious observers.
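
To make the error-propagation intuition concrete, here is a minimal toy sketch (invented for illustration; every name and parameter is made up): a 1D heat-equation solver whose interior is stepped exactly but whose boundary values are truncated to low precision, standing in for a simulation that is "perfect" near an observer and approximate at its edges. The truncation error diffuses inward and eventually reaches the central "observer" region.

```python
import numpy as np

N = 101        # grid points
alpha = 0.25   # diffusion coefficient (stable for the explicit scheme)

def step(u, lo, hi):
    """One explicit finite-difference step of the 1D heat equation
    with Dirichlet boundary values lo and hi."""
    v = u.copy()
    v[1:-1] = u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
    v[0], v[-1] = lo, hi
    return v

rng = np.random.default_rng(0)
u_exact = rng.random(N)        # the "true" universe
u_approx = u_exact.copy()      # the imperfect simulation of it

observer = slice(N // 2 - 5, N // 2 + 5)   # perfectly simulated region
for t in range(5000):
    u_exact = step(u_exact, u_exact[0], u_exact[-1])
    # The imperfect run truncates its boundary values to 2 decimals
    u_approx = step(u_approx, round(float(u_approx[0]), 2),
                    round(float(u_approx[-1]), 2))
    err = np.abs(u_exact[observer] - u_approx[observer]).max()
    if err > 1e-6:
        print(f"boundary error reaches the observer region at step {t}")
        break
```

Even though the observer region itself is stepped with exact arithmetic, the tiny boundary error contaminates it after a few hundred steps.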

So I'm not really sure whether you just didn't understand what I was getting at; we seem to agree, and you mostly just explained back to me what I was saying.

Comment author: dogiv 03 April 2017 02:06:24PM 0 points [-]

I guess where we disagree is in our view of how a simulation would be imperfect. You're envisioning something much closer to a perfect simulation, where slightly incorrect boundary conditions would cause errors to propagate into the region that is perfectly simulated. I consider it more likely that if a simulation has any interference at all (such as rewinding to fix noticeable problems), it will be filled with approximations everywhere. In that case the boundary condition errors aren't so relevant. Whether we see an error would depend mainly on whether there are any (which, like I said, is equivalent to asking whether we are "in" a simulation) and whether we have any mechanism by which to detect them.

Comment author: denimalpaca 31 March 2017 02:51:29PM 0 points [-]

An idea I keep coming back to, which would imply rejecting the idea that we are in a simulation, is the fact that the laws of physics remain the same regardless of your reference frame or your location in the universe.

You give the example of a conscious observer recognizing an anomaly, and the simulation runner rewinding time to fix the problem. If only the region within that observer's light cone is re-run, the simulation may exhibit strange new behavior at the edge of that cone, propagating the error. When dealing with lower-resolution simulations, I don't think the error can be recovered so much as moved.

It makes the most sense to me that if we are in a simulation, it is a "perfect" simulation in the sense that the most foundational forces and quantum effects are simulated all the time, because they are all, in a way, interacting with each other all the time.

Comment author: dogiv 31 March 2017 05:08:31PM 0 points [-]

If it is the case that we are in a "perfect" simulation, I would consider that no different from being in a non-simulation. The concept of being "in a simulation" is useful only insofar as it predicts some future observation. Given the various multiverses that are likely to exist, any perfect simulation an agent might run is probably just duplicating a naturally-occurring mathematical object which, depending on your definitions, already "exists" in baseline reality.

The key question, then, is not whether some simulation of us exists (nearly guaranteed) but how likely we are to encounter an imperfection or interference that would differentiate the simulation from the stand-alone "perfect" universe. Once that happens, we are tied in to the world one level up and should be able to interact with it.

There's not much evidence about the likelihood of a simulation being imperfect. Maybe imperfect simulations are more common than perfect ones because they're more computationally tractable, but that's not a lot to go on.

Comment author: dogiv 28 March 2017 07:54:37PM 2 points [-]

Does anybody think this will actually help with existential risk? I suspect the goal of "keeping up" or preventing irrelevance after the onset of AGI is pretty much a lost cause. But maybe if it makes people smarter it will help us solve the control problem in time.

Comment author: lifelonglearner 27 March 2017 05:32:19PM *  2 points [-]

I've started learning some web development, and I put together a "plan-bot" that asks you a series of questions about your plans, similar to Murphyjitsu.

Here is the link, if anyone wants to play around with it.

Comment author: dogiv 28 March 2017 04:00:10PM 2 points [-]

I just tried this out for a project I'm doing at work, and I'm finding it very useful--it forces me to think about possible failure modes explicitly and then come up with specific solutions for them, which I guess I normally avoid doing.

In response to comment by dogiv on Act into Uncertainty
Comment author: Lumifer 27 March 2017 05:56:11PM *  0 points [-]

"even though in fact no one will read it"

How do you know this?

Note the difference between what you intend and what might happen to you and your property regardless of your intentions.

Comment author: dogiv 27 March 2017 09:42:21PM 0 points [-]

Encrypting/obscuring it does help a little bit, but doesn't eliminate the problem, so it's not just that.

In response to Act into Uncertainty
Comment author: Viliam 27 March 2017 03:13:15PM *  1 point [-]

Mathematically, refusing to make a prediction may be equivalent to going with some prior distribution of possible values.
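
As a minimal sketch of that equivalence (my own toy example, assuming a log scoring rule; the stated prior below is invented): one natural convention scores a refusal to predict over n outcomes as the uniform distribution, so "no prediction" behaves exactly like asserting p = 1/n for every outcome.

```python
import numpy as np

n = 6                                              # e.g. a die roll
stated = np.array([0.4, 0.2, 0.1, 0.1, 0.1, 0.1])  # an explicit prior
refusal = np.full(n, 1 / n)                        # "no prediction" = uniform

outcome = 0                                        # suppose face 1 comes up
print("log score, stated prior:", np.log(stated[outcome]))   # ~ -0.92
print("log score, refusing:    ", np.log(refusal[outcome]))  # ~ -1.79
```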

Socially, it's different. For example, people have different "prior distributions", so talking about yours explicitly exposes a lot of information about you, while refusing to make a prediction exposes little. (You might get into unnecessary conflicts over the parts where the probability is so small that it wouldn't make a practical difference anyway.)

I suspect that refusing to make a prediction, even for yourself, is just an internalization of this rule. You know that doing something would make other people laugh at you, so it feels silly to do it even if no one is watching.

Comment author: dogiv 27 March 2017 04:16:39PM 1 point [-]

I agree with that... personally I have tried several times to start a private journal, and every time I basically end up failing to write down any important thoughts because I am inhibited by the mental image of how someone else might interpret what I write--even though in fact no one will read it. Subconsciously it seems much more "defensible" to write nothing at all, and therefore effectively leave my thoughts unexamined, than to commit to having thought something that might be socially unacceptable.

Comment author: madhatter 21 March 2017 10:28:54PM 1 point [-]

Can someone explain why UDT wasn't good enough? In what case does UDT fail? (Or is it just hard to approximate with algorithms?)

Comment author: dogiv 24 March 2017 02:26:12PM 0 points [-]

I've been trying to understand the differences between TDT, UDT, and FDT, but they are not clearly laid out in any one place. The blog post that went along with the FDT paper sheds a little bit of light on it--it says that FDT is a generalization of UDT intended to capture the shared aspects of several different versions of UDT while leaving out the philosophical assumptions that typically go along with it.

That post also describes the key difference between TDT and UDT by saying that TDT "makes the mistake of conditioning on observations", which I think is a reference to Gary Drescher's objection that in some cases TDT would have you decide as if you could choose the output of a pre-defined mathematical operation that is not part of your decision algorithm. I am still working on understanding Wei Dai's UDT solution to that problem, but presumably FDT solves it in the same way.

Comment author: dogiv 22 March 2017 09:43:12PM 7 points [-]

It does seem like a past tendency to overbuild things is the main cause. Why are the pyramids still standing five thousand years later? Because the only way they knew to build a giant building back then was to make it essentially a squat mound of solid stone. If you wanted to build a pyramid the same size today, you could probably do it for 1/1000 of the cost, but it would be hollow and wouldn't last even 500 years.

Even when cars were new, they couldn't be overbuilt the way buildings were in antiquity, because they still had to be able to move themselves around. Washing machines are somewhere in between, I guess. But I don't think rich people demand less durability. If anything, rich people have more capital to spend up front on a quality product and more luxury to research which one is a good long-term investment.

Comment author: username2 22 March 2017 12:31:42AM 0 points [-]

I agree with your concern, but I think that you shouldn't limit your fear to party-aligned attacks.

For example, the Thirty-Meter Telescope in Hawaii was delayed by protests from a group of people who are most definitely "liberal" on the "liberal/conservative" spectrum (in fact, "ultra-liberal"). The effect of the protests is definitely significant. While it's debatable how close the TMT came to cancellation, the current plan is to grant no more land to astronomy atop Mauna Kea.

Comment author: dogiv 22 March 2017 05:06:49PM 0 points [-]

Agreed. There are plenty of liberal views that reject certain scientific evidence for ideological reasons--I'll refrain from examples to avoid getting too political, but it's not a one-sided issue.

Comment author: Viliam 21 March 2017 01:31:28PM 0 points [-]

I have a feeling that perhaps in some sense politics is self-balancing. You attack things that are associated with your enemy, which means that your enemy will defend them. Assuming you are an entity that only cares about scoring political points, if your enemy uses rationality as an applause light, you will attack rationality, but if your enemy uses postmodernism as an applause light, you will attack postmodernism and perhaps defend (your interpretation of) rationality.

That means that the real risk for rationality is not that everyone will attack it. As soon as the main political players all turn against rationality, fighting it will become less important to them, because attacking things the others consider sacred will be more effective. You will soon get rationality apologists saying "rationality per se is not bad; it's only rationality as practiced by our political opponents that leads to horrible things".

But if some group of idiots were to choose "rationality" as their applause light while doing it completely wrong, and everyone else therefore turned against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as 1984-style obedience to the Communist Party -- as the official applause light of his regime. In that world, non-communists would hate the word "rationality" because of its association with communism, and communists would insist that the only true meaning of rationality is blind obedience to the Party. Imagine trying to teach people x-rationality in that universe.)

Comment author: dogiv 21 March 2017 05:34:02PM 0 points [-]

This may be partially what has happened with "science" but in reverse. Liberals used science to defend some of their policies, conservatives started attacking it, and now it has become an applause light for liberals--for example, the "March for Science" I keep hearing about on Facebook. I am concerned about this trend because the increasing politicization of science will likely result in both reduced quality of science (due to bias) and decreased public acceptance of even those scientific results that are not biased.
