Comment author: woodchopper 17 May 2016 08:59:17AM *  1 point

This doesn't seem very coherent.

As it happens, a perfect and truthful predictor has declared that you will choose torture iff you are alone.

OK. Then that means if I choose torture, I am alone, and if I choose the dust specks, I am not alone. I don't want to be tortured, and I don't really care about 3^^^3 people getting dust specks in their eyes, even if they're all 'perfect copies of me'. I am not a perfect utilitarian.

A perfect utilitarian would choose torture, though, because from a utilitarian point of view one person getting tortured is not as bad as 3^^^3 dust specks in eyes.
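
A minimal sketch of the utilitarian arithmetic at issue, assuming each speck carries some fixed disutility ε > 0 that aggregates linearly across people (the linear aggregation is itself an assumption, and is exactly what the comment above declines to accept):

    U(\text{specks}) = -\,3\uparrow\uparrow\uparrow 3 \cdot \varepsilon \;<\; -D = U(\text{torture}),
    \quad \text{for any } \varepsilon > 0 \text{ and any finite torture disutility } D

Since 3^^^3 exceeds D/ε for any finite D and ε, the specks side comes out worse no matter how small ε is.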

Comment author: woodchopper 15 May 2016 12:49:19PM *  -1 points

I think a very interesting trait of humans is that we can, for the most part, collaboratively truth-seek on most issues, except those defined as 'politics', where a large proportion of the population, of varying IQs, some extremely intelligent, believe things that are quite obviously wrong to anyone who has spent any amount of time seeking the truth on those issues without prior bias.

The ability of humans to totally turn off their rationality, to organise the 'facts' as they see them so as to confirm their biases, is nothing short of incredible. If humans treated everything like politics, we would certainly get nowhere.

Unfortunately, I think trying to collaboratively truth-seek about political issues on a forum like LessWrong would be a community hazard. People would not be able to get over their biases, despite being very open to changing their minds on all other issues.

Comment author: pragmatist 03 May 2016 04:01:05PM *  1 point

I am taking issue with the conclusion that we are living in a simulation even given premises (1) and (2) being true.

(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.

Right. When I say "his conclusion is still true", I mean the conclusion that at least one of (1), (2) or (3) is true. That is the conclusion of the simulation argument, not "we are living in a simulation".

If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.

This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).

I actually suspect that you are doing neither of these things, though. You seem to be simply denying that the minds our post-human descendants will simulate (if any) will be similar to our own minds. This is what your game AI comparisons suggest. In that case, your argument is not incompatible with Bostrom's conclusion. Remember, the conclusion of the simulation argument is that either (1), (2), or (3) is true. You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their descendants. If that's all you're claiming, then you're not disagreeing with the simulation argument.

Comment author: woodchopper 03 May 2016 05:05:38PM 0 points

(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are "true" I mean, for example, in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing; my mistake.

You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their descendants.

I think Bostrom's argument applies even if they aren't "highly accurate". If they are simulated at all, you can apply his argument. I think the core of his argument is that if simulated minds outnumber "real" minds, then it's likely we are all simulated. I'm not really sure how our being "accurately simulated" minds changes things. It does make it easier to reason outside of our little box: if we are highly accurate simulations, then we can actually know a lot about the real universe, and in fact studying our little box is pretty much akin to studying the real universe.

This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).

Let's assume I'm trying to draw conclusions about the universe. I could be a brain in a vat, but there's not really anything to be gained by assuming that; whether it's true or not, I may as well act as if the universe can be understood. Let's say I conclude, from my observations of the universe, that there are many more simulated minds than non-simulated minds. Does it then follow that I am probably a simulated mind? Bostrom says yes. I say no, because the reasoning about the universe that led me to the conclusion that there are more simulated minds than non-simulated ones is predicated on my not being a simulated mind. I would almost say it's impossible to reason your way into believing you're in a simulation. It's self-referential.
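
A minimal sketch of the disputed step, assuming the Self-Sampling Assumption and a single reference class covering both simulated and unsimulated minds (both assumptions; the counts are invented for illustration):

    # Self-Sampling Assumption (SSA): reason as if you were a random
    # sample from all the minds in your reference class.
    def credence_simulated(n_simulated, n_unsimulated):
        """SSA credence that I am a simulated mind, given the two counts."""
        return n_simulated / (n_simulated + n_unsimulated)

    # If simulated minds vastly outnumber unsimulated ones...
    print(credence_simulated(10**6, 10**3))  # ~0.999

    # The objection above, in these terms: both counts were derived from
    # observations that are trustworthy only if I am NOT simulated, so
    # plugging them back into this formula is self-referential.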

I'm going to have to think about this harder, but do keep criticising what I'm saying as you have been, because it certainly helps flesh things out in my mind.

Comment author: RowanE 03 May 2016 02:17:52PM 0 points

It sounds like you expect it to be obvious, but nothing springs to mind. Perhaps you should actually describe the insane reasoning or conclusion that you believe follows from the premise.

Comment author: woodchopper 03 May 2016 03:23:36PM 4 points

We could have random number generators that choose the geometry an agent in our simulation finds itself in every time it steps into a new room. We could make the agent believe that when you put two things together and group them, you get three things. We could add random bits to an agent's memory.

There is no limit to how perverted a view of the world a simulated agent could have.
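
A toy sketch of the sort of thing described above; every rule and name here is invented purely for illustration:

    import random

    def enter_new_room():
        """Re-roll the room's geometry each time the agent steps through a door."""
        return {
            "walls": random.randint(3, 12),          # the room is an n-gon with a random n
            "angle_sum": random.uniform(100, 3600),  # interior angles need not be consistent
        }

    def group_two_things(a, b):
        """Whenever the agent groups two things, the simulation reports three."""
        return [a, b, object()]  # a third object appears from nowhere

    def corrupt_memory(memory, n_bits=1):
        """Flip random bits in the agent's stored memories."""
        buf = bytearray(memory)
        for _ in range(n_bits):
            i = random.randrange(len(buf))
            buf[i] ^= 1 << random.randrange(8)
        return bytes(buf)

An agent raised on rules like these would induct an arithmetic and a geometry bearing no resemblance to those of the universe actually running it.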

Comment author: pragmatist 03 May 2016 10:04:14AM *  7 points

First, Bostrom is very explicit that the conclusion of his argument is not "We are probably living in a simulation". The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won't reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.

Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other places). He essentially flips your disjunctive reasoning around. He argues that we are either in a simulation or we are not. If we are in a simulation, then claim 3 is obviously true, by hypothesis. If we are not in a simulation, then our ordinary empirical evidence is a veridical guide to the universe (our universe, not some other universe). This means the evidence and assumptions used as the basis for the simulation argument are sound in our universe. It follows that, since claim 3 is false by hypothesis, either claim 1 or claim 2 is very likely to be true. It's worth noting that these two are claims about our universe, not about some parent universe.

In other words, your objection is based on the argument that if we are in a simulation, there is no good reason to trust the assumptions of the simulation argument (such as assumptions about how our simulators will behave). Bostrom's reply is that if we are in a simulation, then his conclusion is true anyway, even if the specific reasoning he uses doesn't apply. If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.

There does seem to be some sort of sleight-of-mind going on here, if you want my opinion. I generally feel that way about most non-trivial uses of anthropic reasoning. But the exact source of the sleight is not easy for me to detect. At the very least, Bostrom has a prima facie response to your objection, so you need to say something about why his response is flawed. Making your objection and Bostrom's response mathematically precise would be a good way to track down the flaw (if any).
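
One way to begin making it precise: a sketch of the disjunctive structure, writing S for "we are in a simulation" and R for "our empirical evidence is veridical, so the simulation argument's premises hold". (How to carry the "very likely" qualifier through the disjunction is left open here, and is plausibly where any sleight hides.)

    \begin{aligned}
    &S \lor \lnot S && \text{excluded middle} \\
    &S \rightarrow (3) && \text{definition of claim 3} \\
    &\lnot S \rightarrow R && \text{unsimulated evidence is veridical} \\
    &R \rightarrow \big( (1) \lor (2) \lor (3) \big) && \text{the core simulation argument} \\
    &\therefore\ (1) \lor (2) \lor (3) && \text{by cases on } S
    \end{aligned}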

Comment author: woodchopper 03 May 2016 03:10:00PM *  0 points

I am taking issue with the conclusion that we are living in a simulation even given premises (1) and (2) being true.

So I am struggling to understand his reply to my argument. In some ways it simply looks like he's saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which are not reliable if we are in a simulation, and which support an obviously wrong conclusion if we aren't in a simulation.

If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.

If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.

He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.

Comment author: bogus 02 May 2016 08:45:33PM *  1 point

you cannot reason about what is outside our universe from within our universe. It doesn't make sense to do so.

Of course you can. Anyone who talks about any sort of 'multiverse' - or even causally disconnected regions of 'our own universe' - is doing precisely this, whether they realize it or not.

Comment author: woodchopper 03 May 2016 03:07:40AM 0 points

No. Think about what sort of conclusions an AI in a game we make would come to about reality. Pretty twisted, right?

Comment author: woodchopper 02 May 2016 06:16:47PM *  0 points

The "simulation argument" by Bostrom is flawed. It is wrong. I don't understand why a lot of people seem to believe in it. I might do a write up of this if anyone agrees with me, but basically, you cannot reason about without our universe from within our universe. It doesn't make sense to do so. The simulation argument is about using observations from within our own reality to describe something outside our reality. For example, simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are simulated agents. However, the observation that most agents will eventually be or already are simulated only applies in this reality/universe. If we are in a simulation, all of our logic will not be universal but instead will be a reaction to the perverted rules set up by the simulation's creators. If we're not in a simulation, we're not in a simulation. Either way, the simulation argument is flawed.

Comment author: Gram_Stone 02 May 2016 05:47:41PM 2 points

This is actually just the sort of thing that I'm trying to say. I'm saying that when you understand guilt as a source of information, and not a thing that you need to carry around with you after you've learned everything you can from it, then you can take the weight off of your shoulders. I'm saying that maybe if more people did this, it wouldn't be as hard to do extraordinary kinds of good, because you wouldn't constantly be feeling bad about what you conceivably aren't doing. Most of what people consider conceivable would require an unrealistic sort of discipline. Punishing people likely just reduces the amount of good that they can actually do.

Am I right that we seem to agree on this?

Comment author: woodchopper 02 May 2016 06:04:30PM 1 point

I think I agree with what you're saying for the most part. If your goal is, say, reducing suffering, then you have to consider the best way of convincing others to share your goal. If you started killing people who run factory farms, you would probably turn a lot of the world against you, and so fail in your goal. And you have to consider the best way of convincing yourself to keep pursuing your goal, now and into the future, since human goals can change depending on circumstances and experiences.

In terms of guilt, finding little tricks to rid yourself of guilt over various things probably isn't a good way to keep yourself caring about and doing as much as you can for a certain issue. I can know that something is wrong, but if I don't feel guilty about doing nothing, I'm probably not going to exert myself as hard in trying to fix it. If I can tell myself "I didn't do it, therefore it's none of my concern, even though it is technically a bad thing" and absolve myself of guilt, that's simply going to make me less likely to do anything about the issue.

Comment author: woodchopper 02 May 2016 05:37:49PM 1 point

You have to consider that humans don't have perfect utility functions. Even if I want to be a moral utilitarian, it is a fact that I am not. So I have to structure my life around keeping myself as morally utilitarian as possible. Brian Tomasik talks about this. It might be true that I could reduce more suffering by forgoing an extra donut, but I'm going to give up on the entire task of being a utilitarian if I can't allow myself some luxuries.

Comment author: jacob_cannell 11 March 2016 05:42:58AM *  3 points

I take this as another sign favoring transcension over expansion, and also weird-universes.

The standard dev model is expansion: habitable planets lead to life, life leads to intelligence, intelligence leads to tech civs, which then expand outward.

If the standard model were correct, then barring any weird late filter, the first civ to form in each galaxy would colonize the rest and thus preclude other civs from forming.

Given that the strong mediocrity principle holds (habitable planets are the norm, life is probably the norm, there is an enormous expected number of bio worlds, etc.), if the standard model is correct then most observers will find themselves on an unusually early planet, because the elder civs prevent late civs from forming.

But that isn't the case, so that model is wrong. In general it looks like a filter is hard to support, given how strongly all the evidence has lined up for mediocrity, and the inherent complexity penalty.

Transcension remains a viable alternative. Instead of expanding outward, each civ progresses to a tech singularity and implodes inward, perhaps by creating new baby universes, and perhaps using that to alter the distribution over the multiverse, thus gaining the ability to effectively alter physics (current models of baby universe creation suggest the parent universe has some programming-level control over the physics of the seed). This would allow exponential growth to continue, which is enormously better than expansion, which only provides polynomial growth. So everyone does this if it's possible. Furthermore, if it's possible anywhere in the multiverse, then those pockets expand faster, and thus they have dominated and will dominate everywhere. So if that's true, the multiverse has been or will be edited/restructured/shaped by (tiny, compressed, cold, invisible) gods.
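
A sketch of the growth comparison being appealed to, assuming expansion at some speed v <= c with resources proportional to the volume swept out, and assuming transcension sustains some growth rate r > 0 (both idealizations):

    R_{\text{expand}}(t) \propto \tfrac{4}{3}\pi (vt)^3 = O(t^3), \qquad
    R_{\text{transcend}}(t) \propto e^{rt}, \qquad
    \lim_{t\to\infty} \frac{e^{rt}}{t^3} = \infty

Under these assumptions, any lineage that keeps compounding inward eventually dwarfs every expander, which is what drives "everyone does this if it's possible".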

Barring transcension weirdness, another possibility is that the multiverse is somehow anthropically tuned for about 1 civ per galaxy, with galaxy size co-tuned for this, as it provides a nice-sized niche for evolution, similar to the effect of continent/island distributions at the earth scale. Of course, this still requires a filter, which has a high complexity penalty.

Comment author: woodchopper 30 April 2016 06:34:28PM 0 points

What you are saying doesn't follow from the premises, and is about as accurate as me saying that magic exists and Harry Potter casts a spell on too-advanced civilisations.
