If you believe the DA, and you also believe you're being simulated (with some probability), then you should believe yourself to be among the last N% of humans in the simulation. So you don't escape the DA entirely.
However, it may be that if you believe yourself to be likely in a simulation, you shouldn't believe the DA at all. The DA assumes you know how many humans lived before you, and that you're not special among them. Both may be false in a simulation of human history: it may not have simulated all the humans and pre-humans who ever lived, and/or you may be in a ...
If I were doing it, I'd save computing power by only simulating the people who would program the AI. I don't think I'm going to do that, so it doesn't apply to me. Eliezer doesn't accept the Doomsday Argument, or at least uses a decision theory that makes it irrelevant, so it wouldn't apply to him.
So - I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
The success of the self-modifying AI would make the observations of that AI's builders extremely rare... why? Because the AI's observations count, and it is presumably many orders of magnitude faster?
For a moment, I will assume I...
See LW wiki's Doomsday Argument for reference.
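For concreteness, here is a minimal sketch of the usual DA calculation (assuming, as the argument does, that your birth rank n is uniformly distributed among the N humans who will ever live; the 95% level is just the conventional choice):

$$ P\!\left(\frac{n}{N} > 0.05\right) = 0.95 \quad\Longrightarrow\quad N < 20\,n \text{ with 95\% confidence.} $$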
The problem I have with this kind of reasoning is that it causes early reasoners to come to wrong conclusions (though 'on average' the reasoning is most probably true).
Nope. I don't think ignoring causality to such an extent makes sense. Simulating many instances of humanity won't make other risks magically go away, because it basically has no effect on them.
Yet another example of how one can misuse rationality and start to believe bogus statements.
Seems backwards. If you are a society that has actually designed and implemented an AI and infrastructure capable of "creating billions of simulated humanities" - it seems that de facto you are the "real" set, as you can see the simulated ones, and a recursive nesting of such things should, in theory, have artifacts of some sort (i.e. a "fork bomb" in Unix parlance).
I rather think that, pragmatically, if a simulated society developed an AI capable of simulating society at sufficient fidelity, it would self-limit - either the simul...
I don't have a model which I believe with certainty, even granting that MWI is true.
I think that, given MWI, your consciousness is in any world in which you exist, so that if you kill yourself in the other worlds, you only exist in the worlds in which you didn't kill yourself. I'm not sure what else could happen; obviously you can't exist in the worlds in which you're dead.
What happens if you die in a non-MWI world? Pretty much the same as in the case of MWI with random branch choice. If your random branch happens to be a bad one, you cease to exist, and maybe some of your clones in other branches are still alive.
So at time t, the data is already determined from the computer's perspective, but not from mine. At t+dt, the data is determined from my perspective, as I've awoken. In the time between t and t+dt, it's meaningless to ask what "branch" I'm in; there's no test I can do to determine that in theory, as I only awaken if I'm in the data=n branch. It's meaningful to other people, but not to me. I don't see anywhere that requires non-local laws in this scenario.
Non-locality is required if you claim that you (that copy of you which has your consciousness) will always wake up. Otherwise, it's just a twisted version of Russian roulette and has nothing to do with quantum mechanics.
At time t, the computer either shoots you, or not. At time t + dt, its bullet kills you (or not). So you say that at time t you will go to the branch where the computer doesn't kill you. But such a choice of branch requires information from time t + dt (whether you are alive or not in that branch). So, physical laws would have to perform a look-ahead in time to decide in which Everett branch to put your consciousness.
Now, imagine that your (quantum) computer generates a random number n from the Poisson distribution. Then, it will kill you after n days. Now n = ... what? Well, thanks to thermodynamics, your (and the computer's) lifespan is limited, so hopefully it will be a finite number -- but note that if the universe allowed an unbounded lifespan, there would be a logical contradiction in the physical laws. Anyway, you can see that the look-ahead in time required after the random number generation can be arbitrarily large. That's what I mean by non-locality here.
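To make the "arbitrarily large" point concrete: the Poisson distribution has unbounded support, so whatever finite look-ahead K you fix in advance, there is a nonzero probability that n exceeds it (λ here is just a placeholder for whatever rate the computer's generator uses):

$$ P(n = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \quad k = 0, 1, 2, \ldots, \qquad\text{so } P(n > K) > 0 \text{ for every finite } K. $$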
Non-locality is required if you claim that you (that copy of you which has your consciousness)
I deny that this is meaningful. If there are two copies of me, both "have my consciousness". I fail to see any sense in which my consciousness must move to only one copy.
So you say that at time t you will go to the branch where the computer doesn't kill you.
I do not claim that. I claim that I exist in both branches, up until one of them no longer contains my consciousness, because I'm dead, and then I only exist in one branch. (In fact, I can cons...
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities in the hope that this will serve as a Schelling point to them, and make their own universe almost certainly simulated.
Plausible?