If you believe the DA, and you also believe you're being simulated (with some probability), then you should believe you are among the last N% of humans in the simulation. So you don't escape the DA entirely.
However, it may be that if you believe yourself to be likely in a simulation, you shouldn't believe the DA at all. The DA assumes you know how many humans lived before you, and that you're not special among them. Both may be false in a simulation of human history: it may not have simulated all the humans and pre-humans who ever lived, and/or you may be in a ...
If I were doing it, I'd save computing power by only simulating the people who would program the AI. I don't think I'm going to do that, so it doesn't apply to me. Eliezer doesn't accept the Doomsday Argument, or at least uses a decision theory that makes it irrelevant, so it wouldn't apply to him.
So - I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
The success of the self-modifying AI would make the observations of that AI's builders extremely rare... why? Because the AI's observations count, and it is presumably many orders of magnitude faster?
For a moment, I will assume I...
See LW wiki's Doomsday Argument for reference.
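For readers unfamiliar with it, the arithmetic behind the Doomsday Argument is simple enough to sketch. This is a minimal illustration under the Self-Sampling Assumption, with purely illustrative population figures (not canonical estimates):

```python
# Doomsday Argument arithmetic (a sketch): under the Self-Sampling
# Assumption, treat your birth rank as a uniform random draw from
# all humans who will ever live.

def p_this_early(past_humans, future_humans):
    """P(a randomly sampled observer has already been born) = past / total."""
    total = past_humans + future_humans
    return past_humans / total

# Illustrative numbers: ~60 billion humans born so far. If humanity
# lasted long enough for 6 trillion more births, a random observer
# would find itself this early with probability:
print(p_this_early(60e9, 6e12))  # ~0.0099 -- i.e., being this early would be surprising
```

The DA then runs this in reverse: since you observe yourself to be this early, futures with vastly many more observers are correspondingly less likely.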
The problem I have with this kind of reasoning is that it causes early reasoners to come to wrong conclusions (though 'on average' the reasoning is most probably true).
Nope. I don't think ignoring causality to such an extent makes sense. Simulating many instances of humanity won't make other risks magically go away, because it basically has no effect on them.
Yet another example of how one can misuse rationality and start to believe bogus statements.
Seems backwards. If you are a society that has actually designed and implemented an AI and infrastructure capable of "creating billions of simulated humanities", it seems de facto that you are the "real" set, since you can see the simulated ones, and a recursive nesting of such things should, in theory, have artifacts of some sort (i.e., a "fork bomb" in Unix parlance).
I rather think that pragmatically, if a simulated society developed an AI capable of simulating society in sufficient fidelity, it would self-limit - either the simul...
Ah - that's much clearer than your OP.
FWIW - I suspect it violates causality under nearly everyone's standards.
You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is "no".
So - you are suggesting that if the AI generates enough simulations of the "prime" reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?
If so - the flaw lies in orders of infinity. For every way you can simulate a world, you can incorrectly simulate it an infinite number of other ways. So, if you are in a sim, then with probability approaching unity you are NOT in a simulation of the higher-level reality simulating you. And if it's not the same, you have no causality violation, because the first sim is not actually the same as reality; it just seems to be from the POV of an inhabitant.
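For what it's worth, the naive observer-counting step being attacked here is easy to make explicit. This is a sketch of the "chances approach 1" claim only; it deliberately ignores the orders-of-infinity objection, which is precisely where the counting breaks down:

```python
# Naive observer-counting behind "the chance you're in a sim
# approaches 1": with one base reality and k equally-peopled
# simulations, sample an observer uniformly at random.

def p_simulated(num_sims):
    """Probability a uniformly sampled observer lives in a simulation."""
    return num_sims / (num_sims + 1)

for k in (1, 10, 1_000_000_000):
    print(k, p_simulated(k))
# grows toward 1 as k grows: 0.5, ~0.909, ~0.999999999
```

The orders-of-infinity objection says the "equally-peopled, faithful copies" premise is doing all the work: if almost all simulations are *not* faithful copies of the level above them, this uniform count tells you nothing about being in a faithful one.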
The whole thing seems a bit silly anyway - not your argument, but the sim argument - from a physics POV. Unless we are actually in a sim right now and our understanding of physics is fundamentally broken, doing what's suggested would take more time and energy than has ever existed or will ever exist, and is still mathematically impossible (another orders-of-infinity thing).
FWIW - I suspect it violates causality under nearly everyone's standards.
Oh god damn it, Lesswrong is responsible for every single premise of my argument. I'm just the first to make it!
As for the rest of your post: I have to admit I did not consider this, but I still don't see why they wouldn't just create a less complex physical universe for the simulation.
Or maybe I'm misunderstanding you. My brain is feeling more than usually fried at the moment.
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities in the hope that this will serve as a Schelling point for them, and make their own universe almost certainly simulated.
Plausible?