James_Miller comments on Crazy Ideas Thread - October 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (114)
We are probably in a historical simulation. Most historical simulations are not of everyone but just of historically important people. Update on this hypothesis to increase your estimate that your life is historically significant. Look for clues as to why you might be important. For all of us it might be that Eliezer succeeds and we are one of the 10^(big number) simulations of his life and everything surrounding him.
Simulation argument case 3 obviously.
One consequence for ethics in this case is that you can create conscious beings by performing interactions equivalent to Turing tests with the people you come into contact with. Bonus points for spreading this meme to bring lots of conscious beings into existence (and put a heavy load on the simulator).
But wouldn't increasing load on the simulator increase the chances of the simulation being turned off, thus negating ALL the conscious and potentially conscious beings it was simulating?
That's exactly what an agent of the simulator would say.
Cue the rooftop chase.
But just like HPMOR's hat, the conscious being might switch back to nonsentience once the interaction ends.
Yeah, I wondered to what degree that could be optimized. But if you interact repeatedly and in complex ways, then shouldn't you notice that? Kind of a long-duration Turing test.
Hm, I wonder what the best place to find really happy people is?
Could you elaborate on whether you mean in general, in simulations, or elsewhere? And how is this related to my comment?
The thought was to induce the simulation of good experiences by being in close proximity to happy people.
Ah yes. Interesting idea. But I think it only 'counts' if the happiness is conscious. One has to work a bit harder for that.
I understand why most historical simulations would be of historically important people, but why would most or even a lot of simulations be historical simulations?
The set of all simulations is irrelevant in this case. What matters for us is the set of simulations that match our observations. For this set, historical simulations of various forms are naturally expected to predominate.
The past can't simulate the future, so we must be in a sim from a future timeline. Loosely speaking, this leaves open historical sims and 'fictional' sims. From the inside they may be hard to differentiate (consider that Harry Potter's world looks historical from his perspective, etc.)
If multiple levels of sim are likely, I have a simple argument that fictional sims are more likely than you'd think: for us to be in a historical sim with respect to the root physical universe, every sim level in the stack must be historical. If even one sim in the tree/stack/chain is fictional, then everything below that level is also fictional.
So 'fiction' is something that only increases with sim levels.
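The monotonicity claim above can be turned into a toy calculation: if each level of the stack is independently historical with probability p (an assumed, illustrative parameter, not anything claimed in the thread), then the chance that a depth-n observer sits in a fully historical sim is p^n, which can only shrink as n grows. A minimal sketch:

```python
# Toy model of the "fiction only accumulates with sim depth" argument.
# Assumption (illustrative, not from the thread): each sim level is
# independently 'historical' with probability p; a single fictional level
# anywhere above you makes your world fictional relative to the root universe.

def p_fully_historical(p: float, depth: int) -> float:
    """Probability that every level of a depth-`depth` sim stack is historical."""
    return p ** depth

# Even with a generous p = 0.9 per level, the probability decays with depth.
for depth in range(1, 6):
    print(depth, p_fully_historical(0.9, depth))
```

The design point is just that p^n is monotonically decreasing in n for any p < 1, which is the "fiction only increases with sim levels" claim in formula form.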
See this. Basically, if the future goes well it will have lots of computing power and if a tiny fraction of this power is used to make historical simulations most people in our situation will be living in historical simulations.
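The arithmetic behind "most people in our situation" is straightforward: if a future civilization re-runs our era even a modest number of times, simulated observers swamp the real ones. A back-of-envelope sketch where every number is an assumed placeholder, not a claim from the linked argument:

```python
# Hedged back-of-envelope version of the simulation argument's arithmetic.
# All magnitudes below are illustrative assumptions chosen for the example.

real_population = 1e10     # assumed: people who actually live through the era
sims_per_era = 1e6         # assumed: how many times the era gets re-simulated
people_per_sim = 1e10      # assumed: simulated people per run

simulated = sims_per_era * people_per_sim
fraction_simulated = simulated / (simulated + real_population)
print(fraction_simulated)  # overwhelmingly close to 1 under these assumptions
```

Under any assumptions where sims_per_era is large, the fraction of observers who are simulated approaches 1, which is the force of the "tiny fraction of computing power" point.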
Evidence?
In most stories, the majority of the population are NPCs.
Not if you weight each character by the number of words he or she speaks.
There's a paper on this called the "simulation argument". It's not evidence-based but logic-based.
Bostrom's paper doesn't purport to show that we are probably in a simulation, but only the weaker claim that at least one of these things is true:
1. Almost all civilizations at our stage go extinct before reaching technological maturity.
2. Almost no technologically mature civilizations are interested in running many simulations of their ancestors.
3. We are almost certainly living in a simulation.
(Bostrom puts it slightly differently; I think what I've written above is clearer and has fewer little holes.)
You will observe that this argument is more or less a triviality; Bostrom's contribution is thinking of making such an argument rather than filling in difficult steps in the reasoning once the argument is thought of.
I confess that my own response to this is indifference; I think there's a very good chance that the sort of computational superpowers needed to run a lot of faithful historical simulations will never be ours, and I don't see why a post-human civilization would bother to run a lot of simulations of their ancestors, so the most the argument can tell me is that it's not completely impossible that I might be in a simulation. Fair enough, but so what?
(Elaborating on that not-seeing-why: it's not very clear why our posthuman successors would bother running any ancestor-simulations, but to get "I'm probably in a simulation" out of Bostrom's argument what's necessary is either that the bit of my life I'm experiencing right now has been simulated not just once but many many times, or else that the posthumans are going to simulate not only their actual ancestors but many many people very like their ancestors in situations similar to their ancestors'. I see no reason to expect either of those.)
Have you heard of the Resurrection? In many belief systems (of the Western Middle-Eastern flavour specifically) it is the greatest goal that humanity could ever achieve. Historical simulation could implement it - in fact, it is the only way to implement it.
Go find an average Christian or Muslim or other believer-in-the-Resurrection, and say to them "Great news! You and your friends and family are indeed going to be raised from the dead. What'll happen is that you'll get to live exactly the same life you're living now all over again. You will have no recollection of having lived it before, suffering and disease and so on will be the same as ever, and you'll die at the end. Isn't that great?"
If they take you seriously enough to bother answering at all, do you really think they'll say "Yeah, that's exactly what I'm currently hoping for"?
I think that jacob_cannell's implication was this but without "you'll die at the end." You die at the end physically but the point of the simulation is to obtain your mental state at the end of your life, so you can transfer that to heaven.
(I don't believe there will ever be any possibility of rerunning a particular human being's life in any manner that would be even close to his actual life.)
Why "at the end of ... life"? If you're simulating someone, what's special about a particular point when the physical body died?
The point at which someone dies is the point at which their mind no longer causally effects the simulation. Naturally they can be copied out before then, but historical accuracy requires at least one version to remain in the sim until death.
And why should the AI care about historical accuracy?
I guess the real question is the difference between minds simulated on the basis of historical data (="previously existing") and minds simulated de novo, just plausible human minds invented out of thin air. Why should the AI favour previously existing minds?
BTW, affects the simulation, not effects.
Because that's the time when you would want to be resurrected.
If I'm being simulated, I have already been "resurrected". But what is the point of resurrection? You yourself say "so you can transfer ... to heaven" and given that, what is the reason for running the simulation at all instead of not collecting $200 and going directly to heaven?
Yeah, I wondered about that. But I don't think it makes sense. If you can get enough information about particular ancestors to simulate them (as opposed to simulating other people who happen to resemble them) then surely you have enough to put them directly in heaven / paradise / the New Jerusalem / whatever.
I'm inclined to agree. But, since I am the person I am largely because of the life I've lived, how can running a simulation that doesn't replicate my life help to determine the proper mental state to send me to heaven with?
Let me try to imagine the process working as well as possible. I've kept a journal for the past ten years, and screenshots of my computer every 30 seconds for nearly the last four (as well as webcam shots that can indicate exactly when I was and was not present at the computer). If someone were to simulate me they would have to simulate someone who went through the experiences and thoughts described in the journal, and who used his computer in the way implied by the screen and webcam shots.
Does all that info actually imply that someone could simply describe my current state, or would you get something more accurate by such a simulation? Perhaps an AI could simply use the info to directly produce a current state, but how would it do that, without simulating something like a process that passes through all that info? In other words, it's not clear to me that a simulation couldn't help.
Regarding the last point, basically I was saying that ultimately I don't expect jacob_cannell's idea to work, but I don't think it is unintelligible.
OK, so if I'm understanding correctly your suggestion is that in order to reconstruct your mind it would be necessary to do lots of simulations of you-like minds in order to adjust the (unfathomably many) parameters to find a mind that behaves in the right ways. I concede that that might be so.
It's an interesting (and disturbing) idea because it suggests that (little bits of?) our lives might be simulated billions of times, with small variations, in the process of trying to reconstruct us. (If, that is, anyone is so interested in reconstructing us at all.) This seems to me to make a big difference to the moral calculus of attempted simulated resurrection -- "we can reconstruct your mind-state and put a new instantiation of it somewhere wonderful" sounds like quite a different deal from "we can reconstruct your mind-state and put a new instantiation of it somewhere wonderful -- but the reconstruction process will involve billions of simulated minds that more or less closely resemble yours passing through good approximations to all the events of your life that we could find out about", and I'd be much less happy about the latter.
I have to say that it seems unlikely that enough information exists to do the reconstruction for anyone -- even people who save as much information about themselves as you do, which most of us don't. I mean, in some sense maybe it's still there since everything we do has effects on everything else in our future light cone, but I'd expect the information to be unusable in something like the way that energy becomes unusable when it turns into waste heat in rough thermal equilibrium with its surroundings.
Right. The simulation is the forward time sweep of an inference engine recreating historical people for the purpose of future resurrection.
If humanity survives to singularity level superintelligence, it's a rather obvious possibility. Doesn't even require any advanced violations of physics. It's actually a nearer term tech than most people think - the simplest forms of it will be possible not long after AGI.
It depends of course on one's definition of 'close' and the currently available information. Identity is subjective though - and that is what makes the approach viable. There is no such thing as the singular canonical correct version of a person. We are distributions over mindspace across the multiverse.
I am a distribution over mindspace..? across the multiverse..? Funny, I don't feel like a distribution. Do you have any evidence to support that, or is it just word salad?
Identity in general can refer to current self, past self, and future selves all as the same 'person'. That is a set. Mindspace is just the space of all possible minds, so the person-defining set is a distribution over mindspace.
I'm using 'multiverse' in the most general sense (nothing QM specific) to refer to all possible universes/futures etc.
Lots of people today play video games that contain characters from the past.
True, but I think there are reasons beyond mere lack of capability why those games don't involve neuron-level simulation of billions of specific past people.
I just published a simulation map, in which I conclude that most likely I live in a one-person me-simulation of a period near AI creation. In fact, there are two possible variants: 1. This is a simulation of Eliezer's life, and I am just one of thousands of people who are simulated for it with enough detail to be conscious observers. 2. It is a me-only simulation, where I am the only really simulated observer, and the others are p-zombies and simplified models.
Hypothesis 2 is favoured by some kind of power law in the simulation world, which says that simpler and cheaper simulations are more abundant (e.g. there are more novels than movies in our world). But if it is true, I should be doing something really important in FAI or other x-risk topics. I have done many things, like a map of x-risks prevention, but it is not enough to be simulated.
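The power-law intuition can be made concrete: suppose a simulation costing c resources is run with frequency proportional to c^(-alpha) for some alpha > 1 (both the functional form and the numbers below are assumptions for illustration). Then cheap one-person sims dominate the count of runs:

```python
# Toy power-law model of simulation abundance (all parameters assumed,
# not taken from the linked map). Cheaper simulations get run more often,
# so most runs - and hence most sampled observers like "me" - are cheap.

alpha = 2.0               # assumed power-law exponent over simulation cost
cost_me_sim = 1.0         # one fully-simulated observer (normalized cost)
cost_full_sim = 1e4       # assumed: thousands of fully-simulated people

runs_me = cost_me_sim ** -alpha
runs_full = cost_full_sim ** -alpha
share_me = runs_me / (runs_me + runs_full)
print(share_me)           # close to 1: me-simulations dominate by run count
```

This only shows that *run counts* favour cheap sims under the assumed law; whether observer-moments do too depends on how many conscious observers each run contains, which the toy model deliberately ignores.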
The simulation map:
http://lesswrong.com/r/discussion/lw/mv0/simulations_map_what_is_the_most_probable_type_of/
I'm surprised you think he actually has a high chance of creating AGI.
EY was only an example here. There are now so many players in the field that AGI will probably be created by someone else. It also seems that he is not working on coding AI.
You seem to be implying that you are more likely to be in a simulation if you are historically important. Interesting.