At some point in the future we may be able to scan someone's brain at very high resolution and "run" them on a computer. [1] When I first heard this as a teenager I thought it was interesting but not hugely important. Running people faster or slower and keeping backups came immediately to mind, and Wikipedia adds space travel, but those three by themselves don't seem like they change that much. Thinking speed doesn't seem to be a major limiting factor in coming up with good ideas, we generally only restore from backups in cases of rare failure, and while space travel would dramatically affect the ability of humans to spread, [2] it doesn't sound like it changes the conditions of life.

This actually undersells emulation by quite a lot. For example, "backups" let you repeatedly run the same copy of a person on different information. You can identify a person when they're at their intellectual or creative best, save that state, and give them an hour to think about a new situation. Add in potentially increased simulation speed and parallelism, and you could run many of these copies at once, each looking into a different candidate approach to a problem.

With emulations you can get around the mental overhead of keeping all your assumptions about a direction of thought in your mind at once. I might not know if X is true, and spend a while thinking about what should happen if it's true and another while about what should happen if it's not, but it's hard for me to get past the problem that I'm still uncertain about X. With an emulation that you can reset to a saved state, however, you could do multiple runs, giving some copies a strong assurance that X is true and others a strong assurance that X is false.
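To make the branching idea concrete, here is a minimal hypothetical sketch. None of this is a real API: the `Emulation` class and its `snapshot` and `run` methods are invented stand-ins for whatever an emulation platform might actually provide.

```python
# Hypothetical emulation API -- illustrative only; no such library exists.
from dataclasses import dataclass


@dataclass
class Emulation:
    state: str  # stand-in for a full brain-state snapshot

    def snapshot(self) -> "Emulation":
        """Save the current state so it can be restored and re-run later."""
        return Emulation(self.state)

    def run(self, briefing: str, hours: float) -> str:
        """Run this copy for some subjective time after giving it a briefing."""
        return f"conclusions of [{self.state}] after {hours}h, told: {briefing!r}"


# Capture the person at their creative best, once.
best_self = Emulation("Alice at her sharpest").snapshot()

# Branch on an uncertain assumption X: each copy reasons without having to
# carry the uncertainty about X around with it.
branch_x_true = best_self.snapshot().run("Assume X is definitely true.", hours=1)
branch_x_false = best_self.snapshot().run("Assume X is definitely false.", hours=1)

print(branch_x_true)
print(branch_x_false)
```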

You can also run randomized controlled trials where the experimental group and the control group are the same person. This should hugely bring down experimental cost and noise, allowing us to make major and rapid progress in discovering what works in education, motivation, and productivity.
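To see why the noise drops, here is a small illustrative simulation (Python; all the numbers are invented). It compares a conventional two-group trial with one where treatment and control are copies of the same emulation restored from one snapshot: the between-person variation cancels in the paired version, so its estimates cluster much more tightly around the true effect.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.5        # hypothetical effect of some intervention
BETWEEN_PERSON_SD = 2.0  # variation between different people
WITHIN_RUN_SD = 0.5      # run-to-run noise for the same person


def outcome(person_baseline, treated):
    """Simulated measurement: baseline ability plus noise, plus the effect if treated."""
    return person_baseline + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, WITHIN_RUN_SD)


def classic_trial(n):
    """Different people in the treatment and control groups."""
    control = [outcome(random.gauss(0, BETWEEN_PERSON_SD), False) for _ in range(n)]
    treated = [outcome(random.gauss(0, BETWEEN_PERSON_SD), True) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)


def same_person_trial(n):
    """Each 'pair' is two copies of the same emulation, restored from one snapshot."""
    diffs = []
    for _ in range(n):
        baseline = random.gauss(0, BETWEEN_PERSON_SD)  # one person, copied twice
        diffs.append(outcome(baseline, True) - outcome(baseline, False))
    return statistics.mean(diffs)


# With the same number of measurements, the paired design's estimates
# scatter far less around the true effect of 0.5.
print("classic:    ", [round(classic_trial(30), 2) for _ in range(5)])
print("same person:", [round(same_person_trial(30), 2) for _ in range(5)])
```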

(Backups stop being about error recovery and fundamentally change the way an emulation is useful.)

These ideas aren't new here [3] but I don't see them often in discussions of the impact of emulating people. I also suspect there are many more creative ways of using emulation; what else could you do with it?


[1] I think this is a long way off but don't see any reasons why it wouldn't be possible.

[2] Which has a big effect on estimates of the number of future people.

[3] I think most of these ideas go back to Carl Shulman's 2010 Whole Brain Emulation and the Evolution of Superorganisms.

I also posted this on my blog

34 comments

Even with baseline ems, and even setting aside the issues that came up during the FOOM debate, there are a lot of subjective details.

  • Backups. I mean, seriously. Backups. You can't die by accident. Yeah, this was noted, but it's seriously subjectively awesome compared to being a very squishy human.

  • Flip side: Not having to worry about your loved ones? I can think of certain parents who would go from bundle-of-stress to, well, only somewhat more stressed than what they were as non-parents.

  • Bad people might feel fewer inhibitions about killing people who have backups. That's not so great.

  • Brain bleach that works.

  • Instant readouts of metrics on your mental state. Like, you can have an indicator for just how much your emulated limbic system is active. Many fewer arguments over whether you're calm enough to be having this discussion right now.

  • Ability to guarantee that you can keep confidence when giving advice, by forking a doomed copy... so long as the emulating environment can be trusted.

  • Two roads diverged in a yellow wood, and I took them.

Robin Hanson thinks that strong cooperation within copy clans won't have a huge impact because there will still be a tradeoff between cooperation and specialization. But if the clan consists of copies of someone like John von Neumann, it can easily best world-class specialists in every field, just by forking a bunch of copies and having each copy take a few subjective months to read up on one field and do a bit of practicing. There is little need for such a clan to cooperate with outsiders (except maybe investors/donors for the initial capital) and I don't see what can prevent it from taking over the world as a singleton once it comes into existence.

I would think that we would probably want to have cultural, ethical, and legal rules against infinitely copying yourself. For one thing, that leads to the rather dystopian situation Robin Hanson was talking about; and for another, it would lead to a rapidly diminishing amount of variety among humans, which would be sad. One or two copies of you might be ok, but would you really want to live in a world where there are billions of copies of you, billions of copies of von Neumann, and almost no one else to talk to? Remember, you are now immortal, and the amount of subjective time you are going to live is going to be vast; boredom could be a huge problem, and you would want a huge variety of people to interact with and be social with, wouldn't you?

I really think that we wouldn't want to allow a large amount of copying of the exact same mind to happen.

Over the course of even a few centuries of subjective existence, I expect the divergence experienced by copies of me would be sufficient to keep me entertained.

A few centuries, sure, but how many eternities?

My point was that divergence is rapid enough that a few centuries would suffice to create significant diversity. Over an eternity, it would create maximal diversity, but of course that's not what you're asking: you're asking whether the diversity created by copies of me would keep me entertained indefinitely.

And of course I don't know, but my (largely unjustified) intuition is that no, it wouldn't.

That said, my intuition is also that the diversity created by an arbitrary number of other people will also be insufficient to keep me entertained indefinitely, so eternal entertainment is hardly a reason to care about whether I have "someone else" to talk to.

Eh. "Talk to" in this case is quite broad. It doesn't just literally mean actually talking to them; it means reading books that you wouldn't have written that contain ideas you wouldn't have thought of, seeing movies that you wouldn't have filmed, playing games from very different points of view, ect. If all culture, music, art, and idea all came from minds that were very similar, I think it would tend to get boring much more quickly.

I do think there is a significant risk that an individual or a society might drift into a stable, stagnant, boring state over time, and interaction and social friction with a variety of fundamentally different minds could have a huge effect on that.

And, of course, there is a more practical reason why everyone would want to ban massive copying of single minds, which is that it would dramatically reduce the resources and standard of living of any one mind. A society of EM's would urgently need some form of population control.

Edit: By the way, there also wouldn't necessarily be even the limited diversity you might expect from having different versions of you diverge over centuries. In the kind of environment we're talking about here, less competitive versions of you would also be wiped out by more competitive versions of you, leaving only a very narrow band of the diversity you yourself would be capable of becoming.

Completely agreed about "talk to" being metaphorical. Indeed, it would astonish me if over the course of a few centuries of the kind of technological development implied by whole-brain emulation, we didn't develop means of interaction with ourselves and others that made the whole notion of concerning ourselves with identity boundaries in the first place a barely intelligible historical artifact for most minds. But I digress.

That aside, I agree that interaction with "fundamentally different minds" could have a huge effect on our tendency to stagnate, but the notion that other humans have fundamentally different minds in that sense just seems laughably implausible to me. If we want to interact with fundamentally different minds over the long haul, I think our best bet will be to create them.

More generally, it sounds like we just have very different intuitions about how much diversity there is among individual minds today, relative to how much diversity there is within a single mind today. I don't have any particularly compelling additional evidence to offer here, so I think my best move is to accept as additional evidence that your expectations about this differ from mine.

As far as the population problem is concerned, I agree, but this has nothing to do with duplicates-vs-"originals". Distributing resources among N entities reduces the average resources available to one entity, regardless of the nature of the entities. Copying a person is no worse than making a person by any other means from this perspective.

I agree that if we are constantly purging most variation, the variation at any given moment will be small. (Of course, if we're right about the value of variation, it seems to follow that variation will therefore be a rare and valued commodity, which might increase the competitiveness of otherwise-less-competitive individuals.)

More generally, it sounds like we just have very different intuitions about how much diversity there is among individual minds today, relative to how much diversity there is within a single mind today. I don't have any particularly compelling additional evidence to offer here, so I think my best move is to accept as additional evidence that your expectations about this differ from mine.

Well, if Eliezer's FAI theory is correct, then the possible end states of any mind capable of deliberate self-modification should be significantly limited by the values, wants, and desires of the initial state of the mind. If that's the case, then there are whole vast areas of potential human-mind-space that you or your descendants would never move into or through because of their values, while an EM derived from another human might.

As for your other point: you are right that duplicates vs. originals doesn't necessarily make a difference in terms of population, but it may; theoretically, at least at the subjective speeds of the EM, it should be much faster to make an exact copy of you than to make a "child" EM and raise it to adulthood. And if you are the type of person who wants to make lots of copies of yourself, then all those copies will also want to make lots of copies; if a culture of EM's allows infinite duplication of self, things could get out of control very quickly.

If that's the case, then there are whole vast areas of potential human-mind-space that you or your descendants would never move into or through because of their values, while an EM derived from another human might.

We seem to keep trading vague statements about our intuitions back and forth, so let me try to get us a bit more concrete, and maybe that will help us move past that.

For convenient reference, call D the subset of mind-space that I and my descendants can potentially move through or into, and H the subset of mind-space that some human can potentially move through or into.
I completely agree that H is larger than D.

What's your estimate of H/D?
My intuitive sense is that it's <2.

it should be much faster to make an exact copy of you than to make a "child" EM and raise it to adulthood. And if you are the type of person who wants to make lots of copies of yourself, then all those copies will also want to make lots of copies

Even if I grant all of that, it still seems that what's necessary (supposing there's a resource limit here) is to limit the number of people I create, not to prevent me from making clones. If I want to take my allotment of creatable people and spend it on creating clones rather than creating "children," how does that affect you, as long as that allotment is set sensibly? Conversely, if I overflow that allotment with "children", how does that affect you less than if I'd overflowed it with clones?

Put differently: once we effectively restrict the growth rate, we no longer have to be concerned with which factors would have been correlated with higher growth rate had we not restricted it.

What's your estimate of H/D? My intuitive sense is that it's <2.

I would think it's far higher than that. Probably H/D>100, and it might be far higher still. I tend to think that maintaining some continuity of identity would be very important to uploaded minds (because, honestly, isn't that the whole point of uploading your mind instead of just emulating a random human-like mind?). I also tend to think that there are vast categories of experiences that you would not put yourself through just so you could be the kind of person who had been through that experience; if there are mind-states that can only be reached by, say, "losing a child and then overcoming that horrible experience after years of grieving through developing a kind of inner strength", then I can't imagine any mind would intentionally do that to themselves just to explore more sections of mind-space.

Or, think about it in terms of beliefs. Say that mind A is an atheist. Do you think that the person who has mind A would ever intentionally turn themselves into a theist or into a spiritualist just in order to experience those emotions, and to get to places in mind-space that can only be reached from there? Judging from the whole of human experience, by just preventing yourself from going that route, you're probably eliminating at least half of all mind-states that a normal human can reach; many of those mind-states being states that can apparently produce incredibly interesting culture, music, literature, art, etc. Not to mention all the possible mind-states that can only be reached by being a former theist who has lost his faith. And that's just one example; there are probably dozens or hundreds of beliefs, values, and worldviews that any mind has that it would never want to change, because they are simply too fundamental to that mind's basic identity. Even with basic things; Eliezer once mentioned, when talking about FAI theory: "My name is Eliezer Yudkowsky. Perhaps it would be easier if my name was something shorter and easier to remember, but I don't want to change my name. And I don't want to change into a person who would want to change my name." (That's not an exact quote, but it was something along those lines.) I would be surprised if any descendant of your mind would ever get to even 1% of all possible human mind-space.

Not only that, if mind A has a certain set of values and beliefs, and then you make a million copies of mind A and they all interact with each other all the time, I would think that would tend to discourage any of them from changing or questioning those values or beliefs. Usually the main way people change their minds is when they encounter someone with fundamentally different beliefs who seems to be intelligent and worth listening to; on the other hand, if you surround yourself with only people who believe the same thing you do, you are very unlikely to ever change that belief; if anything, social pressure would likely lock it into place. Therefore, I would say that a mind that primarily interacts with other copies of itself would be far more likely to become static and unchanging than that same mind in an environment where it is interacting with other minds with different beliefs.

I can't imagine any mind would intentionally do that to themselves just to explore more sections of mind-space.

Mm. That's interesting. While I can't imagine actually arranging for my child to die in order to explore that experience, I can easily imagine going through that experience (e.g., with some kind of simulated person) if I thought I had a reasonable chance of learning something worthwhile in the process, if I were living in a post-scarcity kind of environment.

I can similarly easily imagine myself temporarily adopting various forms of theism, atheism, former-theism, and all kinds of other mental states.

And I can even more easily imagine encouraging clones of myself to do so, or choosing to do so when there's a community of clones of myself already exploring other available paths. Why choose a path that's already being explored by someone else?

It sounds like we're both engaging in mind projection here... you can't imagine a mind being willing to choose these sorts of many-sigmas-out experiences, so you assume a population of clone-minds would stick pretty close to a norm; I can easily imagine a mind choosing them, so I assume a population of clone-minds would cover most of the available space.

And it may well be that you're more correct about what clones of an arbitrarily chosen mind would be like... that is, I may just be an aberrant data point.

I can easily imagine a mind choosing them, so I assume a population of clone-minds would cover most of the available space.

Ok, so let's say for the sake of argument that you're more flexible about such things than 90% of the population is. If so, would you be willing to modify yourself into someone less flexible, into someone who never would want to change himself? If you don't, then you've just locked yourself out of about 90% of all possible mindspace on that one issue alone. However, if you do, then you're probably stuck in that state for good; the new you probably wouldn't want to change back.

Absolutely... temporarily being far more rigid-minded than I am would be fascinating. And knowing that the alarm was ticking and that I was going to return to being my ordinary way of being would likely be deliciously terrifying, like a serious version of a roller coaster.

But, sure, if we posit that the technology is limited such that temporary changes of this sort aren't possible, then I wouldn't do that if I were the only one of me... though if there were a million of me around, I might.


it would lead to a rapidly diminishing amount of variety among humans, which would be sad

Not to mention dangerous. Dissensus is one of the few nearly-universally effective insurance policies.

Dissensus is one of the few nearly-universally effective insurance policies.

Or possibly a good way to get everyone killed. For example, suppose any sufficiently intelligent being can build a device to trigger a false vacuum catastrophe.

Nothing short of a very powerful singleton could stop competing, intelligent, computation-based agents from using all available computation resources. If the most efficient way to use them is to parallelize many small instances, then that's what they'll do. How do you stop people from running whatever code they please?

Nothing short of a very powerful singleton could stop competing, intelligent, computation-based agents from using all available computation resources.

I don't see any reason why a society of competing, intelligent, computation-based agents wouldn't be able to prevent any single computation-based agent from doing something they want to make illegal. You don't need a singleton; a society of laws probably works just fine.

And, in fact, you would probably have to have laws and things like that, unless you want other people hacking into your mind.

For society to be sure of what code you're running, they need to enforce transparency that ultimately extends to the physical, hardware level. Even if there are laws, to enforce them I need to know you haven't secretly built custom hardware that would give you an illegal advantage, which falsely reports that it's running something else and legal. In the limit of a nano-technology-based, AGI scenario, this means verifying the actual configurations of atoms of all matter everyone controls.

A singleton isn't required, but it seems like the only stable solution.

Well, you don't have to assume that 100% of all violations of laws will be caught to get a stable society. Just that enough of them are caught to deter most potential criminals.

It depends on a lot of variables, of course, most of which we don't know yet. But, hypothetically speaking, if the society of EM's we're talking about are running on the same network (or the same mega-computer, or whatever), then it should be pretty obvious if someone suddenly makes a dozen illegal copies of themselves and suddenly starts using far more network resources than they were a short time ago.

Well, you don't have to assume that 100% of all violations of laws will be caught to get a stable society. Just that enough of them are caught to deter most potential criminals.

That's a tradeoff vs. the benefit to a criminal who isn't caught from the crime. The benefit here could be enormous.

it should be pretty obvious if someone suddenly makes a dozen illegal copies of themselves and suddenly starts using far more network resources than they were a short time ago.

I was assuming that creating illegal copies lets you use the same resources more intelligently, and profit more from them. Also, if your only measurable is the amount of resource use and not the exact kind of use (because you don't have radical transparency), then people could acquire resources first and convert them to illegal use later.

Network resources are externally visible, but the exact code you're running internally isn't. You can purchase resources first and illegally repurpose them later, etc.

von Neumann was very smart, but I very much doubt he would have been better than everyone at all jobs if trained in those jobs. There is still comparative advantage, even among the very smartest and most capable.

The population boom to the Malthusian limit (and a lower Malthusian limit for AI than humans) is an overwhelmingly important impact (on growth, economic activity, etc) that you don't mention, but that is regularly emphasized.

Running people faster or slower and keeping backups came immediately to mind, and Wikipedia adds space travel, but those three by themselves don't seem like they change that much.

Do you think mathematics and CS, or improvement of brain emulation software and other AI, wouldn't go much further with 1000 people working for a million years, than 100 million people working for 10 years?

It's a toss-up as far as I'm concerned; it depends what the search space of maths/CS looks like. People seem to get stuck in their ways and dismiss other potential pathways. I'm envisioning the difference (for humans at least) as like running a hill-climbing algorithm from 1000 different points for a million years versus 100 million different points for 10 years (a rough sketch of this tradeoff follows at the end of this comment). So if the 1000 people get stuck in local optima, they may do worse than someone who gets lucky and happens to search a very fertile bit of maths/CS for a short amount of time.

Also you couldn't guarantee that people would maintain interest that long.

Lastly, the sped-up people would also have to wait 100,000 times longer for any practical run, and those are still needed in lots of CS/AI, even for stuff like algorithm design.

So unless you heavily modded humans first, I'm not sure it's a slam dunk for the sped-up people.
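A rough illustration of the restart tradeoff the hill-climbing analogy points at, as a toy Python sketch. The landscape, step size, and budget split are arbitrary assumptions, and the landscape is deliberately bumpy so that many short runs tend to win; on a smooth landscape, the few long runs would do just as well.

```python
import math
import random

random.seed(1)


def fitness(x):
    """A rugged toy landscape: a broad slope toward x = 70 plus many local bumps."""
    return -0.01 * (x - 70) ** 2 + 3 * math.sin(x) + 2 * math.sin(3.1 * x)


def hill_climb(start, steps, step_size=0.1):
    """Greedy local search: move to a random nearby point only if it's better."""
    x, fx = start, fitness(start)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        fc = fitness(candidate)
        if fc > fx:
            x, fx = candidate, fc
    return fx


def best_found(n_starts, steps_each):
    """Best peak found across n_starts independent climbers."""
    return max(hill_climb(random.uniform(0, 100), steps_each) for _ in range(n_starts))


# The same total budget of a million steps, split two ways, echoing the
# "1000 people for a million years" vs "100 million people for 10 years" framing.
# On this deliberately rugged landscape, many restarts usually find a better peak.
print("few long runs:   ", round(best_found(10, 100_000), 2))
print("many short runs: ", round(best_found(1_000, 1_000), 2))
```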

Over dinner with MBlume, he suggested that having simulations of yourself could allow you to make decisions based on taste, and then forget having made them, letting yourself be pleasantly surprised. This is probably not as valuable as the other things, but it's an interesting idea.

Disclaimer: rambling ahead.

One thing that struck me while reading your discussion post is that, even with simulated AI that can be saved, copied, and replayed, it isn't obvious or given that we can merge these clone instances into a single 'averaged' instance.

We could, if we could arbitrarily shrink an instance to only require 1/Nth the resources (or time). The shrunken instance would be less perfect, but there is power in the ability to run numerous nested (sets in sets, like a tree) instances simultaneously.


The intentional merging with part or all of another emulation has its appeal. As does the possibility of well and truly forgetting.

I would probably enjoy creating several copies of myself, allowing them to pursue their (my) interests wholly and gather all the experiences they can along the way, and then merging them back together at a later time. Then repeat.

The main impact of whole brain emulation will probably come long after we have superintelligence, when origins emulations (to help predict the actions of unknown aliens) become important.

The impact of whole brain emulation earlier on seems likely to be low - few will want to emulate a crappy human brain once we have superintelligence.

That's Eliezer's claim, but other people think that EM's might be the first type of true artificial intelligence. It depends how far we are away from solving the software problems of creating a GAI from scratch, and the truth is no one really knows the answer to that.

It depends on an ability to compare the difficulty of the two paths.

To me this seems to be a much more practical task - even if it is relatively challenging to put an absolute scale on either of them.

I'm... not sure I'm ready to bet my life on whole brain emulation (although I'd definitely consider it better than dying). I'm not sure what makes an instance of me me (IIRC this is why the Quantum Physics sequence is actually relevant? Have not read it).

But I'm skeptical of anything that lets me get duplicated as being a consciousness transfer.

Yes, it is one of the reasons that sequence is relevant, and I definitely recommend reading it :)

"skeptical of anything that lets me get duplicated as being a consciousness transfer"

The ideas in the post are all functional. Whether there's a consciousness transfer or not, they all are reasons that a given person emulated at real time speed could have the output of someone much more intelligent and focused.

EDIT: this was unclear. Less "a given person emulated at real time speed" and more "per emulated-person-hour".