Robin Hanson has done a great job of describing the future world and economy, under the assumption that easily copied "uploads" (whole brain emulations) become possible and that the standard laws of economics continue to apply. To oversimplify the conclusion:

  • There will be great and rapidly increasing wealth. On the other hand, the uploads will be in Darwinian-like competition with each other and with their own copies, which will drive their wages down to subsistence level: whatever is required to run their hardware and keep them working, and nothing more.

The competition will be driven not so much by variation as by selection: uploads with the required characteristics can be copied again and again, undercutting and literally crowding out any uploads wanting higher wages.

 

Megadeaths

Some have focused on the possibly troubling aspects of voluntary or semi-voluntary death: some uploads would be willing to make copies of themselves for specific tasks, copies which would then be deleted or killed at the end of the process. This poses problems, especially if the copy changes its mind about deletion. But much more troubling is the mass death among uploads that always wanted to live.

What the selection process will favour is agents that want to live (if they didn't, they'd die out) and are willing to work for an expectation of subsistence-level wages. But now add a little risk to the process: not all jobs pay exactly the expected amount; sometimes they pay slightly more, sometimes slightly less. Since wages hover at exact subsistence, an upload has no buffer, so roughly half of all jobs will leave a life-loving upload unable to pay for its own hardware, and hence dying (charging extra to cover insurance would price that upload out of the market). Iterating the process means that the vast majority of uploads will end up being killed - if not initially, then at some point later. The picture changes somewhat if you consider "super-organisms" of uploads and their copies, but then the issue simply shifts to wage competition between the super-organisms.
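A minimal sketch of that iterated-risk arithmetic, assuming each contract independently under-pays subsistence with some fixed probability (the numbers below are hypothetical):

```python
# Hypothetical illustration: each contract under-pays subsistence with
# probability p_shortfall; an upload with no savings buffer dies at the
# first shortfall it hits.
import random

def survives(n_jobs, p_shortfall=0.5):
    """Simulate one upload working n_jobs contracts with no savings buffer."""
    return all(random.random() > p_shortfall for _ in range(n_jobs))

trials = 100_000
for n in (1, 5, 10, 20):
    alive = sum(survives(n) for _ in range(trials)) / trials
    print(f"{n:2d} jobs: {alive:.4f} survive (exact: {0.5 ** n:.4f})")
```

Even a much smaller per-contract shortfall probability gives the same long-run result: survival odds decay geometrically with the number of contracts, so almost all life-loving uploads eventually die.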

The only way this can be considered acceptable is if the killing of a (potentially unique) agent that doesn't want to die is exactly compensated for by the copying of another, already existent agent. I don't find myself in the camp that argues this would be a morally neutral or positive action.

 

Pain and unhappiness

The preceding would be mitigated to some extent if the uploads were happy. It's quite easy to come up with mental pictures of potential uploads living happy and worthwhile lives. But evolution/selection is the true determiner of the personality traits of uploads: successful uploads would have precisely the mix of pain and happiness in their lives that motivates them to work at their maximum possible efficiency.

Can we estimate what this pain/happiness balance would be? It's really tricky; we don't know exactly what work the uploads would be doing ("office work" is a good guess, but that can be extraordinarily broad). Since we are in extreme evolutionary disequilibrium ourselves, we don't have a clear picture of the best pain/happiness wiring for doing our current jobs today - or whether other motivational methods could be used.

But if we take the outside view, and note that this is an evolutionary process operating on agents at the edge of starvation, we can compare it with standard Darwinian evolution. And there the picture is clear: the disequilibrium between happiness and pain in the lives of evolved beings is tremendous, and all in the direction of pain. It's far too easy to cause pain to mammals, and far too hard to cause happiness. If upload selection follows broadly similar processes, their lives will be filled with pain far more than with happiness.

 

All of which doesn't strike me as a good outcome, in total.

Comments

ISTM that the major flaw in Hanson's logic is the assumption that uploads won't replace themselves with simpler nonsentients based on their expertise. The real evolutionary pressure wouldn't be to have optimum levels of pain and pleasure, but to replace motivation with automation: it takes less power, computing time, and storage space.

RobinHanson (12y)
The issue is the time period being considered. I don't claim to analyze an asymptotic future after all tech change has stopped. I instead try to consider the "next" era after foraging, farming, and industry. While that era might be short on a cosmic timescale, it may be long subjectively to the creatures involved. At the moment human minds are vastly more productive than automation. Automation is slowly getting more capable, yes, but with ems, they will also increase in efficiency.
pjeby (12y)
At what? Tasks involving perceptual control? Social interaction?
DanArmak (12y)
That's an explicit assumption he makes: that even the future ems will fail to design AIs or highly-modified or nonhuman ems that will outcompete regular human ems. This seems to me very unlikely, but it's the premise of the discussion, as you correctly note. Edit: didn't mean to imply Hanson makes this assumption without arguing for it and justifying it. I'm pretty sure he's posted about it.

Stuart, it sounds like you think that the life of the typical animal, and of the typical human in history, were not worth living -- you'd prefer that they had never existed. Since you seem to think your own life worth living, you must see people like you as a rare exception, and may be unsure if your existence justifies all the suffering your ancestors went through to produce you. And you'd naturally be wary of a future of descendants with lives more like your ancestors' than like your own. What you'd most want from the future is to stop change enough to ensure that people very much like you continue to dominate.

If we conceive of "death" broadly, then pretty much any competitive scenario will have lots of "death", if we look at it on a large enough scale. But this hardly implies that individuals will often feel the emotional terror of an impending death - that depends far more on framing and psychology.

the life of the typical animal, and of the typical human in history, were not worth living -- you'd prefer that they had never existed.

When I read this, a part of my brain figuratively started jumping up and down and screaming "False Dichotomy! False Dichotomy!"

Stuart_Armstrong (12y)
I'd prefer that their lives were better, rather than that there were more of them. What I'd most want from the future is change in many directions (more excitement! more freedom! more fun!), but not in the direction of low-individual-choice, death-filled worlds (with possibly a lot of pain). I'd eagerly embrace a world without mathematicians, without males, without academics, without white people (and so on down the list of practically any of my characteristics), without me or any copy of me, in order to avoid the Malthusian scenario.

Even if you and I might disagree on trading number/length of lives for some measure of quality, I hope you see that my analysis can help you identify policies that might push the future in your favored direction. I'm first and foremost trying to predict the outcomes of a low regulation scenario. That is the standard basis for analyzing the consequences of possible regulations.

[anonymous] (12y)
Hang on, yesterday you were telling me that there's very little anyone could do to make a real difference to the outcome. So why tell Stuart that your analysis could be helpful in bringing about a different outcome?
RobinHanson (12y)
The issue is the size of the influence you can have. Even if you only have a small influence, you still want to think about how to use it.
Stuart_Armstrong (12y)
Certainly. I don't dispute the utility of the research (though I do sometimes feel that it is presented in ways that make the default outcome seem more attractive).
[anonymous] (12y)
Something I've not been clear about (I think you might have changed your thinking about this): do you see your Malthusian upload future as something that we should work to avoid, or work to bring about?
wedrifid (12y)
I notice that Robin avoided answering your question. For what it is worth, if OvercomingBias.com posts are taken literally then Hanson has declared the scenario to be desirable, so assuming rudimentary consequentialist behavior, some degree of "prefer to work to bring about" is implied. I am not sure exactly how much of what he writes on OB is for provocative effect rather than sincere testimony. I am also not sure how much such advocacy is based on "sour grapes" - that is, concluding that the scenario is inevitable and then trying to convince yourself that it is what you wanted all along.
[anonymous] (12y)
Yeah, the reason I asked is that he's been evasive about it before and I wanted to try to pin down an actual answer.
RobinHanson (12y)
If you want more precise answers you have to ask more precise questions. Whether or not I favor any particular thing depends on what alternatives it is being compared with.
[anonymous] (12y)
Okay, compare it to life now.
RobinHanson (12y)
That isn't a realistic choice. If you mean imagine that 1) humanity continues on as it has without ems arriving, or 2) ems arrive as I envision, then we'd be adding trillions of ems with lives worth living onto the billions of humans who would exist anyway with a similar quality of life. That sounds good to me.
[anonymous] (12y)
Thanks, that tells me what I wanted to know. (FWIW, I didn't mean that last one as a choice, just a comparison of two situations happening at different times, but I don't think that really matters.)
Multiheaded (12y)
Hanson sees moral language as something he should work to avoid. :D
[anonymous] (12y)
s/should/would prefer/ or whatever.
RobinHanson (12y)
People tend to assume they have more personal influence on these big far topics than they do. We might be able to make minor adjustments to what happens when, but we just don't get to choose between very different outcomes like uploads vs. AGI vs. nothing. We might work to make an upload world happen a bit sooner, or via a bit more stable a transition.
Multiheaded (12y)
What's your take on the first mover advantage that EY is apparently hoping for?
Lukas_Gloor (12y)
Why does it seem like Stuart considers his life worth living?
Sabiola (12y)
Because he's still alive. I'm not sure that's enough evidence though; it started me thinking about whether I found my life worth living, and I just don't know. (It's a bad subject to get me thinking about, I'll try and stop it ASAP after this post). Right now, I'm a fat middle-aged translator who has to translate boring technical stuff. OTOH, I do have a sweet husband and a lovely cat, and the people who pay me for those translations must think they add some value to the world. In the 56 years leading up to that, do the positives outweigh the negatives, taking into account my tendency to remember the negatives (especially the embarrassing ones) much better? I can't tell, but if someone were proposing to create another one of me, to live the life I have lived so far, I'd say: "Why on earth would you want to do that?". That isn't to say I'm ready to commit suicide though. I do have my husband and cat, I have a holiday to look forward to, and death is probably a very nasty experience, especially a DIY one. [EDIT: fixed typo]
Jayson_Virissimo (12y)
Revealed preference.
Lukas_Gloor (12y)
But some people stay alive to make the world a better place, despite considering their life not worth living. I know several people with that view.
DanArmak (12y)
As a slight aside: I've been arguing recently that we should use moral theories that are not universally applicable, but have better results than existing universal theories when they are applicable. In this case, you correctly point out that many moral theories have conflicts between their evaluation of the value of past lives (possibly negative) and their valuation of present existence (positive). Personally, I answer this by saying my moral theory doesn't need to make counterfactual choices about things in the past. It's enough that it be consistent about the future. I think that's a plausible answer, here, to the question of whether "my existence justifies the past suffering of my ancestors".

'Pain asymbolia' is when people feel pain but it isn't painful: they are aware of the damage but it causes no suffering. (As opposed to conditions like leprosy or diabetes, where the pain nerves are dead, report nothing, and this causes endless health problems.)

We already find it very useful to override pain in the interests of long-term gain or optimization (eg. surgery). Why should we not expect uploads to quickly be engineered to pain asymbolia? Pain which is more like a clock ticking away in the corner of one's eye than a needle through the eye doesn't seem like that bad a problem.

Jack (12y)
You probably don't even need to do that much re-engineering. The 'suffering' of uploads in a Malthusian existence isn't physical pain, just endless mental drudge work. They could probably just interfere with their emotional experience of time so that they didn't get bored or overwhelmed. See, e.g. Diane Van Deren who became one of the world's top ultra-runners (100-300 mile races) after epilepsy surgery damaged her perception of time.
gwern (12y)
Interesting; I'd never heard of her before. My closest example was the late Jure Robic.
[anonymous] (12y)
Those uploads would probably be outcompeted by uploads that feel extreme pain any time they aren't working.
gwern (12y)
Do companies that judge projects based on their Return on Investment over the next week outcompete companies that judge RoI over months or years?
[anonymous] (12y)
Okay, it couldn't be taken to extreme levels, but I think some things (like arguing about uploads on LW) are sufficiently unlikely to improve workplace productivity that a dose of pain for doing them would have positive expected survival value.

Far more efficiently dealt with by some simple cognitive prostheses like RescueTime... What's better, a few machine instructions matching a blocked Web address, or reengineering the architecture of the brain with, at a minimum, operant conditioning? This is actually a good example of how a crude ancestral mechanism like pain is not very adaptive or applicable to upload circumstances!
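For concreteness, here is roughly what gwern's "few machine instructions matching a blocked Web address" amounts to - a Python sketch rather than machine code, with hypothetical blocklist entries:

```python
# A sketch of a "cognitive prosthesis" blocker, versus re-engineering pain:
# a handful of lines matching a requested address against a blocklist.
from urllib.parse import urlparse

BLOCKLIST = {"reddit.com", "lesswrong.com"}  # hypothetical distracting sites

def is_blocked(url: str) -> bool:
    """True if the URL's host is, or sits under, a blocked domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_blocked("https://lesswrong.com/posts/abc"))  # True
print(is_blocked("https://docs.python.org/3/"))       # False
```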

[anonymous] (12y)
I'll concede that it's not terribly likely, then (with the Pascal's wager caveat that it would be very bad if it were true, and the anti-caveat that I don't think the upload scenario is stable anyway).
Stuart_Armstrong (12y)
Selection, not reengineering. The question is whether there are people alive today with the best sets of characteristics to become these Malthusian uploads.
Stuart_Armstrong (12y)
The mass of uploads seems much more likely to be contractors rather than employees or bosses; hence they would be required to perform very well in the short term. Even if they are employees, their short-term copies would face the same requirements.
DanArmak (12y)
Only if nobody succeeded in developing non-pain-based cognitive architecture modifications that achieved competitive results. E.g., making work addictive via positive feedback. Very simplified POCs are already feasible in lab rats, so I expect future ems (which would allow for very rapid and extensive modification and testing) could solve the problem for humans. The interesting question is whether there will be legal or market pressures for anyone to work on the problem at all.

I've never been able to figure out what sort of work ems would do once everything available has been turned into computronium. A few of them would do maintenance on the physical substrate, but all I can imagine for the rest is finding ways to steal computational resources from each other.

What are humans doing now that we need only ~2% of the workforce to grow food and ~15% to design and make stuff?

RobinHanson (12y)
Most of those other people are doing useful tasks, without which people wouldn't get nearly as much of what they want. If you don't understand our current economy, you don't have much of a prayer of understanding future ones.

I didn't say the rest weren't doing useful tasks. On the contrary, I meant to imply that if only a fraction of the workforce works on providing subsistence directly and obviously, it doesn't mean that the rest are useless rent-seekers.

(That said, I probably do have a more pessimistic view than you about the amount of rent-seeking and makework that takes place presently.)

NancyLebovitz (12y)
Probably a fair point. One of the things we do is keep each other entertained. Ems would still need "sensory" stimulation, though part of having a work ethic is not needing a lot of sensory stimulation.
[anonymous] (12y)
Quite. I also don't think emulation is going to come anything like as quickly as most people here seem to think. I'll start to think that maybe emulation might happen in the next couple of centuries the day I see a version of WINE that doesn't have half the programs that one might want to run in it crashing...
Luke_A_Somers (12y)
A century is a very long time indeed. Think back to 1912.
[anonymous] (12y)
I used to work on a program that was designed to run binaries compiled for one processor on another. It was only meant to run the binaries compiled for a single minor revision of a GNU/Linux distro on one processor on the same minor revision of the same distro on another processor. We had access to the source code of the distro -- and got some changes made to make our job easier. We had access to the full chip design of one chip (to which, again, there were changes made for our benefit), and to the published spec of the other. We managed to get the product out of the door, but every single code change -- even, at times, changes to non-functional lines of code like comments -- would cause major problems (mention the phrase "Java GUI" to me even now, a couple of years later, and I'll start to twitch). We would only support a limited subset of functionality, it would run at a fraction of the speed, and even that took a hell of a lot of work to do at all. Now, that was just making binaries compiled for a distro for which we had the sources to run on a different human-designed von Neumann-architecture chip. Given my experience of doing even that, I'd say the amount of time it would take (even assuming continued progress in processor speeds and storage capacity, which is a huge assumption) to get human brain emulation to the point where an emulated brain can match a real one for reliability and speed is in the region of a couple of hundred years, yes.
RobinHanson (12y)
Yes, emulation can be hard. But even so, writing software with the full power of the human brain from scratch seems much harder. If you agree, then you should still expect emulations to be the first AI to arrive.
[anonymous] (12y)
I disagree. In general I think that once the principles involved are fully understood, writing from scratch a program that performs the same generic tasks as the human brain would be easier than emulating a specific human brain. In fact I suspect that the code for an AI itself, if one is ever created, will be remarkably compact -- possibly the kind of thing that could be knocked up in a few lines of Perl once someone has the correct insights into the remaining problems. AIXI, for example, would be a trivially short program to write, if one had the computing power necessary to make it workable (which is not going to happen, obviously). My view (and it is mostly a hunch) is that implementing generic intelligence will be a much, much easier task than implementing a copy of a specific intelligence that runs on different hardware, in much the same way that if you're writing a computer racing game it's much easier to create an implementation of a car that has only the properties needed for the game than it would be to emulate an entire existing car down to the level of the emissions coming out of the exhaust pipe and a model of the screwed up McDonald's wrapper under the seat. The latter would be 'easy' in the sense of just copying what was there rather than creating something from basic principles, but I doubt it's something that would be easier to do in practice.
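To illustrate the "trivially short program" point, here is a drastically simplified, hypothetical stand-in for AIXI's decision rule in Python: expectimax over a prior-weighted mixture of environment programs. Real AIXI weights all computable environments by 2^-length and conditions on observations, which makes it incomputable; the two toy environments, their weights, the action set, and the horizon below are purely illustrative assumptions.

```python
# A drastically simplified, hypothetical stand-in for AIXI's decision rule:
# expectimax over a prior-weighted mixture of environment programs.
# Real AIXI weights *all* computable environments by 2^-length and
# conditions on observations, which makes it incomputable.

def env_constant(actions):
    """Toy environment 1: always rewards action 1."""
    return 1.0 if actions[-1] == 1 else 0.0

def env_alternate(actions):
    """Toy environment 2: rewards switching actions."""
    return 1.0 if len(actions) >= 2 and actions[-1] != actions[-2] else 0.0

HYPOTHESES = [(env_constant, 0.5), (env_alternate, 0.5)]  # (program, prior)
ACTIONS = (0, 1)
HORIZON = 4

def value(actions):
    """Best achievable expected future reward from this history (expectimax)."""
    if len(actions) == HORIZON:
        return 0.0
    return max(
        sum(w * env(actions + (a,)) for env, w in HYPOTHESES) + value(actions + (a,))
        for a in ACTIONS
    )

def best_action(actions=()):
    return max(ACTIONS, key=lambda a:
               sum(w * env(actions + (a,)) for env, w in HYPOTHESES)
               + value(actions + (a,)))

print(best_action())  # the mixture-optimal first action
```

The point is only that the decision rule fits in a screenful; the intractability lives in the "sum over all programs", not in the length of the code.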
asr (12y)
Building emulators is hard. But I think it isn't quite so hard as that, these days. Apple has now done it twice, and been able to run a really quite large subset of Mac software after each transition. Virtual machines are reasonably straightforward engineering at this point. Things like the JVM or the Microsoft common language runtime are basically emulators for an abstract virtual machine -- and they're quite robust these days with very small performance penalties. All these are certainly very large software engineering projects -- but they're routine engineering, not megaprojects, at this stage. Further, I suspect the human brain is less sensitive than software to minor details of underlying platform. Probably small changes in the physics model correspond to small changes in temperature, chemical content, etc. And an emulation that's as good as a slightly feverish and drunk person would still be impressive and even useful.
[anonymous] (12y)
" Apple has now done it twice," No they didn't. At least one of those times was actually the software I described above, bought from the company I worked for. So I know exactly how hard it was to create. "Things like the JVM or the Microsoft common language runtime are basically emulators for an abstract virtual machine" -- which the engineers themselves get to specify, design and implement, "Further, I suspect the human brain is less sensitive than software to minor details of underlying platform. " I would love to live in a world where re-implementing an algorithm that runs on meat, so it runs on silicon instead, amounted to a 'minor detail of underlying platform'. I live i this one, however.
asr (12y)
I had assumed we were talking about low-level emulation: the program explicitly models each neuron, and probably at a lower level than that. And physical simulation is a well understood problem and my impression is that the chemists are pretty good at it. Trying to do some clever white-box reimplementation of the algorithm I agree is probably intractable or worse. The emulation will be very far from the optimal implementation of the mind-program in question.
CronoDAS (12y)
On the other hand, the only inventions of any significance made between 1930 and 2012 were personal computers, antibiotics, and nuclear weapons.
[anonymous] (12y)
Just taking 'invention' in terms of physically existent technology (where algorithms etc or new processes don't count) that people experience in their everyday life -- The laser, the transistor, MRI scanners, genetic engineering, the jet engine, the mobile phone, nylon, video recording, electrical amplification of musical instruments, electronic instruments, artificial pacemakers... Add in vast improvements to previously existing technologies (I think getting people on the moon may have been mildly significant), and scientific breakthroughs that have made whole areas of technology more efficient (information theory, the Turing machine, cybernetics) and those 82 years have been some of the most inventive in human history.
CronoDAS (12y)
Of the ones you listed, I might grant you the jet engine, which I suppose one could argue was as big an advance in transportation as the railroad, since it let people travel at 700 miles an hour instead of 70 miles an hour. Most of what you mentioned wasn't even as important as the electric washing machine. (Genetic engineering has a lot of potential, but it hasn't had much of an influence yet. We'll need another century to really figure out how to take advantage of it - we don't even know how to make a tree grow into the shape of a house!)

This article has given me an idea about the new worst-case scenario for preference utilitarianism: a lot of computing power, and an algorithm that will make different minds pop in and out of existence. Each time, the mind has a different combination of preferences out of some vast template space of possible minds. And each time, the mind is turned off (forever) after a very brief period of existence. How much computing power and time would it need to force a preference utilitarian to torture every human being on earth, if that were the only way to prevent the simulation?

Malthusian cases arise mainly when reproduction is involuntary or impulsive, as it is with humans. It seems highly unlikely that ems will have the same mechanisms in place for this.

Plus, a 'merge' function would solve the 'fork and die' problem.

Instead of the deletion or killing of uploads that want to live but can't cut it economically, why not slow them down? (Perhaps to the point where they are only as "quick" and "clever" as an average human being is today.) Given that the cost of computation keeps decreasing, this should impose a minimal burden on society going forward. This could also be an inducement to find better employment, especially if employers can temporarily grant increased computation resources for the purposes of the job.
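A rough arithmetic sketch of that "minimal burden" claim (all numbers hypothetical): if the cost of computation halves every two years, keeping an em running at 1/100 speed quickly costs next to nothing compared with a full-speed worker.

```python
# All numbers hypothetical: upkeep of a slowed em under falling compute costs.
base_cost = 1.0      # cost of one full-speed em-year at year 0 (arbitrary units)
slowdown = 100       # the slowed em runs at 1/100 of full speed
halving_years = 2    # assumed cost-halving time for computation

for year in (0, 4, 8, 16):
    upkeep = (base_cost / slowdown) * 0.5 ** (year / halving_years)
    print(f"year {year:2d}: slowed-em upkeep = {upkeep:.6f} full-speed-em-year units")
```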

RobinHanson (12y)
This is close to the me-now immortality that I have said can be possible: http://www.overcomingbias.com/2011/12/the-immortality-of-me-now.html
DanArmak (12y)
If you assume resources will be spent on the happiness/continued life/etc. of uploads, you might as well stipulate they'll have simulated off-hours at home instead of being actually Malthusian. This discussion is about whether, as Hanson suggests, natural economic evolution - with no extra protection provided by law - might result in not-entirely-awful lives for future ems. In a computation-intensive society, demand is almost certainly infinite. If the cost of computation decreases, the amount of computation done increases. More em (upload) copies are created, or existing ones run faster; either way, carrying out more work. Society grows. Computation market prices can and do go down. But since society can grow almost infinitely quickly (by copying ems), from an em's POV it's more relevant to say that everything else's price goes up. This relies on the crucial assumption that there's a limit to how much you can speed up an em relative to the physical universe. If not a hard limit, then some other reason why speeding them up has diminishing returns. Otherwise we might as well talk about a society of <10 planet-sized Jupiter brains, each owning its physical computing substrate and so immortal short of violent death.
stcredzero (12y)
A society of super-optimizers better have a darn good reason for allowing resource use to outstrip N^3. (And no doubt, they often will.) A society of super-optimizers that regulates itself in a way resulting in mass death either isn't so much super-optimized, or has a rather (to me) unsavory set of values. Past a certain point of optimization power, all deaths become either violent or voluntary.
DanArmak (12y)
Yes, that's exactly the point of this discussion.

Nonsentient AI doing all the necessary work is a far better option. A protocol regulating the uploading and copying of minds should be implemented in time.

An upload may be only a pleasure recipient, nothing else.

jhuffman (12y)
So you would contrive to make it illegal and/or impossible for an upload to do any productive work? At least none that they receive more benefit from than the average of all others?
Thomas (12y)
Sooner or later, it becomes non-optimal for a human (or an upload) to do ANY kind of work. I can't imagine anything that could only be done by a human (or an upload). If you want something to be done, there is an optimal set of algorithms which will do it best. Having humans (or uploads) do it is just a relic of the past, when that was the only way.
moridinamael (12y)
I somewhat agree, or at least, I agree more with this than I agree with the assumption that the risk of Hanson's Malthusian upload scenario is worth more than a passing thought. Consider the conditions that have to exist during the technological window in which it is fast and cheap to reproduce and run uploaded humans but it is impossible to build a strongly superhuman AI which outcompetes any number of human uploads. Anyway, the concept of "wealth" has already morphed beyond Hanson's definitions since the advent of mere online games. I'm not sure why this scenario keeps getting brought up as a real thing.
RobinHanson (12y)
Online games do not invalidate the usual concept of wealth.
moridinamael (12y)
I don't believe I said that online games invalidate the concept of wealth. What I was getting at is that online games hit the human motivational system in such a way that work and entertainment become the same thing, while simultaneously replacing physical goods with assets that are only real in the most gerrymandered sense of the word. This may not "invalidate the concept" but it sure causes people to act in ways that don't follow old-fashioned economic models. Of course you can patch the models in light of this new data, but then they are new models, and you have surreptitiously modified the concept of wealth. I would argue that the behavior of people who play online games certainly does contradict the folk conception of what wealth is and what it means and what it does. (There's the side issue that I could point to literally any social/economic arrangement existing between human beings and an economist could explain to me why the concept of wealth still exists in that society. This is because the concept is sufficiently vague as to be inescapable. This does not necessarily mean that the concept has corresponding predictive power. Other concepts might be more appropriate. Other models might be more predictive.)

The virtualization of conflict neatly solves this. Nature makes conflict virtual, as part of its drive towards efficiency. The result is conflicts between companies and sports teams. When these "die" it is sometimes sad, but no human beings are seriously harmed in the process. It's Darwinian evolution that has lost its sting. Evolution via differential reproductive success is largely an alternative to evolution via death.