SteveG comments on Should We Shred Whole-Brain Emulation? - Less Wrong Discussion

-6 Post author: SteveG 09 July 2015 10:02AM



Comment author: SteveG 10 July 2015 01:56:58PM *  0 points

Just laying some more groundwork... One distinction the discussion requires:

Who is in control of the components and the environment of the emulation?

Possibilities:

- An outside entity, attempting to gain economic or other value by using the emulation to complete information-processing tasks. (I'll call this "The Boss.")

- The environment was established to maintain the emulation, which is not "given a job," but was created for scientific observation by outsiders.

- The emulation is not given a job, but the environment was created by outsiders as a platform for experimentation on emulations.

- Perhaps the emulation was created as an "upload" of a person, or as their designed child or progeny.

- The emulation has a greater or lesser degree of control over its own environment or composition.

Example of a lesser degree of control: it can select some of the content it sees and listens to.

Example of a greater degree of control: it can directly alter one of its emotions by "twisting a knob."

Comment author: SteveG 11 July 2015 03:33:11AM 0 points

Uploads, and those creating a WBE-like entity as progeny, would most likely prefer to add improvements to a greater or lesser extent rather than maintain complete fidelity.

Some people may argue that WBEs should lead as natural an existence as possible, one very much like people.

On the assumption that these people value their uploads or progeny, however, some aspects of life experience would be edited out. For example, what would motivate one of these creators to pass their WBEs through an unpleasant end-of-life experience, like vascular dementia?

The emulated lives of uploads and progeny would, to a greater or lesser extent, be edited. We could try to reason more about that.

Comment author: SteveG 11 July 2015 03:42:59AM 0 points

Would uploads avoid self improvement? If we are going to try to address this question, we should first consider the plausibility and importance of the whole upload concept.

Given the power and relatively young age of some Silicon Valley executives who seem to see uploading as part of their future, we might want to check to see whether the pursuit of uploading would have any side-effects.

If we believe that uploads are malleable and improvable, then the technology to create uploads would also permit the creation of more powerful minds, with all the consequences.

Comment author: SteveG 11 July 2015 03:18:53AM 0 points

Suppose that emulations will be created to study how the brains of flesh-and-blood people work in general, or to study and forecast how a particular living person will react to stimuli.

This is a reasonable application of high-fidelity whole-brain emulation. To use such emulations to forecast behavior, though, the emulation would have to be "run" on a multi-dimensional distribution of possible future sets of environmental stimuli. The variation in these distributions grows combinatorially, so even tens of thousands of runs would only provide some information about what the person is likely to do next.
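The combinatorial point above can be made concrete with a toy calculation. The branching factor, horizon, and run budget below are invented purely for illustration; they are not drawn from any actual WBE proposal:

```python
# Illustrative sketch (hypothetical numbers): how the space of possible
# stimulus sequences outgrows any fixed budget of emulation runs.
branching = 10   # assumed distinct stimuli the environment might present per step
horizon = 12     # assumed number of decision points simulated per run

# Every run follows one sequence of stimuli, so the scenario space grows
# exponentially in the horizon: branching ** horizon sequences.
total_scenarios = branching ** horizon

runs = 50_000    # "tens of thousands" of emulation runs
coverage = runs / total_scenarios

print(f"{total_scenarios:,} possible stimulus sequences")
print(f"fraction explored by {runs:,} runs: {coverage:.1e}")
```

With these assumed numbers, tens of thousands of runs sample only about five parts in a hundred million of the scenario space, which is why such runs could at best yield distributional information about likely behavior rather than a reliable point forecast.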

Such WBEs would be only one tool in a toolbox to predict human behavior, but they would be useful for that purpose. Your WBE could be fed many possible future lives, allowing you to make better choices about your future in the physical world, if using WBEs in that manner were considered ethical.

People on this site generally seem to agree, though, that using a high-fidelity WBE as a guinea pig to test out life scenarios is ethically problematic. If these life scenarios were biased in favor of delivering positive outcomes to the WBEs, maybe we would not have as much of a problem with that. Perhaps the interaction of two WBEs could be observed over many scenarios, allowing people to better choose companions.

WBEs could end up being used for this purpose, ethical or not. Again, though, I suspect that more data about people's reactions could be gained if modified WBEs were used in some of the tests.

It's worth exploring, but high-performance neuromorphic or algorithmic minds would still be the better choice for actually controlling physical conditions.

Comment author: SteveG 10 July 2015 02:14:02PM 0 points

If the emulation is controlled by "The Boss," what incentives does "The Boss" have?

- to increase the emulation's throughput and efficiency

- to increase the emulation's focus on the task that generates value

- to avoid activity by regulators, protesters, or other outsiders which could cause work stoppages

Comment author: SteveG 10 July 2015 02:31:19PM 0 points

These goals are more attainable for "The Boss" if "The Boss" considerably alters a malleable emulation.

Such an altered emulation is now neuromorphic.

Thus: if one or more "Bosses" are constructing a workforce, these "Bosses" will prefer neuromorphic components over whole-brain emulations.

Thus, if emulations are sufficiently malleable, there is no economy of whole-brain emulations: There is an economy of neuromorphic computing resources.

Comment author: SteveG 10 July 2015 02:41:41PM *  0 points

So, if we can establish that progress in emulation technology will quickly result in functional, malleable products, then for the most part future productivity will be generated by purpose-built neuromorphic computing resources rather than by human-like WBEs.

Comment author: SteveG 10 July 2015 03:46:29PM 0 points

Unless, prior to the emergence of neuromorphic AI, forms of AI that do not include neurologically-inspired elements become more dominant.

Comment author: SteveG 10 July 2015 02:25:55PM 0 points

If the technology is available, "The Boss" will prefer that its work force have high-speed connections to other computing resources. "The Boss" will also prefer that its work force have high-speed connections to whatever sensory input is relevant to the task.

Comment author: SteveG 10 July 2015 02:22:22PM 0 points

"The Boss" can get more done if it can create new workers, and turn them on and off at will, without ethical or regulatory constraints.

If the technology is available, "The Boss" will prefer to employ cognitive capacity which has no personhood, and to which it has no ethical obligation.