
Behavior: The Control of Perception

21 Vaniver 21 January 2015 01:21AM

This is the second of three posts dealing with control theory and Behavior: The Control of Perception by William Powers. The previous post gave an introduction to control theory, in the hopes that a shared language will help communicate the models the book is discussing. This post discusses the model introduced in the book. The next post will provide commentary on the model and what I see as its implications, for both LW and AI.


... And Everyone Loses Their Minds

9 Ritalin 16 January 2015 11:38PM

Chris Nolan's Joker is a very clever guy, almost Monroesque in his ability to identify hypocrisy and inconsistency. In one of the film's most interesting scenes, he points out that people judge horrible things by whether they're part of what's "normal" and what's "expected", rather than by how inherently horrifying they are or how many people are involved.

People soon extrapolated this observation to other apparent inconsistencies in human judgment, where a behaviour that was once acceptable becomes, with a simple tweak or change in context, the subject of a much more serious reaction.

I think there's rationalist merit in giving these inconsistencies a serious look. I intuit that there's some sort of underlying pattern to them, something that makes psychological sense, in the roundabout way that most irrational things do. I think that much good could come out of figuring out what that root cause is, and how to predict this effect and manage it.

Phenomena that come to mind are, for instance, from an Effective Altruism point of view, the expenses incurred in counter-terrorism (including some wars that were very expensive in treasure and lives) and the number of lives those expenses save, compared with the number of lives that could be saved by spending the same amount on improving road safety, increasing public healthcare spending where it would do the most good, building better lightning rods (in the USA you're four times more likely to be struck by lightning than killed by terrorists), or legalizing drugs.

What do y'all think? Why do people have their priorities all jumbled-up? How can we predict these effects? How can we work around them?

LINK: Diseases not sufficiently researched

2 polymathwannabe 17 January 2015 04:03PM

This Chart Shows The Worst Diseases That Don't Get Enough Research Money

We have already covered this topic several times on LW, but what prompted me to link this was this remark:

Of course, where research dollars flow isn't —and shouldn't be— dictated simply in terms of which diseases lay claim to the most years, but also by, perhaps most importantly, where researchers see the most potential for a breakthrough.

[Edit: a former, dumber version of me had asked, "I wonder what criterion the author would prefer," before the correct syntax of the sentence was pointed out to me.]

Opinions?

Respond to what they probably meant

11 adamzerner 17 January 2015 11:37PM

Story

I was confused about Node Modules, so I did a bunch of research to figure out how they work. Explaining things helps me to understand them, and I figured that others might benefit from my explanation, so I wrote a blog post about them. However, I'm inexperienced and still unsure of exactly what's going on, so I started the blog post off with a disclaimer:

Disclaimer

I'm a bit of a noob. I just graduated from a coding bootcamp and am still trying to wrap my head around this stuff myself (that's actually why I'm writing this article). I tried to do my research, but I can't guarantee that everything is correct. Input from more knowledgeable people is very welcome.

My friend said that it's a bad idea to do that. He said:

You're literally discrediting yourself in the first sentence of the article. Stand by what you've written!

I interpreted what he said literally and basically responded by saying:

Why should I "stand by what I've written"? What I mean to communicate to the readers is that, "I'm x% sure of what I'm about to say." To "stand by what I've written" is to assign a higher confidence to the things I've written than what my true confidence is. It wouldn't even be a stretch to interpret "stand by what you've written" as meaning "claim that you're 100% sure of what you've written". Why would I do that?

This was stupid of me. He didn't mean "claim that you're 100% sure of what you've written". He didn't mean "pretend that you're way more confident in what you've written than what you really are". He meant, "I think that it comes across as you being less confident than you actually are. And so I think you should reword it to better communicate your confidence."

I shouldn't have interpreted what he said so literally. I should have thought about and responded to what I thought he meant to say. (Although, he also should have been more precise...)

Thesis

People often interpret and respond to statements literally. Instead of doing this, it's often useful to think about and respond to what the other person probably meant.

For example, "If I interpret what you said literally, then A. But you probably meant X, so B. If you meant Y, then C."

Depending on how confident you are in your interpretation, you should probably respond to a variety of possibilities. Like if you're < 80% sure that you know what they meant, you should probably respond to possibilities that have at least a 5% chance of being what they meant. I'm not sure whether 80 and 5 are the right numbers, but hopefully it communicates the point.
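To make that concrete, here's a minimal Python sketch of the heuristic (the 0.80/0.05 thresholds are just the illustrative numbers from above, and the function name is made up):

    # Sketch of the heuristic: respond to the single best reading if you're
    # confident enough, otherwise address every reading above a plausibility floor.
    def interpretations_to_address(candidates, sure_threshold=0.80, floor=0.05):
        # candidates: dict mapping each interpretation to the probability
        # that it's what the person meant
        best = max(candidates, key=candidates.get)
        if candidates[best] >= sure_threshold:
            return [best]
        return [c for c, p in candidates.items() if p >= floor]

    print(interpretations_to_address({"X": 0.60, "Y": 0.30, "Z": 0.02}))
    # -> ['X', 'Y']  (Z falls below the 5% floor, so it isn't worth addressing)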

Why don't people do this?

I see two likely reasons:

  1. The whole "argument is a war that I must win" attitude.
  2. Habit.
1 - "You said X! Gotcha! That's stupid! You're wrong!". This clearly isn't a productive approach.
2 - I think that a lot of people - myself included - have a bad habit of interpreting things too literally. Well actually, that by itself isn't what's bad. What's bad is stopping after your literal analysis, and not considering alternatives that are likely to be what they actually meant. This bad habit isn't ill-intentioned - that's why I distinguish it from reason 1). It's just an analytical impulse.

Practical considerations

In "low friction" situations (like when you're talking to someone face-to-face), it's probably a better idea to just say, "I think that what you're trying to say is X. Is that true?". Ie. instead of responding to what you think they mean... you could just ask them to clarify.

In higher friction situations, there's a cost (in time and/or effort) to having one person stop talking and another person start talking. In online discussions, for example, you might have to wait a while before they respond. So if you're 95% sure that you know what they meant, you could just say, "I think that you meant X, so A. But if you meant Y, then B." The alternative is to respond by saying, "I think you meant X but I'm not sure. Did you mean X?", and then having to wait for a reply.

I'm having trouble thinking of other "higher friction situations". Perhaps a (semi)formal debate where you have to speak for a certain length of time would be a good example. In this situation you're expected to just keep speaking, so you can't pause to ask people what they meant - you just have to think about and respond to the possibilities on the spot.

Another practical point to make is that the flow of the conversation has to be taken into account. Stopping to address every possible interpretation of what the other person said is obviously impractical - it'd take too long, and it's hard for everyone to follow the logic of the conversation.

However, I think that my core point is applicable to all types of conversations. The goal of communication is for each person to interpret and respond to the other's statements. Interpreting things literally instead of thinking about what the other person probably meant to say is a failure to interpret, and it impedes communication.

Edit: I have just learned that what I'm referring to is The Principle of Charity.

Misapplied economics and overwrought estimates

2 erratim 12 January 2015 05:10PM

I believe that a small piece of rationalist community doctrine is incorrect, and I'd like your help correcting it (or me). Arguing the point by intuition has largely failed, so here I make the case by leaning heavily on the authority of conventional economic wisdom.

The question:

How does an industry's total output respond to decreases in a consumer's purchases; does it shrink by a similar amount, a lesser amount, or not at all?

(Short-run) Answers from the rationalist community:

The consensus answer in the few cases I've seen cited in the broader LW community appears to be that production is reduced by an amount that's smaller than the original decrease in consumption.

Animal Charity Evaluators (ACE):

Fewer people in the market for meat leads to a drop in prices, which causes some other people to buy more meat. The drop in prices does also reduce the amount of meat produced and ultimately consumed, but not by as much as was consumed by people who have left the market.

Peter Hurford:

As is commonly known by economists, when you choose to not buy a product, you lower the demand ever so slightly, which lowers the price ever so slightly, which turns out to re-increase the demand ever so slightly. Therefore, forgoing one pound of meat means that less than one pound of meat actually gets prevented from being factory farmed.

Compassion, by the Pound:

The key points to note are that a permanent decision to reduce meat consumption (1) does ultimately reduce the number of animals on the farm and the amount of meat produced, but (2) it has less than a 1-to-1 effect on the amount of meat produced.

These answers are all correct in the short-run (i.e., when the “supply curve” doesn’t have time to shift). If there is less demand for a product, the price will fall, and some other consumers will consume more because of the better deal. One intuitive justification for this is that when producers don’t have time to fully react to a change in demand, the total amount of production and consumption is somewhat ‘anchored’ to prior expectations of demand, so any change in demand will have less than a 1:1 effect on production.

For example, a chicken producer who begins to have negative profits due to the drop in price isn't going to immediately yank their chickens from the shelves; they will sell what they've already produced, and maybe even finish raising the chickens they've already invested in (if the remaining marginal cost is less than the expected sale price), even if they plan to shut down soon.

(Long-run) Answers from neoclassical economics:

In the long-run, however, the chicken producer has time to shrink or shut down the money-losing operation, which reduces the number of chickens on the market (shifts the "supply curve" to the left). The price rises again and the consumers that were only eating chicken because of the sale prices return to other food sources.

As a couple of online economics resources put it:

Policonomics:

The long-run market equilibrium is conformed of successive short-run equilibrium points. The supply curve in the long run will be totally elastic as a result of the flexibility derived from the factors of production and the free entry and exit of firms.

 

AmosWEB:

The increase in demand causes the equilibrium price of zucchinis [to] increase... and the equilibrium quantity [to] rise... The higher price and larger quantity is achieved as each existing firm in the industry responds to the demand shock.

However, the higher price leads to above-normal economic profit for existing firms. And with freedom of entry and exit, economic profit attracts kumquat, cucumber, and carrot producers into this zucchini industry. An increase in the number of firms in the zucchini industry then causes the market supply curve to shift. How far this curve shifts and where it intersects the new demand curve... determines if the zucchini market is an increasing-cost, decreasing-cost, [or] constant-cost industry.

Constant-Cost Industry: An industry with a horizontal long-run industry supply curve that results because expansion of the industry causes no change in production cost or resource prices. A constant-cost industry occurs because the entry of new firms, prompted by an increase in demand, does not affect the long-run average cost curve of individual firms, which means the minimum efficient scale of production does not change.

[I left out the similar explanations of the increasing- and decreasing-cost cases from the quote above.]

In other words, while certain market characteristics (increasing-cost industries) would lead us to expect that production will fall by less than consumption in the long-run, it could also fall by an equal amount, or even more.
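To see the contrast concretely, here's a toy sketch with linear supply and demand curves (all numbers invented for illustration; the constant-cost case is modelled as a horizontal long-run supply curve, as in the AmosWEB quote):

    # Toy model: linear demand Q = a - b*P, linear short-run supply Q = c + d*P.
    def equilibrium(a, b, c, d):
        p = (a - c) / (b + d)  # price where the curves cross
        return p, a - b * p    # (price, quantity)

    a, b = 100.0, 1.0   # demand intercept and slope
    c, d = 0.0, 1.0     # upward-sloping short-run supply
    drop = 10.0         # consumers demand 10 fewer units at every price

    p0, q0 = equilibrium(a, b, c, d)
    p1, q1 = equilibrium(a - drop, b, c, d)
    print(q0 - q1)      # 5.0: short-run output falls by less than the 10-unit drop

    # Long run, constant-cost industry: supply is horizontal at the long-run
    # price, so quantity is read straight off the (shifted) demand curve.
    print((a - b * p0) - ((a - drop) - b * p0))  # 10.0: output falls one-for-one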

Short-run versus long-run

Economists define the long-run as a scope of time in which producers and consumers have time to react to market dynamics. As such, a change in the market (e.g. reduction in demand) can have one effect in the short-run (reduced price), and a different effect in the long-run (reduced, constant, or increased price). In the real world, there will be many changes to the market in the short-run before the long-run has a chance to react to any one of them; but we should still expect it to react to the net effect of all of them eventually.

Why do economists even bother measuring short-run dynamics (such as short-run elasticity estimates) on industries if they know that a longer view will render them obsolete? Probably because the demand for such research comes from producers who have to react to the short-run. Producers can't just wait for the long-run to come true; they actively realize it by reacting to short-run changes (otherwise the market would be 'stuck' in the short-run equilibrium).

So if we care about long-run effects, but we don't have any data to tell us whether the industries are increasing-cost, constant-cost, or decreasing-cost, what prior should we use for our estimates? Basic intuition suggests we should assume an industry is constant-cost in the absence of industry-specific evidence. The rationalist-cited pieces I quoted above are welcome to make an argument that animal industries in particular are increasing-cost, but they haven't done that yet, or even acknowledged that the opposite is also possible.

Are there broader lessons to learn?

Have we really been messing up our cost-effectiveness estimates simply by confusing the short-run and long-run in economics data? If so, why haven't we noticed it before?

I'm not sure. But I wouldn't be surprised if one issue is that, in the process of trying to create precise cost-effectiveness-style estimates, it's tempting to use data simply because it's there.

How can we identify and prevent this bias in other estimates? Perhaps we should treat quantitative estimates as chains that are no stronger than their weakest link. If you're tempted to build a chain with a particularly weak link, consider if there's a way to build a similar chain without it (possibly gaining robustness at the cost of artificial precision or completeness) or whether chain-logic is even appropriate for the purpose.

For example, perhaps it should have raised flags that ACE's estimates for the above effect on broiler chicken production (which they call the "cumulative elasticity factor" or CEF) ranged over more than a factor of 10, adding almost as much uncertainty to the final calculation for broiler chickens as the 5 other factors combined. (To be fair, the CEF estimates of the other animal products were not as lopsided.)
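As a toy illustration of the weakest-link point (with invented numbers, not ACE's actual figures), note how one factor with a 10x high/low ratio contributes about as much spread to the final product as five factors of ~1.6x each:

    import math

    # Invented numbers: five factors each uncertain by 1.6x, plus one
    # CEF-like factor uncertain by 10x.
    tight = [1.6] * 5
    loose = 10.0
    print(math.prod(tight))          # ~10.5x: combined spread of the five tight factors
    print(loose)                     # 10.0x: the single wide factor, almost as much on its own
    print(math.prod(tight) * loose)  # ~105x: spread of the whole chain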

Superintelligence 18: Life in an algorithmic economy

3 KatjaGrace 13 January 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the eighteenth section in the reading guide: Life in an algorithmic economy. This corresponds to the middle of Chapter 11.

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Life in an algorithmic economy” from Chapter 11


Summary

  1. In a multipolar scenario, biological humans might lead poor and meager lives. (p166-7)
  2. The AIs might be worthy of moral consideration, and if so their wellbeing might be more important than that of the relatively few humans. (p167)
  3. AI minds might be much like slaves, even if they are not literally slaves. They may be selected for liking this. (p167)
  4. Because brain emulations would be very cheap to copy, it will often be convenient to make a copy and then later turn it off (in a sense killing a person). (p168)
  5. There are various other reasons that very short lives might be optimal for some applications. (p168-9)
  6. It isn't obvious whether brain emulations would be happy working all of the time. Some relevant considerations are current human emotions in general and regarding work, probable selection for pro-work individuals, the evolutionary adaptiveness of happiness in the past and future (e.g. does happiness help you work harder?), and the absence of present sources of unhappiness such as injury. (p169-171)
  7. In the long run, artificial minds may not even be conscious, or have valuable experiences, if these are not the most effective ways for them to earn wages. If such minds replace humans, Earth might have an advanced civilization with nobody there to benefit. (p172-3)
  8. In the long run, artificial minds may outsource many parts of their thinking, thus becoming decreasingly differentiated as individuals. (p172)
  9. Evolution does not imply positive progress. Even those good things that evolved in the past may not withstand evolutionary selection in a new circumstance. (p174-6)

Another view

Robin Hanson on others' hasty distaste for a future of emulations: 

Parents sometimes disown their children, on the grounds that those children have betrayed key parental values. And if parents have the sort of values that kids could deeply betray, then it does make sense for parents to watch out for such betrayal, ready to go to extremes like disowning in response.

But surely parents who feel inclined to disown their kids should be encouraged to study their kids carefully before making such a choice. For example, parents considering whether to disown their child for refusing to fight a war for their nation, or for working for a cigarette manufacturer, should wonder to what extent national patriotism or anti-smoking really are core values, as opposed to being mere revisable opinions they collected at one point in support of other more-core values. Such parents would be wise to study the lives and opinions of their children in some detail before choosing to disown them.

I’d like people to think similarly about my attempts to analyze likely futures. The lives of our descendants in the next great era after this our industry era may be as different from ours’ as ours’ are from farmers’, or farmers’ are from foragers’. When they have lived as neighbors, foragers have often strongly criticized farmer culture, as farmers have often strongly criticized industry culture. Surely many have been tempted to disown any descendants who adopted such despised new ways. And while such disowning might hold them true to core values, if asked we would advise them to consider the lives and views of such descendants carefully, in some detail, before choosing to disown.

Similarly, many who live industry era lives and share industry era values, may be disturbed to see forecasts of descendants with life styles that appear to reject many values they hold dear. Such people may be tempted to reject such outcomes, and to fight to prevent them, perhaps preferring a continuation of our industry era to the arrival of such a very different era, even if that era would contain far more creatures who consider their lives worth living, and be far better able to prevent the extinction of Earth civilization. And such people may be correct that such a rejection and battle holds them true to their core values.

But I advise such people to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. I hope that my future analysis can assist such soul-searching examination. If after studying such detail, you still feel compelled to disown your likely descendants, I cannot confidently say you are wrong. My job, first and foremost, is to help you see them clearly.

More on whose lives are worth living here and here.

Notes

1. Robin Hanson is probably the foremost researcher on what the finer details of an economy of emulated human minds would be like. For instance, which company employees would run how fast, how big cities would be, and whether people would hang out with their copies. See a TEDx talk, and writings here, here, here and here (some overlap - sorry). He is also writing a book on the subject, which you can read early if you ask him.

2. Bostrom says,

Life for biological humans in a post-transition Malthusian state need not resemble any of the historical states of man...the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings. They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with  extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable....(p166)

It's true this might happen, but it doesn't seem like an especially likely scenario to me. As Bostrom has pointed out in various places earlier, biological humans would do quite well if they have some investments in capital, do not have too much of their property stolen or artfully manoeuvred away from them, and do not undergo too much population growth themselves. These risks don't seem so large to me.

3. Paul Christiano has an interesting article on capital accumulation in a world of machine intelligence.

4. In discussing worlds of brain emulations, we often talk about selecting people for having various characteristics - for instance, being extremely productive, hard-working, not minding frequent 'death', being willing to work for free and donate any proceeds to their employer (p167-8). However there are only so many humans to select from, so we can't necessarily select for all the characteristics we might want. Bostrom also talks of using other motivation selection methods, and modifying code, but it is interesting to ask how far you could get using only selection. It is not obvious to what extent one could meaningfully modify brain emulation code initially. 

I'd guess less than one in a thousand people would be willing to donate everything to their employer, given a random employer. This means that to get this characteristic, you would have to lose a factor of 1000 on selecting for other traits. In total you have about 33 bits of selection power in the present world (that is, 7 billion is about 2^33; you can divide the world in half about 33 times before you get to a single person). Let's suppose you use 5 bits in getting someone who both doesn't mind their copies dying (I guess 1 bit, or half of people) and who is willing to work an 80h/week (I guess 4 bits, or one in sixteen people). Let's suppose you are using the rest of your selection (28 bits) on intelligence, for the sake of argument. You are getting a person of IQ 186. If instead you use 10 bits (2^10 = ~1000) on getting someone to donate all their money to their employer, you can only use 18 bits on intelligence, getting a person of IQ 167. Would it not often be better to have the worker who is twenty IQ points smarter and pay them above subsistence?
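As a check on this arithmetic, here's a short sketch (assuming IQ is normally distributed with mean 100 and standard deviation 15, and that k bits of selection means taking the top 2^-k of the population):

    # k bits of selection = taking the top 2**-k of a Normal(100, 15) population
    from scipy.stats import norm

    def iq_from_bits(bits):
        return 100 + 15 * norm.isf(2.0 ** -bits)

    total = 33   # 7 billion people is about 2^33
    spent = 5    # 1 bit (copies dying) + 4 bits (80h weeks)
    print(iq_from_bits(total - spent))       # ~186.7: all 28 remaining bits on IQ
    print(iq_from_bits(total - spent - 10))  # ~167.1: after 10 bits on donation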

5. A variety of valuable uses for cheap-to-copy, short-lived brain emulations are discussed in Whole brain emulation and the evolution of superorganisms, LessWrong discussion on the impact of whole brain emulation, and Robin's work cited above.

6. Anders Sandberg writes about moral implications of emulations of animals and humans.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Is the first functional whole brain emulation likely to be (1) an emulation of low-level functionality that doesn’t require much understanding of human cognitive neuroscience at the computational level, as described in Sandberg & Bostrom (2008), or is it more likely to be (2) an emulation that makes heavy use of advanced human cognitive neuroscience, as described by (e.g.) Ken Hayworth, or is it likely to be (3) something else?
  2. Extend and update our understanding of when brain emulations might appear (see Sandberg & Bostrom (2008)).
  3. Investigate the likelihood of a multipolar outcome.
  4. Follow Robin Hanson (see above) in working out the social implications of an emulation scenario.
  5. What kinds of responses to the default low-regulation multipolar outcome outlined in this section are likely to be made? e.g. is any strong regulation likely to emerge that avoids the features detailed in the current section?
  6. What measures are useful for ensuring good multipolar outcomes?
  7. What qualitatively different kinds of multipolar outcomes might we expect? e.g. brain emulation outcomes are one class.
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the possibility of a multipolar outcome turning into a singleton later. To prepare, read “Post-transition formation of a singleton?” from Chapter 11. The discussion will go live at 6pm Pacific time next Monday 19 January. Sign up to be notified here.

Selfish preferences and self-modification

4 Manfred 14 January 2015 08:42AM

One question I've had recently is "Are agents acting on selfish preferences doomed to having conflicts with other versions of themselves?" A major motivation of TDT and UDT was the ability to just do the right thing without having to be tied up with precommitments made by your past self - and to trust that your future self would just do the right thing, without you having to tie them up with precommitments. Is this an impossible dream in anthropic problems?

 

In my recent post, I talked about preferences where "if you are one of two copies and I give the other copy a candy bar, your selfish desires for eating candy are unfulfilled." If you would buy a candy bar for a dollar but not buy your copy a candy bar, this is exactly a case of strategy ranking depending on indexical information.

This dependence on indexical information is inequivalent with UDT, and thus incompatible with peace and harmony.

 

To be thorough, consider an experiment where I am forked into two copies, A and B. Both have a button in front of them, and 10 candies in their account. If A presses the button, it deducts 1 candy from A. But if B presses the button, it removes 1 candy from B and gives 5 candies to A.

Before the experiment begins, I want my descendants to press the button 10 times (assuming candies come in units such that my utility is linear). In fact, after the copies wake up but before they know which is which, they want to press the button!

The model of selfish preferences that is not UDT-compatible looks like this: once A and B know who is who, A wants B to press the button but B doesn't want to do it. And so earlier, I should try and make precommitments to force B to press the button.

But suppose that we simply decided to use a different model. A model of peace and harmony and, like, free love, where I just maximize the average (or total, if we specify an arbitrary zero point) amount of utility that myselves have. And so B just presses the button.

(It's like non-UDT selfish copies can make all Pareto improvements, but not all average improvements)
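Here's a toy version of the two decision rules, using the payoffs from the experiment above (the function names are mine):

    # Per press: (change to A's candies, change to B's candies)
    press_effect = {"A": (-1, 0), "B": (+5, -1)}

    def selfish_press(me):
        # post-indexical selfish model: press only if *my* candies go up
        da, db = press_effect[me]
        return (da if me == "A" else db) > 0

    def peace_and_love_press(me):
        # maximize the total across both copies instead
        da, db = press_effect[me]
        return da + db > 0

    for who in ("A", "B"):
        print(who, selfish_press(who), peace_and_love_press(who))
    # A False False
    # B False True   <- only the total-maximizing B presses (+4 in total per press)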

 

Is the peace-and-love model still a selfish preference? It sure seems different from the every-copy-for-themself algorithm. But on the other hand, I'm doing it for myself, in a sense.

And at least this way I don't have to waste time with precommitment. In fact, self-modifying to this form of preferences is such an effective action that conflicting preferences are self-destructive. If I have selfish preferences now but I want my copies to cooperate in the future, I'll try to become an agent who values copies of myself - so long as they date from after the time of my self-modification.

 

If you recall, I made an argument in favor of averaging the utility of future causal descendants when calculating expected utility, based on this being the fixed point of selfish preferences under modification when confronted with Jan's tropical paradise. But if selfish preferences are unstable under self-modification in a more intrinsic way, this rather goes out the window.

 

Right now I think of selfish values as a somewhat anything-goes space occupied by non-self-modified agents like me and you. But it feels uncertain. On the mutant third hand, what sort of arguments would convince me that the peace-and-love model actually captures my selfish preferences?

2015 Repository Reruns - Boring Advice Repository

13 TrE 08 January 2015 06:00PM

 

This is the first post of the 2015 repository rerun, which appears to be a good idea. The motivation for this rerun is that while the 12 repositories (go look them up, they're awesome!) exist and people might look them up, few new comments are posted there. In effect, there might be useful stuff that should go in those repositories, but is never posted due to low expected value and no feedback. With the rerun, attention is shifted to one topic per month. This might allow us to have a lively discussion on the topic at hand and gather new content for the repository.


The decline of violence as a lens for understanding effective altruism

2 alwhite 07 January 2015 05:16PM

Greetings all!  There's a puzzle that I'm working on and I'm interested to see what the members of this community have to say about it.

I am an electrical engineer who is currently working on a master's in counseling.  One of the big questions I keep asking myself in this program is "how effective is this field in making the world a better place?"

To help focus the discussion, I want to concentrate on violence.  This video from Steven Pinker is a great overview of the data: http://www.ted.com/talks/steven_pinker_on_the_myth_of_violence.  But for those who don't want to spend the time to watch it, the short version is that violence per capita is at an all-time low for human history; other people state it as "there has never been a safer time in history".

The question, then: why is this so?

My personal belief is that our technological advancement has reduced the effort it takes for people to survive, so there is less drive to become hostile towards people who have what we need.  Applied to effective altruism, this belief would suggest that the most effective method of improving all of human life is to continue to increase our technology level, so that there is an abundance of basic needs and no one has a need to become hostile.  I do believe that as a planet we do not yet have that abundance, so I don't believe this is merely a matter of redistribution.  The GWP (gross world product) per capita, as of 2014, was $12,400 USD, which is just barely above the poverty line for an individual.  This is why I say we're not yet producing enough to truly eliminate need.

From this belief, I wonder if social movements and psychological training are really doing anything in comparison to the need that exists.

Going back to the violence issue: if we can understand why violence has been declining, we can also understand what is truly effective in bettering the human condition.  I believe the reason is technological advancement.  Does anyone have any good evidence to suggest other reasons?

Are we possibly at a tipping point?  Has our past been dominated by technological advancement but now we're reaching a level where more socially oriented advancements will be more effective?

Thoughts?

Brain-centredness and mind uploading

14 gedymin 02 January 2015 12:23PM

The naïve way of understanding mind uploading is "we take the connectome of a brain, including synaptic connection weights and characters, and emulate it in a computer". However, people want their personalities to be uploaded, not just brains. That is more than just replicating the functionality of their brains in silico.

This nuance has led to some misunderstandings, for example, to experts wondering [1] why on Earth anyone would think that brain-centredness [2] (the idea that brains are "sufficient" in some vague sense) is a necessary prerequisite for successful whole brain emulation. Of course, brain-centredness is not required for brain uploading to be technically successful; the question is whether it is sufficient for mind uploading in the sense that people actually care about.

 

The first obvious extension that may be required is the chemical environment of the brain. Here are some examples:

  • Are you familiar with someone whose personality is radically (and often predictably) altered under the influence of alcohol or drugs? This is not the exception but the rule: most people are affected, only to a smaller extent. Only the transiency of the effects allows us to label them as simple mood changes.
  • I have observed that my personal level of neuroticism varies depending on the pharmaceutical drugs I'm using. Nootropics make me more nervous, while anti-hypertension drugs have the reverse effect.
  • Hormone levels in the blood act as long-term personality modifiers. There are also neurotransmitters that are themselves slow-acting, for example, nitric oxide [3].
  • Artificially enhanced levels of serotonin in the brain cause it to "adapt" to this environment - this is how some antidepressants (namely, SSRIs) work [4].

Whole Brain Emulation - A Roadmap includes a short section about the "Body chemical environment" and concludes that for "WBE, the body chemistry model, while involved, would be relatively simple", unless protein interactions have to be modelled.

The technical aspect notwithstanding, what are the practical and moral implications? I think there's not only a problem here, but also an opportunity. Why keep the accidental chemistry we have developed over our lifetimes, which presumably has little relation to what we would really like to be - if we could choose? Imagine that it is possible to create carefully improved and tailored versions of the neurotransmitter "soup" in the brain. There are new possibilities here for personal growth in ways that have not been possible before. These ways are completely orthogonal to the intelligence enhancement opportunities commonly associated with uploading.

The question of personal identity is more difficult, and there appears to be a grey zone here. A fictional example comes to mind: the protagonist of Planescape: Torment - is he the same person in each of his incarnations?

 

The second extension required to upload our personalities in the fullest sense might be the peripheral nervous system. Most of us think it's the brain that's responsible for emotions, but this is a simplified picture. Here are some hints why:

  • The James-Lange theory of emotions, from the 19th century, proposed that we experience emotion in response to physiological changes in our body - for example, that we feel sad because we cry rather than cry because we are sad [5]. While the modern understanding of emotions is significantly different, these ideas have not completely gone away, either from academic research [5] or from everyday life. For example, to calm down, we are advised to take deep and slow breaths. Paraplegics and quadriplegics with severe spinal cord injuries typically experience less intense emotions than other people [6].
  • Endoscopic thoracic sympathectomy (ETS) is a surgical procedure in which a portion of the sympathetic nerve trunk in the thoracic region is destroyed [7]. It is typically used against excessive hand sweating. However, "a large study of psychiatric patients treated with this surgery [also] showed significant reductions in fear, alertness and arousal [..] A severe possible consequence of thoracic sympathectomy is corposcindosis (split-body syndrome) [..] In 2003 ETS was banned in Sweden due to overwhelming complaints by disabled patients." The complaints include having not been able to lead emotional life as fully as before the operation.
  • The enteric nervous system in the gastrointestinal tract "governs the function of the gastrointestinal system" [8]. I'm not sure how solid the research is, but there are a lot of articles on the Web that mention the importance of this system to our mood and well-being [9]. Serotonin is "the happiness neurotransmitter", and "in fact 95 percent of the body's serotonin is found in the bowels", as is 50% of its dopamine [8]. "Gut bacteria may influence thoughts and behaviour" [10] by using the serotonin mechanism. Also, "irritable bowel syndrome is associated with psychiatric illness" [10].

 

In short, different chemistry in the brain changes what we are, as does the peripheral nervous system. To upload someone in the fullest sense, his/her chemistry and PNS also have to be uploaded.

[1] Randal Koene on whole brain emulation

[2] Anders Sandberg, Nick Bostrom, Future of Humanity Institute, Whole Brain Emulation - A Roadmap.

[3] Bradley Voytek's (Ph.D. neuroscience) Quora answer to Will human consciousness ever be transferrable?

[4] Selective serotonin reuptake inhibitors

[5] Bear et al. Neuroscience: Exploring the Brain, 3rd edition. Page 564.

[6] Michael W. Eysenck - Perspectives On Psychology - Page 100 - Google Books Result

[7] Endoscopic thoracic sympathectomy

[8] Enteric nervous system

[9] Scientific American, 2010. Think Twice: How the Gut's "Second Brain" Influences Mood and Well-Being

[10] The Guardian, 2012. Microbes manipulate your mind
