Another way our brains betray us
This appeared in the news yesterday.
http://www.alternet.org/media/most-depressing-discovery-about-brain-ever?paging=off
It turns out that in the public realm, a lack of information isn’t the real problem. The hurdle is how our minds work, no matter how smart we think we are. We want to believe we’re rational, but reason turns out to be the ex post facto way we rationalize what our emotions already want to believe.
...
The bleakest finding was that the more advanced that people’s math skills were, the more likely it was that their political views, whether liberal or conservative, made them less able to solve the math problem. [...] what these studies of how our minds work suggest is that the political judgments we’ve already made are impervious to facts that contradict us.
...
Denial is business-as-usual for our brains. More and better facts don’t turn low-information voters into well-equipped citizens. It just makes them more committed to their misperceptions.
...
When there’s a conflict between partisan beliefs and plain evidence, it’s the beliefs that win. The power of emotion over reason isn’t a bug in our human operating systems, it’s a feature.
[LINK] Antibiotic seemingly improves decision-making in the presence of attractive women.
This study seems to show that minocycline helps men resist placing too much trust in attractive women.
Recently, minocycline, a tetracycline antibiotic, has been reported to improve symptoms of psychiatric disorders and to facilitate sober decision-making in healthy human subjects. Here we show that minocycline also reduces the risk of the 'honey trap' during an economic exchange. Males tend to cooperate with physically attractive females without careful evaluation of their trustworthiness, resulting in betrayal by the female. In this experiment, healthy male participants made risky choices (whether or not to trust female partners, identified only by photograph, who had decided in advance to exploit the male participants). The results show that trusting behaviour in male participants significantly increased in relation to the perceived attractiveness of the female partner, but that attractiveness did not impact trusting behaviour in the minocycline group. Animal studies have shown that minocycline inhibits microglial activities. Therefore, this minocycline effect may shed new light on the unknown roles microglia play in human mental activities.
FAI, FIA, and singularity politics
In discussing scenarios of the future, I speak of "slow futures" and "fast futures". A fast future is exemplified by what is now called a hard takeoff singularity: something bootstraps its way to superhuman intelligence in a short time. A slow future is a continuation of history as we know it: decades pass and the world changes, with new politics, culture, and technology. To some extent the Hanson vs Yudkowsky debate was about slow vs fast; Robin's future is fast-moving, but on the way there, there's never an event in which some single "agent" becomes all-powerful by getting ahead of all others.
The Singularity Institute does many things, but I take its core agenda to be about a fast scenario. The theoretical objective is to design an AI which would still be friendly if it became all-powerful. There is also the practical objective of ensuring that the first AI across the self-enhancement threshold is friendly. One way to do that is to be the one who makes it, but that's asking a lot. Another way is to have enough FAI design and FAI theory out there, that the people who do win the mind race will have known about it and will have taken it into consideration. Then there are mixed strategies, such as working on FAI theory while liaising with known AI projects that are contenders in the race and whose principals are receptive to the idea of friendliness.
I recently criticised a lot of the ideas that circulate in conjunction with the concept of friendly AI. The "sober" ideas and the "extreme" ideas have a certain correlation with slow-future and fast-future scenarios, respectively. The sober future is a slow one where AIs exist and posthumanity expands into space, but history, politics, and finitude aren't transcended. The extreme future is a fast one where one day the ingredients for a hard takeoff are brought together in one place, an artificial god is born, and, depending on its inclinations and on the nature of reality, something transcendental happens: everyone uploads to the Planck scale, our local overmind reaches out to other realities, we "live forever and remember it afterwards".
Although I have criticised such transcendentalism, saying that it should not be the default expectation of the future, I do think that the "hard takeoff" and the "all-powerful agent" would be among the strategic considerations in an ideal plan for the future, though in a rather broader sense than is usually discussed. The reason is that if one day Earth is being ruled by, say, a coalition of AIs with a particular value system, with natural humans reduced to the status of wildlife, then the functional equivalent of a singularity has occurred, even if these AIs have no intention of going on to conquer the galaxy; and I regard that as a quite conceivable scenario. It is fantastic (in the sense of mind-boggling), but it's not transcendental. All the scenario implies is that the human race is no longer at the top of the heap; it has successors and they are now in charge.
But we can view those successors as, collectively, the "all-powerful agent" that has replaced human hegemony. And we can regard the events, whatever they were, that first gave the original such entities their unbeatable advantage in power, as the "hard takeoff" of this scenario. So even a slow, sober future scenario can issue in a singularity where the basic premises and motivations of existing FAI research apply. It's just that one might need to be imaginative in anticipating how they are realized.
For example, perhaps hegemonic superintelligence could emerge, not from a single powerful AI research program, but from a particular clique of networked neurohackers who have the right combination of collaborative tools, brain interfaces, and concrete plans for achieving transhuman intelligence. They might go on to build an army of AIs, and subdue the world that way, but the crucial steps which made them the winners in the mind race, and which determined what they would do with their victory, would lie in their methods of brain modification, enhancement, and interfacing, and in the ends to which they applied those methods.
In such a scenario, we could speak of "FIA" - friendly intelligence augmentation. A basic idea of existing FAI discourse is that the true human utility function needs to be determined, and then the values that make an AI human-friendly would be extrapolated from that. Similar thinking can be applied to the prospect of brain modification and intelligence increase in human beings. Human brains work a certain way, modified or augmented human brains will work in specifically different ways, and we should want to know which modifications are genuinely enhancements, what sort of modifications stabilize value and which ones destabilize value, and so on.
If there were a mature and sophisticated culture of preparing for the singularity, then there would be FAI research, FIA research, and a lot of communication between the two fields. (For example, researchers in both fields need to figure out how the human brain works.) Instead, the biggest enthusiasts of FAI are a futurist subculture with a lot of conceptual baggage, and FIA is nonexistent. However, we can at least start thinking and talking about how this broader culture of research into "friendly minds" could take shape.
Despite its flaws, the Singularity Institute stands alone as an organization concerned with the fast future scenario, the hard takeoff. I have argued that a sober futurology, while forecasting a slowly evolving future for some time to come, must ultimately concern itself with the emergence of a posthuman power arising from some cognitive technology, whether that is AI, neurotechnology, or a combination of these. So I have asked myself who, among "slow futurists", is best equipped to develop an outlook and a plan which is sober and realistic, yet also visionary enough to accommodate the really overwhelming responsibility of designing the architecture of friendly posthuman minds capable of managing a future that we would want.
At the moment, my favorites in this respect are the various branches, scattered around the world, of the Longevity Party that was started in Russia a few months ago. (It shouldn't be confused with "Evolution 2045", a big-budget rival backed by an Internet entrepreneur, that especially promotes mind uploading. For some reason, transhumanist politics has begun to stir in that country.) If the Singularity Institute falls short of the ideal, then the "longevity parties" are even further away from living up to their ambitious agenda. Outside of Russia, they are mostly just small Facebook groups; the most basic issues of policy and practice are still being worked out; no-one involved has much of a history of political achievement.
Nonetheless, if there were no prospect of singularity but otherwise science and technology were advancing as they are, the agenda here looks just about ideal. People age and decline until it kills them, an extrapolation of biomedical knowledge suggests this is not a law of nature but just a sign of primitive technology, and the Longevity Party exists to rectify this situation. It's visionary, and despite the current immaturity and growing pains, an effective longevity politics must arise one day, simply because the advance of technology will force the issue on us! The human race cannot currently muster enough will to live, to openly make rejuvenation a political goal, but the incremental pursuit of health and well-being is taking us in that direction anyway.
There's a vacuum of authority and intention in the realm of life extension, and transhuman technology generally, and these would-be longevity politicians are stepping into that vacuum. I don't think they are ready for all the issues that transhuman power entails, but the process has to start somewhere. Faced with the infinite possibilities of technological transformation, the basic affirmation of the desire to live as well as reality permits, can serve as a founding principle against which to judge attitudes and approaches for all the more complicated "issues" that arise in a world where anyone can become anything.
Maria Konovalenko, a biomedical researcher and one of the prime movers behind the Russian Longevity Party, wrote an essay setting out her version of how the world ought to work. You'll notice that she manages to include friendly AI on her agenda. This is another example, a humble beginning, of the sort of conceptual development which I think needs to happen. The sort of approach to FAI that Eliezer has pioneered needs a context, a broader culture concerned with FIA and the interplay between neuroscience and pure AI, and we need realistic yet visionary political thinking which encompasses both the shocking potentials of a slow future, above all rejuvenation and the conquest of aging, and the singularity imperative.
Unless there is simply a catastrophe, one day someone, some thing, some coalition will wield transhuman power. It may begin as a corporation, or as a specific technological research subculture, or as the peak political body in a sovereign state. Perhaps it will be part of a broader global culture of "competitors in the mind race" who know about each other and recognize each other as contenders for the first across the line. Perhaps there will be coalitions in the race: contenders who agree on the need for friendliness and the form it should take, and others who are pursuing private power, or who are just pushing AI ahead without too much concern for the transformation of the world that will result. Perhaps there will be a war as one contender begins to visibly pull ahead, and others resort to force to stop them.
But without a final and total catastrophe, however much slow history there remains ahead of us, eventually someone or something will "win", and after that the world will be reshaped according to its values and priorities. We don't need to imagine this as "tiling the universe"; it should be enough to think of it as a ubiquitous posthuman political order, in which all intelligent agents are either kept so powerless as to not be a threat, or managed and modified so as to be reliably friendly to whatever the governing civilizational values are. I see no alternative to this if we are looking for a stable long-term way of living in which ultimate technological powers exist; the ultimate powers of coercion and destruction can't be left lying around, to be taken up by entities with arbitrary values.
So the supreme challenge is to conceive of a social and technological order where that power exists, and is used, but it's still a world that we want to live in. FAI is part of the answer, but so is FIA, and so is the development of political concepts and projects which can encompass such an agenda. The Singularity Institute and the Longevity Party are fledgling institutions, and if they live they will surely, eventually, form ties with older and more established bodies; but right now, they seem to be the crucial nuclei of the theoretical research and the political vision that we need.
[Link] Learning New Languages Helps The Brain Grow
http://www.lunduniversity.lu.se/o.o.i.s?news_item=5928&id=24890
According to Johan Mårtensson of Lund University, learning a new language quickly helps parts of your brain grow and increases their activity:
This finding came from scientists at Lund University, after examining young recruits with a talent for acquiring languages, who were able to speak Arabic, Russian, or Dari fluently after just 13 months of study, having had no prior knowledge of the languages.
After analyzing the results, the scientists saw no difference in the brain structure of the control group. However, in the language group, certain parts of the brain had grown, including the hippocampus, responsible for learning new information, and three areas in the cerebral cortex.
And there is more:
One particular study from 2011 provided evidence that Alzheimer's was delayed 5 years for bilingual patients, compared to monolingual patients.
Clarification: Behaviourism & Reinforcement
Disclaimer: The following is but a brief clarification on what the human brain does when one's behaviour is reinforced or punished. Thorough, exhaustive, and scholarly it is not.
Summary: Punishment, reinforcement, etc. of a behaviour creates an association in the mind of the affected party between the behaviour and the corresponding punishment, reinforcement, etc., the nature of which can only be known by the affected party. Take care when reinforcing or punishing others, as you may be effecting an unwanted association.
I've noticed the behaviourist concept of reinforcement thrown around a great deal on this site, and am worried that a fair number of those who frequent it have developed a misconception about, or are simply ignorant of, how reinforcement affects the human brain and why it is effective in practice.
In the interest of time, I'm not going to go into much detail on classical black-box behaviourism and behavioural neuroscience; Luke has already covered how one can take advantage of positive reinforcement. Negative reinforcement and punishment are also important, but won't be covered here.
Scientists make monkeys smarter using brain implants [link]
Article at io9. The paper is available here.
The researchers showed monkeys specific images and then trained them to select those images out of a larger set after a time delay. They recorded the monkeys' brain function to determine which signals were important. The experiment tested the monkeys' performance on this task under different conditions, as described by io9:
Once they were satisfied that the correct mapping had been done, they administered cocaine to the monkeys to impair their performance on the match-to-sample task (seems like a rather severe drug to administer, but there you have it). Immediately, the monkeys' performance fell by a factor of 20%.
It was at this point that the researchers engaged the neural device. Specifically, they deployed a "multi-input multi-output nonlinear" (MIMO) model to stimulate the neurons that the monkeys needed to complete the task. The inputs of this device monitored such things as blood flow, temperature, and the electrical activity of other neurons, while the outputs triggered the individual neurons required for decision making. Taken together, the i/o model was able to predict the output of the cortical neurons — and in turn deliver electrical stimulation to the right neurons at the right time.
And incredibly, it worked. The researchers successfully restored the monkeys' decision-making skills even though they were still dealing with the effects of the cocaine. Moreover, when duplicating the experiment under normal conditions, the monkeys' performance improved beyond the 75% proficiency level shown earlier. In other words, a kind of cognitive enhancement had happened.
This research is a remarkable followup to research that was done in rodents last year.
Four major problems with neuroscience
A discussion of four errors which lead to false positives: neglecting maturation (brains change over time, even without intervention), learning effects (people who take a test more than once get better at it), regression to the mean (people who score unusually well or badly at something will probably have a more average score on subsequent attempts), and the placebo effect.
The link above is a summary of a lecture which isn't playing for me, so any further information about the lecture would be greatly appreciated.
Question about brains and big numbers
From time to time I encounter people who claim that our brains are really slow compared to even an average laptop computer and can't process big numbers.
At the risk of revealing my complete lack of knowledge of neural networks and how the brain works, I want to ask if this is actually true?
It took massive amounts of number crunching to create movies like James Cameron's Avatar. Yet I am able to create more realistic and genuine worlds in front of my mind's eye, on the fly. I can even simulate other agents. For example, I can easily simulate sexual intercourse between me and another human, including tactile and olfactory information.
I am further able to run real-time egocentric world-simulations to extrapolate and predict the behavior of physical systems and other agents. You can do that too. Having a discussion or playing football are two examples.
Yet any computer can outperform me at simple calculations.
But it seems to me, maybe naively so, that most of my human abilities involve massive amounts of number crunching that no desktop computer could do.
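A common back-of-envelope comparison makes this intuition concrete. Every figure below is a rough, commonly quoted order-of-magnitude assumption rather than a measurement, and synaptic events are not directly comparable to floating-point operations; the point is only the scale of parallelism:

```python
# Rough, order-of-magnitude comparison (all figures are loose
# textbook-style estimates, not measurements).
neurons = 8.6e10           # neurons in a human brain
synapses_per_neuron = 1e4  # average synapses per neuron
avg_firing_rate = 1.0      # Hz, averaged across the brain (often quoted 0.1-2 Hz)

synaptic_events_per_sec = neurons * synapses_per_neuron * avg_firing_rate

laptop_flops = 1e11        # ~100 GFLOPS for a typical laptop CPU

print(f"brain:  ~{synaptic_events_per_sec:.0e} synaptic events/s")
print(f"laptop: ~{laptop_flops:.0e} floating-point ops/s")
print(f"ratio:  ~{synaptic_events_per_sec / laptop_flops:.0e}x")
```

Individual neurons are vastly slower than a CPU's clock, but the brain runs an enormous number of them in parallel, which is why it excels at massively parallel tasks like simulation and perception while losing badly at serial arithmetic.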
So what's the difference? Can someone point me to some digestible material that I can read up on to dissolve possible confusions I have with respect to my question?
Why I Moved from AI to Neuroscience, or: Uploading Worms
This post is shameless self-promotion, but I'm told that's probably okay in the Discussion section. For context, as some of you are aware, I'm aiming to model C. elegans based on systematic high-throughput experiments - that is, to upload a worm. I'm still working on course requirements and lab training at Harvard's Biophysics Ph.D. program, but this remains the plan for my thesis.
Last semester I gave this lecture to Marvin Minsky's AI class, because Marvin professes disdain for everything neuroscience, and I wanted to give his students—and him—a fair perspective on how basic neuroscience might be changing for the better, and why it seems a particularly exciting field to be in right now. The lecture is about 22 minutes long, followed by over an hour of questions and answers, which cover a lot of the memespace surrounding this concept. Afterward, several students reported to me that their understanding of neuroscience was transformed.
I only just now got to encoding and uploading this recording; I believe that many of the topics covered could be of interest to the LW community (especially those with a background in AI and an interest in brains), perhaps worthy of discussion, and I hope you agree.
Possible implications of neural retrotransposons for the future
Retrotransposons are small bits of genetic code that can copy themselves into other parts of the DNA strand.
They have been found to be active in brains, with different amounts of activity in different brain regions, the highest being in the hippocampus (an important region for long-term memory). They were also active in coding regions.
"Overall, L1, Alu, and, to a more limited extent, SVA mobilization produced a large number of insertions that affected protein-coding genes,"
This means that they are more likely to have some large effect than if they were confined to junk DNA.
One form of autism is linked to malfunctioning retrotransposons, so they can have a drastic effect.
It makes a certain amount of sense. If there is information in the brain that needs to be stored, but not directly in neural firing rates, why not store it in the DNA of neurons? DNA offers plenty of error-correcting data storage, and the genome has lots of tools for manipulating itself. Time will tell whether it is very important or not.
If it is important, what are the implications for the future?
Cryonics becomes harder: scanning the genome of each neuron is a lot harder than just doing some spectroscopy. But since we assume a certain amount of sufficiently advanced technology and don't have a timeline, our plans aren't impinged upon.
The em scenario seems like it will take longer to happen, or may have some gotchas. Being able to scan the genetic code of each neuron would require some serious breakthroughs in scanning technologies.
To naively emulate the genetic code changes would take immense amounts of bandwidth, and would require cracking things like the protein folding problem (to work out what the changed code in each neuron actually does). Just for storage, I think we might need on the order of 500 exabits to store the DNA sequence of each neuron. You'd need to update them as well, which is going to take lots of memory bandwidth. This is not to mention chemical emulation.
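The storage figure can be sanity-checked with simple arithmetic. Neuron and genome counts below are rough textbook estimates, and the calculation naively assumes one full, independently stored genome per neuron:

```python
# Back-of-envelope check of the "~500 exabits" figure, assuming
# (naively) one complete, independently stored genome per neuron.
neurons = 8.6e10      # neurons in a human brain (rough estimate)
base_pairs = 3.2e9    # base pairs in the human genome
bits_per_base = 2     # 4 possible bases -> 2 bits each

total_bits = neurons * base_pairs * bits_per_base
exabits = total_bits / 1e18   # 1 exabit = 10^18 bits

print(f"~{exabits:.0f} exabits")  # → ~550 exabits
```

In practice only the differences from a reference genome would need storing, which could shrink this enormously, but even the diffs would have to be tracked and updated per neuron.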
I think naive emulation of the brain is off the table before AI. We may well be able to do better with shortcuts in terms of ability. But there might be questions of whether the copy is "you" if short cuts are taken. Also if we understand the brain, we don't need to make copies of people, we could just create AIs that do the same thing.
Some even more blue sky speculation. If the changes in the genetic code are to do with changing how we learn, then it still might be possible to scan a brain at low res and get something that seems to act the same as someone else, but cannot learn in the same way. An interesting twist to the Turing test, someone might be behaviourally human and fool you in the short-term, but may seem odd when tasked with learning problems.
So call centre staff would be out of work, but scientists would still be in demand.
It also has implications for cloning-based attempts at intelligence amplification. I'm guessing this can be answered somewhat by looking at twins and the differences in mental ability between them. Does anyone know of any books on this field?
Also, is anyone interested in discussing this kind of topic (neurobiological implications for the future)?
[Link] Reconstructing Speech from Human Auditory Cortex
Abstract (emphasis mine):
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
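The linear reconstruction approach in the abstract can be caricatured with simulated data. Everything below is invented for illustration (the real model maps population activity to an auditory spectrogram, not a single sinusoid): simulated "neurons" respond as noisy weighted copies of a stimulus, and a least-squares decoder reconstructs the stimulus from the population response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stimulus: one spectro-temporal feature over 500 time bins.
t = np.arange(500)
stimulus = np.sin(2 * np.pi * t / 50)

# Simulated population of 20 "neurons", each a noisy weighted copy
# of the stimulus (a gross simplification of real auditory cortex).
weights = rng.normal(size=20)
responses = np.outer(stimulus, weights) + rng.normal(scale=0.5, size=(500, 20))

# Linear reconstruction: least-squares decoder from responses to stimulus.
decoder, *_ = np.linalg.lstsq(responses, stimulus, rcond=None)
reconstruction = responses @ decoder

r = np.corrcoef(stimulus, reconstruction)[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```

No single noisy channel carries the stimulus cleanly, but the pooled linear readout recovers it well, which is the basic logic behind decoding slow spectro-temporal fluctuations from population activity.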
State your physical account of experienced color
Previous post: Does functionalism imply dualism? Next post: One last roll of the dice.
Don't worry, this sequence of increasingly annoying posts is almost over. But I think it's desirable that we try to establish, once and for all, how people here think color works, and whether they even think it exists.
The way I see it, there is a mental block at work. An obvious fact is being denied or evaded, because the conclusions are unpalatable. The obvious fact is that physics as we know it does not contain the colors that we see. By "physics" I don't just mean the entities that physicists talk about, I also mean anything that you can make out of them. I would encourage anyone who thinks they know what I mean, and who agrees with me on this point, to speak up and make it known that they agree. I don't mind being alone in this opinion, if that's how it is, but I think it's desirable to get some idea of whether LessWrong is genuinely 100% against the proposition.
Just so we're all on the same wavelength, I'll point to a specific example of color. Up at the top of this web page, the word "Less" appears. It's green. So, there is an example of a colored entity, right in front of anyone reading this page.
My thesis is that if you take a lot of point-particles, with no property except their location, and arrange them any way you want, there won't be anything that's green like that; and that the same applies for any physical theory with an ontology that doesn't explicitly include color. To me, this is just mindbogglingly obvious, like the fact that you can't get a letter by adding numbers.
At this point people start talking about neurons and gensyms and concept maps. The greenness isn't in the physical object, "computer screen", it's in the brain's response to the stimulus provided by light from the computer screen entering the eye.
My response is simple. Try to fix in your mind what the physical reality must be, behind your favorite neuro-cognitive explanation of greenness. Presumably it's something like "a whole lot of neurons, firing in a particular way". Try to imagine what that is physically, in terms of atoms. Imagine some vast molecular tinker-toy structures, shaped into a cluster of neurons, with traveling waves of ions crossing axonal membranes. Large numbers of atoms arranged in space, a few of them executing motions which are relevant for the information processing. Do you have that in your mind's eye? Now look up again at that word "Less", and remind yourself that according to your theory, the green shape that you are seeing is the same thing as some aspect of all those billions of colorless atoms in motion.
If your theory still makes sense to you, then please tell us in comments what aspect of the atoms in motion is actually green.
I only see three options. Deny that anything is actually green; become a dualist; or (supervillain voice) join me, and together, we can make a new ontology.
Personal research update
Synopsis: The brain is a quantum computer and the self is a tensor factor in it - or at least, the truth lies more in that direction than in the classical direction - and we won't get Friendly AI right unless we get the ontology of consciousness right.
Followed by: Does functionalism imply dualism?
Sixteen months ago, I made a post seeking funding for personal research. There was no separate Discussion forum then, and the post was comprehensively downvoted. I did manage to keep going at it, full-time, for the next sixteen months. Perhaps I'll get to continue; it's for the sake of that possibility that I'll risk another breach of etiquette. You never know who's reading these words and what resources they have. Also, there has been progress.
I think the best place to start is with what orthonormal said in response to the original post: "I don't think anyone should be funding a Penrose-esque qualia mysterian to study string theory." If I now took my full agenda to someone out in the real world, they might say: "I don't think it's worth funding a study of 'the ontological problem of consciousness in the context of Friendly AI'." That's my dilemma. The pure scientists who might be interested in basic conceptual progress are not engaged with the race towards technological singularity, and the apocalyptic AI activists gathered in this place are trying to fit consciousness into an ontology that doesn't have room for it. In the end, if I have to choose between working on conventional topics in Friendly AI, and on the ontology of quantum mind theories, then I have to choose the latter, because we need to get the ontology of consciousness right, and it's possible that a breakthrough could occur in the world outside the FAI-aware subculture and filter through; but as things stand, the truth about consciousness would never be discovered by employing the methods and assumptions that prevail inside the FAI subculture.
Perhaps I should pause to spell out why the nature of consciousness matters for Friendly AI. The reason is that the value system of a Friendly AI must make reference to certain states of conscious beings - e.g. "pain is bad" - so, in order to make correct judgments in real life, at a minimum it must be able to tell which entities are people and which are not. Is an AI a person? Is a digital copy of a human person, itself a person? Is a human body with a completely prosthetic brain still a person?
I see two ways in which people concerned with FAI hope to answer such questions. One is simply to arrive at the right computational, functionalist definition of personhood. That is, we assume the paradigm according to which the mind is a computational state machine inhabiting the brain, with states that are coarse-grainings (equivalence classes) of exact microphysical states. Another physical system which admits the same coarse-graining - which embodies the same state machine at some macroscopic level, even though the microscopic details of its causality are different - is said to embody another instance of the same mind.
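The coarse-graining idea can be illustrated with a toy example: two systems with entirely different "micro-physics" that realize the same macro-level state machine. Everything below is an invented illustration of multiple realizability, not a claim about brains:

```python
# Two micro-implementations with different internal "physics" that
# coarse-grain to the same two-state machine (EVEN/ODD under "tick").

class CounterMachine:
    def __init__(self):
        self.count = 0            # microstate: an integer
    def tick(self):
        self.count += 1
    def macrostate(self):
        return "EVEN" if self.count % 2 == 0 else "ODD"

class TokenMachine:
    def __init__(self):
        self.tokens = []          # microstate: a growing list of objects
    def tick(self):
        self.tokens.append(object())
    def macrostate(self):
        return "EVEN" if len(self.tokens) % 2 == 0 else "ODD"

a, b = CounterMachine(), TokenMachine()
history = []
for _ in range(5):
    a.tick(); b.tick()
    history.append((a.macrostate(), b.macrostate()))

# Microstates differ (an int vs. a list), but every macrostate agrees.
print(all(x == y for x, y in history))  # → True
```

On the functionalist view sketched above, the two objects "embody the same state machine" because the coarse-graining from microstates to macrostates yields identical transition behaviour, despite completely different microscopic causality.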
An example of the other way to approach this question is the idea of simulating a group of consciousness theorists for 500 subjective years, until they arrive at a consensus on the nature of consciousness. I think it's rather unlikely that anyone will ever get to solve FAI-relevant problems in that way. The level of software and hardware power implied by the capacity to do reliable whole-brain simulations means you're already on the threshold of singularity: if you can simulate whole brains, you can simulate part brains, and you can also modify the parts, optimize them with genetic algorithms, and put them together into nonhuman AI. Uploads won't come first.
But the idea of explaining consciousness this way, by simulating Daniel Dennett and David Chalmers until they agree, is just a cartoon version of similar but more subtle methods. What these methods have in common is that they propose to outsource the problem to a computational process using input from cognitive neuroscience. Simulating a whole human being and asking it questions is an extreme example of this (the simulation is the "computational process", and the brain scan it uses as a model is the "input from cognitive neuroscience"). A more subtle method is to have your baby AI act as an artificial neuroscientist, use its streamlined general-purpose problem-solving algorithms to make a causal model of a generic human brain, and then to somehow extract from that, the criteria which the human brain uses to identify the correct scope of the concept "person". It's similar to the idea of extrapolated volition, except that we're just extrapolating concepts.
It might sound a lot simpler to just get human neuroscientists to solve these questions. Humans may be individually unreliable, but they have lots of cognitive tricks - heuristics - and they are capable of agreeing that something is verifiably true, once one of them does stumble on the truth. The main reason one would even consider the extra complication involved in figuring out how to turn a general-purpose seed AI into an artificial neuroscientist, capable of extracting the essence of the human decision-making cognitive architecture and then reflectively idealizing it according to its own inherent criteria, is shortage of time: one wishes to develop friendly AI before someone else inadvertently develops unfriendly AI. If we stumble into a situation where a powerful self-enhancing algorithm with arbitrary utility function has been discovered, it would be desirable to have, ready to go, a schema for the discovery of a friendly utility function via such computational outsourcing.
Now, jumping ahead to a later stage of the argument, I argue that it is extremely likely that distinctively quantum processes play a fundamental role in conscious cognition, because the model of thought as distributed classical computation actually leads to an outlandish sort of dualism. If we don't concern ourselves with the merits of my argument for the moment, and just ask whether an AI neuroscientist might somehow overlook the existence of this alleged secret ingredient of the mind, in the course of its studies, I do think it's possible. The obvious noninvasive way to form state-machine models of human brains is to repeatedly scan them at maximum resolution using fMRI, and to form state-machine models of the individual voxels on the basis of this data, and then to couple these voxel-models to produce a state-machine model of the whole brain. This is a modeling protocol which assumes that everything which matters is physically localized at the voxel scale or smaller. Essentially we are asking, is it possible to mistake a quantum computer for a classical computer by performing this sort of analysis? The answer is definitely yes if the analytic process intrinsically assumes that the object under study is a classical computer. If I try to fit a set of points with a line, there will always be a line of best fit, even if the fit is absolutely terrible. So yes, one really can describe a protocol for AI neuroscience which would be unable to discover that the brain is quantum in its workings, and which would even produce a specific classical model on the basis of which it could then attempt conceptual and volitional extrapolation.
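The line-fitting point can be shown in a few lines. This is just the standard least-squares formula applied to data a line cannot capture; the analogy is that a protocol which *assumes* a classical state-machine model will always return one, however poor the fit.

```python
# A least-squares line of best fit always exists, even for data a line
# cannot possibly explain -- here, points on a parabola. Pure-stdlib
# illustration; no claim about any actual neuroscience pipeline.

def best_fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = cov / var
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [-2, -1, 0, 1, 2]
ys = [x * x for x in xs]        # genuinely quadratic data

slope, intercept = best_fit_line(xs, ys)
print(slope, intercept)         # 0.0 2.0 -- a "best" line exists...
print(r_squared(xs, ys, slope, intercept))  # 0.0 -- ...explaining none of the variance
```

The fitting procedure succeeds unconditionally; nothing in its output flags that the model class was wrong from the start.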
Clearly you can try to circumvent comparably wrong outcomes, by adding reality checks and second opinions to your protocol for FAI development. At a more down-to-earth level, these exact mistakes could also be made by human neuroscientists, for the exact same reasons, so it's not as if we're talking about flaws peculiar to a hypothetical "automated neuroscientist". But I don't want to go on about this forever. I think I've made the point that wrong assumptions and lax verification can lead to FAI failure. The example of mistaking a quantum computer for a classical computer may even have a neat illustrative value. But is it plausible that the brain is actually quantum in any significant way? Even more incredibly, is there really a valid a priori argument against functionalism regarding consciousness - the identification of consciousness with a class of computational processes?
I have previously posted (here) about the way that an abstracted conception of reality, coming from scientific theory, can motivate denial that some basic appearance corresponds to reality. A perennial example is time. I hope we all agree that there is such a thing as the appearance of time, the appearance of change, the appearance of time flowing... But on this very site, there are many people who believe that reality is actually timeless, and that all these appearances are only appearances; that reality is fundamentally static, but that some of its fixed moments contain an illusion of dynamism.
The case against functionalism with respect to conscious states is a little more subtle, because it's not being said that consciousness is an illusion; it's just being said that consciousness is some sort of property of computational states. I argue first that this requires dualism, at least with our current physical ontology, because conscious states are replete with constituents not present in physical ontology - for example, the "qualia", an exotic name for very straightforward realities like: the shade of green appearing in the banner of this site, the feeling of the wind on your skin, really every sensation or feeling you ever had. In a world made solely of quantum fields in space, there are no such things; there are just particles and arrangements of particles. The truth of this ought to be especially clear for color, but it applies equally to everything else.
In order that this post should not be overlong, I will not argue at length here for the proposition that functionalism implies dualism, but shall proceed to the second stage of the argument, which does not seem to have appeared even in the philosophy literature. If we are going to suppose that minds and their states correspond solely to combinations of mesoscopic information-processing events like chemical and electrical signals in the brain, then there must be a mapping from possible exact microphysical states of the brain, to the corresponding mental states. Supposing we have a mapping from mental states to coarse-grained computational states, we now need a further mapping from computational states to exact microphysical states. There will of course be borderline cases. Functional states are identified by their causal roles, and there will be microphysical states which do not stably and reliably produce one output behavior or the other.
Physicists are used to talking about thermodynamic quantities like pressure and temperature as if they have an independent reality, but objectively they are just nicely behaved averages. The fundamental reality consists of innumerable particles bouncing off each other; one does not need, and one has no evidence for, the existence of a separate entity, "pressure", which exists in parallel to the detailed microphysical reality. The idea is somewhat absurd.
Yet this is analogous to the picture implied by a computational philosophy of mind (such as functionalism) applied to an atomistic physical ontology. We do know that the entities which constitute consciousness - the perceptions, thoughts, memories... which make up an experience - actually exist, and I claim it is also clear that they do not exist in any standard physical ontology. So, unless we get a very different physical ontology, we must resort to dualism. The mental entities become, inescapably, a new category of beings, distinct from those in physics, but systematically correlated with them. Except that, if they are being correlated with coarse-grained neurocomputational states which do not have an exact microphysical definition, only a functional definition, then the mental part of the new combined ontology is fatally vague. It is impossible for fundamental reality to be objectively vague; vagueness is a property of a concept or a definition, a sign that it is incomplete or that it does not need to be exact. But reality itself is necessarily exact - it is something - and so functionalist dualism cannot be true unless the underdetermination of the psychophysical correspondence is replaced by something which says for all possible physical states, exactly what mental states (if any) should also exist. And that inherently runs against the functionalist approach to mind.
Very few people consider themselves functionalists and dualists. Most functionalists think of themselves as materialists, and materialism is a monism. What I have argued is that functionalism, the existence of consciousness, and the existence of microphysical details as the fundamental physical reality, together imply a peculiar form of dualism in which microphysical states which are borderline cases with respect to functional roles must all nonetheless be assigned to precisely one computational state or the other, even if no principle tells you how to perform such an assignment. The dualist will have to suppose that an exact but arbitrary border exists in state space, between the equivalence classes.
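The arbitrary border just described can be caricatured in a few lines. Suppose each exact microstate has some propensity p of producing one output rather than the other (a made-up parameterization, purely for illustration): a functional classification must still assign every microstate to exactly one computational state, so some precise cutoff has to be stipulated, and nothing in the functional story says where.

```python
# Caricature of the "arbitrary boundary" problem: every microstate must
# be assigned to one computational state, even the borderline ones.
# The propensity parameter p and the cutoffs are purely illustrative.

def classify(p, cutoff):
    # The cutoff is pure stipulation: 0.5 and 0.505 yield equally
    # "functional" classifications that disagree on borderline states.
    return 1 if p >= cutoff else 0

borderline = [0.49, 0.50, 0.51]
print([classify(p, 0.5) for p in borderline])    # [0, 1, 1]
print([classify(p, 0.505) for p in borderline])  # [0, 0, 1]
```

Both classifications implement the same functional roles for all the clear cases; they differ only in the exact placement of a border that the theory itself leaves undetermined.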
This - not just dualism, but a dualism that is necessarily arbitrary in its fine details - is too much for me. If you want to go all Occam-Kolmogorov-Solomonoff about it, you can say that the information needed to specify those boundaries in state space is so great as to render this whole class of theories of consciousness not worth considering. Fortunately there is an alternative.
Here, in addressing this audience, I may need to undo a little of what you may think you know about quantum mechanics. Of course, the local preference is for the Many Worlds interpretation, and we've had that discussion many times. One reason Many Worlds has a grip on the imagination is that it looks easy to imagine. Back when there was just one world, we thought of it as particles arranged in space; now we have many worlds, dizzying in their number and diversity, but each individual world still consists of just particles arranged in space. I'm sure that's how many people think of it.
Among physicists it will be different. Physicists will have some idea of what a wavefunction is, what an operator algebra of observables is, they may even know about path integrals and the various arcane constructions employed in quantum field theory. Possibly they will understand that the Copenhagen interpretation is not about consciousness collapsing an actually existing wavefunction; it is a positivistic rationale for focusing only on measurements and not worrying about what happens in between. And perhaps we can all agree that this is inadequate, as a final description of reality. What I want to say is that Many Worlds serves the same purpose in many physicists' minds, but is equally inadequate, though from the opposite direction. Copenhagen says the observables are real but goes misty about unmeasured reality. Many Worlds says the wavefunction is real, but goes misty about exactly how it connects to observed reality. My most frustrating discussions on this topic are with physicists who are happy to be vague about what a "world" is. It's really not so different from Copenhagen positivism, except that where Copenhagen says "we only ever see measurements, what's the problem?", Many Worlds says "I say there's an independent reality, what else is left to do?". It is very rare for a Many Worlds theorist to seek an exact idea of what a world is, as you see Robin Hanson and maybe Eliezer Yudkowsky doing; in that regard, reading the Sequences on this site will give you an unrepresentative idea of the interpretation's status.
One of the characteristic features of quantum mechanics is entanglement. But both Copenhagen, and a Many Worlds which ontologically privileges the position basis (arrangements of particles in space), still have atomistic ontologies of the sort which will produce the "arbitrary dualism" I just described. Why not seek a quantum ontology in which there are complex natural unities - fundamental objects which aren't simple - in the form of what we would presently call entangled states? That was the motivation for the quantum monadology described in my other really unpopular post. :-) [Edit: Go there for a discussion of "the mind as tensor factor", mentioned at the start of this post.] Instead of saying that physical reality is a series of transitions from one arrangement of particles to the next, say it's a series of transitions from one set of entangled states to the next. Quantum mechanics does not tell us which basis, if any, is ontologically preferred. Reality as a series of transitions between overall wavefunctions which are partly factorized and partly still entangled is a possible ontology; hopefully readers who really are quantum physicists will get the gist of what I'm talking about.
I'm going to double back here and revisit the topic of how the world seems to look. Hopefully we agree, not just that there is an appearance of time flowing, but also an appearance of a self. Here I want to argue just for the bare minimum - that a moment's conscious experience consists of a set of things, events, situations... which are simultaneously "present to" or "in the awareness of" something - a conscious being - you. I'll argue for this because even this bare minimum is not acknowledged by existing materialist attempts to explain consciousness. I was recently directed to this brief talk about the idea that there's no "real you". We are given a picture of a graph whose nodes are memories, dispositions, etc., and we are told that the self is like that graph: nodes can be added, nodes can be removed, it's a purely relational composite without any persistent part. What's missing in that description is that bare minimum notion of a perceiving self. Conscious experience consists of a subject perceiving objects in certain aspects. Philosophers have discussed for centuries how best to characterize the details of this phenomenological ontology; I think the best was Edmund Husserl, and I expect his work to be extremely important in interpreting consciousness in terms of a new physical ontology. But if you can't even notice that there's an observer there, observing all those parts, then you won't get very far.
My favorite slogan for this is due to the other Jaynes, Julian Jaynes. I don't endorse his theory of consciousness at all; but while in a daydream he once said to himself, "Include the knower in the known". That sums it up perfectly. We know there is a "knower", an experiencing subject. We know this, just as well as we know that reality exists and that time passes. The adoption of ontologies in which these aspects of reality are regarded as unreal, as appearances only, may be motivated by science, but it's false to the most basic facts there are, and one should show a little more imagination about what science will say when it's more advanced.
I think I've said almost all of this before. The high point of the argument is that we should look for a physical ontology in which a self exists and is a natural yet complex unity, rather than a vaguely bounded conglomerate of distinct information-processing events, because the latter leads to one of those unacceptably arbitrary dualisms. If we can find a physical ontology in which the conscious self can be identified directly with a class of object posited by the theory, we can even get away from dualism, because physical theories are mathematical and formal and make few commitments about the "inherent qualities" of things, just about their causal interactions. If we can find a physical object which is absolutely isomorphic to a conscious self, then we can turn the isomorphism into an identity, and the dualism goes away. We can't do that with a functionalist theory of consciousness, because it's a many-to-one mapping between physical and mental, not an isomorphism.
So, I've said it all before; what's new? What have I accomplished during these last sixteen months? Mostly, I learned a lot of physics. I did not originally intend to get into the details of particle physics - I thought I'd just study the ontology of, say, string theory, and then use that to think about the problem. But one thing led to another, and in particular I made progress by taking ideas that were slightly on the fringe, and trying to embed them within an orthodox framework. It was a great way to learn, and some of those fringe ideas may even turn out to be correct. It's now abundantly clear to me that I really could become a career physicist, working specifically on fundamental theory. I might even have to do that, it may be the best option for a day job. But what it means for the investigations detailed in this essay, is that I don't need to skip over any details of the fundamental physics. I'll be concerned with many-body interactions of biopolymer electrons in vivo, not particles in a collider, but an electron is still an electron, an elementary particle, and if I hope to identify the conscious state of the quantum self with certain special states from a many-electron Hilbert space, I should want to understand that Hilbert space in the deepest way available.
My only peer-reviewed publication, from many years ago, picked out pathways in the microtubule which, we speculated, might be suitable for mobile electrons. I had nothing to do with noticing those pathways; my contribution was the speculation about what sort of physical processes such pathways might underpin. Something I did notice, but never wrote about, was the unusual similarity (so I thought) between the microtubule's structure, and a model of quantum computation due to the topologist Michael Freedman: a hexagonal lattice of qubits, in which entanglement is protected against decoherence by being encoded in topological degrees of freedom. It seems clear that performing an ontological analysis of a topologically protected coherent quantum system, in the context of some comprehensive ontology ("interpretation") of quantum mechanics, is a good idea. I'm not claiming to know, by the way, that the microtubule is the locus of quantum consciousness; there are a number of possibilities; but the microtubule has been studied for many years now and there's a big literature of models... a few of which might even have biophysical plausibility.
As for the interpretation of quantum mechanics itself, these developments are highly technical, but revolutionary. A well-known, well-studied quantum field theory turns out to have a bizarre new nonlocal formulation in which collections of particles seem to be replaced by polytopes in twistor space. Methods pioneered via purely mathematical studies of this theory are already being used for real-world calculations in QCD (the theory of quarks and gluons), and I expect this new ontology of "reality as a complex of twistor polytopes" to carry across as well. I don't know which quantum interpretation will win the battle now, but this is new information, of utterly fundamental significance. It is precisely the sort of altered holistic viewpoint that I was groping towards when I spoke about quantum monads constituted by entanglement. So I think things are looking good, just on the pure physics side. The real job remains to show that there's such a thing as quantum neurobiology, and to connect it to something like Husserlian transcendental phenomenology of the self via the new quantum formalism.
It's when we reach a level of understanding like that, that we will truly be ready to tackle the relationship between consciousness and the new world of intelligent autonomous computation. I don't deny the enormous helpfulness of the computational perspective in understanding unconscious "thought" and information processing. And even conscious states are still states, so you can surely make a state-machine model of the causality of a conscious being. It's just that the reality of how consciousness, computation, and fundamental ontology are connected, is bound to be a whole lot deeper than just a stack of virtual machines in the brain. We will have to fight our way to a new perspective which subsumes and transcends the computational picture of reality as a set of causally coupled black-box state machines. It should still be possible to "port" most of the thinking about Friendly AI to this new ontology; but the differences, what's new, are liable to be crucial to success. Fortunately, it seems that new perspectives are still possible; we haven't reached Kantian cognitive closure, with no more ontological progress open to us. On the contrary, there are still lines of investigation that we've hardly begun to follow.
Interesting article about optimism
According to this brain-imaging study, volunteers presented with negative scenarios (e.g. car crashes, cancer) and asked to estimate the probability of these scenarios happening to them would only update their beliefs if the actual rate of occurrence in the population, given to them afterwards, was lower - i.e. more optimistic - than what they had guessed. The more "optimistic" the subjects were, according to a personality test, the less likely they were to update their beliefs based on more negative information, and the less activity they showed in their frontal lobes, indicating that they weren't "paying attention" to the new information.
Sounds like confirmation bias, except that interestingly enough, it's unidirectional in this case. I wonder if very pessimistic people would have the opposite bias, only updating their estimate if the actual probability was higher, or more negative.
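The reported asymmetry can be caricatured in a few lines. This is a toy model, not the study's actual analysis: the estimate moves toward the population rate only when that rate is better (lower) than the current guess, and the `learning_rate` parameter is invented for illustration.

```python
# Toy model of the optimism asymmetry: an estimate of a bad outcome's
# probability updates toward the true population rate only when the
# news is good (rate lower than the guess). Purely illustrative; not
# the statistical model used in the study.

def optimistic_update(estimate, population_rate, learning_rate=0.5):
    if population_rate < estimate:   # good news: partially update
        return estimate + learning_rate * (population_rate - estimate)
    return estimate                  # bad news: ignored entirely

est = 0.40
est = optimistic_update(est, 0.20)   # good news: estimate drops toward 0.20
print(est)
est = optimistic_update(est, 0.60)   # bad news: estimate is unchanged
print(est)
```

A "pessimistic" variant would simply flip the comparison, which is exactly the hypothetical raised above about very pessimistic subjects.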
Link to article on kurzweilai.
Link to abstract in Nature journal. I can't access the full text.
Marsh et al. "Serotonin Transporter Genotype (5-HTTLPR) Predicts Utilitarian Moral Judgments"
The whole paper is here. In short, they found a genotype that predicts people's response to the original trolley problem:
A trolley (i.e. in British English a tram) is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?
Participants with one kind of serotonin transporter genotype (LL homozygotes) judged flipping the switch to be better than a morally neutral action. Participants with the other kind (S-carriers) judged flipping the switch to be no better than a morally neutral action. The groups responded equally to the "fat man" scenario, both rejecting the 'push' option.
Some quotes:
We hypothesized that 5-HTTLPR genotype would interact with intentionality in respondents who generated moral judgments. Whereas we predicted that all participants would eschew intentionally harming an innocent for utilitarian gains, we predicted that participants' judgments of foreseen but unintentional harm would diverge as a function of genotype. Specifically, we predicted that LL homozygotes would adhere to the principle of double effect and preferentially select the utilitarian option to save more lives despite unintentional harm to an innocent victim, whereas S-allele carriers would be less likely to endorse even unintentional harm. Results of behavioral testing confirmed this hypothesis.
Participants in this study judged the acceptability of actions that would unintentionally or intentionally harm an innocent victim in order to save others' lives. An analysis of variance revealed a genotype × scenario interaction, F(2, 63) = 4.52, p = .02. Results showed that, relative to long allele homozygotes (LL), carriers of the short (S) allele showed particular reluctance to endorse utilitarian actions resulting in foreseen harm to an innocent individual. LL genotype participants rated perpetrating unintentional harm as more acceptable (M = 4.98, SEM = 0.20) than did SL genotype participants (M = 4.65, SEM = 0.20) or SS genotype participants (M = 4.29, SEM = 0.30).
...
The results indicate that inherited variants in a genetic polymorphism that influences serotonin neurotransmission influence utilitarian moral judgments as well. This finding is interpreted in light of evidence that the S allele is associated with elevated emotional responsiveness.
The neural bases of behavioral game theory
Bhatt & Camerer (2011). The cognitive neuroscience of strategic thinking. Abstract:
This chapter focuses on some emerging elements of a neuroscientific basis for behavioral game theory. The premise of this chapter is that game theory can be useful in helping to elucidate the neural basis of strategic thinking. The great strength of game theory is that it offers precision in defining what players are likely to do and suggesting algorithms of reasoning and learning. Whether people are using these algorithms can be estimated from behavior and from psychological observables (such as response times and eye tracking of attention), and used as parametric regressors to identify candidate brain circuits that appear to encode those regressors.
This review article may be particularly interesting for those who suspect that game theory may play a major role in human value, perhaps even in ways that would make it more intuitively plausible that reasonable value extrapolation algorithms can be developed.
A study in Science on memory conformity
I believe this may be a good addition to the cognitive bias literature:
Following the Crowd: Brain Substrates of Long-Term Memory Conformity
- Micah Edelson (Department of Neurobiology, Weizmann Institute of Science, Israel)
- Tali Sharot (Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK)
- Raymond J. Dolan (Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK)
- Yadin Dudai (Department of Neurobiology, Weizmann Institute of Science, Israel)
ABSTRACT
Human memory is strikingly susceptible to social influences, yet we know little about the underlying mechanisms. We examined how socially induced memory errors are generated in the brain by studying the memory of individuals exposed to recollections of others. Participants exhibited a strong tendency to conform to erroneous recollections of the group, producing both long-lasting and temporary errors, even when their initial memory was strong and accurate. Functional brain imaging revealed that social influence modified the neuronal representation of memory. Specifically, a particular brain signature of enhanced amygdala activity and enhanced amygdala-hippocampus connectivity predicted long-lasting but not temporary memory alterations. Our findings reveal how social manipulation can alter memory and extend the known functions of the amygdala to encompass socially mediated memory distortions.
Biomedical engineers analyze—and duplicate—the neural mechanism of learning in rats [link]
Restoring Memory, Repairing Damaged Brains (article @ PR Newswire)
Using an electronic system that duplicates the neural signals associated with memory, they managed to replicate the brain function in rats associated with long-term learned behavior, even when the rats had been drugged to forget.
This series of experiments, as described, sounds very well-constructed and thorough. The scientists first recorded specific activity in the hippocampus, where short-term memory becomes long-term memory. They then used drugs to inhibit that activity, preventing the formation of and access to long-term memory. Using the information they had gathered about the hippocampus activity, they constructed an artificial replacement and implanted it into the rats' brains. This successfully restored the rats' ability to store and use long-term memory. Further, they implanted the device into rats without suppressed hippocampal activity, and demonstrated increased memory abilities in those subjects.
"These integrated experimental modeling studies show for the first time that with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time identification and manipulation of the encoding process can restore and even enhance cognitive mnemonic processes," says the paper.
It's a truly impressive result.
Review of Doris, 'The Moral Psychology Handbook' (2010)
The Moral Psychology Handbook (2010), edited by John Doris, is probably the best way to become familiar with the exciting interdisciplinary field of moral psychology. The chapters are written by philosophers, psychologists, and neuroscientists. A few of them are all three, and the university department to which they are assigned is largely arbitrary.
I should also note that the chapter authors happen to comprise a large chunk of my own 'moral philosophers who don't totally suck' list. The book is also exciting because it undermines or outright falsifies a long list of popular philosophical theories with - gasp! - empirical evidence.
Chapter 1: Evolution of Morality (Machery & Mallon)
The authors examine three interpretations of the claim that morality evolved. The claims "Some components of moral psychology evolved" and "Normative cognition is a product of evolution" are empirically well-supported but philosophically uninteresting. The stronger claim that "Moral cognition (a kind of normative cognition) evolved" is more philosophically interesting, but at present not strongly supported by the evidence (according to the authors).
The chapter serves as a compact survey of recent models for the evolution of morality in humans (Joyce, Hauser, de Waal, etc.), and attempts to draw philosophical conclusions about morality from these descriptive models (e.g. Joyce, Street).
Chapter 2: Multi-system Moral Psychology (Cushman, Young, & Greene)
The authors survey the psychological and neuroscientific evidence showing that moral judgments are both intuitive/affective/unconscious and rational/cognitive/conscious, and propose a dual-process theory of moral judgment. Scientific data is used to verify or falsify philosophical theories proposed as, for example, explanations for trolley-problem cases.
Consequentialist moral judgments are more associated with rational thought than deontological judgments, but both deontological and consequentialist moral judgments have their sources in emotion. Deontological judgments are associated with 'alarm bell' emotions that circumvent reasoning and provide absolute demands on behavior. Alarm-bell emotions are rooted in (for example) the amygdala. Consequentialist judgments are associated with 'currency' emotions that provide negotiable motivations weighing for and against particular behaviors, and are rooted in meso-limbic regions that track a stimulus's reward magnitude, reward probability, and expected value.
This chapter might be the best one in the book.
Chapter 3: Moral Motivation (Schroeder, Roskies, & Nichols)
The authors categorize philosophical theories of moral motivation into four groups:
- Instrumentalists think people are motivated when they form beliefs about how to satisfy pre-existing desires.
- Cognitivists think people are motivated merely by the belief that something is right or wrong.
- Sentimentalists think people are morally motivated only by emotions.
- Personalists think people are motivated by their character: their knowledge of good and bad, their wanting for good or bad, their emotions about good or bad, and their habits of responding to these three.
The authors then argue that the neuroscience of motivation fits best with the instrumentalist and personalist pictures of moral motivation, poses some problems for sentimentalists, and presents grave problems for cognitivists. The main weakness of the chapter is that its picture of the neuroscience of motivation is mostly drawn from a decade-old neuroscience textbook. As such, the chapter misses many new developments, especially the important discoveries occurring in neuroeconomics. Still, I can personally attest that the latest neuroscience still comes down most strongly in favor of instrumentalists and personalists, but there are recent details that could have been included in this chapter.
Chapter 4: Moral Emotions (Prinz & Nichols)
The authors survey studies that illuminate the role of emotions in moral cognition, and discuss several models that have been proposed, concluding that the evidence currently respects each of them. They then focus on a more detailed discussion of two emotions that are particularly causal in the moral judgments of Western society: anger and guilt.
The chapter is strong in example experiments, but a higher-level discussion of the role of emotions in moral judgment is provided by chapter 2.
Chapter 5: Altruism (Stich, Doris, & Roedder)
The authors distinguish four kinds of desires: (1) desires for pleasure and avoiding pain, (2) self-interested desires, (3) desires that are neither self-interested nor for the well-being of others, and (4) desires for the well-being of others. Psychological hedonism maintains that all (terminal, as opposed to instrumental) desires are of type 1. Psychological egoism says that all desires are of type 2 (which includes type 1). Altruism claims that some desires fall into category 4. And if there are desires of type 3 but none of type 4, then both egoism and altruism are false.
The authors survey evolutionary arguments for and against altruism, but are not yet convinced by any of them.
Psychology, however, does support the existence of altruism, which seems to be "the product of an emotional response to another's distress." The authors survey the experimental evidence, especially the work of Batson. They conclude there is significant support for the existence of genuine human altruism. We are not motivated by selfishness alone.
Chapter 6: Moral Reasoning (Harman, Mason, & Sinnott-Armstrong)
The authors clarify the roles of conscious and unconscious moral reasoning, and reject one popular theory of moral reasoning: the deductive model. One of many reasons for their rejection of the deductive model is that it assumes we come to explicit moral conclusions by applying logic, probability theory, and decision theory to pre-existing moral principles, but in the deductive model these principles are understood in terms of psychological theories of concepts that are probably false. The authors survey the 'classical view of concepts' (concepts as defined in terms of necessary and sufficient conditions) and conclude that it is less likely to be true than alternate theories of mental concepts that are less friendly to the deductive model of moral reasoning.
The authors propose an alternate model of moral reasoning whereby one makes mutual adjustments to one's beliefs and plans and values in pursuit of what Rawls called 'reflective equilibrium.'
Chapter 7: Moral Intuitions (Sinnott-Armstrong, Young, & Cushman)
The authors refer to moral intuitions as "strong, stable, immediate moral beliefs." The 'immediate' part means that these moral beliefs do not arise through conscious reasoning; the subject is conscious only of the resulting moral belief.
Their project is this:
...moral intuitions are unreliable to the extent that morally irrelevant factors affect moral intuitions. When they are distorted by irrelevant factors, moral intuitions can be likened to mirages or seeing pink elephants while one is on LSD. Only when beliefs arise in more reputable ways do they have a fighting chance of being justified. Hence we need to know about the processes that produce moral intuitions before we can determine whether moral intuitions are justified.
Thus the chapter engages in something like Less Wrong-style 'dissolution to algorithm.'
A major weakness of this article is that it focuses on the understanding of intuitions as attribute substitution heuristics, but ignores the other two major sources of intuitive judgments: evolutionary psychology and unconscious associative learning.
Chapter 8: Linguistics and Moral Theory (Roedder & Harman)
This chapter examines the 'linguistic analogy' in moral psychology - the analogy between Chomsky's 'universal grammar' and what has been called 'universal moral grammar.' The authors don't have any strong conclusions, but instead suggest that this linguistic analogy may be a helpful framework for pursuing further research. They list five ways in particular the analogy is useful. This chapter can be skipped without missing much.
Chapter 9: Rules (Mallon & Nichols)
The authors survey the evidence that moral rules "are mentally represented and play a causal role in the production of judgment and behavior." This may be obvious, but it's nice to have the evidence collected somewhere.
Chapter 10: Responsibility (Knobe & Doris)
This chapter surveys the experimental studies that test people's attributions of moral responsibility. In short, people do not make such judgments according to invariant principles, as assumed by most of 20th century moral philosophy. (Moral philosophers have spent most of their time trying to find a set of principles that accounted for people's ordinary moral judgments, and showing that alternate sets of principles failed to capture people's ordinary moral judgments in particular circumstances.)
People adopt different moral criteria for judging different cases, even when they verbally endorse a simple set of abstract principles. This should not be surprising, as the same had already been shown to be true in linguistics and in non-moral judgment. The chapter surveys the variety of ways in which people adopt different moral criteria for different cases.
Chapter 11: Character (Merritt, Doris, & Harman)
This chapter surveys the evidence from situationist psychology, which undermines the 'robust character traits' view of human psychology upon which many varieties of virtue ethics depend.
Chapter 12: Well-Being (Tiberius & Plakias)
This chapter surveys competing concepts of 'well-being' in psychology, and provides reasons for using the 'life satisfaction' concept of well-being, especially in philosophy. The authors then discuss life satisfaction and normativity; for example the worry about the arbitrariness of factors that lead to human life satisfaction.
Chapter 13: Race and Racial Cognition (Kelly, Machery, & Mallon)
I didn't read this chapter.
[REVIEW] Foundations of Neuroeconomic Analysis
Neuroeconomics is the application of advances in neuroscience to the fundamentals of economics: choice and valuation. Foundations of Neuroeconomic Analysis by Paul Glimcher, an active researcher in this area, presents a summary of this relatively new field to psychologists and economists. Although written as a serious academic work, the presentation spans disciplines, so it should be accessible to anyone interested even without much background in either area. Although the writing is so-so, the book covers multiple Less Wrong-relevant themes, from reductionism to neuroscience to decision theory. If nothing else, the results discussed provide a wonderful example of how no one knows what science doesn't know. I doubt many economists are aware that researchers can point to something very similar to utility on a brain scanner; many would scoff at the very notion.
Because of the book's wide target audience, there is not enough detail for specialists, but possibly a little too much for non-specialists. If you are interested in this topic, the best reason to pick up the book would be to track down further references. I hope the following summary does the book justice for everyone else.
Are book summaries of this sort useful? The recent review/summary of Predictably Irrational appears to have gone over well. Any suggestions to improve possible future reviews?
Introduction
Many economists think economics is fundamentally separate from psychology and neuroscience; since they take choices as primitives, little if any knowledge would be gained from understanding the mechanisms underlying choice. However, science steadily brings reduction and linkage between previously unrelated disciplines. A striking amount has already been discovered about the exact processes in the brain governing choice and valuation. On the other side, neuroscientists and psychologists underestimate the ability of economists to say whether claims about the brain are logically coherent or not.
Section I: The Challenge of Neuroeconomics
Consider a man and woman who have an affair with each other at a professional conference, which they later consider a mistake. An economist looking at this situation would treat their choice to sleep together as revealing a preference, regardless of their verbal claims. A psychologist would consider how mental states mediated this decision, and would be more willing to consider whether the decision was a mistake or not. Biologists would be more likely to point to ancestral benefits of extra-pair copulations, not considering the reflective judgements as directly relevant. These explanations largely speak past each other, hinting that a unified theory could do much better in predicting behavior.
The key to this is establishing linkages between the logical primitives of each discipline. Behavior could be explained on the level of physics, biology, psychology, or economics, but whether low-level explanations are practical is a different matter. Realistically, linking disciplines will strengthen both fields by mutually constraining the theories available to them.
With the neoclassical revolution, economics developed concepts of utility as reflecting ordinal relationships over revealed preferences. Choices that satisfied certain consistency conditions could be treated as if generated by a utility function. Additional axioms allowed consistent choice under uncertainty to be added to the theory. There are notable problems with this approach, but the core ideas of utility and maximization have surprisingly close neural analogues. Rather than operating "as if" individuals act on the basis of utility, a hard theory of "because" is being developed.
A look at visual perception reveals that our subjective experience of light intensity varies substantially depending on the wavelength of the light. Brightness is a concept that resides in the mind, and our sensitivity to different wavelengths corresponds precisely to the absorption spectrum of the chemical rhodopsin in our retinas. All perceptions are represented in the mind along a power scale with some variance. Because the distributions of percepts overlap, subjects can accurately report that a physically dimmer light appears brighter. This suggests that random utility models, originally developed for statistical purposes, might directly describe what happens in the brain. One interesting consequence of the power scaling law is that risk aversion would be embedded at the level of perception.
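The overlap idea can be sketched numerically. Here is a toy random-utility model, with all parameters (the power-law exponent, the noise level) chosen purely for illustration rather than taken from the book: each light's perceived brightness is its physical intensity raised to a power, plus noise, so a physically dimmer light is sometimes judged brighter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model (assumed parameters): perception = intensity^exponent + noise.
def perceived(intensity, exponent=0.33, noise_sd=0.1, rng=rng):
    return intensity ** exponent + rng.normal(0, noise_sd)

trials = 5000
bright, dim = 1.0, 0.8  # physical intensities of two lights

# Count how often the dimmer light is reported as brighter
confusions = sum(perceived(dim) > perceived(bright) for _ in range(trials))
print(0 < confusions < trials)  # distributions overlap, so some confusions occur
```

Because the two perceptual distributions overlap but are not identical, the dim light wins on some trials but not most, which is exactly the pattern random utility models were built to fit.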
Section II: The Choice Mechanism
Due to its relative simplicity, eye movement serves as a model for motor control and perhaps decision-making broadly. The superior colliculus represents possible eye movements topographically with "hills" of activity. Eventually, the tissue transitions to a bursting state where the most active hill becomes much more active and the rest are inhibited via a "winner-take-all" or "argmax" mechanism. All inputs to eye motion have to pass through the superior colliculus, so it represents a common final pathway for processed sensory signals. When monkeys are given varying rewards for eye-movement tasks, activity in the lateral intraparietal area (LIP) correlates strongly with the probability and size of reward, in an area known to trigger action before the action is taken. In other words, this appears to be a direct neural representation of subjective expected valuation. When monkey subjects play a game whose equilibrium is in mixed strategies, neuron firing rates are all roughly equal, matching the conclusion that the expected utilities of actions are equalized when an opponent is mixing.
Cortical neurons fire almost like independent Poisson processes, so neurons downstream can easily extract the mean firing rate of their inputs. Interneuronal correlation can vary according to the task at hand, resulting in greater or lesser variation in the final decision, so descriptive decision theories must incorporate randomness in choice. This also provides support for mixed strategies being represented directly in the brain.
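The rate-extraction point is easy to illustrate. In this sketch (with an assumed firing rate, pool size, and integration window, none taken from the book), many independent Poisson spike counts are pooled, and their average recovers the underlying rate with little variance:

```python
import numpy as np

rng = np.random.default_rng(0)

true_rate = 20.0   # underlying firing rate, spikes/second (assumed)
window = 0.1       # 100 ms integration window (assumed)
n_neurons = 200    # size of the input pool (assumed)

# Spike counts in one window for each independent Poisson input
counts = rng.poisson(true_rate * window, size=n_neurons)

# A downstream neuron summing its inputs effectively computes this estimate
estimated_rate = counts.mean() / window
print(round(estimated_rate, 1))  # close to 20.0
```

Any single neuron's count is very noisy (Poisson variance equals the mean), but averaging over a pool shrinks the error, which is why independent inputs make the mean rate easy to read out downstream.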
Subjective valuations are normalized: options are valued only relative to the other options at hand. This normalization maximizes the joint information carried by neurons, increasing the efficiency of value representation. One consequence is that as the choice set grows, valuations start overlapping and choice becomes essentially random. Activity also varies according to the delay of rewards, matching previous findings of hyperbolic discounting. While these findings are largely based on eye movements in monkeys, they offer a clear path for reducing choice to neural mechanisms.
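A toy simulation shows why normalization degrades choice in large sets. This is a hypothetical model, not the book's: each option's represented value is its raw value divided by the sum over the choice set, plus neural noise, and the noisy maximum is chosen.

```python
import numpy as np

rng = np.random.default_rng(1)

# Divisive normalization sketch (assumed noise level and values)
def choose(values, noise_sd=0.05, rng=rng):
    values = np.asarray(values, dtype=float)
    normalized = values / values.sum()            # relative coding
    signal = normalized + rng.normal(0, noise_sd, size=len(values))
    return int(np.argmax(signal))                 # winner-take-all readout

best = 0  # index of the highest-valued option in both sets
small_set = [10, 9, 8]
large_set = [10] + [9] * 20

acc_small = np.mean([choose(small_set) == best for _ in range(2000)])
acc_large = np.mean([choose(large_set) == best for _ in range(2000)])
print(acc_small > acc_large)  # the best option is much harder to pick from a large set
```

In the large set, normalization squeezes all values into a narrow band, so the gaps between options shrink below the noise level and the "choice" is nearly a coin flip among many options.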
Section III: Valuation
Returning to visual perception, our judgements are made relative to other elements in the environment. Colors look roughly the same indoors and outdoors, even though there can be six orders of magnitude more illumination outside. Drifting reference points make absolute values unrecoverable. Local irrationalities due to reliance on a reference point arise because evolution trades off accurate sensory encoding against the costs of those irrationalities.
One promising way to specify the reference point is as the discounted sum of our future wealth. Learning depends on the difference between actual and expected rewards, so valuation relative to a reference point arises from the learning process itself. In the brain, reward prediction errors are encoded by dopamine. Dopamine firing rates are well described by an exponentially weighted sum of previous rewards subtracted from the most recent reward. Hebb's law, which says "cells that fire together, wire together," describes how long-term predictions are built.
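The "exponentially weighted sum of previous rewards" description is equivalent to a standard delta-rule update; here is a minimal Rescorla-Wagner-style sketch (the learning rate is an assumed parameter, not a value from the book):

```python
# Prediction error = most recent reward minus the current expectation,
# where the expectation is an exponentially weighted sum of past rewards.
def prediction_errors(rewards, alpha=0.3):
    V = 0.0          # running expectation (the reference point)
    errors = []
    for r in rewards:
        delta = r - V          # dopamine-like reward prediction error
        errors.append(delta)
        V += alpha * delta     # this update makes V an exp.-weighted average
    return errors

# After many identical rewards, the error shrinks toward zero:
errs = prediction_errors([1.0] * 20)
print(errs[0], round(errs[-1], 3))  # large at first, near zero once learned
```

This is the sense in which valuation is inherently reference-dependent: once the reward is fully predicted, the dopamine signal goes quiet, and only deviations from expectation are signaled.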
Valuation appears to be originally constructed in the striatum and medial prefrontal cortex. The reference level encoded there can be directly observed with brain scanners. Various other regions provide inputs to construct value. For instance, the orbitofrontal cortex (OFC) provides an assessment of risk; subjects with lesions in this area exhibit almost perfect risk neutrality. Values might also be stored in the OFC, again in a compressed and encoded form. Longer-term valuations might be stored in the amygdala.
Because valuations are encoded relatively and don't work well over large choice sets, humans might edit out options by sequentially considering particular attributes until the choice set becomes manageable. Unsurprisingly, sorting by attributes can lead to irrational choices.
Probabilistic valuations depend on whether the expectation was learned experientially or symbolically. Symbolically communicated probabilities, where the person is told a number, are overweighted near zero and underweighted near one. Experientially communicated probabilities, where the person samples the lotteries directly, exhibit the opposite pattern. This suggests at least two mechanisms at work, especially since the ability to deal with symbolic probabilities arose relatively late in our evolutionary history. Also, while experientially learned expected values incorporate probabilities implicitly, that information can't be extracted; when probabilities change, the only way to update valuations is to relearn them from scratch.
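The inverse-S distortion for symbolically described probabilities is often modeled with Prelec's one-parameter weighting function, a standard form from the decision-theory literature (not the book's own model; the gamma value here is an illustrative assumption):

```python
import math

# Prelec probability weighting: w(p) = exp(-(-ln p)^gamma), 0 < gamma < 1
# gives the inverse-S shape: overweight small p, underweight large p.
def prelec_weight(p, gamma=0.65):  # gamma chosen for illustration
    if p in (0.0, 1.0):
        return p
    return math.exp(-(-math.log(p)) ** gamma)

print(prelec_weight(0.01) > 0.01)   # small probabilities overweighted
print(prelec_weight(0.99) < 0.99)   # large probabilities underweighted
```

Experiential learning would show the mirror-image pattern, consistent with the two-mechanism story: sampling rarely encounters low-probability events, so they end up effectively underweighted rather than overweighted.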
Section IV: Summary and Conclusions
Here the author presents formalized models of the descriptive theory. The normative uses of this theory are still unclear. Even if we can identify subjective valuations in the brain, does this have any relation to welfare?
The four critical observations of neuroeconomics are reference-dependence, the lack of an absolute measure of anything in the brain, stochasticity in choice, and the influence of learning on choice. Along with the question of the welfare implications of these findings, six primary questions are currently unanswered:
- Where is subjective value stored and how does it get to choice?
- What part of the brain governs when it is "time to choose"?
- What neural mechanism guides complementarity between goods?
- How does symbolic probability work?
- How do the state of the world and utility interact?
- How does the brain represent money?
Put Yourself in Manual Mode (aka Shut Up and Multiply)
Joshua Greene manages to squeeze his ideas about 'point and shoot morality vs. manual mode morality' into just 10 minutes. For those unfamiliar, his work is a neuroscientific approach to recommending that we shut up and multiply.
A neural correlate of certainty
Adam Kepecs' Eppendorf essay, hosted at Science's website (but not printed in the magazine), is about some neurons in the orbitofrontal cortex of rats that appear to represent uncertainty in an odor-recognition task by firing more often, at a rate roughly linearly proportional to the error rate.
The involvement of OFC in decision-making isn't new, but the graphs are nice and quantitative.