
What are the best books on evolutionary psychology?

4 diegocaleiro 21 September 2012 07:59PM

I'd like to distinguish three classes of reasons to read about a discipline:

1) You are curious and want to begin with something 100-500 pages long. I'd go for Pinker's "How the Mind Works" (1997).

2) You want to survey the whole field by reading something 500-1500 pages long. I definitely recommend David Buss's 2004 "The Handbook of Evolutionary Psychology", which beats the usual SI recommendations in the field.

3) You want to know the state of the art of the field, so you really need something very recent, say from the last 2 or 3 years at most. This is me. Please help me if you know what I should read. 300-1500 pages seems a good interval.

Just for comparison: in cognitive neuroscience, 3 would be the 2009 MIT volume "The Cognitive Neurosciences IV".

 

Post your opinions on what 1, 2, and 3 should be for Evolutionary Psychology.

Oh, and if you like Evolutionary Cognitive Neuroscience (a field so new I don't know any of the 3) please post yours too...

Experimental psychology on word confusion

11 lukeprog 14 September 2012 05:44AM

There's plenty of experimental work about how humans make poor judgments and decisions, but I haven't yet found much about how humans make poor judgments and decisions because of confusions about words. And yet, I expect such errors are common — I, at least, encounter them frequently.

It would be nice to have some scientific studies which illustrate the ways in which confusions about words affect everyday decision making, but instead all I can do is make philosophical arguments and point people to things like Yudkowsky's 37 Ways That Words Can Be Wrong or Chalmers' Verbal Disputes and Philosophical Progress.

Which keywords do I need to find experimental work on this topic? I tried Google Scholar searches like "fuzzy concepts" "decision making" and effect of connotations on choices, but I didn't find much in my first hour of looking into this.

How to tell science apart from pseudo-science in a field you don't know?

18 kilobug 02 September 2012 10:25AM

First, a short personal note to make you understand why this is important to me. To make a long story short, the son of a friend has some atypical form of autism and language troubles. And that kid matters a lot to me, so I want to become stronger in helping him, to be able to better interact with him and help him overcome his troubles.

But I don't know much about psychology. I'm a computer scientist, with a general background of maths and physics. I'm kind of a nerd, social skills aren't my strength. I did read some of the basic books advised on Less Wrong, like Cialdini, Wright or Wiseman, but those just give me a very small background on which to build.

And psychology in general, and autism/language troubles in particular, are fields with a lot of pseudo-science. I'm very sceptical of Freud and psychoanalysis, for example, which I consider (but maybe I am wrong?) to be more like alchemy than chemistry. There is also a lot of mysticism, and there are sect-like gurus, surrounding autism.

So I'm a bit unsure how, from my position of having a general scientific and rationality background, I can dive into a completely unrelated field. Research papers are probably above my current level in psychology, so I think books (textbooks or popular science) are the way to go. But how do I find which books, out of the hundreds written on the topic, I should buy and read? Books that are evidence-based science, not pseudo-science, I mean. What is a general method for selecting which books to start with in a field you don't really know? I would welcome any advice from the community.

Disclaimer: this is a personal "call for help", but since I think the answers/advice may matter beyond my own case, I hope you don't mind.

Let's Talk About Intelligence

3 Crystalist 22 August 2012 08:19PM

I'm writing this because, for a while, I have noticed that I am confused: particularly about what people mean when they say someone is intelligent. I'm more interested in a discussion here than actually making a formal case, so please excuse my lack of actual citations. I'm also trying to articulate my own confusion to myself as well as everyone else, so this will not be as focused as it could be.

If I had to point to a starting point for this state, I'd say it was in psych class, where we talked about research presented by Eysenck and Gladwell. Eysenck is very clear in defining intelligence as the ability to solve abstract problems, but not necessarily the motivation to do so. In many ways, this matches Yudkowsky's definition, where he talks about intelligence as a property we can ascribe to an entity which lets us predict that the entity will be able to complete a task, without ourselves necessarily understanding the steps toward completion.

The central theme I'm confused about is the generality of the concept: are we really saying that there is a general algorithm or class of algorithms that will solve most or all problems to within a given distance from optimum?

Let me give an example. Depending on what test you use, an autistic can look clinically retarded, but with 'islands' of remarkable ability, even up to genius levels. The classic example is “Rain Man,” who is depicted as easily solving numerical problems most people don't even understand, but having trouble tying his shoes. This is usually an exaggeration (by no means are all autistics savants), and these island skills are hardly limited to math. The interesting point, though, is that even someone with many such islands can have an abysmally low overall IQ.

Some tests correct for this – Raven's Progressive Matrices, for instance, gives you increasingly complex patterns that you have to complete – and this tends to level out those islands and give an overall score that seems commensurate with the sheer genius that can be found in some areas.

What I find confusing is why we're correcting this at all. Certainly, we know that some people, given a task, can complete that task, and of course, depending on the person, this task can be unfathomably complex. But do we really have the evidence to say that, in general, this task does not depend on the person as well? Or, more specifically, on the algorithms they're running? Is it reasonable to say that a person runs an algorithm that will solve all problems within an efficiency x (with respect to processing time and optimality of the solution)? Or should we be looking closer for islands in neurological baselines as well?

Certainly, we could change the question and ask how efficient are all the algorithms the person is running, and from that, we could give an average efficiency, which might serve as a decent rough estimate for the efficiency with which a person will solve a problem. And for some uses, this is exactly the information we're looking for, and that's fine. But, as a general property of the people we're studying, it seems like the measure is insufficient.

If we're trying to predict specific behavior, it seems like it would be useful to be aware of whatever 'islands' exist – for instance, the common separation between algebraic and geometric approaches to math. In my experience, using geometric explanations to someone with an algebraic approach may not be at all successful, but this is not predictive of what we might think of as the person's a priori probability of solving the problem: occasionally they seem to solve the problem with no more than a few algebraic hints. Of course, this is hardly hard evidence, but I think it points to what I'm getting at.

Looking at the specific algorithm that's being used (or perhaps, the class of algorithm?) can be considerably more predictive of the outcome. Actually, I can't really say that, either: looking at what could be a distinct algorithm can be considerably more predictive of the outcome. There are numerous explanations for these observations, one of which is of course that these are all the same algorithm, just trained on different inputs, and perhaps even constrained or aided by changes in the local neural architecture (as some studies on neurological correlates of autism might suggest). But computational power alone seems insufficient if we're going to explain phenomena like the autistic 'islands'. A savant doesn't want for computational power – but in some areas, they can want for intelligence.

Here's where I start getting confused: the research I've seen assumes intelligence is a single trait which could be genetically, epigenetically, or culturally transmitted. When correlates of intelligence are looked for, from what I've seen, the correlates are for the 'average' intelligence score, and largely disregard the 'islands' of ability. As I've said, this can be useful, but it seems like answering some of these questions would be useful for a more general understanding of intelligence, especially going into the neurological side of things, whether that's in wetware or hardware.

Then again, there's a good chance I'm missing something: in which case, I'd appreciate some help updating my priors.

[Link] Admitting to Bias

19 GLaDOS 10 August 2012 08:13AM

Summary: Current social psychology research is probably, on average, compromised by a leftward political bias. Conservative researchers are likely discriminated against, at least in this field. More importantly, papers and research that do not fit a liberal perspective face greater barriers and burdens.

An article in the online publication Inside Higher Ed reports on a survey of anti-conservative bias among social psychologists.

Numerous surveys have found that professors, especially those in some disciplines, are to the left of the general public. But those same -- and other -- surveys have rarely found evidence that left-leaning academics discriminate on the basis of politics. So to many academics, the question of ideological bias is not a big deal. Investment bankers may lean to the right, but that doesn't mean they don't provide good service (or as best the economy will permit) to clients of all political stripes, the argument goes.

And professors should be assumed to have the same professionalism.

A new study, however, challenges that assumption -- at least in the field of social psychology. The study isn't due to be published until next month (in Perspectives on Psychological Science), and the authors and others are noting limitations to the study. But its findings of bias by social psychologists (even if just a decent-sized minority of them) are already getting considerable buzz in conservative circles. Just over 37 percent of those surveyed said that, given equally qualified candidates for a job, they would support the hiring of a liberal candidate over a conservative candidate. Smaller percentages agreed that a "conservative perspective" would negatively influence their odds of supporting a paper for inclusion in a journal or a proposal for a grant. (The final version of the paper is not yet available, but an early version may be found on the website of the Social Science Research Network.)

To some on the right, such findings are hardly surprising. But to the authors, who expected to find lopsided political leanings, but not bias, the results were not what they expected.

"The questions were pretty blatant. We didn't expect people would give those answers," said Yoel Inbar, a co-author, who is a visiting assistant professor at the Wharton School of the University of Pennsylvania, and an assistant professor of social psychology at Tilburg University, in the Netherlands.

He said that the findings should concern academics. Of the bias he and a co-author found, he said, "I don't think it's O.K."

Discussion of faculty politics extends well beyond social psychology, and humanities professors are frequently accused of being "tenured radicals" (a label some wear with pride). But social psychology has had an intense debate over the issue in the last year.

At the 2011 meeting of the Society for Personality and Social Psychology, Jonathan Haidt of the University of Virginia polled the audience of some 1,000 in a convention center ballroom to ask how many were liberals (the vast majority of hands went up), how many were centrists or libertarians (he counted a couple dozen or so), and how many were conservatives (three hands went up). In his talk, he said that the conference reflected "a statistically impossible lack of diversity,” in a country where 40 percent of Americans are conservative and only 20 percent are liberal. He said he worried about the discipline becoming a "tribal-moral community" in ways that hurt the field's credibility.

The link above is worth following. The problems that arise remind me of the situation with academic ethics, and our own ethics, in light of this paper.

That speech prompted the research that is about to be published. Members of a social psychologists' e-mail list were surveyed twice. (The group is not limited to American social scientists or faculty members, but about 90 percent are academics, including grad students, and more than 80 percent are Americans.) Not surprisingly, the overwhelming majority of those surveyed identified as liberal on social, foreign and economic policy, with the strongest conservative presence on economic policy. Only 6 percent described themselves as conservative over all.

The questions on willingness to discriminate against conservatives were asked in two ways: what the respondents thought they would do, and what they thought their colleagues would do. The pool included conservatives (who presumably aren't discriminating against conservatives) so the liberal response rates may be a bit higher, Inbar said.

The percentages below reflect those who gave a score of 4 or higher on a 7-point scale on how likely they would be to do something (with 4 being "somewhat" likely).

Percentages of Social Psychologists Who Would Be Biased in Various Ways

A "politically conservative perspective" by the author would have a negative influence on evaluation of a paper: 18.6% (self) / 34.2% (colleagues)
A "politically conservative perspective" by the author would have a negative influence on evaluation of a grant proposal: 23.8% (self) / 36.9% (colleagues)
Would be reluctant to extend a symposium invitation to a colleague who is "politically quite conservative": 14.0% (self) / 29.6% (colleagues)
Would vote for the liberal over the conservative job candidate if they were equally qualified: 37.5% (self) / 44.1% (colleagues)

I can't help but think that self-assessments are probably too generous. For predicting how an individual behaves when the behaviour in question is undesirable, I'm more inclined to trust their estimate of how "colleagues" behave than their estimate of how they personally do.

The more liberal the survey respondents identified as being, the more likely they were to say that they would discriminate.

The paper notes surveys and statements by conservatives in the field saying that they are reluctant to speak out and says that "they are right to do so," given the numbers of individuals who indicate they might be biased or that their colleagues might be biased in various ways.

Inbar said that he has no idea if other fields would have similar results. And he stressed that the questions were hypothetical; the survey did not ask participants if they had actually done these things.

He said that the study also collected free responses from participants, and that conservative responses were consistent with the idea that there is bias out there. "The responses included really egregious stuff, people being belittled by their advisers publicly for voting Republican."

This shouldn't be surprising to hear since to quote CharlieSheen: "we even have LW posters who have in academia personally experienced discrimination and harassment because of their right wing politics."

Neil Gross, a professor of sociology at the University of British Columbia, urged caution about the results. Gross has written extensively on faculty political issues. He is the co-author of a 2007 report that found that while professors may lean left, they do so less than is imagined and less uniformly across institution type than is imagined.

Gross said it was important to remember that the percentages saying they would discriminate in various ways are answering yes to a relatively low bar of "somewhat." He also said that the numbers would have been "more meaningful" if they had asked about actual behavior by respondents in the last year, not the more general question of whether they might do these things.

At the same time, he said that the numbers "are higher than I would have expected." One theory Gross has is that the questions are "picking up general political animosity as much as anything else."

If you are wondering about the political leanings of the social psychologists who conducted the study, they are on the left. Inbar said he describes himself as "a pretty doctrinaire liberal," who volunteered for the Obama campaign in 2008 and who votes Democrat. His co-author, Joris Lammers of Tilburg, is to Inbar's left, he said.

What most impressed him about the issues raised by the study, Inbar said, is the need to think about "basic fairness."

While I can see Lammers' point that this is disturbing from a fairness perspective for people grinding their way through academia, and that it should serve as a warning for right-wing LessWrong readers working through the system, I find something else more concerning: our heavy reliance on academia for our map of reality might lead us to inherit these distortions. In light of this, if a widely accepted conclusion from social psychology favours a "right wing" perspective, it is more likely to be correct than it would be if no biases against such perspectives existed. Conclusions that favour a "left wing" perspective are correspondingly somewhat less likely to be true. We should update accordingly.

I also think there are reasons to think we may have similar problems on this site.

Notes on the Psychology of Power

34 gwern 27 July 2012 07:22PM

Luke/SI asked me to look into what the academic literature might have to say about people in positions of power. This is a summary of some of the recent psychology results.

The powerful or elite are: fast-planning abstract thinkers who take action (1) in order to pursue single/minimal objectives, are in favor of strict rules for their stereotyped out-group underlings (2) but are rationalizing (3) & hypocritical when it serves their interests (4), especially when they feel secure in their power. They break social norms (5, 6) or ignore context (1) which turns out to be worsened by disclosure of conflicts of interest (7), and lie fluently without mental or physiological stress (6).

What are powerful members good for? They can help in shifting among equilibria: solving coordination problems or inducing contributions towards public goods (8), and their abstracted Far perspective can be better than the concrete Near of the weak (9).

  1. Galinsky et al 2003; Guinote, 2007; Lammers et al 2008; Smith & Bargh, 2008
  2. Eyal & Liberman
  3. Rustichini & Villeval 2012
  4. Lammers et al 2010
  5. Kleef et al 2011
  6. Carney et al 2010
  7. Cain et al 2005; Cain et al 2011
  8. Eckel et al 2010
  9. Slabu et al; Smith & Trope 2006; Smith et al 2008

continue reading »

Exploiting the Typical Mind Fallacy for more accurate questioning?

31 Xachariah 17 July 2012 12:46AM

I was reading Yvain's Generalizing from One Example, which talks about the typical mind fallacy.  Basically, it describes how humans assume that all other humans are like them.  If a person doesn't cheat on tests, they are more likely to assume others won't cheat on tests either.  If a person sees mental images, they'll be more likely to assume that everyone else sees mental images.

As I'm wont to do, I was thinking about how to make that theory pay rent.  It occurred to me that this could definitely be exploitable.  If the typical mind fallacy is correct, we should be able to have it go the other way; we can derive information about a person's proclivities based on what they think about other people.

E.g., most employers ask "have you ever stolen from a job before?" and have to deal with misreporting, because nobody in their right mind will say yes. However, imagine the typical mind fallacy were correct. Employers could instead ask "what do you think the percentage of employees who have stolen from their job is?" and know that applicants who responded higher than average were correspondingly more likely to steal, and applicants who responded lower than average were less likely to steal. That could cut through all sorts of social desirability distortion effects. You couldn't get the exact likelihood, but you would get more useful information than from a direct question.
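As a toy sketch of what this screening heuristic might look like (the names, numbers, and scoring rule below are all invented for illustration; this is not an established instrument):

```python
from statistics import median

# Hypothetical applicant answers to "what percentage of employees
# have stolen from their job?" -- all names and numbers invented.
estimates = {"ann": 5.0, "bob": 20.0, "cam": 60.0, "dee": 15.0}

med = median(estimates.values())  # the pool's median guess

# On the typical-mind theory, guessing far above the median is weak
# evidence that the applicant is projecting their own behaviour.
scores = {name: est - med for name, est in estimates.items()}
ranked = sorted(scores, key=scores.get, reverse=True)

print(ranked)  # highest deviance above the median first
```

Note that this only ranks applicants relative to their own pool's median; whether that deviance actually predicts stealing is exactly the empirical question the post is asking about.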

In hindsight, which is always 20/20, it seems incredibly obvious.  I'd be surprised if professional personality tests and sociologists aren't using these types of questions.  My google-fu shows no hits, but it's possible I'm just not using the term sociologists use.  I was wondering if anyone had heard of this questioning method before, and whether there's any good research data out there showing just how much you can infer from someone's deviance from the median response.

Two books by Celia Green

-9 Mitchell_Porter 13 July 2012 08:43AM

Celia Green is a figure who should interest some LW readers. If you can imagine Eliezer, not as an A.I. futurist in 2000s America, but as a parapsychologist in 1960s Britain - she must have been a little like that. She founded her own research institute in her mid-20s, invented psychological theories meant to explain why the human race was walking around resigned to mortality and ignorance, felt that her peers (who got all the research money) were doing everything wrong... I would say that her two outstanding books are The Human Evasion and Advice to Clever Children. The first book, while still very obscure, has slowly acquired a fanbase online; but the second book remains thoroughly unknown.

For a synopsis of what the books are about, I think something I wrote in 1993 (I've been promoting her work on the Internet for years) remains reasonable. They contain an analysis of the alleged deficiencies and hidden motivations of normal human psychology, description of an alternative outlook, and an examination of various topics from that new perspective. There is some similarity to the rationalist ideal developed in the Sequences here, in that her alternative involves existential urgency, deep respect for uncertainty, and superhuman aspiration.

There are also prominent differences. Green's starting point is not Bayesian calculation, it's Humean skepticism. Green would agree that one should aspire to "think like reality", but for her this would mean, above all, being mindful of "total uncertainty". It's a fact that I don't know what comes next, that I don't know the true nature of reality, that I don't know what's possible if I try; I may have habitual opinions about these matters, but a moment's honest reflection shows that none of these opinions are knowledge in any genuine sense; even if they are correct, I don't know them to be correct. So if I am interested in thinking like reality, I can begin by acknowledging the radical uncertainty of my situation. I exist, I don't know why, I don't know what I am, I don't know what the world is or what it has planned for me. I may have my ideas, but I should be able to see them as ideas and hold them apart from the unknown reality.

If you are like me, you will enjoy the outlook of open-ended striving that Green develops in this intellectual context, but you will be jarred by her account of ordinary, non-striving psychology. Her answer to the question, why does the human race have such petty interests and limited ambitions, is that it is sunk in an orgy of mutual hatred, mostly disguised, and resulting from an attempt to evade the psychology of striving. More precisely, to be a finite human being is to be in a desperate and frustrating situation; and people attempt to solve this problem, not by overcoming their limitations, but by suppressing their reactions to the situation. Other people are central to the resulting psychological maneuvers. They are a way for you to distract yourself from your own situation, and they are a safe target if the existential frustration and desperation reassert themselves.

Celia Green's psychological ideas are the product of her personal confrontation with the mysterious existential situation, and also her confrontation with an uncomprehending society. I've thought for some time that her portrayal of universal human depravity results from overestimating the potential of the average human being; that in effect she has asked herself, if I were that person, how could I possibly lead the life I see them living, and say the things I hear them saying, unless I were that twisted up inside? Nonetheless, I do think she has described an aspect of human psychology which is real and largely unexamined, and also that her advice on how to avoid the resentful turning-away from reality, and live in the uncertainty, is quite profound. One reason I'm promoting these books is in the hope that some small part of the culture at large is finally ready to digest their contents and critically assess them. People ought to be doing PhDs on the thought of Celia Green, but she's unknown in that world.

As for Celia Green herself, she's still alive and still going. She has a blog and a personal website and an organization based near Oxford. She's an "academic exile", but true to her philosophy, she hasn't compromised one iota and hopes to start her own private university. She may especially be of interest to the metaphysically inclined faction of LW readers, identified by Yvain in a recent blog post.

[Link] Can We Reverse The Stanford Prison Experiment?

43 [deleted] 14 June 2012 03:41AM

From the Harvard Business Review, an article entitled: "Can We Reverse The Stanford Prison Experiment?"

By: Greg McKeown
Posted: June 12, 2012

Clicky Link of Awesome! Wheee! Push me!

Summary:

The Royal Canadian Mounted Police tried a program in which they handed out "Positive Tickets".

Their approach was to try to catch youth doing the right things and give them a Positive Ticket. The ticket granted the recipient free entry to the movies or to a local youth center. They gave out an average of 40,000 tickets per year. That is three times the number of negative tickets over the same period. As it turns out, and unbeknownst to Clapham, that ratio (2.9 positive affects to 1 negative affect, to be precise) is called the Losada Line. It is the minimum ratio of positive to negatives that has to exist for a team to flourish. On higher-performing teams (and marriages for that matter) the ratio jumps to 5:1. But does it hold true in policing?

According to Clapham, youth recidivism was reduced from 60% to 8%. Overall crime was reduced by 40%. Youth crime was cut in half. And it cost one-tenth of the traditional judicial system.


This idea can be applied to Real Life

The lesson here is to create a culture that immediately and sincerely celebrates victories. Here are three simple ways to begin:

1. Start your next staff meeting with five minutes on the question: "What has gone right since our last meeting?" Have each person acknowledge someone else's achievement in a concrete, sincere way. Done right, this very small question can begin to shift the conversation.

2. Take two minutes every day to try to catch someone doing the right thing. It is the fastest and most positive way for the people around you to learn when they are getting it right.

3. Create a virtual community board where employees, partners and even customers can share what they are grateful for daily. Sounds idealistic? Vishen Lakhiani, CEO of Mind Valley, a new generation media and publishing company, has done just that at Gratitude Log. (Watch him explain how it works here).

[Video] Presentation on metacognition contains good intro to basic LW ideas

3 Cyan 12 June 2012 01:12PM

I attended a talk yesterday given under the auspices of the Ottawa Skeptics on the subject of "metacognition" or thinking about thinking -- basically, it was about core rationality concepts. It was designed to appeal to a broad group of lay people interested in science and consisted of a number of examples drawn from pop-sci books such as Thinking, Fast and Slow and Predictably Irrational. (Also mentioned: straw vulcans as described by CFAR's own Julia Galef.) If people who aren't familiar with LW ask you what LW is about, I'd strongly recommend pointing them to this video.

Here's the link.

[Link] Thick and thin

23 [deleted] 06 June 2012 12:08PM

An interesting new entry on Gregory Cochran's and Henry Harpending's well-known blog, West Hunter. For me, it complemented nicely the information I had gained from the LessWrong articles on inferential distances. Link to source.

There is a spectrum of problem-solving, ranging from, at one extreme, simplicity and clear chains of logical reasoning (sometimes long chains) and, at the other, building a picture by sifting through a vast mass of evidence of varying quality. I will give some examples. Just the other day, when I was conferring, conversing and otherwise hobnobbing with my fellow physicists, I mentioned high-altitude lightning, sprites and elves and blue jets. I said that you could think of a thundercloud as a vertical dipole, with an electric field that decreased as the cube of altitude, while the breakdown voltage varied with air pressure, which declines exponentially with altitude. At which point the prof I was talking to said, “and so the curves must cross!” That’s how physicists think, and it can be very effective. The amount of information required to solve the problem is not very large. I call this a ‘thin’ problem.
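A minimal numeric sketch of the "curves must cross" argument above; only the functional forms (a field falling as 1/h³, a breakdown threshold falling exponentially with pressure) come from the quoted passage, while the units and constants are normalized and invented for illustration:

```python
import math

H = 8.0  # atmospheric scale height in km (approximate)

def dipole_field(h):
    # Dipole field falls off as the cube of altitude (normalized units).
    return 1.0 / h**3

def breakdown_threshold(h):
    # Breakdown threshold tracks air pressure, which falls exponentially.
    return math.exp(-h / H)

# An exponential eventually falls below any power law, so searching
# upward must find an altitude where the field exceeds the threshold.
h = 40.0
while dipole_field(h) < breakdown_threshold(h):
    h += 0.01

print(round(h, 1))  # crossing altitude in these toy units
```

The exact crossing altitude depends entirely on the made-up constants; the point is only that a polynomial decay and an exponential decay must intersect.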

At the other extreme,  consider Darwin gathering and pondering on a vast amount of natural-history information, eventually coming up with natural selection as the explanation.   Some of the information in the literature  wasn’t correct, and much  key information that would have greatly aided his  quest, such as basic genetics, was still unknown.   That didn’t stop him, anymore than not knowing the cause of continental drift stopped Wegener.

In another example at the messy end of the spectrum, Joe Rochefort, running Hypo in the spring of 1942, needed to figure out Japanese plans. He had an ever-growing mass of Japanese radio intercepts, some of which were partially decrypted – say, one word of five, with luck. He had data from radio direction-finding; his people were beginning to be able to recognize particular Japanese radio operators by their ‘fist’. He’d studied in Japan, knew the Japanese well. He had plenty of Navy experience – knew what was possible. I would call this a classic ‘thick’ problem, one in which an analyst needs to deal with an enormous amount of data of varying quality. Being smart is necessary but not sufficient: you also need to know lots of stuff.

At this point he was utterly saturated with information about the Japanese Navy.  He’d been  living and breathing JN-25 for months. The Japanese were aimed somewhere,  that somewhere designated by an untranslated codegroup – ‘AF’.  Rochefort thought it meant Midway, based on many clues, plausibility, etc.  OP-20-G, back in Washington,  thought otherwise. They thought the main attack might be against Alaska, or Port Moresby, or even the West Coast.

Nimitz believed Rochefort – who was correct. Because of that, we managed to prevail at Midway, losing one carrier and one destroyer while the Japanese lost four carriers and a heavy cruiser. As so often happens, OP-20-G won the bureaucratic war: Rochefort embarrassed them by proving them wrong, and they kicked him out of Hawaii, assigning him to a floating drydock.

The usual explanation of Joe Rochefort’s fall argues that the geographical proximity of John Redman (head of OP-20-G, the Navy’s main signals intelligence and cryptanalysis group) to Navy headquarters was a key factor in winning the bureaucratic struggle, along with the influence of his brother, Rear Admiral Joseph Redman. That and being a shameless liar.

Personally, I wonder if part of the problem is the great difficulty of explaining the analysis of a thick problem to someone without a similar depth of knowledge. At best, they believe you because you’ve been right in the past. Or, sometimes, once you have developed the answer, there is a ‘thin’ way of confirming it – as when Rochefort took Jasper Holmes’s suggestion and had Midway broadcast an uncoded complaint about the failure of their distillation system – soon followed by a Japanese report that ‘AF’ was short of water.

Most problems in the social sciences are ‘thick’, and unfortunately, almost all of the researchers are as well. There are a lot more Redmans than Rocheforts.

Case Study: Testing Confirmation Bias

32 gwern 02 May 2012 02:03PM

Master copy lives on gwern.net

[link] Mass replication of Psychology articles planned.

25 beoShaffer 18 April 2012 04:13PM

http://chronicle.com/blogs/percolator/is-psychology-about-to-come-undone/29045

The plan is to replicate or fail to replicate all 2008 articles from three major Psychology journals.

ETA: http://openscienceframework.org/ is the homepage of the group behind this.  It's still in Beta, but will eventually include some nifty looking science toolkits in addition to the reproducibility project.

[link] Why We Reason (psychology blog)

4 [deleted] 18 April 2012 11:40AM

Why We Reason is an excellent psychology blog that has a great deal of subject matter in common with Less Wrong. Some of the topics discussed on the blog include social psychology, judgement and decision making, neuroscience, cognitive biases, and creativity. And there's even a hint of the kind of "cognitive philosophy" practiced on Less Wrong.

The author, Sam McNerney, is blessed with the rare gift of being able to distill psychology topics for a lay audience, and his posts are very lucid.

There's also a handy archive of every post on the site.

'Thinking, Fast and Slow' Chapter Summaries / Notes [link]

17 Lightwave 15 April 2012 09:14AM

I recently read Kahneman's 'Thinking Fast and Slow' (actually listened to the audiobook) and I wanted to find a summary of the experiments he describes and I stumbled upon this: http://sivers.org/book/ThinkingFastAndSlow. It has a summary of the interesting/important points of each chapter. Most of the statements seem to be direct quotes from the book, so if you have it in an electronic format (it can easily be obtained from uh, various sources) you can search for those quotes and find the context.

Bonus: Notes from Dan Ariely's Predictably Irrational and also many other books.

The principle of ‘altruistic arbitrage’

18 RobertWiblin 09 April 2012 01:29AM

Cross-posted from http://www.robertwiblin.com

There is a principle in finance that obvious and guaranteed ways to make a lot of money, so-called ‘arbitrages’, should not exist. It has a simple rationale. If market prices made it possible to trade assets around and in the process make a guaranteed profit, people would do it, in so doing shifting some prices up and others down. They would only stop making these trades once the prices had adjusted and the opportunity to make money had disappeared. While opportunities to make ‘free money’ appear all the time, they are quickly noticed and the behaviour of traders eliminates them. The logic of selfishness and competition means the only remaining ways to make big money should involve risk taking, luck and hard work. This is the ‘no arbitrage’ principle.

Should a similar principle exist for selfless as well as selfish finance? When a guaranteed opportunity to do a lot of good for the world appears, philanthropists should notice and pounce on it, and only stop shifting resources into that activity once the opportunity has been exhausted. This wouldn’t work as quickly as the elimination of arbitrage on financial markets of course. Rather it would look more like entrepreneurs searching for and exploiting opportunities to open new and profitable businesses. Still, in general competition to do good should make it challenging for an altruistic start-up or budding young philanthropist to beat existing charities at their own game.

There is a very important difference though. Most investors are looking to make money, and so for them a dollar is a dollar, whatever business activity it comes from. Competition between investors makes opportunities to get those dollars hard to find. The same is not true of altruists, who have very diverse preferences about who is most deserving of help and how we should help them; a ‘util’ from one charitable activity is not the same as a ‘util’ from another. This suggests that unlike in finance, we may be able to find ‘altruistic arbitrages’, that is to say, ‘opportunities to do a lot of good for the world that others have left unexploited.’

The rule is simple: target groups you care about that other people mostly don’t, and take advantage of strategies other people are biased against using.  That rule is the root of a lot of advice offered to thoughtful givers and consequentialist-oriented folks. An obvious example is that you shouldn’t look to help poor people in rich countries. There are already a lot of government and private dollars chasing opportunities to assist them, so the low hanging fruit has all been used up and then some. The better value opportunities are going to be in poor, unromantic places you have never heard of, where fewer competing philanthropist dollars are directed. Similarly, you should think about taking high risk-high return strategies. Most do-gooders are searching for guaranteed and respectable opportunities to do a bit of good, rather than peculiar long-shot opportunities to do a lot of good. If you only care about the ‘expected‘ return to your charity, then you can do more by taking advantage of the quirky, improbable bets neglected by others.
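
To make the ‘expected return’ reasoning concrete, here is a toy calculation. All the numbers are invented purely for illustration, and `expected_good` is just a hypothetical helper, not anything from the charity-evaluation literature:

```python
# Toy expected-value comparison between a safe, crowded intervention and a
# neglected long shot. All numbers are invented for illustration only.

def expected_good(p_success, good_if_success):
    # risk-neutral expected units of good done, per dollar
    return p_success * good_if_success

# Respectable, guaranteed option: certain to work, but crowded with other
# donors, so the remaining returns are modest.
safe = expected_good(p_success=1.0, good_if_success=1.0)

# Quirky long shot neglected by risk-averse donors: it usually fails,
# but pays off hugely when it works.
long_shot = expected_good(p_success=0.01, good_if_success=500.0)

print(safe)       # 1.0
print(long_shot)  # 5.0
```

On these made-up numbers, a risk-neutral altruist prefers the long shot despite its 99% failure rate; donors who insist on visible, guaranteed results never bid its ‘price’ down, which is exactly the gap the post calls altruistic arbitrage.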

Who do I personally care about more than others? For me the main candidates are animals, especially wild ones, and people who don’t yet exist and may never exist – interest groups that go largely ignored by the majority of humanity. What are the risky strategies I can employ to help these groups? Working on future technologies most people think are farcical naturally jumps to mind but I’m sure there are others and would love to hear them.

This principle is the main reason I am skeptical of mainstream political activism as a way to improve the world. If you are part of a significant worldwide movement, it’s unlikely that you’re working in a neglected area and exploiting how your altruistic preferences are distinct from those of others.

What other conclusions can we draw thinking about philanthropy in this way?
Evolutionary psychology: evolving three eyed monsters

14 Dmytry 16 March 2012 09:28PM

Summary

We should not expect the evolution of complex psychological and cognitive adaptations in a timeframe in which, morphologically, animal bodies can change only a little. The genetic alteration underlying the cognition for speech shouldn't be expected to be dramatically more complex than the alteration of the vocal cords.

Evolutions that did not happen

When humans descended from the trees and became bipedal, it would have been very advantageous to have an eye or two on the back of the head, for detecting predators and to protect us against being back-stabbed by fellow humans. This is why all of us have an extra eye on the back of our heads, right? Ohh, we don't. Perhaps mate selection resulted in the poor reproductive success of the back-eyed hominids. Perhaps the tribes would kill any mutant with eyes on the back.

There are pretty solid reasons why none of the above has happened, and can't happen in such timeframes. Evolution does not happen simply because a trait would be beneficial, or because there's a niche to be filled. A simple alteration to the DNA has to happen, causing a morphological change which results in some reproductive improvement; then the DNA has to mutate again, and so on. Unrelated, nearly-neutral mutations may combine to produce an unexpected change (for example, wolves have many genes that alter their size; a random selection of those genes produces an approximately normal distribution of sizes, and we can rapidly select smaller dogs by utilizing the existing diversity). There is no such path rapidly leading up to an eye on the back of the head. The eye on the back of the head didn't evolve because evolution couldn't make that adaptation.
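
The dog-size aside can be illustrated with a toy simulation: a trait controlled by many small additive genes is approximately normally distributed across a population, and selection on that standing variation shifts the mean within a few generations, with no new mutations required. The gene count, population size, and selection rule below are all invented purely for illustration:

```python
import random

random.seed(0)

N_GENES = 100    # hypothetical number of additive size loci
POP_SIZE = 1000

def genome():
    # each locus contributes 0 or 1 to the trait; the trait is their sum,
    # so it is approximately normally distributed across the population
    return [random.randint(0, 1) for _ in range(N_GENES)]

def size(g):
    return sum(g)

pop = [genome() for _ in range(POP_SIZE)]
before = sum(size(g) for g in pop) / POP_SIZE

# Truncation selection: breed only the smallest 20% each generation.
for _ in range(10):
    pop.sort(key=size)
    parents = pop[:POP_SIZE // 5]
    next_pop = []
    for _ in range(POP_SIZE):
        a, b = random.choice(parents), random.choice(parents)
        # child inherits each locus from a random parent (free recombination)
        next_pop.append([random.choice(pair) for pair in zip(a, b)])
    pop = next_pop

after = sum(size(g) for g in pop) / POP_SIZE
print(round(before, 1), round(after, 1))  # mean size drops sharply in ten generations
```

The point of the sketch: selection here only reshuffles existing diversity, which is fast; building a genuinely new structure like a third eye has no such pool of standing variation to draw on.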

The speed of evolution is severely limited. The ways in which evolution can work are very limited, too. In the time since we humans came down from the trees, we have undergone only rather minor adaptations in the shape of our bodies, as is evident from the fossil record – and that is the degree of change we should expect in the rest of our bodies, including our brains.

The correct application of evolutionary theory should be entirely unable to account for an outrageous hypothetical like an extra eye on the back of our heads (an extra eye can evolve, of course, but it would take a very long time). Evolution is not magic. The power of a scientific theory is that it can't explain everything, but only the things which are true – that's what makes a scientific theory useful for finding the things that are true, in advance of observation. That is what gives science its predictive power. That's what differentiates science from religion: the power of not explaining the wrong things.

Evolving the instincts

What do we think it would take to evolve a new innate instinct? To hard-wire a cognitive mechanism?

Groups of neurons have to connect in new ways – the neurons on one side must express binding proteins, which guide the axons towards them; the weights of the connections have to be adjusted. The majority of the genes expressed in neurons affect all of the neurons; some affect just a group, but there is no known mechanism by which the bindings of an entirely arbitrary group may be controlled from the DNA in one mutation. The difficulties are not unlike those of an extra eye. This, combined with the above-mentioned speed constraints, imposes severe limitations on the sorts of wiring modifications humans could have evolved during the hunter-gatherer environment, and ultimately on the behaviours that could have evolved. Even very simple things – such as a preference for a particular body shape in mates – have extreme hidden implementation complexity in terms of the DNA modifications leading up to the wiring leading up to the altered preferences. Wiring the brain for a specific cognitive fallacy is anything but simple. It may not always be as time-consuming or impossible as adding an extra eye, but it is still no small feat.

Junk evolutionary psychology

It is extremely important to take into account the properties of evolutionary process when invoking evolution as explanation for traits and behaviours.

Evolutionary theory, as invoked in evolutionary psychology – especially of the armchair variety – is all too often a universal explanation. It is magic that can explain anything equally well. Know of a fallacy of reasoning? Think up how it could have helped the hunter-gatherer, make a hypothesis, construct a flawed study across cultures, and publish.

No consideration is given to the strength of the advantage, to the size of the 'mutation target', to the mechanisms by which a mutation in the DNA would result in the modification of the circuitry that produces the trait, or to gradual adaptability. All of that is glossed over entirely in common armchair evolutionary psychology and, unfortunately, even in academia. Evolutionary psychology is littered with examples of traits which are alleged to have evolved over the same time during which we had barely adapted to walking upright.

It may be that when describing behaviours, a lot of complexity can be hidden in very simple-sounding concepts, which thus seem like good targets for evolutionary explanation. But when you look at the details – the axons that have to find their targets, the gene that must activate in specific cells but not others – there is a great deal of complexity in coding for even very simple traits.

Note: I originally did not intend to make an example of junk, for thou shalt not pick a strawman; but for the sake of clarity, here is an example of what I would consider junk: the explanation of better performance on the Wason Selection Task as the result of an evolved 'social contracts module', without the slightest consideration for what it might take, in terms of DNA, to code a Wason-Selection-Task-solver circuit, nor for alternative plausible explanations, nor for the readily available fact that people can easily learn to solve the Wason Selection Task correctly when taught – a fact which still implies general-purpose learning – and the fact that high-IQ people can solve far more confusing tasks of far greater complexity, which demonstrates that the tasks can be solved in the absence of specific evolved 'social contract' modules.

Here is an example of non-junk: evolutionary pressure can adjust the strength of pre-existing emotions such as anger and fear, and can even decrease intelligence whenever higher intelligence is maladaptive.

Another commonly neglected fact: evolution is not a watchmaker, blind or not. It does not choose a solution to a problem and then work on that solution! It works on all adaptive mutations simultaneously, and the simpler changes to existing systems are much quicker to evolve. If a mutation that tweaks an existing system improves fitness, it too will be selected for, even if there was a third eye in progress.

As much as it would be more politically correct and 'moderate' for, e.g., the evolution-of-religion crowd to make their point by arguing that religious people have evolved a specific god module which does nothing but make them believe in god, rather than implying that they are 'genetically stupid' in some way, the same selective pressure would also make evolution select for non-god-specific heritable tweaks to learning, and for minor cognitive deficits, that increase religiosity.

Lined slate as a prior

As an update to tabula rasa, picture lined writing paper: it provides some guidance for the handwriting. Horizontally lined paper is good for writing text but not for arithmetic; five lines close together, separated by spacing, are good for writing music; and grid paper is fairly universal. Different regions of the brain are tailored to different content, but they should not be expected to themselves code different algorithms, save for a few exceptions which had a long time to evolve, early in vertebrate history.

edit: improved the language some. edit: specified what sort of evolutionary psychology I consider to be junk, and what I do not, albeit that was not the point of the article. The point of the article was to provide you with the notions to use to see what sorts of evolutionary psychology to consider junk, and what not.

"How We Decide", by Jonah Lehrer, kindle version on sale for 99 cents at amazon

3 buybuydandavis 07 March 2012 06:43AM

http://www.amazon.com/How-We-Decide-ebook/dp/B003WMAAMG/ref=sr_1_1?s=digital-text&ie=UTF8&qid=1331098417&sr=1-1

I don't know how proper this is, but I'm quite cheap and like a bargain, and I've seen Lehrer referred to a number of times here. I hadn't read Kahneman before, but bought the kindle version and read him on my phone whenever I had some wait time somewhere.

It's better than a moleskine pouch! I can have the top *thousand* books I'm reading on me at all times, and just pull one out anywhere! I never have to waste another minute of my life!

I don't like spam any more than anyone else, but I'm going to be getting it cheap, and I just want everyone else who wants it to get it cheap too. It's okay to spam people about cheap books, right? That's a family tradition.

Online education and Conscientiousness

13 gwern 24 February 2012 09:05PM

I've wondered for some time now what the effects of online education might be on gender and income inequality, specifically as online education interacts with IQ and Conscientiousness (compared with offline education). I ran into a study of a course done online and offline that found correlations with Conscientiousness, which prompted me to start writing out my thoughts: https://plus.google.com/103530621949492999968/posts/aKa3qLatwZ3

The model/argument I give (towards the bottom) is logically trivial, and the basic idea – offline classrooms remove some of the need for self-discipline/Conscientiousness, so performance is more g-loaded – seems intuitive enough that I'm sure I can't be the first person to think of it.

Does anyone have statistics or citations handy which might help in any essay I write on the topic?

[LINK] The NYT on Everyday Habits

6 Alex_Altair 18 February 2012 08:23AM

The New York Times just published this article on how companies use data mining and the psychology of habit formation to effectively target ads.

The process within our brains that creates habits is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future. Over time, this loop — cue, routine, reward; cue, routine, reward — becomes more and more automatic. The cue and reward become neurologically intertwined until a sense of craving emerges.

It has some decent depth of discussion, including an example of the author actually using the concepts to stop a bad habit. The article is based on an upcoming book by the same author titled The Power of Habit.

I haven't seen emphasis on this particular phenomenon—habits consisting of a cue, routine, and reward—on Less Wrong. Do people think it's a valid, scientifically supported phenomenon? The article gives this impression but, of course, doesn't cite specific academic work on it. It ties in easily to the System 1/System 2 theory as a System 1 process. How much of System 1 can be explained as an implementation of this cue, routine, reward process?

And most importantly, how can this fit into the procrastination equation as a tool to subvert akrasia and establish good habits? 

Let's look at each of the four factors. If you've formed a habit, it means that the reward happened consistently, which means you have high expectancy. Given that it is a reward, the value is at least positive, but probably not large. Since habits mostly work on small time scales, delay is probably very small. And maybe increased habit formation means your impulsiveness is low. Each of these effects would increase motivation. In addition, because it's part of System 1, there is little energy cost to performing the habit, like there would be with many other conscious actions.
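
For reference, the procrastination equation referred to above is Piers Steel's Motivation = (Expectancy × Value) / (Impulsiveness × Delay). A minimal sketch of the four-factor argument, with all numbers invented for illustration:

```python
# Toy sketch of the procrastination equation as commonly cited:
#   Motivation = (Expectancy * Value) / (Impulsiveness * Delay)
# All the numbers below are invented for illustration.

def motivation(expectancy, value, impulsiveness, delay):
    """Higher expectancy and value raise motivation; impulsiveness and delay lower it."""
    return (expectancy * value) / (impulsiveness * delay)

# An established habit: reliable reward (high expectancy), tiny delay, low
# impulsiveness cost, even though the reward itself is small.
habit = motivation(expectancy=0.9, value=1.0, impulsiveness=0.5, delay=0.1)

# A deliberate, distant task: a bigger payoff, but uncertain and far away.
chore = motivation(expectancy=0.5, value=3.0, impulsiveness=1.0, delay=10.0)

print(habit)  # 18.0
print(chore)  # 0.15
```

On these made-up numbers the habit wins by two orders of magnitude, matching the argument above that habits motivate mainly through high expectancy and tiny delay rather than through the size of the reward.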

Does this explanation sound legitimate, or like an argument for the bottom line?

Personally, I can tell that context is a strong cue for behavior at work, school, and home. When I go into work, I'm automatically motivated to perform well, and that motivation remains for several hours. When I go into class, I'm automatically ready to focus on difficult material, or even enthusiastically take a test. Yet when I go home, something about the context switches that off, and I can't seem to get anything done at all. It might be worth significant experimentation to find out what cues trigger both modes, and change my contexts to induce what I want.

What do you think?

Edit: this phenomenon has been covered on LW in the form of operant conditioning in posts by Yvain.

[link] 101 Fascinating Brain Blogs

-4 Curiouskid 16 February 2012 11:47PM

A pretty interesting list of psychology blogs. One of my favorite blogs (Mind Hacks) was listed (so the others on the list must be good too. Right?).  

Also,

Does anybody know of any good textbooks on applied cognitive psychology?

For memory: something that would put things like SRS in context with other things, like the reasons we forget, but in more depth than blog posts? Or do you think that getting a textbook on the subject wouldn't be worthwhile, because most of the low-hanging fruit can be grasped through blog posts?

For emotions: any good/practical introductions to CBT?


Do you think we should start up a book recommendations recurring thread?
The Personality of (great/creative) Scientists: Open and Conscientious

30 gwern 28 January 2012 08:01PM

We’ve discussed the Big Five in the past, such as the relationship of Openness to parasites & signaling or whether hallucinogens increase Openness and parasites decrease it, along with my little notes on the value of Conscientiousness. This is another entry in the topic of ‘what is Big Five good for’.

I researched the topic of how and whether Conscientiousness and Openness correlate with scientific achievement for Luke for the Intelligence Explosion paper; here is some of what I found:

continue reading »

Utopian hope versus reality

23 Mitchell_Porter 11 January 2012 12:55PM

I've seen an interesting variety of utopian hopes expressed recently. Raemon's "Ritual" sequence of posts is working to affirm the viability of LW's rationalist-immortalist utopianism, not just in the midst of an indifferent universe, but in the midst of an indifferent society. Leverage Research turn out to be social-psychology utopians, who plan to achieve their world of optimality by unleashing the best in human nature. And Russian life-extension activist Maria Konovalenko just blogged about the difficulty of getting people to adopt anti-aging research as the top priority in life, even though it's so obvious to her that it should be.

This phenomenon of utopian hope - its nature, its causes, its consequences, whether it's ever realistic, whether it ever does any good - certainly deserves attention and analysis, because it affects, and even afflicts, a lot of people, on this site and far beyond. It's a vast topic, with many dimensions. All my examples above have a futurist tinge to them - an AI singularity, and a biotech society where rejuvenation is possible, are clearly futurist concepts; and even the idea of human culture being transformed for the better by new ideas about the mind, belongs within the same broad scientific-technological current of Utopia Achieved Through Progress. But if we look at all the manifestations of utopian hope in history, and not just at those which resemble our favorites, other major categories of utopia can be observed - utopia achieved by reaching back to the conditions of a Golden Age; utopia achieved in some other reality, like an afterlife.

The most familiar form of utopia these days is the ideological social utopia, to be achieved once the world is run properly, according to the principles of some political "-ism". This type of utopia can cut across the categories I have mentioned so far; utopian communism, for example, has both futurist and golden-age elements to its thinking. The new society is to be created via new political forms and new philosophies, but the result is a restoration of human solidarity and community that existed before hierarchy and property... The student of utopian thought must also take note of religion, which until technology has been the main avenue through which humans have pursued their most transcendental hopes, like not having to die.

But I'm not setting out to study utopian thought and utopian psychology out of a neutral scholarly interest. I have been a utopian myself and I still am, if utopianism includes belief in the possibility (though not the inevitability) of something much better. And of course, the utopias that I have taken seriously are futurist utopias, like the utopia where we do away with death, and thereby also do away with a lot of other social and psychological pathologies, which are presumed to arise from the crippling futility of the universal death sentence.

However, by now, I have also lived long enough to know that my own hopes were mistaken many times over; long enough to know that sometimes the mistake was in the ideas themselves, and not just the expectation that everyone else would adopt them; and long enough to understand something of the ordinary non-utopian psychology, whose main features I would nominate as reconciliation with work and with death. Everyone experiences the frustration of having to work for a living and the quiet horror of physiological decline, but hardly anyone imagines that there might be an alternative, or rejects such a lifecycle as overall more bad than it is good.

What is the relationship between ordinary psychology and utopian psychology? First, the serious utopians should recognize that they are an extreme minority. Not only has the whole of human history gone by without utopia ever managing to happen, but the majority of people who ever lived were not utopians in the existentially revolutionary sense of thinking that the intolerable yet perennial features of the human condition might be overthrown. The confrontation with the evil aspects of life must usually have proceeded more at an emotional level - for example, terror that something might be true, and horror at the realization that it is true; a growing sense that it is impossible to escape; resignation and defeat; and thereafter a permanently diminished vitality, often compensated by achievement in the spheres of work and family.

The utopian response is typically made possible only because one imagines that there is a specific alternative to this process; and so, as ideas about alternatives are invented and circulated, it becomes easier for people to end up on the track of utopian struggle with life, rather than the track of resignation, which is why we can have enough people to form social movements and fundamentalist religions, and not just isolated weirdos. There is a continuum between full radical utopianism and very watered-down psychological phenomena which hardly deserve that name, but still have something in common - for example, a person who lives an ordinary life but draws some sustenance from the possibility of an afterlife of unspecified nature, where things might be different, and where old wrongs might be righted - but nonetheless, I would claim that the historically dominant temperament in adult human experience has been resignation to hopelessness and helplessness in ultimate matters, and an absorption in affairs where some limited achievement is possible, but which in themselves can never satisfy the utopian impulse.

The new factor in our current situation is science and technology. Our modern history offers evidence that the world really can change fundamentally, and such further explosive possibilities as artificial intelligence and rejuvenation biotechnology are considered possible for good, tough-minded, empirical reasons, not just because they offer a convenient vehicle for our hopes.

Technological utopians often exhibit frustration that their pet technologies and their favorite dreams of existential emancipation aren't being massively prioritized by society, and they don't understand why other people don't just immediately embrace the dream when they first hear about it. (Or they develop painful psychological theories of why the human race is ignoring the great hope.) So let's ask, what are the attitudes towards alleged technological emancipation that a person might adopt?

One is the utopian attitude: the belief that here, finally, one of the perennial dreams of the human race can come true. Another is denial: which is sometimes founded on bitter experience of disappointment, which teaches that the wise thing to do is not to fool yourself when another new hope comes up to you and cheerfully asserts that this time really is different. Another is to accept the possibility but deny the utopian hope. I think this is the most important interpretation to understand.

It is the one that precedent supports. History is full of new things coming to pass, but they have never yet led to utopia. So we might want to scrutinize our technological projections more closely, and see whether the utopian expectation is based on overlooking the downside. For example, let us contrast the idea of rejuvenation with the idea of immortality – not dying, ever. Taking someone who is 80 and making them biologically 20 is not the same thing as making them immortal. It just means that they won't die of aging, and that when they do die, it will be in a way befitting someone 20 years old. They'll die in an accident, or a suicide, or a crime. Incidentally, we should also note an element of psychological unrealism in the idea of never wanting to die. Forever is a long time; the whole history of the human race is about 10,000 years long. Just 10,000 years is enough to encompass all the difficulties and disappointments and permutations of outlook that have ever happened. Imagine taking the whole history of the human race into yourself; living through it personally. It's a lot to have endured.

It would be unfair to say that transhumanists as a rule are dominated by utopian thinking. Perhaps just as common is a sort of futurological bipolar disorder, in which the future looks like it will bring "utopia or oblivion", something really good or something really bad. The conservative wisdom of historical experience says that both these expectations are wrong; bad things can happen, even catastrophes, but life keeps going for someone - that is the precedent - and the expectation of total devastating extinction is just a plunge into depression as unrealistic as the utopian hope for a personal eternity; both extremes exhibiting an inflated sense of historical or cosmic self-importance. The end of you is not the end of the world, says this historical wisdom; imagining the end of the whole world is your overdramatic response to imagining the end of you - or the end of your particular civilization.

However, I think we do have some reason to suppose that this time around, the extremes are really possible. I won't go so far as to endorse the idea that (for example) intelligent life in the universe typically turns its home galaxy into one giant mass of computers; that really does look like a case of taking the concept and technology with which our current society is obsessed, and projecting it onto the cosmic unknown. But consider just the humbler ideas of transhumanity, posthumanity, and a genuine end to the human-dominated era on Earth, whether in extinction or in transformation: the real and verifiable developments of science and technology, and the further scientific and technological developments which they portend, are enough to justify such a radical, if somewhat nebulous, concept of the possible future. And again, while I won't simply endorse the view that of course we shall get to be as gods, and shall get to feel as good as gods might feel, it seems reasonable to suppose that there are possible futures which are genuinely and comprehensively better than anything that history has to offer – as well as futures that are just bizarrely altered, and futures which are empty and dead.

So that is my limited endorsement of utopianism: In principle, there might be a utopianism which is justified. But in practice, what we have are people getting high on hope, emerging fanaticisms, personal dysfunctionality in the present, all the things that come as no surprise to a cynical student of history. The one outcome that would be most surprising to a cynic is for a genuine utopia to arrive. I'm willing to say that this is possible, but I'll also say that almost any existing reference to a better world to come, and any psychological state or social movement which draws sublime happiness from the contemplation of an expected future, has something unrealistic about it.

In this regard, utopian hope is almost always an indicator of something wrong. It can just be naivete, especially in a young person. As I have mentioned, even non-utopian psychology inevitably has those terrible moments when it learns for the first time about the limits of life as we know it. If in your own life you start to enter that territory for the first time, without having been told from an early age that real life is fundamentally limited and frustrating, and perhaps with a few vague promises of hope, absorbed from diverse sources, to sustain you, then it's easy to see your hopes as, not utopian hopes, but simply a hope that life can be worth living. I think this is the experience of many young idealists in "environmental" and "social justice" movements; their culture has always implied to them that life should be a certain way, without also conveying to them that it has never once been that way in reality. The suffering of transhumanist idealists and other radical-futurist idealists, when they begin to run aground on the disjunction between their private subcultural expectations and those of the culture at large, has a lot in common with the suffering of young people whose ideals are more conventionally recognizable; and it is entirely conceivable that for some generation now coming up, rebellion against biological human limitations will be what rebellion against social limitations has been for preceding generations.

I should also mention, in passing, the option of a non-utopian transhumanism, something that is far more common than my discussion so far would suggest. This is the choice of people who expect, not utopia, but simply an open future. Many cryonicists would be like this. Sure, they expect the world of tomorrow to be a great place, good enough that they want to get there; but they don't think of it as an eternal paradise of wish-fulfilment that may or may not be achieved, depending on heroic actions in the present. This is simply the familiar non-utopian view that life is overall worth living, combined with the belief that life can now be lived for much longer periods; the future not as utopia, but as more history, history that hasn't happened yet, and which one might get to personally experience. If I wanted to start a movement in favor of rejuvenation and longevity, this is the outlook I would be promoting, not the idea that abolishing death will cure all evils (and not even the idea that death as such can be abolished; rejuvenation is not immortality, it's just more good life). In the spectrum of future possibilities, it's only the issue of artificial intelligence which lends some plausibility to extreme bipolar futurism, the idea that the future can be very good (by human standards) or very bad (by human standards), depending on what sort of utility functions govern the decision-making of transhuman intelligence.

That's all I have to say for now. It would be unrealistic to think we can completely avoid the pathologies associated with utopian hope, but perhaps we can moderate them, if we pay attention to the psychology involved.

Inverse p-zombies: the other direction in the Hard Problem of Consciousness

17 gwern 18 December 2011 09:32PM

402. "Nothing is so certain as that I possess consciousness." In that case, why shouldn't I let the matter rest? This certainty is like a mighty force whose point of application does not move, and so no work is accomplished by it.

403. Remember: most people say one feels nothing under anaesthetic. But some say: It could be that one feels, and simply forgets it completely.

--Wittgenstein, Zettel (1929-1948)

I offer for LW's consideration the interesting 2008 paper "Inverse zombies, anesthesia awareness, and the hard problem of unconsciousness" (Mashour & LaRock; NCBI); the abstract:

Philosophical (p-) zombies are constructs that possess all of the behavioral features and responses of a sentient human being, yet are not conscious. P-zombies are intimately linked to the hard problem of consciousness and have been invoked as arguments against physicalist approaches. But what if we were to invert the characteristics of p-zombies? Such an inverse (i-) zombie would possess all of the behavioral features and responses of an insensate being, yet would nonetheless be conscious. While p-zombies are logically possible but naturally improbable, an approximation of i-zombies actually exists: individuals experiencing what is referred to as "anesthesia awareness." Patients under general anesthesia may be intubated (preventing speech), paralyzed (preventing movement), and narcotized (minimizing response to nociceptive stimuli). Thus, they appear--and typically are--unconscious. In 1-2 cases/1000, however, patients may be aware of intraoperative events, sometimes without any objective indices. Furthermore, a much higher percentage of patients (22% in a recent study) may have the subjective experience of dreaming during general anesthesia. P-zombies confront us with the hard problem of consciousness--how do we explain the presence of qualia? I-zombies present a more practical problem--how do we detect the presence of qualia? The current investigation compares p-zombies to i-zombies and explores the "hard problem" of unconsciousness with a focus on anesthesia awareness.

continue reading »

A case study in fooling oneself

-2 Mitchell_Porter 15 December 2011 05:25AM

Note: This post assumes that the Oxford version of Many Worlds is wrong, and speculates as to why this isn't obvious. For a discussion of the hypothesis itself, see Problems of the Deutsch-Wallace version of Many Worlds.

smk asks how many worlds are produced in a quantum process where the outcomes have unequal probabilities; Emile says there's no exact answer, just like there's no exact answer for how many ink blots are in the messy picture; Tetronian says this analogy is a great way to demonstrate what a "wrong question" is; Emile has (at this writing) 9 upvotes, and Tetronian has 7.

My thesis is that Emile has instead provided an example of how to dismiss a question and thereby fool oneself; Tetronian provides an example of treating an epistemically destructive technique of dismissal as epistemically virtuous and fruitful; and the upvotes show that this isn't just their problem. [edit: Emile and Tetronian respond.]

I am as tired as anyone of the debate over Many Worlds. I don't expect the general climate of opinion on this site to change except as a result of new intellectual developments in the larger world of physics and philosophy of physics, which is where the question will be decided anyway. But the mission of Less Wrong is supposed to be the refinement of rationality, and so perhaps this "case study" is of interest, not just as another opportunity to argue over the interpretation of quantum mechanics, but as an opportunity to dissect a little bit of irrationality that is not only playing out here and now, but which evidently has a base of support.

The question is not just, what's wrong with the argument, but also, how did it get that base of support? How was a situation created where one person says something irrational (or foolish, or however the problem is best understood), and a lot of other people nod in agreement and say, that's an excellent example of how to think?

On this occasion, my quarrel is not with the Many Worlds interpretation as such; it is with the version of Many Worlds which says there's no actual number of worlds. Elsewhere in the thread, someone says there are uncountably many worlds, and someone else says there are two worlds. At least those are meaningful answers (although the advocate of "two worlds" as the answer, then goes on to say that one world is "stronger" than the other, which is meaningless).

But the proposition that there is no definite number of worlds, is as foolish and self-contradictory as any of those other contortions from the history of thought that rationalists and advocates of common sense like to mock or boggle at. At times I have wondered how to place Less Wrong in the history of thought; well, this is one way to do it - it can have its own chapter in the history of intellectual folly; it can be known by its mistakes.

Then again, this "mistake" is not original to Less Wrong. It appears to be one of the defining ideas of the Oxford-based approach to Many Worlds associated with David Deutsch and David Wallace; the other defining idea being the proposal to derive probabilities from rationality, rather than vice versa. (I refer to the attempt to derive the Born rule from arguments about how to behave rationally in the multiverse.) The Oxford version of MWI seems to be very popular among thoughtful non-physicist advocates of MWI - even though I would regard both its defining ideas as nonsense - and it may be that its ideas get a pass here, partly because of their social status. That is, an important faction of LW opinion believes that Many Worlds is the explanation of quantum mechanics, and the Oxford school of MWI has high status and high visibility within the world of MWI advocacy, and so its ideas will receive approbation without much examination or even much understanding, because of the social and psychological mechanisms which incline people to agree with, defend, and laud their favorite authorities, even if they don't really understand what these authorities are saying or why they are saying it.

However, it is undoubtedly the case that many of the LW readers who believe there's no definite number of worlds, believe this because the idea genuinely makes sense to them. They aren't just stringing together words whose meaning isn't known, like a Taliban who recites the Quran without knowing a word of Arabic; they've actually thought about this themselves; they have gone through some subjective process as a result of which they have consciously adopted this opinion. So from the perspective of analyzing how it is that people come to hold absurd-sounding views, this should be good news. It means that we're dealing with a genuine failure to reason properly, as opposed to a simple matter of reciting slogans or affirming allegiance to a view on the basis of something other than thought.

At a guess, the thought process involved is very simple. These people have thought about the wavefunctions that appear in quantum mechanics, at whatever level of technical detail they can muster; they have decided that the components or substructures of these wavefunctions which might be identified as "worlds" or "branches" are clearly approximate entities whose definition is somewhat arbitrary or subject to convention; and so they have concluded that there's no definite number of worlds in the wavefunction. And the failure in their thinking occurs when they don't take the next step and say, is this at all consistent with reality? That is, if a quantum world is something whose existence is fuzzy and which doesn't even have a definite multiplicity - that is, we can't even say if there's one, two, or many of them - if those are the properties of a quantum world, then is it possible for the real world to be one of those? It's the failure to ask that last question, and really think about it, which must be the oversight allowing the nonsense-doctrine of "no definite number of worlds" to gain a foothold in the minds of otherwise rational people.

If this diagnosis is correct, then at some level it's a case of "treating the map as the territory" syndrome. A particular conception of the quantum-mechanical wavefunction is providing the "map" of reality, and the individual thinker is perhaps making correct statements about what's on their map, but they are failing to check the properties of the map against the properties of the territory. In this case, the property of reality that falsifies the map is, the fact that it definitely exists, or perhaps the corollary of that fact, that something which definitely exists definitely exists at least once, and therefore exists with a definite, objective multiplicity.

Trying to go further in the diagnosis, I can identify a few cognitive tendencies which may be contributing. First is the phenomenon of bundled assumptions which have never been made distinct and questioned separately. I suppose that in a few people's heads, there's a rapid movement from "science (or materialism) is correct" to "quantum mechanics is correct" to "Many Worlds is correct" to "the Oxford school of MWI is correct". If you are used to encountering all of those ideas together, it may take a while to realize that they are not linked out of logical necessity, but just contingently, by the narrowness of your own experience.

Second, it may seem that "no definite number of worlds" makes sense to an individual, because when they test their own worldview for semantic coherence, logical consistency, or empirical adequacy, it seems to pass. In the case of "no-collapse" or "no-splitting" versions of Many Worlds, it seems that it often passes the subjective making-sense test, because the individual is actually relying on ingredients borrowed from the Copenhagen interpretation. A semi-technical example would be the coefficients of a reduced density matrix. In the Copenhagen interpretation, they are probabilities. Because they have the mathematical attributes of probabilities (by this I just mean that they lie between 0 and 1), and because they can be obtained by strictly mathematical manipulations of the quantities composing the wavefunction, Many Worlds advocates tend to treat these quantities as inherently being probabilities, and use their "existence" as a way to obtain the Born probability rule from the ontology of "wavefunction yes, wavefunction collapse no". But the fact that something is a real number between 0 and 1 doesn't yet explain how it manages to be a probability. In particular, I would maintain that if you have a multiverse theory, in which all possibilities are actual, then a probability must refer to a frequency. The probability of an event in the multiverse is simply how often it occurs in the multiverse. And clearly, just having the number 0.5 associated with a particular multiverse branch is not yet the same thing as showing that the events in that branch occur half the time.
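To make the semi-technical point concrete (this is an illustration of the mathematics only, not an argument for either interpretation): for a hypothetical two-qubit state, the diagonal entries of one qubit's reduced density matrix are real numbers in [0, 1], obtained by pure linear algebra from the wavefunction - the very numbers that the Copenhagen interpretation reads off as outcome probabilities.

```python
import numpy as np

# Hypothetical entangled state: sqrt(0.3)|00> + sqrt(0.7)|11>
psi = np.sqrt(0.3) * np.kron([1, 0], [1, 0]) + np.sqrt(0.7) * np.kron([0, 1], [0, 1])

rho = np.outer(psi, psi.conj())  # density matrix of the pair (4x4)

# Partial trace over qubit B: reshape to indices (a, b, a', b') and
# trace out the b/b' axes, leaving the 2x2 reduced matrix for qubit A.
rho_a = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.diag(rho_a).real)  # approximately [0.3, 0.7]
```

Nothing in this calculation by itself says whether 0.3 and 0.7 are probabilities, frequencies of worlds, or mere coefficients - which is exactly the gap the paragraph above is pointing at.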

I don't have a good name for this phenomenon, but we could call it "borrowed support", in which a belief system receives support from considerations which aren't legitimately its own to claim. (Ayn Rand apparently talked about a similar notion of "borrowed concepts".)

Third, there is a possibility among people who have a capacity for highly abstract thought, to adopt an ideology, ontology, or "theory of everything" which is only expressed in those abstract terms, and to then treat that theory as the whole of reality, in a way that reifies the abstractions. This is a highly specific form of treating the map as the territory, peculiar to abstract thinkers. When someone says that reality is made of numbers, or made of computations, this is at work. In the case at hand, we're talking about a theory of physics, but the ontology of that theory is incompatible with the definiteness of one's own existence. My guess is that the main psychological factor at work here is intoxication with the feeling that one understands reality totally and in its essence. The universe has bowed to the imperial ego; one may not literally direct the stars in their courses, but one has known the essence of things. Combine that intoxication, with "borrowed support" and with the simple failure to think hard enough about where on the map the imperial ego itself might be located, and maybe you have a comprehensive explanation of how people manage to believe theories of reality which are flatly inconsistent with the most basic features of subjective experience.

I should also say something about Emile's example of the ink blots. I find it rather superficial to just say "there's no definite number of blots". To say that the number of blots depends on definition is a lot closer to being true, but that undermines the argument, because that opens the possibility that there is a right definition of "world", and many wrong definitions, and that the true number of worlds is just the number of worlds according to the right definition.

Emile's picture can be used for the opposite purpose. All we have to do is to scrutinize, more closely, what it actually is. It's a JPEG that is 314 pixels by 410 pixels in size. Each of those pixels will have an exact color coding. So clearly we can be entirely objective in the way we approach this question; all we have to do is be precise in our concepts, and engage with the genuine details of the object under discussion. Presumably the image is a scan of a physical object, but even in that case, we can be precise - it's made of atoms, they are particular atoms, we can make objective distinctions on the basis of contiguity and bonding between these atoms, and so the question will have an objective answer, if we bother to be sufficiently precise. The same goes for "worlds" or "branches" in a wavefunction. And the truly pernicious thing about this version of Many Worlds is that it prevents such inquiry. The ideology that tolerates vagueness about worlds serves to protect the proposed ontology from necessary scrutiny.
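The point about precision can be made mechanical. Here is a minimal sketch (the image and threshold are hypothetical stand-ins for Emile's scanned picture): once you fix a precise definition of "blot" - say, a 4-connected region of pixels darker than a chosen threshold - the count is entirely objective. Different definitions give different counts, but each definition gives exactly one.

```python
def count_blots(pixels, threshold=128):
    """Count 4-connected regions of dark pixels in a grayscale image.

    pixels: 2D list of grayscale values (0 = black, 255 = white).
    A "blot" is defined as a maximal 4-connected set of pixels with
    value below `threshold`. The definition is a convention; the count
    it yields is not.
    """
    rows, cols = len(pixels), len(pixels[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if pixels[r][c] < threshold and not seen[r][c]:
                count += 1  # new blot found; flood-fill to mark all of it
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and pixels[y][x] < threshold and not seen[y][x]):
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count
```

Run on any pixel grid, this returns one definite integer; the arbitrariness lives entirely in the choice of threshold and connectivity, not in the counting.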

The same may be said, on a broader scale, of the practice of "dissolving a wrong question". That is a gambit which should be used sparingly and cautiously, because it easily serves to instead justify the dismissal of a legitimate question. A community trained to dismiss questions may never even notice the gaping holes in its belief system, because the lines of inquiry which lead towards those holes are already dismissed as invalid, undefined, unnecessary. smk came to this topic fresh, and without a head cluttered with ideas about what questions are legitimate and what questions are illegitimate, and as a result managed to ask something which more knowledgeable people had already prematurely dismissed from their own minds.

Studying Psychology - Which path should I take to best help our cause? Suggestions please.

4 Friendly-HI 23 November 2011 07:52PM

If you solve the problem of human-friendly self-improving AI, you have indirectly solved every problem. After spending a decent amount of time on LW, I have been convinced of this premise and now I would like to devote my life to that cause.

 

Currently I'm living in Germany and studying psychology in my first semester. The university I'm studying at has a great reputation (even internationally, if I can believe the rankings) for the quality of its scientific psychology research, and it ranks second or third on various criteria of research quality out of the roughly 55 German universities where one can study psychology. The five semesters of statistics in my Bachelor of Science might also hint at that.

I want to finish my Bachelor of Science and then move on to my Master, so in about 5 years I might enter my "phase of actual productivity" in the field. I'm flirting with cognitive neuroscience, but haven't made my decision yet - however, I am pretty sure that I want to move towards research and a scientific career rather than one in a therapeutic field.

Before discovering lesswrong my most dominant personal interest in psychology has been in the field of "positive psychology" or plainly speaking the "what makes humans happy" field. This interest hasn't really changed through the discovery of LW, as much as it has evolved into: "how can we distill what makes human life worthwhile and put it into terms a machine could execute for our benefit"?

 

As the title suggests, I'm writing all this because I want some creative input from you in order to expand my sense of possibilities concerning how I can help the development of friendly AI from the field of psychology most effectively.

 

To give you a better idea of what might fit me, a bit more background-info about myself and my abilities seems in order:

I like talking and writing a lot; mathematically I am a loser (whether due to early disgust or incompetence I can't really tell). I value and enjoy human contact and have steadily moved from being an introvert towards being an extrovert, through several cognitive developments I can only speculate on. I would probably rank easily in the middle of any extroversion scale nowadays. My IQ seems to be around 134, if one can trust the "International High IQ Society" (www.highiqsociety.org), but as mentioned my abilities probably lie more in the linguistic and, to some extent, analytic sphere than the mathematical. I understand Bayes' Theorem but haven't read the quantum mechanics sequence, and many "higher" concepts here are still above my current level of comprehension - although, to be fair, I haven't tried all that hard yet.

I have programmed some primitive HTML and CSS once and didn't really like it. From that experience and my mathematical inability I take away that programming wouldn't be the way I could contribute most efficiently towards friendly-AI research. It is not one of my strengths, or at least it would take a lot of time to develop, time which would probably be better spent elsewhere. Also I quite surely wouldn't enjoy it as much as work in the psychological realm with humans.

My English is almost indistinguishable from that of a native speaker and I largely lack that (rightfully) despised and annoying German accent, so I could definitely see myself giving competent talks in English.

Like many of you I have serious problems with akrasia (regardless of whether that's a rationalist phenomenon or whether we are just more aware of it and tend to do types of work that tempt it more readily). Before I learned how to effectively combat it (thank you Piers Steel!), I had plenty of motivation to get rid of it and sunk insane efforts into overcoming it, although ultimately it was largely an unsuccessful undertaking due to half-assed pop-science and the lack of real insight into what causes procrastination and how it actually functions. Now that I know how to fix procrastination (or rather, now that I know that it can't be fixed so much as managed, in a fashion similar to a drug addiction), my motivation to overcome it is almost gone and I feel myself slacking. Also, my high certainty that there is no such thing as "free will" may have played a serious part in my procrastination habits (interestingly, I recall at least two papers showing this correlation). In a nutshell: procrastination is a problem I need to address, since it is definitely the Achilles' heel of my performance and is absolutely crippling my potential. I probably rank middle-to-high on the impulsiveness (and thus also the procrastination) scale.

That should be an adequate characterization of myself for now.

 

I am absolutely open to suggestions that are not related to the neuroscience of the "what makes humans happy, and how do I distill those goals and feelings into something a machine could work with" field, but currently I am definitely flirting with that idea, even though I have absolutely no clue how this area of research could be sufficiently financed a decade from now, or how it could spit out findings precise enough to benefit the creation of FAI. Yet maybe that's just a lack of imagination.

Trying to help set up and evolve a rationalist community in Germany would also be a decent task, but compared to specific research that actually directly aids our goals... I somehow feel it is less than what I could reasonably achieve if I really set my mind to it.

 

So tell me, where does a German psychologist go nowadays to achieve the biggest possible positive impact in the field of friendly AI?

[LINK] Fraud Case Seen as a Red Flag for Psychology Research

11 [deleted] 03 November 2011 08:12PM

An article in the NYT about everyone's favourite messy science - you know, the one we sometimes rely on for a throwaway line as we pontificate wisely about biases. ;)

A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found. Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.

The psychologist, Diederik Stapel, of Tilburg University, committed academic fraud in “several dozen” published papers, many accepted in respected journals and reported in the news media, according to a report released on Monday by the three Dutch institutions where he has worked ...

In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged. ...

In a prolific career, Dr. Stapel published papers on the effect of power on hypocrisy, on racial stereotyping and on how advertisements affect how people view themselves. Many of his findings appeared in newspapers around the world, including The New York Times, which reported in December on his study about advertising and identity.

In a statement posted Monday on Tilburg University’s Web site, Dr. Stapel apologized to his colleagues. “I have failed as a scientist and researcher,” it read, in part. “I feel ashamed for it and have great regret.” ...

Dr. Stapel has published about 150 papers, many of which, like the advertising study, seem devised to make a splash in the media. The study published in Science this year claimed that white people became more likely to “stereotype and discriminate” against black people when they were in a messy environment, versus an organized one. Another study, published in 2009, claimed that people judged job applicants as more competent if they had a male voice. The investigating committee did not post a list of papers that it had found fraudulent. ...

In a survey of more than 2,000 American psychologists scheduled to be published this year, Leslie John of Harvard Business School and two colleagues found that 70 percent had acknowledged, anonymously, to cutting some corners in reporting data. About a third said they had reported an unexpected finding as predicted from the start, and about 1 percent admitted to falsifying data.

Also common is a self-serving statistical sloppiness. In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.

...

found that the more reluctant that scientists were to share their data, the more likely that evidence contradicted their reported findings.

...

“We know the general tendency of humans to draw the conclusions they want to draw — there’s a different threshold,” said Joseph P. Simmons, a psychologist at the University of Pennsylvania’s Wharton School. “With findings we want to see, we ask, ‘Can I believe this?’ With those we don’t, we ask, ‘Must I believe this?’

But reviewers working for psychology journals rarely take this into account in any rigorous way. Neither do they typically ask to see the original data. While many psychologists shade and spin, Dr. Stapel went ahead and drew any conclusion he wanted.

In any case, this was brought to my attention by a recent blog entry on iSteve.

Telling people what they want to hear

Steve Sailer thinks that what gets distorted the most in this way is a matter of supply and demand. Which is obviously good signalling for him, but is also eminently plausible. One can't help but wonder, especially about the interesting connections between some of the "findings" of psychology from a certain period and place, and the obsessions and neuroses (heh) specific to that society.

Myers-Briggs / MLPTI personality-type conversion chart

3 PhilGoetz 01 November 2011 08:08PM

While psychology wonks have been going on for years about the statistical rigor and calibration of the Big Five, most people have just carried on using the Myers-Briggs type indicator (MBTI), which may not be statistical or scientific but is able to categorize people without insulting them.

A serious critique of the MBTI is the Myers-Briggs entropy distribution paradox (or, "Why are there 16 personality types when everyone I know is an INTJ?")  A new personality test which has been gaining ground recently, the MLPTI, does not break up the INTJ into multiple categories; but does reduce the number of bothersome non-INTJ personality types and thus ameliorates the entropy paradox.  For those not yet familiar with it, here is a rough translation between MLPTI and MBTI types.

| MLP type | Traits | M-B types |
|----------|--------|-----------|
| TS | conscientious, introverted, self-conscious | INTJ |
| RD | impulsive, activity-oriented, high stimulation threshold | ESFJ, ESFP |
| PP | creative, un-self-conscious | ENFP |
| AJ | pragmatic, disciplined, outcome-oriented | ISTJ |
| FS | introverted, empathetic, anxious | ISFJ, INFJ |
| R | extroverted, creative, status-seeking | ENFJ |

 

The loss of half of the MBTI categories is not a serious problem, as demonstrated by the fact that you can't even name the ones that were left out without going back and looking.  Seriously, when was the last time you met an ENTP?

Review of Kahneman, 'Thinking, Fast and Slow' (2011)

28 lukeprog 28 October 2011 01:59AM

Thinking, Fast and Slow is Kahneman's first book for a general audience, and a summary of his far-reaching and important work. Over the course of about 400 pages (this does not include the appendices, notes, or index), Kahneman explains his current views on: System 1 vs. System 2 thinking, heuristics and biases, overconfidence, decision making under uncertainty, the differences between the experiencing self and remembering self, and the implications of combining all this knowledge.

In short: If you care about improving your thinking and decision making, and thus you care about the cognitive science of rationality, then you are likely to enjoy — and benefit from — this book. And if you know people who won't read the Core Sequences, getting them to read Thinking, Fast and Slow will take them 30% of the way.

Kahneman leaps deftly between demonstration ("try this word problem, notice what your brain does"), theory, and research stories. He covers dozens of issues likely to be familiar to veteran LWers, and perhaps a dozen more that have never been discussed on Less Wrong: availability cascades, causal stereotyping, the illusion of validity, the material on expert intuition from chapter 22, duration neglect, the peak-end effect, affective forecasting and "miswanting," and more.

Each chapter ends with snippets of fictional dialogue, showing what it would look like to use the concepts introduced in that chapter in everyday speech. What is remarkable is how much these snippets sound like things I hear in daily conversation at Singularity Institute. For example:

  • "What came quickly to my mind was an intuition from System 1. I’ll have to start over and search my memory deliberately."
  • "She knows nothing about this person’s management skills. All she is going by is the halo effect from a good presentation."
  • "Do we still remember the question we are trying to answer? Or have we substituted an easier one?"
  • "This start-up looks as if it could not fail, but the base rate of success in the industry is extremely low. How do we know this case is different?"
  • "Let's reframe the problem by changing the reference point. Imagine we did not own it; how much would we think it is worth?"

Other dialogue snippets from Kahneman's book express points considered so obvious within Singularity Institute that similar sentences are often half-spoken before somebody interrupts and moves on, because everyone in the room already knows the rest of the sentence, and everybody knows that everybody else knows it:

  • "They were primed to find flaws, and this is exactly what they found."
  • "He underestimates the risks of indoor pollution because there are few media stories on them. That’s an availability effect. He should look at the statistics."
  • "The mistake appears obvious, but it is just hindsight. You could not have known in advance."
  • "He's taking an inside view. He should forget about his own case and look for what happened in other cases."
  • "He weighs losses about twice as much as gains, which is normal."

Other dialogue snippets from the book are even more obvious within Singularity Institute, and they can be communicated merely by raising an eyebrow at what someone has said:

  • "This is your System 1 talking. Slow down and let your System 2 take control."
  • "The sample of observations is too small to make any inferences. Let’s not follow the law of small numbers."

In the final chapter, Kahneman reflects on the good news that his and his colleagues' work is having an effect at the policy level. Partly as a result of Nudge: Improving Decisions about Health, Wealth, and Happiness, the book Richard Thaler co-authored with Cass Sunstein, Sunstein was invited by President Obama to be the administrator of the Office of Information and Regulatory Affairs. From that post Sunstein has successfully implemented many new policies that treat humans as humans instead of as members of Homo economicus:

...applications that have been implemented [by Sunstein] include automatic enrollment in health insurance, a new version of the dietary guidelines that replaces the incomprehensible Food Pyramid with the powerful image of a Food Plate loaded with a balanced diet, and a rule formulated by the USDA that permits the inclusion of messages such as “90% fat-free” on the label of meat products, provided that the statement “10% fat” is also displayed “contiguous to, in lettering of the same color, size, and type as, and on the same color background as, the statement of lean percentage.”

The British government has also responded by forming a special unit dedicated to applying decision science to successful policy-making. Officially it is called the Behavioural Insights Team, but internally people just call it the Nudge Unit.

Overcoming the Curse of Knowledge

42 JesseGalef 18 October 2011 05:39PM

[crossposted at Measure of Doubt]

What is the Curse of Knowledge, and how does it apply to science education, persuasion, and communication? No, it's not a reference to the Garden of Eden story. I'm referring to a particular psychological phenomenon that can make our messages backfire if we're not careful.

Communication isn't a solo activity; it involves both you and the audience. Writing a diary entry is a great way to sort out thoughts, but if you want to be informative and persuasive to others, you need to figure out what they'll understand and be persuaded by. A common habit is to use ourselves as a mental model - assuming that everyone else will laugh at what we find funny, agree with what we find convincing, and interpret words the way we use them. The model works to an extent - especially with people similar to us - but other times our efforts fall flat. You can present the best argument you've ever heard, only to have it fall on dumb - sorry, deaf - ears.

That's not necessarily your fault - maybe they're just dense! Maybe the argument is brilliant! But if we want to communicate successfully, pointing fingers and assigning blame is irrelevant. What matters is getting our point across, and we can't do it if we're stuck in our head, unable to see things from our audience's perspective. We need to figure out what words will work.

Unfortunately, that's where the Curse of Knowledge comes in. In 1990, Elizabeth Newton did a fascinating psychology experiment: She paired participants into teams of two: one tapper and one listener. The tappers picked one of 25 well-known songs and would tap out the rhythm on a table. Their partner - the designated listener - was asked to guess the song. How do you think they did?

Not well. Of the 120 songs tapped out on the table, the listeners only guessed 3 of them correctly - a measly 2.5 percent. But get this: before the listeners gave their answer, the tappers were asked to predict how likely their partner was to get it right. Their guess? Tappers thought their partners would get the song 50 percent of the time. You know, only overconfident by a factor of 20. What made the tappers so far off?

They lost perspective because they were "cursed" with the additional knowledge of the song title. Chip and Dan Heath use the story in their book Made to Stick to introduce the term:

 

"The problem is that tappers have been given knowledge (the song title) that makes it impossible for them to imagine what it's like to lack that knowledge. When they're tapping, they can't imagine what it's like for the listeners to hear isolated taps rather than a song. This is the Curse of Knowledge. Once we know something, we find it hard to imagine what it was like not to know it. Our knowledge has "cursed" us. And it becomes difficult or us to share our knowledge with others, because we can't readily re-create our listeners' state of mind."

 

So it goes with communicating complex information. Because we have all the background knowledge and understanding, we're overconfident that what we're saying is clear to everyone else. WE know what we mean! Why don't they get it? It's tough to remember that other people won't make the same inferences, have the same word-meaning connections, or share our associations.

It's particularly important in science education. The more time a person spends in a field, the more the field's obscure language becomes second nature. Without special attention, audiences might not understand the words being used - or worse yet, they might get the wrong impression.

Over at the American Geophysical Union blog, Callan Bentley gives a fantastic list of Terms that have different meanings for scientists and the public.

What great examples! Even though the scientific terms are technically correct in context, they're obviously the wrong ones to use when talking to the public about climate change. An inattentive scientist could know all the material but leave the audience walking away with the wrong message.

We need to spend the effort to phrase ideas in a way the audience will understand. Is that the same as "dumbing down" a message? After all, complicated ideas require complicated words and nuanced answers, right? Well, no. A real expert on a topic can give a simple distillation of material, identifying the core of the issue. Bentley did an outstanding job rephrasing technical, scientific terms in a way that conveys the intended message to the public.

That's not dumbing things down, it's showing a mastery of the concepts. And he was able to do it by overcoming the "curse of knowledge," seeing the issue from other people's perspective. Kudos to him - it's an essential part of science education, and something I really admire.

P.S. - By the way, I chose that image for a reason: I bet once you see the baby in the tree you won’t be able to ‘unsee’ it. (image via Richard Wiseman)

Weak supporting evidence can undermine belief

11 Lightwave 29 September 2011 10:11AM

Article: Weak supporting evidence can undermine belief in an outcome

Defying logic, people given weak evidence can regard predictions supported by that evidence as less likely than if they aren’t given the evidence at all.

...

Consider the following statement: “Widespread use of hybrid and electric cars could reduce worldwide carbon emissions. One bill that has passed the Senate provides a $250 tax credit for purchasing a hybrid or electric car. How likely is it that at least one-fifth of the U.S. car fleet will be hybrid or electric in 2025?”

That middle sentence is the weak evidence. People presented with the entire statement (or similar statements with the same three-sentence structure but on different topics) gave lower answers to the final question than people who read the statement without the middle sentence. They did so even though other people, shown the middle sentence in isolation, rated it as positive evidence for, in this case, higher adoption of hybrid and electric cars.

 

Paper: When good evidence goes bad: The weak evidence effect in judgment and decision-making

Abstract:

An indispensable principle of rational thought is that positive evidence should increase belief. In this paper, we demonstrate that people routinely violate this principle when predicting an outcome from a weak cause. In Experiment 1, participants given weak positive evidence judged outcomes of public policy initiatives to be less likely than participants given no evidence, even though the evidence was separately judged to be supportive. Experiment 2 ruled out a pragmatic explanation of the result, that the weak evidence implies the absence of stronger evidence. In Experiment 3, weak positive evidence made people less likely to gamble on the outcome of the 2010 United States mid-term Congressional election. Experiments 4 and 5 replicated these findings with everyday causal scenarios. We argue that this "weak evidence effect" arises because people focus disproportionately on the mentioned weak cause and fail to think about alternative causes.
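For contrast, here is the normative picture as a toy Bayesian update (the prior and likelihood ratios are my own illustrative numbers, not the paper's): any supportive evidence, i.e. evidence with a likelihood ratio above 1, must raise the probability of the outcome, no matter how weak it is.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' theorem in odds form: evidence with likelihood ratio
    P(evidence | outcome) / P(evidence | not outcome) > 1 always raises belief."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.10                    # e.g. P(one-fifth of the fleet is hybrid/electric by 2025)
weak = posterior(prior, 1.2)    # a weakly supportive tax credit
strong = posterior(prior, 5.0)  # strongly supportive evidence

print(round(weak, 3), round(strong, 3))  # 0.118 0.357
assert prior < weak < strong    # even weak evidence should increase belief
```

The weak evidence effect is people reporting a final probability below the 0.10 prior, which no likelihood ratio greater than 1 can justify.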

[Poll] Who looks better in your eyes?

6 [deleted] 25 August 2011 11:29AM

This is a thread where I'm trying to figure out a few things about signalling on LessWrong, and I need some information, so please answer the poll immediately after reading about the two individuals. The two individuals:


A. Sees that an interpretation of reality shared by others is not correct, but tries to pretend otherwise for personal gain and/or safety.

B. Fails to see that an interpretation of reality shared by others is flawed. He is therefore perfectly honest in sharing that interpretation of reality with others. The reward regime for outward behaviour is the same as with A.

 

To add a trivial inconvenience that matches the inconvenience of answering the poll before reading on, my comments on what I think the two individuals signal, what the trade-off is, and what I speculate the results might be here versus the general population are behind this link.

Akrasic Reasoning

-4 MatthewBaker 05 August 2011 08:22PM

This post is in a constant state of revision, similar to this post. This is mainly because I do not have a beta reader and the post is based on many personal experiences that are unclear at times.

 

This subject has been touched on many times throughout LessWrong, because Akrasia is the most dangerous foe of any true follower of Rationality. When you know you could be amazing but find yourself unable to change due to the havoc that feelings can play with your thoughts, you feel helpless, and I want to help you surpass that. I am beginning a journey to fight Akrasia directly in all its forms; in the past, such journeys have been abandoned without much progress. In this mini-sequence of posts I plan to document my fight to push past the depressing weight of Akrasia. As a tool to keep me on the path, I will also provide anti-Akrasia reports on my progress with different techniques, which fellow LessWrongians can look back on and draw strength from in times of despair and laziness.

 

My name is Matthew Baker and I want to save the world.

I think most people share the feeling that the world should be saved, and that only true sociopaths can discount the value of all sentient life. The problem is that the majority of people aren't able to defeat their innate Akrasic reasoning, ugh fields, and other factors that prevent them from acting in a way that aligns with their beliefs. I think that if you believe in something and wish to be more rational about the world, you should either push your beliefs toward the current state of reality or push reality toward the current state of your beliefs.

When I was younger and sought something I could devote effort to that would change the world for the better, I was quite disillusioned by the fact that nearly every cause relied on its adherents' innate biases to deal with the problems facing it. From political struggles to moral tribulations, humanity is very good at ignoring things that don't coincide with its worldview. I always sought to surpass that, but for a long time I failed to find anything to believe in that coincided with reality. Now that my skepticism is satisfied, I have to look logically at what is preventing me from promoting my beliefs. Akrasia is the most dangerous foe of any true follower of rationality. I've personally experienced Akrasia as the feeling when you know you could be amazing but find yourself unable to change due to the havoc that feelings can play with your thoughts. I am beginning a journey to fight Akrasia directly in all its forms. I've attempted this in the past without making much progress; I'm hoping a different approach will help me succeed (or at least make new and different mistakes). In this mini-sequence of posts I plan to document my fight to push past the depressing weight of Akrasia. As a tool to keep me on the path, I will also provide anti-Akrasia reports on my progress with different techniques.

My goals for this quest are varied yet connected. I don't intend to take them all on at once, but instead to phase them in over the upcoming month and see if I can find the limit of my ability to avoid wasting time.

  1. My goal is to make myself more fit and transition to eating healthier food. Right now I'm fairly skinny and I want to build some muscle to match my height (6'1"), enough that I don't have trouble picking things up and carrying them without much outward signalling of effort. I'm not looking to become a bodybuilder or anything; I just want to optimize the vessel carrying my consciousness with better food and habits.

  2. My goal is to become more skilled socially. I rested on my social laurels for a long time and focused on associating with people who fit my views on set issues. For maximum success I will focus on general social group construction as I advance into my second year of college. I want to see how much fun and rationality I can spread if I focus on being skilled at gathering smart and interesting people into the fun vortex I can create around me.

  3. My goal is to get a substantially higher GPA than I did last semester. I spent very little time on school but managed to pull off a 3.1, which was lower than my first-semester GPA; I want this trend to reverse as I spend more focused time on school and actually study for the first time in my life.

 

Things that prevent me from achieving my goals are mostly random web browsing and gaming, lots of ugh fields I've only recently been able to write down and start purging from my thought process, negative emotions that sap my willpower, and other, currently unknown factors. Hopefully I will be able to surpass these problems with the power of self-reflection and sharing, classical conditioning, and positive substance use.

My goals for the upcoming week involve some social and fitness goals until school starts on the 20th. Hopefully I can get these partially phased in and be able to focus more on academia once I'm back up at school. For specific milestones, I want to dance closely with at least 1 girl at a rave I'm going to tonight up in LA, and I want to start working on pull-ups so I can get back up to my previous total (3) and start building from there.

I expect I'll have to deal with some social anxiety at the rave and some ugh fields toward the fitness goals, but hopefully this form of specific goal-setting and reflection will work well. I will also have substances available as backup in case I fail to perform to my personal expectations. Combined, this should allow me to surpass my Akrasic Reasoning of the past for the sake of our combined future.

What can you gain from my efforts, as fellow rationalists? Hopefully, once I've completed my journey, I'll be able to explain my mental state well enough that you can learn from it and apply it to your own goals. When my mental state is low, reading about how someone else was able to push back up from a similarly bad state can be amazingly helpful, and I hope I can provide that to others.

 

Tsuyoku Naritai! My Friends

P.S. If luck exists, I wish to gain more of it and believe in it, so wish me luck with my first top-level post. :) Edit: It's now in Discussion until I see a surge of excitement toward the idea of this mini-sequence.

 

 

 

[LINK] Reverse priming effect from awareness of persuasion attempt?

4 Sniffnoy 05 August 2011 06:02PM

Recently came across this blog post on Language Log summarizing this recent paper by Laran et al. Super-short version: when people are aware that a slogan is trying to persuade them, a reverse-priming effect appears, in which they avoid doing what the slogan suggests. However, if their attention is drawn away from the fact that it is trying to persuade them, the usual priming effects are seen.

How to detonate a technology singularity using only parrot level intelligence - new meetup.com group in Silicon Valley to design and create it

-15 BenRayfield 31 July 2011 06:27PM

 

http://www.meetup.com/technology-singularity-detonator

9 people joined in the last 5 hours and the first meetup hasn't even happened yet. This is the meetup description, including technical designs and how it leads to a singularity:

 

The plan is to detonate an intelligence explosion (leading to a technology-singularity) starting with an open-source Java artificial intelligence (AI) software which networks peoples' minds together through the internet using realtime interactive psychology of feedback loops between mouse movements and generated audio. "Technological singularity refers to the hypothetical future emergence of greater-than human intelligence through technological means." http://en.wikipedia.org/wiki/Technological_singularity

Computer programming is not required to join the group, but some kind of technical or abstract thinking skill is. We are going to make this happen, not talk about it endlessly like so many other AI groups do.

Audivolv 0.1.7 is a very early version of the user-interface. The final version will be a massively multiplayer audio game unlike any existing game. It will learn based on mouse movements in realtime instead of requiring good/bad buttons to train it. The core AI systems have not been created yet; Audivolv is just the user-interface for that. http://sourceforge.net/projects/audivolv

The whole system will be 1 file you double-click to run and it works immediately on Windows, Mac, or Linux. This does not include Audivolv yet and has some parts that may be removed: http://sourceforge.net/projects/humanainet

It must be a "Friendly AI", which means it will be designed not to happen like in the Terminator movies or similar science fiction. It will work toward more productive goals and help the Human species. http://en.wikipedia.org/wiki/Friendly_artificial_intelligence My plan to make that happen is for it to be made of many peoples' minds and many computers, so it is us. It becomes smarter when we become smarter. One of the effects of that will be to extremely increase Dunbar's Number, which is the number of people or organizations that a person can intelligently interact with before forgetting others. Dunbar's number is estimated around 150 today.
http://en.wikipedia.org/wiki/Dunbar%27s_number

 

This only requires that the AI be as smart as a parrot, since the people using the program do most of the thinking and the AI only organizes their thoughts statistically enough to decide who should connect to whom, in the way evolved code is traded (and verified to use only math, so it's safe) between computers automatically, in this massively multiplayer audio game. We will detonate a technology singularity using only the intelligence of a parrot plus the intelligence of the people using the program. This is very surprising to most people, who think huge grids of computers and experts are required to build Human intelligence in a machine. This is a shortcut, and it will have much better results because it is us, so it has no reason to act against us, as an AI made only of software might.

 

Infrastructure

Communication between these programs through the internet will be done as a Distributed Hash Table. The most important part of that is that each key (hash of some file bytes) has a well-defined distance to each other key, a distance(hash1,hash2) function, which proves the correct direction to search the network to find the bytes of any hash, or to statistically verify (but not certainly) that it's not in the network. There may be a way to do it with certainty, but for my purposes approximate searching will work.

In the same Distributed Hash Table, there will be public-keys, used like filenames or identities, whose content can be modified only by whoever has the private-key. If code evolves to include calculations based on your mouse movements and the mouse movements of 5 other people in realtime, then the numbers from those other mouse movements (between -1 and 1 for each of 2 dimensions, for each of 5 people) will be digitally-signed, so everyone who uses the evolved code will know it is using the same people's continuing mouse movements rather than a modified code. The code can be modified, but a modified version would have a different hash and would be considered on its own merits, without the knowledge about the previous code and its specific connections to specific people. This will be done in realtime, not something to be saved and loaded later from a hard-drive. Each new mouse position (or a few of them sent at once) will be digitally-signed and broadcast to the network, the same as any other data broadcast to the network.

http://en.wikipedia.org/wiki/Distributed_hash_table
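The kind of key-distance function described above can be sketched concretely. This is not the meetup's actual code (which does not exist yet); it is a Kademlia-style example where distance(a, b) is the XOR of the two keys, which gives every pair of keys a single well-defined distance and lets a node route a search in the direction of decreasing distance:

```python
import hashlib

def node_id(data: bytes) -> int:
    # interpret a SHA-256 digest as a 256-bit integer key
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def distance(k1: int, k2: int) -> int:
    # XOR metric: distance(a, a) == 0 and distance is symmetric,
    # which is what makes directed search through the network possible
    return k1 ^ k2

a, b, c = node_id(b"alice"), node_id(b"bob"), node_id(b"carol")
assert distance(a, a) == 0
assert distance(a, b) == distance(b, a)

# a node routes a lookup toward whichever neighbour is closest to the target key
neighbours = [b, c]
target = node_id(b"some file bytes")
next_hop = min(neighbours, key=lambda n: distance(n, target))
```

Repeating that greedy step at each hop converges on the node responsible for the key, or establishes (approximately) that no node holds it.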

Similarly, but more fuzzy, the psychology of feedback loops between mouse movements and automatically evolving Java code, will be used as a distance function, and a second network organized that way, so you can search the network in the direction of other people whose psychology is more similar to your current state of mind and how you're using the program. This decentralized network will be searchable by your subconscious thoughts, because subconscious thoughts are expressed in how your mouse movements cause the code to evolve.

As you search this network automatically by moving your mouse, you will trade evolved code with those computers, always automatically verifying the code only uses math and no file-access or java.lang.System class or anything else not provably safe. You will experience the downloaded code as it gradually connects to the code evolved for your mouse movements, code which generates audio as 44100 audio amplitudes (number between -1 and 1) per second per speaker.

Some of the variables in the evolved code will be the hash of other evolved code. Each evolved code will have a hash, probably from the SHA-256 algorithm, so it could be a length 64 hex string written in the code. Each variable will be a number between -1 and 1. No computer will have all the codes for all its variables, but for those it doesn't have, it will use them simply as a variable. If it has those codes, then there is an extra behavior of giving that code an amount of influence proportional to the value of the variable, or deleting the code if the variable becomes negative for too long. In that way, evolved code will decide which other evolved code to download and how much influence each evolved code should have on the array of floating point numbers in the local computer.
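The influence-and-eviction scheme just described can be sketched as follows (the class and method names are mine, and the 3-tick patience threshold is an assumption; the post does not specify how long is "too long"):

```python
import hashlib

def code_hash(source: str) -> str:
    # SHA-256 of the code, written as a length-64 hex string
    return hashlib.sha256(source.encode()).hexdigest()

class EvolvedCode:
    def __init__(self, source: str):
        self.source = source
        self.hash = code_hash(source)
        self.vars = {}            # variable name or code hash -> value in [-1, 1]
        self.negative_ticks = {}  # hash -> consecutive ticks spent negative

    def tick(self, downloaded: dict, patience: int = 3):
        """Treat variables named by hashes of codes we hold as influence
        weights; evict any held code whose weight stays negative for
        `patience` ticks. Hashes we do not hold act as ordinary variables."""
        for h, value in list(self.vars.items()):
            if h not in downloaded:
                continue
            if value < 0:
                self.negative_ticks[h] = self.negative_ticks.get(h, 0) + 1
                if self.negative_ticks[h] >= patience:
                    del self.vars[h]      # variable negative for too long:
                    del downloaded[h]     # delete the code it referred to
            else:
                self.negative_ticks[h] = 0
```

For example, a held code whose influence variable sits at -0.2 for three consecutive ticks would be dropped from both the variable table and the local pool of downloaded codes.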

Since the decentralized network will be searched by psychology (instead of text or pixels in an image or other things search-engines know how to do today), and since it's connected to each person's subconscious mind through mouse/music feedback loops, the effect will be a collective mind made of many people and computers. We are Human AI Net, do you want to be temporarily assimilated?

 

Alternative To Brain Implants 

Statistically gets inputs to, and outputs from, neurons subconsciously, without extra hardware.

A neuron is a brain cell that connects to thousands of other neurons and slowly adjusts its electricity and chemical patterns as it learns. 

An incorrect assumption has extremely delayed the creation of technology that transfers thoughts between 2 brains. That assumption is, to quickly transfer large amounts of information between a brain and a computer, you need hardware that connects directly to neurons. 

Eyes and ears transfer a lot of information to a brain, but the other part of that assumption is eyes and ears are only useful for pictures and sounds that make sense and do not appear as complete randomness or whitenoise. People assume anything that sounds like radio static (a typical random sound) can't be used to transfer useful information into a brain. 

Most of us remember what a dial-up-modem sounds like. It sounds like information is in it, but it's too fast for Humans to understand. That's true of the dial-up-modem sound only because it's digital and is designed for a modem instead of for Human ears, which can hear around 1500 tones and, simultaneously, a volume for each. The dial-up-modem can only hear 1 tone that oscillates between 1 and 0, and no volume, just 1 or 0. It gets 56000 of those 1s and 0s per second. Human ears are analog, so they have no such limits, but brains can think at most at 100 changes per second.

If volume can have 20 different values per tone, then Human ears can hear up to 1500 * 100 * log2(20) ≈ 650,000 bits of information per second. If you could take full advantage of that speed, you could transfer a book every few seconds into your brain, but the next bottleneck is your ability to think that fast.
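As a sanity check on that arithmetic (the inputs, 1500 tones, 100 changes per second, and 20 volume levels, are the post's assumptions rather than measured values):

```python
import math

tones = 1500                # distinguishable tones (the post's figure)
changes_per_second = 100    # changes per second a brain can track (the post's figure)
volume_levels = 20          # distinguishable volumes per tone (the post's figure)

# bits per symbol for 20 levels is log2(20); multiply by tones and rate
bits_per_second = tones * changes_per_second * math.log2(volume_levels)
print(round(bits_per_second))  # about 648,000, close to the 650,000 claimed
```

At roughly 650,000 bits per second, a megabyte-scale book would indeed take on the order of ten seconds, matching the "book every few seconds" claim only loosely.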

If you use ears the same way dial-up-modems use a phone line, but in a way designed for Human ears and Human brains instead of computers, then your ears are much faster data transfer devices than brain implants, and the same is true for transferring information as random-appearing grids of changing colors through your eyes. We have computer speakers and screens for input to brains. We still have some work to do on the output speeds of mouse and keyboard, but there are electricity devices you can wear on your head for the output direction. For the input direction, eyes and ears are currently far ahead of the most advanced technology in their data speeds to your brain. 

So why do businesses and governments keep throwing huge amounts of money at connecting computer chips directly to neurons? They should learn to use eyes and ears to their full potential before putting so many resources into higher-bandwidth connections to brains. They're not coming close to using the bandwidth they already have to brains.

Intuitively most people know how music can affect their subconscious thoughts. Music is a low bandwidth example. It has mostly predictable and repeated sounds. The same voices. The same instruments. What I'm talking about would sound more like radio static or whitenoise. You wouldn't know what information is in it from its sound. You would only understand it after it echoed around your neuron electricity patterns in subconscious ways. 

Most people have only a normal computer available, so the brain-to-computer direction of information flow has to be low bandwidth. It can be mouse movements, gyroscope based game controllers, video camera detecting motion, or devices like that. The computer-to-brain direction can be high bandwidth, able to transfer information faster than you can think about it. 

Why hasn't this been tried? Because science proceeds in small steps. This is a big step from existing technology, but a small step in the sense that most people already have the hardware (screen, speakers, mouse, etc). The big step is going from patterns of random-appearing sounds or video, to subconscious thoughts, to mouse movements, to software that interprets it statistically, and around that loop many times as the Human and computer learn to predict each other. Compared to that, connecting a chip directly to neurons is a small step.

It's a feedback loop: computer, random-appearing sound or video, ears or eyes, brain, mouse movements, and back to computer. It's very indirect, but it uses hardware that has evolved for millions of years, compared to the low-bandwidth hardware they implant in brains. Eyes and ears are much higher bandwidth, and we should be using them in feedback loops for brain-to-brain and brain-to-computer communication.

What would it feel like? You would move the mouse and instantly hear the sounds change based on how you moved it. You would feel around the sound space for abstract patterns of information you're looking for, and you would learn to find it. When many people are connected this way through the internet, using only mouse movements and abstract random-like sounds instead of words and pictures, thoughts will flow between the brains of different people, thoughts that they don't know how to put into words. They would gradually learn to think more as 1 mind. Brains naturally learn to communicate with any system connected to them. Brains don't care how they're connected. They grow into a larger mind. It happens between the parts of your brain, and it will happen between people using this system through the internet.

Artificial intelligence software does not have to replace us or compete with us. The best way to use it is to connect our minds together. It can be done through brain implants, but why wait for that technology to advance and become cheap and safe enough? All you need is a normal computer and the software to connect our subconscious thoughts and statistical patterns of interaction with the computer. 

Dial-up-modem sounds were designed for computers. These interactive sounds/videos would be designed for Human ears/eyes and the slower but much bigger and parallel way the data goes into brains. For years I've been carefully designing a free open-source software http://HumanAI.net  - Human and Artificial Intelligence Network, or Human AI Net - to make this work. It will be a software that does for Human brains what dial-up-modems do for computers, and it will sound a little like a dial-up-modem at first but start to sound like music when you learn how to use it. I don't need brain implants to flow subconscious thoughts between your brains over internet wires. 

Intelligence is the most powerful thing we know of. The brain implants are simply overkill, even if they become advanced enough to do what I'll use software and psychology to do. We can network our minds together and amplify intelligence and share thoughts without extra hardware. After that's working, we can go straight to quantum devices for accessing brains without implants. Let's do this through software and skip the brain-implant paradigm. If it works just a little, that will be enough: our combined minds will figure out how to make it work a lot more. That's how I prefer to start a http://en.wikipedia.org/wiki/Technological_singularity  We don't need businesses and militaries to do it first. We have the hardware on our desks. We're only missing the software. It doesn't have to be smarter-than-Human software. It just has to be smart enough to connect our subconscious thoughts together. The authorities have their own ideas about how we should communicate and how our minds should be allowed to think together, but their technology was obsolete before it was created. We can do everything they can do without brain implants, using only software and subconscious psychology. We don't need smarter-than-Human software, or anything nearly that advanced, to create a technology singularity. Who wants to help me change the direction of Human evolution using an open-source (GNU GPL) software? Really, you can create a technology singularity starting from a software with the intelligence of a parrot, as long as you use it to connect Human minds together.

 

Bayesian justice

18 gwern 26 July 2011 12:58AM

"The mathematical mistakes that could be undermining justice"

They failed, though, to convince the jury of the value of the Bayesian approach, and Adams was convicted. He appealed twice unsuccessfully, with an appeal judge eventually ruling that the jury's job was "to evaluate evidence not by means of a formula... but by the joint application of their individual common sense."

But what if common sense runs counter to justice? For David Lucy, a mathematician at Lancaster University in the UK, the Adams judgment indicates a cultural tradition that needs changing. "In some cases, statistical analysis is the only way to evaluate evidence, because intuition can lead to outcomes based upon fallacies," he says.

Norman Fenton, a computer scientist at Queen Mary, University of London, who has worked for defence teams in criminal trials, has just come up with a possible solution. With his colleague Martin Neil, he has developed a system of step-by-step pictures and decision trees to help jurors grasp Bayesian reasoning (bit.ly/1c3tgj). Once a jury has been convinced that the method works, the duo argue, experts should be allowed to apply Bayes's theorem to the facts of the case as a kind of "black box" that calculates how the probability of innocence or guilt changes as each piece of evidence is presented. "You wouldn't question the steps of an electronic calculator, so why here?" Fenton asks.

It is a controversial suggestion. Taken to its logical conclusion, it might see the outcome of a trial balance on a single calculation. Working out Bayesian probabilities with DNA and blood matches is all very well, but quantifying incriminating factors such as appearance and behaviour is more difficult. "Different jurors will interpret different bits of evidence differently. It's not the job of a mathematician to do it for them," says Donnelly.

The linked paper is "Avoiding Probabilistic Reasoning Fallacies in Legal Practice using Bayesian Networks" by Norman Fenton and Martin Neil. The interesting parts, IMO, begin on page 9, where they argue for using the likelihood ratio as the key piece of information for evidence, and not simply raw probabilities; page 17, where a DNA example is worked out; and pages 21-25, on the key piece of evidence in the Bellfield trial: no one claiming a lost possession (nearly worthless evidence).
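For readers who want to see the mechanics Fenton and Neil are gesturing at, the odds form of Bayes' theorem is a one-line computation. The numbers below are hypothetical illustrations, not figures from the paper:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds in favor of a hypothesis into a probability."""
    return odds / (1 + odds)

# Hypothetical example: prior odds of guilt 1:1000 (suspect drawn from a
# large pool), then a forensic match whose likelihood ratio is
# P(match | guilty) / P(match | innocent) = 1,000,000.
posterior_odds = update_odds(1 / 1000, 1_000_000)   # 1000:1 in favor of guilt
posterior_prob = odds_to_probability(posterior_odds)
```

This is also why the likelihood ratio, rather than a raw match probability, is the right thing to hand a jury: the one-in-a-million figure is meaningless until it is combined with the prior, which is exactly the step the prosecutor's fallacy skips.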

Related reading: Inherited Improbabilities: Transferring the Burden of Proof, on Amanda Knox.

Psychologist making pseudo-claim that recent works "compromise the Bayesian point of view"

2 p4wnc6 18 July 2011 02:06PM

I have recently been corresponding with a friend who studies psychology regarding human cognition and the best underlying models for understanding it. His argument, summarized very briefly, is given by this quote:

Lastly, there has been a huge amount of research over the last two decades that shows human reasoning is 1) entirely constituted by emotion, and that it is 2) mostly unconscious and therefore out of our control. A lot of this research has seriously compromised the Bayesian point of view. I am referring to work done by Antonio Damasio, who demonstrated the essential role emotion plays in decision making (Descartes' Error), Timothy Wilson, who demonstrated the vital role of the unconscious (Strangers to Ourselves), and Jonathan Haidt, who demonstrated how moral reasoning is dictated by intuition and emotion (The Emotional Dog and its Rational Tail). I could go on and on here. I assume that you are familiar with this stuff. I'd just like to know how you would respond to this work from the point of view of your studies (in particular, those two points). I don't mean to get in a tit for tat debate here, just want the other side of the story.

I am having trouble synthesizing a response that captures the Bayesian point of view (and is sufficiently backed up by sources so that it will be useful for my friend rather than just gainsaying of the argument) because I am mostly a decision theory / probability person. Are these works of psychology and neuroscience really illustrating that human emotion governs decision making? What are some good neuroscience papers to read that deal with this, and how do Bayesians respond? It may be that everything he mentions above is a correct assessment (I don't know and don't have enough time to read the books right now), but that it is irrelevant if you want to make good decisions rather than just accept the types of decisions we already make.

Thinking without words?

10 PhilGoetz 09 July 2011 06:25PM

Before language, people must have thought without words.  I often have the impression that I have a thought fully-formed in my head, yet I wait to listen to it unfold in words before moving on to the next thought.  Perhaps I could think much faster if I weren't addicted to words.

Has anyone developed techniques for thinking without words?

This would have a little in common with Buddhist practices of emptying your mind, but wouldn't be the same thing.  For one thing, Buddhists also try to empty their minds of images.  More importantly, they are trying not to think, while I'm trying to think - just not unpack everything into words.

Distracting wolves and real estate agents

23 PhilGoetz 07 July 2011 01:49PM

I'm starting the process of looking for a house to buy.  The first thing every real estate agent says you need to do is to sign an exclusive contract with a real estate agent before they take you to look at houses.

I spoke to some co-workers, and none of them signed the contracts.  I didn't understand:  How can you avoid signing a contract, when the real estate agent, whom you must work with for months, will begin every meeting by telling you that the first thing to do is to sign the contract?

My boss told me that she distracted her agent.  Whenever he brought the subject up, she questioned him about details, which led on to other details, until they were in the car and driving to a showing and talking about something else completely.

This would be a useful skill.  And I can't imagine myself pulling it off.  Something in my gut would twist, and I would choke on my own words, if I tried to use conversation not to communicate information, but to entrap someone into doing something they didn't want to do by ensuring that they would have to violate social conventions to get out of it.

(I asked my boss if she'd ever done that to me.  She smiled very sweetly and said, "Never!")

And even if I could get over the choking, stuttering, and turning red, I don't think I could keep the game up for an entire hour.  I'm inclined to do search, not dynamic control optimization - to play chess (okay, Freecell), not to juggle or do magic tricks.

Somebody who did have the power to do that would be able to do awesome Jedi mind tricks.  (Like, say, pick up women.  Is it a coincidence that some pickup artists are also magicians?)

Are you like me in that way?  Are most of us left-brained Spocks who can't even try to lie or manipulate people?  I've met a lot of you, and I think the answer is "yes", but I really want to see your answers.  If so, is it because you choose to be that way, or because you have no choice?  What is this personality trait that we don't even have a name for, why is rationality so highly-correlated with it, and what else correlates with it?

If you think we're being rational to be so rational, say that, too.

At Wolf Park in Indiana, the biologists, who probably have at least a bit of the nerdy rationalist about them, have developed a technique for dealing with wolves when they're in the enclosure with them.  The wolves interact using dominance displays.  When humans go into the enclosure, and the wolves realize these same people keep coming back (they ignore visitors), the wolves want to establish the places of these humans in the dominance ladder (pedantic note: ladder, not hierarchy).  So they repeatedly try to engage the humans in dominance contests.

The thing about a dominance contest is that quitting equals losing.  You can't opt out of one once it's started (unless you have super Jedi mind skills).  Old-school wolf-handling was to become dominant; the problem with that is that, if you ever go into the wolf enclosure on a day when you have a cold, or are depressed, or just not focused on the task at hand, the nice beta wolf you have regarded as your friend for years may leap on your throat and (at best) throw you to the ground.  Even if you survive, you would be ill-advised to ever go back into the wolf pen; an alpha, once overthrown, moves (strangely) to the very bottom of the dominance ladder, and is fair game for every wolf in the pack.  The no-teaming-up rule which seems to apply to wolf dominance fights (it doesn't apply to primates or felines, BTW) no longer applies.

It's better to leave the question of dominance unsettled.  Just like in the schoolyard, a person with an unknown rank has more leeway than one who is ranked; yet isn't constantly watched for signs of weakness the way an alpha is.  (An alpha wolf at Wolf Park once had a lower spinal injury.  He must have been in agony, but stayed where he was without whining or moving for 2 days before the humans rescued him - not because he couldn't move, but (we think) to keep the other wolves from noticing anything funny about his movements.)  Also, having the alphas be humans disrupts the pack dynamics that the biologists are studying.  So the biologists have developed a strategy of distracting the wolves before they can bring up the question of dominance.  One person does whatever it is they need to go into the enclosure to do; while the rest of them use toys, food, head-skritching, and tag-team techniques to keep any one wolf from focusing on any one human for long.

This is a use of the same technique that my boss used, that is practical and ethically laudable.  Could you do that?  If so, what makes it different?

I still don't think I could do it.  The cognitive load of trying to observe wolves and continually come up with novel distractions would be too great.  I don't think I could do it on the fly - at least, not well enough to ask people to bet their lives on it.  My mind operates with a long clock cycle.  I am CISC, not RISC.  Is that what makes nerds so famously poor at social interaction - that our minds are GOFAI, not cybernetics?

People neglect small probability events

11 XiXiDu 02 July 2011 10:54AM

Over at overcomingbias Robin Hanson wrote:

On September 9, 1713, so the story goes, Nicholas Bernoulli proposed the following problem in the theory of games of chance, after 1768 known as the St Petersburg paradox …:

Peter tosses a coin and continues to do so until it should land heads when it comes to the ground. He agrees to give Paul one ducat if he gets heads on the very first throw, two ducats if he gets it on the second, four if on the third, eight if on the fourth, and so on, so that with each additional throw the number of ducats he must pay is doubled.

Nicholas Bernoulli … suggested that more than five tosses of heads are morally impossible [and so ignored]. This proposition is experimentally tested through the elicitation of subjects’ willingness-to-pay for various truncated versions of the Petersburg gamble that differ in the maximum payoff. … All gambles that involved probability levels smaller than 1/16 and maximum payoffs greater than 16 Euro elicited the same distribution of valuations. … The payoffs were as described …. but in Euros rather than in ducats. … The more senior students seemed to have a higher willingness-to-pay. … Offers increase significantly with income. (more)

This isn’t plausibly explained by risk aversion, nor by a general neglect of possibilities with a <5% chance. I suspect this is more about analysis complexity, about limiting the number of possibilities we’ll consider at any one time.  I also suspect this bodes ill for existential risk mitigation.

The title of the paper is 'Moral Impossibility in the Petersburg Paradox: A Literature Survey and Experimental Evidence' (PDF):

The Petersburg paradox has led to much thought for three centuries. This paper describes the paradox, discusses its resolutions advanced in the literature while alluding to the historical context, and presents experimental data. In particular, Bernoulli’s search for the level of moral impossibility in the Petersburg problem is stressed; beyond this level small probabilities are considered too unlikely to be relevant for judgment and decision making. In the experiment, the level of moral impossibility is elicited through variations of the gamble-length in the Petersburg gamble. Bernoulli’s conjecture that people neglect small probability events is supported by a statistical power analysis.
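As a sketch of the quantity being elicited: the expected value of a Petersburg gamble truncated at n tosses grows linearly with n. The code below assumes one common convention for the truncated game, namely that the maximum payoff is also paid if no heads appears within the allowed tosses:

```python
def truncated_petersburg_ev(max_tosses):
    """Expected value (in ducats) of the Petersburg gamble truncated at max_tosses."""
    # Paul receives 2**(k-1) ducats if the first heads lands on toss k,
    # which happens with probability 1/2**k; each term contributes exactly 1/2.
    ev = sum((0.5 ** k) * 2 ** (k - 1) for k in range(1, max_tosses + 1))
    # Assumed convention: pay the maximum 2**(max_tosses - 1) if no heads appears.
    ev += (0.5 ** max_tosses) * 2 ** (max_tosses - 1)
    return ev
```

Each extra permitted toss adds half a ducat of expected value, so willingness-to-pay should rise with the truncation length; the experiment instead found valuations flat once probabilities below 1/16 were involved, which is the "moral impossibility" threshold at work.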

I think that people who are interested in raising awareness of risks from AI need to focus more strongly on this problem. Most discussions about how likely risks from AI are, or how seriously they should be taken, won't lead anywhere if the underlying reason for most of the superficial disagreement about risks from AI is that people discount anything under a certain threshold. There seems to be a point where things become vague enough that they get discounted completely.

The problem often doesn't seem to be that people doubt the possibility of artificial general intelligence. But most people would sooner question their grasp of “rationality” than give five dollars to a charity that tries to mitigate risks from AI because their calculations claim it was “rational” (those who have read the article by Eliezer Yudkowsky on 'Pascal's Mugging' know that I used a statement from that post and slightly rephrased it). The disagreement all comes down to a general averseness to options that have a low probability of being factual, even given that the stakes are high.

Nobody has so far been able to beat arguments that bear resemblance to Pascal’s Mugging, at least not by showing that it is irrational to give in from the perspective of a utility maximizer. One can only reject it based on a strong gut feeling that something is wrong. And I think that is what many people are unknowingly doing when they argue against the SIAI or risks from AI. They are signaling that they are unable to take such risks into account. What most people mean when they doubt the reputation of people who claim that risks from AI need to be taken seriously, or who say that AGI might be far off, is that risks from AI are too vague to be taken into account at this point, that nobody knows enough to make predictions about the topic right now.

When GiveWell, a charity evaluation service, interviewed the SIAI (PDF), they hinted at the possibility that one could consider the SIAI to be a sort of Pascal’s Mugging:

GiveWell: OK. Well that’s where I stand – I accept a lot of the controversial premises of your mission, but I’m a pretty long way from sold that you have the right team or the right approach. Now some have argued to me that I don’t need to be sold – that even at an infinitesimal probability of success, your project is worthwhile. I see that as a Pascal’s Mugging and don’t accept it; I wouldn’t endorse your project unless it passed the basic hurdles of credibility and workable approach as well as potentially astronomically beneficial goal.

This shows that a lot of people do not doubt the possibility of risks from AI but are simply not sure whether they should really concentrate their efforts on such vague possibilities.

Technically, from the standpoint of maximizing expected utility, given the absence of other existential risks, the answer might very well be yes. But even though we believe we understand this technical viewpoint of rationality very well in principle, it also leads to problems such as Pascal’s Mugging. And it doesn’t take a true Pascal’s Mugging scenario to make people feel deeply uncomfortable with what Bayes’ Theorem, the expected utility formula, and Solomonoff induction seem to suggest one should do.

Again, we currently have no rational way to reject arguments that are framed as predictions of worst-case scenarios that need to be taken seriously, even given a low probability of their occurrence, because of the scale of the negative consequences associated with them. Many people are nonetheless reluctant to accept this line of reasoning without further evidence supporting the strong claims and requests for money made by organisations such as the SIAI.

Here is for example what mathematician and climate activist John Baez has to say:

Of course, anyone associated with Less Wrong would ask if I’m really maximizing expected utility. Couldn’t a contribution to some place like the Singularity Institute of Artificial Intelligence, despite a lower chance of doing good, actually have a chance to do so much more good that it’d pay to send the cash there instead?

And I’d have to say:

1) Yes, there probably are such places, but it would take me a while to find the one that I trusted, and I haven’t put in the work. When you’re risk-averse and limited in the time you have to make decisions, you tend to put off weighing options that have a very low chance of success but a very high return if they succeed. This is sensible so I don’t feel bad about it.

2) Just to amplify point 1) a bit: you shouldn’t always maximize expected utility if you only live once. Expected values — in other words, averages — are very important when you make the same small bet over and over again. When the stakes get higher and you aren’t in a position to repeat the bet over and over, it may be wise to be risk averse.

3) If you let me put the $100,000 into my retirement account instead of a charity, that’s what I’d do, and I wouldn’t even feel guilty about it. I actually think that the increased security would free me up to do more risky but potentially very good things!
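Baez's point 2 is easy to make concrete with a simulation; the payoff numbers below are made up for illustration. A bet with positive expected value leaves almost every one-shot player worse off, while repeated play concentrates outcomes near the mean:

```python
import random

random.seed(0)

def play(n_bets):
    """Pay 1 ducat per bet; win 200 with probability 1%. EV is +1 per bet."""
    total = 0.0
    for _ in range(n_bets):
        total -= 1
        if random.random() < 0.01:
            total += 200
    return total

# One-shot players: roughly 99% simply lose their stake despite the positive EV.
one_shot = [play(1) for _ in range(10_000)]
fraction_losing = sum(1 for x in one_shot if x < 0) / len(one_shot)

# Repeated play: the law of large numbers pulls the per-bet average toward +1.
long_run_avg = sum(play(10_000) for _ in range(50)) / (50 * 10_000)
```

Whether this justifies risk aversion for one-shot, high-stakes decisions, or merely explains the intuition behind it, is exactly the point in dispute between Baez and the expected-utility maximizers.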

All this shows that there seems to be a fundamental problem with the formalized version of rationality. The problem might be human nature itself: some people are unable to accept what they should do if they want to maximize their expected utility. Or we are missing something else and our theories are flawed. Either way, to solve this problem we need to research these issues and thereby increase confidence in the very methods used to decide what to do about risks from AI, or increase confidence in risks from AI directly, enough to make mitigation look like a sensible option, a concrete and discernible problem that needs to be solved.

Many people perceive the whole world to be at stake already, whether due to climate change, war, or engineered pathogens. To them, something like risks from AI, a field where nobody seems to have any idea about the nature of intelligence, let alone general intelligence or the possibility of recursive self-improvement, seems like just another problem, one too vague to outweigh all the other risks. Most people already feel as if a gun is pointed at their heads; telling them about superhuman monsters that might turn them into paperclips then requires some really good arguments to outweigh the combined risk of all the other problems.

(Note: I am not making a claim about the possibility of risks from AI in and of itself, but rather putting forth some ideas about the underlying reasons why some people seem to neglect existential risks even though they know all the arguments.)
