Saying "Everyone Is Biased" May Create Bias
It looks like telling people "everyone is biased" might make them less willing to change their behavior and overcome their biases:
In initial experiments, participants were simply asked to rate a particular group, such as women, on a series of stereotypical characteristics, which for women were: warm, family-oriented and (less) career-focused. Beforehand, half of the participants were told that "the vast majority of people have stereotypical preconceptions." Compared to those given no messages, these participants produced more stereotypical ratings, whether about women, older people or the obese.

Another experiment used a richer measure of stereotyping – the number of clichés used by participants in their written account of an older person’s typical day. This time, those participants warned before writing that “Everyone Stereotypes” were more biased in their writings than those given no message; in contrast, those told that stereotyping was very rare were the least clichéd of all. Another experiment even showed that hearing the “Everyone Stereotypes” message led men to negotiate more aggressively with women, resulting in poorer outcomes for the women.
The authors suggest that telling participants that everyone is biased makes bias seem like no big deal: if everyone is doing it, then it's not wrong for me to do it as well. The solution they present, however, is a little white lie that prompts people to overcome their biases:
A further experiment suggests a possible solution. In line with the other studies, men given the "Everyone Stereotypes" message were less likely to hire a hypothetical female job candidate who was assertive in arguing for higher compensation. But other men told that everyone tries to overcome their stereotypes were fairer than those who received no information at all. The participants were adjusting their behaviour to fit the group norms, but this time in a virtuous direction.
An Experiment In Social Status: Software Engineer vs. Data Science Manager
Here is an interesting blog post about a guy who ran a résumé experiment with two positions that, he argues, are identical in terms of experience but occupy different "social status" positions in tech: a software engineer and a data science manager.
Interview A: as Software Engineer
Bill faced five hour-long technical interviews. Three went well. One was so-so, because it focused on implementation details of the JVM, and Bill’s experience was almost entirely in C++, with a bit of hobbyist OCaml. The last interview sounds pretty hellish. It was with the VP of Data Science, Bill’s prospective boss, who showed up 20 minutes late and presented him with one of those interview questions where there’s “one right answer” that took months, if not years, of in-house trial and error to discover. It was one of those “I’m going to prove that I’m smarter than you” interviews...
Let’s recap this. Bill passed three of his five interviews with flying colors. One of the interviewers, a few months later, tried to recruit Bill to his own startup. The fourth interview was so-so, because he wasn’t a Java expert, but came out neutral. The fifth, he failed because he didn’t know the in-house Golden Algorithm that took years of work to discover. When I asked that VP/Data Science directly why he didn’t hire Bill (and he did not know that I knew Bill, nor about this experiment) the response I got was “We need people who can hit the ground running.” Apparently, there’s only a “talent shortage” when startup people are trying to scam the government into changing immigration policy. The undertone of this is that “we don’t invest in people”.
Or, for a point that I’ll come back to, software engineers lack the social status necessary to make others invest in them.
Interview B: as Data Science Manager
A couple weeks later, Bill interviewed at a roughly equivalent company for the VP-level position, reporting directly to the CTO.
Worth noting is that we did nothing to make Bill more technically impressive than for Company A. If anything, we made his technical story more honest, by modestly inflating his social status while telling a “straight shooter” story for his technical experience. We didn’t have to cover up periods of low technical activity; that he was a manager, alone, sufficed to explain those away.
Bill faced four interviews, and while the questions were behavioral and would be “hard” for many technical people, he found them rather easy to answer with composure. I gave him the Golden Answer, which is to revert to “There’s always a trade-off between wanting to do the work yourself, and knowing when to delegate.” It presents one as having managerial social status (the ability to delegate) but also a diligent interest in, and respect for, the work. It can be adapted to pretty much any “behavioral” interview question...
Bill passed. Unlike for a typical engineering position, there were no reference checks. The CEO said, “We know you’re a good guy, and we want to move fast on you”. As opposed to the 7-day exploding offers typically served to engineers, Bill had 2 months in which to make his decision. He got a fourth week of vacation without even having to ask for it, and genuine equity (about 75% of a year’s salary vesting each year)...
It was really interesting, as I listened in, to see how different things are once you’re “in the club”. The CEO talked to Bill as an equal, not as a paternalistic, bullshitting, “this is good for your career” authority figure. There was a tone of equality that a software engineer would never get from the CEO of a 100-person tech company.
The author concludes that positions labeled as code-monkey-like are low status, while positions labeled as managerial are high status, even if both involve "essentially" the same sort of work.
Not sure about this methodology, but it's food for thought.
[LINK] How Do Top Students Study?
I found this Quora discussion very informative.
2. Develop the ability to become an active reader. Don't just passively read material you are given. But pose questions, develop hypotheses and actively test them as you read through the material. I think this is what another poster referred to when he advised that you should develop a "mental model" of whatever concept they are teaching you. Having a mental model will give you the intuition and ability to answer a wider range of questions than would be otherwise possible if you lacked such a mental model.

Where do you get this model? You creatively develop one as you are reading to try to explain the facts as they are presented to you. Sometimes you have to guess the model based on scarce evidence. Sometimes it is handed to you. If your model is a good one it should at least be able to explain what you are reading.
Having a model also tells you what to look for to disprove it -- so you can be hypersensitive for this disconfirming evidence. In fact, while you are reading you should be making predictions (in the form of one or more scenarios of where the narrative could lead) and carefully checking if the narrative is going there. You should also be making predictions and seeking contradictions to these predictions -- so you can quickly find out if your model is wrong.
Sometimes you may have two or more different models that can explain the evidence, so your task will be to quickly formulate questions that can prove one model while disconfirming the others. I suggest focusing on raising questions that could confirm/disprove the most likely one while disproving the others (think: differential diagnoses in medicine).
But once you have such a model that (i) explains the evidence and (ii) passes all the disconfirming tests you can throw at it then you have something you can interpolate and extrapolate from to answer far more than was initially explained to you.
Such models also make retention easier because you only need to remember the model as opposed to the endless array of facts it explains. Of course, your model could be wrong, but that is why you actively test it as you are reading, and adjust it as necessary. Think of this process as the scientific method being applied by you, to try to discover the truth as best you can.
Sometimes you will still be left with contradictions. I often found speaking to the professor after class was an efficient way of resolving them.
The author lists 8 other criteria, but this one had the biggest "light bulb" moment for me.
It was interesting to me because I intuitively would use this technique while listening/taking notes during lectures. But I never actually made a conscious decision to apply this consistently in all of my classes; it would only happen in classes I was interested in.
[LINK] Meditation Can Debias The Mind
This is interesting. Apparently, meditating for 15 minutes can reduce susceptibility to the sunk cost bias.
Across two separate experiments, the researchers tested this by giving one group of participants a 15-minute mindfulness meditation induction. Then they were given a business scenario which was designed to test the sunk cost bias.
In comparison to a control condition, thinking mindfully doubled the number of people who could avoid the sunk cost bias.
In the control condition just over 40% of people were able to resist the bias. This shot up to almost 80% among those who were thinking mindfully.
The researchers achieved similar results in another experiment and then went on to examine exactly how mindfulness is helpful.
In a third experiment they found that mindfulness increases the focus on the present moment, as it should.
A focus on the present in turn reduced negative feelings participants had about the ‘sunk cost’–the time, money and effort that had gone to waste.
This reduction in negative emotion meant participants were much better equipped to resist the bias.
Ironically, I did a search on Less Wrong to see if something like this had been posted before and came across this comment:
Good points. The lack of scientific research discussed is certainly an issue. I did a quick literature sweep before writing this post, but decided not to include that information here.
One is a sunk cost fallacy. If you have sunk ten days into it you are less willing to ditch it because fallible humans are often unable to act like good economists and recognize that sunk costs are irrelevant.

At the dhamma.org courses I haven't found that to be the case. The management at the Massachusetts center informed me that a large majority of students never return to take a second course. Perhaps the cost needs to be larger; people may find it difficult to give up the practice (when they have good reason to) if they have done it daily for some length of time.
According to that anecdote, a large majority of students never take a second course in meditation. The study above might explain why: meditating itself may make people less likely to engage in sunk cost thinking.
[LINK] What Can Internet Ads Teach Us About Persuasion?
How "one weird trick" conquered the Internet. Some excerpt I found interesting:
“Research on persuasion shows the more arguments you list in favor of something, regardless of the quality of those arguments, the more that people tend to believe it,” Norton says. “Mainstream ads sometimes use long lists of bullet points—people don’t read them, but it’s persuasive to know there are so many reasons to buy.” OK, but if more is better, then why only one trick? “People want a simple solution that has a ton of support.”
I actually see this technique used in a lot of religious apologetics. There's even a name for one of them: The Gish Gallop. Would it be fair to say that this technique is taking advantage of a naive or intuitive understanding of Bayesian updates?
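To make that question concrete, here is a toy numerical sketch (entirely my own framing, not from the article): a naive updater who treats each weak argument as an independent line of evidence becomes nearly certain after thirty near-worthless arguments.

```python
# Toy model of "naive" Bayesian updating on a Gish Gallop (my own illustration).
# Each argument carries a likelihood ratio barely above 1, but the naive listener
# multiplies them all in, as if the arguments were independent evidence.
def naive_posterior(prior_odds, likelihood_ratio, n_arguments):
    odds = prior_odds * likelihood_ratio ** n_arguments
    return odds / (1 + odds)

print(naive_posterior(0.1, 1.2, 1))   # one weak argument:    ~0.11
print(naive_posterior(0.1, 1.2, 30))  # thirty weak arguments: ~0.96
```

The arguments in a gallop are rarely independent, so the correct combined update is far smaller; the technique works precisely on listeners who update as if they were.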
What about all the weirdness? “A word like ‘weird’ is not so negative, and kind of intriguing,” says Oleg Urminsky of the University of Chicago Booth School of Business. “There’s this foot-in-the-door model. If you lead with a strong, unbelievable claim it may turn people off. But if you start with ‘isn’t this kind of weird?’ it lowers the stakes.” The model also explains why some ads ask you to click on your age first. “Giving your age is low-stakes but it begins the dialogue. The hard sell comes later.”
The "click on your age" first gambit seems a bit like Cached Selves.
“People tend to think something is important if it’s secret,” says Michael Norton, a marketing professor at Harvard Business School. “Studies find that we give greater credence to information if we’ve been told it was once ‘classified.’ Ads like this often purport to be the work of one man, telling you something ‘they’ don’t want you to know.” The knocks on Big Pharma not only offered a tempting needle-free fantasy; they also had a whiff of secret knowledge, bolstering the ad’s credibility.
Humanity's love affair with secrecy and its importance seems to go back quite a bit. The world's largest religion seems to have started out as one of many mystery religions in the Greco-Roman world at the time.
[LINK] Productivity Ninja: 5 Powerful Tips For Getting More Stuff Done
From the blog [Bakadesuyo](http://www.bakadesuyo.com/2013/10/productivity-ninja/):
>1) Know When You’re At Your Best
>And plan accordingly. To be a productivity ninja focus less on time management, and more on managing your energy.
>Charlie Munger, Vice-Chairman of Berkshire Hathaway, used a system like this to make sure he was always growing.
>He identified the hours when he was at his best — and then routinely stole one of those peak hours for learning.
>>Charlie Munger hit upon one strategy when he was a young lawyer. He decided that whenever his legal work was not as intellectually stimulating as he’d like, “I would sell the best hour of the day to myself.” He would take otherwise billable time at the peak of his day and dedicate it to his own thinking and learning. “And only after improving my mind — only after I’d used my best hour improving myself — would I sell my time to my professional clients.”
There are four more entries, but posting them here would probably violate copyright. Anyone implement any of the suggestions listed?
[LINK] Are Children Natural Bayesians?
This recent article at Slate thinks so:
Why Your 4-Year-Old Is As Smart as Nate Silver
It turns out that even very young children reason [using Bayes Theorem]. For example, my student Tamar Kushnir, now at Cornell, and I showed 4-year-olds a toy and told them that blocks made it light up. Then we gave the kids a block and asked them how to make the toy light up. Almost all the children said you should put the block on the toy—they thought, sensibly, that touching the toy with the block was very likely to make it light up. That hypothesis had a high “prior.”
Then we showed 4-year-olds that when you put a block right on the toy it did indeed make it light up, but it did so only two out of six times. But when you waved a block over the top of the toy, it lit up two out of three times. Then we just asked the kids to make the toy light up.
The children adjusted their hypotheses appropriately when they saw the statistical data, just like good Bayesians—they were now more likely to wave the block over the toy, and you could precisely predict how often they did so. What’s more, even though both blocks made the machine light up twice, the 4-year-olds, only just learning to add, could unconsciously calculate that two out of three is more probable than two out of six. (In a current study, my colleagues and I have found that even 24-month-olds can do the same).
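As a rough sketch of the inference the children are implicitly doing, here is a minimal Beta-Binomial version (the framing and function name are mine, not the study's actual model), starting from a uniform prior over each action's probability of lighting the toy:

```python
# Posterior mean of the lighting probability under a uniform Beta(1, 1) prior.
def beta_posterior_mean(successes, trials, prior_a=1, prior_b=1):
    return (prior_a + successes) / (prior_a + prior_b + trials)

on_toy = beta_posterior_mean(2, 6)  # block placed on the toy: lit 2 of 6 times -> 0.375
waving = beta_posterior_mean(2, 3)  # block waved over it: lit 2 of 3 times -> 0.6
print(on_toy, waving)  # the data now favor waving, matching the children's shift
```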
There also seems to be a reference to the Singularity Institute:
The Bayesian idea is simple, but it turns out to be very powerful. It’s so powerful, in fact, that computer scientists are using it to design intelligent learning machines, and more and more psychologists think that it might explain human intelligence.
(Of course, I don't know how many other AI researchers are using Bayes' Theorem, so the author might not have had the SI in mind.)
If children really are natural Bayesians, then why and how do you think we change?
Yale Creates First Self-Aware Robot?
Apparently a PhD candidate at the Social Robotics Lab at Yale created a self-aware robot:
In the mirror test, developed by Gordon Gallup in 1970, a mirror is placed in an animal’s enclosure, allowing the animal to acclimatize to it. At first, the animal will behave socially with the mirror, assuming its reflection to be another animal, but eventually most animals recognize the image to be their own reflections. After this, researchers remove the mirror, sedate the animal and place an ink dot on its frontal region, and then replace the mirror. If the animal inspects the ink dot on itself, it is said to have self-awareness, because it recognized the change in its physical appearance.
[...]
To adapt the traditional mirror test to a robot subject, computer science Ph.D. candidate Justin Hart said he would run a program that would have Nico, a robot that looks less like R2D2 and more like a jumble of wires with eyes and a smile, learn a three-dimensional model of its body and coloring. He would then change an aspect of the robot’s physical appearance and have Nico, by looking at a reflective surface, “identify where [his body] is different.”
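The article doesn't spell out Hart's algorithm, but the comparison step it describes might look, in cartoon form, something like the following sketch. This is NOT Hart's actual system; the body parts, colors, and function name are all hypothetical.

```python
# Toy sketch of "learn a model of your own appearance, then flag differences".
self_model = {"left_arm": "gray", "right_arm": "gray", "torso": "blue"}

def find_changes(observed_reflection):
    """Return the body parts whose observed color differs from the self-model."""
    return [part for part, color in self_model.items()
            if observed_reflection.get(part) != color]

# An experimenter marks the robot's left arm; the reflection now disagrees
# with the learned self-model in exactly one place.
print(find_changes({"left_arm": "red", "right_arm": "gray", "torso": "blue"}))
# -> ['left_arm']
```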
What do Less Wrongians think? Is this "cheating" traditional concepts of self-awareness, or is self-awareness "self-awareness" regardless of the path taken to get there?
Availability Heuristic and Getting Stranded: Stay With Your Car Or Seek Help?
In every single news story that I can remember about someone getting stranded with a broken-down car, the person leaves their car looking for help and they end up dead. Or, if there are multiple people in the car, the one person who goes off looking for help ends up dead and the other people who stayed with the car survive.
My question: If I get stranded in my car somewhere, should I go looking for help? Or should I follow my availability heuristic and stay put? Since I'm still in the process of debiasing myself, should I do the opposite of what this particular mental shortcut suggests, or would that be a sort of bias bias (analogous to the fallacy fallacy)?
Teaching Bayesianism
I've had a bit of success with getting people to understand Bayesianism at parties and such, and I'm posting this thought experiment that I came up with to see if it can be improved or if an entirely different thought experiment would be grasped more intuitively in that context:
Say there is a jar filled with dice. There are two types of dice in the jar: one is an 8-sided die with the numbers 1–8, and the other is a trick die that has a 3 on all faces. The jar has an even split between the two types. If a friend of yours grabbed a die from the jar at random, rolled it, and told you that it landed on a 3, is it more likely that they grabbed the 8-sided die or the trick die?
I originally came up with this idea to explain falsifiability (i.e., any number other than a 3 refutes the possibility that the trick die was picked) and the problem of a hypothesis that explains too much contradictory data, which is why I didn't go with, say, the example in the better article on Bayesianism. Eventually I increase the number of sides on the die (a hypothetical 50-sided die, say), the types of dice in the jar (100-sided, 6-sided, trick die), and the distribution of dice in the jar (90% of the dice are 200-sided but a 3 is rolled, etc.). Again, I've been discussing this at parties where alcohol is flowing and cognition is impaired, yet people understand it, so I figure if it works there it can be understood intuitively by many people.
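For anyone who wants to check the arithmetic behind the party version and its harder variants, here is a minimal sketch (the function name and framing are mine):

```python
from fractions import Fraction

def posterior_trick(p_trick, sides):
    """P(trick die | a 3 was rolled), given the prior share of trick dice
    and the number of sides on the fair die."""
    joint_trick = p_trick                            # P(trick) * P(3 | trick), and P(3 | trick) = 1
    joint_fair = (1 - p_trick) * Fraction(1, sides)  # P(fair) * P(3 | fair)
    return joint_trick / (joint_trick + joint_fair)

# Original party version: a 50/50 jar of 8-sided dice and trick dice.
print(posterior_trick(Fraction(1, 2), 8))    # 8/9: the trick die is far more likely

# Harder variant: 90% of the dice are 200-sided, yet a 3 is rolled.
print(posterior_trick(Fraction(1, 10), 200)) # 200/209, still ~0.96 for the trick die
```

Even with a heavy prior toward the many-sided dice, a single 3 is enough to swing the posterior, which is exactly the intuition the escalating variants are meant to pump.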