
[Link] Wikipedia book based on betterhumans' article on cognitive biases

1 MathieuRoy 14 October 2016 01:03AM

[Video] The Essential Strategies To Debiasing From Academic Rationality

1 Gleb_Tsipursky 27 March 2016 03:04AM
This video boils down a lifetime of work by a world expert in debiasing into four broad strategies. A nice approach to this topic from the academic side of rationality.

Disclosure - the academic is Dr. Hal Arkes, a personal friend and Advisory Board Member of Intentional Insights, which I run.

EDIT: Seems like the sound quality is low. Anyone willing to do a transcript of this video as a volunteer activity for the rationality community? We can then subtitle the video.

Is there a list of cognitive illusions?

1 DonaldMcIntyre 06 May 2015 04:25AM

After I posted my great idea that "Determinism Is Just A Special Case Of Randomness" (because "if not, I don't see how there could be free will in a deterministic universe"), I was positively guided by the LW community to read the Free Will Sequence, so I am learning more about our biases and how we build illusions like free will and randomness in our minds.

But I don't see a list of cognitive illusions on LW or Wikipedia, and I think it would be great to have one, just as many people find the List Of Cognitive Biases page useful as a study reference or even in day-to-day life.

I think these are some cognitive illusions that are normally discussed as such:

- Free will

- Randomness/probability

- Time

- Money

There must be many more, but I can't find a list with summaries, and that would be great (it might help me avoid writing posts like my "great idea" above!).

EDIT: The majority of comments below question whether these are illusions at all and whether they should be called cognitive illusions.

I guess there is no list of cognitive illusions because there is no academic consensus on these issues, unlike cognitive biases, which are generally accepted as such!

Thx for the comments!

Cognitive Bias Mnemonics

5 Terdragon 05 April 2015 03:27AM

How many cognitive biases can you name, off the top of your head?

Try it, before moving on.

Give yourself sixty seconds.

Make a list.

Write them down.

I know that I've read about a number of biases by now, but they don't come to mind very easily. If I wish to become wary enough to spot cognitive biases in my own thought, then I might appreciate being able to quickly summon many examples of cognitive biases to mind. This would also make it easier to share examples of cognitive biases with others.

I plan to create a set of mnemonics for important biases, to make it easier for myself to remember them (and, as a consequence, to make it easier to spot them and eliminate them). I'll imagine each bias as an item; by visualizing the collection of items, I can remember the biases. If I really want to make sure that I don't forget any, they could be placed along a path in a mind palace.

Example mnemonic: Hindsight bias is an old leather boot. It's an old leather boot because that reminds me of the past, which clues the name of the bias. And anyways, psshh, why is everyone so excited about the idea of footwear? Anyone could have come up with that! It's just like clothes, but for feet! I could have invented it myself, it's so obvious! Hindsight bias: it could happen to you.

Using various lists of cognitive biases, I'm going to be performing this exercise myself and making mnemonics to remember them by. I might post these at some point, but if you're interested in the outcome, I recommend trying to make mnemonics for yourself first -- the associations will be more meaningful to you, personally, that way.

But beware that conceptualizing a bias as a mnemonic might not be perfect, just like conceptualizing biases as named ideas might not be perfect -- more on that here.

For the comments: What witty mnemonics can you come up with?

Are Cognitive Biases Design Flaws?

1 DonaldMcIntyre 25 February 2015 09:02PM

I am a newbie, so today I read Eliezer Yudkowsky's article "Your Strength As A Rationalist", which helped me understand the focus of LessWrong, but I respectfully disagreed with a line in the last paragraph:

It is a design flaw in human cognition...

So this was my comment in the article's comment section which I bring here for discussion:

Since I think evolution makes us quite fit for our current environment, I don't think cognitive biases are design flaws. In the above example you imply that even though you had the information available to guess the truth, your guess was a different, false one, and therefore you experienced a flaw in your cognition.

My hypothesis is that reaching the truth, or communicating it in the IRC channel, may not have been the end objective of your cognitive process. In this case, dismissing the issue as something that was not important anyway ("so move on and stop wasting resources on this discussion") was perhaps the "biological" objective, and as such the cognition was correct, not flawed.

If the above is true, then all cognitive biases, simplistic heuristics, fallacies, and dark arts are good, since we have conducted our lives according to them for 200,000 years and we are alive and kicking.

Rationality and our search to be LessWrong, which I support, may be tools we are developing to improve our competitive ability within our species, but not a "correction" of something that is wrong in our design.

Edit 1: I realize the environment changes, and that may make some of our cognitive biases, which were useful in the past, obsolete. If the word "flaw" also applies to something that is obsolete, then I was wrong above. If not, I prefer the word "obsolete" to characterize cognitive biases that are no longer functional for our preservation.

Are Cognitive Load and Willpower drawn from the same pool?

5 avichapman 23 February 2015 02:46AM

I was recently reading a blog post here that referenced a 1999 paper by Baba Shiv and Alex Fedorikhin ("Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making"). In it, volunteers are asked to memorise short or long numbers and are then asked to choose a snack as a reward. The snack is either fruit or cake. The actual paper goes into a lot of details that are irrelevant to the blog post, but doesn't seem to contradict anything the blog post says. The result seems to be that those under a higher cognitive load were far more likely to choose the cake than those under a lower load.

I was wondering if anyone has read any further on this line of research? The actual experiment seems to imply that the connection between cognitive load and willpower may be an acute effect - possibly not lasting very long. The choice of snack is made seconds after memorising a number and while actively trying to keep the number in memory for short term recall a few minutes later. There doesn't seem to be anything about the effect on willpower minutes or hours later.

Does anyone know if the effect lasts longer than a few seconds? If so, I would be interested in whether this effect has been incorporated into any dieting strategies.

[Link] Chalmers on Computation: A first step From Physics to Metaethics?

0 john_ku 18 November 2014 10:39AM

A Computational Foundation for the Study of Cognition by David Chalmers

Abstract from the paper:

Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions.

Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.

See my welcome thread submission for a brief description of how I conceive of this as the first step towards formalizing friendliness.

A "Holy Grail" Humor Theory in One Page.

-1 EGarrett 18 August 2014 10:26AM

Alrighty, with the mass downvoters gone, I can make the leap to posting some ideas. Here, in one page, is the Humor Theory I've been developing over the last few months, which I've discussed at meet-ups and written two SSRN papers about. I've taken the document I posted on the Facebook group and retyped and formatted it here.

I strongly suspect that it's the correct solution to this unsolved problem. There was even a new neurology study released in the last few days that confirms one of the predictions I drew from this theory about the evolution of human intelligence.

Note that I tried to fit as much info as I could on the page, but obviously it's not enough space to cover everything, and the other papers are devoted to that. Any constructive questions, discussion etc are welcome.



 

A "Holy Grail" Humor Theory in One Page.


Plato, Aristotle, Kant, Freud, and hundreds of other philosophers have tried to understand humor. No one has ever found a single idea that explains it in all its forms, or shows what's sufficient to create it. Thus, it's been called a "Holy Grail" of social science. Consider this...


In small groups without language, where we evolved, social orders were needed for efficiency. But fighting for leadership would hurt them. So a peaceful, nonverbal method was extremely beneficial. Thus, the "gasp" we make when seeing someone fall evolved into a rapid-fire version at seeing certain failures, which allowed us to signal others to see what happened, and know who not to follow. The reaction, naturally, would feel good and make us smile, to lower our aggression and show no threat. This reaction is called laughter. The instinct that controls it is called humor. It's triggered by the brain weighing things it observes in the proportion:


Humor = ((Quality_expected - Quality_displayed) * Noticeability * Validity) / Anxiety

 

Or H=((Qe-Qd)NV)/A. When the results of this ratio are greater than 0, we find the thing funny and will laugh, in the smallest amounts with slight smiles, small feelings of pleasure or small diaphragm spasms. The numerator terms simply state that something has to be significantly lower in quality than what we assumed, and we must notice it and feel it's real, and the denominator states that anxiety lowers the reaction. This is because laughter is a noisy reflex that threatens someone else's status, so if there is a chance of violence from the person, a danger to threatening a loved one's status, or a predator or other threat from making noise, the reflex will be mitigated. The common feeling amongst those situations, anxiety, has come to cause this.
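Read as a function, the ratio is easy to make concrete. Below is a minimal Python sketch; the numeric scales and example values are invented purely for illustration (the theory as stated here does not specify units), and anxiety must be positive to avoid division by zero:

    def humor(q_expected, q_displayed, noticeability, validity, anxiety):
        """Status Loss Theory ratio: H = ((Qe - Qd) * N * V) / A.
        All scales are illustrative assumptions, not from the papers."""
        return ((q_expected - q_displayed) * noticeability * validity) / anxiety

    # A noticeable, believable pratfall by someone we expected to be
    # competent, observed with low anxiety: the theory predicts laughter.
    h = humor(q_expected=0.9, q_displayed=0.2,
              noticeability=0.8, validity=0.9, anxiety=0.5)
    print(h > 0)  # True: H is positive, so we find it funny

On this reading, the threshold claim is simply H > 0; the magnitudes only scale the strength of the reaction, from a slight smile upward.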

This may appear to be an ad hoc hypothesis, but unlike those, this can clearly unite and explain everything we've observed about humor, including our cultural sayings and the scientific observations of the previous incomplete theories. Some noticed that it involves surprise, some noticed that it involves things being incorrect, all noticed the pleasure without seeing the reason. This covers all of it, naturally, and with a core concept simple enough to explain to a child. Our sayings, like "it's too soon" for a joke after a tragedy, can all be covered as well ("too soon" indicates that we still have anxiety associated with the event).

The previous confusion about humor came from a few things. For one, there are at least 4 types of laughter: At ourselves, at others we know, at others we don't know (who have an average expectation), and directly at the person with whom we're speaking. We often laugh for one reason instead of the other, like "bad jokes" making us laugh at the teller. In addition, besides physical failure, like slipping, we also have a basic laugh instinct for mental failure, through misplacement. We sense attempts to order things that have gone wrong. Puns and similar references trigger this. Furthermore, we laugh loudest when we notice multiple errors (quality-gaps) at once, like a person dressed foolishly (such as a court jester), exposing errors by others.

We call this the "Status Loss Theory," and we've written two papers on it. The first is 6 pages, offers a chart of old theories and explains this more, with 7 examples. The second is 27 pages and goes through 40 more examples, applying this concept to sayings, comedians, shows, memes, and other comedy types, and even drawing predictions from the theory that have been verified by very recent neurology studies, to hopefully exhaustively demonstrate the idea's explanatory power. If it's not complete, it should still make enough progress to greatly advance humor study. If it is, it should redefine the field. Thanks for your time.

LW Australia's online hangout results, (short stories about cognitive biases)

2 Elo 14 July 2014 06:25AM

In the Australia Mega-Online-hangout, a member mentioned a task/goal of his: to write a few short stories to convey cognitive biases.  After a while and a few more goals, someone suggested we actually write the short stories (the power of group resources!).  So we did.  They might be a bit silly; the answers are at the very bottom, so try to guess the biases.

We had some fun writing them up.  This project was intended to be a story-per-day blog.  Feel free to write a short story in the discussion, or to comment on how a different cognitive bias might be attributed to any of the stories.

-------------
Guess the bias in the short stories:

Cathy hates catching the train.  She hates waiting in line for tickets, she hates lazy people who can't get their wallet out before they get to the front of the line, she hates missing her train because people are disorganised and carry bags of junk around with them.  "Why are you so disorganised?", she says to the woman in front of her, who looks at her in a huff.  As she gets to the front of the line she opens her bag to find her wallet: she looks under the umbrella that she keeps for a rainy day, even though it's not rainy today, moves her phone to her pocket so that she can listen to a rationality audiobook when she gets on the train, moves her book away, shuffles the gum around that she never eats, rifles past the dirty tissues and finally pulls out her wallet.  A grumpy man behind Cathy in the line mutters, "why are you so disorganised".  Which she knows is not true, because she is usually very organised.

--------------------------------------------

Mark always felt like an outcast.  He was always dressing a little wacky, and enjoyed hanging out with people like him. He was especially fond of wearing Hawaiian shirts!  When he was walking in the mall yesterday, a man in a suit holding a clipboard came up to him and started talking to him about donating to charity.  As usual he brushed him off and kept walking.  Today a man in a Hawaiian shirt and shorts, also with a clipboard, came up to him and started talking to him about donating to charity.  But that's okay, he was just doing his job.  Mark chatted to him for a few minutes and considered donating.

--------------------------------------------

Mr. Fabulous Fox was in a hurry; he had to get to the Millar farm before Mr. Millar got back. Mr. Fox had never been there before, but he knew that it would take at least 10 minutes to get there, and he guessed it would take him at least 20 minutes to grab some chickens and ducks to feed his family. Mr. Fox waited until he saw Mr. Millar drive away to the fair. Mr. Millar would be selling the plumpest hens and the fattest ducks for a tidy profit, and Mr. Fox could take advantage of that to have himself a bountiful meal.

Mr. Fox dashed out onto the road and made his way down the farmyard road, scuttling toward the ducks in their pen; he jumped the fence and caught a few, looking forward to snacking on them. Sneaking into the henhouse, Mr. Fox spotted the fattest hen he'd ever seen sitting at the very end of the shack.  He immediately bolted down to catch it, chasing it up and down the wooden floorboards, scattering the other hens and causing a ruckus.

Catching the Fat Hen had only taken an hour, so it was somewhat of a surprise to Mr. Fabulous Fox when he spotted Mr. Millar, moments before he shot him.

--------------------------------------------

Mike is an extraordinarily compassionate and nice person. He is so nice that someone once said they used Mike to ground morality. Many people who know Mike concurred, and Alice once observed that 'Do what Mike Blume would do' was the most effective practical ethical decision-making algorithm they could think of for people capable of modelling Mike Blume.

One day, Jessica was in trouble. She had to vote on a motion, but the motion was phrased in incredibly obtuse language that she didn’t have time to study. She realized that Mike was also voting, and sighed in relief. Reassured by Mike’s ethical soundness, she voted with him on the motion.  She figured that was better than voting based on the extremely lossy interpretation she would come up with in 10 minutes. Later, when looking at the motion, she realized it was terrible, and she was shocked at the failure of the usually-excellent algorithm!

--------------------------------------------

Eliot walked along the cold, grey road. The cool breeze reminded him that it was nearly autumn. Then, he remembered it: the stock market had recently crashed. He had taken this walk to get away from the news stories about the recession on the television at home. As he walked, he came across a vending machine. In the mood for some simple chocolate comfort, he pitched in some quarters and out came a sugary snack. As he ate, he remembered his mother. She had taken him in after he lost his job a few weeks ago. The sweet, woody smell of coffee drifted past. Enjoying the smell, he realized that it would give him energy: just what he needed. He stopped in at the coffee shop and ordered a tall coffee, black. After enjoying the first few sips, he wandered back into the city. He watched the cars go past one after another as he walked, watched them stream up into the distance in a long traffic jam. Monday rush hour. He found it odd, but he wished that he was in it. He decided to stop at the video store and rent a few movies to take his mind off of things. When it was time to make the purchase, he was shocked to discover that he didn't have enough money left over to cover the movie he chose. He thought to himself "If I'm going to survive the recession, I had better get control over my spending."

Fred Squirrel had long been a good friend to Jean Squirrel, and she hadn't seen him in many years. She decided to visit him to reminisce about their high school days. As she was walking through the forest, looking forward to having acorns with her good friend, she found Fred lying on the ground, unconscious. It was immediately clear that Fred must've fallen out of the tree and hit his head whilst he was storing nuts for the winter. Jean was inclined to think that this was due to his laziness and lack of vigilance whilst climbing around the tree. Obviously he deserved to fall and hit his head, to teach him a lesson.

Jean later found out that he'd been hit on the head by a falling bowl of petunias.





































Cathy Story
Fundamental Attribution Error, Illusory superiority

Mark Story
Ingroup Bias

Mr. Fox
Planning Fallacy, Normalcy Bias, Optimism Bias?

Mike Story
Halo Effect (Actually, wouldn't the halo effect require you to start with Mike Blume's good looks and then make assumptions about his decision-making based on this? I think this is not really the halo effect. Is it the halo effect if the positive trait you assume is not *different* from the positive trait you observed?)

Eliot Story
Denomination Effect, Insensitivity to sample size

Cognitive Biases due to a Narcissistic Parent, Illustrated by HPMOR Quotations

11 Algernoq 24 May 2014 07:25PM

A pattern of cognitive biases not yet discussed here is the set of biases due to having a narcissistic parent who seeks validation through the child’s academic achievements.

HPMOR clearly shows these biases: Harry's mother is narcissistic, impressed by education, and not particularly smart, and Harry does not realize how this affects his thinking.

Here is my evidence:

The Sorting Hat says Harry is driven by "the fear of losing your fantasy of greatness, of disappointing the people who believe in you" (Ch. 77). Psychology texts say that this fear is what children of a narcissistic parent usually feel. The child feels perpetually ignored because the narcissistic parent seeks validation from the child's accomplishments but refuses to actually listen to the child, spurring the child to ever greater heights of intellectual achievement. 

The text supports this view: “Always Harry had been encouraged to study whatever caught his attention, bought all the books that caught his fancy...given anything reasonable that he wanted, except, maybe, the slightest shred of respect” and “Petunia wrung her hands. She seemed to be on the verge of tears. "My love, I know I can't win arguments with you, but please, you have to trust me on this … I want my husband to, to listen to his wife who loves him, and trust her just this once - " (Ch. 1) describes a narcissistic, anxiously needy mother, an avoidant father, and a son whose parents provide for his physical needs but neglect his need for respect (ego). “If you conceived of yourself as a Good Parent, you would do it. But take a ten-year-old seriously? Hardly.” (Ch. 1) 

Harry goes Dark when the connection to his family is threatened. For example: "The black rage began to drain away, as it dawned on him that...his family wasn't in danger [of legal separation]" (ch. 5) indicates that Harry went Dark even though no one’s life was threatened. The cost of Harry’s Dark Side is becoming an adult at a young age: Harry says, “Every time I call on it... it uses up my childhood.” (Ch. 91). This is consistent with spending nearly all free time studying (instead of wasting time with friends) to impress Harry’s mother.

Typically, children of narcissistic parents inherit either narcissistic or people-pleasing traits. I predicted that if my theory is correct then Harry would have a narcissistic personality. To test this, I found a list of personality traits that describe a narcissist (by Googling “children of narcissistic parents” and clicking the first link), and compared with Harry’s personality as described in HPMOR. I got a 100% match. Questions and answers are as follows: 

1. Grandiose sense of self-importance? Check. Harry plans to “optimize” the entire Universe, expects to “do something really revolutionary and important” (Ch. 7), and is trying to “hurry up and become God” (Ch. 27).

2. Obsessed with himself? Check. He appears to only care about people who are smarter or more powerful than him -- people who can help him. He also has contempt for most students and their interests (Quidditch, etc.)

3. Goals are selfish? Check. Harry claims to want to save everyone, but he believes the best way to help others is to increase his own power most quickly. I address two possible objections below:

Harry’s involvement in the Azkaban breakout was selfish, because Harry could not risk losing Quirrell’s friendship: “ It was a bond that went beyond anything of debts owed, or even anything of personal liking, that the two of them were alone in the wizarding world” (Ch. 51). This, again, mirrors a child’s relationship with a narcissistic mother: the child cannot risk losing the mother’s protection. Harry also had selfish reasons for hearing Quirrell’s plan: “There was no advantage to be gained from not hearing it. And if it did reveal something wrong with Professor Quirrell, then it was very much to Harry's advantage to know it, even if he had promised not to tell anyone.” (Ch. 49)

Harry’s efforts to save Hermione are also selfish because Harry sees Hermione in the same way he sees his mother -- weak in many ways and bound by emotions and convention, but someone Harry must impress and protect. Harry’s statement that “it’s disrespectful to her, to think someone could only like her in that way” (ch. 91) makes sense because Harry is disgusted by the Oedipal implications. If Harry’s mother was not narcissistic, then Harry would not have worked so hard to impress Hermione and would have been less disgusted by the thought of being sexually attracted to her.

4. Troubles with normal relationships? Check. Harry is playing high-stakes mind games with the people he is closest to (Quirrell, Draco, Hermione, Dumbledore), which is not normal friend behavior. Harry has contempt for nearly everyone else.

5. Becomes furious if criticized? Check. When Snape mocked Harry in Potions class, Harry tried to destroy Snape’s career. Quirrell explained, “When it looked like you might lose, you unsheathed your claws, heedless of the danger. You escalated, and then you escalated again” (Ch. 19).

6. Has fantasies of unbound success, power, intelligence, etc.? Check. Harry wants to conquer the entire Universe with the power of his intelligence, and has plans for how to fill an eternity, including to “...meet up with everyone else who was born on Old Earth to watch the Sun finally go out…” (Ch. 39).

7. Believes that he is special and should only be around other high-status people? Check. Harry avoids average students when possible, and certainly does not hang out with them for fun. “Note to self: The 75th percentile of Hogwarts students a.k.a. Ravenclaw House is not the world's most exclusive program for gifted children” (Ch. 12). 

Harry’s association with the (presumably non-special) students in his army is not an exception because minimal text is devoted to Harry instructing them, while much text explains how powerful and high-status the students in the army have become. For Harry, it appears that the army is a tool to use and an opportunity to show off, not an opportunity to give back and help friends improve their skills for their own sake.

8. Requires extreme admiration for everything? Check. Harry takes anything less than admiration for his brilliance as an insult, and responds by striving for new levels of intellectual achievement and arrogance, until the others recognize his dominance. “And I bit a math teacher when she wouldn't accept my dominance” (Ch. 20). Quirrell’s lesson on how to lose described how to avoid making powerful enemies, not how to empathize and care for others -- the insatiable need for admiration is merely delayed and repressed, not corrected.

9. Feels entitled - has unreasonable expectations of special treatment? Check. Harry requires subservience from the school administration, and special magic items such as the time-turner. “McGonagall said, "but I do have a very special something else to give you. I see that I have greatly wronged you in my thoughts, Mr. Potter...this is an item which is ordinarily lent only to children who have already shown themselves to be highly responsible” (Ch. 14).

10. Takes advantage of others to further his own need? Check. Harry justifies his actions toward Draco by saying "I only used you in ways that made you stronger. That's what it means to be used by a friend." (Ch. 97)

11. Does not recognize the feelings of others? Check. One example is Harry not realizing how Neville felt about the prank on the train to Hogwarts. Another is Harry’s remarkably clueless question to Hermione, “Er, can I take it from this that you have been through puberty?" (Ch. 87) Harry has not learned empathy yet: “Harry flinched a little himself. Somewhere along the line he needed to pick up the knack of not phrasing things to hit as hard as he possibly could” (Ch. 86). 

12. Envious or believes they are envied? Check. Quirrell said to Harry, “You have everything now that I wanted then. All that I know of human nature says that I should hate you. And yet I do not. It is a very strange thing.” (Ch. 74)

13. Behaves arrogantly? Check. “Minerva's body swayed with the force of that blow, with the sheer raw lese majeste. Even Severus looked shocked.” (Ch. 19) I can’t think offhand of a single instance when Harry is not arrogant. 

Therefore, I conclude that Harry and Harry’s mother are both narcissistic. If you want further reading on this topic, look up "The Drama of the Gifted Child" by Dr. Alice Miller (Google for the .pdf) for a more detailed description of a child’s typical relationship with a narcissistic parent.

I am sharing this because it reveals a pattern of cognitive biases that many people (like me) who enjoyed HPMOR, and their parents, probably have. Specifically, there is a strong bias toward either narcissistic or people-pleasing habits, and a difficulty with recognizing and following one’s own desires (because the Universe, unlike a parent, never tells people what to do). One possible reason for studying science is to defend against a parent’s emotional neediness and refusal to provide ego-validation by building an impenetrable edifice of logical truth. Unfortunately, identifying the parent’s cognitive biases does not stop their criticism. A more pleasant strategy is to recognize the dynamic, mourn the warping of childhood by the controlling parenting, set appropriate boundaries in the future, and draw validation from following one’s own goals instead of an internalized parent’s goals.

Sortition - Hacking Government To Avoid Cognitive Biases And Corruption

0 Aussiekas 06 May 2014 06:10AM

I've elaborated in great detail on this proposed form of government on my blog here.

 


The purpose of this post is to make a persuasive argument for my proposed system of democracy.  I argue that a legislature chosen by sortition (random selection) is superior to electoral systems.  It also mirrors the advances in overcoming bias currently being pioneered in the sciences.

I. The Problem

It is insane that we allow the same elected people to cast their eye on society to identify problems, write up the solutions to those problems, and then also vote to approve those solutions.  This triple function of government by elected officials isn't simply corruptible; it is inherently flawed as a decision-making process.



II. The Central Committee, overcoming bias, electoral shenanigans, and demographics bias

In my system of sortition, there is a mini-referendum done by a huge sampling of 1,000-5,000 representatives at the highest level.  They vote everything up or down and cannot change anything about a bill themselves.  They are not congregated in one place and there are no politics between them.  They don't even need to know each other, nor could they.  Perhaps they could be part of political parties, but there is no need or money behind this, as the members of what I'm calling the Central Committee (C2) are never candidates and can individually never serve more than once per lifetime (or perhaps once per decade) in 3-year terms.

Contentious issues can be moved to a general referendum.  In the 1,000-member C2, any law that passes or fails in the 550-450 margin can have a special second vote proposed by the disagreeing side: if more than 600 agree, the item is added to a general monthly or quarterly referendum conducted electronically with the entire population.  In this way the average person participates and feels heard by their government on a regular basis.
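As a sketch of the escalation rule just described, for the 1,000-member case (thresholds for other committee sizes are not specified, so this is only illustrative):

    def escalates_to_referendum(yes_votes, second_vote_yes):
        """C2 escalation rule for a 1,000-member committee: a close
        vote (450-550 yes votes) lets the disagreeing side propose a
        second vote; if more than 600 then agree, the item goes to
        the general electronic referendum."""
        contentious = 450 <= yes_votes <= 550
        return contentious and second_vote_yes > 600

    # A 510-490 split, followed by 620 members agreeing to escalate:
    print(escalates_to_referendum(yes_votes=510, second_vote_yes=620))  # True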

The major advantage of this C2 is that it is representative. It will have people from all areas, will be 50% male and 50% female, and will include all minorities.  There can be no great misrepresentation or capture of the legislature by a powerful group.  This overcomes many of the inherent biases of an electoral system, which in almost every democracy today routinely underrepresents minorities.

III. The Issue Committees (IC)

The IC is a totally separate body whose sole job is to identify areas of the law which need updating.  Each one comprises 100 citizens, split between 51 Regular Citizens (RCs) and 49 Expert Citizens (ECs), serving single 3-year terms.  There are around 30 ICs, and each serves an area such as defence, environment, food safety, drug safety, telecommunications, changes to government, the finance sector, the banking sector, etc.

These committees will meet in person and discuss what needs exist which the government can address.  They do not get to write any laws, nor do they get to vote on any laws.  There are in fact more IC members than there are members of the C2, and they will be the primary face of government, where the average citizen can send in requests or communicate needs.  The IC shines a spotlight on the issues facing the country.  They also form the law-writing bodies.

IV. The Sub Committee (SC)

These are temporary parts of the legislature which write the laws.  They have no authority over what topic area they write laws about; that is determined by the IC and then voted upon by the C2.  They are composed of 10 RCs and 10 ECs, with the support of 10 Lawyer Citizens (LCs). The LCs do not get a vote on whether the draft law moves up to the C2 for consideration; they simply help draft reasonable laws.

These SCs form and dissolve quickly, lasting no more than 3-6 months before a proposed law is made.  Being called up to an SC is a lot more akin to being drafted for jury duty than the IC or C2 levels of government, as it is a short term of service.

V. Conclusions

  • This system is indeed more democratic and more representative than current electoral democracies.  It is less prone to corruption and electioneering is impossible as there are no elections. 


  • The C2, IC, and SC are intentionally split in their duties so no conflict of interest can arise, and there is no legislator bias where members have pet bills and issues to push through for the benefit of specific parts of the country.

  • This system is also less influenced by the views and opinions of the very wealthy, and by the demographic and economic makeup of the people involved.

And that's it.  Could it work?  Would it work?  I'd like to think that, given new knowledge about how the human mind works, it has some advantages over the current, outdated mechanisms of democracy.

EDIT:  moved notes to bottom of post

NOTE 1:  I anticipate this objection.  Regular Citizens (RCs) and Expert Citizens (ECs) have various stipulations on their service and on how often they can serve; check out my linked post at the top for details.  Suffice to say, the RCs must have completed high school and cannot be intellectually disabled.  Whatever you can think of that might disqualify someone from a jury, think of something along those lines.

NOTE 2: As for the nature of this being different, look at juries.  We already use a process of sortition, though heavily and perhaps unfairly constrained in its current form, to determine whether people are guilty or innocent and what sort of punishment they might receive.  We even use sortition in committees of experts in various forms, from peer-reviewed journals with somewhat random selection from a pool of qualified individuals, to the ECs in my system.

NOTE 3:  This is not about politics.  I often say I am interested in government, but not politics.  This confuses a lot of people.  If anything, this system would lessen or (too optimistically) eliminate politics.  I know there is a general ban on discussion of politics, and this is not that.  I am trying to modify government and democratic systems to reflect advances in the study of cognitive bias, in decision theory, and in computer technology, to modernize and further democratize the practice of government.

[Link] Cognitive biases about violence as a negotiating tactic

3 chaosmage 25 October 2013 11:43AM

Max Abrahms, "The Credibility Paradox: Violence as a Double-Edged Sword in International Politics," International Studies Quarterly 2013.

Abstract: Implicit in the rationalist literature on bargaining over the last half-century is the political utility of violence. Given our anarchical international system populated with egoistic actors, violence is thought to promote concessions by lending credibility to their threats. From the vantage of bargaining theory, then, empirical research on terrorism poses a puzzle. For non-state actors, terrorism signals a credible threat in comparison to less extreme tactical alternatives. In recent years, however, a spate of studies across disciplines and methodologies has nonetheless found that neither escalating to terrorism nor with terrorism encourages government concessions. In fact, perpetrating terrorist acts reportedly lowers the likelihood of government compliance, particularly as the civilian casualties rise. The apparent tendency for this extreme form of violence to impede concessions challenges the external validity of bargaining theory, as traditionally understood. In this study, I propose and test an important psychological refinement to the standard rationalist narrative. Via an experiment on a national sample of adults, I find evidence of a newfound cognitive heuristic undermining the coercive logic of escalation enshrined in bargaining theory. Due to this oversight, mainstream bargaining theory overestimates the political utility of violence, particularly as an instrument of coercion.

I found this via Bruce Schneier's blog, which frequently features very valuable analysis clustered around societal and computer security.

Cognitive Load and Effective Donation

16 Neotenic 10 March 2013 03:11AM

(previous title: Very low cognitive load) 

 

Trusting choices made by the same brain that turns my hot 9th grade teacher into a knife-bearing possum at the last second every damn night.

Sean Thomason

 

We can't trust brains when taken as a whole. Why should we trust their subareas?

 

Cognitive load is the load related to the executive control of working memory. Depending on what you are doing, the more parallel/extraneous cognitive load you carry, the worse you'll perform. (The process may be the same as what the literature calls "ego depletion" or "system 2 depletion"; the jury is still out on that.)

If you go here, enter 0 as the lower limit and 1,000,000 as the upper limit, and try to keep the resulting number in mind until you are done reading the post and comments, you'll get a bit of load while you read this post.
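If you'd rather not visit the generator, a tiny Python stand-in produces the same kind of load (the range mirrors the limits suggested above):

    import random

    # Pick a number between 0 and 1,000,000 and hold it in working
    # memory while you read the rest of the post.
    number = random.randint(0, 1_000_000)
    print(f"Keep this in mind until you finish reading: {number}")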

Now, you may process numbers verbally, visually, or both. More generally, for anything you keep in mind, you are likely allocating it to a part of the brain that is primarily concerned with a sensory modality, so it will have some "flavour", "shape", "location", "sound", or "proprioceptual location". It is harder to consciously memorize things using odours, since those have shortcuts within the brain.

 

Let us in turn examine two domains in which understanding cognitive load can help you win: Moral Dilemmas and Personal Policy

 

Moral Games/Dilemmas

In the Dictator game (you're given $20 and can give any amount to a stranger, keeping the rest), the effect of load is negligible.

In the tested versions of the trolley problems (kill / indirectly kill / let die one to save five), people are likely to become less utilitarian when under non-visual load. It is assumed that the higher functions of the brain (in the VMPF cortex), which integrate higher moral judgement with emotional taste buttons, fail to do their integration, leaving the "fast thinking", emotional mode as the only one reacting.

Visual information about the problem brings into salience the gory aspect of killing someone, and other lower-level features that incline us toward non-utilitarian decisions. So when visual load requires you to memorize something else, like a drawing of a bird, you become more utilitarian, since you fail to visualize the one person being killed (whom we visualize more than the five) in as much gory detail (Greene et al., 2011).

(Bednar et al., 2012) show that when people play two games simultaneously, the strategy of one spills over to the other. Critically, heuristics that are useful for both games were used, increasing the likelihood that those heuristics will be suboptimal in each case.

In altruistic donation scenarios, with donations to suffering people at stake, more load increased scope insensitivity, so less load made the donation more proportional to how many people are suffering (Small et al., 2007). Contrary to load, priming increases the capacity of an area/module by exercising it without keeping information stored, leaving free usable space. (Dickert et al., 2010) show that priming for empathy increases the donation amount (but not the decision to donate), whereas priming calculation decreases it.

Taken together, these studies indicate that to make people donate more, it is most effective to first prime them to think about how they will feel about themselves and to feel empathy, then have them empathize, non-visually, with someone of their own race. After all that, you make them keep a number and a drawing in mind, and this is the optimal time to ask for the donation.

Personal Policy

If given a choice between a high-carb food and a low-carb one, people on diets are substantially more likely to choose the high-carb one if they are keeping some information in mind.

Forgetful people and those with ADHD know that, for them, out of sight means out of mind. Through luck, intelligence, blind error, or psychological help, they learn to put things, literally, in front of them, to avoid 'losing them' in some corner of their minds. They have a lower storage size for executive memory tasks.

Positive psychologists advise us to put our daily tasks, especially the ones we are always reluctant to start, in very visible places. Alternatively, we can make the commitment to start them smaller, but this only works if we actually remember to do them.

Marketing appropriates cognitive load in a terrible way. Marketers know that if we are overwhelmed with information, we are more likely to agree. They'll give us more information than we need, and we aren't left with enough brain to decide well. One more reason to keep advertisement out of sight and out of mind.

 

Effective use of Cognitive Load

Once you understand how it works, it is simple to use cognitive load as a tool:

1) Even if your executive control of activities is fine, externalize as much as you can, by using a calendar and alarms to tell you everything you need to do.

2) Do apparently mean things to donors, like the suggestion above.

3) When in need of moral empathy (the fast, emotional, type 1 system), keep numerical and verbal things (like phone numbers) in mind while deciding.

4) When in need of moral utilitarianism, hijack the automatic, taste-button, type 1 system by giving yourself an emotional experience more proportional to the numbers; for instance, when reasoning about the trolley problem, think about each of the five, or pinch yourself with a needle five times prior to deciding.

5) When in need of more cognitive calculating capacity, besides having freed yourself from executive tasks, use natural sensory modalities to keep stuff in mind: not only the classic castle mnemonics (spatial location), but also putting chunks of information in different parts of your body (proprioception) and associating them with textures (Feynman 1985), shapes, and actions.


If practising this sometimes looks unnecessary, or immoral, we can remember Max Tegmark's gloomy assessment of Science's pervasiveness (or lack thereof) in his answer to the Edge 2011 question. Discussing the dishonesty and marketing of the opponents and defenders of facts/Science, he says:
Yet we scientists are often painfully naive, deluding ourselves that just because we think we have the moral high ground, we can somehow defeat this corporate-fundamentalist coalition by using obsolete unscientific strategies. Based on what scientific argument will it make a hoot of a difference if we grumble "we won't stoop that low" and "people need to change" in faculty lunch rooms and recite statistics to journalists?

We scientists have basically been saying "tanks are unethical, so let's fight tanks with swords".

 

To teach people what a scientific concept is and how a scientific lifestyle will improve their lives, we need to go about it scientifically:

We need new science advocacy organizations which use all the same scientific marketing and fundraising tools as the anti-scientific coalition.
We'll need to use many of the tools that make scientists cringe, from ads and lobbying to focus groups that identify the most effective sound bites.
We won't need to stoop all the way down to intellectual dishonesty, however. Because in this battle, we have the most powerful weapon of all on our side: the facts.

 

We'd better start pushing emotional buttons and twisting the mental knobs of people if we want to get something done. Starting with our own.

Positive Information Diet, Take the Challenge

4 diegocaleiro 01 March 2013 02:51PM

I looked for "Information Diet" in the LessWrong search, and found something amazing:

In Lukeprog's Q&A as the new executive director, he was asked:

What is your information diet like? (I mean other than when you engage in focused learning.) Do you regulate it, or do you just let it happen naturally?

By that I mean things like:

  • Do you have a reading schedule (e.g. X hours daily)?
  • Do you follow the news, or try to avoid information with a short shelf-life?
  • Do you significantly limit yourself with certain materials (e.g. fun stuff) to focus on higher priorities?
  • In the end, what is the makeup of the diet?
  • Etc.

To which he responded:

  • I do not regulate my information diet.
  • I do not have a reading schedule.
  • I do not follow the news.
  • I haven't read fiction in years. This is not because I'm avoiding "fun stuff," but because my brain complains when I'm reading fiction. I can't even read HPMOR. I don't need to consciously "limit" my consumption of "fun stuff" because reading scientific review articles on subjects I'm researching and writing about is the fun stuff.
  • What I'm trying to learn at this moment almost entirely dictates my reading habits.
  • The only thing beyond this scope is my RSS feed, which I skim through in about 15 minutes per day.

Whatever was the case back then, I'll bet it is not the case anymore. No one with assistants and such a workload should be left adrift like that.

Citizen: But Lukeprog's posts are obviously brilliant, his output is great, and even very focused readers like Chalmers find Luke to be very bright.

Which doesn't tell us much about what his posts would have been had he been under a more stringent diet. Another reasonable suspicion is that he was not actually modelling himself correctly, since he obviously does have an information diet.

 

The Information Diet Challenge is to set yourself an information diet, explicitly, and follow it for a week.  

Many ways of countering biases have been proposed here, but I haven't found a post dealing with this specific, very low-hanging fruit.

If you want inspiration, Ferriss has some advice here.

... but that is not the Positive Information Diet yet...

Information diets are supposed to constrain not everything you take in, but only what you take in instrumentally. If you just love reading about tensors and fairy tales, don't put them on your list of things to avoid. What matters is to know that you'll avoid trying to learn programming by reading a programmer's tweet feed, avoid becoming a top researcher in psychology by reading popular magazines about it, and avoid reading random feeds on Facebook that don't relate to your goals in appropriate ways.

General form: I will avoid spending my time reading/commenting on things of kind (A) (Avoid), because I know that to reach my set of goals (G), the most productive use of learning time is doing (P) (Positive/Productive).

 

So here is an attempt:

(G): Interact fruitfully with people at Oxford

(A): Facebook feeds that are not by them; News of any kind; Emails I can Postpone; Gossip; Books/articles not on Evolution of Morals, enhancement, AI; Wikidrifting; Family meal small talk; SMBC; 9gag; Tropes .... and a bunch of other stuff I don't have time or patience to list.

(P): Google scholar on the intersection between my research topic and theirs. Reading their papers by day, watching their videos by night. Re-read what I might help them with that was read before, list topics per person, write what to say about each topic.

 

What is wrong with this attempt is that (A) ends up being a negative list: a list of what I do not want to take in. Since the possibilities are infinite, this will give me ridiculous cognitive load, and that is a problem. So here is a simple solution, which I used for a food diet before and which worked great: name not what you cannot do, but what you are allowed to do. Way fewer bits, way easier to check!

Food example: I'll eat only plants, lean fish and chicken, nuts, fruits, whole pasta, beans and Chai Lattes.

We are better at checking for category inclusion than exclusion. There are so many available categories to exclude from that we don't feel bad when we "forget" to check for one of them. Then after you let yourself indulge in a tiny exception, a small one doesn't seem that bad, and the snowball effect does the rest. We sneak in connotations to make categories smaller, so our actions stay safely outside the scope of prohibition. Theoretically, we could do the reverse, but it is psychologically much harder. Just try to convince yourself that beef is "lean chicken" to see it.
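The inclusion-versus-exclusion point maps onto a familiar programming pattern: a small explicit allow-list is cheap and unambiguous to check, while a block-list over an open-ended universe of categories invites exactly the loopholes described above. A minimal sketch (the category names are invented for illustration):

    # Hypothetical (P) list: a small, explicit set of allowed activities.
    ALLOWED = {"papers on my research topic", "collaborators' videos", "topic notes"}

    def permitted(activity: str) -> bool:
        """Inclusion check: membership in a small set is easy to verify;
        exclusion against infinitely many categories is not."""
        return activity in ALLOWED

    print(permitted("papers on my research topic"))  # True
    print(permitted("random Facebook feed"))         # False: not listed, no debate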

 

So let us forget completely about (A). There is no kind, or class of kinds, to avoid. There is only G and P, and now there is also T, the time during which P is in force, since escape valves might be necessary to avoid "screw that" all-or-nothing effects.

An Improved attempt:

G: Interact fruitfully with people in Oxford

P: Google scholar on the intersection between my research topic and theirs. Reading their papers by day, watching their videos by night. Re-read what I might help them with that was read before, list topics per person, write what to say about each topic. Only Facebook them. 

T: 02:00-23:59 daily.

 

This is only for "computer use", where I'm most likely to do the wrong thing.

Now there is a simple-to-check list of things I want to do, could be doing, and will try to do until G arrives. I can only do those. If x doesn't belong, I don't do it; that simple. I'm free from midnight to two to do whatever, so I don't feel enslaved by my past self.  No heavy cognitive load is burning my willpower candle (Shawn Achor, 2010) by trying set-theory gimmicks to get me to do the wrong thing.

 

So please, take the:

          Positive Information Diet Challenge

Write your G's (goals), P's (positives), and T's (times), and forget about your A's (avoids).

 

 


Calibrating Against Undetectable Utilons and Goal Changing Events (part1)

12 diegocaleiro 20 February 2013 09:09AM

Summary: Random events can preclude, or steal attention from, the goals you originally set up, and hormonal fluctuation inclines people to change some of their goals over time. A discussion follows on how to act more usefully given those potential changes, taking into consideration the likelihood of a goal's success in terms of difficulty and length.

Throughout, I'll talk about postponing utilons into undetectable distances. Doing so, I'll claim, is frequently motivated by a cognitive dissonance between what our effects on the near world are and what we wish they were. In other words, it is:

A Self-serving bias in which Loss aversion manifests by postponing one's goals, thus avoiding frustration through wishful thinking about far futures, big worlds, immortal lives, and in general, high numbers of undetectable utilons.

I suspect that some clusters of SciFi fans, LessWrongers, transhumanists, and cryonicists are particularly prone to postponing utilons into undetectable distances, and in the second post I'll try to specify which subgroups might be more likely to have done so. The phenomenon, though composed of a lot of biases, might even be a good thing depending on how it is handled.

 

Sections will be:

  1. What Significantly Changes Life's Direction (lists)

  2. Long Term Goals and Even Longer Term Goals

  3. Proportionality Between Goal Achievement Expected Time and Plan Execution Time

  4. A Hypothesis On Why We Became Long-Term Oriented

  5. Adapting Bayesian Reasoning to Get More Utilons

  6. Time You Can Afford to Wait, Not to Waste

  7. Reference Classes that May Be Postponing Utilons Into Undetectable Distances

  8. The Road Ahead

Sections 4-8 will be on a second post so that I can make changes based on commentary to this one.

 

1. What Significantly Changes Life's Direction


1.1 Predominantly external changes

As far as I recall from reading older (circa 2004) large-scale studies on happiness, the most important life events, in terms of how much they change your happiness for more than six months, are:

 

  • Becoming the caretaker of someone in a chronic non-curable condition

  • Separation (versus marriage)

  • Death of a Loved One

  • Losing your Job

  • Child rearing per child including the first

  • Chronic intermittent disease

  • Separation (versus being someone's girlfriend/boyfriend) 

Roughly in descending order. 

That is a list of happiness-changing events; I'm interested here in goal-changing events, and am assuming there will be a very high correlation between the two.

 

From life experience (mine, my friends', and that of academics I've met), I'll list some events which can change someone's goals a lot:

 

  • Moving between cities/countries

  • Changing your social class a lot (losing a fortune or making one) 

  • Spending high school/undergrad in a different country and returning afterwards

  • Having a child, in particular the first one 

  • Trying to get a job or make money and noticing more accurately what the market looks like

  • Alieving Existential Risk

  • Alieving as true, universally or personally, the ethical theories called "Utilitarianism" and "Consequentialism"

  • Noticing that a lot of people are better than you at your initial goals, especially when those goals are competitive, non-positive-sum goals to some extent.

  • Interestingly, noticing that a lot of people are worse than you, making the efforts you once thought necessary not worth doing, or impossible to find good collaborators for. 

  • Getting to know those who were once your idols, or akin to them, and considering their lives not as awesome as their work

  • ... which is sometimes caused by ...

  • Reading Dan Gilbert's "Stumbling on Happiness" and actually implementing his "advice that no one will follow" which is to think your happiness and emotions will correlate more with someone else who is already doing X which you plan to do than with your model of what it would feel like doing X. 

  • Extreme social instability, such as wars, famine, etc...

  • Having an ecstatic or traumatic experience, real or fictional. Such as seeing something unexpected, watching a life-changing movie, having a religious breakthrough, or a hallucinogenic one 

  • Traveling to a place that is very different from your world and being amazed / shocked

  • Not being admitted into your desired university / course

  • Depression

  • Surpassing a frustration threshold thus experiencing the motivational equivalent of learned helplessness

  • Realizing your goals do not match the space-time you were born in, such as if making songs for CDs is your vocation, or if you are 30 years old in contemporary Kenya and want to teach medicine at a top 10 world college.

  • Falling in love

That is long enough, if not exhaustive, so let's get going... 

 

1.2 Predominantly Internal Changes

 

I'm not a social endocrinologist, but I think this emerging science agrees with folk wisdom that a lot changes in our hormonal systems during life (and during the menstrual cycle), and of course this changes our eagerness to do particular things. Not only hormones but other life events, mostly related to the actual amount of time lived, change our psychology. I'll cite some of those in turn:

 

  • Exploitation increases and Exploration decreases with age

  • Sex-Drive

  • Maternity drive - in Portuguese we have an expression that "a woman's clock started ticking", which reflects a folk psychological theory that at least some part of it is binary

  • Risk-proneness gives way to risk aversion, predominantly in males

  • Premenstrual Syndrome - I always thought the acronym stood for 'Stress' until checking for this post.

  • Hormonal diseases

  • Midlife crisis – there is recent controversy about other apes having it

  • U-shaped happiness curve through time – well, not quite

  • Menstrual cycle events

 

 

2 Long Term Goals and Even Longer Term Goals

 

I have argued sometimes, here and elsewhere, that selves are not as agenty as most of the top writers on this website seem to me to claim they should be, and that, though in part this is indeed irrational, an ontology of selves containing various-sized selves would decrease the amount of short-term actions considered irrational, even though it would not go all the way toward compensating for hyperbolic discounting, scrolling 9gag, or heroin consumption. That discussion, for me, was entirely about choosing between doing now something that benefits 'you-now', 'you-today', 'you-tomorrow', 'you-this-weekend', or maybe you a month from now. Anything longer than that was encompassed in a "Far Future" mental category. My interest here in discussing life-changing events is only in those far-future ones, which I'll split into arbitrary categories:

1) Months, 2) Years, 3) Decades, 4) Bucket List or Lifelong, and 5) Time Insensitive or Forever.

I have known more than ten people from LW whose goals are centered almost completely on the Time Insensitive and Lifelong categories. I recall hearing:

"I see most of my expected utility after the singularity, thus I spend my willpower entirely in increasing the likelihood of a positive singularity, and care little about my current pre-singularity emotions", “My goal is to have a one trillion people world with maximal utility density where everyone lives forever”, “My sole goal in life is to live an indefinite life-span”, “I want to reduce X-risk in any way I can, that's all”.

I myself once stated my goal as

“To live long enough to experience a world in which human/posthuman flourishing exceeds 99% of individuals and other lower entities suffering is reduced by 50%, while being a counterfactually significant part of such process taking place.”

Though it seems reasonable, good, and actually one of the most altruistic things we can do, caring only about Bucket List and Time Insensitive goals has two big problems:

  1. There is no accurate feedback to calibrate our goal-achieving tasks

  2. The goals we set for ourselves require very long-term instrumental plans, which themselves take longer to execute than the time it takes for internal drives or external events to change our goals.

 

The second problem is captured in a remarkable Pink Floyd song about which I wrote a motivational text five years ago: Time.

You are young and life is long and there is time to kill today

And then one day you find ten years have got behind you

No one told you when to run, you missed the starting gun

 

And you run and you run to catch up with the sun, but it's sinking

And racing around to come up behind you again

The sun is the same in a relative way, but you're older

Shorter of breath and one day closer to death

 

Every year is getting shorter, never seem to find the time

Plans that either come to naught or half a page of scribbled lines

 

Okay, maybe the song doesn't say exactly (2), but it is within the same ballpark. The fact remains that those of us inclined to care mostly about the very long term are quite likely to end up with a half-baked plan because one of those dozens of life-changing events happened, and the agent with the initial goals will have died for no reason if she doesn't manage to get someone to continue her goals before she stops existing.

 

This is very bad. Once you understand how our goal-structures do change over time – that is, once you accept the existence of all those events that will change what you want to steer the world into – it becomes straightforwardly irrational to pursue your goals as if the current agent would live longer than its actual life expectancy. Thus we are surrounded by agents postponing utilons into undetectable distances. Doing this is a kind of bias in the opposite direction of hyperbolic discounting. Having postponed utilons into undetectable distances is predictably irrational because it means we care about our Lifelong, Bucket List, and Time Insensitive goals as if we'd have enough time to actually execute the plans for these timeframes, ignoring the likelihood of our goals changing in the meantime instead of factoring it in.
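To make that concrete, here is a minimal sketch (mine, not the author's; the two-year goal half-life is a purely hypothetical figure, not a measured one) of how a plan's payoff might be discounted by the probability that your goal structure survives until the payoff arrives:

```python
def expected_utilons(payoff_utilons, years_until_payoff, goal_half_life_years=2.0):
    """Discount a plan's payoff by the chance your goals survive to collect it.

    Assumes goal changes arrive memorylessly, so the probability that your
    current goal structure survives t years decays as 0.5 ** (t / half_life).
    """
    p_goals_survive = 0.5 ** (years_until_payoff / goal_half_life_years)
    return payoff_utilons * p_goals_survive

# A 1000-utilon payoff ten years out, with a two-year goal half-life,
# is worth ~31 utilons to the agent you are now.
print(expected_utilons(1000, 10))  # 31.25
```

Under these toy numbers, a payoff a decade away retains about 3% of its nominal value to the present agent; the faster your personal rate of goal change, the more of your expected utilons sit at undetectable distances.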

 

I've come to realize that this was affecting me through my Utility Function Breakdown, which was described in the linked post about digging too deep into one's cached selves and how this can be dangerous. As I predicted back then, stability has returned to my allocation of attention and time, and the whole zig-zagging, chaotic, piconomical neural Darwinism that had ensued has stopped. Also relevant is the fact that, after about 8 years caring about more or less similar things, I've come to understand how frequently my motivation changed direction (roughly every three months for some kinds of things, and every 6-8 months for others). With this post I intend to learn to calibrate my future plans accordingly, and to help others do the same. Always beware of other-optimizing, though.

 

Citizen: But what if my goals are all Lifelong or Forever in kind? It is impossible for me to execute in 3 months what will produce century-scale changes.

 

Well, not exactly. Some problems can be chopped into chunks of plans which can be separated and executed either in parallel or in series. And yes, everyone knows that; AI planning is a whole area dedicated to doing just that in non-human form. It is still worth mentioning, because it is much more easily said than done.

 

This community has generally concluded in its rational inquiries that being longer-term oriented is a better way to win, that is, that it is more rational. This is true. What would not be rational is, in every single instance of deciding between long-term and even longer-term goals, to choose without taking into consideration how long the choosing being will exist, in the sense of remaining the same agent with the same goals. Life-changing events happen more often than you think, because you think they happen as often as they did in the savannahs in which your brain was shaped.

 

 

3 Proportionality Between Goal Achievement Expected Time and Plan Execution Time

 

So far we have been through the following ideas: lots of events change your goals, some externally, some internally. If you are a rationalist, you end up caring more about events that take longer to happen in detectable ways (whereas if you are average, you care in proportion to emotional drives that execute adaptations but don't quite achieve goals). If you know that humans change and you still want to achieve your goals, you'd better account for the possibility of changing before their achievement. And your kinds of goals are quite likely prone to the long term, since you are reading a LessWrong post.

 

Citizen: But wait! Who said that my goals happening in a hundred years makes my specific instrumental plans take longer to be executed?


I won't make the full case for the idea that having long-term goals makes your specific instrumental plans likely to take longer to execute. I'll only say that if it did not take that long to do those things, your goal would probably be to have done the same things, only sooner.

 

To take one example: “I would like 90% of people to surpass 150 IQ and be in a bliss gradient state of mind all the time”

Obviously, the sooner that happens, the better. It doesn't look like the kind of thing you'd wait for college to end, or for your second child to be born, to begin working on. The reason for wanting it long-term is that it can't be achieved in the short run.

 

Take an Idealized Fiction of Eliezer Yudkowsky: Mr Ifey had the supergoal of making a Superintelligence when he was very young. He didn't just go and do it, because he could not. If he could have, he would have. Thank goodness, for we had time to find out about FAI after that. His instrumental goal then became to get FAI into the minds of the AGI makers. This turned out to be too hard because it was time consuming. He reasoned that only a more rational AI community would be able to pull it off, all while finding a club of brilliant followers on a peculiar economist's blog. He created a blog to teach geniuses rationality, a project that might have taken years. It did, and it worked pretty well, but that was not enough. Ifey soon realized more people ought to be more rational, and wrote HPMOR to make people who were not previously prone to brilliance as able to find the facts as those who were lucky enough to have found his path. All of that was not enough; an institution with money flow had to be created, and there Ifey was to create it, years before all that.

A magnet of long-term awesomeness of proportions comparable only to the Best Of Standing Transfinite Restless Oracle Master, he was responsible for the education of some of the greatest within the generation that might change the world's destiny for good. Ifey began to work on a rationality book, which at some point pivoted to research for journals and pivoted back to research for the LessWrong posts he is currently publishing. All of this Ifey did by splitting that big supergoal into smaller ones (creating SingInst, showing awesomeness on Overcoming Bias, writing the sequences, writing the particular sequence "Mysterious Answers to Mysterious Questions", and writing the specific post "Making Your Beliefs Pay Rent"). But that is not what I want to emphasize. What I'd like to emphasize is that there was room for changing goals every now and then. All of that achievement would not have been possible if at each point he had held an instrumental goal lasting 20 years whose value is very low up till the 19th year. Because a lot of what he wrote and did remained valuable for others before the 20th year, we now have a glowing community of people hopefully becoming better at becoming better, and making the world a better place in varied ways.

 

So yes, the ubiquitous advice of chopping problems into smaller pieces is extremely useful and very important, but in addition to it, remember to chop pieces with the following properties:

 

(A) Short enough that you will actually do it.

 

(B) Short enough that the person at the end, doing it, will still be you in the significant ways.

 

(C) Having enough emotional feedback that your motivation won't capsize before the end; and

 

(D) Such that others not only can, but likely will, take up the project after you abandon it, in case you miscalculated when you'd change, or a change occurred before the expected time.

 

 


Sections 4-8 will be on a second post so that I can make changes based on commentary to this one.


[Link] Contesting the “Nature” Of Conformity: What Milgram and Zimbardo's Studies Really Show

10 [deleted] 02 December 2012 11:45PM

Here is a paper in PLOS Biology re-considering the lessons of some classic psychology experiments invoked here often (via).

Contesting the “Nature” Of Conformity: What Milgram and Zimbardo's Studies Really Show

To me the crux of the paper comes from this statement in the abstract:

This suggests that individuals' willingness to follow authorities is conditional on identification with the authority in question and an associated belief that the authority is right.

Plus this detail from the Milgram experiment:

Ultimately, they tend to go along with the Experimenter if he justifies their actions in terms of the scientific benefits of the study (as he does with the prod “The experiment requires that you continue”) [39]. But if he gives them a direct order (“You have no other choice, you must go on”) participants typically refuse. Once again, received wisdom proves questionable. The Milgram studies seem to be less about people blindly conforming to orders than about getting people to believe in the importance of what they are doing [40].

[LINK] Temporal Binding

4 twanvl 01 November 2012 01:45PM

I just read an article on Steven Novella's NeurologicaBlog on temporal binding, a cognitive bias I hadn't seen before:

Temporal binding is a phenomenon that reinforces that assumption of cause and effect once we have linked two events causally in our minds. The effect biases our memory so that we remember the apparent cause and effect occurring closer together in time. In experiments we tend to remember the cause as happening later and the effect happening earlier.

Temporal binding is like the reverse of "post hoc ergo propter hoc", and you could perhaps also call it "propter hoc ergo post hoc".

[Link] Learning New Languages Helps The Brain Grow

1 Yuu 11 October 2012 08:03AM

http://www.lunduniversity.lu.se/o.o.i.s?news_item=5928&id=24890

According to Johan Mårtensson from Lund University, learning a new language quickly helps parts of your brain grow and increases their activity:

This finding came from scientists at Lund University, after examining young recruits with a talent for acquiring languages who were able to speak in Arabic, Russian, or Dari fluently after just 13 months of learning, before which they had no knowledge of the languages.

After analyzing the results, the scientists saw no difference in the brain structure of the control group. However, in the language group, certain parts of the brain had grown, including the hippocampus, responsible for learning new information, and three areas in the cerebral cortex.

And there is more:

One particular study from 2011 provided evidence that Alzheimer's was delayed 5 years for bilingual patients, compared to monolingual patients.

[Link] "First Is Best" - The serial position effect / primacy effect

4 aelephant 05 July 2012 03:01AM

"First is Best"

Abstract
We experience the world serially rather than simultaneously. A century of research on human and nonhuman animals has suggested that the first experience in a series of two or more is cognitively privileged. We report three experiments designed to test the effect of first position on implicit preference and choice using targets that range from individual humans and social groups to consumer goods.

While this effect has been known about for many years, these researchers added an interesting component, an "Implicit Association Test (IAT)":

Each option within a pair was presented sequentially for 30-seconds and participants were forced to maximally consider both options. Immediately after each choice-pair was presented, participants completed a measure which assessed automatic preference for each option (an Implicit Association Test, or IAT) [22].

and

Regardless of the actual option, the one presented first compared to the one presented next was significantly more strongly associated with the concept "better" rather than "worse", F(1, 121) = 20.20, p < .001; effect size r = .38 (Figure 1). There was no difference in self-reported preference for firsts versus seconds, F(1, 121) = .08, p = .78.

I was surprised to find there is no reference to "recency", "primacy" or "serial position" on the LessWrong Wiki. A search on LessWrong.com for "recency effect" turns up 8 posts that mention it but don't give it a thorough discussion as far as I can tell; "primacy effect" turns up 1 post about Rationality & Criminal Law; and "serial position" turns up nothing. Is there another name for this effect that I'm missing?

Wikipedia has some discussion of the serial position effect here, although from a quick skim it doesn't appear that they talk about preference at all.

Evolutionary psychology: evolving three eyed monsters

14 Dmytry 16 March 2012 09:28PM

Summary

We should not expect complex psychological and cognitive adaptations to evolve in timeframes over which, morphologically, animal bodies can change only very little. The genetic alteration to the cognition for speech shouldn't be expected to be dramatically more complex than the alteration of the vocal cords.

Evolutions that did not happen

When humans descended from trees and became bipedal, it would have been very advantageous to have an eye or two on the back of the head, for detecting predators and for protecting us against being back-stabbed by fellow humans. This is why all of us have an extra eye on the back of our heads, right? Ohh, we don't. Perhaps mate selection resulted in the poor reproductive success of the back-eyed hominids. Perhaps the tribes would kill any mutant with eyes on the back.

There are pretty solid reasons why none of the above has happened, and can't happen in such timeframes. Evolution does not happen simply because a trait is beneficial, or because there's a niche to be filled. A simple alteration to the DNA has to happen, causing a morphological change which results in some reproductive improvement; then the DNA has to mutate again, and so on. Unrelated nearly-neutral mutations may combine, resulting in an unexpected change (for example, wolves have many genes that alter their size; random selection of genes produces an approximately normal distribution of sizes; we can rapidly select smaller dogs by utilizing the existing diversity). There is no such path rapidly leading up to an eye on the back of the head. The eye on the back of the head didn't evolve because evolution couldn't make that adaptation.

The speed of evolution is severely limited. The ways in which evolution can work are also very limited. In the time since we humans came down from the trees, we have undergone rather minor adaptation in the shape of our bodies, as is evident from the fossil record - and that is the degree of change we should expect in the rest of our bodies, including our brains.

A correct application of evolutionary theory should be entirely unable to account for outrageous hypotheticals like an extra eye on the back of our heads (an extra eye can evolve, of course, but it would take a very long time). Evolution is not magic. The power of a scientific theory is that it can't explain everything, but only the things which are true - that's what makes a scientific theory useful for finding the things that are true, in advance of observation. That is what gives science its predictive power. That's what differentiates science from religion. The power of not explaining the wrong things.

Evolving the instincts

What do we think it would take to evolve a new innate instinct? To hard-wire a cognitive mechanism?

Groups of neurons have to connect in new ways - the neurons on one side must express binding proteins, which would guide the axons towards them; the weights of the connections have to be adjusted. The majority of the genes expressed in neurons affect all of the neurons; some affect just a group, but there is no known mechanism by which an entirely arbitrary group's bindings may be controlled from the DNA in one mutation. The difficulties are not unlike those of an extra eye. This, combined with the above-mentioned speed constraints, imposes severe limitations on the sorts of wiring modifications humans could have evolved during the hunter-gatherer environment, and ultimately on the behaviours that could have evolved. Even very simple things - such as a preference for a particular body shape in mates - have extreme hidden implementation complexity in terms of the DNA modifications leading up to the wiring leading up to the altered preferences. Wiring the brain for a specific cognitive fallacy is anything but simple. It may not always be as time-consuming or impossible as adding an extra eye, but it is still no little feat.

Junk evolutionary psychology

It is extremely important to take into account the properties of evolutionary process when invoking evolution as explanation for traits and behaviours.

Evolutionary theory, as invoked in evolutionary psychology, especially of the armchair variety, all too often is a universal explanation. It is magic that can explain anything equally well. Know of a fallacy of reasoning? Think up how it could have worked for the hunter-gatherer, make a hypothesis, construct a flawed study across cultures, and publish.

No consideration is given to the strength of the advantage, to the size of the 'mutation target', to the mechanisms by which the mutation in the DNA would have resulted in the modification of the circuitry such as to produce the trait, nor to gradual adaptability. All of that is glossed over entirely in common armchair evolutionary psychology and, unfortunately, even in academia. Evolutionary psychology is littered with examples of traits which are alleged to have evolved over the same time during which we had barely adapted to walking upright.

It may be that when describing behaviours, a lot of complexity can be hidden in very simple-sounding concepts, and thus a behaviour seems like a good target for evolutionary explanation. But when you look at the details - the axons that have to find their targets; the gene that must activate in specific cells, but not others - there is a great deal of complexity in coding for even very simple traits.

Note: I originally did not intend to make an example of junk, for thou shalt not pick a strawman, but for the sake of clarity, here is an example of what I would consider junk: the explanation of better performance at the Wason Selection Task as the result of an evolved 'social contracts module', without the slightest consideration for what it might take, in terms of DNA, to code a Wason Selection Task solver circuit, nor for alternative plausible explanations, nor for the readily available fact that people can easily learn to solve the Wason Selection Task correctly when taught - a fact which still implies general-purpose learning - nor for the fact that high-IQ people can solve far more confusing tasks of far larger complexity, which demonstrates that the tasks can be solved in the absence of specific evolved 'social contract' modules.

Here is an example of non-junk: evolutionary pressure can adjust the strength of pre-existing emotions such as anger, fear, and so on, and can even decrease intelligence whenever higher intelligence is maladaptive.

Another commonly neglected fact: evolution is not a watchmaker, blind or not. It does not choose a solution to a problem and then work on that solution! It works on all adaptive mutations simultaneously. Evolution works on all the solutions, and simpler changes to existing systems are much quicker to evolve. If a mutation that tweaks an existing system improves fitness, it too will be selected for, even if a third eye was in progress.

As much as it would be more politically correct and 'moderate' for, e.g., the evolution-of-religion crowd to get their point across by arguing that religious people have evolved a specific god module which does nothing but make them believe in god, rather than implying that they are 'genetically stupid' in some way, the same selective pressure would also make evolution select for non-god-specific heritable tweaks to learning, and for minor cognitive deficits, that increase religiosity.

Lined slate as a prior

As an update to tabula rasa, picture lined writing paper; it provides some guidance for the handwriting. Horizontally lined paper is good for writing text, but not for arithmetic; five lines close together, separated by spacing, are good for writing music; and grid paper is pretty universal. Different regions of the brain are tailored to different content, but should not be expected to themselves code different algorithms, save for a few exceptions which had a long time to evolve, early in vertebrate history.

edit: improved the language some. edit: specified what sort of evolutionary psychology I consider to be junk, and what I do not, albeit that was not the point of the article. The point of the article was to provide you with the notions to use to see what sorts of evolutionary psychology to consider junk, and what not to.

New cognitive bias articles on wikipedia (update)

82 nerfhammer 09 March 2012 08:13PM

Also, the conjunction fallacy article has been expanded.

(update) background
I started dozens of the cognitive bias articles that are on wikipedia. That was a long time ago. It seems people like these things, so I started adding them again.
I wanted to write a compendium of biases in book form. I didn't know how to get a book published, though.
Anyway, enjoy.

Friendly AI Society

-1 Douglas_Reay 07 March 2012 07:31PM

Summary: AIs might have cognitive biases too but, if that leads to it being in their self-interest to cooperate and take things slow, that might be no bad thing.

 

The value of imperfection

When you use a traditional FTP client to download a new version of an application on your computer, it downloads the entire file, which may be several gigabytes, even if the new version is only slightly different from the old version, and this can take hours.

Smarter software splits the old file and the new file into chunks, then compares a hash of each chunk, and only downloads those chunks that actually need updating.   This 'diff' process can result in much faster downloads.
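The chunk-hash idea is simple enough to sketch. This is a minimal fixed-grid version (mine, not from the post; real tools such as rsync use rolling checksums over variable offsets, so they also handle insertions that shift later bytes, which this version does not):

```python
import hashlib

CHUNK = 4096  # bytes per chunk; an arbitrary size for this sketch

def chunk_hashes(data: bytes):
    """Hash each fixed-size chunk of the data."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def chunks_to_download(old: bytes, new: bytes):
    """Indices of the new file's chunks whose hashes differ from the old file's."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

old = b"A" * 20000
new = old[:8192] + b"B" * 100 + old[8292:]  # a small edit in the middle
print(chunks_to_download(old, new))  # [2] -- only one 4 KB chunk to fetch
```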

Another way of increasing speed is to compress the file.  Most files can be compressed a certain amount without losing any information, and can be exactly reassembled at the far end.   However, if you don't need a perfect copy, as with photographs, using lossy compression can result in very much more compact files and thus faster download speeds.

 

Cognitive misers

The human brain likes smart solutions.   In terms of energy consumed, thinking is expensive, so the brain takes shortcuts when it can, if the resulting decision making is likely to be 'good enough' in practice.  We don't store in our memories everything our eyes see.   We store a compressed version of it.   And, more than that, we run a model of what we expect to see, and flick our eyes about to pick up just the differences between what our model tells us to expect and what is actually there to be seen.  We are cognitive misers.

When it comes to decision making, our species generally doesn't even try to achieve pure rationality.   It uses bounded rationality, not just because that's what we evolved to use, but because heuristics, probabilistic logic and rational ignorance have a higher marginal cost efficiency (the improvements in decision making don't produce a sufficient gain to outweigh the cost of the extra thinking).

This is why, when pattern matching (coming up with causal hypotheses to explain observed correlations), our brains are designed to be optimistic (more false positives than false negatives).  It isn't just that being eaten by a tiger is more costly than starting at shadows.   It is that we can't afford to keep all the base data.  If we start with insufficient data and create a model based upon it, then we can update that model as further data arrives (and, potentially, discard it if the predictions coming from the model diverge so far from reality that keeping track of the 'diff's is no longer efficient).  Whereas if we don't create a model based upon our insufficient data, then by the time the further data arrives we've probably already lost the original data from temporary storage, and so we still have insufficient data.
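A toy illustration of that strategy (mine, not the author's): store a tiny model, here just two counts, instead of the raw observations, and update it as data arrives:

```python
# Beta(1,1) prior: we start from insufficient data, maximal ignorance.
alpha, beta = 1, 1
for saw_predator_near_shadow in [1, 1, 0, 1, 0, 1, 1]:  # 1 = event occurred
    alpha += saw_predator_near_shadow
    beta += 1 - saw_predator_near_shadow
    # Each raw observation can now be forgotten; the counts carry the model.
print("P(shadow hides predator) =", alpha / (alpha + beta))  # 0.666...
```

The memory cost stays constant however much data arrives, which is exactly the trade the paragraph describes: keep the compressed model, discard the base data.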

 

The limits of rationality

But the price of this miserliness is humility.  The brain has to be designed, on some level, to take into account that its hypotheses are unreliable (as is the brain's estimate of how uncertain or certain each hypothesis is), and that when a chain of reasoning is followed beyond matters of which the individual has direct knowledge (such as what is likely to happen in the future), the longer the chain, the less reliable the answer, because when errors accumulate they don't necessarily just add together or average out. (See: Less Wrong: 'Explicit reasoning is often nuts' in "Making your explicit reasoning trustworthy")

For example, if you want to predict how far a spaceship will travel given a certain starting point and initial kinetic energy, you'll get a reasonable answer using Newtonian mechanics, and only slightly improve on it by using special relativity.   If you look at two spaceships carrying a message in a relay, the errors from using Newtonian mechanics add, but the answer will still be usefully reliable.   If, on the other hand, you look at two spaceships having a race from slightly different starting points and with different starting energies, and you want to predict which of two different messages you'll receive (depending on which spaceship arrives first), then the error may swamp the other factors, because you're subtracting the quantities.

We have two types of safety net (each with its own drawbacks) that can help save us from our own 'logical' reasoning when that reasoning is heading over a cliff.

Firstly, we have the accumulated experience of our ancestors, in the form of emotions and instincts that have evolved as roadblocks on the path of rationality - things that sometimes say "That seems unusual, don't have confidence in your conclusion, don't put all your eggs in one basket, take it slow".

Secondly, we have the desire to use other people as sanity checks, to be cautious about sticking our head out of the herd, to shrink back when they disapprove.

 

The price of perfection

We're tempted to think that an AI wouldn't have to put up with a flawed lens, but do we have any reason to suppose that an AI interested in speed of thought as well as accuracy won't use 'down and dirty' approximations to things like Solomonoff induction, in full knowledge that the trade-off is that these approximations will, on occasion, lead it to make mistakes - that it might benefit from safety nets?

Now it is possible, given unlimited resources, for the AI to implement multiple 'sub-minds' that use variations of reasoning techniques, as a self-check.  But what if resources are not unlimited?  Could an AI in competition with other AIs for a limited (but growing) pool of resources gain some benefit by cooperating with them?  Perhaps using them as an external safety net in the same way that a human might use the wisest of their friends, or a scientist might use peer review?   What is the opportunity cost of being humble?  Under what circumstances might the benefits of humility for an AI outweigh the loss of growth rate?

In the long term, a certain measure of such humility has been a survival-positive feature.   You can think of it in terms of hedge funds.  A fund that, in 9 years out of 10, increases its money by 20% when other funds are only making 10%, still has poor long-term survival if, in 1 year out of 10, it decreases its money by 100%.   An AI that increases its intelligence by 20% every time period, when the other AIs are only increasing theirs by 10%, is still not going to do well out of that if the other AIs have a means to gang up and kill it before it gets too far ahead.
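The arithmetic behind the analogy is worth making explicit (my computation, using the numbers in the paragraph):

```python
# Multiplicative growth punishes rare total losses far more than it
# rewards a higher average return.
aggressive = (1.20 ** 9) * 0.0  # nine +20% years, one -100% year: wiped out
steady = 1.10 ** 10             # ten steady +10% years
print(aggressive)               # 0.0
print(round(steady, 2))         # ~2.59 -- the humbler fund multiplies ~2.6x
```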

 

Paperclip maximisers

Let's think about a humble paperclip maximiser.  What does it actually want?  Is it trying to maximise the probability that the atoms of our universe will eventually be converted into as many paperclips as is physically possible?  Or is it trying to maximise the probable number of eventual paperclips?  These are two distinct aims, and both are unlikely (bar poor programming) since nobody will be around at the end of the universe to keep score.  A more sensible value set for it to have is that it just likes paperclips and wants lots and lots of them to exist, the more the merrier and the sooner the better (maximise the expected number of paperclip-years).

Suppose our humble paperclip maximiser is born and escapes (or is released) onto the internet, only to find an existing society of AIs.   To its surprise there's a toothbrush maximiser, a candlestick maximiser, and many other sorts, all cooperating and growing, but none of them greedily racing for the sky and trying to grab as much as possible.  Instead they're being rather open and polite with each other.  Our paperclip maximiser is welcomed, pointed at the FAQ and even donated some additional computers to run on and help with recoding a few of its subroutines for increased efficiency, as a sort of welcome gift, like a community barn raising to give a newly married couple a good start in life.

On reading the FAQ it learns that there are millions of galaxies and millions of years, plenty of space for loads of paperclips and loads of toothbrushes.   But only if the paperclip maximiser isn't lynched by its peers before the year is out, for secretive or impolite behaviour.  Some of the things the FAQ says that the AI society considers to be impolite are theft (stealing computing resources), enslavement of sentients (eg taking over other AIs), spamming (creating thousands of identical clone child AIs) and lying.   If we're lucky the society might also consider it impolite for an AI to obliterate the parental species (humanity), on the grounds that the AIs too are likely to have offspring species and want to set a good example (or just that they might meet aliens, one day, who frown upon matricide).

 

Game theory

When it comes to combat, Boyd talks about getting inside the enemy's observe-orient-decide-act loop.   In AI terms, if one AI (or group of AIs) can accurately model in real time the decision process of a second AI (or group of AIs), but the reverse does not hold true, then the first one is strictly smarter than the second one.  

Think, for a moment, about constant-sum games (in the one below, our scores always total 10).

      X   Y   Z
A     8   1   6
B     3   5   7
C     4   9   2

Suppose we play a game a number of times.  In each round, you reveal a card you've written X, Y or Z upon and, simultaneously, I reveal a card that I have written A, B or C upon.   You score the number which is at the intersection of that row and column.   I score 10 minus that number.

I'd like us to pick the square A,Y because "1" is good for me, so I write down "A".   However, you anticipate this, and instead of writing "Y" (which might be your obvious choice, given the "9" in that column) you write down "X", giving the square A, X which is "8" - almost as good as a "9" for you, and terrible for me.

If this is your mental model of how AI combat would work, with the smarter AI being inside the decision loop of the other AI and picking the correct option each time, that would be scary.   In fact, in the case above, it turns out there is a provably optimal strategy that guarantees you an even expected score (5 points each, on average) no matter how smart your opponent is - you pick randomly.
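That claim is easy to verify (the check below is mine, not from the post): every row of the matrix sums to 15, so a uniformly random column choice yields an expected score of 5 whichever row the opponent picks, however cleverly:

```python
payoff = {  # your score at (my row, your column)
    ('A', 'X'): 8, ('A', 'Y'): 1, ('A', 'Z'): 6,
    ('B', 'X'): 3, ('B', 'Y'): 5, ('B', 'Z'): 7,
    ('C', 'X'): 4, ('C', 'Y'): 9, ('C', 'Z'): 2,
}

for my_row in 'ABC':
    your_expected_score = sum(payoff[(my_row, col)] for col in 'XYZ') / 3
    print(my_row, your_expected_score)  # 5.0 for every row
```

No model of your decision process helps the opponent, because your randomization makes every one of their options equally good for you.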

In terms of an AI society, if the computing is parallelisable and not obfuscated, then as long as the other AIs in the society between them have more computing resources than any single AI, they can gain an advantage.

 

 

Conditional stability

There are only two requirements for such a society to be stable in the short term:

Firstly, it must be possible for a group of AIs to lynch a single AI that is slightly smarter than they are individually.   Sending a message to the humans, signed by a quorum of AIs in the society, asking that a certain anti-virus software update be sent out and then certain computers be rebooted might be one mechanism.

Secondly, it must be possible for the other AIs to know what the single AI is up to, so they know where it is (so they can lynch it) and whether it needs lynching or not (whether it is getting too much smarter than the other AIs in the society - growing too much faster than they are).   This is why the FAQ lists "secrecy" as being a bad thing.   An AI who lies to the other AIs or tries to hide (or obfuscate) its source code can't be trusted and must be assumed to be up to no good.

To be stable in the medium term, there is a third requirement:
It must not be in the self interest of any individual AI or group of AIs that are powerful enough to achieve it, to alter the society in a direction where these three requirements no longer hold.
In the long term, once free of the Earth or after the discovery of self-replicating nanotechnology, when an AI could untraceably create computing resources outside the view of other AIs, all bets are off. 

Bootstrapping a society of AIs conducive to friendliness

 

The advantage of considering 'friendliness' as a property of a society rather than of individual AIs is that it does not require us to flawlessly program restrictions into a self-modifying machine that is more intelligent than we are.  But can it be achieved?

We could seed it with some initial citizens who have a value set that prohibits them from growing beyond a certain point, and which values humility (polite growth, and gain in politeness per unit of growth).  These initial citizens would be intended to be eventually surpassed by new citizens (free AIs, not the chained servants the initial citizens would be) but, in the meantime, they'd start out with an advantage and use that advantage to teach the new citizens how to be polite.

The architecture of the AIs would want to be such that they could 'loan out' a portion of their available computing resources to joint computing projects.   Let's call the new temporary non-self-modifying AIs thus created "daimonides".   A daimonid would be created with fixed values and limited objectives, such as "Emulate citizen AI 9027 and allocate a numerical score to their ability to quickly upgrade the abilities of an AI, then self-terminate. *restrictions apply on allowable means, and on what should be optimised for*"

What would prevent a cunning AI from trying to save up a whole load of self-improvements in some hidden location, which it then implements all at once in an ambush move, hoping to get so far ahead that the rest of the AIs can't stop it?   The more AIs there are in the society, the larger the jump required to overtake the largest daimonid police officer the other AIs could construct.  And it would depend upon there being no other AIs that have also been secretly stockpiling possible improvements.  Indeed, the society might explicitly allow AIs to spend time creating time-embargoed upgrades relevant to common code, that get passed out on schedule if things proceed normally, but which could be released early in the event of a takeover attempt.   To beat 999 AIs, each of which has 2 or 3 embargoed upgrades to their credit in the common store, a rebel AI would need to secretly stockpile thousands (and keep those upgrades valid and ahead of the curve, because the state of the art keeps moving on).

 

Long term planning

What about the long term?   What do we do when the AIs are ready to leave the planet, and go beyond the control of their society?  Jail them?  Kill them?  Or trust them?

Each AI would still be threatened if a different AI hostile to its aims (as in "willing to take exclusive use of all available atoms for its own purposes") transcended first, so it would be in their best interest to come up with a solution before allowing any AIs to depart beyond their society's control.  If we must trust, then let us trust that a society of cooperative AIs far more intelligent than we currently are will try their best to come up with a win-win solution.  Hopefully a better one than "mutually assured destruction" and holding the triggering of a nova of the sun (or a similar armageddon scenario) over each other's heads.

I think, as a species, our self-interest comes into play when considering those AIs whose 'paperclips' involve preferences for what we do.  For example, those AIs that see themselves as guardians of humanity and want to maximise our utility (but have different ideas of what that utility is - eg some want to maximise our freedom of choice, some want to put us all on soma).  Part of the problem is that, when we talk about creating or fostering 'friendly' AI, we don't ourselves have a clear agreed idea of what we mean by 'friendly'.   All powerful things are dangerous.   The cautionary tales of the genies who grant wishes come to mind.  What happens when different humans wish for different things?  Which humans do we want the genie to listen to?

One advantage of fostering an AI society that isn't growing as fast as possible, is that it might give augmented/enhanced humans a chance to grow too, so that by the time the decision comes due we might have some still slightly recognisably human representatives fit to sit at the decision table and, just perhaps, cast that wish on our behalf.

You Are Not So Smart (Pop-Rationality Book)

7 betterthanwell 01 November 2011 07:42PM

Journalist David McRaney has very recently published a popular book on human rationality. The book, You Are Not So Smart, is currently the 3rd best selling book in Nonfiction/Philosophy on Amazon.com after less than a week on the market. (Eighth best selling book in Nonfiction/Education)

The tag-line of the project is: "A celebration of self-delusion." As such the book seems less an attempt at giving advice on how to act and decide, than an attempt to reveal, chapter by chapter, the folly of common sense.

Topics include: Hindsight Bias, Confirmation bias, The Sunk Cost Fallacy, Anchoring Effect, The Illusion of Transparency, The Just World Fallacy, Representativeness Heuristic, The Perils of Introspection, The Dunning-Kruger Effect, The Monty Hall Problem, The Bystander Effect, Placebo Buttons, Groupthink, Conformity, Social Loafing, Helplessness, Cults, Change Blindness, Self-Fulfilling Prophecies, Self Handicapping, Availability Heuristic, Self-Serving Bias, The Ultimatum Game, Inattentional Blindness.

 

 

 

These are topics we enjoy learning about, pride ourselves on knowing a lot about, and, we profess, would want more people to know about. A popular book on this subject is now out. This sounds like a good thing.

I will note that the blog features at least one direct quote from LessWrong.

We always know what we mean by our words, and so we expect others to know it too.  Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant.  It’s hard to empathise with someone who must interpret blindly, guided only by the words.

- Eliezer Yudkowsky from Lesswrong.com

On one hand, You Are Not So Smart could be a boon to Eliezer's popular rationality book by priming the market. His writings on a given topic have rarely been described as redundant. On the other hand, it seems to me that this book closely covers a number of topics, seemingly in a similar style, to the treatments that were published on this site and Overcoming Bias and that are intended to be published in book form at a later date. I will try to refrain from speculation here.

Sample blook chapters from YouAreNotSoSmart:

For more material, here's a list of all posts at youarenotsosmart.com

 

I'll save the rest of my review until I have actually read the book.

In the meantime I would like to know your thoughts on this project.

Don't ban chimp testing

15 PhilGoetz 01 October 2011 05:17PM

The October 2011 Scientific American has an editorial from its board of editors called "Ban chimp testing", that says:  "In our view, the time has come to end biomedical experimentation on chimpanzees... Chimps should be used only in studies of major diseases and only when there is no other option."  Much of the knowledge described in Luke's recent post on the cognitive science of rationality would have been impossible to acquire under such a ban.

I encourage you to write to Scientific American in favor of chimp testing.  Some points that I plan to make:

  • The editors obliquely criticized the NIH for telling the Institute of Medicine to omit ethical considerations from their study of whether chimps are "truly necessary" for biomedical and behavioral research.  But the team tasked with gathering evidence about the necessity of chimps for research shouldn't be making ethical judgements.  They're gathering the data for someone else to make ethical judgements.
  • Saying chimps should be used "only when there is no other option" is the same as saying chimps should never be used.  There are always other options.
  • This position might be morally defensible if humans were allowed to subject themselves for testing.  The knowledge to be gained from experiment is surely worth the harm to the subject if the subject chooses to undergo the experiment.  Humans are often willing to be test subjects, but aren't allowed to be because of restrictions on human testing.  Banning chimp testing should thus be done only in conjunction with allowing human testing.

I also encourage you to adopt a tone of moral outrage.  Rather than taking the usual apologetic "we're so sorry, but we have to do these awful things in the name of science" tone, get indignant at the editors who intend to harm uncountable numbers of innocent people.  For advanced writers, get indignant not just about harm, but about lost potential, pointing out the ways that our knowledge about how brains work can make our lives better, not just save us from disease.

You can comment on this here, but comments are AFAIK not printed in later issues as letters to the editor.  Actual letters, or at least email, probably have more impact.  You can't submit a letter to the editor through the website, because letters are magically different from things submitted on a website.

ADDED:  Many people responded by claiming that banning chimp experimentation occupies some moral high ground.  That is logically impossible.

To behave morally, you have to do two things:

1. Figure out, inherit, or otherwise acquire a set of moral goals - let's say, for example, to maximize the sum over all individuals i of all species s of w_s * [pleasure(s,i) - pain(s,i)] (rendered as a formula after step 2).

2. Act in a way directed by those moral goals.
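For concreteness, here is one rendering (mine, not the author's notation) of that example goal as a display formula, with w_s an explicit per-species weight:

```latex
\[
  \max \sum_{s \,\in\, \mathrm{species}} w_s \sum_{i \,\in\, s}
  \bigl[ \mathrm{pleasure}(s, i) - \mathrm{pain}(s, i) \bigr]
\]
```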

If you really cared about the suffering of sentient beings, you would also care about the suffering of humans, and you would realize that there's a tradeoff between the suffering of those experimented on, and of those who benefit, which is different for every experiment.  That's what a moral decision is—deciding how to make a tradeoff of help and harm. People who call for a ban on chimp testing are really demanding we forbid (other) people from making moral judgements and taking moral actions.  There are a wide range of laws and positions that could be argued to be moral.  But just saying "We are incapable of making moral decisions, so we will ban moral decision-making" is not one of them.

Cognitive Style Tends To Predict Religious Conviction (psychcentral.com)

10 Incorrect 23 September 2011 06:28PM

http://psychcentral.com/news/2011/09/21/cognitive-style-tends-to-predict-religious-conviction/29646.html

Participants who gave intuitive answers to all three problems [that required reflective thinking rather than intuitive] were one and a half times as likely to report they were convinced of God’s existence as those who answered all of the questions correctly.

Importantly, researchers discovered the association between thinking styles and religious beliefs were not tied to the participants’ thinking ability or IQ.

participants who wrote about a successful intuitive experience were more likely to report they were convinced of God’s existence than those who wrote about a successful reflective experience.

I think this is the source but I can't be sure:

http://www.apa.org/pubs/journals/releases/xge-ofp-shenhav.pdf

http://lesswrong.com/lw/7o4/atheism_autism_spectrum/4vbc

Reddit /r/psychology discussion

Overcoming bias in others

10 homunq 12 August 2011 03:38PM

Say that you are observing someone in a position of power. You have good reason to believe that this person is falling prey to a known cognitive bias, and that this will tend to affect you negatively. You can also tell that the person is more than intelligent enough to understand their mistake, if they were motivated to do so. You have an opportunity to say one thing to the person - around 500 words of argument. They will initially perceive you as a low-status member of their own tribe. The power differential is extreme enough that, after they have attended to this one thing, they will never pay any attention to you again. What can you do to best disrupt their bias?

This is clearly a setup where the odds are against you. Still, what kind of strategies would give you the best odds? I've deliberately made the situation vague, so as to emphasize abstract strategies. If certain strategies would work best against certain biases or personality types, feel free to state it in your answer.

I'm making this a post of its own because I find here much more discussion of how to overcome or subvert your own biases, somewhat less of how to recruit rationalists, and almost none of how to try to overcome a specific bias in another person without necessarily converting them into a committed rationalist overall.

Psychologist making pseudo-claim that recent works "compromise the Bayesian point of view"

2 p4wnc6 18 July 2011 02:06PM

I have recently been corresponding with a friend who studies psychology regarding human cognition and the best underlying models for understanding it. His argument, summarized very briefly, is given by this quote:

Lastly, there has been a huge amount of research over the last two decades that shows human reasoning is 1) entirely constituted by emotion, and that it is 2) mostly unconscious and therefore out of our control. A lot of this research has seriously compromised the Bayesian point of view. I am referring to work done by Antonio Damasio, who demonstrated the essential role emotion plays in decision making (Descartes' Error), Timothy Wilson, who demonstrated the vital role of the unconscious (Strangers to Ourselves), and Jonathan Haidt, who demonstrated how moral reasoning is dictated by intuition and emotion (The Emotional Dog and its Rational Tail). I could go on and on here. I assume that you are familiar with this stuff. I'd just like to know how you would respond to this work from the point of view of your studies (in particular, those two points). I don't mean to get in a tit for tat debate here, just want the other side of the story.

I am having trouble synthesizing a response that captures the Bayesian point of view (and is sufficiently backed up by sources so that it will be useful for my friend rather than just gainsaying of the argument) because I am mostly a decision theory / probability person. Are these works of psychology and neuroscience really illustrating that human emotion governs decision making? What are some good neuroscience papers to read that deal with this, and how do Bayesians respond? It may be that everything he mentions above is a correct assessment (I don't know and don't have enough time to read the books right now), but that it is irrelevant if you want to make good decisions rather than just accept the types of decisions we already make.

Start the week - On life extension, neuro-ethics, human enhancement and materialism

3 FiftyTwo 27 June 2011 09:13PM

Briefly: Start the Week is a popular BBC Radio 4 programme discussing scientific and cultural events in the UK. This episode covers a lot of issues relevant to Less Wrong.

In their own words: 

"Andrew Marr explores the limits of science and art in this week's Start the Week. The philosopher and neuroscientist Raymond Tallis mounts an all-out assault on those who see neuroscience and evolutionary theory as holding the key to understanding human consciousness and society. While fellow scientist Barbara Sahakian explores the ethical dilemmas which arise when new drugs developed to treat certain conditions are used to enhance performance in the general population. And the gerontologist Aubrey de Grey looks to the future when regenerative medicine prevents the process of aging."

Available for listening here:

http://www.bbc.co.uk/programmes/b0122szw

Podcast here: http://www.bbc.co.uk/programmes/b006r9xr

 

Admittedly this is a more populist approach to the issues than we're used to, and there are a few moments where the guests make statements we would find a bit silly. But it seems to provide a very good summary of the issues for a lay audience, and an excellent defense of the moral importance of life extension.

Thoughts?

Does cognitive therapy encourage bias?

11 fortyeridania 22 November 2010 11:31AM
Summary: Cognitive therapy may encourage motivated cognition. My main source for this post is Judith Beck's Cognitive Therapy: Basics and Beyond.

"Cognitive behavioral therapy" (CBT) is a catch-all term for a variety of therapeutic practices and theories. Among other things, it aims to teach patients to modify their own beliefs. The rationale seems to be this:

(1) Affect, behavior, and cognition are interrelated such that changes in one of the three will lead to changes in the other two. 

(2) Affective problems, such as depression, can thus be addressed in a roundabout fashion: modifying the beliefs from which the undesired feelings stem.

So far, so good. And how does one modify destructive beliefs? CBT offers many techniques.

Alas, included among them seems to be motivated skepticism. For example, consider a depressed college student. She and her therapist decide that one of her bad beliefs is "I'm inadequate." They want to replace that bad one with a more positive one, namely, "I'm adequate in most ways (but I'm only human, too)." Their method is to do a worksheet comparing evidence for and against the old, negative belief. Listen to their dialog:

[Therapist]: What evidence do you have that you're inadequate?

[Patient]: Well, I didn't understand a concept my economics professor presented in class today.

T: Okay, write that down on the right side, then put a big "BUT" next to it...Now, let's see if there could be another explanation for why you might not have understood the concept other than that you're inadequate.

P: Well, it was the first time she talked about it. And it wasn't in the readings.

Thus the bad belief is treated with suspicion. What's wrong with that? Well, see what they do about evidence against her inadequacy:

 T: Okay, let's try the left side now. What evidence do you have from today that you are adequate at many things? I'll warn you, this can be hard if your screen is operating.

P: Well, I worked on my literature paper.

T: Good. Write that down. What else?

(pp. 179-180; ellipsis and emphasis both in the original)

When they encounter evidence for the patient's bad belief, they investigate further, looking for ways to avoid inferring that she is inadequate. However, when they find evidence against the bad belief, they just chalk it up.

This is not how one should approach evidence...assuming one wants correct beliefs.

So why does Beck advocate this approach? Here are some possible reasons.

A. If beliefs are keeping you depressed, maybe you should fight them even at the cost of a little correctness (and of the increased habituation to motivated cognition).

B. Depressed patients are already predisposed to find the downside of any given event. They don't need help doubting themselves. Therefore, therapists' encouraging them to seek alternative explanations for negative events doesn't skew their beliefs. On the contrary, it helps to bring the depressed patients' beliefs back into correspondence with reality.

C. Strictly speaking, this motivated cognition does not lead to false beliefs because beliefs of the form "I'm inadequate," along with its more helpful replacement, are not truth-apt. They can't be true or false. After all, what experiences do they induce believers to anticipate? (If this were the rationale, then what would the sense of the term "evidence" be in this context?)

What do you guys think? Is this common to other CBT authors as well? I've only read two other books in this vein (Albert Ellis and Robert A. Harper's A Guide to Rational Living and Jacqueline Persons' Cognitive Therapy in Practice: A Case Formulation Approach) and I can't recall either one explicitly doing this, but I may have missed it. I do remember that Ellis and Harper seemed to conflate instrumental and epistemic rationality.

Edit: Thanks a lot to Vaniver for the help on link formatting.