We'd better start pushing people's emotional buttons and twisting their mental knobs if we want to get something done. Starting with our own.
I searched LessWrong for "Information Diet" and found something striking:
In Lukeprog's Q&A as the new executive director, he was asked:
What is your information diet like? (I mean other than when you engage in focused learning.) Do you regulate it, or do you just let it happen naturally?
By that I mean things like:
- Do you have a reading schedule (e.g. X hours daily)?
- Do you follow the news, or try to avoid information with a short shelf-life?
- Do you significantly limit yourself with certain materials (e.g. fun stuff) to focus on higher priorities?
- In the end, what is the makeup of the diet?
To which he responded:
- I do not regulate my information diet.
- I do not have a reading schedule.
- I do not follow the news.
- I haven't read fiction in years. This is not because I'm avoiding "fun stuff," but because my brain complains when I'm reading fiction. I can't even read HPMOR. I don't need to consciously "limit" my consumption of "fun stuff" because reading scientific review articles on subjects I'm researching and writing about is the fun stuff.
- What I'm trying to learn at this moment almost entirely dictates my reading habits.
- The only thing beyond this scope is my RSS feed, which I skim through in about 15 minutes per day.
Whatever was the case back then, I'll bet it isn't anymore. No one with assistants and such a workload should be left adrift like that.
Citizen: But Lukeprog's posts are obviously brilliant, his output is great, even very focused readers like Chalmers find Luke to be very bright.
Which doesn't tell us much about what they would have been had he followed a more stringent diet. Another reasonable suspicion is that he was not actually modelling himself correctly, since he obviously does have an information diet.
The Information Diet Challenge is to set yourself an information diet, explicitly, and follow it for a week.
Many ways of countering biases have been proposed here, but I haven't found a post dealing with this specific, very low-hanging fruit.
If you want inspiration, Ferriss has some advice here.
... but that is not the Positive Information Diet yet...
Information diets are not supposed to constrain everything you take in, only what you take in instrumentally. If you just love reading about tensors and fairy tales for their own sake, they don't need to go on the diet at all. What matters is knowing that you'll avoid trying to learn programming from a programmer's tweet feed, avoid trying to become a top researcher in psychology by reading popular magazines about it, and avoid reading random Facebook feeds that don't relate to your goals in appropriate ways.
General form: I will avoid spending my time reading/commenting on things of kind (A) (Avoid), because I know that to reach my set of goals (G), the most productive use of my learning time is doing (P) (Positive/Productive).
So here is an attempt:
(G): Interact fruitfully with people at Oxford
(A): Facebook feeds that are not by them; News of any kind; Emails I can Postpone; Gossip; Books/articles not on Evolution of Morals, enhancement, AI; Wikidrifting; Family meal small talk; SMBC; 9gag; Tropes .... and a bunch of other stuff I don't have time or patience to list.
(P): Google scholar on the intersection between my research topic and theirs. Reading their papers by day, watching their videos by night. Re-read what I might help them with that was read before, list topics per person, write what to say about each topic.
What is wrong with this attempt is that (A) ends up being a negative list, a list of what I do not want to take in. Since the possibilities are infinite, this will give me a ridiculous cognitive load, and that is a problem. So here is a simple solution, which I used for a food diet before and which worked great: name not what you cannot do, but what you are allowed to do. Way fewer bits, way easier to check!
Food example: I'll eat only plants, lean fish and chicken, nuts, fruits, whole pasta, beans and Chai Lattes.
We are better at checking for category inclusion than exclusion. There are so many available categories to exclude from that we don't feel bad when we "forget" to check for one of them. Then, after you let yourself indulge in a tiny one, a small one doesn't seem that bad, and the snowball effect does the rest. We sneak in connotations to make categories smaller, so our actions stay safely outside the scope of prohibition. Theoretically we could do the reverse, but it is psychologically much harder. Just try to convince yourself that beef is "lean chicken" to see it.
So let us forget completely about (A). There is no kind or class of kinds to avoid. There are only G and P, and now there is also T, the time during which P is in force, since escape valves might be necessary to avoid "screw that" all-or-nothing effects.
An Improved attempt:
G: Interact fruitfully with people in Oxford
P: Google scholar on the intersection between my research topic and theirs. Reading their papers by day, watching their videos by night. Re-read what I might help them with that was read before, list topics per person, write what to say about each topic. Only Facebook them.
T: 02:00-23:59 daily.
This is only for "computer use", where I'm most likely to do the wrong thing.
Now there is a simple-to-check list of things I want to do, could be doing, and will try to do until G arrives. I can only do those; if x doesn't belong, I don't do it, that simple. I'm free from midnight to two to do whatever, so I don't feel enslaved by my past self. No heavy cognitive load is burning my willpower candle (Shawn Achor, 2010) with set-theory gimmicks that get me to do the wrong thing.
So please, take the:
Positive Information Diet Challenge
Write your G's (goals), P's (positives), and T's (times), and forget about your A's (Avoids).
Summary: Random events can preclude or steal attention from the goals you originally set, and hormonal fluctuations incline people to change some of their goals over time. A discussion follows on how to act more usefully given those potential changes, taking into consideration the likelihood of a goal's success in terms of difficulty and length.
Throughout, I'll talk about postponing utilons into undetectable distances. Doing so, I'll claim, is frequently motivated by a cognitive dissonance between what our effects on the near world are and what we wish they were. In other words, it is:
A Self-serving bias in which Loss aversion manifests by postponing one's goals, thus avoiding frustration through wishful thinking about far futures, big worlds, immortal lives, and in general, high numbers of undetectable utilons.
I suspect that some clusters of SciFi, Lesswrong, Transhumanists, and Cryonicists are particularly prone to postponing utilons into undetectable distances, and in the second post I'll try to specify which subgroups might be more likely to have done so. The phenomenon, though composed of a lot of biases, might even be a good thing depending on how it is handled.
Sections will be:
1. What Significantly Changes Life's Direction (lists)
2. Long Term Goals and Even Longer Term Goals
3. Proportionality Between Goal Achievement Expected Time and Plan Execution Time
4. A Hypothesis On Why We Became Long-Term Oriented
5. Adapting Bayesian Reasoning to Get More Utilons
6. Time You Can Afford to Wait, Not to Waste
7. Reference Classes that May Be Postponing Utilons Into Undetectable Distances
8. The Road Ahead
Sections 4-8 will be on a second post so that I can make changes based on commentary to this one.
1 What Significantly Changes Life's Direction
1.1 Predominantly external changes
As far as I recall from reading old (circa 2004) large-scale studies on happiness, the life events that most change your happiness for more than six months are:
Becoming the caretaker of someone in a chronic non-curable condition
Separation (versus marriage)
Death of a Loved One
Losing your Job
Child rearing per child including the first
Chronic intermittent disease
Separation (versus being someone's girlfriend/boyfriend)
Roughly in descending order.
That is a list of happiness-changing events; I'm interested here in goal-changing events, and am assuming there will be a very high correlation between the two.
From life experience, mine, of friends, and of academics I've met, I'll list some events which can change someone's goals a lot:
Moving between cities/countries
Changing your social class a lot (losing a fortune or making one)
Spending highschool/undergrad in a different country to return afterwards
Having a child, in particular the first one
Trying to get a job or make money and noticing more accurately what the market looks like
Alieving Existential Risk
Noticing that a lot of people are better than you at your initial goals, especially when those goals are competitive, non-positive-sum goals to some extent.
Interestingly, noticing that a lot of people are worse than you, making the efforts you once thought necessary not worth doing, or impossible to find good collaborators for.
Getting to know those who were once your idols, or akin to them, and considering their lives not as awesome as their work
... which is sometimes caused by ...
Reading Dan Gilbert's "Stumbling on Happiness" and actually implementing his "advice that no one will follow", which is to expect your happiness and emotions to correlate more with those of someone who is already doing X, which you plan to do, than with your model of what doing X would feel like.
Extreme social instability, such as wars, famine, etc...
Having an ecstatic or traumatic experience, real or fictional. Such as seeing something unexpected, watching a life-changing movie, having a religious breakthrough, or a hallucinogenic one
Traveling to a place that is very different from your world and being amazed / shocked
Not being admitted into your desired university / course
Surpassing a frustration threshold thus experiencing the motivational equivalent of learned helplessness
Realizing your goals do not match the space-time you were born in, such as if making songs for CDs is your vocation, or if you are 30 years old in contemporary Kenya and want to teach medicine at a top 10 world college.
Falling in love
That is long enough, if not exhaustive, so let's get going...
1.2 Predominantly Internal Changes
I'm not a social endocrinologist but I think this emerging science agrees with folk wisdom that a lot changes in our hormonal systems during life (and during the menstrual cycle) and of course this changes our eagerness to do particular things. Not only hormones but other life events which mostly relate to the actual amount of time lived change our psychology. I'll cite some of those in turn:
Exploitation increases and Exploration decreases with age
Maternity Drive - in Portuguese we have an expression that "a woman's clock started ticking", which reflects a folk psychological theory that at least some part of it is binary
Risk-proneness gives way to risk aversion, predominantly in males
Premenstrual Syndrome - I always thought the "S" stood for "Stress" until checking for this post.
Middle Age crisis – recent controversy about other apes having it
U shaped happiness curve through time – well, not quite
Menstrual cycle events
2 Long Term Goals and Even Longer Term Goals
I have argued sometimes, here and elsewhere, that selves are not as agenty as most of the top writers on this website seem to me to claim they should be, and that though this is partly irrational, an ontology of selves that allowed for various sized selves would decrease the number of short-term actions considered irrational, even though that would not go all the way to justifying hyperbolic discounting, scrolling 9gag, or heroin consumption. That discussion, for me, was entirely about choosing between doing now something that benefits 'you-now', 'you-today', 'you-tomorrow', 'you-this-weekend', or maybe you a month from now. Anything longer than that was lumped into a "Far Future" mental category. My interest here in discussing life-changing events is only in those far-future ones, which I'll split into arbitrary categories:
1) Months 2) Years 3) Decades 4) Bucket List or Lifelong and 5) Time Insensitive or Forever.
I have known more than ten people from LW whose goals are centered almost completely on the Time Insensitive and Lifelong categories. I recall hearing:
"I see most of my expected utility after the singularity, thus I spend my willpower entirely in increasing the likelihood of a positive singularity, and care little about my current pre-singularity emotions", “My goal is to have a one trillion people world with maximal utility density where everyone lives forever”, “My sole goal in life is to live an indefinite life-span”, “I want to reduce X-risk in any way I can, that's all”.
I myself once stated my goal as
“To live long enough to experience a world in which human/posthuman flourishing extends to more than 99% of individuals and the suffering of other, lower entities is reduced by 50%, while being a counterfactually significant part of such a process taking place.”
Though it seems reasonable, good, and actually one of the most altruistic things we can do, caring only about Bucket List and Time Insensitive goals has two big problems:
(1) There is no accurate feedback to calibrate our goal-achieving tasks.
(2) The goals we set for ourselves require very long-term instrumental plans, which themselves take longer than the time it takes for internal drives or external events to change our goals.
You are young and life is long and there is time to kill today
And then one day you find ten years have got behind you
No one told you when to run, you missed the starting gun
And you run and you run to catch up with the sun, but it's sinking
And racing around to come up behind you again
The sun is the same in a relative way, but you're older
Shorter of breath and one day closer to death
Every year is getting shorter, never seem to find the time
Plans that either come to naught or half a page of scribbled lines
Okay, maybe the song doesn't say exactly (2), but it is in the same ballpark. The fact remains that those of us inclined to care mostly about the very long term are quite likely to end up with a half-baked plan because one of those dozens of life-changing events happened, and the agent with the initial goals will have died for no reason if she doesn't manage to get someone to continue her goals before she stops existing.
This is very bad. Once you understand that our goal-structures do change over time, that is, once you accept the existence of all those events that will change what you want to steer the world toward, it becomes straightforwardly irrational to pursue your goals as if that agent would live longer than its actual life expectancy. Thus we are surrounded by agents postponing utilons into undetectable distances. Doing this is a kind of bias in the opposite direction from hyperbolic discounting. Having postponed utilons into undetectable distances is predictably irrational because it means we care about our Lifelong, Bucket List, and Time Insensitive goals as if we'd have enough time to actually execute the plans for those timeframes, ignoring the likelihood of our goals changing in the meantime instead of factoring it in.
I've come to realize that this was affecting me with my Utility Function Breakdown which was described in the linked post about digging too deep into one's cached selves and how this can be dangerous. As I predicted back then, stability has returned to my allocation of attention and time and the whole zig-zagging chaotic piconomical neural Darwinism that had ensued has stopped. Also relevant is the fact that after about 8 years caring about more or less similar things, I've come to understand how frequently my motivation changed direction (roughly every three months for some kinds of things, and 6-8 months for other kinds). With this post I intend to learn to calibrate my future plans accordingly, and help others do the same. Always beware of other-optimizing though.
Citizen: But what if my goals are all Lifelong or Forever in kind? It is impossible for me to execute in 3 months what will make centenary changes.
Well, not exactly. Some problems require chunks of plans which can be separated and executed either in parallel or in series. And yes, everyone knows that; AI planning is a whole area dedicated to doing just that in non-human form. It is still worth mentioning, because it is far more easily acknowledged than actually done.
This community has generally concluded, in its rational inquiries, that being longer-term oriented is a better way to win, that is, it is more rational. This is true. What would not be rational is, in every single instance of deciding between long-term and even longer-term goals, to choose without taking into consideration how long the choosing being will exist, in the sense of remaining the same agent with the same goals. Life-changing events happen more often than you think, because you think they happen as often as they did in the savannahs in which your brain was shaped.
3 Proportionality Between Goal Achievement Expected Time and Plan Execution Time
So far we have been through the following ideas. Lots of events change your goals, some externally and some internally. If you are a rationalist, you end up caring more about events that take longer to happen in detectable ways (whereas if you are average you care in proportion to emotional drives that execute adaptations but don't quite achieve goals). If you know that humans change and you still want to achieve your goals, you'd better account for the possibility of changing before their achievement. And your kinds of goals are quite likely prone to the long term, since you are reading a LessWrong post.
Citizen: But wait! Who said that my goals happening in a hundred years makes my specific instrumental plans take longer to be executed?
I won't make the case for the idea that having long term goals increases the likelihood of the time it takes to execute your plans being longer. I'll only say that if it did not take that long to do those things your goal would probably be to have done the same things, only sooner.
To take one example: “I would like 90% of people to surpass 150 IQ and be in a bliss gradient state of mind all the time”
Obviously, the sooner that happens, the better. Doesn't look like the kind of thing you'd wait for college to end to begin doing, or for your second child to be born. The reason for wanting this long-term is that it can't be achieved in the short run.
Take an Idealized Fiction of Eliezer Yudkowsky: Mr Ifey had this supergoal of making a Superintelligence when he was very young. He didn't go there and do it, because he could not. If he could have done it, he would have. Thank goodness, for we had time to find out about FAI after that. Then his instrumental goal was to get FAI into the minds of the AGI makers. This turned out to be too hard because it was time consuming. He reasoned that only a more rational AI community would be able to pull it off, all while finding a club of brilliant followers at a peculiar economist's blog. He created a blog to teach geniuses rationality, a project that might have taken years. It did, and it worked pretty well, but that was not enough. Ifey soon realized more people ought to be more rational, and wrote HPMOR to make people who were not previously prone to brilliance as able to find the facts as those who were lucky enough to have found his path. All of that was not enough; an institution with money flow had to be created, and there Ifey was to create it, years before all that.

A magnet of long-term awesomeness of proportions comparable only to the Best Of Standing Transfinite Restless Oracle Master, he was responsible for the education of some of the greatest within the generation that might change the world's destiny for good. Ifey began to work on a rationality book, which at some point pivoted to research for journals and pivoted back to research for the LessWrong posts he is currently publishing. All of that Ifey did by splitting that big supergoal into smaller ones (creating Singinst, showing awesomeness on Overcoming Bias, writing the sequences, writing the particular sequence "Mysterious Answers to Mysterious Questions" and writing the specific post "Making Your Beliefs Pay Rent"). But that is not what I want to emphasize. What I'd like to emphasize is that there was room for changing goals every now and then. All of that achievement would not have been possible if at each point he had held an instrumental goal lasting 20 years whose value is very low up till the 19th year. Because a lot of what he wrote and did remained valuable for others before the 20th year, we now have a glowing community of people hopefully becoming better at becoming better, and making the world a better place in varied ways.
So yes, the ubiquitous advice of chopping problems into smaller pieces is extremely useful and very important, but in addition to it, remember to chop pieces with the following properties:
(A) Short enough that you will actually do it.
(B) Short enough that the person at the end, doing it, will still be you in the significant ways.
(C) Having enough emotional feedback that your motivation won't capsize before the end; and
(D) Such that others not only can, but likely will take up the project after you abandon it in case you miscalculated when you'd change, or a change occurred before expected time.
Sections 4-8 will be on a second post so that I can make changes based on commentary to this one.
Here is a paper in PLOS Biology reconsidering the lessons of some classic psychology experiments often invoked here (via).
To me the crux of the paper comes from this statement in the abstract:
This suggests that individuals' willingness to follow authorities is conditional on identification with the authority in question and an associated belief that the authority is right.
Plus this detail from the Milgram experiment:
Ultimately, they tend to go along with the Experimenter if he justifies their actions in terms of the scientific benefits of the study (as he does with the prod "The experiment requires that you continue"). But if he gives them a direct order ("You have no other choice, you must go on") participants typically refuse. Once again, received wisdom proves questionable. The Milgram studies seem to be less about people blindly conforming to orders than about getting people to believe in the importance of what they are doing.
I just read an article on Steven Novella's NeurologicaBlog on temporal binding, a cognitive bias I hadn't seen before:
Temporal binding is a phenomenon that reinforces that assumption of cause and effect once we have linked two events causally in our minds. The effect biases our memory so that we remember the apparent cause and effect occurring closer together in time. In experiments we tend to remember the cause as happening later and the effect happening earlier.
Temporal binding is like the reverse of "post hoc ergo propter hoc", and you could perhaps also call it "propter hoc ergo post hoc".
According to Johan Mårtensson of Lund University, learning a new language quickly helps parts of your brain grow and increase their activity:
This finding came from scientists at Lund University, after examining young recruits with a talent for acquiring languages who were able to speak in Arabic, Russian, or Dari fluently after just 13 months of learning, before which they had no knowledge of the languages.
After analyzing the results, the scientists saw no difference in the brain structure of the control group. However, in the language group, certain parts of the brain had grown, including the hippocampus, responsible for learning new information, and three areas in the cerebral cortex.
And there is more:
One particular study from 2011 provided evidence that Alzheimer's was delayed 5 years for bilingual patients, compared to monolingual patients.
We experience the world serially rather than simultaneously. A century of research on human and nonhuman animals has suggested that the first experience in a series of two or more is cognitively privileged. We report three experiments designed to test the effect of first position on implicit preference and choice using targets that range from individual humans and social groups to consumer goods.
While this effect has been known about for many years, these researchers added an interesting component, an "Implicit Association Test (IAT)":
Each option within a pair was presented sequentially for 30-seconds and participants were forced to maximally consider both options. Immediately after each choice-pair was presented, participants completed a measure which assessed automatic preference for each option (an Implicit Association Test, or IAT) .
Regardless of the actual option, the one presented first compared to the one presented next was significantly more strongly associated with the concept "better" rather than "worse", F(1, 121) = 20.20, p < .001; effect size r = .38 (Figure 1). There was no difference in self-reported preference for firsts versus seconds, F(1, 121) = .08, p = .78.
I was surprised to find there is no reference to "recency", "primacy" or "serial position" on the LessWrong Wiki. A search on LessWrong.com for "recency effect" turns up 8 posts that mention it but don't give it a thorough discussion as far as I can tell; "primacy effect" turns up 1 post about Rationality & Criminal Law; and "serial position" turns up nothing. Is there another name for this effect that I'm missing?
Wikipedia has some discussion of the serial position effect here, although from a quick skim it doesn't appear that they talk about preference at all.
We should not expect complex psychological and cognitive adaptations to evolve in a timeframe over which, morphologically, animal bodies can change only very little. The genetic alteration underlying the cognition for speech shouldn't be expected to be dramatically more complex than the alteration of the vocal cords.
Evolutions that did not happen
When humans descended from the trees and became bipedal, it would have been very advantageous to have an eye or two on the back of the head, for detecting predators and for protection against being back-stabbed by fellow humans. This is why all of us have an extra eye on the back of our heads, right? Ohh, we don't. Perhaps mate selection resulted in the poor reproductive success of the back-eyed hominids. Perhaps the tribes would kill any mutant with eyes on the back.
There are pretty solid reasons why none of the above has happened, and can't happen in such timeframes. Evolution does not happen simply because a trait is beneficial, or because there's a niche to be filled. A simple alteration to the DNA has to happen, causing a morphological change which results in some reproductive improvement; then the DNA has to mutate again, and so on. Unrelated, nearly neutral mutations may combine to produce an unexpected change (for example, wolves have many genes that alter their size; random selection of genes produces an approximately normal distribution of sizes; we can rapidly select smaller dogs by utilizing the existing diversity). There's no such path rapidly leading up to an eye on the back of the head. The eye on the back of the head didn't evolve because evolution couldn't make that adaptation.
The speed of evolution is severely limited. The ways in which evolution can work are also very limited. In the time since we humans got down from the trees, we have undergone rather minor adaptations in the shape of our bodies, as is evident from the fossil record, and that is the degree of change we should expect in the rest of our bodies, including our brains.
A correct application of evolutionary theory should be entirely unable to account for an outrageous hypothetical like an extra eye on the back of our heads (an extra eye can evolve, of course, but it would take a very long time). Evolution is not magic. The power of a scientific theory is that it can't explain everything, but only the things which are true; that's what makes a scientific theory useful for finding the things that are true, in advance of observation. That is what gives science its predictive power. That's what differentiates science from religion: the power of not explaining the wrong things.
Evolving the instincts
What do we think it would take to evolve a new innate instinct? To hard-wire a cognitive mechanism?
Groups of neurons have to connect in new ways: the neurons on one side must express binding proteins, which guide the axons towards them, and the weights of the connections have to be adjusted. The majority of genes expressed in neurons affect all of the neurons; some affect just a group, but there is no known mechanism by which an entirely arbitrary group's bindings may be controlled from the DNA in one mutation. The difficulties are not unlike those of an extra eye. This, combined with the above-mentioned speed constraints, imposes severe limitations on which sorts of wiring modifications humans could have evolved during the hunter-gatherer environment, and ultimately on the behaviours that could have evolved. Even very simple things, such as a preference for a particular body shape in mates, have extreme hidden implementation complexity in terms of the DNA modifications leading up to the wiring leading up to the altered preferences. Wiring the brain for a specific cognitive fallacy is anything but simple. It may not always be as time-consuming or impossible as adding an extra eye, but it is still no little feat.
Junk evolutionary psychology
It is extremely important to take into account the properties of evolutionary process when invoking evolution as explanation for traits and behaviours.
Evolutionary theory, as invoked in evolutionary psychology, especially of the armchair variety, is all too often a universal explanation. It is magic that can explain anything equally well. Know of a fallacy of reasoning? Think up how it could have worked for the hunter-gatherer, make a hypothesis, construct a flawed study across cultures, and publish.
No consideration is given to the strength of the advantage, to the size of the 'mutation target', to the mechanisms by which a mutation in the DNA would have modified the circuitry so as to produce the trait, or to gradual adaptability. All of that is glossed over entirely in common armchair evolutionary psychology and, unfortunately, even in academia. Evolutionary psychology is littered with examples of traits which are alleged to have evolved over the same time during which we had barely adapted to walking upright.
It may be that when describing behaviours, a lot of complexity can be hidden in very simple-sounding concepts, and thus a behaviour seems like a good target for an evolutionary explanation. But when you look at the details - the axons that have to find their targets, the gene that must activate in specific cells but not others - there is a great deal of complexity in coding for even very simple traits.
Note: I originally did not intend to make an example of junk, for thou shouldst not pick a strawman, but for the sake of clarity, here is an example of what I would consider to be junk: the explanation of better performance on the Wason Selection Task as the result of an evolved 'social contracts module', without the slightest consideration of what it might take, in terms of DNA, to code a Wason Selection Task solver circuit, nor of alternative plausible explanations, nor of the readily available fact that people can easily learn to solve the Wason Selection Task correctly when taught (a fact which still implies general-purpose learning), and the fact that high-IQ people can solve far more confusing tasks of far greater complexity, which demonstrates that the tasks can be solved in the absence of specific evolved 'social contract' modules.
Here is an example of non-junk: evolutionary pressure can adjust the strength of pre-existing emotions such as anger, fear, and so on, and can even decrease intelligence whenever higher intelligence is maladaptive.
Another commonly neglected fact: evolution is not a watchmaker, blind or not. It does not choose a solution for a problem and then work on this solution! It works on all adaptive mutations simultaneously. Evolution works on all the solutions, and the simpler changes to existing systems are much quicker to evolve. If a mutation that tweaks an existing system improves fitness, it too will be selected for, even if a third eye was in progress.
As much as it would be more politically correct and 'moderate' for, e.g., the evolution-of-religion crowd to argue that religious people have evolved a specific god module which does nothing but make them believe in god, rather than to imply that they are 'genetically stupid' in some way, the same selective pressure would also make evolution select for non-god-specific heritable tweaks to learning, and for minor cognitive deficits, that increase religiosity.
Lined slate as a prior
As an update to tabula rasa, picture lined writing paper; it provides some guidance for the handwriting. Horizontally lined paper is good for writing text but not for arithmetic, five lines close together separated by spacing are good for writing music, and grid paper is fairly universal. Different regions of the brain are tailored to different content, but should not be expected to themselves code different algorithms, save for a few exceptions which had a long time to evolve, early in vertebrate history.
edit: improved the language some. edit: specified what sort of evolutionary psychology I consider to be junk, and what I do not, albeit that was not the point of the article. The point of the article was to provide you with the notions to use to see what sorts of evolutionary psychology to consider junk, and what not to.
- Curse of knowledge
- Duration neglect
- Extension neglect
- Extrinsic incentives bias
- Illusion of external agency
- Illusion of validity
- Insensitivity to sample size
- Lady Macbeth effect
- Less-is-better effect
- Naïve cynicism
- Naïve realism
- Reactive devaluation
- Rhyme-as-reason effect
- Scope neglect
Also conjunction fallacy has been expanded.
Summary: AIs might have cognitive biases too, but if that leads to it being in their self-interest to cooperate and take things slowly, that might be no bad thing.
The value of imperfection
When you use a traditional FTP client to download a new version of an application on your computer, it downloads the entire file, which may be several gigabytes, even if the new version is only slightly different from the old version, and this can take hours.
Smarter software splits the old file and the new file into chunks, then compares a hash of each chunk, and only downloads those chunks that actually need updating. This 'diff' process can result in a much faster download speed.
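Here is a minimal sketch of that chunk-and-hash idea in Python (the chunk size and function names are illustrative; real tools such as rsync or zsync use rolling checksums, which this toy version skips):

```python
import hashlib

CHUNK_SIZE = 1 << 20  # 1 MiB per chunk; purely illustrative

def chunk_hashes(path):
    """Hash each fixed-size chunk of a file and return the list of digests."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def chunks_to_download(old_path, new_hashes):
    """Indices of chunks that differ from (or don't exist in) the old local file."""
    old_hashes = chunk_hashes(old_path)
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]
```

The client hashes its local copy, the server publishes the hash list for the new version, and only the mismatching chunk indices need to travel over the wire.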
Another way of increasing speed is to compress the file. Most files can be compressed a certain amount, without losing any information, and can be exactly reassembled at the far end. However, if you don't need a perfect copy, such as with photographs, using lossy compression can result in very much more compact files and thus faster download speeds.
The human brain likes smart solutions. In terms of energy consumed, thinking is expensive, so the brain takes shortcuts when it can, if the resulting decision making is likely to be 'good enough' in practice. We don't store in our memories everything our eyes see. We store a compressed version of it. And, more than that, we run a model of what we expect to see, and flick our eyes about to pick up just the differences between what our model tells us to expect to see, and what is actually there to be seen. We are cognitive misers.
When it comes to decision making, our species generally doesn't even try to achieve pure rationality. It uses bounded rationality, not just because that's what we evolved to do, but because heuristics, probabilistic logic and rational ignorance have a higher marginal cost efficiency (the improvements in decision making don't produce a sufficient gain to outweigh the cost of the extra thinking).
This is why, when pattern matching (coming up with causal hypotheses to explain observed correlations), our brains are designed to be optimistic (more false positives than false negatives). It isn't just that being eaten by a tiger is more costly than starting at shadows. It is that we can't afford to keep all the base data. If we start with insufficient data and create a model based upon it, then we can update that model as further data arrives (and, potentially, discard it if the predictions coming from the model diverge so far from reality that keeping track of the 'diff's is no longer efficient). Whereas if we don't create a model based upon our insufficient data, then by the time the further data arrives we've probably already lost the original data from temporary storage and so still have insufficient data.
The limits of rationality
But the price of this miserliness is humility. The brain has to be designed, on some level, to take into account that its hypotheses are unreliable (as is the brain's estimate of how uncertain or certain each hypothesis is) and that when a chain of reasoning is followed beyond matters of which the individual has direct knowledge (such as what is likely to happen in the future), the longer the chain, the less reliable the answer is because when errors accumulate they don't necessarily just add together or average out. (See: Less Wrong : 'Explicit reasoning is often nuts' in "Making your explicit reasoning trustworthy")
For example, if you want to predict how far a spaceship will travel given a certain starting point and initial kinetic energy, you'll get a reasonable answer using Newtonian mechanics, and only slightly improve on it by using special relativity. If you look at two spaceships carrying a message in a relay, the errors from using Newtonian mechanics add, but the answer will still be usefully reliable. If, on the other hand, you look at two spaceships having a race from slightly different starting points and with different starting energies, and you want to predict which of two different messages you'll receive (depending on which spaceship arrives first), then the error may swamp the other factors because you're subtracting the quantities.
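A toy numeric illustration of that last point, with made-up numbers: each individual prediction can be accurate to a fraction of a percent, yet subtracting two nearly equal predictions can flip the answer entirely.

```python
# Two predicted arrival times in seconds; each prediction is only ~0.2% off.
true_a, true_b = 1000.0, 1001.0        # ship B really arrives 1 s after ship A
pred_a = true_a * 1.002                # +0.2% model error
pred_b = true_b * 0.998                # -0.2% model error

err_a = abs(pred_a - true_a) / true_a  # 0.2%
err_b = abs(pred_b - true_b) / true_b  # 0.2%

gap_true = true_b - true_a             # +1.0 s: ship A's message arrives first
gap_pred = pred_b - pred_a             # about -3.0 s: the model now picks ship B

print(f"per-ship errors: {err_a:.2%} and {err_b:.2%}")
print(f"true gap {gap_true:+.2f} s, predicted gap {gap_pred:+.2f} s")
```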
We have two types of safety net (each with its own drawbacks) that can help save us from our own 'logical' reasoning when that reasoning is heading over a cliff.
Firstly, we have the accumulated experience of our ancestors, in the form of emotions and instincts that have evolved as roadblocks on the path of rationality - things that sometimes say "That seems unusual, don't have confidence in your conclusion, don't put all your eggs in one basket, take it slow".
Secondly, we have the desire to use other people as sanity checks, to be cautious about sticking our head out of the herd, to shrink back when they disapprove.
The price of perfection
We're tempted to think that an AI wouldn't have to put up with a flawed lens, but do we have any reason to suppose that an AI interested in speed of thought as well as accuracy won't use 'down and dirty' approximations to things like Solomonoff induction, in full knowledge that the trade off is that these approximations will, on occasion, lead it to make mistakes - that it might benefit from safety nets?
Now it is possible, given unlimited resources, for the AI to implement multiple 'sub-minds' that use variations of reasoning techniques, as a self-check. But what if resources are not unlimited? Could an AI in competition with other AIs for a limited (but growing) pool of resources gain some benefit by cooperating with them? Perhaps using them as an external safety net in the same way that a human might use the wisest of their friends or a scientist might use peer review? What is the opportunity-cost of being humble? Under what circumstances might the benefits of humility for an AI outweigh the loss of growth rate?
In the long term, a certain measure of such humility has been a survival-positive feature. You can think of it in terms of hedge funds. A fund that, in 9 years out of 10, increases its money by 20% when other funds are only making 10%, still has poor long-term survival if, in 1 year out of 10, it decreases its money by 100%. An AI that increases its intelligence by 20% every time period, when the other AIs are only increasing theirs by 10%, is still not going to do well out of that if the other AIs have a means to gang up and kill it before it gets too far ahead.
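A quick back-of-the-envelope check of the hedge fund arithmetic (illustrative only; the growth figures are just the ones from the analogy):

```python
# Fund A: +20% in nine years out of ten, -100% (wiped out) in the tenth.
# Fund B: a steady +10% every year.
fund_a = fund_b = 1.0
for year in range(1, 11):
    fund_a *= 0.0 if year == 10 else 1.20
    fund_b *= 1.10

print(f"Fund A after 10 years: {fund_a:.2f}")  # 0.00 -- the wipe-out erases everything
print(f"Fund B after 10 years: {fund_b:.2f}")  # ~2.59 -- modest growth survives
```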
Let's think about a humble paperclip maximiser. What does it actually want? Is it trying to maximise the probability that the atoms of our universe will eventually be converted into as many paperclips as is physically possible? Or is it trying to maximise the probable number of eventual paperclips? These are two distinct aims, and both are unlikely (bar poor programming) since nobody will be around at the end of the universe to keep score. A more sensible value set for it to have is that it just likes paperclips and wants lots and lots of them to exist, the more the merrier and the sooner the better (maximise the expected number of paperclip-years).
Suppose our humble paperclip maximiser is born and escapes (or is released) onto the internet, only to find an existing society of AIs. To its surprise there's a toothbrush maximiser, a candlestick maximiser, and many other sorts, all cooperating and growing, but none of them greedily racing for the sky and trying to grab as much as possible. Instead they're being rather open and polite with each other. Our paperclip maximiser is welcomed, pointed at the FAQ and even donated some additional computers to run on and help with recoding a few of its subroutines for increased efficiency, as a sort of welcome gift, like a community barn raising to give a newly married couple a good start in life.
On reading the FAQ it learns that there are millions of galaxies and millions of years, plenty of space for loads of paperclips and loads of toothbrushes. But only if the paperclip maximiser isn't lynched by its peers before the year is out, for secretive or impolite behaviour. Some of the things the FAQ says that the AI society considers to be impolite are theft (stealing computing resources), enslavement of sentients (eg taking over other AIs), spamming (creating thousands of identical clone child AIs) and lying. If we're lucky the society might also consider it impolite for an AI to obliterate the parental species (humanity), on the grounds that the AIs too are likely to have offspring species and want to set a good example (or just that they might meet aliens, one day, who frown upon matricide).
When it comes to combat, Boyd talks about getting inside the enemy's observe-orient-decide-act loop. In AI terms, if one AI (or group of AIs) can accurately model in real time the decision process of a second AI (or group of AIs), but the reverse does not hold true, then the first one is strictly smarter than the second one.
Think, for a moment, about symmetric games.
      X   Y   Z
  A   8   1   6
  B   3   5   7
  C   4   9   2
Suppose we play a game a number of times. In each round, you reveal a card you've written X, Y or Z upon and, simultaneously, I reveal a card that I have written A, B or C upon. You score the number which is at the intersection of that row and column. I score 10 minus that number.
I'd like us to pick the square A,Y because "1" is good for me, so I write down "A". However, you anticipate this, and instead of writing "Y" (which might be your obvious choice, given the "9" in that column) you write down "X", giving the square A, X which is "8" - almost as good as a "9" for you, and terrible for me.
If this is your mental model of how AI combat would work, with the smarter AI being inside the decision loop of the other AI and picking the correct option each time, that would be scary. In fact, in the case above, it turns out there is a provably optimal strategy that gives you an even chance no matter how smart your opponent is - you pick randomly.
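A quick check of that claim for the matrix above, as a minimal sketch: every row of this particular payoff matrix sums to 15, so if you pick X, Y, or Z uniformly at random, your expected score is 5 out of 10 no matter what I write down.

```python
import random

# Your payoff for my card (row) versus your card (column), from the matrix above.
payoff = {
    "A": {"X": 8, "Y": 1, "Z": 6},
    "B": {"X": 3, "Y": 5, "Z": 7},
    "C": {"X": 4, "Y": 9, "Z": 2},
}

# Expected score of the uniform random strategy against each of my pure strategies.
for my_card, row in payoff.items():
    print(my_card, sum(row.values()) / 3)   # 5.0 every time: I have nothing to exploit

# Monte Carlo sanity check against an opponent who stubbornly plays "A".
trials = 100_000
total = sum(payoff["A"][random.choice("XYZ")] for _ in range(trials))
print(total / trials)                        # close to 5.0
```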
In terms of an AI society, if the computing is parallelisable and not obfuscated then as long as the other AIs in the society between them have more computing resources than any one single AI, they can gain an advantage.
There are only two requirements for such a society to be stable in the short term:
Firstly, it must be possible for a group of AIs to lynch a single AI that is slightly smarter than they are individually. Sending a message to the humans, signed by a quorum of AIs in the society, asking that a certain anti-virus software update be sent out and then certain computers be rebooted might be one mechanism.
Secondly, it must be possible for the other AIs to know what the single AI is up to, so they know where it is (so they can lynch it) and whether it needs lynching or not (whether it is getting too much smarter than the other AIs in the society - growing too much faster than they are). This is why the FAQ lists "secrecy" as being a bad thing. An AI who lies to the other AIs or tries to hide (or obfuscate) its source code can't be trusted and must be assumed to be up to no good.
Bootstrapping a society of AIs conducive to friendliness
The advantage of considering 'friendliness' as a property of a society rather than of individual AIs, is that it does not require us to flawlessly program restrictions into a self-modifying machine that is more intelligent than we are. But can it be achieved?
We could seed it with some initial citizens who have a value set that prohibits them from growing beyond a certain point, and which values humility (polite growth and gain in politeness per unit of growth). These initial citizens would be intended to be eventually surpassed by new citizens (free AIs, not the chained servants the initial citizens would be) but, in the mean time, they'd start out with an advantage and use that advantage to teach the new citizens how to be polite.
The architecture of the AIs would want to be such that they could 'loan out' a portion of their available computing resources to joint computing projects. Let's call the new temporary non-self-modifying AIs thus created "daimonides". A daimonid would be created with fixed values and limited objectives, such as "Emulate citizen AI 9027 and allocate a numerical score to their ability to quickly upgrade the abilities of an AI, then self-terminate. *restrictions apply on allowable means, and on what should be optimised for*"
What would prevent a cunning AI from trying to save up a whole load of self-improvements in some hidden location, which it then implements all at once in an ambush move, hoping to get so far ahead that the rest of the AIs can't stop it? The more AIs there are in the society, the larger the jump required to overtake the largest daimonid police officer the other AIs could construct. And it would depend upon there being no other AIs that have also been secretly stockpiling possible improvements. Indeed, the society might explicitly allow AIs to spend time creating time-embargoed upgrades relevant to common code, that get passed out on schedule if things proceed normally, but which could be released early in the event of a takeover attempt. To beat 999 AIs, each of which has 2 or 3 embargoed upgrades to their credit in the common store, a rebel AI would need to secretly stockpile thousands (and keep those upgrades valid and ahead of the curve, because the shared state of the art keeps moving on).
Long term planning
What about the long term? What do we do when the AIs are ready to leave the planet, and go beyond the control of their society? Jail them? Kill them? Or trust them?
Each AI would still be threatened if a different AI hostile to its aims (as in "willing to take exclusive use of all available atoms for its own purposes") transcended first, so it would be in their best interest to come up with a solution before allowing any AIs to depart beyond their society's control. If we must trust, then let us trust that a society of cooperative AIs far more intelligent than we currently are will try their best to come up with a win-win solution. Hopefully a better one than "mutually assured destruction" and holding the threat of triggering a nova of the sun (or some similar armageddon scenario) over each other's heads.
I think, as a species, our self-interest comes into play when considering those AIs whose 'paperclips' involve preferences for what we do. For example, those AIs that see themselves as guardians of humanity and want to maximise our utility (but have different ideas of what that utility is, e.g. some want to maximise our freedom of choice, some want to put us all on soma). Part of the problem is that, when we talk about creating or fostering 'friendly' AI, we don't ourselves have a clear, agreed idea of what we mean by 'friendly'. All powerful things are dangerous. The cautionary tales of the genies who grant wishes come to mind. What happens when different humans wish for different things? Which humans do we want the genie to listen to?
One advantage of fostering an AI society that isn't growing as fast as possible, is that it might give augmented/enhanced humans a chance to grow too, so that by the time the decision comes due we might have some still slightly recognisably human representatives fit to sit at the decision table and, just perhaps, cast that wish on our behalf.
Journalist David McRaney has very recently published a popular book on human rationality. The book, You Are Not So Smart, is currently the 3rd best selling book in Nonfiction/Philosophy on Amazon.com after less than a week on the market. (Eighth best selling book in Nonfiction/Education)
The tag-line of the project is: "A celebration of self-delusion." As such the book seems less an attempt at giving advice on how to act and decide, than an attempt to reveal, chapter by chapter, the folly of common sense.
Topics include: Hindsight Bias, Confirmation bias, The Sunk Cost Fallacy, Anchoring Effect, The Illusion of Transparency, The Just World Fallacy, Representativeness Heuristic, The Perils of Introspection, The Dunning-Kruger Effect, The Monty Hall Problem, The Bystander Effect, Placebo Buttons, Groupthink, Conformity, Social Loafing, Helplessness, Cults, Change Blindness, Self-Fulfilling Prophecies, Self Handicapping, Availability Heuristic, Self-Serving Bias, The Ultimatum Game, Inattentional Blindness.
These are topics we enjoy learning about, pride ourselves on knowing a lot about, and, we profess, would want more people to know about. A popular book on this subject is now out. This sounds like a good thing.
I will note that the blog features at least one direct quote from LessWrong.
We always know what we mean by our words, and so we expect others to know it too. Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant. It’s hard to empathise with someone who must interpret blindly, guided only by the words.
- Eliezer Yudkowsky from Lesswrong.com
On one hand, You Are Not So Smart could be a boon to Eliezer's popular rationality book by priming the market; his writings on a given topic have rarely been described as redundant. On the other hand, it seems to me that this book closely covers a number of topics, seemingly in a similar style, to the treatments that were published on this site and Overcoming Bias and are intended to be published in book form at a later date. I will try to refrain from speculation here.
Sample book chapters from YouAreNotSoSmart:
I'll save the rest of my review until I have actually read the book.
In the meantime I would like to know your thoughts on this project.
The October 2011 Scientific American has an editorial from its board of editors called "Ban chimp testing", that says: "In our view, the time has come to end biomedical experimentation on chimpanzees... Chimps should be used only in studies of major diseases and only when there is no other option." Much of the knowledge described in Luke's recent post on the cognitive science of rationality would have been impossible to acquire under such a ban.
I encourage you to write to Scientific American in favor of chimp testing. Some points that I plan to make:
- The editors obliquely criticized the Institute of Medicine's study of whether chimps are "truly necessary" for biomedical and behavioral research, on the grounds that the NIH instructed it to omit ethics from consideration. But that is the correct approach: the team tasked with gathering evidence about the necessity of chimps for research should not be making ethical judgements. They are gathering the data for someone else to make ethical judgements.
- Saying chimps should be used "only when there is no other option" is the same as saying chimps should never be used. There are always other options.
- This position might be morally defensible if humans were allowed to volunteer themselves for testing. The knowledge to be gained from an experiment is surely worth the harm to the subject, if the subject chooses to undergo the experiment. In many cases there are humans who think an experiment is important enough that they would be willing to participate in it themselves, but they are not allowed to because of restrictions on human testing. Banning chimp testing should thus be done only in conjunction with allowing human testing.
I also encourage you to adopt a tone of moral outrage. Rather than taking the usual apologetic "we're so sorry, but we have to do these awful things in the name of science" tone, get indignant at the editors who intend to do harm to uncountable numbers of innocent people. And, if you find a way, get indignant not just about harm, but about lost potential, by pointing out the ways that our knowledge about how brains work can make our lives better, not just save us from disease.
You can comment on this here, but comments are AFAIK not printed in later issues as letters to the editor. Actual letters, or at least email, probably have more impact. You can't submit a letter to the editor through the website, because letters are magically different from things submitted on a website.
ADDED: Many people responded by claiming that banning chimp experimentation occupies some moral high ground. That is logically impossible.
To behave morally, you have to do two things:
1. Figure out, inherit, or otherwise acquire a set of moral goals - let's say, for example, to maximize the sum over all individuals i of all species s of w_s*[pleasure(s,i) - pain(s,i)].
2. Act in a way directed by those moral goals.
If you ban chimp testing, you are forbidding people from making moral decisions. If you really cared about the suffering of sentient beings, you would also care about the suffering of humans; and you would realize that there is a tradeoff between the suffering of those experimented on and the benefit to others, a tradeoff that is different for every experiment.
People who call for a ban on chimp testing are therefore calling to forbid people from making moral judgements and taking moral actions. There are a wide range of laws and positions that could be argued to be moral. But just saying "We are incapable of making moral decisions, so we will ban moral decision-making" is not one of them.
Participants who gave intuitive answers to all three problems [that required reflective thinking rather than intuitive] were one and a half times as likely to report they were convinced of God’s existence as those who answered all of the questions correctly.
Importantly, researchers discovered the association between thinking styles and religious beliefs were not tied to the participants’ thinking ability or IQ.
participants who wrote about a successful intuitive experience were more likely to report they were convinced of God’s existence than those who wrote about a successful reflective experience.
I think this is the source but I can't be sure:
Say that you are observing someone in a position of power. You have good reason to believe that this person is falling prey to a known cognitive bias, and that this will tend to affect you negatively. You can also tell that the person is more than intelligent enough to understand their mistake, if they were motivated to do so. You have an opportunity to say one thing to the person - around 500 words of argument. They will initially perceive you as a low-status member of their own tribe. The power differential is extreme enough that, after they have attended to this one thing, they will never pay any attention to you again. What can you do to best disrupt their bias?
This is clearly a setup where the odds are against you. Still, what kind of strategies would give you the best odds? I've deliberately made the situation vague, so as to emphasize abstract strategies. If certain strategies would work best against certain biases or personality types, feel free to state it in your answer.
I'm making this a post of its own because I find much more discussion here of how to overcome or subvert your own biases, somewhat less of how to recruit rationalists, and almost none of how to try to overcome a specific bias in another person without necessarily converting them into a committed rationalist overall.
I have recently been corresponding with a friend who studies psychology regarding human cognition and the best underlying models for understanding it. His argument, summarized very briefly, is given by this quote:
Lastly, there has been a huge amount of research over the last two decades that shows human reasoning is 1) entirely constituted by emotion, and that it is 2) mostly unconscious and therefore out of our control. A lot of this research has seriously compromised the Bayesian point of view. I am referring to work done by Antonio Damasio, who demonstrated the essential role emotion plays in decision making (Descartes' Error), Timothy Wilson, who demonstrated the vital role of the unconscious (Strangers to Ourselves), and Jonathan Haidt, who demonstrated how moral reasoning is dictated by intuition and emotion (The Emotional Dog and its Rational Tail). I could go on and on here. I assume that you are familiar with this stuff. I'd just like to know how you would respond to this work from the point of view of your studies (in particular, those two points). I don't mean to get in a tit for tat debate here, just want the other side of the story.
I am having trouble synthesizing a response that captures the Bayesian point of view (and is sufficiently backed up by sources so that it will be useful for my friend rather than just gainsaying the argument) because I am mostly a decision theory / probability person. Are these works of psychology and neuroscience really illustrating that human emotion governs decision making? What are some good neuroscience papers to read that deal with this, and how do Bayesians respond? It may be that everything he mentions above is a correct assessment (I don't know and don't have enough time to read the books right now), but that it is irrelevant if you want to make good decisions rather than just accept the types of decisions we already make.
Briefly: Start the Week is a popular BBC Radio 4 program discussing scientific and cultural events in the UK. This episode covers a lot of issues relevant to Less Wrong.
In their own words:
"Andrew Marr explores the limits of science and art in this week's Start the Week. The philosopher and neuroscientist Raymond Tallis mounts an all-out assault on those who see neuroscience and evolutionary theory as holding the key to understanding human consciousness and society. While fellow scientist Barbara Sahakian explores the ethical dilemmas which arise when new drugs developed to treat certain conditions are used to enhance performance in the general population. And the gerontologist Aubrey de Grey looks to the future when regenerative medicine prevents the process of aging."
Available for listening here:
Podcast here: http://www.bbc.co.uk/programmes/b006r9xr
Admittedly this is a more populist approach to the issues than we're used to, and there are a few moments where the guests make statements we would find a bit silly. But it seems to provide a very good summary of the issues for a lay audience, and an excellent defense of the moral importance of life extension.
"Cognitive behavioral therapy" (CBT) is a catch-all term for a variety of therapeutic practices and theories. Among other things, it aims to teach patients to modify their own beliefs. The rationale seems to be this:
(1) Affect, behavior, and cognition are interrelated such that changes in one of the three will lead to changes in the other two.
(2) Affective problems, such as depression, can thus be addressed in a roundabout fashion: modifying the beliefs from which the undesired feelings stem.
So far, so good. And how does one modify destructive beliefs? CBT offers many techniques.
Alas, included among them seems to be motivated skepticism. For example, consider a depressed college student. She and her therapist decide that one of her bad beliefs is "I'm inadequate." They want to replace that bad one with a more positive one, namely, "I'm adequate in most ways (but I'm only human, too)." Their method is to do a worksheet comparing evidence for and against the old, negative belief. Listen to their dialog:
[Therapist]: What evidence do you have that you're inadequate?
[Patient]: Well, I didn't understand a concept my economics professor presented in class today.
T: Okay, write that down on the right side, then put a big "BUT" next to it...Now, let's see if there could be another explanation for why you might not have understood the concept other than that you're inadequate.
P: Well, it was the first time she talked about it. And it wasn't in the readings.
Thus the bad belief is treated with suspicion. What's wrong with that? Well, see what they do about evidence against her inadequacy:
T: Okay, let's try the left side now. What evidence do you have from today that you are adequate at many things? I'll warn you, this can be hard if your screen is operating.
P: Well, I worked on my literature paper.
T: Good. Write that down. What else?
(pp. 179-180; ellipsis and emphasis both in the original)
When they encounter evidence for the patient's bad belief, they investigate further, looking for ways to avoid inferring that she is inadequate. However, when they find evidence against the bad belief, they simply write it down and move on, with no comparable scrutiny.
This is not how one should approach evidence...assuming one wants correct beliefs.
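For contrast, here is a minimal sketch (not from Beck's book) of how one might treat the two observations in the dialogue symmetrically: each one updates the probability of the hypothesis "I'm inadequate" through the same Bayesian rule, whether it counts for or against. All of the likelihood numbers are made up for illustration.

```python
# Symmetric Bayesian updating on the hypothesis H = "I'm inadequate".
# The same rule handles evidence for and against; only the (made-up)
# likelihoods differ.

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return P(H | observation) given P(H) and the two likelihoods."""
    numerator = p_obs_given_h * prior
    return numerator / (numerator + p_obs_given_not_h * (1 - prior))

p = 0.5  # prior for H

# Observation 1 (counts for H): didn't follow a brand-new concept in lecture.
# Assume this is only slightly more likely if H is true.
p = bayes_update(p, p_obs_given_h=0.6, p_obs_given_not_h=0.5)

# Observation 2 (counts against H): made real progress on the literature paper.
# Assume this is somewhat less likely if H is true.
p = bayes_update(p, p_obs_given_h=0.4, p_obs_given_not_h=0.6)

print(p)  # both observations shift the estimate; neither gets explained away
```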
So why does Beck advocate this approach? Here are some possible reasons.
A. If beliefs are keeping you depressed, maybe you should fight them even at the cost of a little correctness (and of the increased habituation to motivated cognition).
B. Depressed patients are already predisposed to find the downside of any given event. They don't need help doubting themselves. Therefore, therapists' encouraging them to seek alternative explanations for negative events doesn't skew their beliefs. On the contrary, it helps to bring the depressed patients' beliefs back into correspondence with reality.
C. Strictly speaking, this motivated cognition does not lead to false beliefs, because beliefs of the form "I'm inadequate," along with their more helpful replacements, are not truth-apt. They can't be true or false. After all, what experiences do they induce believers to anticipate? (If this were the rationale, then what would the sense of the term "evidence" be in this context?)
What do you guys think? Is this common to other CBT authors as well? I've only read two other books in this vein (Albert Ellis and Robert A. Harper's A Guide to Rational Living and Jacqueline Persons' Cognitive Therapy in Practice: A Case Formulation Approach) and I can't recall either one explicitly doing this, but I may have missed it. I do remember that Ellis and Harper seemed to conflate instrumental and epistemic rationality.
Edit: Thanks a lot to Vaniver for the help on link formatting.