Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
"A group blog, More Right is a place to discuss the many things that are touched by politics that we prefer wouldn’t be, as well as right wing ideas in general. It grew out of the correspondences among like-minded people in late 2012, who first began their journey studying the findings of modern cognitive science on the failings of human reasoning and ended it reading serious 19th century gentlemen denouncing democracy. Surveying modernity, we found cracks in its façade. Findings and seemingly correct ideas, carefully bolted down and hidden, met with disapproving stares and inarticulate denunciation when unearthed. This only whetted our appetites. Proceeding from the surface to the foundations, we found them lacking. This is reflected in the spirit of the site."
A Guardian article on the impact of climate change on food security. This is worrying, albeit perhaps not a global catastrophic (or existential) risk. It has the potential to wipe out the gains made against extreme poverty in the last few decades.
Should we be so pessimistic? Climate change might be averted through government action or a technological fix; or the poorest might get rich enough to be protected from this insecurity; or we could see a second 'Green Revolution' with GM, etc. I've also seen some discussion that climate change could in fact increase food cultivation - in Russia and Canada for example.
How do people feel about this - optimistic or pessimistic?
Now, I had been taught in school that scurvy had been conquered in 1747, when the Scottish physician James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease. From that point on, we were told, the Royal Navy had required a daily dose of lime juice to be mixed in with sailors’ grog, and scurvy ceased to be a problem on long ocean voyages.
But here was a Royal Navy surgeon in 1911 apparently ignorant of what caused the disease, or how to cure it. Somehow a highly-trained group of scientists at the start of the 20th century knew less about scurvy than the average sea captain in Napoleonic times. Scott left a base abundantly stocked with fresh meat, fruits, apples, and lime juice, and headed out on the ice for five months with no protection against scurvy, all the while confident he was not at risk. What happened?
This article is a vivid illustration of just how nonlinear and downright messy science actually is, and how little the superficial presentation of science as neat "progress" reflects the reality of the field.
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation.
We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread.
We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing.
We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
This is a fascinating article with many, many interesting points. I'm excerpting some of them below, but mostly just to get you to read it: if I were to quote everything interesting, I'd have to pretty much copy the entire (long!) article.
Rumors and fiction
[...] A related but perhaps more surprising source of misinformation is literary fiction. People extract knowledge even from sources that are explicitly identified as fictional. This process is often adaptive, because fiction frequently contains valid information about the world. For example, non-Americans’ knowledge of U.S. traditions, sports, climate, and geography partly stems from movies and novels, and many Americans know from movies that Britain and Australia have left-hand traffic. By definition, however, fiction writers are not obliged to stick to the facts, which creates an avenue for the spread of misinformation, even by stories that are explicitly identified as fictional. A study by Marsh, Meade, and Roediger (2003) showed that people relied on misinformation acquired from clearly fictitious stories to respond to later quiz questions, even when these pieces of misinformation contradicted common knowledge. In most cases, source attribution was intact, so people were aware that their answers to the quiz questions were based on information from the stories, but reading the stories also increased people’s illusory belief of prior knowledge. In other words, encountering misinformation in a fictional context led people to assume they had known it all along and to integrate this misinformation with their prior knowledge (Marsh & Fazio, 2006; Marsh et al., 2003).
The effects of fictional misinformation have been shown to be stable and difficult to eliminate. Marsh and Fazio (2006) reported that prior warnings were ineffective in reducing the acquisition of misinformation from fiction, and that acquisition was only reduced (not eliminated) under conditions of active on-line monitoring—when participants were instructed to actively monitor the contents of what they were reading and to press a key every time they encountered a piece of misinformation (see also Eslick, Fazio, & Marsh, 2011). Few people would be so alert and mindful when reading fiction for enjoyment. These links between fiction and incorrect knowledge are particularly concerning when popular fiction pretends to accurately portray science but fails to do so, as was the case with Michael Crichton’s novel State of Fear. The novel misrepresented the science of global climate change but was nevertheless introduced as “scientific” evidence into a U.S. Senate committee (Allen, 2005; Leggett, 2005).
Writers of fiction are expected to depart from reality, but in other instances, misinformation is manufactured intentionally. There is considerable peer-reviewed evidence pointing to the fact that misinformation can be intentionally or carelessly disseminated, often for political ends or in the service of vested interests, but also through routine processes employed by the media. [...]
Assessing the Truth of a Statement: Recipients’ Strategies
Misleading information rarely comes with a warning label. People usually cannot recognize that a piece of information is incorrect until they receive a correction or retraction. For better or worse, the acceptance of information as true is favored by tacit norms of everyday conversational conduct: Information relayed in conversation comes with a “guarantee of relevance” (Sperber & Wilson, 1986), and listeners proceed on the assumption that speakers try to be truthful, relevant, and clear, unless evidence to the contrary calls this default into question (Grice, 1975; Schwarz, 1994, 1996). Some research has even suggested that to comprehend a statement, people must at least temporarily accept it as true (Gilbert, 1991). On this view, belief is an inevitable consequence of—or, indeed, precursor to—comprehension.
Although suspension of belief is possible (Hasson, Simmons, & Todorov, 2005; Schul, Mayo, & Burnstein, 2008), it seems to require a high degree of attention, considerable implausibility of the message, or high levels of distrust at the time the message is received. So, in most situations, the deck is stacked in favor of accepting information rather than rejecting it, provided there are no salient markers that call the speaker’s intention of cooperative conversation into question. Going beyond this default of acceptance requires additional motivation and cognitive resources: If the topic is not very important to you, or you have other things on your mind, misinformation will likely slip in. [...]
Is the information compatible with what I believe?
As numerous studies in the literature on social judgment and persuasion have shown, information is more likely to be accepted by people when it is consistent with other things they assume to be true (for reviews, see McGuire, 1972; Wyer, 1974). People assess the logical compatibility of the information with other facts and beliefs. Once a new piece of knowledge-consistent information has been accepted, it is highly resistant to change, and the more so the larger the compatible knowledge base is. From a judgment perspective, this resistance derives from the large amount of supporting evidence (Wyer, 1974); from a cognitive-consistency perspective (Festinger, 1957), it derives from the numerous downstream inconsistencies that would arise from rejecting the prior information as false. Accordingly, compatibility with other knowledge increases the likelihood that misleading information will be accepted, and decreases the likelihood that it will be successfully corrected.
When people encounter a piece of information, they can check it against other knowledge to assess its compatibility. This process is effortful, and it requires motivation and cognitive resources. A less demanding indicator of compatibility is provided by one’s meta-cognitive experience and affective response to new information. Many theories of cognitive consistency converge on the assumption that information that is inconsistent with one’s beliefs elicits negative feelings (Festinger, 1957). Messages that are inconsistent with one’s beliefs are also processed less fluently than messages that are consistent with one’s beliefs (Winkielman, Huber, Kavanagh, & Schwarz, 2012). In general, fluently processed information feels more familiar and is more likely to be accepted as true; conversely, disfluency elicits the impression that something doesn’t quite “feel right” and prompts closer scrutiny of the message (Schwarz et al., 2007; Song & Schwarz, 2008). This phenomenon is observed even when the fluent processing of a message merely results from superficial characteristics of its presentation. For example, the same statement is more likely to be judged as true when it is printed in high rather than low color contrast (Reber & Schwarz, 1999), presented in a rhyming rather than nonrhyming form (McGlone & Tofighbakhsh, 2000), or delivered in a familiar rather than unfamiliar accent (Lev-Ari & Keysar, 2010). Moreover, misleading questions are less likely to be recognized as such when printed in an easy-to-read font (Song & Schwarz, 2008).
As a result, analytic as well as intuitive processing favors the acceptance of messages that are compatible with a recipient’s preexisting beliefs: The message contains no elements that contradict current knowledge, is easy to process, and “feels right.”
Is the story coherent?
Whether a given piece of information will be accepted as true also depends on how well it fits a broader story that lends sense and coherence to its individual elements. People are particularly likely to use an assessment strategy based on this principle when the meaning of one piece of information cannot be assessed in isolation because it depends on other, related pieces; use of this strategy has been observed in basic research on mental models (for a review, see Johnson-Laird, 2012), as well as extensive analyses of juries’ decision making (Pennington & Hastie, 1992, 1993).
A story is compelling to the extent that it organizes information without internal contradictions in a way that is compatible with common assumptions about human motivation and behavior. Good stories are easily remembered, and gaps are filled with story-consistent intrusions. Once a coherent story has been formed, it is highly resistant to change: Within the story, each element is supported by the fit of other elements, and any alteration of an element may be made implausible by the downstream inconsistencies it would cause. Coherent stories are easier to process than incoherent stories are (Johnson-Laird, 2012), and people draw on their processing experience when they judge a story’s coherence (Topolinski, 2012), again giving an advantage to material that is easy to process. [...]
Is the information from a credible source?
[...] People’s evaluation of a source’s credibility can be based on declarative information, as in the above examples, as well as experiential information. The mere repetition of an unknown name can cause it to seem familiar, making its bearer “famous overnight” (Jacoby, Kelley, Brown, & Jasechko, 1989)—and hence more credible. Even when a message is rejected at the time of initial exposure, that initial exposure may lend it some familiarity-based credibility if the recipient hears it again.
Do others believe this information?
Repeated exposure to a statement is known to increase its acceptance as true (e.g., Begg, Anas, & Farinacci, 1992; Hasher, Goldstein, & Toppino, 1977). In a classic study of rumor transmission, Allport and Lepkin (1945) observed that the strongest predictor of belief in wartime rumors was simple repetition. Repetition effects may create a perceived social consensus even when no consensus exists. Festinger (1954) referred to social consensus as a “secondary reality test”: If many people believe a piece of information, there’s probably something to it. Because people are more frequently exposed to widely shared beliefs than to highly idiosyncratic ones, the familiarity of a belief is often a valid indicator of social consensus. But, unfortunately, information can seem familiar for the wrong reason, leading to erroneous perceptions of high consensus. For example, Weaver, Garcia, Schwarz, and Miller (2007) exposed participants to multiple iterations of the same statement, provided by the same communicator. When later asked to estimate how widely the conveyed belief is shared, participants estimated consensus to be greater the more often they had read the identical statement from the same, single source. In a very real sense, a single repetitive voice can sound like a chorus. [...]
The extent of pluralistic ignorance (or of the false-consensus effect) can be quite striking: In Australia, people with particularly negative attitudes toward Aboriginal Australians or asylum seekers have been found to overestimate public support for their attitudes by 67% and 80%, respectively (Pedersen, Griffiths, & Watt, 2008). Specifically, although only 1.8% of people in a sample of Australians were found to hold strongly negative attitudes toward Aboriginals, those few individuals thought that 69% of all Australians (and 79% of their friends) shared their fringe beliefs. This represents an extreme case of the false-consensus effect. [...]
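The arithmetic behind these figures is simple percentage-point subtraction; here is a minimal check using the numbers quoted above (assuming, as the numbers suggest, that "overestimate by 67%" means a gap in percentage points rather than a ratio):

```python
# Reproducing the false-consensus gap from the Pedersen, Griffiths, & Watt
# (2008) figures quoted above. Assumption: "overestimate by 67%" refers to
# the percentage-point gap between perceived and actual public support.

actual_support = 1.8     # % of sampled Australians holding strongly negative attitudes
perceived_support = 69.0 # % of Australians those individuals believed agreed with them

overestimate = perceived_support - actual_support
print(f"Overestimate: {overestimate:.1f} percentage points")  # ~67 points
```

The 80% figure for asylum seekers would follow from the same subtraction applied to that item's (unquoted) perceived and actual support levels.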
The Continued Influence Effect: Retractions Fail to Eliminate the Influence of Misinformation
We first consider the cognitive parameters of credible retractions in neutral scenarios, in which people have no inherent reason or motivation to believe one version of events over another. Research on this topic was stimulated by a paradigm pioneered by Wilkes and Leatherbarrow (1988) and H. M. Johnson and Seifert (1994). In it, people are presented with a fictitious report about an event unfolding over time. The report contains a target piece of information: For some readers, this target information is subsequently retracted, whereas for readers in a control condition, no correction occurs. Participants’ understanding of the event is then assessed with a questionnaire, and the number of clear and uncontroverted references to the target (mis-)information in their responses is tallied.
A stimulus narrative commonly used in this paradigm involves a warehouse fire that is initially thought to have been caused by gas cylinders and oil paints that were negligently stored in a closet (e.g., Ecker, Lewandowsky, Swire, & Chang, 2011; H. M. Johnson & Seifert, 1994; Wilkes & Leatherbarrow, 1988). Some participants are then presented with a retraction, such as “the closet was actually empty.” A comprehension test follows, and participants’ number of references to the gas and paint in response to indirect inference questions about the event (e.g., “What caused the black smoke?”) is counted. In addition, participants are asked to recall some basic facts about the event and to indicate whether they noticed any retraction.
Research using this paradigm has consistently found that retractions rarely, if ever, have the intended effect of eliminating reliance on misinformation, even when people believe, understand, and later remember the retraction (e.g., Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011; Ecker, Lewandowsky, & Tang, 2010; Fein, McCloskey, & Tomlinson, 1997; Gilbert, Krull, & Malone, 1990; Gilbert, Tafarodi, & Malone, 1993; H. M. Johnson & Seifert, 1994, 1998, 1999; Schul & Mazursky, 1990; van Oostendorp, 1996; van Oostendorp & Bonebakker, 1999; Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999). In fact, a retraction will at most halve the number of references to misinformation, even when people acknowledge and demonstrably remember the retraction (Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011); in some studies, a retraction did not reduce reliance on misinformation at all (e.g., H. M. Johnson & Seifert, 1994).
When misinformation is presented through media sources, the remedy is the presentation of a correction, often in a temporally disjointed format (e.g., if an error appears in a newspaper, the correction will be printed in a subsequent edition). In laboratory studies, misinformation is often retracted immediately and within the same narrative (H. M. Johnson & Seifert, 1994). Despite this temporal and contextual proximity to the misinformation, retractions are ineffective. More recent studies (Seifert, 2002) have examined whether clarifying the correction (minimizing misunderstanding) might reduce the continued influence effect. In these studies, the correction was thus strengthened to include the phrase “paint and gas were never on the premises.” Results showed that this enhanced negation of the presence of flammable materials backfired, making people even more likely to rely on the misinformation in their responses. Other additions to the correction were found to mitigate, to a degree, but not eliminate, the continued influence effect: For example, when participants were given a rationale for how the misinformation originated, such as “a truckers’ strike prevented the expected delivery of the items,” they were somewhat less likely to make references to it. Even so, the influence of the misinformation could still be detected. The wealth of studies on this phenomenon has documented its pervasive effects, showing that it is extremely difficult to return the beliefs of people who have been exposed to misinformation to a baseline similar to those of people who were never exposed to it.
Multiple explanations have been proposed for the continued influence effect. We summarize their key assumptions next. [...]
Concise recommendations for practitioners
[...] We summarize the main points from the literature in Figure 1 and in the following list of recommendations:
Consider what gaps in people’s mental event models are created by debunking and fill them using an alternative explanation.
Use repeated retractions to reduce the influence of misinformation, but note that the risk of a backfire effect increases when the original misinformation is repeated in retractions and thereby rendered more familiar.
To avoid making people more familiar with misinformation (and thus risking a familiarity backfire effect), emphasize the facts you wish to communicate rather than the myth.
Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.
Ensure that your material is simple and brief. Use clear language and graphs where appropriate. If the myth is simpler and more compelling than your debunking, it will be cognitively more attractive, and you will risk an overkill backfire effect.
Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect, which is strongest among those with firmly held beliefs. The most receptive people will be those who are not strongly fixed in their views.
If you must present evidence that is threatening to the audience’s worldview, you may be able to reduce the worldview backfire effect by presenting your content in a worldview-affirming manner (e.g., by focusing on opportunities and potential benefits rather than risks and threats) and/or by encouraging self-affirmation.
You can also circumvent the role of the audience’s worldview by focusing on behavioral techniques, such as the design of choice architectures, rather than overt debiasing.
- Consumers with higher scores on a cognitive reflection test are more inclined to buy products when told more about them; for consumers with lower CRT scores it's the reverse.
- Consumers with higher CRT scores felt that they understood the products better after being told more; consumers with lower CRT scores felt that they understood them worse.
- If subjects are asked to give an explanation of how the products work and are then asked how well they understand them and how much they'd be willing to pay, high-CRT subjects don't change much on either measure, but low-CRT subjects report feeling that they understand worse and that they're willing to pay less.
- Conclusion: it looks as if when you give low-CRT subjects more information about a product, they feel they understand it less, don't like that feeling, and become less willing to pay.
If this is right (which seems plausible enough) then it presumably applies more broadly: e.g., to what tactics are most effective in political debate. Though it's hardly news in that area that making people feel stupid isn't the best way to persuade them of things.
Abstract of the paper:
People differ in their threshold for satisfactory causal understanding and therefore in the type of explanation that will engender understanding and maximize the appeal of a novel product. Explanation fiends are dissatisfied with surface understanding and desire detailed mechanistic explanations of how products work. In contrast, explanation foes derive less understanding from detailed than coarse explanations and downgrade products that are explained in detail. Consumers’ attitude toward explanation is predicted by their tendency to deliberate, as measured by the cognitive reflection test. Cognitive reflection also predicts susceptibility to the illusion of explanatory depth, the unjustified belief that one understands how things work. When explanation foes attempt to explain, it exposes the illusion, which leads to a decrease in willingness to pay. In contrast, explanation fiends are willing to pay more after generating explanations. We hypothesize that those low in cognitive reflection are explanation foes because explanatory detail shatters their illusion of understanding.
Related post: Muehlhauser-Wang Dialogue.
Abstract. AGI systems should be able to manage motivations or goals that are persistent, spontaneous, mutually restricting, and changing over time. A mechanism for handling this kind of goal is introduced and discussed.
From the discussion section:
The major conclusion argued in this paper is that an AGI system should always maintain a goal structure (or whatever it is called) which contains multiple goals that are separately specified, with the properties that
- Some of the goals are accurately specified and can be fully achieved, while others are vaguely specified and only partially achievable, but nevertheless have an impact on the system's decisions.
- The goals may conflict with each other on what the system should do at a given moment, and cannot all be achieved together. Very often the system has to make compromises among the goals.
- Due to the restriction in computational resources, the system cannot take all existing goals into account when making each decision, nor can it keep a complete record of the goal-derivation history.
- The designers and users are responsible for the input goals of an AGI system, from which all the other goals are derived, according to the system's experience. There is no guarantee that the derived goals will be logically consistent with the input goals, except in highly simplified situations.
One area that is closely related to goal management is AI ethics. The previous discussions focused on the goal the designers assign to an AGI system ("super goal" or "final goal"), with the implicit assumption that such a goal will decide the consequences caused by the A(G)I systems. However, the above analysis shows that though the input goals are indeed important, they are not the dominating factor that decides the broad impact of AI on human society. Since no AGI system can be omniscient and omnipotent, to be "general-purpose" means such a system has to handle problems for which its knowledge and resources are insufficient [16, 18], and one direct consequence is that its actions may produce unanticipated results. This consequence, plus the previous conclusion that the effective goal for an action may be inconsistent with the input goals, will render many of the previous suggestions mostly irrelevant to AI ethics.
For example, Yudkowsky's "Friendly AI" agenda is based on the assumption that "a true AI might remain knowably stable in its goals, even after carrying out a large number of self-modifications". The problem with this assumption is that unless we are talking about an axiomatic system with unlimited resources, we cannot assume the system can accurately know the consequences of its actions. Furthermore, as argued previously, the goals in an intelligent system inevitably change as its experience grows, which is not necessarily a bad thing - after all, our "human nature" gradually grows out of, and deviates from, our "animal nature", at both the species level and the individual level.
Omohundro argued that no matter what input goals are given to an AGI system, it usually will derive some common "basic drives", including "be self-protective" and "to acquire resources", which leads some people to worry that such a system will become unethical. According to our previous analysis, the production of these goals is indeed very likely, but it is only half of the story. A system with a resource-acquisition goal does not necessarily attempt to achieve it at all costs, without considering its other goals. Again, consider human beings - everyone has some goals that can become dangerous (either to oneself or to others) if pursued at all costs. The proper solution, both in human ethics and in AGI ethics, is to prevent this kind of goal from becoming dominant, rather than from being formed.
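As a rough illustration of the properties the excerpt lists (multiple conflicting goals, compromise, and resource-limited consideration), here is a toy Python sketch. It is my own construction for illustration, not the paper's actual mechanism, and every name in it is hypothetical: each decision cycle consults only as many goals as a "budget" allows and picks the action with the best weighted satisfaction over that subset.

```python
import random

# Toy sketch (hypothetical; NOT the paper's mechanism) of deciding among
# conflicting goals under limited resources: each cycle consults only a
# random subset of goals and picks the best compromise action.

def choose_action(actions, goals, budget, rng=random):
    """actions: list of action names.
    goals: list of (weight, satisfaction_fn) pairs.
    budget: how many goals the system can afford to consult this cycle."""
    considered = rng.sample(goals, min(budget, len(goals)))
    def score(action):
        return sum(w * sat(action) for w, sat in considered)
    return max(actions, key=score)

# Two conflicting goals: acquiring resources favors "explore",
# self-protection favors "retreat"; neither can be fully satisfied.
goals = [
    (0.6, lambda a: 1.0 if a == "explore" else 0.2),  # acquire resources
    (0.8, lambda a: 1.0 if a == "retreat" else 0.1),  # be self-protective
]
print(choose_action(["explore", "retreat"], goals, budget=2))  # prints "retreat"
```

With a budget covering both goals, the system compromises (retreat scores 0.6×0.2 + 0.8×1.0 = 0.92 against explore's 0.68); with a budget of 1, the outcome depends on which goal happens to be consulted, mirroring the point that resource limits make decisions sensitive to which goals are currently active.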
Nobel prize-winner Daniel Kahneman has issued a strongly worded call to one group of psychologists to restore the credibility of their field by creating a replication ring to check each other's results.
Kahneman, a psychologist at Princeton University in New Jersey, addressed his open e-mail to researchers who work on social priming, the study of how subtle cues can unconsciously influence our thoughts or behaviour. For example, volunteers might walk more slowly down a corridor after seeing words related to old age, or fare better in general-knowledge tests after writing down the attributes of a typical professor.
Could this be a tiny step towards an AGI?
'Blue Brain' Project Accurately Predicts Connections Between Neurons
One of the greatest challenges in neuroscience is to identify the map of synaptic connections between neurons. Called the "connectome," it is the holy grail that will explain how information flows in the brain. In a landmark paper, published the week of September 17 in the Proceedings of the National Academy of Sciences, the EPFL's Blue Brain Project (BBP) has identified key principles that determine synapse-scale connectivity by virtually reconstructing a cortical microcircuit and comparing it to a mammalian sample. These principles now make it possible to predict the locations of synapses in the neocortex.
"This is a major breakthrough, because it would otherwise take decades, if not centuries, to map the location of each synapse in the brain and it also makes it so much easier now to build accurate models," says Henry Markram, head of the BBP.
A longstanding neuroscientific mystery has been whether all neurons grow independently and simply take what they get as their branches bump into each other, or whether the branches of each neuron are specifically guided by chemical signals to find all their targets. To solve the mystery, researchers looked in a virtual reconstruction of a cortical microcircuit to see where the branches bumped into each other. To their great surprise, they found that the locations in the model matched those of synapses found in the equivalent real-brain circuit with an accuracy ranging from 75 percent to 95 percent.
This means that neurons grow as independently of each other as physically possible and mostly form synapses at the locations where they randomly bump into each other. A few exceptions were also discovered, pointing to special cases where neurons use signals to alter the statistical connectivity. By taking these exceptions into account, the Blue Brain team can now make a near-perfect prediction of the locations of all the synapses formed inside the circuit.
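The "grow independently, connect where you touch" principle can be caricatured in a few lines. The following is a toy sketch of proximity-based connectivity prediction, not the BBP's actual reconstruction pipeline; the point counts and touch distance are arbitrary, and real reconstructions model full dendritic and axonal arbors rather than random points:

```python
import math
import random

# Toy sketch of proximity-based connectivity: grow "neurons" independently
# (random branch points in a unit cube) and call any close-enough pair of
# points from different neurons a candidate synapse location.

def candidate_synapses(neurons, touch_dist):
    """neurons: {neuron_id: [(x, y, z), ...]} branch points per neuron.
    Returns (id_a, id_b, point_a, point_b) for every cross-neuron touch."""
    touches = []
    ids = list(neurons)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            for p in neurons[a]:
                for q in neurons[b]:
                    if math.dist(p, q) <= touch_dist:
                        touches.append((a, b, p, q))
    return touches

rng = random.Random(42)
# Two toy "neurons", each with 50 random branch points in a unit cube.
neurons = {n: [(rng.random(), rng.random(), rng.random()) for _ in range(50)]
           for n in ("n1", "n2")}
print(len(candidate_synapses(neurons, touch_dist=0.1)), "candidate touch points")
```

In the study described above, touch locations predicted from independently grown model neurons matched measured synapse positions with 75 to 95 percent accuracy, which is what makes this simple statistical picture (plus the discovered exceptions) so striking.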
The goal of the BBP is to integrate knowledge from all the specialized branches of neuroscience, to derive from it the fundamental principles that govern brain structure and function, and ultimately, to reconstruct the brains of different species -- including the human brain -- in silico. The current paper provides yet another proof-of-concept for the approach, by demonstrating for the first time that the distribution of synapses or neuronal connections in the mammalian cortex can, to a large extent, be predicted.
To achieve these results, a team from the Blue Brain Project set about virtually reconstructing a cortical microcircuit based on unparalleled data about the geometrical and electrical properties of neurons -- data from nearly 20 years of painstaking experimentation on slices of living brain tissue. Each neuron in the circuit was reconstructed into a 3D model on a powerful Blue Gene supercomputer. About 10,000 virtual neurons were packed into a 3D space in random positions, according to the density and ratio of morphological types found in corresponding living tissue. The researchers then compared the model back to an equivalent brain circuit from a real mammalian brain.
A Major Step Towards Accurate Models of the Brain
This discovery also explains why the brain can withstand damage and indicates that the positions of synapses in all brains of the same species are more similar than different. "Positioning synapses in this way is very robust," says computational neuroscientist and first author Sean Hill, "We could vary density, position, orientation, and none of that changed the distribution of positions of the synapses."
They went on to discover that the synapse positions are only robust as long as the morphology of each neuron differs slightly from the others, explaining another mystery of the brain -- why neurons are not all identical in shape. "It's the diversity in the morphology of neurons that makes brain circuits of a particular species basically the same and highly robust," says Hill.
Overall this work represents a major acceleration in the ability to construct detailed models of the nervous system. The results provide important insights into the basic principles that govern the wiring of the nervous system, throwing light on how robust cortical circuits are constructed from highly diverse populations of neurons -- an essential step towards understanding how the brain functions. They also underscore the value of the BBP's constructivist approach. "Although systematically integrating data across a wide range of scales is slow and painstaking, it allows us to derive fundamental principles of brain structure and hence function," explains Hill.
To my knowledge LessWrong hasn't received a great deal of media coverage, so I was surprised when I came across an article, via a Facebook friend, that appeared on the cover of today's New York Observer. However, I was disappointed upon reading it, as I don't think it is an accurate reflection of the community. It certainly doesn't reflect my experience with the LW communities in Toronto and Waterloo.
I thought it would be interesting to see what the broader LessWrong community thought about this article. I think it would make for a good discussion.
Possible conversation topics:
- This article will likely reach many people that have never heard of LessWrong before. Is this a good introduction to LessWrong for those people?
- Does this article give an accurate characterization of the LessWrong community?
Edit 1: Added some clarification about my view on the article.
Edit 2: Re-added link using “nofollow” attribute.
[link] Prepared to wait? New research challenges the idea that we favour small rewards now over bigger later
The old idea that we make decisions like rational agents has given way over the last few decades to a more realistic, psychologically informed picture that recognises the biases and mental short-cuts that sway our thinking. Supposedly one of these is hyperbolic discounting - our tendency to place disproportionate value on immediate rewards, whilst progressively undervaluing distant rewards the further in the future they stand. But not so fast, say Daniel Read at Warwick Business School and his colleagues with a new paper that fails to find any evidence for the phenomenon.
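For reference, the model the paper challenges is usually written as Mazur's hyperbolic discount function, V = A / (1 + kD); its hallmark prediction is preference reversal, which exponential discounting cannot produce. A quick sketch (the k and r values are arbitrary choices for illustration):

```python
def hyperbolic(amount, delay, k=0.2):
    """Mazur-style hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

def exponential(amount, delay, r=0.1):
    """Exponential discounting: V = A * (1 - r)**D."""
    return amount * (1 - r) ** delay

# Preference reversal under hyperbolic discounting: $50 now beats $100
# in 10 days, but adding 100 days to both delays flips the preference.
print(hyperbolic(50, 0) > hyperbolic(100, 10))       # prints True
print(hyperbolic(50, 100) > hyperbolic(100, 110))    # prints False
# Exponential discounting never flips:
print(exponential(50, 0) > exponential(100, 10))     # prints True
print(exponential(50, 100) > exponential(100, 110))  # prints True
```

Read's failure to find evidence for the phenomenon is a claim about the first pair of comparisons: real choices may not show the reversal the hyperbolic curve predicts.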
In the very back of Kaj's excellent How to Run a Successful Less Wrong Meetup Group booklet, he has a recommended reading section, including the classic book How to Win Friends and Influence People.
It just so happens that not only have I read the book myself, but I have written up a concise summary of the core advice here. Kaj suggested that I post this on the discussion section because others might find it useful, so here you go!
I suspect that more people are willing to read a summary of a book from the 1930s than an actual book from the 1930s. What I will say about reading the long-form text is that it can be more useful for internalizing these concepts and giving examples of them. It is far too easy to abstractly know what you need to do, much harder to actually take action on those beliefs...
The International Journal of Machine Consciousness recently published its special issue on mind uploading. The papers are paywalled, but as the editor of the issue, Ben Goertzel has put together a page that links to the authors' preprints of the papers. Preprint versions are available for most of the papers.
Below is a copy of the preprint page as it was at the time that this post was made. Note though that I'll be away for a couple of days, and thus be unable to update this page if new links get added.
This page gathers links to informal, “preprint” versions of the papers in that Special Issue, hosted on the paper authors’ websites. These preprint versions are not guaranteed to be identical to the final published versions, but the content should be essentially the same. The list below contains the whole table of contents of the Special Issue; at the moment links to preprints are still being added to the list items as authors post them on their sites.
- BEN GOERTZEL and MATTHEW IKLE’
- RANDAL A. KOENE
- SIM BAMFORD
- EXPERIMENTAL RESEARCH IN WHOLE BRAIN EMULATION: THE NEED FOR INNOVATIVE IN VIVO MEASUREMENT TECHNIQUES -- RANDAL A. KOENE
- AVAILABLE TOOLS FOR WHOLE BRAIN EMULATION -- DIANA DECA
- KENNETH J. HAYWORTH
- NON-DESTRUCTIVE WHOLE-BRAIN MONITORING USING NANOROBOTS: NEURAL ELECTRICAL DATA RATE REQUIREMENTS -- NUNO R. B. MARTINS, WOLFRAM ERLHAGEN and ROBERT A. FREITAS, JR.
- MARTINE ROTHBLATT
- WHOLE-PERSONALITY EMULATION -- WILLIAM SIMS BAINBRIDGE
- BEN GOERTZEL
- MICHAEL HAUSKELLER
- BRANDON OTO
- TRANS-HUMAN COGNITIVE ENHANCEMENT, PHENOMENAL CONSCIOUSNESS AND THE EXTENDED MIND -- TADEUSZ WIESLAW ZAWIDZKI
- PATRICK D. HOPKINS
- DIGITAL IMMORTALITY: SELF OR 0010110? -- LIZ STILLWAGGON SWAN and JOSHUA HOWARD
- YOONSUCK CHOE, JAEROCK KWON and JI RYANG CHUNG
- KAJ SOTALA
- KAJ SOTALA and HARRI VALPOLA
From the Harvard Business Review, an article entitled: "Can We Reverse The Stanford Prison Experiment?"
By: Greg McKeown
Posted: June 12, 2012
Royal Canadian Mounted Police attempt a program where they hand out "Positive Tickets"
Their approach was to try to catch youth doing the right things and give them a Positive Ticket. The ticket granted the recipient free entry to the movies or to a local youth center. They gave out an average of 40,000 tickets per year. That is three times the number of negative tickets over the same period. As it turns out, and unbeknownst to Clapham, that ratio (2.9 positive affects to 1 negative affect, to be precise) is called the Losada Line. It is the minimum ratio of positive to negatives that has to exist for a team to flourish. On higher-performing teams (and marriages for that matter) the ratio jumps to 5:1. But does it hold true in policing?
According to Clapham, youth recidivism was reduced from 60% to 8%. Overall crime was reduced by 40%. Youth crime was cut in half. And it cost one-tenth of the traditional judicial system.
This idea can be applied to Real Life
The lesson here is to create a culture that immediately and sincerely celebrates victories. Here are three simple ways to begin:
1. Start your next staff meeting with five minutes on the question: "What has gone right since our last meeting?" Have each person acknowledge someone else's achievement in a concrete, sincere way. Done right, this very small question can begin to shift the conversation.
2. Take two minutes every day to try to catch someone doing the right thing. It is the fastest and most positive way for the people around you to learn when they are getting it right.
3. Create a virtual community board where employees, partners and even customers can share what they are grateful for daily. Sounds idealistic? Vishen Lakhiani, CEO of Mind Valley, a new generation media and publishing company, has done just that at Gratitude Log. (Watch him explain how it works here).
I attended a talk yesterday given under the auspices of the Ottawa Skeptics on the subject of "metacognition" or thinking about thinking -- basically, it was about core rationality concepts. It was designed to appeal to a broad group of lay people interested in science and consisted of a number of examples drawn from pop-sci books such as Thinking, Fast and Slow and Predictably Irrational. (Also mentioned: straw vulcans as described by CFAR's own Julia Galef.) If people who aren't familiar with LW ask you what LW is about, I'd strongly recommend pointing them to this video.
Here's the link.
This is an ongoing project of mine, although I haven't worked on it in a while. I've been trying to extract the rationality references from Harry Potter and the Methods of Rationality (HPMoR). It also ended up including a few quotes that seemed interesting about how the story's going. I've linked references where I could find them.
I've only got as far as Chapter 40. Any extra submissions welcome.
At least one person (User:DavidGerard) suggested it deserved to be posted as a discussion link.
Long story short, it's an attempt to justify the planetarium hypothesis as a solution to the Fermi paradox. The first half is a discussion of how it and things like it are relevant to the intended purview of the blog, and the second half is the meat of the post. You'll probably want to just eat the meat, which I think is relevant to the interests of many LessWrong folk.
The blog is Computational Theology. It's new. I'll be the primary poster, but others are sought. I'll likely introduce the blog and more completely describe it in its own discussion post when more posts are up, hopefully including a few from people besides me, and when the archive will give a more informative indication of what to expect from the blog. Despite theism's suspect reputation here at LessWrong I suspect many of the future posts will be of interest to this audience anyway, especially for those of you who take interest in discussion of the singularity. The blog will even occasionally touch on rationality proper. So you might want to store the fact of the blog's existence somewhere deep in the back of your head. A link to the blog's main page can be found on my LessWrong user page if you forget the url.
I'd appreciate it if comments about the substance of the post were made on the blog post itself, but if you want to discuss the content here on LessWrong then that's okay too. Any meta-level comments about presentation, typos, or the post's relevance to LessWrong, should probably be put as comments on this discussion post. Thanks all!
Ben Goertzel and Joel Pitt: Nine Ways to Bias Open-Source AGI Toward Friendliness. Journal of Evolution and Technology - Vol. 22 Issue 1 – February 2012 - pgs 116-141.
While it seems unlikely that any method of guaranteeing human-friendliness (“Friendliness”) on the part of advanced Artificial General Intelligence (AGI) systems will be possible, this doesn’t mean the only alternatives are throttling AGI development to safeguard humanity, or plunging recklessly into the complete unknown. Without denying the presence of a certain irreducible uncertainty in such matters, it is still sensible to explore ways of biasing the odds in a favorable way, such that newly created AI systems are significantly more likely than not to be Friendly. Several potential methods of effecting such biasing are explored here, with a particular but non-exclusive focus on those that are relevant to open-source AGI projects, and with illustrative examples drawn from the OpenCog open-source AGI project. Issues regarding the relative safety of open versus closed approaches to AGI are discussed and then nine techniques for biasing AGIs in favor of Friendliness are presented:
1. Engineer the capability to acquire integrated ethical knowledge.
2. Provide rich ethical interaction and instruction, respecting developmental stages.
3. Develop stable, hierarchical goal systems.
4. Ensure that the early stages of recursive self-improvement occur relatively slowly and with rich human involvement.
5. Tightly link AGI with the Global Brain.
6. Foster deep, consensus-building interactions between divergent viewpoints.
7. Create a mutually supportive community of AGIs.
8. Encourage measured co-advancement of AGI software and AGI ethics theory.
9. Develop advanced AGI sooner not later.
In conclusion, and related to the final point, we advise the serious co-evolution of functional AGI systems and AGI-related ethical theory as soon as possible, before we have so much technical infrastructure that parties relatively unconcerned with ethics are able to rush ahead with brute force approaches to AGI development.
I'd say it's worth a read - they have pretty convincing criticism against the possibility of regulating AGI (section 3). I don't think that their approach will work if there's a hard takeoff or a serious hardware overhang, though it could maybe work if there isn't. It might also work if there was the possibility for a hard takeoff, but not instantly after developing the first AGI systems.
This is interesting, and I wonder if there's anything to it: International variation in IQ – the role of parasites (paper) by Christopher Hassall of Carleton University.
It strikes me as the sort of thing that could be as big an issue as lead in the environment. Raise the sanity waterline: improve health!
gives a page which lists all the recent posts in both the Main and Discussion sections. I've posted it in the comments section before, but I decided to put it in a discussion post because it's a really handy way of accessing the site. I found it by guessing the URL.
As XFrequentist mentioned last August, the Intelligence Advanced Research Projects Activity (IARPA) is sponsoring a forecasting tournament "with the goal of improving forecasting methods for global events of national (US) interest. One of the teams (The Good Judgement Team) is recruiting volunteers to have their forecasts tracked. Volunteers will receive an annual honorarium ($150), and it appears there will be ongoing training to improve one's forecast accuracy (not sure exactly what form this will take)."
You can pre-register here.
Last year, approximately 2400 forecasters were assigned to one of eight experimental conditions. I was the #1 forecaster in my condition. It was fun, and I learned a lot, and eventually they are going to give me a public link so that I can brag about this until the end of time. I'm participating again this year, though I plan to regress towards the mean.
I'll share the same info XFrequentist did last year below the fold because I think it's all still relevant.
Why We Reason is an excellent psychology blog that has a great deal of subject matter in common with Less Wrong. Some of the topics discussed on the blog include social psychology, judgement and decision making, neuroscience, cognitive biases, and creativity. And there's even a hint of the kind of "cognitive philosophy" practiced on Less Wrong.
The author, Sam McNerney, is blessed with the rare gift of being able to distill psychology topics for a lay audience, and his posts are very lucid.
There's also a handy archive of every post on the site.
LessWrong is not big on discussion of non-AI existential risks. But Neil deGrasse Tyson notes killer asteroids not just as a generic problem, but as a specific one, naming Apophis as an imminent hazard.
So treat this as your exercise for today: what are the numbers, what is the risk, what are the costs, what actions are appropriate? Assume your answers need to work in the context of a society that's responded to the notion of anthropogenic climate change with almost nothing but blue vs. green politics.
[LINK] Freeman Dyson reviews "Physics on the Fringe: Smoke Rings, Circlons, and Alternative Theories of Everything"
Freeman Dyson writes in the New York Review of Books about people who took up fringe physics: not just complete cranks, but eminent scientists such as Eddington, who got into crankery in their later years.
New thing I learnt: Dyson was not only a good friend of Immanuel Velikovsky, but considers him a greatly underappreciated poet.
Link to ACM press release.
In addition to their impact on probabilistic reasoning, Bayesian networks completely changed the way causality is treated in the empirical sciences, which are based on experiment and observation. Pearl's work on causality is crucial to the understanding of both daily activity and scientific discovery. It has enabled scientists across many disciplines to articulate causal statements formally, combine them with data, and evaluate them rigorously. His 2000 book Causality: Models, Reasoning, and Inference is among the most influential works in shaping the theory and practice of knowledge-based systems. His contributions to causal reasoning have had a major impact on the way causality is understood and measured in many scientific disciplines, most notably philosophy, psychology, statistics, econometrics, epidemiology and social science.
While that "major impact" still seems to me to be in the early stages of propagating through the various sciences, hopefully this award will inspire more people to study causality and Bayesian statistics in general.
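Pearl's central distinction between observing and intervening can be seen in a toy simulation. The structural model and probabilities below are a made-up example, not from Pearl's book: when a hidden common cause Z drives both X and Y, the observational quantity P(Y | X=1) differs from the interventional one, P(Y | do(X=1)).

```python
import random

random.seed(1)

def sample(do_x=None):
    """Toy structural model with a confounder: Z -> X, Z -> Y, X -> Y."""
    z = random.random() < 0.5
    # Intervening (do_x) severs the Z -> X arrow; observing leaves it intact.
    x = do_x if do_x is not None else (random.random() < (0.8 if z else 0.2))
    y = random.random() < (0.3 + 0.4 * z + 0.2 * x)
    return x, y

N = 100_000
obs = [sample() for _ in range(N)]
p_y_given_x = sum(y for x, y in obs if x) / sum(x for x, y in obs)
interv = [sample(do_x=True) for _ in range(N)]
p_y_do_x = sum(y for _, y in interv) / N
# Observational P(Y|X=1) is ~0.82, interventional P(Y|do(X=1)) is ~0.70:
print(round(p_y_given_x, 2), round(p_y_do_x, 2))
```

The gap arises because conditioning on X=1 also makes Z=1 more likely, while setting X=1 by fiat does not; Pearl's do-calculus formalizes exactly when and how such gaps can be computed from observational data.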
The magazine has a bunch of articles dealing with what the world may be like 98,000 years hence. What with the local interest in the distant future, and with prediction itself, I thought I'd bring it to your attention.
The language you speak may affect how you approach your finances, according to a working paper by economist Keith Chen (seen via posts by Frances Woolley at the Worthwhile Canadian Initiative and Economy Lab). It appears that languages that require more explicit future tense are associated with lower savings. A few interesting quotes from a quick glance:
...[I]n the World Values Survey a language’s FTR [Future-Time Reference] is almost entirely uncorrelated with its speakers’ stated values towards savings (corr = -0.07). This suggests that the language effects I identify operate through a channel which is independent of conscious attitudes towards savings. [emphasis mine]
Something else that I wasn't previously aware of:
Loewenstein (1988) finds a temporal reference-point effect: people demand much more compensation to delay receiving a good by one year (from today to a year from now) than they are willing to pay to move up consumption of that same good (from a year from now to today).
The New York Times just recently ran an article titled "How Companies Learn Your Secrets", which was partially discussing data mining and partially discussing habits. I thought the bits on habits seemed to offer many valuable insights on how to improve our behavior, excerpts:
The process within our brains that creates habits is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future. Over time, this loop — cue, routine, reward; cue, routine, reward — becomes more and more automatic. The cue and reward become neurologically intertwined until a sense of craving emerges. What’s unique about cues and rewards, however, is how subtle they can be. Neurological studies like the ones in Graybiel’s lab have revealed that some cues span just milliseconds. And rewards can range from the obvious (like the sugar rush that a morning doughnut habit provides) to the infinitesimal (like the barely noticeable — but measurable — sense of relief the brain experiences after successfully navigating the driveway). Most cues and rewards, in fact, happen so quickly and are so slight that we are hardly aware of them at all. But our neural systems notice and use them to build automatic behaviors.
Habits aren’t destiny — they can be ignored, changed or replaced. But it’s also true that once the loop is established and a habit emerges, your brain stops fully participating in decision-making. So unless you deliberately fight a habit — unless you find new cues and rewards — the old pattern will unfold automatically. [...]
Luckily, simply understanding how habits work makes them easier to control. Take, for instance, a series of studies conducted a few years ago at Columbia University and the University of Alberta. Researchers wanted to understand how exercise habits emerge. In one project, 256 members of a health-insurance plan were invited to classes stressing the importance of exercise. Half the participants received an extra lesson on the theories of habit formation (the structure of the habit loop) and were asked to identify cues and rewards that might help them develop exercise routines.
The results were dramatic. Over the next four months, those participants who deliberately identified cues and rewards spent twice as much time exercising as their peers. Other studies have yielded similar results. According to another recent paper, if you want to start running in the morning, it’s essential that you choose a simple cue (like always putting on your sneakers before breakfast or leaving your running clothes next to your bed) and a clear reward (like a midday treat or even the sense of accomplishment that comes from ritually recording your miles in a log book). After a while, your brain will start anticipating that reward — craving the treat or the feeling of accomplishment — and there will be a measurable neurological impulse to lace up your jogging shoes each morning.
Our relationship to e-mail operates on the same principle. When a computer chimes or a smartphone vibrates with a new message, the brain starts anticipating the neurological “pleasure” (even if we don’t recognize it as such) that clicking on the e-mail and reading it provides. That expectation, if unsatisfied, can build until you find yourself moved to distraction by the thought of an e-mail sitting there unread — even if you know, rationally, it’s most likely not important. On the other hand, once you remove the cue by disabling the buzzing of your phone or the chiming of your computer, the craving is never triggered, and you’ll find, over time, that you’re able to work productively for long stretches without checking your in-box. [...]
When they got back to P.& G.’s headquarters, the researchers watched their videotapes again. Now they knew what to look for and saw their mistake in scene after scene. Cleaning has its own habit loops that already exist. In one video, when a woman walked into a dirty room (cue), she started sweeping and picking up toys (routine), then she examined the room and smiled when she was done (reward). In another, a woman scowled at her unmade bed (cue), proceeded to straighten the blankets and comforter (routine) and then sighed as she ran her hands over the freshly plumped pillows (reward). P.& G. had been trying to create a whole new habit with Febreze, but what they really needed to do was piggyback on habit loops that were already in place. The marketers needed to position Febreze as something that came at the end of the cleaning ritual, the reward, rather than as a whole new cleaning routine.
The company printed new ads showing open windows and gusts of fresh air. More perfume was added to the Febreze formula, so that instead of merely neutralizing odors, the spray had its own distinct scent. Television commercials were filmed of women, having finished their cleaning routine, using Febreze to spritz freshly made beds and just-laundered clothing. Each ad was designed to appeal to the habit loop: when you see a freshly cleaned room (cue), pull out Febreze (routine) and enjoy a smell that says you’ve done a great job (reward). When you finish making a bed (cue), spritz Febreze (routine) and breathe a sweet, contented sigh (reward). Febreze, the ads implied, was a pleasant treat, not a reminder that your home stinks.
And so Febreze, a product originally conceived as a revolutionary way to destroy odors, became an air freshener used once things are already clean. The Febreze revamp occurred in the summer of 1998. Within two months, sales doubled. A year later, the product brought in $230 million. Since then Febreze has spawned dozens of spinoffs — air fresheners, candles and laundry detergents — that now account for sales of more than $1 billion a year. Eventually, P.& G. began mentioning to customers that, in addition to smelling sweet, Febreze can actually kill bad odors. Today it’s one of the top-selling products in the world. [...]
But when some customers were going through a major life event, like graduating from college or getting a new job or moving to a new town, their shopping habits became flexible in ways that were both predictable and potential gold mines for retailers. The study found that when someone marries, he or she is more likely to start buying a new type of coffee. When a couple move into a new house, they’re more apt to purchase a different kind of cereal. When they divorce, there’s an increased chance they’ll start buying different brands of beer.
Consumers going through major life events often don’t notice, or care, that their shopping habits have shifted, but retailers notice, and they care quite a bit. At those unique moments, Andreasen wrote, customers are “vulnerable to intervention by marketers.” In other words, a precisely timed advertisement, sent to a recent divorcee or new homebuyer, can change someone’s shopping patterns for years. [...]
Before I met Andrew Pole, before I even decided to write a book about the science of habit formation, I had another goal: I wanted to lose weight.
I had got into a bad habit of going to the cafeteria every afternoon and eating a chocolate-chip cookie, which contributed to my gaining a few pounds. Eight, to be precise. I put a Post-it note on my computer reading “NO MORE COOKIES.” But every afternoon, I managed to ignore that note, wander to the cafeteria, buy a cookie and eat it while chatting with colleagues. Tomorrow, I always promised myself, I’ll muster the willpower to resist.
Tomorrow, I ate another cookie.
When I started interviewing experts in habit formation, I concluded each interview by asking what I should do. The first step, they said, was to figure out my habit loop. The routine was simple: every afternoon, I walked to the cafeteria, bought a cookie and ate it while chatting with friends.
Next came some less obvious questions: What was the cue? Hunger? Boredom? Low blood sugar? And what was the reward? The taste of the cookie itself? The temporary distraction from my work? The chance to socialize with colleagues?
Rewards are powerful because they satisfy cravings, but we’re often not conscious of the urges driving our habits in the first place. So one day, when I felt a cookie impulse, I went outside and took a walk instead. The next day, I went to the cafeteria and bought a coffee. The next, I bought an apple and ate it while chatting with friends. You get the idea. I wanted to test different theories regarding what reward I was really craving. Was it hunger? (In which case the apple should have worked.) Was it the desire for a quick burst of energy? (If so, the coffee should suffice.) Or, as turned out to be the answer, was it that after several hours spent focused on work, I wanted to socialize, to make sure I was up to speed on office gossip, and the cookie was just a convenient excuse? When I walked to a colleague’s desk and chatted for a few minutes, it turned out, my cookie urge was gone.
All that was left was identifying the cue.
Deciphering cues is hard, however. Our lives often contain too much information to figure out what is triggering a particular behavior. Do you eat breakfast at a certain time because you’re hungry? Or because the morning news is on? Or because your kids have started eating? Experiments have shown that most cues fit into one of five categories: location, time, emotional state, other people or the immediately preceding action. So to figure out the cue for my cookie habit, I wrote down five things the moment the urge hit:
Where are you? (Sitting at my desk.)
What time is it? (3:36 p.m.)
What’s your emotional state? (Bored.)
Who else is around? (No one.)
What action preceded the urge? (Answered an e-mail.)
The next day I did the same thing. And the next. Pretty soon, the cue was clear: I always felt an urge to snack around 3:30.
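The logging protocol above amounts to tallying the most frequent value in each of the five cue categories and seeing which one recurs every time. A minimal sketch with hypothetical entries (the field names and data here are invented for illustration):

```python
from collections import Counter

# Hypothetical urge log: the five cue categories recorded at each urge.
log = [
    {"where": "desk", "time": "~15:30", "feeling": "bored", "who": "no one",   "prior": "email"},
    {"where": "desk", "time": "~15:30", "feeling": "tired", "who": "no one",   "prior": "meeting"},
    {"where": "hall", "time": "~15:30", "feeling": "bored", "who": "coworker", "prior": "email"},
]

# For each category, report the most common value and how often it recurs;
# the category whose top value appears in every entry is the likely cue.
for field in ("where", "time", "feeling", "who", "prior"):
    value, count = Counter(entry[field] for entry in log).most_common(1)[0]
    print(f"{field}: {value} ({count}/{len(log)})")
```

With these entries, only the time bucket recurs in all three records, matching the author's conclusion that the cue was the clock hitting roughly 3:30.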
Once I figured out all the parts of the loop, it seemed fairly easy to change my habit. But the psychologists and neuroscientists warned me that, for my new behavior to stick, I needed to abide by the same principle that guided Procter & Gamble in selling Febreze: To shift the routine — to socialize, rather than eat a cookie — I needed to piggyback on an existing habit. So now, every day around 3:30, I stand up, look around the newsroom for someone to talk to, spend 10 minutes gossiping, then go back to my desk. The cue and reward have stayed the same. Only the routine has shifted. It doesn’t feel like a decision, any more than the M.I.T. rats made a decision to run through the maze. It’s now a habit. I’ve lost 21 pounds since then (12 of them from changing my cookie ritual).
The New York Times just published this article on how companies use data mining and the psychology of habit formation to effectively target ads.
The process within our brains that creates habits is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future. Over time, this loop — cue, routine, reward; cue, routine, reward — becomes more and more automatic. The cue and reward become neurologically intertwined until a sense of craving emerges.
It has some decent depth of discussion, including an example of the author actually using the concepts to stop a bad habit. The article is based on an upcoming book by the same author titled The Power of Habit.
I haven't seen emphasis of this particular phenomenon—habits consisting of a cue, routine, and reward—on Less Wrong. Do people think it's a valid, scientifically supported phenomenon? The article gives this impression but, of course, doesn't cite specific academic work on it. It ties in to the System 1/System 2 theory easily as a System 1 process. How much of the whole System 1 can be explained as an implementation of this cue, routine, reward process?
And most importantly, how can this fit into the procrastination equation as a tool to subvert akrasia and establish good habits?
Let's look at each of the four factors. If you've formed a habit, it means that the reward happened consistently, which means you have high expectancy. Given that it is a reward, the value is at least positive, but probably not large. Since habits mostly work on small time scales, delay is probably very small. And maybe increased habit formation means your impulsiveness is low. Each of these effects would increase motivation. In addition, because it's part of System 1, there is little energy cost to performing the habit, like there would be with many other conscious actions.
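Steel's procrastination equation is usually written M = (E × V) / (I × D). Plugging in entirely made-up numbers consistent with the reasoning above shows how a habit's high expectancy and near-zero delay can outweigh a modest reward:

```python
def motivation(expectancy, value, impulsiveness, delay):
    """Steel's procrastination equation: M = (E * V) / (I * D)."""
    return (expectancy * value) / (impulsiveness * delay)

# Illustrative numbers only: a formed habit vs. a deliberate one-off action.
habit   = motivation(expectancy=0.9, value=2.0, impulsiveness=0.5, delay=0.1)
one_off = motivation(expectancy=0.5, value=5.0, impulsiveness=1.0, delay=10.0)
print(habit, one_off)  # the habit wins despite its smaller reward
```

On these numbers the habit's motivation comes out over a hundred times larger than the one-off action's, which is one way of cashing out the claim that habits sidestep akrasia.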
Does this explanation sound legitimate, or like an argument for the bottom line?
Personally, I can tell that context is a strong cue for behavior at work, school, and home. When I go into work, I'm automatically motivated to perform well, and that motivation remains for several hours. When I go into class, I'm automatically ready to focus on difficult material, or even enthusiastically take a test. Yet when I go home, something about the context switches that off, and I can't seem to get anything done at all. It might be worth significant experimentation to find out what cues trigger both modes, and change my contexts to induce what I want.
What do you think?
Edit: this phenomenon has been covered on LW in the form of operant conditioning in posts by Yvain.
Yes, this a repost from Hacker News, but I want to point out some books that are of LW-related interest.
The Hacker Shelf is a repository of freely available textbooks. Most of them are about computer programming or the business of computer programming, but there are a few that are perhaps interesting to the LW community. All of these were publicly available beforehand, but I'm linking to the aggregator in hopes that people can think of other freely available textbooks to submit there.
The site is in its beginning explosion phase; in the time it took to write this post, it doubled in size. If previous sites are any indication, it will crest in a month or so. People will probably lose interest after three months, and after a year the site will probably silently close shop.
MacKay, Information Theory, Inference, and Learning Algorithms
I really wish I had an older version of this book; the newer one has been marred by a Cambridge UP ad on the upper margin of every page. Publishers ruin everything.
The book covers reasonably concisely the basics of information theory and Bayesian methods, with some game theory and coding theory (in the sense of data compression) thrown in on the side. The style takes after Knuth, but refrains from the latter's more encyclopedic tendencies. It's also the type of book that gives a lot of extra content in the exercises. It unfortunately assumes a decent amount of mathematical knowledge — linear algebra and calculus, but nothing you wouldn't find on the Khan Academy.
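As a taste of the subject matter, Shannon entropy, the central quantity of information theory, takes only a few lines to compute (a sketch of the standard definition, not code from the book):

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum(p * log2(p)) over outcomes, in bits.
    Zero-probability outcomes contribute nothing by convention."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # a fair coin carries exactly 1 bit
print(entropy([0.9, 0.1]))  # a biased coin carries less than 1 bit
print(entropy([1.0]))       # a certain outcome carries no information
```

This is roughly the level of mathematics the early chapters assume you can follow.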
Easley and Kleinberg, Networks, Crowds, and Markets
There's just a lot of stuff in this book, most of it of independent interest. The thread that ties the book together is graph theory, and with it they cover a great deal of game theory, voting theory, and economics. There are lots of graphs and pictures, and the writing style is pretty deliberate and slow-paced. The math is not very intense; all their probability spaces are discrete, so there's no calculus, and only a few touches of linear algebra.
Gabriel, Patterns of Software
This is a more fluffy book about the practice of software engineering. It's rather old, but I'm linking to it anyway because I agree with the author's feeling that the software engineering discipline has more or less misunderstood Christopher Alexander's work on pattern languages. The author tends to ramble on. I think there's some good wisdom about programming practices and organizational management in general that one could abstract away from this book.
Nisan et al., Algorithmic Game Theory
I hesitate to link this because the math level is exceptionally high, perhaps high enough that anyone who can read the book probably knows the better part of its contents already. But game/decision theory is near and dear to LW's heart, so perhaps someone will gather some utility from this book. There's an awful lot going on in it. A brief selection: a section on the relationship between game theory and cryptography, a section on computation in prediction markets, and a section analyzing the incentives of information security.
This reminded me of previous LW comments about how we restrict the rights of children for their own good.
On the one hand, children can't understand the risks so we stop them having sex.
But on the other hand, animals can't understand the risks and we happily let them continue having sex.
This is probably of interest to many here: Cognitive Sciences Stack Exchange.
For those who aren't in the know, the Stack Exchange family of forums is a set of sites where users may post questions and answers. They are divided by subject matter, each trying to collect a community of experts who can collectively answer any well-defined question relating to the domain. The Stack Exchange About page boasts that 90% of questions get great answers, "often stunningly quickly". Probably the most famous SE site is Stack Overflow, the computer programming site that started it all.
I find the creation of a Cogsci SE to be quite exciting, as it seems like it could quickly become an invaluable resource for anyone interested in the subject matter. I encourage people to take a look and contribute if they can, or lurk if they can't - there are a number of interesting questions and answers already. (For instance, I found this answer about biofeedback quite interesting.) I already contributed one answer myself.
In addition to helping contribute to an improved understanding of cognitive science, this might also be a good opportunity for LWers to make a bit of a name for themselves among net-savvy cogsci academics. No idea if that's actually useful, but it might be a bit of a pleasant ego boost if you don't have anything better to do with your time. ;-)
Abstract (emphasis mine):
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
In the distant future, humanity's age has passed. Runaway technological development has led to the obsolescence of the human race, and the Solar System is now ruled by vast, posthuman intelligences that explore realms of science and philosophy unimaginable to the unenhanced mind. Their most idle musings spawn computational vistas more complex than entire human civilisations as they plumb the very secrets of the cosmos.
Incomprehensibly sophisticated as they may be, however, the posthumans have difficulty dealing with what they euphemistically term the “analogue world”. To be blunt, they're really quite hopeless when it comes to physical matters. For all their cognitive puissance, they haven't yet freed themselves from certain physical needs – energy, security, computational machinery on which to run – and so they create servants to carry out their will, defend their physical forms from rivals and hostile Outsiders, and generally keep things tidy.
Thus, even in the age of humanity's eclipse, there are maids.
The Ego (mind) Origins Table contains entries such as Blank ("You're a brand-new digital sentience, created from scratch to serve your Master"), Fork ("You're a scaled-down copy of your Master's own program. You have so many identity issues"), Uplift ("The Master gave you intelligence to serve him. Were you animal, or something weird like a plant?"), and Offspring ("You're actually a larval posthuman AI, serving your "parent" or another Master as a form of vocational training").
The selection of Morphs (physical bodies) includes ones such as Chibimorph, Giant Flying Space Whale, Spideroid ("This Morph resembles an armoured crab or spider the size of a small car. They're designed for combat and reconnaissance, but a hardware glitch causes Egos sleeved into them to become curious and philosophical"), Braincase ("A brain in a jar; you communicate using a built-in video screen with a picture of your face on it. While sleeved into this Morph, your intellect is vastly expanded, but you're easily tipped over"), Nekomorph, and Spectator ("A hovering metallic sphere with numerous camera-eyes mounted on prehensile robotic stalks. It's equipped with eye lasers for self-defence"). Special Morph qualities range from Blushes Easily ("This Morph turns red at the least provocation - even if this makes no sense whatsoever") to Solar Powered ("Efficient, environmentally friendly, and useless in the dark").
Possible Masters for your maids range from sapient starships to planetary minds to hive minds. You might enjoy reading the PDF even if you didn't know anything about role-playing games.
Thanks to Risto Saarelma for the pointer.
Chris Pruett writes on the Robot Invader blog:
Good player handling code is often smoke and mirrors; the player presses buttons and sees a reasonable result, but in between those two operations a whole lot of code is working to ensure that the result is the best of many potential results. For example, my friend Greggman discovered that Mario 3's jumping rules change depending on whether or not a level has slopes in it. Halo's targeting reticle famously slows as it passes over an enemy to make it easier to target with an analog stick without using an auto-aim system. When Spider-Man swings, he certainly does not orient about the spot where his web connects to a building (at least, he didn't in the swinging system I wrote).
Good player handling code doesn't just translate the player's inputs into action, it tries to discern the player's intent. Once the intended action has been identified, if the rules of the game allow it, good player handling code makes the action happen–even if it means breaking the rules of the simulation a little. The goal of good handling code isn't to maintain a "correct" simulation, it's to provide a fun game. It sucks to miss a jump by three centimeters. It sucks to take the full force of a hit from a blow that visually missed. It sucks to swing into a brick wall at 80 miles per hour instead of continuing down the street. To the extent that the code can understand the player's intent, it should act on that intent rather than on the raw input. Do what I mean, not what I say.
I suppose this explains why I am better at arcade bowling games than I am at actual bowling. More seriously, while I had some vague awareness of this, I am slightly surprised at the breadth (Mario 3!?) and depth to which this "control re-interpretation" takes place.
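The sticky-reticle trick from the quote above can be sketched in a few lines: scale the analog stick's deltas down while the crosshair overlaps a target, so it is easier to stay on target without overt auto-aim. The function name and the 0.4 dampening factor are my own illustrative assumptions, not Halo's actual code:

```python
def apply_aim_friction(stick_dx, stick_dy, crosshair_over_target,
                       friction=0.4):
    """Reduce raw analog-stick movement while the crosshair is over an
    enemy. The player's input is reinterpreted (slowed) to match their
    presumed intent of tracking the target; friction=0.4 is a guess."""
    if crosshair_over_target:
        return stick_dx * friction, stick_dy * friction
    return stick_dx, stick_dy

print(apply_aim_friction(10.0, -4.0, crosshair_over_target=True))   # dampened
print(apply_aim_friction(10.0, -4.0, crosshair_over_target=False))  # unchanged
```

This is "do what I mean, not what I say" in miniature: the raw input is identical in both calls, but the code interprets it differently based on context.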
I know celebrities cryocrastinate just as much as anyone else, but King seems like the kind of guy to go through with it.