Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
Vladimir Slepnev (aka cousin_it) gives a popular introduction to logical counterfactuals and modal updateless decision theory in the Tel Aviv LessWrong meetup.
On Scott Adams' Blog: Robots Program People:
It won’t be long before all new drugs are discovered by robots. This start-up is an example of that trend. And it won’t be long before IBM’s Watson can diagnose and prescribe treatments better than any human doctor. Put those two trends together and robots will be programming humans with drugs. Drugs are the user interface to our moistware. [...] Someday, for sure, machines will be programming humans. And that day will probably be in your lifetime. But don’t be afraid because the robots will someday have a drug that will make you feel totally okay with being their pet.
Recent psychophysical experiments indicate that humans perform near-optimal Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes’ rule. At first sight, it would seem that the high variability in the responses of cortical neurons would make it difficult to implement such optimal statistical inference in cortical circuits. We argue that, in fact, this variability implies that populations of neurons automatically represent probability distributions over the stimulus, a type of code we call probabilistic population codes. Moreover, we demonstrate that the Poissonlike variability observed in cortex reduces a broad class of Bayesian inference to simple linear combinations of populations of neural activity. These results hold for arbitrary probability distributions over the stimulus, for tuning curves of arbitrary shape and for realistic neuronal variability.
Note that "humans perform near-optimal Bayesian inference" refers to the integration of information, not conscious symbolic reasoning. Nonetheless, I think this is of interest here.
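The abstract's central claim, that Poisson-like variability reduces Bayesian inference to linear combinations of population activity, can be illustrated with a toy numerical sketch (my own illustration of the idea, not the authors' actual model). Neurons have Gaussian tuning curves with preferred stimuli tiling the range; because spike counts are Poisson and the tuning curves densely tile the stimulus, the log posterior is linear in the spike counts, so optimally combining two cues amounts to simply adding the two populations' responses:

```python
import numpy as np

rng = np.random.default_rng(0)
s_grid = np.linspace(-10, 10, 201)     # candidate stimulus values
centers = np.linspace(-10, 10, 50)     # preferred stimuli of 50 neurons

def tuning(s):
    """Mean firing rates for stimulus s (Gaussian tuning curves, gain 20)."""
    return 20 * np.exp(-0.5 * ((s - centers) / 2.0) ** 2)

def log_posterior(r):
    """Log posterior over s_grid given spike counts r (flat prior).
    Because sum_i f_i(s) is ~constant for dense tuning curves, this
    Poisson log-likelihood is linear in the spike counts r."""
    f = tuning(s_grid[:, None])              # (201, 50) mean rates
    lp = r @ np.log(f).T - f.sum(axis=1)     # Poisson log-likelihood
    return lp - lp.max()

true_s = 2.0
r1 = rng.poisson(tuning(true_s))   # population response to cue 1
r2 = rng.poisson(tuning(true_s))   # population response to cue 2

# Optimal cue integration = adding spike counts (a linear combination):
est_combined = s_grid[np.argmax(log_posterior(r1 + r2))]
print(est_combined)
```

The estimate from the summed populations lands close to the true stimulus, despite the high trial-to-trial variability of the individual responses, which is the sense in which the noisy population "automatically" encodes a probability distribution.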
Sean Carroll, physicist and proponent of Everettian Quantum Mechanics, has just posted a new article going over some of the common objections to EQM and why they are false. Of particular interest to us as rationalists:
Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate. And the actual postulates of the theory are quite simple indeed:
- The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
- The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.
That is, as they say, it. Notice you don’t see anything about worlds in there. The worlds are there whether you like it or not, sitting in Hilbert space, waiting to see whether they become actualized in the course of the evolution. Notice, also, that these postulates are eminently testable — indeed, even falsifiable! And once you make them (and you accept an appropriate “past hypothesis,” just as in statistical mechanics, and are considering a sufficiently richly-interacting system), the worlds happen automatically.
Given that, you can see why the objection is dispiritingly wrong-headed. You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away. This distinction between what is postulated (which should be testable) and everything that is derived (which clearly need not be) seems pretty straightforward to me, but is a favorite thing for people to get confused about.
Very reminiscent of the quantum physics sequence here! I find that this distinction between number of entities and number of postulates is something that I need to remind people of all the time.
META: This is my first post; if I have done anything wrong, or could have done something better, please tell me!
From Scott Adams' Blog
The article really is about speeding up government, but the key point is speed as a component of smart:
A smart friend told me recently that speed is the new intelligence, at least for some types of technology jobs. If you are hiring an interface designer, for example, the one that can generate and test several designs gets you further than the “genius” who takes months to produce the first design to test. When you can easily test alternatives, the ability to quickly generate new things to test is a substitute for intelligence.
This shifts the focus from the ability to grasp and think through very complex topics (which requires good working memory and recall in general) to the ability to pick up new topics quickly (which requires quick learning and unlearning, and creativity).
Smart people in the technology world no longer believe they can think their way to success. Now the smart folks try whatever plan looks promising, test it, tweak it, and reiterate. In that environment, speed matters more than intelligence because no one has the psychic ability to pick a winner in advance. All you can do is try things that make sense and see what happens. Obviously this is easier to do when your product is software based.
This also changes the type of grit needed: the grit to push through a long topic versus the grit to try lots of new things and to learn from failures.
An article on Motherboard reports on Alien Minds by Susan Schneider, who claims that the dominant life form in the cosmos is probably superintelligent robots. The article is cross-linked to other posts about superintelligence and at the end discusses the question of why these alien robots leave us alone. The arguments put forth on this don't convince me, though.
Not sure what the local view of Oren Etzioni or the Allen Institute for AI is, but I'm curious what people think of his views on UFAI risk. As far as I can tell from this article, his position basically boils down to "AGI won't happen, at least not any time soon." Is there (significant) reason to believe he's wrong, or is it simply too great a risk to leave to chance?
Discusses the technical aspects of one of Google's AI projects. According to a PCWorld article, the system "apes human memory and programming skills" (the article seems pretty solid and also contains a link to the paper).
We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
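The core mechanism in the abstract, coupling a network to external memory "by attentional processes" while staying differentiable end-to-end, can be sketched minimally (my toy illustration, not DeepMind's implementation). A content-based read head compares a query key against every memory row by cosine similarity, softmaxes the similarities into soft addressing weights, and returns a weighted sum of the rows; since every step is smooth, gradients can flow through the memory access:

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """Content-based attention read, NTM-style.
    memory: (N, M) matrix of N memory rows; key: (M,) query vector;
    beta: sharpness of the addressing distribution."""
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    w /= w.sum()            # soft addressing weights over memory rows
    return w @ memory       # differentiable weighted read

memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
read = content_read(memory, key=np.array([0.9, 0.1, 0.0]))
print(read)  # dominated by the first row, which best matches the key
```

The soft weighting is the design choice that makes the whole system trainable with gradient descent: a hard, discrete memory lookup would have no useful gradient.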
(First post here, feedback on the appropriateness of the post appreciated)
The take-home advice:
Positive thinking fools our minds into perceiving that we’ve already attained our goal, slackening our readiness to pursue it.
What does work better is a hybrid approach that combines positive thinking with “realism.” Here’s how it works. Think of a wish. For a few minutes, imagine the wish coming true, letting your mind wander and drift where it will. Then shift gears. Spend a few more minutes imagining the obstacles that stand in the way of realizing your wish.
This simple process, which my colleagues and I call “mental contrasting,” has produced powerful results in laboratory experiments. When participants have performed mental contrasting with reasonable, potentially attainable wishes, they have come away more energized and achieved better results compared with participants who either positively fantasized or dwelt on the obstacles.
When participants have performed mental contrasting with wishes that are not reasonable or attainable, they have disengaged more from these wishes. Mental contrasting spurs us on when it makes sense to pursue a wish, and lets us abandon wishes more readily when it doesn’t, so that we can go after other, more reasonable ambitions.
This article discusses how upvotes and downvotes influence the quality of posts on online communities. The article claims that downvotes lead to more posts of lower quality from the downvoted commenter.
From the abstract:
Social media systems rely on user feedback and rating mechanisms for personalization, ranking, and content filtering. [...] This paper investigates how ratings on a piece of content affect its author’s future behavior. [...] [W]e find that negative feedback leads to significant behavioral changes that are detrimental to the community. Not only do authors of negatively-evaluated content contribute more, but also their future posts are of lower quality, and are perceived by the community as such. In contrast, positive feedback does not carry similar effects, and neither encourages rewarded authors to write more, nor improves the quality of their posts.
The authors of the article are Justin Cheng, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec.
Edited to add:
It shows how easily a population can be influenced if control over a small subset exists.
A key problem for viral marketers is to determine an initial "seed" set [<1% of total size] in a network such that, if it is given a property, then the entire network adopts the behavior. Here we introduce a method for quickly finding seed sets that scales to very large networks. Our approach finds a set of nodes that guarantees spreading to the entire network under the tipping model. After experimentally evaluating 31 real-world networks, we found that our approach often finds such sets that are several orders of magnitude smaller than the population size. Our approach also scales well - on a Friendster social network consisting of 5.6 million nodes and 28 million edges we found a seed set in under 3.6 hours. We also find that highly clustered local neighborhoods and dense network-wide community structure together suppress the ability of a trend to spread under the tipping model.
This is relevant for LW because
a) Rational agents should hedge against this.
b) An UFAI could exploit this.
c) It gives hints for how to proof systems against this 'exploit'.
A useful bias to quote in discussions that spring up around the subjects we deal with on Less Wrong: Normalcy Bias. It's rather specific, but useful:
The normalcy bias, or normality bias, refers to a mental state people enter when facing a disaster. It causes people to underestimate both the possibility of a disaster occurring and its possible effects. This may result in situations where people fail to adequately prepare for a disaster, and on a larger scale, the failure of governments to include the populace in their disaster preparations.
The assumption that is made in the case of the normalcy bias is that since a disaster never has occurred then it never will occur. It can result in the inability of people to cope with a disaster once it occurs. People with a normalcy bias have difficulties reacting to something they have not experienced before. People also tend to interpret warnings in the most optimistic way possible, seizing on any ambiguities to infer a less serious situation.
Another attack on the resource-based model of willpower: Michael Inzlicht, Brandon J. Schmeichel, and C. Neil Macrae have a paper called "Why Self-Control Seems (but may not be) Limited" in press in Trends in Cognitive Sciences. Ungated version here.
Some of the most interesting points:
- Over 100 studies appear to be consistent with self-control being a limited resource, but these studies generally do not observe resource depletion directly; instead, they infer it from whether people's performance declines in a second self-control task.
- The only attempts to directly measure the loss or gain of a resource have been studies measuring blood glucose, but these studies have serious limitations, the most important being an inability to replicate evidence of mental effort actually affecting the level of glucose in the blood.
- Self-control also seems to be replenished by things such as "watching a favorite television program, affirming some core value, or even praying", which would seem to conflict with the hypothesis of inherent resource limitations. The resource-based model also seems evolutionarily implausible.
The authors offer their own theory of self-control. One-sentence summary (my formulation, not from the paper): "Our brains don't want to only work, because by doing some play on the side, we may come to discover things that will allow us to do even more valuable work."
- Ultimately, self-control limitations are proposed to be an exploration-exploitation tradeoff, "regulating the extent to which the control system favors task engagement (exploitation) versus task disengagement and sampling of other opportunities (exploration)".
- Research suggests that cognitive effort is inherently aversive, and that after humans have worked on some task for a while, "ever more resources are needed to counteract the aversiveness of work, or else people will gravitate toward inherently rewarding leisure instead". According to the model proposed by the authors, this allows the organism to both focus on activities that will provide it with rewards (exploitation), but also to disengage from them and seek activities which may be even more rewarding (exploration). Feelings such as boredom function to stop the organism from getting too fixated on individual tasks, and allow us to spend some time on tasks which might turn out to be even more valuable.
The explanation of the actual proposed psychological mechanism is good enough that it deserves to be quoted in full:
Based on the tradeoffs identified above, we propose that initial acts of control lead to shifts in motivation away from “have-to” or “ought-to” goals and toward “want-to” goals (see Figure 2). “Have-to” tasks are carried out through a sense of duty or contractual obligation, while “want-to” tasks are carried out because they are personally enjoyable and meaningful; as such, “want-to” tasks feel easy to perform and to maintain in focal attention. The distinction between “have-to” and “want-to,” however, is not always clear cut, with some “want-to” goals (e.g., wanting to lose weight) being more introjected and feeling more like “have-to” goals because they are adopted out of a sense of duty, societal conformity, or guilt instead of anticipated pleasure.
According to decades of research on self-determination theory, the quality of motivation that people apply to a situation ranges from extrinsic motivation, whereby behavior is performed because of external demand or reward, to intrinsic motivation, whereby behavior is performed because it is inherently enjoyable and rewarding. Thus, when we suggest that depletion leads to a shift from “have-to” to “want-to” goals, we are suggesting that prior acts of cognitive effort lead people to prefer activities that they deem enjoyable or gratifying over activities that they feel they ought to do because it corresponds to some external pressure or introjected goal. For example, after initial cognitive exertion, restrained eaters prefer to indulge their sweet tooth rather than adhere to their strict views of what is appropriate to eat. Crucially, this shift from “have-to” to “want-to” can be offset when people become (internally or externally) motivated to perform a “have-to” task. Thus, it is not that people cannot control themselves on some externally mandated task (e.g., name colors, do not read words); it is that they do not feel like controlling themselves, preferring to indulge instead in more inherently enjoyable and easier pursuits (e.g., read words). Like fatigue, the effect is driven by reluctance and not incapability (see Box 2).
Research is consistent with this motivational viewpoint. Although working hard at Time 1 tends to lead to less control on “have-to” tasks at Time 2, this effect is attenuated when participants are motivated to perform the Time 2 task, personally invested in the Time 2 task, or when they enjoy the Time 1 task. Similarly, although performance tends to falter after continuously performing a task for a long period, it returns to baseline when participants are rewarded for their efforts; and remains stable for participants who have some control over and are thus engaged with the task. Motivation, in short, moderates depletion. We suggest that changes in task motivation also mediate depletion.
Depletion, however, is not simply less motivation overall. Rather, it is produced by lower motivation to engage in “have-to” tasks, yet higher motivation to engage in “want-to” tasks. Depletion stokes desire. Thus, working hard at Time 1 increases approach motivation, as indexed by self-reported states, impulsive responding, and sensitivity to inherently-rewarding, appetitive stimuli. This shift in motivational priorities from “have-to” to “want-to” means that depletion can increase the reward value of inherently-rewarding stimuli. For example, when depleted dieters see food cues, they show more activity in the orbitofrontal cortex, a brain area associated with coding reward value, compared to non-depleted dieters.
A new study indicates that people become more utilitarian (save more lives) when viewing a moral dilemma in a virtual reality situation, as compared to reading the same situation in text.
Although research in moral psychology in the last decade has relied heavily on hypothetical moral dilemmas and has been effective in understanding moral judgment, how these judgments translate into behaviors remains a largely unexplored issue due to the harmful nature of the acts involved. To study this link, we follow a new approach based on a desktop virtual reality environment. In our within-subjects experiment, participants exhibited an order-dependent judgment-behavior discrepancy across temporally-separated sessions, with many of them behaving in utilitarian manner in virtual reality dilemmas despite their non-utilitarian judgments for the same dilemmas in textual descriptions. This change in decisions was reflected in the autonomic arousal of participants, with dilemmas in virtual reality being perceived as more emotionally arousing than the ones in text, after controlling for general differences between the two presentation modalities (virtual reality vs. text). This suggests that moral decision-making in hypothetical moral dilemmas is susceptible to contextual saliency of the presentation of these dilemmas.
Seems related to many topics discussed on LW, such as the low adoption of cryonics and the difficulty of getting researchers convinced of AI risk.
Four weeks later, on November 18th, Bigelow published his report on the discovery of “insensibility produced by inhalation” in the Boston Medical and Surgical Journal. Morton would not divulge the composition of the gas, which he called Letheon, because he had applied for a patent. But Bigelow reported that he smelled ether in it (ether was used as an ingredient in certain medical preparations), and that seems to have been enough. The idea spread like a contagion, travelling through letters, meetings, and periodicals. By mid-December, surgeons were administering ether to patients in Paris and London. By February, anesthesia had been used in almost all the capitals of Europe, and by June in most regions of the world. [...] Within seven years, virtually every hospital in America and Britain had adopted the new discovery. [...]
Sepsis—infection—was the other great scourge of surgery. It was the single biggest killer of surgical patients, claiming as many as half of those who underwent major operations, such as a repair of an open fracture or the amputation of a limb. [...]
During the next few years, he perfected ways to use carbolic acid for cleansing hands and wounds and destroying any germs that might enter the operating field. The result was strikingly lower rates of sepsis and death. You would have thought that, when he published his observations in a groundbreaking series of reports in The Lancet, in 1867, his antiseptic method would have spread as rapidly as anesthesia.
Far from it. The surgeon J. M. T. Finney recalled that, when he was a trainee at Massachusetts General Hospital two decades later, hand washing was still perfunctory. Surgeons soaked their instruments in carbolic acid, but they continued to operate in black frock coats stiffened with the blood and viscera of previous operations—the badge of a busy practice. Instead of using fresh gauze as sponges, they reused sea sponges without sterilizing them. It was a generation before Lister’s recommendations became routine and the next steps were taken toward the modern standard of asepsis—that is, entirely excluding germs from the surgical field, using heat-sterilized instruments and surgical teams clad in sterile gowns and gloves. [...]
Did the spread of anesthesia and antisepsis differ for economic reasons? Actually, the incentives for both ran in the right direction. If painless surgery attracted paying patients, so would a noticeably lower death rate. Besides, live patients were more likely to make good on their surgery bill. Maybe ideas that violate prior beliefs are harder to embrace. To nineteenth-century surgeons, germ theory seemed as illogical as, say, Darwin’s theory that human beings evolved from primates. Then again, so did the idea that you could inhale a gas and enter a pain-free state of suspended animation. Proponents of anesthesia overcame belief by encouraging surgeons to try ether on a patient and witness the results for themselves—to take a test drive. When Lister tried this strategy, however, he made little progress. [...]
The technical complexity might have been part of the difficulty. [...] But anesthesia was no easier. [...]
So what were the key differences? First, one combatted a visible and immediate problem (pain); the other combatted an invisible problem (germs) whose effects wouldn’t be manifest until well after the operation. Second, although both made life better for patients, only one made life better for doctors. Anesthesia changed surgery from a brutal, time-pressured assault on a shrieking patient to a quiet, considered procedure. Listerism, by contrast, required the operator to work in a shower of carbolic acid. Even low dilutions burned the surgeons’ hands. You can imagine why Lister’s crusade might have been a tough sell.
This has been the pattern of many important but stalled ideas. They attack problems that are big but, to most people, invisible; and making them work can be tedious, if not outright painful. The global destruction wrought by a warming climate, the health damage from our over-sugared modern diet, the economic and social disaster of our trillion dollars in unpaid student debt—these things worsen imperceptibly every day. Meanwhile, the carbolic-acid remedies to them, all requiring individual sacrifice of one kind or another, struggle to get anywhere. [...]
The staff members I met in India had impressive experience. Even the youngest nurses had done more than a thousand child deliveries. [...] But then we hung out in the wards for a while. In the delivery room, a boy had just been born. He and his mother were lying on a cot, bundled under woollen blankets, resting. The room was coffin-cold; I was having trouble feeling my toes. [...] Voluminous evidence shows that it is far better to place the child on the mother’s chest or belly, skin to skin, so that the mother’s body can regulate the baby’s until it is ready to take over. Among small or premature babies, kangaroo care (as it is known) cuts mortality rates by a third.
So why hadn’t the nurse swaddled the two together? [...]
“The mother didn’t want it,” she explained. “She said she was too cold.”
The nurse seemed to think it was strange that I was making such an issue of this. The baby was fine, wasn’t he? And he was. He was sleeping sweetly, a tightly wrapped peanut with a scrunched brown face and his mouth in a lowercase “o.” [...]
Everything about the life the nurse leads—the hours she puts in, the circumstances she endures, the satisfaction she takes in her abilities—shows that she cares. But hypothermia, like the germs that Lister wanted surgeons to battle, is invisible to her. We picture a blue child, suffering right before our eyes. That is not what hypothermia looks like. It is a child who is just a few degrees too cold, too sluggish, too slow to feed. It will be some time before the baby begins to lose weight, stops making urine, develops pneumonia or a bloodstream infection. Long before that happens—usually the morning after the delivery, perhaps the same night—the mother will have hobbled to an auto-rickshaw, propped herself beside her husband, held her new baby tight, and ridden the rutted roads home.
From the nurse’s point of view, she’d helped bring another life into the world. If four per cent of the newborns later died at home, what could that possibly have to do with how she wrapped the mother and child? Or whether she washed her hands before putting on gloves? Or whether the blade with which she cut the umbilical cord was sterilized? [...]
A decade after the landmark findings, the idea remained stalled. Nothing much had changed. Diarrheal disease remained the world’s biggest killer of children under the age of five.
In 1980, however, a Bangladeshi nonprofit organization called BRAC decided to try to get oral rehydration therapy adopted nationwide. The campaign required reaching a mostly illiterate population. The most recent public-health campaign—to teach family planning—had been deeply unpopular. The messages the campaign needed to spread were complicated.
Nonetheless, the campaign proved remarkably successful. A gem of a book published in Bangladesh, “A Simple Solution,” tells the story. The organization didn’t launch a mass-media campaign—only twenty per cent of the population had a radio, after all. It attacked the problem in a way that is routinely dismissed as impractical and inefficient: by going door to door, person by person, and just talking. [...]
They recruited teams of fourteen young women, a cook, and a male supervisor, figuring that the supervisor would protect them from others as they travelled, and the women’s numbers would protect them from the supervisor. They travelled on foot, pitched camp near each village, fanned out door to door, and stayed until they had talked to women in every hut. They worked long days, six days a week. Each night after dinner, they held a meeting to discuss what went well and what didn’t and to share ideas on how to do better. Leaders periodically debriefed them, as well. [...]
The program was stunningly successful. Use of oral rehydration therapy skyrocketed. The knowledge became self-propagating. The program had changed the norms. [...]
As other countries adopted Bangladesh’s approach, global diarrheal deaths dropped from five million a year to two million, despite a fifty-per-cent increase in the world’s population during the past three decades. Nonetheless, only a third of children in the developing world receive oral rehydration therapy. Many countries tried to implement at arm’s length, going “low touch,” without sandals on the ground. As a recent study by the Gates Foundation and the University of Washington has documented, those countries have failed almost entirely. People talking to people is still how the world’s standards change.
Surgeons finally did upgrade their antiseptic standards at the end of the nineteenth century. But, as is often the case with new ideas, the effort required deeper changes than anyone had anticipated. In their blood-slick, viscera-encrusted black coats, surgeons had seen themselves as warriors doing hemorrhagic battle with little more than their bare hands. A few pioneering Germans, however, seized on the idea of the surgeon as scientist. They traded in their black coats for pristine laboratory whites, refashioned their operating rooms to achieve the exacting sterility of a bacteriological lab, and embraced anatomic precision over speed.
The key message to teach surgeons, it turned out, was not how to stop germs but how to think like a laboratory scientist. Young physicians from America and elsewhere who went to Germany to study with its surgical luminaries became fervent converts to their thinking and their standards. They returned as apostles not only for the use of antiseptic practice (to kill germs) but also for the much more exacting demands of aseptic practice (to prevent germs), such as wearing sterile gloves, gowns, hats, and masks. Proselytizing through their own students and colleagues, they finally spread the ideas worldwide.
In childbirth, we have only begun to accept that the critical practices aren’t going to spread themselves. Simple “awareness” isn’t going to solve anything. We need our sales force and our seven easy-to-remember messages. And in many places around the world the concerted, person-by-person effort of changing norms is under way.
I recently asked BetterBirth workers in India whether they’d yet seen a birth attendant change what she does. Yes, they said, but they’ve found that it takes a while. They begin by providing a day of classroom training for birth attendants and hospital leaders in the checklist of practices to be followed. Then they visit them on site to observe as they try to apply the lessons. [...]
Sister Seema Yadav, a twenty-four-year-old, round-faced nurse three years out of school, was one of the trainers. [...] Her first assignment was to follow a thirty-year-old nurse with vastly more experience than she had. Watching the nurse take a woman through labor and delivery, she saw how little of the training had been absorbed. [...] By the fourth or fifth visit, their conversations had shifted. They shared cups of chai and began talking about why you must wash hands even if you wear gloves (because of holes in the gloves and the tendency to touch equipment without them on), and why checking blood pressure matters (because hypertension is a sign of eclampsia, which, when untreated, is a common cause of death among pregnant women). They learned a bit about each other, too. Both turned out to have one child—Sister Seema a four-year-old boy, the nurse an eight-year-old girl. [...]
Soon, she said, the nurse began to change. After several visits, she was taking temperatures and blood pressures properly, washing her hands, giving the necessary medications—almost everything. Sister Seema saw it with her own eyes.
She’d had to move on to another pilot site after that, however. And although the project is tracking the outcomes of mothers and newborns, it will be a while before we have enough numbers to know if a difference has been made. So I got the nurse’s phone number and, with a translator to help with the Hindi, I gave her a call.
It had been four months since Sister Seema’s visit ended. I asked her whether she’d made any changes. Lots, she said. [...]
She said that she had eventually begun to see the effects. Bleeding after delivery was reduced. She recognized problems earlier. She rescued a baby who wasn’t breathing. She diagnosed eclampsia in a mother and treated it. You could hear her pride as she told her stories.
Many of the changes took practice for her, she said. She had to learn, for instance, how to have all the critical supplies—blood-pressure cuff, thermometer, soap, clean gloves, baby respiratory mask, medications—lined up and ready for when she needed them; how to fit the use of them into her routine; how to convince mothers and their relatives that the best thing for a child was to be bundled against the mother’s skin. But, step by step, Sister Seema had helped her to do it. “She showed me how to get things done practically,” the nurse said.
“Why did you listen to her?” I asked. “She had only a fraction of your experience.”
In the beginning, she didn’t, the nurse admitted. “The first day she came, I felt the workload on my head was increasing.” From the second time, however, the nurse began feeling better about the visits. She even began looking forward to them.
“Why?” I asked.
All the nurse could think to say was “She was nice.”
“She was nice?”
“She smiled a lot.”
“That was it?”
“It wasn’t like talking to someone who was trying to find mistakes,” she said. “It was like talking to a friend.”
That, I think, was the answer. Since then, the nurse had developed her own way of explaining why newborns needed to be warmed skin to skin. She said that she now tells families, “Inside the uterus, the baby is very warm. So when the baby comes out it should be kept very warm. The mother’s skin does this.”
I hadn’t been sure if she was just telling me what I wanted to hear. But when I heard her explain how she’d put her own words to what she’d learned, I knew that the ideas had spread. “Do the families listen?” I asked.
“Sometimes they don’t,” she said. “Usually, they do.”
An opportunity cost model of subjective effort and task performance (h/t lukeprog) is a very interesting paper on why we accumulate mental fatigue: Kurzban et al. suggest an opportunity cost model, where intense focus on a single task means that we become less capable of using our mental resources for anything else, and accumulating mental fatigue is part of a cost-benefit calculation that encourages us to shift our attention instead of monomaniacally concentrating on just one task which may not be the most rewarding possible. Correspondingly, the amount of boredom or mental fatigue we experience with a task should correspond with the perceived rewards from other tasks available at the moment. A task will feel more boring/effortful if there's something more rewarding that you could be doing instead (i.e. if the opportunity costs for pursuing your current task are higher), and if it requires exclusive use of cognitive resources that could also be used for something else.
This seems to make an amount of intuitive/introspective sense - I had a much easier time doing stuff without getting bored as a kid, when there simply wasn't much else that I could be doing instead. And it does roughly feel like I would get more quickly bored with things in situations where more engaging pursuits were available. I'm also reminded of the thing I noticed as a kid where, if I borrowed a single book from the library, I would likely get quickly engrossed in it, whereas if I had several alternatives it would be more likely that I'd end up looking at each for a bit but never really get around to reading any of them.
An opportunity cost model also makes more sense than resource models of willpower which, as Kurzban quite persuasively argued in his earlier book, don't really fit together with the fact that the brain is an information-processing system. My computer doesn't need to use any more electricity in situations where it "decides" to do something as opposed to not doing something, but resource models of willpower have tried to postulate that we would need more of e.g. glucose in order to maintain willpower. (Rather, it makes more sense to presume that a low level of blood sugar would shift the cost-benefit calculations in a way that led to e.g. conservation of resources.)
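The core contrast can be sketched in a few lines of code. This is my own toy illustration, not anything from Kurzban et al.'s paper: it just makes concrete the idea that felt effort tracks the value of the best forgone alternative rather than a depleting resource.

```python
# Toy sketch (my illustration, not Kurzban et al.'s model) of the
# opportunity-cost account: the subjective effort of staying on the
# current task grows with the value of the best alternative use of
# the same cognitive resources.

def subjective_effort(current_task_reward, alternative_rewards):
    """Felt effort ~ forgone value of the best available alternative.

    Returns a non-negative 'boredom/fatigue' signal: zero when the
    current task is the best use of our resources, positive otherwise.
    """
    best_alternative = max(alternative_rewards, default=0.0)
    return max(0.0, best_alternative - current_task_reward)

# Reading a library book as a kid with nothing else on offer:
print(subjective_effort(5.0, []))          # 0.0 -> engrossed, no boredom
# The same book when more engaging pursuits are available:
print(subjective_effort(5.0, [8.0, 6.0]))  # 3.0 -> feels effortful
```

Note that nothing in this sketch is consumed or depleted; the "fatigue" signal is purely a comparison, which is exactly the sense in which the opportunity-cost account sidesteps the resource metaphor.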
This isn't just Kurzban et al's opinion - the paper was published in Behavioral and Brain Sciences, which invites diverse commentary on every paper it publishes. In this particular case, it was surprising how muted the defenses of the resource model were. As Kurzban et al point out in their response to responses:
As context for our expectations, consider the impact of one of the central ideas with which we were taking issue, the claim that “willpower” is a resource that is consumed when self-control is exerted. To give a sense of the reach of this idea, in the same month that our target article was accepted for publication Michael Lewis reported in Vanity Fair that no less a figure than President Barack Obama was aware of, endorsed, and based his decision-making process on the general idea that “the simple act of making decisions degrades one’s ability to make further decisions,” with Obama explaining: “I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make” (Lewis 2012).
Add to this the fact that a book based on this idea became a New York Times bestseller (Baumeister & Tierney 2011), the fact that a central paper articulating the idea (Baumeister et al. 1998) has been cited more than 1,400 times, and, more broadly, the vast number of research programs using this idea as a foundation, and we can be forgiven for thinking that we would have kicked up something of a hornet’s nest in suggesting that the willpower-as-resource model was wrong. So we anticipated no small amount of stings from the large number of scholars involved in this research enterprise. These were our expectations before receiving the commentaries.
Our expectations were not met. Take, for example, the reaction to our claim that the glucose version of the resource argument is false (Kurzban 2010a). Inzlicht & Schmeichel, scholars who have published widely in the willpower-as-resource literature, more or less casually bury the model with the remark in their commentary that the “mounting evidence points to the conclusion that blood glucose is not the proximate mechanism of depletion.” (Malecek & Poldrack express a similar view.) Not a single voice has been raised to defend the glucose model, and, given the evidence that we advanced to support our view that this model is unlikely to be correct, we hope that researchers will take the fact that none of the impressive array of scholars submitting comments defended the view to be a good indication that perhaps the model is, in fact, indefensible. Even if the opportunity cost account of effort turns out not to be correct, we are pleased that the evidence from the commentaries – or the absence of evidence – will stand as an indication to audiences that it might be time to move to more profitable explanations of subjective effort.
While the silence on the glucose model is perhaps most obvious, we are similarly surprised by the remarkably light defense of the resource view more generally. As Kool & Botvinick put it, quite correctly in our perception: “Research on the dynamics of cognitive effort have been dominated, over recent decades, by accounts centering on the notion of a limited and depletable ‘resource’” (italics ours). It would seem to be quite surprising, then, that in the context of our critique of the dominant view, arguably the strongest pertinent remarks come from Carter & McCullough, who imply that the strength of the key phenomenon that underlies the resource model – two-task “ego-depletion” studies – might be considerably less than previously thought or perhaps even nonexistent. Despite the confidence voiced by Inzlicht & Schmeichel about the two-task findings, the strongest voices surrounding the model, then, are raised against it, rather than for it. (See also Monterosso & Luo, who are similarly skeptical of the resource account.)
Indeed, what defenses there are of the resource account are not nearly as adamant as we had expected. Hagger wonders if there is “still room for a ‘resource’ account,” given the evidence that cuts against it, conceding that “[t]he ego-depletion literature is problematic.” Further, he relies largely on the argument that the opportunity cost model we offer might be incomplete, thus “leaving room” for other ideas.
(I'm leaving out discussion of some commentaries which do attempt to defend resource models.)
The model still seems to be missing some pieces, though - as one of the commentaries points out, it doesn't really address the fact that some tasks are more inherently boring than others. Some of this might be explained by the argument given in Shouts, Whispers, and the Myth of Willpower: A Recursive Guide to Efficacy (I quote the most relevant bit here), where the author suggests that "self-discipline" in some domain is really about sensitivity to feedback in that domain: a novice in some task doesn't really manage to notice the small nuances that have become so significant for an expert, so they receive little feedback for their actions and it ends up being a boring vigilance task. An expert, in contrast, will instantly notice the effects that their actions have on the system and get feedback on their progress, which in the opportunity cost model could be interpreted as raising the worthwhileness of the task they're working on. If we go with Kurzban et al.'s notion of us acquiring further information about the expected utility of the task we're working on as we continue working on it, then getting feedback from the task could possibly be read as a sign that the task is one in which we can expect to succeed.
Another missing piece with the model is that it doesn't really seem to explain the way that one can come home after a long day at work and then feel too exhausted to do anything at all - it can't really be about opportunity costs if you end up so tired that you can't come up with ~any activity that you'd want to do.
I review William Hirstein's book Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy, in which he proposes a way of connecting the brains of two different people together so that when person A has a conscious experience, person B may also have the same experience. In particular, I compare it to my and Harri Valpola's earlier paper Coalescing Minds, in which we argued that it would be possible to join the brains of two people together in such a way that they'd become a single mind.
Fortunately, it turns out that the book and the paper are actually rather nicely complementary. To briefly summarize the main differences, we intentionally skimmed over many neuroscientific details in order to establish mindmelding as a possible future trend, while Hirstein extensively covers the neuroscience but is mostly interested in mindmelding as a thought experiment. We seek to predict a possible future trend, while Hirstein seeks to argue a philosophical position: Hirstein focuses on philosophical implications while we focus on societal implications. Hirstein talks extensively about the possibility of one person perceiving another’s mental states while both remaining distinct individuals, while we mainly discuss the possibility of two distinct individuals coalescing together into one.
I expect that LW readers might be particularly interested in some of the possible implications of Hirstein's argument, which he himself didn't discuss in the book, but which I speculated on in the review:
Most obviously, if another person’s conscious states could be recorded and replayed, it would open the doors for using this as entertainment. Were it the case that you couldn’t just record and replay anyone’s conscious experience, but learning to correctly interpret the data from another brain would require time and practice, then individual method actors capable of immersing themselves in a wide variety of emotional states might become the new movie stars. Once your brain learned to interpret their conscious states, you could follow them in a wide variety of movie-equivalents, with new actors being hampered by the fact that learning to interpret the conscious states of someone who had only appeared in one or two productions wouldn’t be worth the effort. If mind uploading was available, this might give considerable power to a copy clan consisting of copies of the same actor, each participating in different productions but each having a similar enough brain that learning to interpret one’s conscious states would be enough to give access to the conscious states of all the others.
The ability to perceive various drug- or meditation-induced states of altered consciousness while still having one’s executive processes unhindered and functional would probably be fascinating for consciousness researchers and the general public alike. At the same time, the ability for anyone to experience happiness or pleasure by just replaying another person’s experience of it might finally bring wireheading within easy reach, with all the dangers associated with that.
A Hirstein-style mind meld might possibly also be used as an uploading technique. Some upload proposals suggest compiling a rich database of information about a specific person, and then later using that information to construct a virtual mind whose behavior would be consistent with the information about that person. While creating such a mind based on just behavioral data makes it questionable to what extent the new person would really be a copy of the original, the skeptical argument loses some of its force if we can also include in the data a recording of all the original’s conscious states during various points in their life. If we are able to use the data to construct a mind that would react to the same sensory inputs with the same conscious states as the original did, whose executive processes would manipulate those states in the same ways as the original, and who would take the same actions as the original did, would that mind then not essentially be the same mind as the original mind?
Hirstein’s argumentation is also relevant for our speculations concerning the evolution of mind coalescences. We spoke abstractly about the "preferences" of a mind, suggesting that it might be possible for one mind to extract the knowledge from another mind without inheriting its preferences, and noting that conflicting preferences would be one reason for two minds to avoid coalescing together. However, we did not say much about where in the brain preferences are produced, and what would be actually required for e.g. one mind to extract another’s knowledge without also acquiring its preferences. As the above discussion hopefully shows, some of our preferences are implicit in our automatic habits (the things that we show we value with our daily routines), some in the preprocessing of sensory data that our brains carry out (the things and ideas that are "painted with" positive associations or feelings), and some in the configuration of our executive processes (the actions we actually end up doing in response to novel or conflicting situations). (See also.) This kind of a breakdown seems like very promising material for some neuroscience-aware philosopher to tackle in an attempt to figure out just what exactly preferences are; maybe someone has already done so.
Discusses a number of aspects of social status, including the "social status as currency" concept that Morendil and I previously wrote about.
Now we get to the really interesting stuff: the economic properties of social status.
Let’s start with transactions, since they form the basis of an economy. Status is part of our system for competing over scarce resources, so it should be no surprise that it participates in so many of our daily transactions. Some examples:
- We trade status for favors (and vice versa). This is so common you might not even realize it, but even the simple act of saying “please” and “thank you” accords a nominal amount of status to the person doing the favor. The fact that status is at stake in these transactions becomes clear when the pleasantries are withheld, which we often interpret as an insult (i.e., a threat to our status).
- An apology is a ritual lowering of one’s status to compensate for a (real or perceived) affront. As with gratitude, withholding an apology is perceived as an insult.
- We trade status for information (and vice versa). This is one component of “powertalk,” as illustrated in the Gervais Principle series.
- We trade status for sex (and vice versa), which often goes by the name “seduction.” Sometimes even the institution of marriage functions as a sex-for-status transaction. Dowries illustrate this principle by working against it — they reinforce class/caste systems by making it harder for high-status men to marry low-status women.
- We reward employees in the form of institutionalized status (titles, promotions, parking spots), which trade off against salary as a form of compensation.
- We can turn money into status by means of conspicuous consumption, or status into money by means of endorsement (i.e., being paid to lend status to an endeavor).
But the part that I found the most interesting was the idea of defining communities via their status standards:
Previously we defined status with respect to a community, but we could also flip it around:
A community is a group of people who agree on how to measure status among their members.
In other words, it’s a group of people who share a common status currency. Silicon Valley, for example, is a community oriented around a particular way of measuring status — the ability to influence the growth of engineering companies. But Silicon-Valley status won’t buy you anything in Hollywood — unless you convert it to something that makes sense in the Hollywood economy. (Financial wealth usually does the trick).
This definition allows us not only to draw boundaries between communities (porous and fuzzy though they may be), but also allows us to discuss the strength of a community, i.e., the level of agreement about how to measure status. Google, for example, is a fairly strong community insofar as Googlers agree on how to measure status among themselves, but Google engineering might be an even stronger community.
Treating communities as “status-currency blocs” helps explain how there’s relatively free trade (at low transaction costs) within the community — and also how trade is distorted across community boundaries. The fluctuating ‘exchange rates’ and asymmetric information make cross-community interaction more difficult. When a Google VP walks into a meeting with some employees from Facebook, say, everyone will be unsure about their relative statuses, and the group will have to spend time and effort (and a lot of posturing) in order to figure it out.
The “currency bloc” metaphor also helps explain both the benefits and the costs of institutional re-orgs. Merging two organizations, for example, can increase economic efficiency (by standardizing on a single status currency and thereby facilitating more interaction/trade), but the integration will also require some ‘repricing’ — with resistance from everyone who loses out.
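The exchange-rate metaphor above can be made concrete with a toy calculation. This is my own illustrative sketch, not anything from the article: the function, parameter names, and numbers are all made up, and the point is only to show how an uncertain exchange rate and the "posturing" overhead both eat into status carried across a community boundary.

```python
# Toy sketch (my own, not the article's) of the "status as currency"
# metaphor: within a community, status trades at face value; across
# community boundaries it must be converted, losing value both to an
# uncertain exchange rate and to the posturing needed to establish
# relative standing ("transaction costs").

def convert_status(amount, exchange_rate, transaction_cost=0.5):
    """Value of status after crossing a community boundary.

    exchange_rate:    how well the home community's status currency is
                      recognized in the destination community (0..1+).
    transaction_cost: fraction lost to the time, effort, and posturing
                      needed to establish relative standing (0..1).
    """
    return amount * exchange_rate * (1 - transaction_cost)

# Silicon Valley status spent inside Silicon Valley: full value.
print(convert_status(100, exchange_rate=1.0, transaction_cost=0.0))  # 100.0
# The same status spent in Hollywood: heavily discounted.
print(convert_status(100, exchange_rate=0.5))  # 25.0
```

On this picture, a merger that "standardizes on a single status currency" amounts to pushing exchange_rate toward 1 and transaction_cost toward 0 for interactions between the merged groups.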
The article has a lot more.
By ALEXANDRA WOLFE
As we try to talk by Skype, Jaan Tallinn is fading in and out on my computer screen. Sitting in his living room in Estonia, he is having trouble with his connection, which may seem ironic for a co-founder of Skype, the wildly successful video chat service. But these particular technical difficulties are not Mr. Tallinn's problem these days. Since Skype was sold for $2.6 billion in 2005, making him tens of millions of dollars, he has moved on to bigger issues—like extending the span of a healthy human life and saving the species. And those are just this spring's initiatives.
When the screen finally clears up, Mr. Tallinn comes into view. A youthful 41-year-old, with short blond bangs and fair skin, he could be a poster boy for his latest venture, MetaMed, which promises customers personalized health-care research and analysis of their medical conditions.
Health care is a relatively new focus for Mr. Tallinn, who has been interested in computer science and technology since he was 10. Born in Estonia to an architect mother and a father who directs for film and TV, he didn't get access to a computer until he was 14, when the father of one of his schoolmates selected a group of them to work in his office. There he met the friends who would eventually join him in developing Kazaa, the file-sharing application turned music-subscription service, in 2000 and then Skype in 2002.
He launched MetaMed last March after a $500,000 investment from PayPal co-founder Peter Thiel. So far, the New York-based company has about a dozen employees and 20 clients, half of them friends who are trying it pro bono. The idea emerged from another of Mr. Tallinn's goals: "surviving as a species this century." He has also been developing a new nonprofit called the Cambridge Project for Existential Risk with two academics.
What risks worry him? "The first one is artificial intelligence," he says. "The second is the things that technological progress might create that we're unaware of right now."
He has just read an early draft of a book by his friend Max Tegmark, a physicist at the Massachusetts Institute of Technology, arguing that the only reason nuclear bombs can't be made from instructions downloaded from the Internet is that the laws of physics luckily make it hard to do. "There's no guarantee that wouldn't be possible," he says, referring to homemade nuclear bombs.
His third fear is biological risk. "There could be synthetic viruses that evolution doesn't even know how to create," says Mr. Tallinn. For all practical purposes, he suggests, evolution stopped with the advent of gene technology. "The future of the planet depends much more on technology than evolution," he adds.
Having five children with his wife of 16 years has made many of these ideas more concrete for Mr. Tallinn. "When somebody goes all abstract on me ... saying things like, 'Perhaps humanity doesn't deserve to survive,' I say, 'Look, do you have kids? Do you realize you're talking about the death of your kids or my kids?'" Mr. Tallinn says he's always glad to hear when technology developers have children because it makes them think in the long term.
Glancing away from the screen to the trees outside his house, Mr. Tallinn laments that most people don't take these longer-term risks seriously.
"In general, it seems to me that people in society are bad at dealing with things that have never happened and overreact to things that have happened and happened recently," he says. As he notes, more people die slipping in the shower than in plane crashes, train accidents and terrorist attacks combined. "Since 9/11, more Americans have been killed by falling furniture than by terrorists," says Mr. Tallinn.
And these, in his view, may not be humankind's only blind spots. Mr. Tallinn is open to the possibility that our lives and consciousness are all part of a computer simulation. "As our computers and technology get better at making virtual worlds, it's reasonable to expect them to be able to create virtual worlds that are indistinguishable from the real one," he says. "So if you're in a single-history universe, with one real one and many simulations, the chances of being in the simulation are higher than the real thing."
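The counting argument Tallinn gestures at is simple enough to write down. The following is my own minimal sketch of it, assuming one real history, some number of indistinguishable simulations of it, and a uniform prior over all the copies:

```python
# Minimal sketch (my own) of the simulation counting argument:
# one "real" history plus n indistinguishable simulations of it.
# With a uniform prior over all copies, the odds of being the real
# one are 1 against n.

def p_simulated(n_simulations):
    """P(we are in a simulation), given one real world and
    n_simulations indistinguishable simulated copies."""
    return n_simulations / (n_simulations + 1)

print(p_simulated(1))    # 0.5   -> one sim: a coin flip
print(p_simulated(999))  # 0.999 -> many sims: near certainty
```

The force of the argument thus depends entirely on how many indistinguishable simulations one expects to be run, which is exactly the step Tallinn hedges on.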
If we are indeed living in a simulation, should we behave differently? "What we should do depends on what kind of evidence we have that we are in a simulation ... and then the critical question is why the simulation is being run." Mr. Tallinn won't say whether or not he believes we are in the real world or a computerized fake. "Once you're in a simulation you don't even know—it could be that it's not even you."
At the moment, Mr. Tallinn's virtual presence is getting fuzzy again, and his image finally fades from my screen. Calling back with his video turned off, he assures me that he is no pessimist. He looks forward to self-driving cars, which "might completely change the logistics of civilization," he says. With MetaMed, he's excited by the prospect of more advanced biomonitors. And then there's the possibility of cheap gene sequencing.
As Mr. Tallinn sees it, his career, from Skype to MetaMed to the Cambridge Project for Existential Risk, has followed a progressive arc. He recalls how he introduced himself at a recent party: "First I saved about one million human relationships," with Skype, but it "doesn't make sense to save human relationships if you don't make sure [people] live longer, and then make sure they don't get destroyed."
You walk into a laboratory, and you read a set of instructions that tell you that your task is to decide how much of a $10 pie you want to give to an anonymous other person who signed up for the experimental session.
This describes, more or less, the Dictator Game, a staple of behavioral economics with a history dating back more than a quarter of a century. The Dictator Game (DG) might not be the Drosophila melanogaster of behavioral economics – the Prisoner’s Dilemma can lay plausible claim to that prized analogy – but it could reasonably aspire to an only slightly more modest title, perhaps the E. coli of the discipline. Since the original work, more than 20,000 observations in the DG have been reported.
How much would participants in a Dictator Game give to the other person if they did not know they were in a Dictator Game study? Simply following me around during the day and recording how much cash I dispense won’t answer this question because in the DG, the money is provided by the experimenter. So, to build a parallel design, the method used must move money to subjects as a windfall so that we can observe how much of this “house money” they choose to give away.
And that is what Winking and Mizer did in a paper now in press and available online (paywall) in Evolution and Human Behavior, using participants, fittingly enough, in Las Vegas. Here’s what they did. Two confederates were needed. The first, destined to become the “recipient,” was occupied on a phone call near a bus stop in Vegas. The second confederate approached lone individuals at the bus stop, indicated that they were late for a ride to the airport, and asked the subject if they wanted the $20 in casino chips still in the confederate’s possession, scamming people into, rather than out of, money, in sharp contradiction of the deep traditions of Las Vegas. The question was how many chips the fortunate subject transferred to the nearby confederate.
In a second condition, the confederate with the chips added a comment to the effect that the subject could “split it with that guy however you want,” indicating the first confederate. This condition brings the study a bit closer, but not much closer, to lab conditions. In a third condition, subjects were asked if they wanted to participate in a study, and then did so along the lines of the usual DG, making the treatment considerably closer to traditional lab-based conditions.
The difference between the first two treatments and the third treatment is interesting, but, as I said at the beginning, the DG should be thought of as a measuring tool. Figure 1 shows how many chips people give away in the DG in the three treatments. In conditions 1 and 2, the number of people (out of 60) who gave at least one chip to the second confederate was… zero. To the extent you think that this method answers the question, how much Dictator Game giving is due to people knowing they’re in an experiment, the answer is, “all of it.”
Link to paper (paywalled).
"A group blog, More Right is a place to discuss the many things that are touched by politics that we prefer wouldn’t be, as well as right wing ideas in general. It grew out of the correspondences among like minded people in late 2012, who first began their journey studying the findings of modern cognitive science on the failings of human reasoning and ended it reading serious 19th century gentlemen denouncing democracy. Surveying modernity, we found cracks in its façade. Findings and seemingly correct ideas, carefully bolted down and hidden, met with disapproving stares and inarticulate denunciation when unearthed. This only whetted our appetites. Proceeding from the surface to the foundations, we found them lacking. This is reflected in the spirit of the site."
A Guardian article on the impact of climate change on food security. This is worrying (albeit perhaps not a global catastrophic (or existential) risk). It has the potential to wipe out the gains made against extreme poverty in the last few decades.
Should we be so pessimistic? Climate change might be averted through government action or a technological fix; or the poorest might get rich enough to be protected from this insecurity; or we could see a second 'Green Revolution' with GM, etc. I've also seen some discussion that climate change could in fact increase food cultivation - in Russia and Canada, for example.
How do people feel about this - optimistic or pessimistic?
Now, I had been taught in school that scurvy had been conquered in 1747, when the Scottish physician James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease. From that point on, we were told, the Royal Navy had required a daily dose of lime juice to be mixed in with sailors’ grog, and scurvy ceased to be a problem on long ocean voyages.
But here was a Royal Navy surgeon in 1911 apparently ignorant of what caused the disease, or how to cure it. Somehow a highly-trained group of scientists at the start of the 20th century knew less about scurvy than the average sea captain in Napoleonic times. Scott left a base abundantly stocked with fresh meat, fruits, apples, and lime juice, and headed out on the ice for five months with no protection against scurvy, all the while confident he was not at risk. What happened?
This article is a vivid illustration of just how nonlinear and downright messy science actually is, and how little the superficial presentation of science as neat "progress" reflects the reality of the field.
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation.
We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread.
We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing.
We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
This is a fascinating article with many, many interesting points. I'm excerpting some of them below, but mostly just to get you to read it: if I were to quote everything interesting, I'd have to pretty much copy the entire (long!) article.
Rumors and fiction
[...] A related but perhaps more surprising source of misinformation is literary fiction. People extract knowledge even from sources that are explicitly identified as fictional. This process is often adaptive, because fiction frequently contains valid information about the world. For example, non-Americans’ knowledge of U.S. traditions, sports, climate, and geography partly stems from movies and novels, and many Americans know from movies that Britain and Australia have left-hand traffic. By definition, however, fiction writers are not obliged to stick to the facts, which creates an avenue for the spread of misinformation, even by stories that are explicitly identified as fictional. A study by Marsh, Meade, and Roediger (2003) showed that people relied on misinformation acquired from clearly fictitious stories to respond to later quiz questions, even when these pieces of misinformation contradicted common knowledge. In most cases, source attribution was intact, so people were aware that their answers to the quiz questions were based on information from the stories, but reading the stories also increased people’s illusory belief of prior knowledge. In other words, encountering misinformation in a fictional context led people to assume they had known it all along and to integrate this misinformation with their prior knowledge (Marsh & Fazio, 2006; Marsh et al., 2003).
The effects of fictional misinformation have been shown to be stable and difficult to eliminate. Marsh and Fazio (2006) reported that prior warnings were ineffective in reducing the acquisition of misinformation from fiction, and that acquisition was only reduced (not eliminated) under conditions of active on-line monitoring—when participants were instructed to actively monitor the contents of what they were reading and to press a key every time they encountered a piece of misinformation (see also Eslick, Fazio, & Marsh, 2011). Few people would be so alert and mindful when reading fiction for enjoyment. These links between fiction and incorrect knowledge are particularly concerning when popular fiction pretends to accurately portray science but fails to do so, as was the case with Michael Crichton’s novel State of Fear. The novel misrepresented the science of global climate change but was nevertheless introduced as “scientific” evidence into a U.S. Senate committee (Allen, 2005; Leggett, 2005).
Writers of fiction are expected to depart from reality, but in other instances, misinformation is manufactured intentionally. There is considerable peer-reviewed evidence pointing to the fact that misinformation can be intentionally or carelessly disseminated, often for political ends or in the service of vested interests, but also through routine processes employed by the media. [...]
Assessing the Truth of a Statement: Recipients’ Strategies
Misleading information rarely comes with a warning label. People usually cannot recognize that a piece of information is incorrect until they receive a correction or retraction. For better or worse, the acceptance of information as true is favored by tacit norms of everyday conversational conduct: Information relayed in conversation comes with a “guarantee of relevance” (Sperber & Wilson, 1986), and listeners proceed on the assumption that speakers try to be truthful, relevant, and clear, unless evidence to the contrary calls this default into question (Grice, 1975; Schwarz, 1994, 1996). Some research has even suggested that to comprehend a statement, people must at least temporarily accept it as true (Gilbert, 1991). On this view, belief is an inevitable consequence of—or, indeed, precursor to—comprehension.
Although suspension of belief is possible (Hasson, Simmons, & Todorov, 2005; Schul, Mayo, & Burnstein, 2008), it seems to require a high degree of attention, considerable implausibility of the message, or high levels of distrust at the time the message is received. So, in most situations, the deck is stacked in favor of accepting information rather than rejecting it, provided there are no salient markers that call the speaker’s intention of cooperative conversation into question. Going beyond this default of acceptance requires additional motivation and cognitive resources: If the topic is not very important to you, or you have other things on your mind, misinformation will likely slip in. [...]
Is the information compatible with what I believe?
As numerous studies in the literature on social judgment and persuasion have shown, information is more likely to be accepted by people when it is consistent with other things they assume to be true (for reviews, see McGuire, 1972; Wyer, 1974). People assess the logical compatibility of the information with other facts and beliefs. Once a new piece of knowledge-consistent information has been accepted, it is highly resistant to change, and the more so the larger the compatible knowledge base is. From a judgment perspective, this resistance derives from the large amount of supporting evidence (Wyer, 1974); from a cognitive-consistency perspective (Festinger, 1957), it derives from the numerous downstream inconsistencies that would arise from rejecting the prior information as false. Accordingly, compatibility with other knowledge increases the likelihood that misleading information will be accepted, and decreases the likelihood that it will be successfully corrected.
When people encounter a piece of information, they can check it against other knowledge to assess its compatibility. This process is effortful, and it requires motivation and cognitive resources. A less demanding indicator of compatibility is provided by one’s meta-cognitive experience and affective response to new information. Many theories of cognitive consistency converge on the assumption that information that is inconsistent with one’s beliefs elicits negative feelings (Festinger, 1957). Messages that are inconsistent with one’s beliefs are also processed less fluently than messages that are consistent with one’s beliefs (Winkielman, Huber, Kavanagh, & Schwarz, 2012). In general, fluently processed information feels more familiar and is more likely to be accepted as true; conversely, disfluency elicits the impression that something doesn’t quite “feel right” and prompts closer scrutiny of the message (Schwarz et al., 2007; Song & Schwarz, 2008). This phenomenon is observed even when the fluent processing of a message merely results from superficial characteristics of its presentation. For example, the same statement is more likely to be judged as true when it is printed in high rather than low color contrast (Reber & Schwarz, 1999), presented in a rhyming rather than nonrhyming form (McGlone & Tofighbakhsh, 2000), or delivered in a familiar rather than unfamiliar accent (Lev-Ari & Keysar, 2010). Moreover, misleading questions are less likely to be recognized as such when printed in an easy-to-read font (Song & Schwarz, 2008).
As a result, analytic as well as intuitive processing favors the acceptance of messages that are compatible with a recipient’s preexisting beliefs: The message contains no elements that contradict current knowledge, is easy to process, and “feels right.”
Is the story coherent?
Whether a given piece of information will be accepted as true also depends on how well it fits a broader story that lends sense and coherence to its individual elements. People are particularly likely to use an assessment strategy based on this principle when the meaning of one piece of information cannot be assessed in isolation because it depends on other, related pieces; use of this strategy has been observed in basic research on mental models (for a review, see Johnson-Laird, 2012), as well as extensive analyses of juries’ decision making (Pennington & Hastie, 1992, 1993).
A story is compelling to the extent that it organizes information without internal contradictions in a way that is compatible with common assumptions about human motivation and behavior. Good stories are easily remembered, and gaps are filled with story-consistent intrusions. Once a coherent story has been formed, it is highly resistant to change: Within the story, each element is supported by the fit of other elements, and any alteration of an element may be made implausible by the downstream inconsistencies it would cause. Coherent stories are easier to process than incoherent stories are (Johnson-Laird, 2012), and people draw on their processing experience when they judge a story’s coherence (Topolinski, 2012), again giving an advantage to material that is easy to process. [...]
Is the information from a credible source?
[...] People’s evaluation of a source’s credibility can be based on declarative information, as in the above examples, as well as experiential information. The mere repetition of an unknown name can cause it to seem familiar, making its bearer “famous overnight” (Jacoby, Kelley, Brown, & Jasechko, 1989)—and hence more credible. Even when a message is rejected at the time of initial exposure, that initial exposure may lend it some familiarity-based credibility if the recipient hears it again.
Do others believe this information?
Repeated exposure to a statement is known to increase its acceptance as true (e.g., Begg, Anas, & Farinacci, 1992; Hasher, Goldstein, & Toppino, 1977). In a classic study of rumor transmission, Allport and Lepkin (1945) observed that the strongest predictor of belief in wartime rumors was simple repetition. Repetition effects may create a perceived social consensus even when no consensus exists. Festinger (1954) referred to social consensus as a “secondary reality test”: If many people believe a piece of information, there’s probably something to it. Because people are more frequently exposed to widely shared beliefs than to highly idiosyncratic ones, the familiarity of a belief is often a valid indicator of social consensus. But, unfortunately, information can seem familiar for the wrong reason, leading to erroneous perceptions of high consensus. For example, Weaver, Garcia, Schwarz, and Miller (2007) exposed participants to multiple iterations of the same statement, provided by the same communicator. When later asked to estimate how widely the conveyed belief is shared, participants estimated consensus to be greater the more often they had read the identical statement from the same, single source. In a very real sense, a single repetitive voice can sound like a chorus. [...]
The extent of pluralistic ignorance (or of the false-consensus effect) can be quite striking: In Australia, people with particularly negative attitudes toward Aboriginal Australians or asylum seekers have been found to overestimate public support for their attitudes by 67% and 80%, respectively (Pedersen, Griffiths, & Watt, 2008). Specifically, although only 1.8% of people in a sample of Australians were found to hold strongly negative attitudes toward Aboriginals, those few individuals thought that 69% of all Australians (and 79% of their friends) shared their fringe beliefs. This represents an extreme case of the false-consensus effect. [...]
The Continued Influence Effect: Retractions Fail to Eliminate the Influence of Misinformation
We first consider the cognitive parameters of credible retractions in neutral scenarios, in which people have no inherent reason or motivation to believe one version of events over another. Research on this topic was stimulated by a paradigm pioneered by Wilkes and Leatherbarrow (1988) and H. M. Johnson and Seifert (1994). In it, people are presented with a fictitious report about an event unfolding over time. The report contains a target piece of information: For some readers, this target information is subsequently retracted, whereas for readers in a control condition, no correction occurs. Participants’ understanding of the event is then assessed with a questionnaire, and the number of clear and uncontroverted references to the target (mis-)information in their responses is tallied.
A stimulus narrative commonly used in this paradigm involves a warehouse fire that is initially thought to have been caused by gas cylinders and oil paints that were negligently stored in a closet (e.g., Ecker, Lewandowsky, Swire, & Chang, 2011; H. M. Johnson & Seifert, 1994; Wilkes & Leatherbarrow, 1988). Some participants are then presented with a retraction, such as “the closet was actually empty.” A comprehension test follows, and participants’ number of references to the gas and paint in response to indirect inference questions about the event (e.g., “What caused the black smoke?”) is counted. In addition, participants are asked to recall some basic facts about the event and to indicate whether they noticed any retraction.
Research using this paradigm has consistently found that retractions rarely, if ever, have the intended effect of eliminating reliance on misinformation, even when people believe, understand, and later remember the retraction (e.g., Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011; Ecker, Lewandowsky, & Tang, 2010; Fein, McCloskey, & Tomlinson, 1997; Gilbert, Krull, & Malone, 1990; Gilbert, Tafarodi, & Malone, 1993; H. M. Johnson & Seifert, 1994, 1998, 1999; Schul & Mazursky, 1990; van Oostendorp, 1996; van Oostendorp & Bonebakker, 1999; Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999). In fact, a retraction will at most halve the number of references to misinformation, even when people acknowledge and demonstrably remember the retraction (Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011); in some studies, a retraction did not reduce reliance on misinformation at all (e.g., H. M. Johnson & Seifert, 1994).
When misinformation is presented through media sources, the remedy is the presentation of a correction, often in a temporally disjointed format (e.g., if an error appears in a newspaper, the correction will be printed in a subsequent edition). In laboratory studies, misinformation is often retracted immediately and within the same narrative (H. M. Johnson & Seifert, 1994). Despite this temporal and contextual proximity to the misinformation, retractions are ineffective. More recent studies (Seifert, 2002) have examined whether clarifying the correction (minimizing misunderstanding) might reduce the continued influence effect. In these studies, the correction was thus strengthened to include the phrase “paint and gas were never on the premises.” Results showed that this enhanced negation of the presence of flammable materials backfired, making people even more likely to rely on the misinformation in their responses. Other additions to the correction were found to mitigate to a degree, but not eliminate, the continued influence effect: For example, when participants were given a rationale for how the misinformation originated, such as “a truckers’ strike prevented the expected delivery of the items,” they were somewhat less likely to make references to it. Even so, the influence of the misinformation could still be detected. The wealth of studies on this phenomenon has documented its pervasive effects, showing that it is extremely difficult to return the beliefs of people who have been exposed to misinformation to a baseline similar to those of people who were never exposed to it.
Multiple explanations have been proposed for the continued influence effect. We summarize their key assumptions next. [...]
Concise recommendations for practitioners
[...] We summarize the main points from the literature in Figure 1 and in the following list of recommendations:
- Consider what gaps in people’s mental event models are created by debunking and fill them using an alternative explanation.
- Use repeated retractions to reduce the influence of misinformation, but note that the risk of a backfire effect increases when the original misinformation is repeated in retractions and thereby rendered more familiar.
- To avoid making people more familiar with misinformation (and thus risking a familiarity backfire effect), emphasize the facts you wish to communicate rather than the myth.
- Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.
- Ensure that your material is simple and brief. Use clear language and graphs where appropriate. If the myth is simpler and more compelling than your debunking, it will be cognitively more attractive, and you will risk an overkill backfire effect.
- Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect, which is strongest among those with firmly held beliefs. The most receptive people will be those who are not strongly fixed in their views.
- If you must present evidence that is threatening to the audience’s worldview, you may be able to reduce the worldview backfire effect by presenting your content in a worldview-affirming manner (e.g., by focusing on opportunities and potential benefits rather than risks and threats) and/or by encouraging self-affirmation.
- You can also circumvent the role of the audience’s worldview by focusing on behavioral techniques, such as the design of choice architectures, rather than overt debiasing.
- Consumers with higher scores on a cognitive reflection test are more inclined to buy products when told more about them; for consumers with lower CRT scores it's the reverse.
- Consumers with higher CRT scores felt that they understood the products better after being told more; consumers with lower CRT scores felt that they understood them worse.
- If subjects are asked to give an explanation of how products work and then asked how well they understand and how willing they'd be to pay, high-CR subjects don't change much in either but low-CR subjects report feeling that they understand worse and that they're willing to pay less.
- Conclusion: it looks as if when you give low-CR subjects more information about a product, they feel they understand it less, don't like that feeling, and become less willing to pay.
If this is right (which seems plausible enough) then it presumably applies more broadly: e.g., to what tactics are most effective in political debate. Though it's hardly news in that area that making people feel stupid isn't the best way to persuade them of things.
Abstract of the paper:
People differ in their threshold for satisfactory causal understanding and therefore in the type of explanation that will engender understanding and maximize the appeal of a novel product. Explanation fiends are dissatisfied with surface understanding and desire detailed mechanistic explanations of how products work. In contrast, explanation foes derive less understanding from detailed than coarse explanations and downgrade products that are explained in detail. Consumers’ attitude toward explanation is predicted by their tendency to deliberate, as measured by the cognitive reflection test. Cognitive reflection also predicts susceptibility to the illusion of explanatory depth, the unjustified belief that one understands how things work. When explanation foes attempt to explain, it exposes the illusion, which leads to a decrease in willingness to pay. In contrast, explanation fiends are willing to pay more after generating explanations. We hypothesize that those low in cognitive reflection are explanation foes because explanatory detail shatters their illusion of understanding.
Related post: Muehlhauser-Wang Dialogue.
Abstract. An AGI system should be able to manage motivations or goals that are persistent, spontaneous, mutually restricting, and changing over time. A mechanism for handling such goals is introduced and discussed.
From the discussion section:
The major conclusion argued in this paper is that an AGI system should always maintain a goal structure (or whatever it is called) which contains multiple goals that are separately specified, with the properties that
- Some of the goals are accurately specified, and can be fully achieved, while some others are vaguely specified and only partially achievable, but nevertheless have impact on the system's decisions.
- The goals may conflict with each other on what the system should do at a moment, and cannot be achieved all together. Very often the system has to make compromises among the goals.
- Due to restrictions on computational resources, the system cannot take all existing goals into account when making each decision, nor can it keep a complete record of the goal-derivation history.
- The designers and users are responsible for the input goals of an AGI system, from which all the other goals are derived, according to the system's experience. There is no guarantee that the derived goals will be logically consistent with the input goals, except in highly simplified situations.
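The goal structure described in these bullets can be illustrated with a toy sketch. To be clear, this is my own minimal illustration, not code from the paper; all class and function names are invented, and the scoring scheme is a stand-in for whatever mechanism an actual AGI system would use:

```python
class Goal:
    """A goal with a priority weight (a toy stand-in for Wang's goal structure)."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

class Action:
    def __init__(self, name, scores):
        self.name = name
        self.scores = scores  # goal name -> how well this action serves it (0..1)

def choose_action(goals, actions, budget=2):
    # Resource restriction: only the top-priority subset of goals is
    # consulted for any single decision, so lower-priority goals still
    # exist but exert no influence on this particular choice.
    considered = sorted(goals, key=lambda g: g.priority, reverse=True)[:budget]
    def utility(action):
        # Goals conflict: an action's utility is a compromise across the
        # goals under consideration, weighted by their priorities.
        return sum(g.priority * action.scores.get(g.name, 0.0) for g in considered)
    return max(actions, key=utility)

goals = [Goal("stay_safe", 0.9), Goal("acquire_resources", 0.6), Goal("explore", 0.3)]
actions = [
    Action("retreat", {"stay_safe": 1.0}),
    Action("forage", {"acquire_resources": 0.8, "stay_safe": 0.4}),
    Action("wander", {"explore": 0.9}),
]
print(choose_action(goals, actions).name)  # "retreat": safety dominates under a tight budget
```

Note how "explore" never influences the decision when the budget is 2: the goal persists in the structure but is priced out of this choice, which is the kind of compromise-under-limited-resources behavior the bullets describe.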
One area that is closely related to goal management is AI ethics. Previous discussions have focused on the goal the designers assign to an AGI system (the "super goal" or "final goal"), with the implicit assumption that such a goal will decide the consequences caused by the A(G)I system. However, the above analysis shows that though the input goals are indeed important, they are not the dominant factor that decides the broad impact of AI on human society. Since no AGI system can be omniscient and omnipotent, being "general-purpose" means such a system has to handle problems for which its knowledge and resources are insufficient [16, 18], and one direct consequence is that its actions may produce unanticipated results. This consequence, plus the previous conclusion that the effective goal for an action may be inconsistent with the input goals, renders many of the previous suggestions mostly irrelevant to AI ethics.
For example, Yudkowsky's "Friendly AI" agenda is based on the assumption that "a true AI might remain knowably stable in its goals, even after carrying out a large number of self-modifications". The problem with this assumption is that unless we are talking about an axiomatic system with unlimited resources, we cannot assume the system can accurately know the consequences of its actions. Furthermore, as argued previously, the goals in an intelligent system inevitably change as its experience grows, which is not necessarily a bad thing - after all, our "human nature" gradually grows out of, and deviates from, our "animal nature", at both the species level and the individual level.
Omohundro argued that no matter what input goals are given to an AGI system, it usually will derive some common "basic drives", including "be self-protective" and "to acquire resources", which leads some people to worry that such a system will become unethical. According to our previous analysis, the emergence of these goals is indeed very likely, but that is only half of the story. A system with a resource-acquisition goal does not necessarily attempt to achieve it at all costs, without considering its other goals. Again, consider human beings - everyone has some goals that can become dangerous (either to oneself or to others) if pursued at all costs. The proper solution, for both human ethics and AGI ethics, is to prevent this kind of goal from becoming dominant, rather than from being formed.
Nobel prize-winner Daniel Kahneman has issued a strongly worded call to one group of psychologists to restore the credibility of their field by creating a replication ring to check each others’ results.
Kahneman, a psychologist at Princeton University in New Jersey, addressed his open e-mail to researchers who work on social priming, the study of how subtle cues can unconsciously influence our thoughts or behaviour. For example, volunteers might walk more slowly down a corridor after seeing words related to old age, or fare better in general-knowledge tests after writing down the attributes of a typical professor.
Could this be a tiny step towards an AGI?
'Blue Brain' Project Accurately Predicts Connections Between Neurons
One of the greatest challenges in neuroscience is to identify the map of synaptic connections between neurons. Called the "connectome," it is the holy grail that will explain how information flows in the brain. In a landmark paper, published the week of 17th of September in the Proceedings of the National Academy of Sciences, the EPFL's Blue Brain Project (BBP) has identified key principles that determine synapse-scale connectivity by virtually reconstructing a cortical microcircuit and comparing it to a mammalian sample. These principles now make it possible to predict the locations of synapses in the neocortex.
"This is a major breakthrough, because it would otherwise take decades, if not centuries, to map the location of each synapse in the brain and it also makes it so much easier now to build accurate models," says Henry Markram, head of the BBP.
A longstanding neuroscientific mystery has been whether neurons grow independently, simply taking what they get as their branches bump into each other, or whether each neuron's branches are specifically guided by chemical signals to find all of their targets. To solve the mystery, researchers looked at a virtual reconstruction of a cortical microcircuit to see where the branches bumped into each other. To their great surprise, they found that the locations in the model matched those of synapses found in the equivalent real-brain circuit with an accuracy ranging from 75 percent to 95 percent.
This means that neurons grow as independently of each other as physically possible and mostly form synapses at the locations where they randomly bump into each other. A few exceptions were also discovered, pointing to special cases where neurons use signals to change the statistical connectivity. By taking these exceptions into account, the Blue Brain team can now make a near-perfect prediction of the locations of all the synapses formed inside the circuit.
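The principle that synapses form wherever independently grown branches happen to come close (often called "Peters' rule" in the connectomics literature) can be sketched with a toy Monte Carlo simulation. This is my own illustration of the idea, not the BBP's method; the point sets, distances, and scale are all made up:

```python
import math
import random

random.seed(0)

def random_points(n, size=100.0):
    # Idealize independently grown branch segments as random 3D points
    # scattered through a cube of tissue.
    return [tuple(random.uniform(0, size) for _ in range(3)) for _ in range(n)]

def predicted_synapses(axon_pts, dendrite_pts, touch_dist=5.0):
    # Peters'-rule-style prediction: a potential synapse wherever an axon
    # point and a dendrite point happen to fall within touching distance,
    # with no chemical guidance involved.
    contacts = []
    for a in axon_pts:
        for d in dendrite_pts:
            if math.dist(a, d) <= touch_dist:
                contacts.append((a, d))
    return contacts

axons = random_points(200)
dendrites = random_points(200)
print(len(predicted_synapses(axons, dendrites)), "predicted contact sites")
```

The BBP result amounts to the claim that a prediction of roughly this form, applied to realistic neuron morphologies, already matches 75 to 95 percent of real synapse locations, with chemical signaling needed only for the exceptions.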
The goal of the BBP is to integrate knowledge from all the specialized branches of neuroscience, to derive from it the fundamental principles that govern brain structure and function, and ultimately, to reconstruct the brains of different species -- including the human brain -- in silico. The current paper provides yet another proof-of-concept for the approach, by demonstrating for the first time that the distribution of synapses or neuronal connections in the mammalian cortex can, to a large extent, be predicted.
To achieve these results, a team from the Blue Brain Project set about virtually reconstructing a cortical microcircuit based on unparalleled data about the geometrical and electrical properties of neurons -- data from nearly 20 years of painstaking experimentation on slices of living brain tissue. Each neuron in the circuit was reconstructed into a 3D model on a powerful Blue Gene supercomputer. About 10,000 virtual neurons were packed into a 3D space in random positions according to the density and ratio of morphological types found in corresponding living tissue. The researchers then compared the model back to an equivalent brain circuit from a real mammalian brain.
A Major Step Towards Accurate Models of the Brain
This discovery also explains why the brain can withstand damage and indicates that the positions of synapses in all brains of the same species are more similar than different. "Positioning synapses in this way is very robust," says computational neuroscientist and first author Sean Hill, "We could vary density, position, orientation, and none of that changed the distribution of positions of the synapses."
They went on to discover that the synapse positions are robust only as long as each neuron's morphology differs slightly from that of the others, explaining another mystery of the brain -- why neurons are not all identical in shape. "It's the diversity in the morphology of neurons that makes brain circuits of a particular species basically the same and highly robust," says Hill.
Overall this work represents a major acceleration in the ability to construct detailed models of the nervous system. The results provide important insights into the basic principles that govern the wiring of the nervous system, throwing light on how robust cortical circuits are constructed from highly diverse populations of neurons -- an essential step towards understanding how the brain functions. They also underscore the value of the BBP's constructivist approach. "Although systematically integrating data across a wide range of scales is slow and painstaking, it allows us to derive fundamental principles of brain structure and hence function," explains Hill.
To my knowledge LessWrong hasn't received a great deal of media coverage. So, I was surprised when I came across an article via a Facebook friend which also appeared on the cover of the New York Observer today. However, I was disappointed upon reading it, as I don't think it is an accurate reflection of the community. It certainly doesn't reflect my experience with the LW communities in Toronto and Waterloo.
I thought it would be interesting to see what the broader LessWrong community thought about this article. I think it would make for a good discussion.
Possible conversation topics:
- This article will likely reach many people that have never heard of LessWrong before. Is this a good introduction to LessWrong for those people?
- Does this article give an accurate characterization of the LessWrong community?
Edit 1: Added some clarification about my view on the article.
Edit 2: Re-added link using “nofollow” attribute.
[link] Prepared to wait? New research challenges the idea that we favour small rewards now over bigger later
The old idea that we make decisions like rational agents has given way over the last few decades to a more realistic, psychologically informed picture that recognises the biases and mental short-cuts that sway our thinking. Supposedly one of these is hyperbolic discounting - our tendency to place disproportionate value on immediate rewards, whilst progressively undervaluing distant rewards the further in the future they stand. But not so fast, say Daniel Read at Warwick Business School and his colleagues with a new paper that fails to find any evidence for the phenomenon.
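Hyperbolic discounting is commonly modeled as V = A / (1 + kD), where A is the reward amount, D the delay, and k a fitted discount parameter. A minimal sketch (parameter values are illustrative, not from Read's paper) shows the signature prediction the model makes and Read's team failed to find: preference reversal when a common delay is added to both options.

```python
import math

def hyperbolic(amount, delay, k=0.1):
    # V = A / (1 + k*D): value falls off steeply at first, then flattens.
    return amount / (1 + k * delay)

def exponential(amount, delay, r=0.1):
    # V = A * exp(-r*D): the classical benchmark with a constant discount
    # rate, under which preferences never reverse with added delay.
    return amount * math.exp(-r * delay)

# $50 now beats $100 in 30 days under hyperbolic discounting...
print(hyperbolic(50, 0) > hyperbolic(100, 30))        # True
# ...but pushing both rewards 365 days into the future flips the preference.
print(hyperbolic(50, 365) > hyperbolic(100, 395))     # False
# Exponential discounting keeps the same preference in both cases.
print(exponential(50, 365) > exponential(100, 395))   # True
```

It is exactly this reversal pattern, not impatience per se, that distinguishes hyperbolic from exponential discounting in experiments, which is why a failure to observe it challenges the hyperbolic account.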
In the very back of Kaj's excellent How to Run a Successful Less Wrong Meetup Group booklet, he has a recommended reading section, including the classic book How to Win Friends and Influence People.
It just so happens that not only have I read the book myself, but I have written up a concise summary of the core advice here. Kaj suggested that I post this on the discussion section because others might find it useful, so here you go!
I suspect that more people are willing to read a summary of a book from the 1930s than an actual book from the 1930s. What I will say for reading the long-form text is that it can be more useful for internalizing these concepts, since it gives worked examples of each of them. It is far too easy to know abstractly what you need to do, and much harder to actually act on that knowledge...
The International Journal of Machine Consciousness recently published its special issue on mind uploading. The papers are paywalled, but as the editor of the issue, Ben Goertzel has put together a page that links to the authors' preprints of the papers. Preprint versions are available for most of the papers.
Below is a copy of the preprint page as it was at the time that this post was made. Note though that I'll be away for a couple of days, and thus be unable to update this page if new links get added.
This page gathers links to informal, “preprint” versions of the papers in that Special Issue, hosted on the paper authors’ websites. These preprint versions are not guaranteed to be identical to the final published versions, but the content should be essentially the same. The list below contains the whole table of contents of the Special Issue; at the moment links to preprints are still being added to the list items as authors post them on their sites.
BEN GOERTZEL and MATTHEW IKLÉ
RANDAL A. KOENE
SIM BAMFORD
EXPERIMENTAL RESEARCH IN WHOLE BRAIN EMULATION: THE NEED FOR INNOVATIVE IN VIVO MEASUREMENT TECHNIQUES (RANDAL A. KOENE)
AVAILABLE TOOLS FOR WHOLE BRAIN EMULATION (DIANA DECA)
KENNETH J. HAYWORTH
NON-DESTRUCTIVE WHOLE-BRAIN MONITORING USING NANOROBOTS: NEURAL ELECTRICAL DATA RATE REQUIREMENTS (NUNO R. B. MARTINS, WOLFRAM ERLHAGEN and ROBERT A. FREITAS, JR.)
MARTINE ROTHBLATT
WHOLE-PERSONALITY EMULATION (WILLIAM SIMS BAINBRIDGE)
BEN GOERTZEL
MICHAEL HAUSKELLER
BRANDON OTO
TRANS-HUMAN COGNITIVE ENHANCEMENT, PHENOMENAL CONSCIOUSNESS AND THE EXTENDED MIND (TADEUSZ WIESLAW ZAWIDZKI)
PATRICK D. HOPKINS
DIGITAL IMMORTALITY: SELF OR 0010110? (LIZ STILLWAGGON SWAN and JOSHUA HOWARD)
YOONSUCK CHOE, JAEROCK KWON and JI RYANG CHUNG
KAJ SOTALA
KAJ SOTALA and HARRI VALPOLA
From the Harvard Business Review, an article entitled: "Can We Reverse The Stanford Prison Experiment?"
By: Greg McKeown
Posted: June 12, 2012
Royal Canadian Mounted Police attempt a program where they hand out "Positive Tickets"
Their approach was to try to catch youth doing the right things and give them a Positive Ticket. The ticket granted the recipient free entry to the movies or to a local youth center. They gave out an average of 40,000 tickets per year, three times the number of negative tickets issued over the same period. As it turns out, and unbeknownst to Clapham, that ratio (2.9 units of positive affect to 1 of negative, to be precise) is called the Losada Line: the minimum ratio of positives to negatives said to be required for a team to flourish. On higher-performing teams (and marriages, for that matter) the ratio jumps to 5:1. But does it hold true in policing?
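The arithmetic in the paragraph above is worth a quick sanity check. Here's a sketch; note that the negative-ticket count below is inferred from the "three times" figure, not reported in the article.

```python
# Figures from the article: ~40,000 positive tickets per year,
# "three times the number of negative tickets over the same period".
positive_tickets = 40_000
negative_tickets = positive_tickets / 3   # inferred, not reported directly

# The claimed minimum positive:negative ratio needed for a team to flourish.
losada_line = 2.9

ratio = positive_tickets / negative_tickets
print(f"ratio {ratio:.1f}:1, clears the Losada line: {ratio > losada_line}")
# ratio 3.0:1, clears the Losada line: True
```

So on the article's own numbers, the program sat just above the 2.9:1 line, and well below the 5:1 ratio it attributes to higher-performing teams.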
According to Clapham, youth recidivism was reduced from 60% to 8%. Overall crime was reduced by 40%. Youth crime was cut in half. And it cost one-tenth of the traditional judicial system.
This idea can be applied to Real Life
The lesson here is to create a culture that immediately and sincerely celebrates victories. Here are three simple ways to begin:
1. Start your next staff meeting with five minutes on the question: "What has gone right since our last meeting?" Have each person acknowledge someone else's achievement in a concrete, sincere way. Done right, this very small question can begin to shift the conversation.
2. Take two minutes every day to try to catch someone doing the right thing. It is the fastest and most positive way for the people around you to learn when they are getting it right.
3. Create a virtual community board where employees, partners and even customers can share what they are grateful for daily. Sound idealistic? Vishen Lakhiani, CEO of Mind Valley, a new-generation media and publishing company, has done just that at Gratitude Log. (Watch him explain how it works here.)
I attended a talk yesterday given under the auspices of the Ottawa Skeptics on the subject of "metacognition" or thinking about thinking -- basically, it was about core rationality concepts. It was designed to appeal to a broad group of lay people interested in science and consisted of a number of examples drawn from pop-sci books such as Thinking, Fast and Slow and Predictably Irrational. (Also mentioned: straw vulcans as described by CFAR's own Julia Galef.) If people who aren't familiar with LW ask you what LW is about, I'd strongly recommend pointing them to this video.
Here's the link.
This is an ongoing project of mine, although I haven't worked on it in a while. I've been trying to extract the references to rationality methods from Harry Potter and the Methods of Rationality (HPMoR). The collection also ended up including a few quotes that seemed interesting for how the story's going. I've linked references where I could find them.
I've only got as far as Chapter 40. Any extra submissions welcome.
At least one person (User:DavidGerard) suggested it deserved to be posted as a discussion link.
Long story short, it's an attempt to justify the planetarium hypothesis as a solution to the Fermi paradox. The first half is a discussion of how it and things like it are relevant to the intended purview of the blog, and the second half is the meat of the post. You'll probably want to just eat the meat, which I think is relevant to the interests of many LessWrong folk.
The blog is Computational Theology. It's new. I'll be the primary poster, but others are sought. I'll introduce the blog more fully in its own discussion post once more posts are up (hopefully including a few from people besides me) and once the archive gives a more informative indication of what to expect. Despite theism's dubious reputation here at LessWrong, I suspect many of the future posts will be of interest to this audience anyway, especially for those of you who take an interest in discussion of the singularity. The blog will even occasionally touch on rationality proper. So you might want to store the fact of the blog's existence somewhere in the back of your head. A link to the blog's main page can be found on my LessWrong user page if you forget the URL.
I'd appreciate it if comments about the substance of the post were made on the blog post itself, but if you want to discuss the content here on LessWrong then that's okay too. Any meta-level comments about presentation, typos, or the post's relevance to LessWrong, should probably be put as comments on this discussion post. Thanks all!
Ben Goertzel and Joel Pitt: Nine Ways to Bias Open-Source AGI Toward Friendliness. Journal of Evolution and Technology, Vol. 22, Issue 1, February 2012, pp. 116-141.
While it seems unlikely that any method of guaranteeing human-friendliness (“Friendliness”) on the part of advanced Artificial General Intelligence (AGI) systems will be possible, this doesn’t mean the only alternatives are throttling AGI development to safeguard humanity, or plunging recklessly into the complete unknown. Without denying the presence of a certain irreducible uncertainty in such matters, it is still sensible to explore ways of biasing the odds in a favorable way, such that newly created AI systems are significantly more likely than not to be Friendly. Several potential methods of effecting such biasing are explored here, with a particular but non-exclusive focus on those that are relevant to open-source AGI projects, and with illustrative examples drawn from the OpenCog open-source AGI project. Issues regarding the relative safety of open versus closed approaches to AGI are discussed and then nine techniques for biasing AGIs in favor of Friendliness are presented:
1. Engineer the capability to acquire integrated ethical knowledge.
2. Provide rich ethical interaction and instruction, respecting developmental stages.
3. Develop stable, hierarchical goal systems.
4. Ensure that the early stages of recursive self-improvement occur relatively slowly and with rich human involvement.
5. Tightly link AGI with the Global Brain.
6. Foster deep, consensus-building interactions between divergent viewpoints.
7. Create a mutually supportive community of AGIs.
8. Encourage measured co-advancement of AGI software and AGI ethics theory.
9. Develop advanced AGI sooner not later.
In conclusion, and related to the final point, we advise the serious co-evolution of functional AGI systems and AGI-related ethical theory as soon as possible, before we have so much technical infrastructure that parties relatively unconcerned with ethics are able to rush ahead with brute force approaches to AGI development.
I'd say it's worth a read: their criticism of attempts to regulate AGI (section 3) is pretty convincing. I don't think their approach will work if there's a hard takeoff or a serious hardware overhang, though it could if there isn't. It might also work if a hard takeoff were possible, but did not occur immediately after the first AGI systems were developed.
This is interesting, I wonder if there's anything to it: International variation in IQ – the role of parasites (paper) by Christopher Hassall of Carleton University.
It strikes me as the sort of thing that could be as big an issue as lead in the environment. Raise the sanity waterline: improve health!