You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

[Link] Wikipedia book based on betterhumans' article on cognitive biases

1 MathieuRoy 14 October 2016 01:03AM

[Link] Review of "Doing Good Better"

0 fortyeridania 26 September 2015 07:58AM

The article is here.

The book is by William MacAskill, founder of 80000 Hours and Giving What We Can. Excerpt:

Effective altruism takes up the spirit of Singer’s argument but shields us from the full blast of its conclusion; moral indictment is transformed into an empowering investment opportunity...

Either effective altruism, like utilitarianism, demands that we do the most good possible, or it asks merely that we try to make things better. The first thought is genuinely radical, requiring us to overhaul our daily lives in ways unimaginable to most...The second thought – that we try to make things better – is shared by every plausible moral system and every decent person. If effective altruism is simply in the business of getting us to be more effective when we try to help others, then it’s hard to object to it. But in that case it’s also hard to see what it’s offering in the way of fresh moral insight, still less how it could be the last social movement we’ll ever need.

Summary and Lessons from "On Combat"

17 Gunnar_Zarncke 22 March 2015 01:48AM

On Combat - The Psychology and Physiology of Deadly Conflict in War and in Peace by Lt. Col. Dave Grossman and Loren W. Christensen (third edition from 2007) is a well-written, evidence-based book about the reality of human behaviour in life-threatening situations. It is comprehensive (400 pages) and provides detailed descriptions, (some) statistics, first-person accounts, historical context, and other relevant information. But my main focus in this post is on the advice it gives and on what lessons the LessWrong community may take from it.

TL;DR

In deadly force encounters you will experience and remember the most unusual physiological and psychological things. Inoculate yourself against extreme stress with repeated authentic training: play win-only paintball, and practice dialing 911 and giving reports. Train combat breathing. Talk to people after traumatic events.

continue reading »

How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence

9 mszegedy 24 April 2014 09:41PM

It took me until my third reading of The Things They Carried to realize that it contained something very valuable to rationalists. In "The Logical Fallacy of Generalization from Fictional Evidence," EY explains how using fiction as evidence is bad not only because fiction is deliberately wrong in particular ways to make it more interesting, but more importantly because it does not provide a probabilistic model of what happened, and gives at best a bit or two of evidence that looks like a hundred or more bits of evidence.
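EY's "bit or two of evidence" framing has a standard quantitative reading: the evidence an observation carries for a hypothesis is the base-2 log of its likelihood ratio under the competing hypotheses. A minimal sketch of that arithmetic (the probabilities are invented purely for illustration):

```python
import math

def bits_of_evidence(p_obs_given_h, p_obs_given_not_h):
    """Evidence an observation provides for hypothesis H, in bits:
    the base-2 logarithm of the likelihood ratio."""
    return math.log2(p_obs_given_h / p_obs_given_not_h)

# A vivid story *feels* like overwhelming evidence for the world it depicts...
felt = bits_of_evidence(0.999, 0.000001)   # roughly 20 bits

# ...but a fictional account, crafted to be interesting rather than
# representative, is only slightly more likely under the hypothesis
# it depicts than under its negation.
actual = bits_of_evidence(0.55, 0.45)      # under a third of a bit

print(f"felt: {felt:.1f} bits, actual: {actual:.2f} bits")
```

The gap between those two numbers is the fallacy: readers update as if they had received the first quantity when they have at most received the second.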

Some background: The Things They Carried is a book by Tim O'Brien that reads as an autobiography in which he recollects various stories from his time as a soldier in the Vietnam War. However, O'Brien often repeats himself, writing the same story over again, but with details or entire events that change. It is actually a fictional autobiography; O'Brien was in the Vietnam War, but all the stories are fictional.

In The Things They Carried, Tim O'Brien not only explains how generalization from fictional evidence is bad, but also offers his own solution to the problem, one that actually works: it gives the reader a useful probabilistic model of what happened in a way that actually interests the reader. He does this by telling his stories many times, changing significant things about them. Literally: he contradicts himself, writing out the same story but with things changed. The best illustration of the principle in the book is the chapter "How to Tell a True War Story," found here (PDF warning, and bad typesetting warning).

Readers are not inclined to read a list of probabilities, but they are inclined to read a bunch of short stories. He talks about this practice a lot in the book itself, writing, "All you can do is tell it one more time, patiently, adding and subtracting, making up a few things to get at the real truth. … You can tell a true war story if you just keep on telling it." He always says war story, but the principle generalizes. At one point, he has a character represent the forces that act on conventional writing, telling a storyteller that he cannot say that he doesn't know what happened, and that he cannot insert any analysis.

O'Brien also writes about a lot of other things I don't want to mention more than briefly here, such as the specific ways in which the model that conventional war stories give of war is wrong, and specific ways in which the audience misinterprets stories. I recommend the book very much, especially if you think writing "tell multiple short stories" fiction is a great idea and want to do it.

I apologize if this post has been made before.

EDIT: Tried to clarify the idea better. I added an example with an excerpt.

EDIT 2: Added a better excerpt.

EDIT 3: Added a paragraph about background.

Book Review: Kazdin's The Everyday Parenting Toolkit

8 Gunnar_Zarncke 31 March 2014 09:29PM

This is a review of The Everyday Parenting Toolkit: The Kazdin Method for Easy, Step-by-step, Lasting Change for You and Your Child by Alan E. Kazdin (all phrases in quotes below are from this book if not otherwise indicated). I was pointed to this book by tadamsmar's comment on Ignorance in Parenting.

This is a post in the sequence about parenting. I also see some cross-relations to learning and the cognitive sciences in general. Kazdin's advice is applicable not only to children but to adults as well, if you read the book with a mind open to the backing research (Kazdin actually gives some such examples to illustrate the methods).

Summary TL;DR

Define the positive behavior you do want. Communicate this clearly and provide events that make it likely to occur. Praise any occurrence of the positive behavior effusively. Think about and communicate consequences beforehand. Use mild and short punishments (if at all). Provide a healthy environment.

continue reading »

Help us name a short primer on AI risk!

7 lukeprog 17 September 2013 08:35PM

MIRI will soon publish a short book by Stuart Armstrong on the topic of AI risk. The book is currently titled “AI-Risk Primer” by default, but we’re looking for something a little more catchy (just as we did for the upcoming Sequences ebook).

The book is meant to be accessible and avoids technical jargon. Here is the table of contents and a few snippets from the book, to give you an idea of the content and style:

  1. Terminator versus the AI
  2. Strength versus Intelligence
  3. What Is Intelligence? Can We Achieve It Artificially?
  4. How Powerful Could AIs Become?
  5. Talking to an Alien Mind
  6. Our Values Are Complex and Fragile
  7. What, Precisely, Do We Really (Really) Want?
  8. We Need to Get It All Exactly Right
  9. Listen to the Sound of Absent Experts
  10. A Summary
  11. That’s Where You Come In …

The Terminator is a creature from our primordial nightmares: tall, strong, aggressive, and nearly indestructible. We’re strongly primed to fear such a being—it resembles the lions, tigers, and bears that our ancestors so feared when they wandered alone on the savanna and tundra.

As a species, we humans haven’t achieved success through our natural armor plating, our claws, our razor-sharp teeth, or our poison-filled stingers. Though we have reasonably efficient bodies, it’s our brains that have made the difference. It’s through our social, cultural, and technological intelligence that we have raised ourselves to our current position.

Consider what would happen if an AI ever achieved the ability to function socially—to hold conversations with a reasonable facsimile of human fluency. For humans to increase their social skills, they need to go through painful trial and error processes, scrounge hints from more articulate individuals or from television, or try to hone their instincts by having dozens of conversations. An AI could go through a similar process, undeterred by social embarrassment, and with perfect memory. But it could also sift through vast databases of previous human conversations, analyze thousands of publications on human psychology, anticipate where conversations are leading many steps in advance, and always pick the right tone and pace to respond with. Imagine a human who, every time they opened their mouth, had spent a solid year to ponder and research whether their response was going to be maximally effective. That is what a social AI would be like.

So, title suggestions?

Book Suggestion: "Diaminds" is worth reading (CFAR-esque)

1 MarkL 03 May 2013 12:19AM

The reason for this submission is that I don't think anyone who visits this website will otherwise ever read the book described below. And that's a shame.

Simply stated, I think CFAR curriculum designers and people who like CFAR's approach should check out this book:

Diaminds: Decoding the Mental Habits of Successful Thinkers by Mihnea Moldoveanu

I claim that you will find illustrations of high-utility thinking styles and potentially useful exercises within. Yes, I am attempting to promote some random, highly questionable book to your attention.

You contemptuously object:

Stay with me.

Moldoveanu has a "secret identity" as a successful serial entrepreneur (first company sold for $21 million). And, he explicitly discusses the disadvantages of his book, his lack of experimental design, selection bias, explanation versus prediction, etc. The only grounds for his claim of having decoded the mental habits of successful thinkers is that he's done a lot of reading, thinking, and doing, and he has a bunch of interview transcripts of successful people. ("Interview transcripts?!")

You might have more objections:
  • If you dig around a little bit online you'll see that the second author writes highly rated popular business books.
  • If you read a little bit of the book, you'll hear a lot about Nassim Nicholas Taleb, black swans, poorly justified claims about how the mind uses branching tree searches, and other assorted suspicious physical, mathematical, and computational analogies for how the mind works.
  • He even asserts that "death is inevitable" (or something like that) in the introduction. *Gasp!*
Finally, you're thinking:
  • "There are 65 million titles out there. What are the chances that this particular crackpot book will be useful to me or CFAR?"
Stay with me.

Ok, still here? I think if you read this book you will continuously oscillate between swiftly-rising-annoyed-skepticism and hey-that's-uncommonly-smart-and-concisely-useful-and-I-could-try-that.

The exercises are not the sole value of the book, but here are some quickly assembled examples:

"Pick a past event that has been precisely recorded (for good example, a significant rise or fall in the price of the stock you know something about). Write down what you believe to be the best explanation for the event. How much would you bet on the explanation being valid, and why? Next, make a prediction based on your explanation (another movement in the stock's value within a certain time window). How much would you bet on the prediction being true, and why? Are the two sums equal? Why or why not?"

"Pick a difficult personal situation[....] In written sentences, describe the situation the way you typically would when talking about it with a friend or family member. Next, figure out -- and write down -- the basic causal structure of the narrative you've written up. [...E]xpand the range of causal chains you believe were at work. [...]"

"[... G]etting an associate to give you feedback, especially cutting, negative feedback, is not easy [...]. So arm her with a deck of file cards, on each of which is written one of the following in capital letters: WHY?, FOR WHAT PURPOSE?, BY WHAT MECHANISM?, SO WHAT?, I DISAGREE! I AGREE! [...]"

"Keep a record of your thinking process as you go through the steps of trying to solve [these problems]. [...] When you've finished, go through the transcript you've produced and 'encode it' using the coding language (mentalese) we have developed in this chapter. Your coding system should include the following simplified typology: The problem complexity class (easy/hard); The solution search process you used (deterministic/probabilistic); The type of solution your mind is searching for (global/local/adaptive); Your perceived distance from the answer to the problem at several different points in the problem-solving process. [...]"

Those were just some snippets that were easy to type up. Most of the exercises are meatier, and he doesn't just say "write down causal structure" without any context. There is buildup if not hand-holding. There's plenty of cognitive bias-flavored stuff, debiasing stuff, mental-model-switching stuff, OODA loop-type stuff, and much more.

Anyway, Moldoveanu tries to describe tools to change how people think. I think he succeeds, in concreteness and concision, at least, more than anything I've ever read on the subject, so far. I'm not saying this is a masterpiece; it's turgid and a little poisonous, like some PUA stuff. And it's uneven. And, I personally am not making any of the exercises a priority in my life, nor am I saying you should. But you might find helpful ideas in here for your personal experiments, and I think CFAR curriculum designers would probably benefit from reading this book.

You can burn through a first pass of the book in a long evening. It's short enough to do so. Chapter 1 (as opposed to the Preface, Praeludium, and Chapter 6) is probably the best thing to read for deciding whether to keep reading. But go back and read the Preface and Praeludium.

[Book Review] "The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t.", by Nate Silver

9 Douglas_Reay 07 October 2012 07:29AM

Here's a link to a review, by The Economist, of a book about prediction, some of the common ways in which people make mistakes and some of the methods by which they could improve:

Looking ahead : How to look ahead—and get it right

One paragraph from that review:

A guiding light for Mr Silver is Thomas Bayes, an 18th-century English churchman and pioneer of probability theory. Uncertainty and subjectivity are inevitable, says Mr Silver. People should not get hung up on this, and instead think about the future the way gamblers do: “as speckles of probability”. In one surprising chapter, poker, a game from which Mr Silver once earned a living, emerges as a powerful teacher of the virtues of humility and patience.

[link] One-question survey from Robin Hanson

-3 fortyeridania 07 September 2012 11:35PM

As many of you probably know, Robin Hanson is writing a book, and it will be geared toward a popular audience. He wants a term that encompasses both humans and AI, so he's soliciting your opinions on the matter. Here's the link: http://www.quicksurveys.com/tqsruntime.aspx?surveyData=AYtdr2WMwCzB981F0qkivSNwbj1tn+xvU6rnauc83iU=

H/T Bryan Caplan at EconLog.

New book on atheism, transhumanism, and x-risk

6 lukeprog 12 July 2012 09:17PM

Phil Torres is the creative force behind the highly enjoyable folk music of Baobab, and he also writes philosophy papers (under the name "Philippe Verdoux").

His forthcoming book may be of interest to LWers: A Crisis of Faith: Atheism, Emerging Technologies, and the Future of Humanity. Mostly it's a beginner's book about atheism, but chapter 20 discusses cognitive enhancement and mind uploading, and chapter 21 discusses existential risks as one of the most important things for humans to address once they've stopped fooling around with religion. There's also an appendix on the simulation argument.

[video] Kelly McGonigal on willpower

6 Bobertron 17 June 2012 10:39AM

the video

Author and Stanford health psychologist Kelly McGonigal, PhD, talks about strategies from her new book "The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do to Get More of It" as part of the Authors@Google series. Topics include dieting/weight loss, health, addiction, quitting smoking, temptation, procrastination, mindfulness, stress, sleep, cravings, exercise, self-control, self-compassion, guilt, and shame.

I'm posting this because akrasia, procrastination, and willpower are often discussed on LW. I haven't read the book, but for those who are interested, "The Willpower Instinct" and "Maximum Willpower" are, from what I can tell, exactly the same book.

Mini-review: 'Judgment and Decision Making as a Skill'

2 lukeprog 18 February 2012 10:42AM

A new book from Cambridge University Press describes the impetus of the forthcoming Rationality Group in its title: Judgment and Decision Making as a Skill. It begins:

Our scientific understanding of human judgment and decision making (JDM) has grown considerably over the past 60 years in terms of the normative benchmarks... by which we assess performance, the descriptive models we use to describe JDM, and the prescriptive models we offer to improve JDM...

...[But] how do we learn to make good decisions? How can we improve or aid our decision making? Fortunately, there is an emerging body of work that is interested in long-term and short-term changes in JDM skills... There is research on the acquisition of expertise in JDM, and training and aiding of JDM. Researchers more interested in short-term changes have begun to study learning of JDM tasks...

[We] introduce a new conception of JDM, seeing it as a dynamic skill rather than a static capacity...

Chapters 1 and 2 survey the evolution and neurobiology of JDM, while chapters 3-5 discuss JDM in young children, adolescents, and the aged. Chapters 6-10 were the most interesting to me, because they concern the learning and improving of JDM skills.

In particular, chapter 7 discusses the use of causal Bayes nets to model JDM processes and thereby make better-informed choices among possible debiasing interventions, and chapter 8 discusses JDM in the context of skill-learning (from feedback). Chapter 9 reviews the ways in which JDM can be improved simply by communicating and representing information in particular ways. Chapter 10 reviews "procedures or devices that are intended to improve the quality of people's decisions."
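The causal-Bayes-net idea from chapter 7 can be illustrated with a toy network; the structure and all probabilities below are my own invention for illustration, not taken from the book:

```python
# Toy causal Bayes net: Training -> Bias -> Error.
# Comparing P(error) under the two settings of the Training node
# is the kind of comparison used to choose among debiasing interventions.
# All numbers here are invented for illustration.
P_bias = {True: 0.2, False: 0.6}    # P(biased judgment | training given?)
P_error = {True: 0.7, False: 0.1}   # P(decision error | judgment biased?)

def p_error_given_training(trained: bool) -> float:
    """Marginalize over the intermediate Bias node to get P(error | training)."""
    return sum(
        P_error[biased] * (P_bias[trained] if biased else 1 - P_bias[trained])
        for biased in (True, False)
    )

print(p_error_given_training(True))   # error rate with the intervention
print(p_error_given_training(False))  # error rate without it
```

With these made-up numbers the intervention roughly halves the error rate, which is exactly the sort of quantity one would want before committing to a debiasing program.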

Chapter 11 contains personal reflections on JDM as a skill from nine past presidents of the Society for Judgment and Decision Making.

Overall, the book is a handy collection of review articles on JDM (what LW calls epistemic and instrumental rationality) written from a useful perspective. But it is not as useful as Stanovich's Rationality and the Reflective Mind, and I anticipate it being less useful than the forthcoming Oxford Handbook of Thinking and Reasoning.

New book from leading neuroscientist in support of cryonics and mind uploading

22 lukeprog 08 February 2012 09:36PM

Sebastian Seung's new book Connectome: How the Brain's Wiring Makes Us Who We Are is very well-written, and aimed at a broad audience.

The penultimate chapter explains why cryonics might make sense given our current understanding of the brain.

The final chapter does the same for mind uploading.

Transhumanism continues its march into the mainstream.

Despite its flaws, I recommend the book.

[LINK] Gödel, Escher, Bach read through starting on Reddit

6 Karmakaiser 28 December 2011 03:37PM

http://www.reddit.com/r/GEB/comments/nmy4p/starting_a_readthrough_january_17/

[Context: [1] waingro and I want to start a Reddit read-through of GEB.

I've done an [2] in-person MIT seminar where we read through the book twice before. I think a subreddit would be a great way to make the same experience available to anyone on the Internet!

My plan for when to start would be around January 17. Yes, that's almost a month from now, but it allows time for:

  • Publicizing this subreddit
  • Allowing people to find copies of the book if they don't own it
  • Most importantly, it's after the [3] MIT Mystery Hunt which will be consuming all of my recreational brainpower until then.]

The read-through is organized and run by Rob Speer, who taught a seminar on GEB once as a senior and once as a grad student at MIT [1](http://ocw.mit.edu/courses/special-programs/sp-258-goedel-escher-bach-spring-2007/). GEB is occasionally referenced on LessWrong, is considered an influential book by Eliezer Yudkowsky, and was the subject of a short review by lukeprog, who recently claimed that it "defied summary more than all the other books I had previously said 'defied summary.'" If you are interested in reading GEB but have not taken the time to do so, I do not need to cite the research on how mechanisms such as commitment contracts help in reaching goals. Joining this group would make the goal of reading Gödel, Escher, Bach more reachable than it otherwise would have been.

 


You Are Not So Smart (Pop-Rationality Book)

7 betterthanwell 01 November 2011 07:42PM

Journalist David McRaney has very recently published a popular book on human rationality. The book, You Are Not So Smart, is currently the 3rd best selling book in Nonfiction/Philosophy on Amazon.com after less than a week on the market. (Eighth best selling book in Nonfiction/Education)

The tag-line of the project is: "A celebration of self-delusion." As such the book seems less an attempt at giving advice on how to act and decide, than an attempt to reveal, chapter by chapter, the folly of common sense.

Topics include: Hindsight Bias, Confirmation bias, The Sunk Cost Fallacy, Anchoring Effect, The Illusion of Transparency, The Just World Fallacy, Representativeness Heuristic, The Perils of Introspection, The Dunning-Kruger Effect, The Monty Hall Problem, The Bystander Effect, Placebo Buttons, Groupthink, Conformity, Social Loafing, Helplessness, Cults, Change Blindness, Self-Fulfilling Prophecies, Self Handicapping, Availability Heuristic, Self-Serving Bias, The Ultimatum Game, Inattentional Blindness.
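Several of the listed topics can be checked directly rather than taken on faith; the Monty Hall Problem, for instance, yields to a short simulation (a sketch of my own, not material from the book):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
switch_wins = sum(monty_hall_trial(True) for _ in range(trials)) / trials
stay_wins = sum(monty_hall_trial(False) for _ in range(trials)) / trials
print(f"switch: {switch_wins:.3f}, stay: {stay_wins:.3f}")
```

The simulation lands near the counterintuitive textbook answer: switching wins about two thirds of the time, staying about one third.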

 

 

 

These are topics we enjoy learning about, pride ourselves on knowing a lot about, and, we profess, want more people to know about. A popular book on this subject is now out. This sounds like a good thing.

I will note that the blog features at least one direct quote from LessWrong.

We always know what we mean by our words, and so we expect others to know it too.  Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant.  It’s hard to empathise with someone who must interpret blindly, guided only by the words.

- Eliezer Yudkowsky, from LessWrong.com

On one hand, You Are Not So Smart could be a boon to Eliezer's popular rationality book by priming the market. His writings on a given topic have rarely been described as redundant. On the other hand, it seems to me that this book closely covers a number of topics, seemingly in a style similar to the treatments that were published on this site and Overcoming Bias and are intended to be published in book form at a later date. I will try to refrain from speculation here.

Sample book chapters from You Are Not So Smart:

For more material, here's a list of all posts at youarenotsosmart.com

 

I'll save the rest of my review until I have actually read the book.

In the meantime I would like to know your thoughts on this project.

Thinking Statistically [ebook]

6 Dreaded_Anomaly 01 November 2011 03:36AM

Uri Bram, a recent Princeton graduate, has just published an ebook called Thinking Statistically. The book is aimed at conveying a few important statistical concepts (selection bias, endogeneity and correlation vs. causation, Bayes' theorem and base rate neglect) to a general audience. The official product description:

This book will show you how to think like a statistician, without worrying about formal statistical techniques. Along the way we'll see why supposed Casanovas might actually be examples of the Base Rate Fallacy; how to use Bayes' Theorem to assess whether your partner is cheating on you; and why you should never use Mark Zuckerberg as an example for anything. See the world in a whole new light, and make better decisions and judgements without ever going near a t-test. Think. Think Statistically.
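The cheating-partner example in the description is a straightforward application of Bayes' theorem, and it also shows why base rate neglect misleads; the numbers below are invented for illustration and are not from the book:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' theorem: P(H | E) from the prior P(H) and the two likelihoods."""
    joint_h = prior * p_evidence_given_h
    joint_not_h = (1 - prior) * p_evidence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Base rate neglect: a "suspicious sign" that is 90% likely given cheating
# but 20% likely otherwise feels damning, yet if the prior (base rate)
# of cheating is only 5%, the posterior stays modest.
print(round(posterior(0.05, 0.9, 0.2), 3))  # → 0.191
```

Ignoring the 5% base rate and reading the 90% likelihood as the answer is exactly the fallacy the book's Casanova example targets.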

Less Wrong members will be familiar with these topics, but we should keep this book in mind as a convenient method of getting friends, relatives, acquaintances, and others interested in understanding rationality.

Eliezer's An Intuitive Explanation of Bayes' Theorem gets a shout-out in the Recommended Reading at the end.

Upcoming book: "Artificial Intelligence and the End of the Human Era"

2 lukeprog 06 October 2011 03:14PM

Announced here. Full title is Our Final Invention: Artificial Intelligence and the End of the Human Era. The author is James Barrat, best known (for now) for his TV documentaries. I chatted with him online; he'll be at this year's Singularity Summit.

Video: You Are Not So Smart

10 XiXiDu 08 September 2011 09:43AM

This is the first of two trailers for the book 'You Are Not So Smart', by David McRaney.

You will know why I posted it when you watch it; it is very relevant.

Thinking about thinking is the key.

BOOK DRAFT: 'Ethics and Superintelligence' (part 1)

11 lukeprog 13 February 2011 10:09AM

I'm researching and writing a book on meta-ethics and the technological singularity. I plan to post the first draft of the book, in tiny parts, to the Less Wrong discussion area. Your comments and constructive criticisms are much appreciated.

This is not a book for a mainstream audience. Its style is that of contemporary Anglophone philosophy. Compare to, for example, Chalmers' survey article on the singularity.

Bibliographic references are provided here.

Part 1 is below...

 

 

 

Chapter 1: The technological singularity is coming soon.

 

The Wright Brothers flew their spruce-wood plane for 200 feet in 1903. Only 66 years later, Neil Armstrong walked on the moon, more than 240,000 miles from Earth.

The rapid pace of progress in the physical sciences drives many philosophers to science envy. Philosophers have been researching the core problems of metaphysics, epistemology, and ethics for millennia and have not yet come to consensus about them, as scientists have for so many core problems in physics, chemistry, and biology.

I won’t argue about why this is so. Instead, I will argue that maintaining philosophy’s slow pace and not solving certain philosophical problems in the next two centuries may lead to the extinction of the human species.

This extinction would result from a “technological singularity” in which an artificial intelligence (AI) of human-level general intelligence uses its intelligence to improve its own intelligence, which would enable it to improve its intelligence even more, which would lead to an “intelligence explosion” feedback loop that would give this AI inestimable power to accomplish its goals. If so, then it is critically important to program its goal system wisely. This project could mean the difference between a utopian solar system of unprecedented harmony and happiness, and a solar system in which all available matter is converted into parts for a planet-sized computer built to solve difficult mathematical problems.
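The "intelligence explosion" feedback loop described here is sometimes sketched as a simple recurrence in which each generation's capability sets the size of the next improvement. A toy model of that dynamic (entirely my own illustration; the exponent k is an assumed parameter, not a claim from this draft):

```python
def self_improvement_trajectory(i0: float, k: float, steps: int):
    """Toy recurrence: intelligence at step n+1 is i_n ** k.
    With i0 > 1, an exponent k > 1 gives explosive growth,
    while k < 1 converges toward a fixed point at 1.0."""
    traj = [i0]
    for _ in range(steps):
        traj.append(traj[-1] ** k)
    return traj

explosive = self_improvement_trajectory(1.1, 1.5, 10)  # diverges rapidly
plateau = self_improvement_trajectory(1.1, 0.5, 10)    # flattens out
print(explosive[-1], plateau[-1])
```

The point of the sketch is only that the qualitative outcome hinges entirely on whether returns to self-improvement compound or diminish, which is why the goal-system question carries so much weight.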

The technical challenges of designing the goal system of such a superintelligence are daunting.[1] But even if we can solve those problems, the question of which goal system to give the superintelligence remains. It is a question of philosophy; it is a question of ethics.

Philosophy has impacted billions of humans through religion, culture, and government. But now the stakes are even higher. When the technological singularity occurs, the philosophy behind the goal system of a superintelligent machine will determine the fate of the species, the solar system, and perhaps the galaxy.

***

Now that I have laid my positions on the table, I must argue for them. In this chapter I argue that the technological singularity is likely to occur within the next 200 years unless a worldwide catastrophe drastically impedes scientific progress. In chapter two I survey the philosophical problems involved in designing the goal system of a singular superintelligence, which I call the “singleton.”

In chapter three I show how the singleton will produce very different future worlds depending on which normative theory is used to design its goal system. In chapter four I describe what is perhaps the most developed plan for the design of the singleton’s goal system: Eliezer Yudkowsky’s “Coherent Extrapolated Volition.” In chapter five, I present some objections to Coherent Extrapolated Volition.

In chapter six I argue that we cannot decide how to design the singleton’s goal system without considering meta-ethics, because normative theory depends on meta-ethics. In chapter seven I argue that we should invest little effort in meta-ethical theories that do not fit well with our emerging reductionist picture of the world, just as we quickly abandon scientific theories that don’t fit the available scientific data. I also specify several meta-ethical positions that I think are good candidates for abandonment.

But the looming problem of the technological singularity requires us to have a positive theory, too. In chapter eight I propose some meta-ethical claims about which I think naturalists should come to agree. In chapter nine I consider the implications of these plausible meta-ethical claims for the design of the singleton’s goal system.

 ***

 




[1] These technical challenges are discussed in the literature on artificial agents in general and Artificial General Intelligence (AGI) in particular. Russell and Norvig (2009) provide a good overview of the challenges involved in the design of artificial agents. Goertzel and Pennachin (2010) provide a collection of recent papers on the challenges of AGI. Yudkowsky (2010) proposes a new extension of causal decision theory to suit the needs of a self-modifying AI. Yudkowsky (2001) discusses other technical (and philosophical) problems related to designing the goal system of a superintelligence.