Theory of Knowledge (rationality outreach)
Public schools (and arguably private schools as well; I wouldn't know) teach students what to think, not how to think.
On LessWrong, this insight is too trivial to bear repeating. Unfortunately, I think many people have adopted it as an immutable fact about the world that will be corrected post-Singularity, rather than a totally unacceptable state of affairs which we should be doing something about now. The consensus seems to be that a class teaching the basic principles of thinking would be a huge step towards raising the sanity waterline, but that it will never happen. Well, my school has one. It's called Theory of Knowledge, and it's offered at 2,307 schools worldwide as part of the IB Diploma Program.
The IB Diploma, for those of you who haven't heard of it, is an internationally recognized high school program. It requires students to pass tests in 6 subject areas, jump through a number of other hoops, and take an additional class called Theory of Knowledge.
For the record, I'm not convinced the IB Diploma Program is a good thing. It doesn't really solve any of the problems with public schools, it shares the frustrating focus on standardized testing and password-guessing instead of real learning, etc. But I think Theory of Knowledge is a huge opportunity to spread the ideas of rationality.
What kinds of people sign up for the IB Diploma? It is considered more rigorous than A-levels in Britain, and dramatically more rigorous than standard classes in the United States (I would consider it approximately equal to taking 5 or 6 AP classes a year). Most kids engaged in this program are intelligent, motivated, and interested in the world around them. They seem (through my informal survey method of talking to lots of them) to have a higher click factor than average.
The problem is that currently, Theory of Knowledge is a waste of time. There isn't much in the way of a standard curriculum, and in the entire last semester we covered less content than I learn from any given top-level LessWrong post. We debated the nature of truth for 4 months; most people do not come up with interesting answers to this on their own initiative, so the conversation went in circles between "There's no such thing as truth!" and "Now, that's just stupid." the whole time. When I mention LessWrong to my friends, I generally explain it as "what ToK would be like, if ToK were actually good."
At my school, we regularly have speakers come in and discuss various topics during ToK, mostly because the regular instructor doesn't have any idea what to say. The only qualifications seem to be a pulse and some knowledge of English (we've had presenters who aren't fluent). If LessWrong posters called up their nearest IB school and offered to present on rationality, I'm almost certain people would agree. This seems like a good opportunity to practice speaking/presenting in a low-stakes situation, and a great way to expose smart, motivated kids to rationality.
I think a good presentation would focus on the meaning of evidence, what we mean by "rationality", and making beliefs pay rent, all topics we've touched on without saying anything meaningful. We've also discussed Popper's falsificationism, and about half your audience will already be familiar with Bayes' theorem through statistics classes but not as a model of inductive reasoning in general.
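For presenters who want a concrete illustration of Bayes' theorem as a model of inductive reasoning, the standard diagnostic-test calculation works well in front of an audience. (The numbers below are the usual textbook illustration, not anything from a ToK syllabus.)

```python
# How much should a positive test result shift our belief?
prior = 0.01              # P(disease) before testing
p_pos_given_d = 0.8       # P(positive | disease), the sensitivity
p_pos_given_not = 0.096   # P(positive | no disease), the false-positive rate

# Bayes' theorem: P(disease | positive)
posterior = (prior * p_pos_given_d) / (
    prior * p_pos_given_d + (1 - prior) * p_pos_given_not
)
print(round(posterior, 3))  # 0.078
```

The answer of about 7.8% surprises most audiences: a positive result is strong evidence, yet the disease remains unlikely, because the prior probability was so low to begin with.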
If you'd be interested in this but don't know where to start in terms of preparing a presentation, Liron's presentation "You Are A Brain" seems like a good place to start. Designing a presentation along these lines might also be a good activity for a meetup group.
Entangled with Reality: The Shoelace Example
Less Wrong veterans be warned: this is an exercise in going back to the basics of rationality.
Yudkowsky once wrote:
What is evidence? It is an event entangled, by links of cause and effect, with whatever you want to know about. If the target of your inquiry is your shoelaces, for example, then the light entering your pupils is evidence entangled with your shoelaces. This should not be confused with the technical sense of "entanglement" used in physics - here I'm just talking about "entanglement" in the sense of two things that end up in correlated states because of the links of cause and effect between them.
And:
Here is the secret of deliberate rationality - this whole entanglement process is not magic, and you can understand it. You can understand how you see your shoelaces. You can think about which sort of thinking processes will create beliefs which mirror reality, and which thinking processes will not.
Much of the heuristics and biases literature is helpful, here. It tells us which sorts of thinking processes tend to create beliefs that mirror reality, and which ones don't.
Still, not everyone understands just how much we know about exactly how the brain becomes entangled with reality by chains of cause and effect. Because "Be specific" is an important rationalist skill, and because concrete physical knowledge is important for technical understanding (as opposed to merely verbal understanding), I would like to summarize some of how your beliefs become entangled with reality when a photon bounces off your shoelaces into your eye.
Upcoming meet-ups: Bangalore, Minneapolis, Edinburgh, Melbourne, Houston, Dublin
There are upcoming irregularly scheduled Less Wrong meetups in:
- Bangalore: Saturday, May 28, 4:00 pm
- Minneapolis: Saturday, May 28, 3:00 pm
- Edinburgh: Saturday, May 28, 2:00 pm
- Melbourne: Friday, June 3, 7:00 pm
- Houston (Hackerspace): Sunday, May 29, 5:00 pm
- Dublin: Sunday, May 29
- Triangle/Durham, NC: Wednesday, June 1, 7:00 pm
- Ottawa: Thursday, June 2, 7:00 pm; two Bayesian Conspiracy sessions
- London: Sunday, June 5, 2:00 pm
Cities with regularly scheduled meetups: New York, Berkeley, Mountain View, Cambridge, MA, Toronto, Seattle, San Francisco, Irvine.
Belief in Belief
Followup to: Making Beliefs Pay Rent (in Anticipated Experiences)
Carl Sagan once told a parable of a man who comes to us and claims: "There is a dragon in my garage." Fascinating! We reply that we wish to see this dragon—let us set out at once for the garage! "But wait," the claimant says to us, "it is an invisible dragon."
Now as Sagan points out, this doesn't make the hypothesis unfalsifiable. Perhaps we go to the claimant's garage, and although we see no dragon, we hear heavy breathing from no visible source; footprints mysteriously appear on the ground; and instruments show that something in the garage is consuming oxygen and breathing out carbon dioxide.
But now suppose that we say to the claimant, "Okay, we'll visit the garage and see if we can hear heavy breathing," and the claimant quickly says no, it's an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, "The dragon is permeable to flour."
Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse.
LW Biology 101 Introduction: Constraining Anticipation
Since the responses to my recent inquiry were positive, I've rolled up my sleeves and gotten started. Special thanks to badger for eir comment in that thread, as it inspired the framework used here.
My intent in the upcoming posts is to offer a practical overview of biological topics of both broad-scale importance and particular interest to the Less Wrong community. This will by no means be exhaustive (else I’d be writing a textbook instead, or more likely, you’d be reading one); instead I am going to attempt to sketch what amounts to a map of several parts of the discipline – where they stand in relation to other fields, where we are in the progress of their development, and their boundaries and frontiers. I’d like this to be a continually improving project as well, so I would very much welcome input on content relevance and clarity for any and all posts.
I will list relevant/useful references for more in-depth reading at the end of each post. The majority of in-text links will be used to provide a quick explanation of terms that may not be familiar or phenomena that may not be obvious. If the terms are familiar to you, you probably do not need to worry about those links. A significant minority of in-text links may or may not be purely for amusement.
What a reduction of "could" could look like
By request from Blueberry and jimrandomh, here's an expanded repost of my comment, which was itself a repost of my email sent to decision-theory-workshop.
(Wait, I gotta take a breath now.)
A note on credit: I can only claim priority for the specific formalization offered here, which builds on Vladimir Nesov's idea of "ambient control", which builds on Wei Dai's idea of UDT, which builds on Eliezer's idea of TDT. I really, really hope to not offend anyone.
(Whew!)
Imagine a purely deterministic world containing a purely deterministic agent. To make it more precise, agent() is a Python function that returns an integer encoding an action, and world() is a Python function that calls agent() and returns the resulting utility value. The source code of both world() and agent() is accessible to agent(), so there's absolutely no uncertainty involved anywhere. Now we want to write an implementation of agent() that would "force" world() to return as high a value as possible, for a variety of different worlds and without foreknowledge of what world() looks like. So this framing of decision theory makes a subprogram try to "control" the output of a bigger program it's embedded in.
For example, here's Newcomb's Problem:
def world():
    box1 = 1000
    box2 = 0 if agent() == 2 else 1000000
    return box2 + (box1 if agent() == 2 else 0)
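To make the example concrete, here is a runnable version with the agent hard-coded to one-box. (This is only a stand-in: the interesting problem is an agent that derives its action by reasoning about world()'s source code, which this sketch does not attempt.)

```python
def agent():
    # Hard-coded one-boxer: 1 means "take only box 2".
    # A two-boxer would return 2.
    return 1

def world():
    box1 = 1000
    # The predictor empties box 2 if and only if the agent two-boxes.
    box2 = 0 if agent() == 2 else 1000000
    return box2 + (box1 if agent() == 2 else 0)

print(world())  # 1000000 for the one-boxer; a two-boxer would get 1000
```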
The Two-Party Swindle
The Robbers Cave Experiment had as its subject 22 twelve-year-old boys, selected from 22 different schools in Oklahoma City, all doing well in school, all from stable middle-class Protestant families. In short, the boys were as similar to each other as the experimenters could arrange, though none started out knowing any of the others. The experiment, conducted in the aftermath of WWII, was meant to investigate conflicts between groups. How would the scientists spark an intergroup conflict to investigate? Well, the first step was to divide the 22 boys into two groups of 11 campers -
- and that was quite sufficient. There was hostility almost from the moment each group became aware of the other group's existence. Though they had not needed any name for themselves before, they named themselves the Eagles and the Rattlers. After the researchers (disguised as camp counselors) instigated contests for prizes, rivalry reached a fever pitch and all traces of good sportsmanship disintegrated. The Eagles stole the Rattlers' flag and burned it; the Rattlers raided the Eagles' cabin, stole the group leader's blue jeans, painted them orange, and carried them as a flag the next day.
Each group developed a stereotype of itself and a contrasting stereotype of the opposing group (though the boys had been initially selected to be as similar as possible). The Rattlers swore heavily and regarded themselves as rough-and-tough. The Eagles swore off swearing, and developed an image of themselves as proper-and-moral.
Consider, in this light, the episode of the Blues and the Greens in the days of Rome. From the time of the ancient Romans, and continuing into the Byzantine era, the populace had been divided into the warring Blue and Green factions. Blues murdered Greens and Greens murdered Blues, despite all attempts at policing. They died in single combats, in ambushes, in group battles, in riots.
Swords and Armor: A Game Theory Thought Experiment
Note: this image does not belong to me; I found it on 4chan. It presents an interesting exercise, though, so I'm posting it here for the enjoyment of the Less Wrong community.

For the sake of this thought experiment, assume that all characters have the same amount of HP, which is sufficiently large that random effects can be treated as being equal to their expected values. There are no NPC monsters, critical hits, or other mechanics; gameplay consists of two PCs getting into a duel, and fighting until one or the other loses. The winner is fully healed afterwards.
Which sword and armor combination do you choose, and why?
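Since the image's actual stats aren't reproduced here, the following is only a sketch of how one might analyze such a duel, using an invented, cyclic win-probability matrix (every number below is hypothetical, purely for illustration):

```python
# Hypothetical matchup table: win[(a, b)] = P(loadout a beats loadout b).
# These numbers are invented; the image's real stats are not shown here.
loadouts = ["light", "medium", "heavy"]
win = {
    ("light", "medium"): 0.6,
    ("medium", "heavy"): 0.6,
    ("heavy", "light"): 0.6,
}

def p_win(a, b):
    """Probability that loadout a beats loadout b in a duel."""
    if a == b:
        return 0.5  # mirror match
    # Look up the pair directly, or derive it from the reverse pairing.
    return win.get((a, b), 1 - win.get((b, a), 0.5))

# A loadout "dominates" if it is at no disadvantage against any other.
for a in loadouts:
    if all(p_win(a, b) >= 0.5 for b in loadouts if b != a):
        print(a, "is never at a disadvantage")
```

With a cyclic matchup like this one, the loop prints nothing: no pure choice dominates, and the interesting question becomes which mixture of loadouts is stable against exploitation.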
Simpson's Paradox
This is my first attempt at an elementary statistics post, which I hope is suitable for Less Wrong. I am going to present a discussion of a statistical phenomenon known as Simpson's Paradox. This isn't a paradox, and it wasn't actually discovered by Simpson, but that's the name everybody uses for it, so it's the name I'm going to stick with. Along the way, we'll get some very basic practice at calculating conditional probabilities.
A worked example
The example I've chosen is an exercise from a university statistics course that I have taught on for the past few years. It is by far the most interesting exercise in the entire course, and it goes as follows:
You are a doctor in charge of a large hospital, and you have to decide which treatment should be used for a particular disease. You have the following data from last month: there were 390 patients with the disease. Treatment A was given to 160 patients of whom 100 were men and 60 were women; 20 of the men and 40 of the women recovered. Treatment B was given to 230 patients of whom 210 were men and 20 were women; 50 of the men and 15 of the women recovered. Which treatment would you recommend we use for people with the disease in future?
The simplest way to represent this sort of data is to draw a table; we can then pick the relevant numbers out of the table to calculate the required conditional probabilities.
Overall
|       | A   | B   |
|-------|-----|-----|
| lived | 60  | 65  |
| died  | 100 | 165 |
The probability that a randomly chosen person survived if they were given treatment A is 60/160 = 0.375
The probability that a randomly chosen person survived if they were given treatment B is 65/230 = 0.283
So a randomly chosen person given treatment A was more likely to survive than a randomly chosen person given treatment B. Looks like we'd better give people treatment A.
However, since we're given a breakdown of the data by gender, let's look and see whether treatment A is better for both genders, or whether it gets all of its advantage from one or the other.
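The gender-specific rates can be checked directly; the counts below come straight from the problem statement, and the script just compares both levels of aggregation:

```python
# (recovered, total) for each treatment/gender cell, from the data above.
data = {
    "A": {"men": (20, 100), "women": (40, 60)},
    "B": {"men": (50, 210), "women": (15, 20)},
}

for treatment, groups in data.items():
    recovered = sum(r for r, _ in groups.values())
    total = sum(n for _, n in groups.values())
    print(f"{treatment} overall: {recovered / total:.3f}")
    for sex, (r, n) in groups.items():
        print(f"  {sex}: {r / n:.3f}")
```

Running this shows the reversal: treatment B has the higher recovery rate within each gender (0.238 vs 0.200 for men, 0.750 vs 0.667 for women), yet the lower recovery rate overall (0.283 vs 0.375). That reversal under aggregation is Simpson's paradox.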
Have no heroes, and no villains
"If you meet the Buddha on the road, kill him!"
When Edward Wilson published the book Sociobiology, Richard Lewontin and Stephen J. Gould secretly convened a group of biologists to gather regularly, for months, in the same building at Harvard that Wilson's office was in, to write an angry, politicized rebuttal to it, essentially saying not that Sociobiology was wrong, but that it was immoral - without ever telling Wilson. This proved, to me, that they were not interested in the truth. I never forgave them for this.
I constructed a narrative of evolutionary biology in which Edward Wilson and Richard Dawkins were, for various reasons, the Good Guys; and Richard Lewontin and Stephen J. Gould were the Bad Guys.
When reading articles on group selection for this post, I was distressed to find Richard Dawkins joining in the vilification of group selection with religious fervor; while Stephen J. Gould was the one who said,
"I have witnessed widespread dogma only three times in my career as an evolutionist, and nothing in science has disturbed me more than ignorant ridicule based upon a desire or perceived necessity to follow fashion: the hooting dismissal of Wynne-Edwards and group selection in any form during the late 1960's and most of the 1970's, the belligerence of many cladists today, and the almost ritualistic ridicule of Goldschmidt by students (and teachers) who had not read him."
This caused me great cognitive distress. I wanted Stephen Jay Gould to be the Bad Guy. I realized I was trying to find a way to dismiss Gould's statement, or at least believe that he had said it from selfish motives. Or else, to find a way to flip it around so that he was the Good Guy and someone else was the Bad Guy.
To move on, I had to consciously shatter my Good Guy/Bad Guy narrative, and accept that all of these people are sometimes brilliant, sometimes blind; sometimes share my values, and sometimes prioritize their values (e.g., science vs. politics) very differently from me. I was surprised by how painful it was to do that, even though I was embarrassed to have had the Good Guy/Bad Guy hypothesis in the first place. I don't think it was even personal - I didn't care who would be the Good Guys and who would be the Bad Guys. I just wanted there to be Good Guys and Bad Guys.