Happy Ada Lovelace Day
Today is Ada Lovelace Day, when STEM enthusiasts highlight the work of modern and historical women scientists, engineers, and mathematicians. If you run a blog, you may want to participate by posting about a woman in a STEM field whom you admire. But I'd also love for people to share, in the comments, women scientists/mathematicians/authors they think we could all stand to read more about:
- Women in STEM fields (living or dead, fictional or nonfictional) that you'd like us to know more about (preferably with a little précis and a link)
- Books about women in STEM fields that are awesome
- Books written by women about STEM subjects that are awesome
- Studies about sexism (or ways to combat it) in STEM fields (and anywhere else)
- Practical things you or organizations you're with have done to cut down on careless or intentional sexism. (How did you implement it, how did you measure the effects, etc.?)
The deeper solution to the mystery of moralism: Believing in morality and free will is hazardous to your mental health
[Crossposted.]
Sexual predator Jerry Sandusky will serve his time in a minimum-security prison, where he's allowed groups of visitors five days a week.
What beliefs in moral realism and free will do is nothing less than change the architecture of decision-making. When we practice principles of integrity and internalize them, they and nonmoral considerations co-determine our System 1 judgments. According to moral realism and free will, by contrast, moral good is the product of conscious free choice, so System 2 contrasts its moral opinion with System 1's intuition and compensates for the difference, usually overcompensating.

The voter had to weigh the imperative of the duty to vote against the duty to avoid "lowering the bar" when both candidates are ideologically and programmatically distasteful. System 2 can prime and program System 1 by studying the issues, but the multifaceted decision itself is best made by System 1. What happens when System 2 tries to decide these propositions? It makes the qualitative judgment that System 1 is biased one way or the other and corrects it. This implicates the overcompensation bias, in which conscious attempts to counteract biases usually overcorrect. A voter who thinks correction is needed for a bias toward shirking duty will vote even when, all things considered, he doesn't really want to. A voter who thinks the bias runs toward "lowering the bar" will be excessively purist. Whatever standard the voter uses will be taken too far.
- It retards people's adaptive revision of their principles of integrity.
- It prevents people from questioning their so-called foundations.
- It systematically exaggerates the compellingness of moral claims.
Firewalling the Optimal from the Rational
Followup to: Rationality: Appreciating Cognitive Algorithms (minor post)
There's an old anecdote about Ayn Rand, which Michael Shermer recounts in his "The Unlikeliest Cult in History" (note: calling a fact unlikely is an insult to your prior model, not the fact itself), which went as follows:
Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. "When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks."
Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. And one of the ways we do that is by trying to deflate the word 'rational' out of sentences, especially in post titles or critical comments, which can live without the word. As you hopefully recall from the previous post, we're only forced to use the word 'rational' when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences. Otherwise the word can be deflated out of the sentence; e.g. "It's rational to believe in anthropogenic global warming" goes to "Human activities are causing global temperatures to rise"; or "It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".
If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting". A post about rational dieting would be one about how the sunk cost fallacy causes people to eat food they've already purchased even when they're not hungry, or about how the typical mind fallacy or the law of small numbers leads people to overestimate how likely it is that a diet which worked for them will work for a friend. And even then, your title should be 'Dieting and the Sunk Cost Fallacy', unless the post is an overview of four different cognitive biases affecting dieting, in which case a better title would be 'Four Biases Screwing Up Your Diet', since 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing things to keep in mind.
[Poll] Less Wrong and Mainstream Philosophy: How Different are We?
Although Less Wrong is (IMO) a philosophy blog, many Less Wrongers tend to disparage mainstream philosophy and emphasize the divergence between our beliefs and theirs. But how different are we, really? My intention with this post is to quantify that difference.
The questions I will post as comments to this article are from the 2009 PhilPapers Survey. If you answer "other" on any of the questions, please reply to that comment to elaborate on your answer. Later, I'll post another article comparing the answers I obtain from Less Wrongers with those given by professional philosophers. This should give us some indication of the differences in belief between Less Wrong and mainstream philosophy.
Glossary
analytic-synthetic distinction, A-theory and B-theory, atheism, compatibilism, consequentialism, contextualism, correspondence theory of truth, deontology, egalitarianism, empiricism, Humeanism, libertarianism, mental content externalism, moral realism, moral motivation internalism and externalism, naturalism, nominalism, Newcomb's problem, physicalism, Platonism, rationalism, relativism, scientific realism, trolley problem, theism, virtue ethics
Note
Thanks to pragmatist for attaching short (mostly accurate) descriptions of the philosophical positions under the poll comments.
The raw-experience dogma: Dissolving the “qualia” problem
1. Defining the problem: The inverted spectrum
A. Attempted solutions to the inverted spectrum.
B. The “substitution bias” of solving the “easy problem of consciousness” instead of the “hard problem.”
2. The false intuition of direct awareness
A. Our sense that the existence of raw experience is self-evident doesn’t show that it is true.
B. Experience can’t reveal the error in the intuition that raw experience exists.
C. We can’t capture the ineffable core of raw experience with language because there’s really nothing there.
D. We believe raw experience exists without detecting it.
3. The conceptual economy of qualia nihilism pays off in philosophical progress
4. Relying on the brute force of an intuition is rationally specious.
Against these considerations, the only argument for retaining raw experience in our ontology is the sheer strength of everyone’s belief in its existence. How much weight should we attach to a strong belief whose validity we can't check? None. Beliefs ordinarily earn a presumption of truth from the absence of empirical challenge, but when empirical challenge is impossible in principle, the belief deserves no confidence.
Enjoy solving "impossible" problems? Group project!
In the Muehlhauser-Hibbard Dialogue on AGI, Hibbard states that it will be "impossible to decelerate AI capabilities," but Luke counters with "Persuade key AGI researchers of the importance of safety ... If we can change the minds of a few key AGI scientists, it may be that key insights into AGI are delayed by years or decades." Before I read that dialogue, I had come up with three additional ideas on Heading off a near-term AGI arms race. Bill Hibbard may be right that "any effort expended on that goal could be better applied to the political and technical problems of AI safety," but I doubt he's right that it's impossible.
How do you prove something is impossible? You might prove that a specific METHOD of getting to the goal does not work, but that doesn't mean there's no other method. You might prove that all the methods you know about do not work; that doesn't prove there isn't some other option you don't see. "I don't see an option, therefore it's impossible" is just an appeal to ignorance. It's a common one, but it's incorrect reasoning regardless. Think about it: can you think of a way to prove that a method that does work isn't out there waiting to be discovered, without saying the equivalent of "I don't see any evidence for this"? We can say "I don't see it, I don't see it, I don't see it!" all day long.
I say: "Then Look!"
How often do we push past this feeling and keep thinking of ideas that might work? For many, the answer is "never" or "only if it's needed". The sense that something is impossible is subjective and fallible. If we don't have a way of proving something is impossible, yet believe it to be impossible anyway, that's just a belief. What distinguishes it from bias?
I think there's a common fear that you may waste your entire life doing something that is, in fact, impossible. That fear is valid, but it completely misses the obvious: as soon as you think of a plan to do the impossible, you'll be able to guess whether it will work. The hard part is THINKING of a plan to do the impossible. I'm suggesting that if we put our heads together, we can think of a plan to make an impossible thing into a possible one. Not only that, I think we're capable of doing this on a worthwhile topic: an idea that will not only benefit humanity, but is good enough that the time, effort, and risk required to accomplish it are worth it.
Here's how I am going to proceed:
Step 1: Come up with a bunch of impossible project ideas.
Step 2: Figure out which one appeals to the most people.
Step 3: Invent the methodology by which we are going to accomplish said project.
Step 4: Improve the method as needed until we're convinced it's likely to work.
Step 5: Get the project done.
Impossible Project Ideas
- Decelerate AI Capabilities Research: If we develop AI before we've figured out the political and technical safety measures, we could have a disaster. Luke's ideas (starting with "Persuade key AGI researchers of the importance of safety"). My ideas.
- Solve Violent Crime: Testosterone may be the root cause of the vast majority of violent crime, but there are obstacles in treating it.
- Syntax/Static Analysis Checker for Laws: Automatically look for conflicting/inconsistent definitions, logical conflicts, and other possible problems or ambiguities. (A toy sketch of a definition-conflict pass appears after this list.)
- Rational Agreement Software: If rationalists should ideally always agree, why not build an organized information resource designed to get us all to agree? It would track the arguments for and against ideas so that each piece can be logically verified and challenged; it would present the entire collection of arguments in an organized way, with nothing repeated and no useless information included; and it would be editable by anybody, like a wiki, with the most rational outcome displayed prominently at the top. This is especially hard because it would be our responsibility to make something SO good that it convinces us to agree with one another, and it would have to be structured well enough that we actually manage to distinguish between opinions and facts. Also, Gwern mentions in a post about critical thinking that argument maps improve critical-thinking skills.
- Discover unrecognized bias: This is especially hard since we'll be using our biased brains to try and detect it. We'd have to hack our own way of imagining around the corners, peeking behind our own minds.
- Logic checking AI: Build an AI that checks your logic for logical fallacies and other methods of poor reasoning.
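To make the law-checker idea above slightly more concrete, here is a minimal sketch of the easiest pass such a tool might start with: flagging terms that the same text defines in two different ways. Everything in it is a hypothetical illustration under simplifying assumptions (definitions written as `"term" means ...` clauses, plain-text input, made-up function names), not an existing tool; real statutes would need a proper parser and far subtler conflict detection.

```python
import re
from collections import defaultdict

# Hypothetical clause format: "term" means <definition>. or ;
# Real statutes are far messier; this regex is illustrative only.
DEFINITION_RE = re.compile(r'"([^"]+)"\s+means\s+([^.;]+)[.;]', re.IGNORECASE)

def extract_definitions(law_text):
    """Map each defined term to the set of distinct definitions given for it."""
    definitions = defaultdict(set)
    for term, body in DEFINITION_RE.findall(law_text):
        definitions[term.lower()].add(body.strip().lower())
    return definitions

def find_conflicts(law_text):
    """Return terms that are defined more than one way in the same text."""
    return {term: bodies
            for term, bodies in extract_definitions(law_text).items()
            if len(bodies) > 1}

if __name__ == "__main__":
    sample = ('"Vehicle" means any motorized conveyance. '
              '"Operator" means a licensed driver. '
              '"Vehicle" means any conveyance, motorized or not.')
    for term, bodies in find_conflicts(sample).items():
        print(f'Conflicting definitions of "{term}":')
        for body in bodies:
            print(f"  - {body}")
```

Even this toy version suggests the shape of the project: the hard part isn't scanning for clauses, it's deciding when two definitions genuinely conflict rather than merely differ in wording.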
Add your own ideas below (one idea per comment, so we can vote them up and down); make sure to describe your vision, and I'll list them here.
Figure out which one appeals to the most people.
Assuming each idea is put into a separate comment, we can vote them up or down. If they begin with the word "Idea", I'll be able to find them and put them on the list. Obviously, if your idea is getting enough attention, it will at some point make sense to create a new discussion for it.
Natural Laws Are Descriptions, not Rules
Laws as Rules
We speak casually of the laws of nature determining the distribution of matter and energy, or governing the behavior of physical objects. Implicit in this rhetoric is a metaphysical picture: the laws are rules that constrain the temporal evolution of stuff in the universe. In some important sense, the laws are prior to the distribution of stuff. The physicist Paul Davies expresses this idea with a bit more flair: "[W]e have this image of really existing laws of physics ensconced in a transcendent aerie, lording it over lowly matter." The origins of this conception can be traced back to the beginnings of the scientific revolution, when Descartes and Newton established the discovery of laws as the central aim of physical inquiry. In a scientific culture immersed in theism, it was unproblematic, even natural, to think of physical laws as rules. They are rules laid down by God that drive the development of the universe in accord with His divine plan.
Does this prescriptive conception of law make sense in a secular context? Perhaps if we replace the divine creator of traditional religion with a more naturalist-friendly lawgiver, such as an ur-simulator. But what if there is no intentional agent at the root of it all? Ordinarily, when I think of a physical system as constrained by some rule, it is not the rule itself doing the constraining. The rule is just a piece of language; it is an expression of a constraint that is actually enforced by interaction with some other physical system -- a programmer, say, or a physical barrier, or a police force. In the sort of picture Davies presents, however, it is the rules themselves that enforce the constraint. The laws lord it over lowly matter. So on this view, the fact that all electrons repel one another is explained by the existence of some external entity, not an ordinary physical entity but a law of nature, that somehow forces electrons to repel one another, and this isn't just short-hand for God or the simulator forcing the behavior.
I put it to you that this account of natural law is utterly mysterious and borders on the nonsensical. How exactly are abstract, non-physical objects -- laws of nature, living in their "transcendent aerie" -- supposed to interact with physical stuff? What is the mechanism by which the constraint is applied? Could the laws of nature have been different, so that they forced electrons to attract one another? The view should also be anathema to any self-respecting empiricist, since the laws appear to be idle danglers in the metaphysical theory. What is the difference between a universe where all electrons, as a matter of contingent fact, attract one another, and a universe where they attract one another because they are compelled to do so by the really existing laws of physics? Is there any test that could distinguish between these states of affairs?
Self-skepticism: the first principle of rationality
When Richard Feynman started investigating irrationality in the 1970s, he quickly began to realize that the problem wasn't limited to the obvious irrationalists.
Uri Geller claimed he could bend keys with his mind. But was he really any different from the academics who insisted their special techniques could teach children to read? Both failed the crucial scientific test of skeptical experiment: Geller's keys failed to bend in Feynman's hands; outside tests showed the new techniques only caused reading scores to go down.
What mattered was not how smart the people were, or whether they wore lab coats or used long words, but whether they followed what he concluded was the crucial principle of truly scientific thought: "a kind of utter honesty--a kind of leaning over backwards" to prove yourself wrong. In a word: self-skepticism.
As Feynman wrote, "The first principle is that you must not fool yourself -- and you are the easiest person to fool." Our beliefs always seem correct to us -- after all, that's why they're our beliefs -- so we have to work extra-hard to try to prove them wrong. This means constantly looking for ways to test them against reality and to think of reasons our tests might be insufficient.
When I think of the most rational people I know, it's this quality of theirs that's most pronounced. They are constantly trying to prove themselves wrong -- they attack their beliefs with everything they can find and when they run out of weapons they go out and search for more. The result is that by the time I come around, they not only acknowledge all my criticisms but propose several more I hadn't even thought of.
And when I think of the least rational people I know, what's striking is how they do the exact opposite: instead of viciously attacking their beliefs, they try desperately to defend them. They too have responses to all my critiques, but instead of acknowledging and agreeing, they viciously attack my critique so it never touches their precious belief.
Since these two can be hard to distinguish, it's best to look at some examples. The Cochrane Collaboration argues that support from hospital nurses may be helpful in getting people to quit smoking. How do they know that? you might ask. Well, they found this was the result of a meta-analysis of 31 different studies. But maybe they chose a biased selection of studies? Well, they systematically searched "MEDLINE, EMBASE and PsycINFO [along with] hand searching of specialist journals, conference proceedings, and reference lists of previous trials and overviews." But did the studies they picked suffer from selection bias? Well, they searched for that too, along with three other kinds of systematic bias. And so on. But even after all this careful work, they are still only confident enough to conclude that "the results…support a modest but positive effect…with caution … these meta-analysis findings need to be interpreted carefully in light of the methodological limitations".
Compare this to the Heritage Foundation's argument for the bipartisan Wyden–Ryan premium support plan. Their report also discusses lots of objections to the proposal, but confidently knocks down each one: "this analysis relies on two highly implausible assumptions ... All these predictions were dead wrong. ... this perspective completely ignores the history of Medicare" Their conclusion is similarly confident: "The arguments used by opponents of premium support are weak and flawed." Apparently there's just not a single reason to be cautious about their enormous government policy proposal!
Now, of course, the Cochrane authors might be secretly quite confident and the Heritage Foundation might be wringing their hands with self-skepticism behind the scenes. But let's imagine for a moment that these aren't just reports intended to persuade others of a belief, but accurate portrayals of how these two different groups approached the question. Now ask: which style of thinking is more likely to lead the authors to the right answer? Which attitude seems more like Richard Feynman? Which seems more like Uri Geller?
What are the optimal biases to overcome?
If you're interested in learning rationality, where should you start? Remember, instrumental rationality is about making decisions that get you what you want -- surely there are some lessons that will help you more than others.
You might start with the most famous ones, which tend to be the ones popularized by Kahneman and Tversky. But K&T were academics. They weren't trying to help people be more rational, they were trying to prove to other academics that people were irrational. The result is that they focused not on the most important biases, but the ones that were easiest to prove.
Take their famous anchoring experiment, in which they showed that the spin of a roulette wheel affected people's estimates of the percentage of African countries in the UN. The idea wasn't that roulette wheels causing biased estimates was a huge social problem; it was that no academic could possibly argue this behavior was somehow rational. K&T thereby scored a decisive blow for psychology against economists who claimed we're just rational maximizers.
Most academic work on irrationality has followed in K&T's footsteps. And, in turn, much of the stuff done by LW and CFAR has followed in the footsteps of this academic work. So it's not hard to believe that LW types are good at avoiding these biases and thus do well on the psychology tests for them. (Indeed, many of the questions on these tests for rationality come straight from K&T experiments!)
But if you look at the average person and ask why they aren't getting what they want, very rarely do you conclude their biggest problem is that they're suffering from anchoring, framing effects, the planning fallacy, commitment bias, or any of the other stuff in the sequences. Usually their biggest problems are far more quotidian and commonsensical.
Take Eliezer. Surely he wanted SIAI to be a well-functioning organization. And he's admitted that lukeprog has done more to achieve that goal of his than he has. Why is lukeprog so much better at getting what Eliezer wants than Eliezer is? It's surely not because lukeprog is so much better at avoiding Sequence-style cognitive biases! lukeprog readily admits that he's constantly learning new rationality techniques from Eliezer.
No, it's because lukeprog did what seems like common sense: he bought a copy of Nonprofits for Dummies and did what it recommends. As lukeprog himself says, it wasn't lack of intelligence or resources or akrasia that kept Eliezer from doing these things, "it was a gap in general rationality."
So if you're interested in closing the gap, it seems like the skills to prioritize aren't things like commitment effect and the sunk cost fallacy, but stuff like "figure out what your goals really are", "look at your situation objectively and list the biggest problems", "when you're trying something new and risky, read the For Dummies book about it first", etc. For lack of better terminology, let's call the K&T stuff "cognitive biases" and this stuff "practical biases" (even though it's all obviously both practical and cognitive and biases is kind of a negative way of looking at it).
What are the best things you've found on tackling these "practical biases"? Post your suggestions in the comments.
A cynical explanation for why rationalists worry about FAI
My friend, hearing me recount tales of LessWrong, recently asked me if I thought it was simply a coincidence that so many LessWrong rationality nerds cared so much about creating Friendly AI. "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"
Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.
Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.