Decision Theory FAQ
Co-authored with crazy88. Please let us know when you find mistakes, and we'll fix them. Last updated 03-27-2013.
Contents:
- 1. What is decision theory?
- 2. Is the rational decision always the right decision?
- 3. How can I better understand a decision problem?
- 4. How can I measure an agent's preferences?
- 5. What do decision theorists mean by "risk," "ignorance," and "uncertainty"?
- 6. How should I make decisions under ignorance?
- 7. Can decisions under ignorance be transformed into decisions under uncertainty?
- 8. How should I make decisions under uncertainty?
- 9. Does axiomatic decision theory offer any action guidance?
- 10. How does probability theory play a role in decision theory?
- 11. What about "Newcomb's problem" and alternative decision algorithms?
1. What is decision theory?
Decision theory, also known as rational choice theory, concerns the study of preferences, uncertainties, and other issues related to making "optimal" or "rational" choices. It has been discussed by economists, psychologists, philosophers, mathematicians, statisticians, and computer scientists.
We can divide decision theory into three parts (Grant & Zandt 2009; Baron 2008). Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose. Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose. Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
For example, one's normative model might be expected utility theory, which says that a rational agent chooses the action with the highest expected utility. Replicated results in psychology describe humans repeatedly failing to maximize expected utility in particular, predictable ways: for example, they make some choices based not on potential future benefits but on irrelevant past efforts (the "sunk cost fallacy"). To help people avoid this error, some theorists prescribe basic training in microeconomics, which has been shown to reduce the likelihood that humans will commit the sunk cost fallacy (Larrick et al. 1990). Thus, by coordinating normative, descriptive, and prescriptive research, we can help agents succeed in life by acting more in accordance with the normative model than they otherwise would.
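To make the sunk-cost point concrete, here is a minimal sketch (with made-up payoffs and probabilities, not data from Larrick et al.) of why sunk costs are irrelevant under expected utility theory: money already spent appears in every branch of the decision, so it cannot change the ranking of actions.

```python
# Toy illustration (hypothetical numbers): the cost already paid is a
# constant offset on every option, so it cannot change which option
# has the highest expected utility.

def expected_utility(outcomes):
    """Expected utility of an action: sum of probability * utility."""
    return sum(p * u for p, u in outcomes)

# Action A: finish a disappointing project; Action B: switch projects.
# Utilities count future payoffs only.
finish = [(1.0, 10)]               # certain, modest payoff
switch = [(0.7, 30), (0.3, 0)]     # risky but higher expected payoff

sunk_cost = -50  # already paid either way

best_ignoring_sunk = max([("finish", expected_utility(finish)),
                          ("switch", expected_utility(switch))],
                         key=lambda t: t[1])
best_with_sunk = max([("finish", expected_utility(finish) + sunk_cost),
                      ("switch", expected_utility(switch) + sunk_cost)],
                     key=lambda t: t[1])

# Same ranking either way: the sunk cost makes no difference.
assert best_ignoring_sunk[0] == best_with_sunk[0] == "switch"
```

The sunk cost shifts every expected utility by the same constant, which is exactly why the normative model says to ignore it.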
This FAQ focuses on normative decision theory. Good sources on descriptive and prescriptive decision theory include Stanovich (2010) and Hastie & Dawes (2009).
Two related fields beyond the scope of this FAQ are game theory and social choice theory. Game theory is the study of conflict and cooperation among multiple decision makers, and is thus sometimes called "interactive decision theory." Social choice theory is the study of making a collective decision by combining the preferences of multiple decision makers in various ways.
This FAQ draws heavily from two textbooks on decision theory: Resnik (1987) and Peterson (2009). It also draws from more recent results in decision theory, published in journals such as Synthese and Theory and Decision.
Need help with an MLP fanfiction with a transhumanist theme.
EDIT: I am now taking arguments for alicornism. Alicornism being the placeholder term I've given to the stance that all ponies should be alicorns. Please PM me or post here if you have a good one, or an argument against one of anti-alicornism's strongest points: Overpopulation/over-use of resources, magical abuse/existential risk, or upheaval of the respect ponies have for their rulers due to their alicorn status. I would prefer general arguments for alicornism over counter-arguments if possible. Deathist / anti-alicornist arguments are still fine to post here.
Disclaimer: I'm not sure this is worthy of a discussion post, but given the number of people on LW who like My Little Pony, I figured it would have at least as many potentially interested people as a regional meet-up thread, so I decided to give it a shot. If this is too trivial or frivolous for LW, feel free to tell me and/or downvote, and I'll refrain from such threads in the future. In that case, a pointer to somewhere other than the Discussion section where I could find help would also be greatly appreciated.
So I had an idea for a one-shot or small novella, depending on how the plot developed, about an argument between Twilight and Celestia. Twilight finds out she's immortal now that she's an alicorn, and decides that, given the standard anti-death premises (immortality is good, death is bad, and so on), they should turn everyone who wants to be an alicorn into one.
The problem is, I'm having a very difficult time coming up with actual arguments for Celestia.
- Celestia herself is immortal, she's lived for well over a thousand years, and she isn't horrifically depressed, so clearly, immortal life is worth living and there's enough stuff to do with an extended lifespan.
- For the purposes of this fic, it's possible to turn anypony into an alicorn. I'm likely going to go with the idea that the spell can only be used a few times a year, but that's still enough to turn anyone who wants it into an alicorn within a couple of decades via exponential growth: the first targets can all be gifted unicorns who can be easily trained to cast the spell themselves.
- In most of the "Immortality sucks" fics I've read, the only real argument that immortality sucks is that you have to watch everyone else grow up and die. If a large majority of the population were turned alicorn, this wouldn't be a problem anymore.
- Nothing in canon suggests that there's any sort of religion in Equestria. Even in fanfics I've read, I've only read one fanfic where someone made up an afterlife that some ponies believed in, and in many more that I've read, Celestia's name is actually used in place of God in various sentences, like "Oh for Celestia's sake!" Thus, it's unlikely they'd believe in an afterlife: Both in canon and the majority of fanon, the closest thing to a God appears to be Celestia herself.
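The "exponential growth" point in the list above can be sketched numerically. All the numbers here are my own placeholders (casts per year, population size), but they show the shape of the claim: if each new alicorn can be trained to cast the spell, coverage takes years, not centuries.

```python
# Toy growth model (all numbers are illustrative placeholders): each
# caster performs the spell a few times a year, and every new alicorn
# is trained to cast it too, so the caster pool grows geometrically.

casts_per_caster_per_year = 3
population = 10_000_000          # hypothetical pony population

casters, alicorns, years = 1, 1, 0
while alicorns < population:
    alicorns += casters * casts_per_caster_per_year
    casters = alicorns           # assume every new alicorn learns the spell
    years += 1

print(years)  # → 12: well within "a couple of decades"
```

Even with only three casts per caster per year, the pool quadruples annually, so ten million conversions take about a dozen years under these assumptions.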
I've worked out arguments for Celestia by roleplaying the debate out by myself, but I haven't come up with anything Twilight can't just shoot down. I'd prefer that the argument not be Celestia simply getting steamrolled, and I'd like to fix that by strengthening Celestia's side, not weakening Twilight's.
Is the argument for deathism really that weak? I've read over the Harry vs. Dumbledore deathism argument in HPMOR several times looking for ideas, and IIRC Eliezer actually claimed he steel-manned Dumbledore's position. But I don't find anything Dumbledore says convincing in the slightest, and I ended that chapter feeling that Harry was the clear winner of the debate. And that's with Dumbledore having access to arguments Celestia doesn't, given that in the Potterverse nobody actually knows what it's like to be immortal, and Dumbledore believes in an afterlife.
Some other arguments I've come up with for Celestia:
Argument: We can't just have a massive ruling class.
Response: There's no need for alicorns to be royalty. "Princess = Alicorn, Alicorn = Princess" is only something that law and tradition dictate: They can be changed. After all, Blueblood is a prince and not an alicorn, and it's certainly possible for an alicorn to NOT be royalty, if the princesses wanted.
Argument: Harder to keep the populace in line, if everyone has more power.
Response: Celestia's not exactly going around fighting criminals herself with her alicorn powers, so her being much more powerful than others isn't necessary to keep the peace. If anything, an alicornified populace is MORE likely to be able to govern itself: at the moment, a pegasus criminal can only be pursued effectively by about one-third of police officers, for example.
Argument: Overpopulation.
Response: One response is that, starting a year or so from a royal edict, ponies who wish to be changed into alicorns aren't permitted to give birth more than once or twice. A broader response is that "overpopulation" isn't actually a reason to oppose alicornification; it's just a problem that has to be solved in order to do it. Saying "There'd be overpopulation" and then abandoning the entire idea would be like Twilight, on being given the task of saving the Crystal Empire from being banished again, saying "Oh well, guess that's it, we may as well pack up and go home" rather than trying to actually solve the problem. That said, this is the only truly legitimate argument I've come up with: one that requires real thought to fully defeat, rather than one with an easy response that leaps to mind.
Argument: Mortals wouldn't understand the consequence of their decision.
Response: Again, several arguments here. Firstly, there's no reason to believe the alicorn transformation is irreversible, even if it's not currently known how to reverse it. Secondly, Celestia can already predict the consequences, and since she finds HER life worth living, there's clearly a solid chance that other ponies will find their lives worth living as well.
So, the questions to ask:
Are there good arguments for Celestia I haven't thought of?
Are the arguments I've already posited sufficient to not straw-man the lifeism position, and to allow for a reasonable argument?
Case Study: the Death Note Script and Bayes
"Who wrote the Death Note script?"
I give a history of the 2009 leaked script, discuss internal & external evidence for its authenticity including stylometrics; and then give a simple step-by-step Bayesian analysis of each point. We finish with high confidence in the script's authenticity, discussion of how this analysis was surprisingly enlightening, and what followup work the analysis suggests would be most valuable.
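The kind of step-by-step Bayesian analysis described here is easy to sketch in odds form: each piece of evidence multiplies the odds of authenticity by a likelihood ratio. The ratios below are made-up placeholders, not the actual values from the case study.

```python
# Hypothetical odds-form Bayes update: posterior odds = prior odds
# times the product of the likelihood ratios of each evidence item.
# All likelihood ratios below are illustrative, not from the analysis.

def update_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    return odds / (1 + odds)

prior = 1.0          # 1:1 odds -- no initial opinion on authenticity
evidence = [3.0,     # internal evidence: style matches the series
            2.0,     # external evidence: provenance of the leak
            4.0]     # stylometric similarity to the credited writer

posterior = odds_to_probability(update_odds(prior, evidence))
print(f"P(authentic | evidence) = {posterior:.3f}")  # → 0.960
```

Odds form makes the "each point of evidence" structure of the analysis explicit: independent evidence items simply multiply.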
2012: Year in Review
The beginning of a new year is a customary time to take a look back and consider what has happened during the last 12 months. And while the time for doing so is admittedly rather arbitrary - after all, "years" do not really exist in the universe, just in our heads - it is useful and fun to review one's accomplishments every now and then. And a time when everyone else is doing it gives us a nice Schelling point for joining in, so we can pretend that it's not quite that arbitrary.
So what might be some noteworthy things that happened on Less Wrong in 2012 that could be worth mentioning?
Site upgrades
First, I would like to say "thank you" to all the people working on keeping this site running and helping make it increasingly awesome! This obviously includes pretty much everyone who comments, posts and writes here, but particularly also the folks at Trikeapps, and everyone who contributes updates to the site's codebase. There were several site upgrades in 2012, four of which were major enough to get separate announcements:
Less Wrong's new front page was rolled out in March, thanks to work by matt. One can easily access a number of site features from the brain graphic, and there's a convenient introduction under it, together with links to featured articles and recent promoted articles. Hopefully, this has made it easier for newcomers to get familiar with the site.
The "Best" sorting system for comments was introduced in July. The work was done by John Simon, and integrated by Wes. Whereas the old default sorting system, "Top", favored old comments that had already floated to the top and were thus more likely to get even more upvotes, "Best" attempts to give newer comments a fairer chance.
In August we got the ability to show parent comments on /comments. The work was done by John Simon, and integrated by wmoore. This change makes it far easier to grasp the context of things seen on the recent comments page, given that we now see the old comment that the new comments are replying to.
And finally, starting from September, we have been able to write comments that contain polls! Work on the code was originally begun by jimrandomh, finished later by John Simon, and deployed by wmoore and matt. Although people had long been taking advantage of comment vote counts as a crude way of creating their own polls, this change makes things far easier.
Meetup booklet
In June, we published the How to Run a Successful Less Wrong Meetup booklet, which I wrote together with lukeprog, and which got its graphical design from Stanislaw Boboryk. Numerous other people also helped, both by providing advice and by contributing pictures to it. In addition to general advice on running a meetup, it contains various games and exercises as well as case studies and examples from real meetup groups from around the world.
Index of original research
Starting from October, lukeprog has maintained a curated index of Less Wrong posts containing significant original research. It contains numerous posts, organized under categories such as general philosophy, decision theory / AI architectures / mathematical logic, ethics, and AI risk strategy. Last updated on December 17th, it now links to a total of 78 different posts.
Who are we?
In November and December, Yvain continued his hard work in holding the yearly survey. Among other interesting details, around 90% of us are male, 55% are from the USA, 41% are students and 31% are doing for-profit work. See the 2012 survey results for many more details.
Most popular posts of 2012
On LW, people tend to judge the popularity of a post by the number of upvotes that it has. But this only reflects the opinion of the registered users who care enough to vote. For purposes of this article, we were interested in finding out the posts that had made the biggest impact on the whole Internet. Although it's not a perfect measure either, we decided to measure popularity by the number of unique pageviews, as reported by Google Analytics.
Overall, in 2012 Less Wrong had over eight million unique pageviews and close to two million unique visitors (8,225,509 and 1,756,899, respectively). Of the posts that were written in 2012, the most popular ones were...
#10: Get Curious, in which lukeprog suggests that one of the most important rationality skills is being genuinely curious about things, instead of just jumping to the first answer that comes to your mind and leaving it at that. He suggests a three-step approach for actually becoming more curious: first, feel that you don't already know the answer, then start wanting to know the answer, and finally sprint headlong to reality. Together with a number of exercises intended to make you better at these steps, this article made a lot of folks curious about Less Wrong and caused people to sprint headlong to the post 10,850 times.
#9: Being curious about things means that you genuinely want to know the truth. That makes it useful to have a good grasp of The Useful Idea of Truth. This article by Eliezer Yudkowsky starts the Highly Advanced Epistemology 101 for Beginners sequence by explaining what exactly it means for something to be "true". In order to avoid spoiling the article's "meditations" for anyone who hasn't read it yet, I will not attempt to summarize the answer. I'll only suggest that one definition for "truth" could be the correctness of the claim that this post was viewed 11,161 times.
#8: Having defined truth, we can move on to ask, what are numbers? And in what sense is "2 + 2 = 4" meaningful or true? Eliezer Yudkowsky's Logical Pinpointing attempts to answer this question, partially through the cute device of conversing with an imaginary logician who understands logic perfectly but has no grasp of numbers. As they converse, they define the rules according to which arithmetic works. I'm going to skip the obvious pun due to it being too obvious, and only say that this article was viewed 12,606 times.
#7: Now that we're curious and understand both the meaning of truth and of numbers, it stands to reason that we should Be Happier than before. Or maybe not, since Klevador's article does not actually mention "understand obscure philosophy" as a way of getting happier. What it does mention is a big list of other things that have been shown to increase happiness. We first get a list of brief recommendations a few sentences long, and then somewhat longer excerpts of the relevant literature. There's also a full list of references. Let's hope that the 14,178 views that this post got made someone happier.
#6: Getting into more controversial territory, lukeprog advises us to Train Philosophers with Pearl and Kahneman, not Plato and Kant. Philosophy is getting increasingly diseased and irrelevant, he argues, and the cure for that involves incorporating more actual science and rationality into the standard philosopher curriculum. If the discussion on Hacker News is any indication, this post got a lot of people incensed, which might help explain why it got 14,334 views.
#5: Now that we got started on calling whole disciplines diseased, let's look at Diseased disciplines: the strange case of the inverted chart. Morendil's post begins with a hypothetical example of numerous academics all citing a particular source, which doesn't actually contain the intended reference... and then the intended source doesn't actually have the data to back up its claim, either. But that's just a hypothetical example, right? Well, not really, which helped this post get 17,385 views.
#4: Interestingly, our fourth-most-popular post isn't actually an original contribution as such. Grognor's transcript of Richard Feynman on Why Questions discusses the nature of explanations, and the fact that there are some things which simply cannot be adequately explained in terms of pre-existing knowledge. Instead, one has to learn entirely new concepts in order to comprehend them. Hopefully, at least this much was understood on the 18,402 times that the post was viewed.
#3: From physics to neuroscience: kalla724's Attention control is critical for changing/increasing/altering motivation explores the effect of attention on neural plasticity, including the plasticity of motivation. It explains that paying attention to something can increase the amount of brain circuitry dedicated to processing that something, generally by repurposing nearby less-used circuitry. This also has practical applications, such as in helping to explain why Cognitive Behavioral Therapy works. That earned the post 21,136 views.
#2: I should be writing this post instead of browsing Facebook. Fortunately, lukeprog has a post titled My Algorithm for Beating Procrastination. Based on the equation of Motivation = (Expectancy * Value) / (Impulsiveness * Delay), the algorithm involves first noticing that you are procrastinating, then guessing which part of the motivation equation is causing you the most trouble, and then trying several methods for attacking that specific problem. I guess that a lot of people shared this on Facebook where other procrastinators saw it, because the article got 38,637 views.
#1: And finally... the most read 2012 article on the site was Yvain's The noncentral fallacy - the worst argument in the world?, where he defined the noncentral fallacy as "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member." Which sounds pretty abstract, but the political examples in the post should make it clearer. The politics probably helped contribute to this post's achievement of 41,932 views.
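The motivation equation quoted in #2 above can be transcribed directly. Here's a minimal sketch, with made-up factor values, of how the algorithm diagnoses which part of the equation is causing the most trouble.

```python
# Motivation = (Expectancy * Value) / (Impulsiveness * Delay),
# as quoted in the post. Factor values below are illustrative.

def motivation(expectancy, value, impulsiveness, delay):
    return (expectancy * value) / (impulsiveness * delay)

factors = {"expectancy": 0.9, "value": 0.2,
           "impulsiveness": 2.0, "delay": 5.0}

m = motivation(**factors)
print(f"motivation = {m:.3f}")  # → 0.018

# Diagnosis step: here "value" is the weak numerator term, so the
# algorithm would suggest attacking low perceived value first.
```

Low expectancy or value shrinks the numerator; high impulsiveness or delay inflates the denominator — the algorithm is just "find the worst factor and attack that one."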
Most popular all-time posts
In addition to looking at only the posts that were made in 2012, people might be interested in knowing which posts were the most viewed in 2012 overall. The top three were all written by lukeprog, and two of them are closely related to the 2012 top scorers listed above.
How to be Happy is LW's runaway favorite article and was viewed more than every page on LW except the home page and the discussion homepage. That is, 228,747 times! The Best Textbooks on Every Subject comes as a distant second at 98,011 views. And the third one is How to Beat Procrastination, at 66,587 views.
So I guess the take-home message is: people want to be happier, smarter, and more productive. Let's keep becoming those things in 2013!
Social status hacks from The Improv Wiki
I can't remember how I found this, just that I was amazed at how rational and near-mode it is on a topic where most of the information one usually encounters is hopelessly far.
LessWrong wiki link on the same topic: http://wiki.lesswrong.com/wiki/Status
Status
Status is pecking order. The person who is lower in status defers to the person who is higher in status.
Status is partly established by social position--e.g. boss and employee--but mainly by the way you interact. If you interact in a way that says you are not to be trifled with, that the other person must adjust to you, then you are establishing high status. If you interact in a way that says you are willing to go along and don't want responsibility, that's low status. A boss can play low status or high status. An employee can play low status or high status.
Status is established in every line and gesture, and changes continuously. Status is something that one character plays to another at a particular moment. If you convey that the other person must not cross you on what you're saying now, then you are playing high status to that person in that line. Your very next line might come out low status, as you suggest willingness to defer about something else.
If you analyze your most successful scenes, it's likely they involved several status changes between the players. Therefore, one path to great scenes is to intentionally change status. You can raise or lower your own status, or the status of the other player. The more subtly you can do this, the better the scene.
High-status behaviors
When walking, assuming that other people will get out of your path.
Making eye contact while speaking.
Not checking the other person's eyes for a reaction to what you said.
Having no visible reaction to what the other person said. (Imagine saying something to a typical Clint Eastwood character. You say something expecting a reaction, and you get--nothing.)
Speaking in complete sentences.
Interrupting before you know what you are going to say.
Spreading out your body to full comfort. Taking up a lot of space with your body.
Looking at the other person with your eyes somewhat down (head tilted back a bit to make this work), creating the feeling that you are a parent talking to a child.
Talking matter-of-factly about things that the other person finds displeasing or offensive.
Letting your body be vulnerable, exposing your neck and torso to the other person.
Moving comfortably and gracefully.
Keeping your hands away from your face.
Speaking authoritatively, with certainty.
Making decisions for a group; taking responsibility.
Giving or withholding permission.
Evaluating other people's work.
Speaking cryptically, not adjusting your speech to be easily understood by the other person (except that mumbling does not count). E.g. saying, "Chomper not right" with no explanation of what you mean or what you want the other person to do.
Being surrounded by an entourage, especially of people who are physically smaller than you.
A "high-status specialist" conveys in every word and gesture, "Don't come near me, I bite."
Low-status behaviors
When walking, moving out of other people's path.
Looking away from the other person's eyes.
Briefly checking the other person's eyes to see if they reacted positively to what you said.
Speaking in halting, incomplete sentences. Trailing off, editing your sentences as you go.
Sitting or standing uncomfortably in order to adjust to the other person and give them space. Pulling inward to give the other person more room. If you're tall, you might need to scrunch down a bit to indicate that you're not going to use your height against the other person.
Looking up toward the other person (head tilted forward a bit to make this work), creating the feeling that you are a child talking to a parent.
Dancing around your words (beating around the bush) when talking about something that will displease the other person.
Shouting as an attempt to intimidate the other person. This is low status because it suggests that you expect resistance.
Crouching your body as if to ward off a blow; protecting your face, neck, and torso.
Moving awkwardly or jerkily, with unnecessary movements.
Touching your face or head.
Avoiding making decisions for the group; avoiding responsibility.
Needing permission before you can act.
Adjusting the way you say something to help the other person understand; meeting the other person on their (cognitive) ground; explaining yourself. E.g. "Could you please adjust the chomper? That's the gadget on the kitchen counter immediately to the left of the toaster. If you just give it a slight rap on the top, that should adjust it."
A "low-status specialist" conveys in every word and gesture, "Please don't bite me, I'm not worth the trouble."
Raising another person's status
To raise another person's status is to establish them as high in the pecking order in your group (possibly just the two of you).
• Ask their permission to do something.
• Ask their opinion about something.
• Ask them for advice or help.
• Express gratitude for something they did.
• Apologize to them for something you did.
• Agree that they are right and you were wrong.
• Defer to their judgement without requiring proof.
• Address them with a fancy title or honorific (even "Mr." or "Sir" works very well).
• Downplay your own achievement or attribute in comparison to theirs. "Your wedding cake is so much whiter than mine."
• Do something incompetent in front of them and then apologize for it or act sheepish about it.
• Mention a failure or shortcoming of your own. "I was supposed to go to an audition today, but I was late. They said I was wrong for the part anyway."
• Compliment them in a way that suggests appreciation, not judgement. "Wow, what a beautiful cat you have!"
• Obey them unquestioningly.
• Back down in a conflict.
• Move out of their way, bow to them, lower yourself before them.
• Tip your hat to them.
• Lose to them at something competitive, like a game (or any comparison).
• Wait for them.
• Serve them; do manual labor for them.
Tip: Whenever you bring an audience member on stage, always raise their status, never lower it.
Lowering another person's status
To lower another person's status is to attack or discredit their right to be high in the pecking order. Another word for "lowering someone's status" is "humiliating them."
• Criticize something they did.
• Contradict them. Tell them they are wrong. Prove it with facts and logic.
• Correct them.
• Insult them.
• Give them unsolicited advice.
• Approve or disapprove of something they did or some attribute of theirs. "Your cat has both nose and ear points. That is acceptable." Anything that sets you up as the judge lowers their status, even "Nice work on the Milligan account, Joe."
• Shout at them.
• Tell them what to do.
• Ignore what they said and talk about something else, especially when they've said something that requires an answer. E.g. "Have you seen my socks?" "The train leaves in five minutes."
• One-up them. E.g. have a worse problem than the one they described, have a greater past achievement than theirs, have met a more famous celebrity, earn more money, do better than them at something they're good at, etc.
• Win: beat them at something competitive, like a game (or any comparison).
• Announce something good about yourself or something you did. "I went to an audition today, and I got the part!"
• Disregard their opinion. E.g. "You'd better not smoke while pumping gas, it's a fire hazard." Flick, light, puff, puff, pump, pump.
• Talk sarcastically to them.
• Make them wait for you.
• When they've fallen behind you, don't wait for them to catch up, just push on and get further out of sync.
• Disobey them.
• Violate their space.
• Beat them up. Beating them up verbally (rather than physically, as in martial arts or a gym fight) in front of other people, especially their wife, girlfriend, and/or children, is particularly status-lowering.
• In a conflict, make them back down.
• Taunt them. Tease them.
The basic status-lowering act
Laugh at them. (Not with them.)
The basic status-raising act
Be laughed at by them.
Second to that is laughing with them at someone else.
(Notice that those are primarily what comedians do.)
Note that behaviors that raise another person's status are not necessarily low-status behaviors, and behaviors that lower another person's status are not necessarily high-status behaviors. People at any status level raise and lower each other all the time. They can do so in ways that convey high or low status.
For example, shouting at someone lowers their status but is itself a low-status behavior.
Objects and environments also have high or low status, although this is seldom explored. So explore it. Make something cheap and inconsequential high status. (This fingernail clipping came from Graceland!) Or bring down the status of a high status item. (Casually toss a 2 carat diamond ring on your jewelry pile.)
Source: http://greenlightwiki.com/improv/Status
Retrieved 20 March 2012
CFAR’s Inaugural Fundraising Drive
http://appliedrationality.org/fundraising/
(interested in hearing how other donors frame allocation between SI and CFAR)
Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86
Edit: New thread posted here.
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 86. The previous thread has long passed 500 comments.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16.
As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.
Why you must maximize expected utility
This post explains the von Neumann-Morgenstern (VNM) axioms for decision theory and what follows from them: that if you have a consistent direction in which you are trying to steer the future, you must be an expected utility maximizer. I'm writing this post in preparation for a sequence on updateless anthropics, but I'm hoping that it will also be independently useful.
The theorems of decision theory say that if you follow certain axioms, then your behavior is described by a utility function. (If you don't know what that means, I'll explain below.) So you should have a utility function! Except, why should you want to follow these axioms in the first place?
A couple of years ago, Eliezer explained how violating one of them can turn you into a money pump — how, at time 11:59, you will want to pay a penny to get option B instead of option A, and then at 12:01, you will want to pay a penny to switch back. Either that, or the game will have ended and the option won't have made a difference.
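The money pump can be made concrete in a few lines of code. This is a minimal sketch, assuming an agent with the cyclic preference A ≺ B ≺ C ≺ A who will always pay a penny to trade up to the option it prefers; the option names and the one-cent fee are illustrative, not taken from Eliezer's post.

```python
# Money pump: an agent with cyclic (intransitive) preferences
# A < B, B < C, C < A pays a penny for each "upgrade" and ends
# up holding its original option, strictly poorer.

prefers_over = {"A": "B", "B": "C", "C": "A"}  # agent prefers the value over the key

def run_money_pump(start, wealth_cents, rounds):
    """Repeatedly offer the agent its preferred swap for one cent."""
    holding = start
    for _ in range(rounds):
        holding = prefers_over[holding]  # agent happily trades up...
        wealth_cents -= 1                # ...and pays a penny each time
    return holding, wealth_cents

holding, wealth = run_money_pump("A", 100, 6)
print(holding, wealth)  # back to "A", 6 cents poorer: A 94
```

After any multiple of three trades the agent is right back where it started, minus the fees, which is the sense in which cyclic preferences are exploitable.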
When I read that post, I was suitably impressed, but not completely convinced: I would certainly not want to behave one way if behaving differently always gave better results. But couldn't you avoid the problem by violating the axiom only in situations where it doesn't give anyone an opportunity to money-pump you? I'm not saying that would be elegant, but is there a reason it would be irrational?
It took me a while, but I have since come around to the view that you really must have a utility function, and really must behave in a way that maximizes the expectation of this function, on pain of stupidity (or at least that there are strong arguments in this direction). But I don't know any source that comes close to explaining the reason, the way I see it; hence, this post.
I'll use the von Neumann-Morgenstern axioms, which assume probability theory as a foundation (unlike the Savage axioms, which actually imply that anyone following them has not only a utility function but also a probability distribution). I will assume that you already accept Bayesianism.
*
Epistemic rationality is about figuring out what's true; instrumental rationality is about steering the future where you want it to go. The way I see it, the axioms of decision theory tell you how to have a consistent direction in which you are trying to steer the future. If my choice at 12:01 depends on whether at 11:59 I had a chance to decide differently, then perhaps I won't ever be money-pumped; but if I want to save as many human lives as possible, and I must decide between different plans that have different probabilities of saving different numbers of people, then it starts to at least seem doubtful that which plan is better at 12:01 could genuinely depend on my opportunity to choose at 11:59.
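To make the life-saving example concrete, here is a toy comparison of two hypothetical plans by expected number of lives saved. The plans and probabilities are invented for illustration; the point is only that, with utility linear in lives, the ranking depends on nothing but the expectation.

```python
# Comparing plans by expected number of lives saved.
# Each plan is a lottery: a list of (probability, lives_saved) pairs.

plans = {
    "plan_1": [(0.5, 400), (0.5, 0)],  # 50% chance of saving 400 people
    "plan_2": [(1.0, 190)],            # saves 190 people for certain
}

def expected_value(lottery):
    """Expected number of lives saved under the lottery."""
    return sum(p * n for p, n in lottery)

best = max(plans, key=lambda name: expected_value(plans[name]))
print(best, expected_value(plans[best]))  # plan_1 200.0
```

A coherent steering direction means this comparison comes out the same way regardless of what choices I did or didn't have at 11:59.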
So how do we formalize the notion of a coherent direction in which you can steer the future?
Help Reform A Philosophy Curriculum
A couple of days ago, Luke posted a recommendation for reforming how philosophy is taught. My department at the University of Illinois is in the midst of some potentially large-scale changes.* Hence, now seems to be a great time to think about concrete steps towards reforming or partially reforming the curriculum in an actual philosophy department. I would appreciate some help thinking through how to make changes that will (a) improve the philosophy education of our undergraduates, (b) recruit and retain better students, (c) improve faculty experiences with teaching philosophy, and (d) be salable to the rest of the philosophy faculty. To some extent, this post is me thinking out loud through what I want to say to my department's curriculum committee (probably in January).
How Things Stand Right Now
In this section, I will try to lay out the situation as I see it right now.
First, we have the following problem: the philosophy courses we offer are not sufficiently gated. In the mathematics department at my university, you can't take mathematical logic until you've taken a course called Fundamental Mathematics, which looks to be a class about proof techniques, mathematical induction, etc. And you can't take that until you've taken the second semester of calculus. Computer science, economics, physics, and almost every other science curriculum work like this. If you want to take advanced courses, you have to pass through the gates of less advanced ones, which (theoretically, at least) prepare you for the material covered in the more advanced course.
By contrast, in the philosophy department, you may take a senior-level (400 at my school) course after taking a freshman-level (100 at my school) introduction to philosophy. The result is that students who take our 400-level courses are typically unprepared. At least, that has been my experience. (Shockingly, many students taking 400-level classes then complain that they were expected to know things about philosophy!) A big part of the problem here is that we do not presently have enough faculty to cover intermediate-level courses on a regular enough basis, and let's be honest, faculty members don't usually want to teach lower-level courses anyway.
Second, we have the following resource: our department currently has strong and growing connections with several world-class science or science-related departments. We have cross-appointed faculty and/or cross-listed courses with mathematics, linguistics, psychology, and physics, all of which are very strong departments. We have philosophy graduate students who do research and teach courses in these disciplines as well. And I am hoping to expand our connections to include computer science and statistics. I think there ought to be a good way to make use of these resources.
Rather than trying to reform the entire philosophy curriculum all at once, I want to focus first on our logic offerings. What we have now is the following mess.
- 102 -- Introduction to Logic: A critical thinking course almost never taught by a faculty member.
- 103 -- Quantitative Introduction to Logic: An introductory formal logic course, taught by me about half the time.
- 202 -- Symbolic Logic: A basic symbolic logic course (unclear how it differs from 103, except that it is restricted entirely to deductive logic).
- 307 -- Elements of Semantics and Pragmatics: Cross-listed with linguistics.
- 407 -- Logic and Linguistics: Cross-listed with linguistics.
- 453 -- Formal Logic and Philosophy: An extension of 202, with emphasis on philosophical issues.
- 454 -- Advanced Symbolic Logic: Essentially a mathematical logic course, covering completeness, compactness, Löwenheim-Skolem, incompleteness, and undecidability.
The only pre-requisite for 453 and 454 is 202, and 202 has no pre-requisites at all; the pre-req for 407 is 307, and 307 depends on a 100-level linguistics course or (more commonly) consent of the instructor. We also have a 400-level philosophy of mathematics course. Along with these, the mathematics department has a 400-level mathematical logic course and a 400-level course on set theory and topology, neither of which is currently cross-listed, but both of which, I think, should be cross-listed as philosophy courses, which would also raise the bar for the philosophy students interested in logic by requiring that they take the calculus sequence and the fundamentals course.
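Gating is just a dependency graph, and a topological sort makes the current chains explicit. The sketch below encodes my reading of the pre-reqs described above; the "PHIL"/"LING" labels are my shorthand, and "LING 100" stands in for the 100-level linguistics course.

```python
# The current pre-req chains from the post, encoded as a dependency graph.
# An empty list means the course has no pre-requisites at all.

prereqs = {
    "PHIL 202": [],
    "PHIL 453": ["PHIL 202"],
    "PHIL 454": ["PHIL 202"],
    "LING 100": [],
    "PHIL 307": ["LING 100"],
    "PHIL 407": ["PHIL 307"],
}

def course_order(prereqs):
    """Depth-first topological sort: every pre-requisite comes first."""
    order, seen = [], set()
    def visit(course):
        if course not in seen:
            seen.add(course)
            for pre in prereqs.get(course, []):
                visit(pre)
            order.append(course)
    for course in prereqs:
        visit(course)
    return order

order = course_order(prereqs)
print(order)  # every course appears after all of its pre-requisites
```

What the graph makes vivid is how shallow the gating is: the longest chain is two courses deep, and 202, the gate for both 400-level logic courses, is itself ungated.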
On the defects side, I think we have poor use of gating and spotty coverage of even deductive logic. For example, we have no courses on modal logics, intuitionistic/constructivist logic, relevance logic, or more exotic logics, and no courses on set theory, category theory, or computation. We get sort of close to the last two in 453/454, and we might address them more directly by cross-listing with mathematics. But as it is, we do not do those things. And we have a huge gaping hole where inductive logic, probability theory, statistics, causal inference, and so on should be. On an individual level, that hole could be filled somewhat by taking courses in statistics; however, that is not quite the same as having courses available on confirmation theory or inductive logics.
Recommendations
So, now you know the basic situation ... what to do?
Below are some specific recommendations that I want to make to my department's curriculum committee. I would really appreciate input on how to refine my recommendations, how to make them more palatable, and so on.
First, we need to make the 100- and 200-level courses connect in a relevant way. I recommend entirely relabeling (and maybe even renumbering) 102 so that it is clear that it is a terminal service course intended for non-majors. The course should stand to philosophy education as courses like Physics Made Easy stand to physics education. I further recommend making a relabeled (and maybe renumbered) version of 103 a pre-requisite for all 200-level logic courses. In terms of material covered, ideally 103 would introduce symbolic conventions (to be made as standard as possible across the curriculum), proof skills, and basic ideas in model theory, set theory, and probability theory. (I go back and forth between liking this idea, which fits closely with how I teach 103 now, and wanting to do something more like deductive logic in the first half and confirmation theory in the second half, with no formal exposure to set theory. The biggest barrier to the second approach is formally developing confirmation theory without set theory.) Then 202 could do more meta-logic, go into more detail on model theory, go into more detail on set theory, or whatever.
Second, we need more courses covering inductive logic, probability theory, statistics, and so on. I recommend adding a 200-level course parallel to 202, which would cover some probability theory and some causal and statistical reasoning. Let's call this proposed course PHIL 204, since we don't offer anything under that number right now. I have in mind something slightly more advanced than CMU's Open Learning Initiative course here.
Third, I recommend expanding our 300-level course offerings as follows. We need a second semester of inductive logic (etc.) that builds on 204. And we need a course that does a simple survey of exotic logics, like modal logics, intuitionistic logic, relevance logic, free logic, etc. The 300-level survey of exotic logics need not be a pre-req for 400-level courses, provided the 400-level courses cover the same material that they have been covering. And that seems fine to me, although it might be in our long-term interests to drop our 454 in the event that we cross-list with mathematics on their math logic course. Depending on how their math logic course is taught, we might try to convince them that our 202 should satisfy the pre-req as effectively as their fundamentals course. (I don't know if that would be a hard sell or not.)
Fourth, I recommend cross-listing some courses with mathematics and statistics to get more regular coverage of the tools we want our students to have without necessarily having our faculty teach those courses all the time.
What Is The Goal Here?
I haven't spent any time in this write-up thinking about the goal(s) of logic education in philosophy. I am not sure whether it would be worth backing up to address this question or whether there is sufficient implicit agreement about the value and goals of logic to leave it alone. If you think I should be saying something about the place of logic in the overall curriculum or making an argument for teaching logic at all or making an argument for understanding logic broadly enough to get probability and statistics through the door, please tell me and make a suggestion about how to develop the heading.
--------------------------------------------------------------------------------------------------------------------
*My department is currently much too small relative to the size of my university, but the powers that be have recently become receptive to our requests to expand (really, to replace a large number of retirements from the last five years). Hence, we are about to undergo an external review, which we hope will result in a plan for phased growth of the department to a little more than twice its current size by the end of the decade. (Yes, our numbers are seriously depleted!)
By Which It May Be Judged
Followup to: Mixed Reference: The Great Reductionist Project
Humans need fantasy to be human.
"Tooth fairies? Hogfathers? Little—"
Yes. As practice. You have to start out learning to believe the little lies.
"So we can believe the big ones?"
Yes. Justice. Mercy. Duty. That sort of thing.
"They're not the same at all!"
You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.
- Susan and Death, in Hogfather by Terry Pratchett
Suppose three people find a pie - that is, three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory. Zaire wants the entire pie; Yancy thinks that 1/3 each is fair; and Xannon thinks that fair would be taking into equal account everyone's ideas about what is "fair".
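Xannon's notion of fairness is self-referential: his proposal is the equal-weight average of everyone's proposals, including his own. One way to read that (my interpretation, not necessarily where the post takes it) is as a fixed-point equation, which we can solve by simple iteration:

```python
# Each proposal is a pie allocation (Zaire's share, Yancy's, Xannon's).
# Xannon's proposal is defined as the average of all three proposals,
# including itself -- a fixed point we approach by iterating.

zaire = (1.0, 0.0, 0.0)      # Zaire wants the entire pie for himself
yancy = (1/3, 1/3, 1/3)      # Yancy: equal thirds for everyone

xannon = (1/3, 1/3, 1/3)     # initial guess for Xannon's proposal
for _ in range(100):         # error shrinks by a factor of 3 per step
    xannon = tuple((z + y + x) / 3 for z, y, x in zip(zaire, yancy, xannon))

print([round(share, 3) for share in xannon])  # [0.667, 0.167, 0.167]
```

The fixed point gives Zaire two thirds of the pie, which already hints at the trouble: a "fair" rule that takes unfair claims into equal account rewards whoever claims the most.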