Posts I'd Like To Write (Includes Poll)
Summary: There are a bunch of posts I want to write; I'd like your help prioritizing them, and if you feel like writing one of them, that would be awesome too!
I haven't been writing up as many of my ideas for Less Wrong as I'd like; I have excuses, but so does everyone. So I'm listing out my backlog, both for my own motivation and for feedback/help. At the end, there's a link to a poll on which ones you'd like to see. Comments would also be helpful, and if you're interested in writing up one of the ideas from the third section yourself, say so!
(The idea was inspired by lukeprog's request for post-writing help, and I think someone else did this a while ago as well.)
Posts I'm Going To Write (Barring Disaster)
These are posts that I currently have unfinished drafts of.
Decision Theories: A Semi-Formal Analysis, Part IV and Part V: Part IV concerns bargaining problems and introduces the tactic of playing chicken with the inference process; Part V discusses the benefits of UDT and perhaps wraps up the sequence. Part IV has been delayed by more than a month, partly by real life, and partly because bargaining problems are really difficult and the approach I was trying turned out not to work. I believe I have a fix now, but that's no guarantee; if it turns out to be flawed, then Part IV will mainly consist of "bargaining problems are hard, you guys".
Posts I Really Want To Write
These are posts that I feel I've already put substantial original work into, but I haven't written a draft. If anyone else wants to write on the topic, I'd welcome that, but I'd probably still write up my views on it later (unless the other post covers all the bases that I'd wanted to discuss, most of which aren't obvious from the capsule descriptions below).
An Error Theory of Qualia: My sequence last summer didn't turn out as well as I'd hoped, but I still think it's the right approach to a physically reductionist account of qualia (and that mere bullet-biting isn't going to suffice), so I'd like to try again and see if I can find ways to simplify and test my theory. (In essence, I'm proposing that what we experience as qualia are something akin to error messages, caused when we try to consciously introspect on something that introspection can't usefully break down. It's rather like the modern understanding of déjà vu.)
Weak Solutions in Metaethics: I've been mulling over a certain approach to metaethics, which differs from Eliezer's sequence and lukeprog's sequence (although the conclusions may turn out to be close). In mathematics, there's a concept of a weak solution to a differential equation: a function that has the most important properties but isn't actually differentiable enough times to "count" in the original formulation. Sometimes these weak solutions can lead to "genuine" solutions, and other times it turns out that the weak solution is all you really need. The analogy is that there are a bunch of conditions humans want our ethical theories to satisfy (things like consistency, comprehensiveness, universality, objectivity, and practical approximability), and that something which demonstrably had all these properties would be a "strong" solution. But the failure of moral philosophers to find a strong solution doesn't have to spell doom for metaethics; we can focus instead on the question of what sorts of weak solutions we can establish.
Posts I'd Really Love To See
And then we get to ideas that I'd like to write Less Wrong posts on, but that I haven't really developed beyond the kernels below. If any of these strike your fancy, you have my atheist's blessing to flesh them out. (Let me know in the comments if you want to publicly commit to doing so.)
Living with Rationality: Several people in real life criticize Less Wrong-style rationality on the grounds that "you couldn't really benefit by living your life by Bayesian utility maximization, you have to go with intuition instead". I think that's a strawman attack, but none of the defenses on Less Wrong seem to answer this directly. What I'd like to see described is how it works to actually improve one's life via rationality (which I've seen in my own life), and how it differs from the Straw Vulcan stereotype of decisionmaking. (That is, I usually apply conscious deliberation on the level of choosing habits rather than individual acts; I don't take out a calculator when deciding who to sit next to on a bus; I leave room for the kind of uncertainty described as "my conscious model of the situation is vastly incomplete", etc.)
An Explanation of the Born Probabilities in MWI: This topic might be even better suited to an actual physicist than to a know-it-all mathematician, but I don't see why the Born probabilities should be regarded as mysterious at all within the Many-Worlds interpretation. The universe is naturally defined as a Hilbert space, and the evolution of the wavefunction has a basic L^2 conservation law. If you're going to ask "how big" a chunk of the wavefunction is (which is the right way to compute the relative probabilities of being an observer that sees such-and-such), the only sane answer is going to be the L^2 norm (i.e. the Born probabilities).
Are Mutual Funds To Blame For Stock Bubbles? My opinion about the incentives behind the financial crisis, in a nutshell: Financial institutions caused the latest crash by speculating in ways that were good for their quarterly returns but exposed them to far too much risk. The executives were incentivized to act in that short-sighted way because the investors wanted short-term returns and were willing to turn a blind eye to that kind of risk. But that's a crazy preference for most investors (I expect it had seriously negative expected value for them), so why weren't investors smarter (i.e. why didn't they flee from any company that wasn't clearly prioritizing longer-term expected value)? Well, there's one large chunk of investors with precisely those incentives: the 20% of the stock market that's composed of mutual funds. I'd like to test this theory and think about realistic ways to apply it to public policy. (It goes without saying that I think Less Wrong readers should, at minimum, invest in index funds rather than actively managed mutual funds.)
Strategies for Trustworthiness with the Singularity: I want to develop this comment into an article. Generally speaking, the usual methods of making the principal-agent problem work out aren't available; the possible payoffs are too enormous when we're discussing rapidly accelerating technological progress. I'm wondering if there's any way of setting up a Singularity-affecting organization so that it will be transparent to the organization's backers that the organization is doing precisely what it claims. I'd like to know in general, but there's also an obvious application; I think highly of the idealism of SIAI's people, but trusting people on their signaled idealism in the face of large incentives turns out to backfire in politics pretty regularly, so I'd like a better structure than that if possible.
On Adding Up To Normality: People have a strange block about certain concepts, like the existence of a deity or of contracausal free will, where it seems to them that the instant they stopped believing in it, everything else in their life would fall apart or be robbed of meaning, or they'd suddenly incur an obligation that horrifies them (like raw hedonism or total fatalism). That instinct is like being on an airplane, having someone explain to you that your current understanding of aerodynamic lift is wrong, and then suddenly becoming terrified that the plane will plummet out of the sky now that there's no longer the kind of lift you expected. (That is, it's a fascinating example of the Mind Projection Fallacy.) So I want a general elucidation of Egan's Law to point people to.
The Subtle Difference Between Meta-Uncertainty and Uncertainty: If you're discussing a single toss of a coin, then you should treat it the same (for decision purposes) whether you know that it's a coin designed to land heads 3/4 of the time, or whether you know there's a 50% chance it's a fair coin and a 50% chance it's a two-headed coin. Meta-uncertainty and uncertainty are indistinguishable in that sense. Where they differ is in how you update on new evidence, or how you'd make bets about three upcoming flips taken together, etc. This is a worthwhile topic that seems to confuse the hell out of newcomers to Bayesianism.
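The coin example can be made concrete in a few lines. This is a hedged sketch of my own (the variable names and framing are mine): both cases give the same single-flip probability, but they diverge on joint events and on updating.

```python
# Case A: a coin known to land heads 3/4 of the time.
# Case B: 50% chance it's a fair coin, 50% chance it's two-headed.

p_heads_A = 0.75
p_heads_B = 0.5 * 0.5 + 0.5 * 1.0  # mixture over the two hypotheses

# For a single flip, uncertainty and meta-uncertainty are indistinguishable:
assert p_heads_A == p_heads_B == 0.75

# But they come apart on joint events, e.g. "three heads in a row":
p_three_A = 0.75 ** 3                          # 27/64 = 0.421875
p_three_B = 0.5 * 0.5 ** 3 + 0.5 * 1.0 ** 3    # 0.5625

# ...and on updating. In Case B, one observed tail rules out the
# two-headed hypothesis entirely, so the next-flip heads probability
# drops to that of a fair coin:
p_fair_given_tail = (0.5 * 0.5) / (0.5 * 0.5 + 0.5 * 0.0)  # = 1.0
p_heads_B_after_tail = p_fair_given_tail * 0.5             # = 0.5
```

In Case A, by contrast, observing a tail changes nothing: the next flip is still heads with probability 3/4.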
(Originally, this was a link to a poll on these post ideas)
Thanks for your feedback!
UPDATE:
Thanks to everyone who gave me feedback; results are in this comment!
Timeless physics breaks T-Rex's mind [LINK]
From Dinosaur Comics, with a nice shout-out to Eliezer's Timeless Physics post! (Look in the newspost below the comic.)
I can't wait until Ryan North gets around to Newcomb's Problem...
Decision Theories: A Semi-Formal Analysis, Part III
Or: Formalizing Timeless Decision Theory
Previously:
0. Decision Theories: A Less Wrong Primer
1. The Problem with Naive Decision Theory
2. Causal Decision Theory and Substitution
WARNING: The main result of this post, as it's written here, is flawed. I at first thought it was a fatal flaw, but later found a fix. I'm going to try to repair this post, either by including the tricky bits, or by handwaving and pointing you to the actual proofs if you're curious. Carry on!
Summary of Post: Have you ever wanted to know how (and whether) Timeless Decision Theory works? Using the framework from the last two posts, this post shows you explicitly how TDT can be implemented in the context of our tournament, what it does, how it strictly beats CDT on fair problems, and a bit about why this is a Big Deal. But you're seriously going to want to read the previous posts in the sequence before this one.
We've reached the frontier of decision theories, and we're ready at last to write algorithms that achieve mutual cooperation in the Prisoner's Dilemma (without risk of being defected on, and without giving up the ability to defect against players who always cooperate)! After two substantial preparatory posts, it's felt like a long time coming, hasn't it?

But look at me, here, talking when there's Science to do...
Decision Theories: A Semi-Formal Analysis, Part II
Or: Causal Decision Theory and Substitution
Previously:
0. Decision Theories: A Less Wrong Primer
1. The Problem with Naive Decision Theory
Summary of Post: We explore the role of substitution in avoiding spurious counterfactuals, introduce an implementation of Causal Decision Theory and a CliqueBot, and set off in the direction of Timeless Decision Theory.
In the last post, we showed the problem with what we termed Naive Decision Theory, which attempts to prove counterfactuals directly and pick the best action: there's a possibility of spurious counterfactuals which lead to terrible decisions. We'll want to implement a decision theory that does better; one that is, by any practical definition of the words, foolproof and incapable of error...

I know you're eager to get to Timeless Decision Theory and the others. I'm sorry, but I'm afraid I can't do that just yet. This background is too important for me to allow you to skip it...
Over the next few posts, we'll create a sequence of decision theories, each of which outperforms the previous ones across a wide range of plausible games: the new ones will do better in some games without doing worse in others.
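As a taste of the kind of program the summary's CliqueBot refers to, here is a minimal sketch of my own (not the post's actual implementation, which handles source comparison via quining rather than a shared constant): cooperate exactly when the opponent's program text is identical to your own.

```python
# Minimal CliqueBot sketch (illustrative only, not the post's implementation):
# cooperate iff the opponent's source text exactly matches your own.
CLIQUE_BOT_SOURCE = 'return "C" if opponent_source == CLIQUE_BOT_SOURCE else "D"'

def clique_bot(opponent_source: str) -> str:
    # "C" = cooperate, "D" = defect.
    return "C" if opponent_source == CLIQUE_BOT_SOURCE else "D"

print(clique_bot(CLIQUE_BOT_SOURCE))  # C -- mutual cooperation within the clique
print(clique_bot('return "C"'))       # D -- still exploits an unconditional cooperator
```

This gets mutual cooperation among exact copies without being exploitable, but it's brittle: any bot that isn't a character-for-character twin gets defected against, which is part of why the sequence moves on to more flexible theories.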
Decision Theories: A Semi-Formal Analysis, Part I
Or: The Problem with Naive Decision Theory
Previously: Decision Theories: A Less Wrong Primer
Summary of Sequence: In the context of a tournament for computer programs, I give almost-explicit versions of causal, timeless, ambient, updateless, and several other decision theories. I explain the mathematical considerations that make decision theories tricky in general, and end with a bunch of links to the relevant recent research. This sequence is heavier on the math than the primer was, but is meant to be accessible to a fairly general audience. Understanding the basics of game theory (and Nash equilibria) will be essential. Knowing about things like Gödel numbering, quining and Löb's Theorem will help, but won't be required.
Summary of Post: I introduce a context in which we can avoid most of the usual tricky philosophical problems and formalize the decision theories of interest. Then I show the chief issue with what might be called "naive decision theory": the problem of spurious counterfactual reasoning. In future posts, we'll see how other decision theories get around that problem.
In my Decision Theory Primer, I gave an intuitive explanation of decision theories; now I'd like to give a technical explanation. The main difficulty is that in the real world, there are all sorts of complications that are extraneous to the core of decision theory. (I'll mention more of these in the last post, but an obvious one is that we can't be sure that our perception and memory match reality.)
In order to avoid such difficulties, I'll need to demonstrate decision theory in a completely artificial setting: a tournament among computer programs.

Suggestions for naming a class of decision theories
In my recent post, I outlined 5 conditions that I'd like a decision theory to pass; TDT, UDT and ADT pass them, while CDT and EDT don't. I called decision theories that passed those conditions "advanced decision theories", but that's probably not an optimal name. Can I ask you to brainstorm some other suggestions for me? (I may be writing a follow-up soon.)
As usual, it's best to brainstorm on your own before reading any of the comments. You can write down your ideas, then check if any have already been suggested, then comment with the new ones.
Thanks!
Decision Theories: A Less Wrong Primer

Summary: If you've been wondering why people keep going on about decision theory on Less Wrong, I wrote you this post as an answer. I explain what decision theories are, show how Causal Decision Theory works and where it seems to give the wrong answers, introduce (very briefly) some candidates for a more advanced decision theory, and touch on the (possible) connection between decision theory and ethics.
Baconmas: The holiday for the sciences
Summary: Sir Francis Bacon's birthday (Jan. 22) is a holiday devoted to the sciences, with a side order of bacon. Check out the website and share the Baconmas cheer with everyone!
What is Baconmas?
For the past few years, I've been celebrating the birthday of Sir Francis Bacon (Jan. 22) as a holiday, hosting parties with both science experiments and bacon dishes. It's been excellent enough that I want to share it with everyone else, so I made a website devoted to Baconmas and I'd like you to check it out (and share it if you like it).
It goes without saying that holidays devoted to the sciences can be a force for good as well as a lot of fun (if you haven't, you should see the writeup of the Solstice Celebration for an awesome example). I thought it would be especially powerful to have a holiday that was (1) explicitly about science, (2) fun to celebrate, with a "hook" like bacon, and (3) positive and open to everyone.
The main "tradition" of Baconmas is simply to try something new with each celebration, and to record how it went. Everything else is just a suggestion. I think it's clear that the Zombie Feynman school of science is a powerful and good meme. Science isn't only about the things that are shiny and fun, but it should be shiny and fun whenever possible. So I'm linking a bunch of fun, easy experiments (that have actual content for both novices and the scientifically literate).
How can I help Baconmas grow?
Are you as excited about Baconmas as I am? Great! There are some things you can do to really help!
- Tell all your friends! Share the website, the Twitter feed, and the Facebook event page. If you're part of a relevant forum (anything from a subreddit to the XKCD forums), share it there too!
- Host a Baconmas party, and then send me (happybaconmas at gmail) pictures/video/testimonials that I can add to the site! These will help Baconmas 2013 grow even faster than 2012.
- Create a local Baconmas Meetup group, and send me a link that I can post.
- Find or invent any of the following: experiments, recipes, traditions, carols, guest blog posts; then send them to me for the Baconmas site!
- One thing that would be totally incredible: a funny video biography of Sir Francis Bacon, done with puppets/costume/animation/whatever works for you! I'd love to do this myself, but may not have the time.
- Give me feedback and ideas for the website and everything else!
P.S. Thanks for all the comments on my advice request post! I made a few key additions based on your input.
Advice Request: Baconmas Website
The Gist: I started this blog to get people excited about a science-themed holiday. I want your suggestions before I advertise it to everyone I know!
Two years ago, I came up with the idea of celebrating Sir Francis Bacon's birthday (Jan. 22) as a festive science-themed holiday called Baconmas. (The name has the additional bonus that it's easy to convince people to come to a party if there will be bacon there.) I had a good Baconmas party in 2010 and a better one in 2011, and now I want to let other people in on the fun.
It's currently "in beta"; I wrote a couple of preparatory things, but haven't yet shown it to the vast majority of my friends. I want to maximize the chance that it goes a bit viral when I do, because a science-themed holiday really needs to exist. So I'd like any suggestions you have before I launch it in earnest. So as not to cause anchoring, I'll put down in the first comment the things I already plan to do; if you could make all your original suggestions first, then read those plans and others' comments, then add more suggestions, that should maximize the good ideas. Thanks!
(Oh, and it goes without saying that you should celebrate Baconmas if at all possible. It's been a lot of fun for me.)
[LINK] "Prediction Audits" for Nate Silver, Dave Weigel
Nate Silver (the NYT quantitative political analyst) and Dave Weigel (the Slate columnist) have started a good tradition, listing their worst predictions of 2011. (Silver also listed his best.)
If any other pundits are doing the same, link them here.