I'd like Decision Theory and Rational Living; your qualia and meta-ethics sound like a waste of time, and I suspect your mutual fund theory is just wrong.
An Explanation of the Born Probabilities in MWI: This topic might be even better suited to an actual physicist than to a know-it-all mathematician
It's fine. Most of it's measure theory anyhow.
Two points are a bit tricky, though. One is that the probabilities should have the same symmetry as the Schroedinger equation. Or in other words, you have to make use of the fact that "change of basis" is physical nonsense.
The other is caution :P There's still gravity to sort out, so the Born probabilities are only known to be derivable from MW this way to the extent that the Schroedinger equation is a good approximation.
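A quick numerical illustration of the basis-independence point (a minimal numpy sketch of my own; the random unitary stands in for any change of basis, or equally for a step of Schroedinger evolution):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy wavefunction over 4 basis states, normalized in L^2.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# A random unitary (Q factor of a random complex matrix), standing in
# for a change of basis or a step of Schroedinger evolution.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
phi = U @ psi

# The L^2 norm is preserved by any unitary...
print(np.linalg.norm(psi), np.linalg.norm(phi))  # both 1.0

# ...but the L^1 norm, say, is not, so probabilities built on it would
# depend on a physically meaningless choice of basis.
print(np.abs(psi).sum(), np.abs(phi).sum())  # generally differ
```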
If you're going to ask "how big" a chunk of the wavefunction is (which is the right way to compute the relative probabilities of being an observer that sees such-and-such), the only sane answer is going to be the L^2 norm (i.e. the Born probabilities).
Why is that the right way to compute probability? By number, most worlds don't have anything like a Born rule. By L^2, most worlds have the Born rule. Why is it the L^2 that matters? Why do we seem more likely to find ourselves in the Born worlds? I am utterly confused about anthropics.
I'd like to see that article, but if I understand your proposed solution, I wouldn't stop being confused.
Hanson's semi-solution (mangled worlds) at least makes the confusion go away (most surviving worlds have the Born rule).
There is a general pattern of discrete things (balls, springs, discrete "worlds") being less confusing components to build a model out of. So it makes sense that "mangled worlds" isn't confusing, despite being totally wrong :P
If we don't hack up the universe into discrete bits, then instead of talking about the number of worlds we have to talk about "amount of world." And that's confusing already :D I'm certainly still confused by it, in the sense of not being able to easily picture how we get high-level observations from the low-level model.
Would it help if for every case where counting discrete worlds gave you the right answer, measuring "amount of world" also has to work?
we have to talk about "amount of world."
But why do our experiences seem selected from an "amount of world" distribution? What the hell is going on here?
If I understand correctly that you mean a case where counting and measure give the same (Born rule) answer, then yes, that would make me less confused.
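For concreteness, here's the kind of toy case I have in mind (hypothetical numbers, a minimal sketch): when the branches have equal amplitude, naive world-counting and the L^2 measure agree; when they don't, only the L^2 measure reproduces the Born rule.

```python
import numpy as np

def born_weights(amplitudes):
    """The L^2 measure: each branch's 'amount of world'."""
    a = np.asarray(amplitudes, dtype=complex)
    return np.abs(a) ** 2 / np.sum(np.abs(a) ** 2)

def counting_weights(amplitudes):
    """Naive world-counting: every branch counts once."""
    return np.full(len(amplitudes), 1.0 / len(amplitudes))

# Equal-amplitude split: counting and measure agree.
equal = [1 / np.sqrt(2), 1 / np.sqrt(2)]
print(counting_weights(equal), born_weights(equal))  # both [0.5 0.5]

# Unequal split: counting still says 50/50; the measure gives Born's 1/3 vs 2/3.
unequal = [np.sqrt(1 / 3), np.sqrt(2 / 3)]
print(counting_weights(unequal))  # [0.5 0.5]
print(born_weights(unequal))     # [0.333... 0.666...]
```

(And an unequal branch can be carved into equal-amplitude sub-branches, at least when the weights are in rational ratios, which is exactly the regime where counting and measure coincide.)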
Poll Results (now that it's been a few days since the last entry):
I received 56 votes. (Many thanks!) There was a general wave of support for the "basic" posts; when it comes to the more esoteric stuff, people had strong and divergent opinions that mostly balanced out.
In order of total votes (approval-voting style), with people's first preferences noted:
Given the votes, the next thing I do will probably be either Decision Theories Part IV or Adding Up To Normality (which is a few votes back of Living With Rationality, but which I'm more enthused to write). If anyone else feels like writing on the ideas of Living With Rationality or Meta-Uncertainty, looks like there's karma for the taking.
I see some worries about doing too much research in decision theory, because quick advances could end in bad consequences. Rational Living is broader, and holds more promise in the near term: posts about happiness and sunk-cost analysis, for example.
The universe is naturally defined as a Hilbert space, and the evolution of the wavefunction has a basic L^2 conservation law. If you're going to ask "how big" a chunk of the wavefunction is (which is the right way to compute the relative probabilities of being an observer that sees such-and-such)
I believe the question is not "What are the sane probabilistic interpretations of the wave function?" but "Why are we even asking the question 'What are the sane probabilistic interpretations of the wave function?'" Are there no other questions we might be asking whose answer might relate to the experiences of the inhabitants of a quantum universe?
If you have incomplete information, probability is how you quantify it. There are probably other questions, yes. But that's not very relevant.
An interesting question is "what is it that has incomplete information?" Because then you get to think about computers and hard drives.
I failed to properly explain myself.
1) Look, we have this fancy differential equation describing the time-evolution of a function called the wave function. We believe this equation describes, without additional baggage, the universe.
2) This wave function has a unique sane probabilistic interpretation.
3) Therefore, the inhabitants of the universe described by this equation experience a seemingly non-deterministic reality with probabilities given by said interpretation.
It is not at all clear to me how (or even if) (3) follows from (1) and (2).
That's why hard drives are neat :)
One of the whole points of MW is that the universe isn't inherently probabilistic. But there's apparent probability (which also shows up in everyday things like coin flips, which we can assign a probability to even while they're in the air and their course is determined). This is odd because probability is born of incomplete information (even if it's determined, we still don't know how the coin will land, so we use probability), and it seems like there is none here. If the hard drive is about to record 100 quantum coin flips, the computer can print out exactly what state the hard drive will be in afterwards.
But it's tricky - because once it's flipped the coins, when the computer does the "read the hard drive and print out the results" operation, the computer doesn't get to look at the exact quantum state and print that out. Instead, each "1s and 0s" state gets entangled with the computer, and so the computer prints out a superposition of different messages. An external observer with a good interferometer could figure out what was going on, but no classical operation of the computer itself will figure it out or rely on it. According to the computer, it's getting random results - in fact, the computer can run a program called "detect randomness," which will likewise access the 1s and 0s and print out "yep, this looks random" in almost all cases. The randomness-detector can only access sequences of 1s and 0s, not quantum states.
From here, the interesting question is "how is 'apparent probability' different from the probability we assign to coin flips?" If the computer were ignorant about the future like it is about a coin flip, they would be exactly the same and all confusion would go away. But it's not - it can know exactly what state it's in; it's merely that any operations it takes can only depend on sequences of 1s and 0s, which screen off any computational access to the actual quantum state. A very odd sort of "ignorance."
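Here's a toy sketch of that setup (my own illustration, not a real quantum simulation): the global state after n flips is a flat superposition over all 2^n bitstrings, the computer can print that whole object in advance, and yet the record inside any one branch passes every classical randomness test. The rng.choice line is just us picking a branch to inspect by its L^2 weight; nothing in the physics performs that sampling.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10  # quantum coin flips recorded to the "hard drive"

# Global state: an equal superposition over all 2^n bitstrings.
# The computer can print this entire object beforehand -- no ignorance here.
amplitudes = np.full(2 ** n, (1 / np.sqrt(2)) ** n)

# Branch weights are the L^2 (Born) measure, and they sum to 1.
weights = np.abs(amplitudes) ** 2
assert np.isclose(weights.sum(), 1.0)

# "From the inside": pick a branch by its weight and read off the record.
branch = rng.choice(2 ** n, p=weights)
print(format(branch, f"0{n}b"))  # e.g. '0110100011'

# A "detect randomness" program only ever sees a bitstring like this, never
# the amplitudes, so it reports "looks random" in almost all branches.
```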
Perhaps you should make a bunch of posts, one for each topic, and have people vote via karma for each one.
edit: wait, no, I'm stupid; I didn't see the pre-existing poll.
Summary: There are a bunch of posts I want to write; I'd like your help prioritizing them, and if you feel like writing one of them, that would be awesome too!
I haven't been writing up as many of my ideas for Less Wrong as I'd like; I have excuses, but so does everyone. So I'm listing out my backlog, both for my own motivation and for feedback/help. At the end, there's a link to a poll on which ones you'd like to see. Comments would also be helpful, and if you're interested in writing up one of the ideas from the third section yourself, say so!
(The idea was inspired by lukeprog's request for post-writing help, and I think someone else did this a while ago as well.)
Posts I'm Going To Write (Barring Disaster)
These are posts that I currently have unfinished drafts of.
Decision Theories: A Semi-Formal Analysis, Part IV and Part V: Part IV concerns bargaining problems and introduces the tactic of playing chicken with the inference process; Part V discusses the benefits of UDT and perhaps wraps up the sequence. Part IV has been delayed by more than a month, partly by real life, and partly because bargaining problems are really difficult and the approach I was trying turned out not to work. I believe I have a fix now, but that's no guarantee; if it turns out to be flawed, then Part IV will mainly consist of "bargaining problems are hard, you guys".
Posts I Really Want To Write
These are posts that I feel I've already put substantial original work into, but I haven't written a draft. If anyone else wants to write on the topic, I'd welcome that, but I'd probably still write up my views on it later (unless the other post covers all the bases that I'd wanted to discuss, most of which aren't obvious from the capsule descriptions below).
An Error Theory of Qualia: My sequence last summer didn't turn out as well as I'd hoped, but I still think it's the right approach to a physically reductionist account of qualia (and that mere bullet-biting isn't going to suffice), so I'd like to try again and see if I can find ways to simplify and test my theory. (In essence, I'm proposing that what we experience as qualia are something akin to error messages, caused when we try and consciously introspect on something that introspection can't usefully break down. It's rather like the modern understanding of déjà vu.)
Weak Solutions in Metaethics: I've been mulling over a certain approach to metaethics, which differs from Eliezer's sequence and lukeprog's sequence (although the conclusions may turn out to be close). In mathematics, there's a concept of a weak solution to a differential equation: a function that has the most important properties but isn't actually differentiable enough times to "count" in the original formulation. Sometimes these weak solutions can lead to "genuine" solutions, and other times it turns out that the weak solution is all you really need. The analogy is that there are a bunch of conditions humans want our ethical theories to satisfy (things like consistency, comprehensiveness, universality, objectivity, and practical approximability), and that something which demonstrably had all these properties would be a "strong" solution. But the failure of moral philosophers to find a strong solution doesn't have to spell doom for metaethics; we can focus instead on the question of what sorts of weak solutions we can establish.
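(For readers who haven't met the term, a minimal sketch of the standard definition in the simplest one-dimensional case; the metaethics analogy doesn't depend on the details. A function \(u\) is a weak solution of \(u'' = f\) if
\[
\int u \, \varphi'' \, dx = \int f \, \varphi \, dx
\]
for every smooth, compactly supported test function \(\varphi\): integration by parts moves the derivatives onto \(\varphi\), so \(u\) itself never has to be differentiated; it merely has to behave, under every test, the way a genuine solution would.)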
Posts I'd Really Love To See
And then we get to ideas that I'd like to write Less Wrong posts on, but that I haven't really developed beyond the kernels below. If any of these strike your fancy, you have my atheist's blessing to flesh them out. (Let me know in the comments if you want to publicly commit to doing so.)
Living with Rationality: Several people in real life criticize Less Wrong-style rationality on the grounds that "you couldn't really benefit by living your life by Bayesian utility maximization, you have to go with intuition instead". I think that's a strawman attack, but none of the defenses on Less Wrong seem to answer this directly. What I'd like to see described is how it works to actually improve one's life via rationality (which I've seen in my own life), and how it differs from the Straw Vulcan stereotype of decisionmaking. (That is, I usually apply conscious deliberation on the level of choosing habits rather than individual acts; I don't take out a calculator when deciding who to sit next to on a bus; I leave room for the kind of uncertainty described as "my conscious model of the situation is vastly incomplete", etc.)
An Explanation of the Born Probabilities in MWI: This topic might be even better suited to an actual physicist than to a know-it-all mathematician, but I don't see why the Born probabilities should be regarded as mysterious at all within the Many-Worlds interpretation. The universe is naturally defined as a Hilbert space, and the evolution of the wavefunction has a basic L^2 conservation law. If you're going to ask "how big" a chunk of the wavefunction is (which is the right way to compute the relative probabilities of being an observer that sees such-and-such), the only sane answer is going to be the L^2 norm (i.e. the Born probabilities).
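A tiny numerical illustration of that conservation law (a sketch under my own toy assumptions: a random Hermitian matrix stands in for the Hamiltonian):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random Hermitian "Hamiltonian" H and a normalized initial state.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# exp(-iHt), built from the eigendecomposition of H, is unitary for real t.
evals, evecs = np.linalg.eigh(H)
def evolve(state, t):
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ state))

# The L^2 norm -- the total Born probability -- is conserved at every time.
for t in (0.0, 0.5, 1.0, 2.0):
    print(t, np.linalg.norm(evolve(psi, t)))  # always 1.0
```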
Are Mutual Funds To Blame For Stock Bubbles? My opinion about the incentives behind the financial crisis, in a nutshell: Financial institutions caused the latest crash by speculating in ways that were good for their quarterly returns but exposed them to way too much risk. The executives were incentivized to act in that short-sighted way because the investors wanted short-term returns and were willing to turn a blind eye to that kind of risk. But that's a crazy preference for most investors (I expect it had seriously negative value), so why weren't investors smarter (i.e. why didn't they flee from any company that wasn't clearly prioritizing longer-term expected value)? Well, there's one large chunk of investors with precisely those incentives: the 20% of the stock market that's composed of mutual funds. I'd like to test this theory and think about realistic ways to apply it to public policy. (It goes without saying that I think Less Wrong readers should, at minimum, invest in index funds rather than actively managed mutual funds.)
Strategies for Trustworthiness with the Singularity: I want to develop this comment into an article. Generally speaking, the usual methods of making the principal-agent problem work out aren't available; the possible payoffs are too enormous when we're discussing rapidly accelerating technological progress. I'm wondering if there's any way of setting up a Singularity-affecting organization so that it will be transparent to the organization's backers that the organization is doing precisely what it claims. I'd like to know in general, but there's also an obvious application; I think highly of the idealism of SIAI's people, but trusting people on their signaled idealism in the face of large incentives turns out to backfire in politics pretty regularly, so I'd like a better structure than that if possible.
On Adding Up To Normality: People have a strange block about certain concepts, like the existence of a deity or of contracausal free will, where it seems to them that the instant they stopped believing in it, everything else in their life would fall apart or be robbed of meaning, or they'd suddenly incur an obligation that horrifies them (like raw hedonism or total fatalism). That instinct is like being on an airplane, having someone explain to you that your current understanding of aerodynamic lift is wrong, and then suddenly becoming terrified that the plane will plummet out of the sky now that there's no longer the kind of lift you expected. (That is, it's a fascinating example of the Mind Projection Fallacy.) So I want a general elucidation of Egan's Law to point people to.
The Subtle Difference Between Meta-Uncertainty and Uncertainty: If you're discussing a single toss of a coin, then you should treat it the same (for decision purposes) whether you know that it's a coin designed to land heads 3/4 of the time, or whether you know there's a 50% chance it's a fair coin and a 50% chance it's a two-headed coin. Meta-uncertainty and uncertainty are indistinguishable in that sense. Where they differ is in how you update on new evidence, or how you'd make bets about three upcoming flips taken together, etc. This is a worthwhile topic that seems to confuse the hell out of newcomers to Bayesianism.
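The coin example works out like this (just checking the numbers in the paragraph above, in plain Python):

```python
# Case A: a coin known to land heads 3/4 of the time.
# Case B: 50% chance it's fair, 50% chance it's two-headed.

p_heads_A = 0.75
p_heads_B = 0.5 * 0.5 + 0.5 * 1.0          # also 0.75: identical for one flip

# Three heads in a row: the two cases come apart.
p_HHH_A = 0.75 ** 3                          # 0.421875
p_HHH_B = 0.5 * 0.5 ** 3 + 0.5 * 1.0 ** 3    # 0.5625

# Updating on one observed head: case A never changes, while case B
# shifts toward the two-headed hypothesis (Bayes' rule).
post_two_headed = (0.5 * 1.0) / p_heads_B                        # 2/3
p_next_B = post_two_headed * 1.0 + (1 - post_two_headed) * 0.5   # 5/6
print(p_heads_A, p_heads_B)  # 0.75 0.75
print(p_HHH_A, p_HHH_B)      # 0.421875 0.5625
print(p_next_B)              # 0.8333...
```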
(Originally, this was a link to a poll on these post ideas)
Thanks for your feedback!
UPDATE:
Thanks to everyone who gave me feedback; results are in this comment!