This is the fifth bimonthly What Are You Working On? thread. Previous threads are here. So here's the question:
What are you working on?
Here are some guidelines:
- Focus on projects that you have recently made progress on, not projects that you're thinking about doing but haven't started; those are for a different thread.
- Why this project and not others? Mention reasons why you're doing the project and/or why others should contribute to your project (if applicable).
- Talk about your goals for the project.
- Any kind of project is fair game: personal improvement, research project, art project, whatever.
- Link to your work if it's linkable.
I am working on finishing up a philosophy paper about whether "fine-tuning" (the claim that physical constants and initial conditions permitting the evolution of life and conscious observers are rare in the space of physically possible parameters) supports "multiverse" hypotheses, according to which the cosmos is huge and heterogeneous in its local conditions. One major argument for the view that fine-tuning does not support multiverse hypotheses is due to Ian Hacking, who claimed that the inference is analogous to an "inverse gambler's fallacy": a gambler enters a casino, witnesses a roll of dice resulting in double-sixes, and concludes that the gamblers must have been rolling dice for a while.
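To see why Hacking calls this a fallacy, here is a minimal Bayesian sketch (in Python, with illustrative priors and session lengths I've chosen for the example, not anything from the paper). The key contrast is between the evidence "this particular witnessed roll is double-six", whose likelihood is 1/36 no matter how long the session has run, and the evidence "some roll in the session was double-six", which genuinely favors a long session:

```python
from fractions import Fraction

P_66 = Fraction(1, 36)  # chance that one fair two-dice roll is double-six

def p_some_66(n_rolls):
    # probability that at least one of n rolls is double-six
    return 1 - (1 - P_66) ** n_rolls

# Two hypotheses about session length, with illustrative 50/50 priors
prior = {1: Fraction(1, 2), 100: Fraction(1, 2)}

# Evidence A: "THIS roll (the one witnessed) is double-six".
# Its likelihood is 1/36 regardless of how many earlier rolls there were.
like_this = {n: P_66 for n in prior}

# Evidence B: "SOME roll in the session was double-six".
like_some = {n: p_some_66(n) for n in prior}

def posterior(prior, like):
    z = sum(prior[n] * like[n] for n in prior)
    return {n: prior[n] * like[n] / z for n in prior}

post_this = posterior(prior, like_this)  # identical to the prior
post_some = posterior(prior, like_some)  # shifted toward the long session
```

The gambler's mistake, on Hacking's diagnosis, is treating evidence of type A as if it were evidence of type B; the disputed question is whether the fine-tuning of *our* universe is relevantly like A or like B.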
While going through Nick Bostrom's book Anthropic Bias, I found his discussion of Hacking's argument (and of a significantly improved recent version by Roger White, available here) somewhat unilluminating, although I thought there must be something wrong with the argument. Going through the existing replies in the literature, I've found counterarguments that either fail straightforwardly or (more commonly) render fine-tuning irrelevant to whether multiverse hypotheses are confirmed, degenerating into an almost a priori argument that I find very implausible. I've found a fairly simple way of seeing exactly how the Hacking/White argument goes wrong, by combining Bostrom's self-sampling assumption with a technical fix independently arrived at by a few other philosophers. This solution does not generate the implausible a priori argument for the multiverse that previous approaches in the literature do, as long as the reference class (for applying the self-sampling assumption) satisfies some weak requirements.
The result is a critical review paper that works through the literature while building up the concepts needed to understand the proposed solution. All the content is in place; I'm now mostly working on finishing a draft, unifying notation across sections, making the paper readable to philosophers with at least rudimentary knowledge of Bayesianism, and in general polishing it to meet top-tier journal standards.
Have you read Darren Bradley's Multiple Universes and Observation Selection Effects? If so, do you group it into the category of unacceptably a priori arguments? Because it sounds somewhat similar, and I remember finding it convincing at the time I read it.