Over the years I've tried to collect predictions that were supposedly made on the basis of anthropic reasoning and which turned out to be true.
Personally, I have collected a grand total of one. Maybe it doesn't work, or doesn't count, because I've never been thorough enough in my scholarship to go back to the original sources myself. And determining "what the real provenance of good scientific reasoning actually was" is, in general, notoriously tricky...
Anyway, the one/best anthropic update I'm aware of is Fred Hoyle's prediction that something like the triple-alpha process must be occurring in stars, since otherwise there would not be enough carbon in the universe for us to have formed.
Do you know of any other/better predictions based on anthropics?
Steven Weinberg argued anthropically for a small nonzero cosmological constant, a few years before dark energy became part of standard cosmology.
Nice! Searching... I see that he has an article from 1989 that is a trove of gems. He tosses this one off to get the idea started (which was also new to me):
In one very weak version, the anthropic principle amounts simply to the use of the fact that we are here as one more experimental datum. For instance, recall M. Goldhaber's joke that "we know in our bones" that the lifetime of the proton must be greater than about 10^16 yr, because otherwise we would not survive the ionizing particles produced by proton decay in our own bodies. No one can argue with this version, but it does not help us to explain anything, such as why the proton lives so long. Nor does it give very useful experimental information; certainly experimental physicists (including Goldhaber) have provided us with better limits on the proton life-time.
The presentation has the tone of a survey, and floats more ideas, equations, and empirical results than I can wrap my head around swiftly, but it appears that the core idea is that too fast an expansion might have prevented enough matter from condensing, via gravity, for some of it to become us. He self-cites back to 1987.
Are you keeping a list somewhere we can look? A Roam page, maybe?
For now nothing comes to mind, but I can register a prediction of my own. I've been developing a theory of experience as an emergent property of matter. I'm not sure of it. One of its big tensions is that it finds it strange that we are not whales: whales seem to have more of the physical qualities that make human brains cosmically peculiar than human brains do. Assuming that whales actually exist, the theory would hold that most observer-moments ought to belong to whales. The main alternative is:
Prediction: Whale brains must be missing some amount of nth-degree connectivity in some way.
Right now, the only scraping of anything like confirmation I've stumbled over, as a non-marine-neurologist, is that their cortex is missing layer four.
I don't publish a lot. Also, I've tried to fill out this list with numerous examples, but mostly I find people explaining things via anthropics after those things were basically inferred by other methods, not people predicting things with anthropic arguments from scratch and THEN having those things turn out to be true.
The list of "probably successful predictions" that probably started from an anthropic hunch so far was: just Fred Hoyle (but see a sibling comment... maybe Mitchell Porter's example of Steven Weinberg should count too).
I laughed out loud over the phrasing "Assuming that whales actually exist..."
...and then I wondered if maybe you're talking about the abstract form of whales in general, and postulating that they might exist (or not) in other places. Like, perhaps there are something-like-whales under the ice of Europa?
One fun anthropic-flavored theory (definitely not just a retroactive explanation of an idea already considered true on more prosaic grounds, since we don't know yet whether it is true) is Fergus Simpson's proposal that most aliens will turn out to be physically larger than us, but from smaller planet-like objects.
That's the great thing about Roam pages: you don't have to publish. Draft forever. For instance, here's a draft applying my conception of anthropics to fish. It's not finished, and maybe it never will be, but at least it's recorded and I can show it to people.
x] I actually didn't mean that. The concern is that if this is a simulation, it's unlikely that whales are simulated in much detail, since they don't have much of an effect on the aspects of this era that are most probably interesting to simulators. I really should have mentioned that, because it's one of the branches of the prediction: if we look closely at some whale brains and find that they ought to be huge anthropic-measure attractors, there is a way the underlying theory could still be probable.
I suppose the reason I didn't mention it is that, if it's a simulation thing, we have no way of demonstrating that until it's too late to do anything with that information, and I'm not sure anything good would come of me writing about it any time soon because simulationism is a big pill that most people aren't eager to swallow.
Where are you on the spectrum from "SSA and SIA are equally valid ways of reasoning" to "it's more and more likely that in some sense SIA is just true"? I feel like I've been at the latter position for a few years now.
More SIAish for conventional anthropic problems. Other theories are more applicable in more specific situations, for specific questions, and for issues involving duplicates.
"in the absence of exact duplicates"
Have you come across examples of it getting weird around exact duplicates? I should probably be informed of them. The only example I'm aware of is the one I found: https://www.lesswrong.com/posts/9RdhJKPrYvsttsko9/the-mirror-chamber-a-short-story-exploring-the-anthropic
I am on a quest to show that anthropic probabilities are normal, at least in the absence of exact duplicates.
So consider this simple example: a coin is tossed. The coin is either fair, biased 3/4 towards heads, or biased 3/4 towards tails; the three options are equally likely. After being tossed, the coin is covered, and you eat a cake. Then you uncover the coin and see that it was tails.
You can now update your probabilities on what type of coin it was. It goes to a posterior of 1/6 on the coin being heads-biased, 1/3 on it being fair, and 1/2 on it being tails-biased[1]. Your estimated probability of it being tails on the next toss is (1/6)(1/4)+(1/3)(1/2)+(1/2)(3/4)=7/12.
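For anyone who wants to verify the arithmetic, here is a minimal sketch in Python (the variable names are mine, not part of the setup) that reproduces the posterior and the 7/12 predictive probability:

```python
# Exact Bayesian update for the three-coin setup, using exact fractions.
from fractions import Fraction as F

priors = {"heads-biased": F(1, 3), "fair": F(1, 3), "tails-biased": F(1, 3)}
p_tails = {"heads-biased": F(1, 4), "fair": F(1, 2), "tails-biased": F(3, 4)}

# Posterior after observing one tails: prior times likelihood, renormalized.
unnorm = {c: priors[c] * p_tails[c] for c in priors}
total = sum(unnorm.values())
posterior = {c: unnorm[c] / total for c in unnorm}
for c, p in posterior.items():
    print(c, p)  # heads-biased 1/6, fair 1/3, tails-biased 1/2

# Predictive probability of tails on the next toss.
print(sum(posterior[c] * p_tails[c] for c in posterior))  # 7/12
```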
Now you are told that, had the coin come up heads, there would have been poison in the cake and you would have died before seeing the coin.
This fact makes the problem into an anthropic problem: you would never have been alive to see the coin had it come up heads. But I can't see how that would have changed your probability update. If we got ethics-board approval, we could actually run this experiment. And for the survivors in the tails-worlds, we could toss the coin a second time (without cake or poison), just to see what it came up as. In the long run, we would indeed see tails with roughly 7/12 frequency. So the update was correct, and the poison makes no difference.
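If someone did run it, the bookkeeping might look like this sketch (a Monte Carlo simulation under my own naming; conditioning on survival is just throwing away the heads-worlds):

```python
# Simulate the poisoned-cake experiment: sample a coin, toss it once;
# observers in heads-worlds die and are dropped; survivors toss again.
import random

P_TAILS = {"heads-biased": 0.25, "fair": 0.5, "tails-biased": 0.75}

def toss(coin):
    return "tails" if random.random() < P_TAILS[coin] else "heads"

second_tosses = []
for _ in range(1_000_000):
    coin = random.choice(list(P_TAILS))
    if toss(coin) == "heads":
        continue  # poisoned cake: no surviving observer in this world
    second_tosses.append(toss(coin))

# Frequency of tails on the survivors' second tosses: about 0.583 = 7/12.
print(second_tosses.count("tails") / len(second_tosses))
```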
Again, it seems that, if we ignore identical copies, anthropics is just normal probability theory. Now, if we knew about the poison, then we could deduce that the coin was tails from our survival. But that information gives us exactly the same update as seeing the coin was actually tails. So "I survived the cake" is exactly the same type of information as "the coin was tails".
Incubators
If we had more power in this hypothetical thought experiment, we could flip the coin and create you only if it comes up tails. Then, after getting over your surprise, you could bet on the next flip of the coin, and the odds on that would be the same as in the poisoned-cake case and in the non-anthropic case. Thus the update is the same whether the evidence arrives as seeing tails, as surviving the cake, or as existing at all.
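The same filter, one step earlier: in a sketch like the one above (again my own construction), "I was created" conditions on the first toss exactly the way "I survived the cake" did, so the created observers' posterior odds over the three coins still come out 1:2:3:

```python
# Incubator variant: you exist only in worlds where the toss was tails.
import random
from collections import Counter

P_TAILS = {"heads-biased": 0.25, "fair": 0.5, "tails-biased": 0.75}

def toss(coin):
    return "tails" if random.random() < P_TAILS[coin] else "heads"

coins_where_you_exist = Counter()
for _ in range(600_000):
    coin = random.choice(list(P_TAILS))
    if toss(coin) == "tails":  # only then does the incubator create you
        coins_where_you_exist[coin] += 1

# Counts approach the ratio 1:2:3, i.e. posteriors 1/6, 1/3, 1/2 as before.
print(coins_where_you_exist)
```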
The probability of tails given the heads-biased coin is 1/4; given the fair coin it is 1/2 = 2/4; and given the tails-biased coin it is 3/4. So the odds are 1:2:3; multiplying these by the (equal) prior probabilities doesn't change these odds. To get probabilities, divide the odds by 6, the sum of the odds, and get 1/6, 2/6 = 1/3, and 3/6 = 1/2. ↩︎