It seems to me that viewing a late Great Filter as worse news than an early Great Filter is another instance of the confusion and irrationality of SSA/SIA-style anthropic reasoning and subjective anticipation. If you anticipate anything, believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom.
Let's take this further: is there any reason, besides our obsession with subjective anticipation, to discuss whether a late great filter is 'good' or 'bad' news, over and above policy implications? Why would an idealized agent evaluate the utility of counterfactuals it knows it can't realize?
That is a good question, and one that I should have asked and tried to answer before I wrote this post. Why do we divide possible news into "good" and "bad", and "hope" for good news? Does that serve some useful cognitive function, and if so, how?
Without having good answers to these questions, my claim that a late great filter should not be considered bad news may just reflect confusion about the purpose of calling something "bad news".
Yes, there are different ways to conceive of what counts as good or bad news, and yes, it is good from a God's-eye view if filters are late. But to those of us who already know that we exist now and have passed all previous filters, but don't know how many others out there are at a similar stage, the news that the biggest filters lie ahead of us is surely discouraging, even if useful.
All else being the same we should prefer a late coin toss over an early one, but we should prefer an early coin toss that definitely came up tails over a late coin toss that might come up either way. Learning that the coin toss is still ahead is bad news in the same way as learning that the coin came up tails is good news. Bad news is not isomorphic to making a bad choice. An agent that maximizes good news behaves quite differently from a rational actor.
So, according to your definition of "good news" and "bad news", it might be bad news to find that you've made a good decision, and good news to find that you've made a bad decision? Why would a rational agent want to have such a concept of good and bad news?
If you wake up in the morning to learn that you got drunk last night, played the lottery, and won, then this is good news.
Let us suppose that, when you were drunk, you were computationally limited in a way that made playing the lottery (seem to be) the best decision given your computational limitations. Now, in your sober state, you are more computationally powerful, and you can see that playing the lottery last night was a bad decision (given your current computational power but minus the knowledge that your numbers would win). Nonetheless, learning that you played and won is good news. After all, maybe you can use your winnings to become even more computationally powerful, so that you don't make such bad decisions in the future.
In the real world, we don't get to make any decision. The filter hits us or it doesn't.
If it hits early, then we shouldn't exist (good news: we do!). If it hits late, then WE'RE ALL GOING TO DIE!
In other words, I agree that it's about subjective anticipation, but would point out that the end of the world is "bad news" even if you got to live in the first place. It's just not as bad as never having existed.
Nick is wondering whether we can stop worrying about the filter (if we're already past it). Any evidence we have that complex life develops before the filter would then cause us to believe in the late filter, leaving it still in our future, and thus still something to worry about and strive against. Not as bad as an early filter, but something far more worrisome, since it is still to come.
First of all, a late great filter may be total, while an early great filter (based on the fact of our existence) was not - a reason to prefer the early one.
Secondly, let's look at the problem from our own perspective. If we knew that there was an Omega simulating us, and that our decision would affect when a great filter happens, even in our past, then this argument could work.
But we have no evidence of that! Omega is an addition to the problem that completely changes the situation. If I had the button in front of me that said "early partial great fi...
"The two scenarios involving Omega are only meant to establish that a late great filter should not be considered worse news than an early great filter."
I honestly think this would have been way, way, way clearer if you had dropped the Omega decision theory stuff, and just pointed out that, given great filters of equal probability, choosing an early great filter over a late great filter would entail wiping out the history of humanity in addition to the galactic civilization that we could build, which most of us would definitely see as worse.
This seems centered around a false dichotomy. If you have to choose between an early and a late Great Filter, the latter may well be preferable. But that presupposes it must be one or the other. In reality, there may be no Great Filter, or there may be a great filter of such a nature that it only allows linear expansion, or some other option we simply haven't thought of. Or there may be a really late great filter. Your reasoning presumes an early/late dichotomy that is overly simplistic.
I don't seem to understand the logic here. As I understand the idea of "Late Great Filter is bad news", it is simply about Bayesian updating of probabilities for the hypotheses A = "Humanity will eventually come to Explosion" versus not-A. Say we have original probabilities for this, p = P(A) and q = 1-p. Now suppose we take the Great Filter hypothesis for granted, and we find on Mars remnants of a great civilization, equal to ours or even more advanced. This means that we must update our probabilities of A/not-A so that P(A) decreases.
And I consider this really bad news. Either that, or the Great Filter idea has some huuuuge flaw I overlooked.
So, where am I wrong?
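For concreteness, the update being described can be sketched like this, with made-up likelihoods and survival probabilities (none of these numbers come from the post or the comment):

```python
# Minimal sketch of the Bayesian update described above, with made-up numbers.
# Hypotheses: the Great Filter is "early" (behind us) or "late" (ahead of us).
prior_early, prior_late = 0.5, 0.5

# Likelihood of finding remnants of a Mars civilization at roughly our stage:
# nearly impossible if the filter is early (it would have stopped them long before),
# much more plausible if the filter is late.
lik_early, lik_late = 0.01, 0.5

# Bayes' rule: P(H | E) is proportional to P(E | H) * P(H).
post_early = lik_early * prior_early
post_late = lik_late * prior_late
norm = post_early + post_late
post_early, post_late = post_early / norm, post_late / norm

# A = "humanity eventually passes the filter and expands".
# Assume we pass an early filter for sure (we already exist past it),
# and survive a late filter only with probability 0.1.
p_A_prior = prior_early * 1.0 + prior_late * 0.1
p_A_post = post_early * 1.0 + post_late * 0.1

print(f"P(late filter): {prior_late:.2f} -> {post_late:.2f}")
print(f"P(A):           {p_A_prior:.2f} -> {p_A_post:.2f}")  # P(A) drops, as the comment says
```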
In the scenario where the coin lands tails, nothing interesting happens, and the coin is logically independent of anything else, so let us assume that the coin lands heads. We are assuming (correct me if I'm wrong) that Omega is a perfect predictor. So, in the second scenario, we already know that the person will press the button even before he makes his decision, or else either (a) Omega's prediction is wrong (contradiction) or (b) the Earth was destroyed a million years ago (contradiction). The fact that we currently exist gives u...
I downvoted this because it seems to be missing a very obvious point - that the reason why an early filter would be good is because we've already passed it. If we hadn't passed it, then of course we want the filter as late as possible.
On the other hand, I notice that this post has 15 upvotes. So I am wondering whether I have missed anything - generally posts that are this flawed do not get upvoted this much. I read through the comments and thought about this post a bit more, but I still came to the conclusion that this post is incredibly flawed.
"But since copies of us occur more frequently in universes with late filters than in universes with early filters, such a decision (which Robin arrives at via SIA) can be justified on utilitarian grounds under UDT."
But it seems like our copies in early-filter universes can eventually affect a proportionally greater share of the universe's resources.
Also, Robin's example of shelters seems mistaken: if shelters worked, some civilizations would already have tried them and colonized the universe. Whatever we try has to stand a chance of working against some u...
"according to this line of thought, we're acting in both kinds of universes: those with early filters, and those with late filters."
Suppose that the reason for a late filter is a (complex) logical fact about the nature of technological progress; that there is some technological accident that it is almost impossible for an intelligent species to avoid, and that the lack of an early filter is a logical fact about the nature of life-formation and evolution. For the purposes of clarity, we might even think of an early filter as impossible and a late filter as certain.
Then, in what sense do we "exist in both kinds of universe"?
I agree. Note that the very concept of "bad news" doesn't make sense apart from a hypothetical where you get to choose to do something about determining the news one way or another. Thus CronoDAS's comment actually exemplifies another reason for the error: if the hypothetical decision is only able to vary the extent of a late great filter, as opposed to shifting the timing of a filter, it's clear that discovering a powerful great filter is "bad news" according to such a metric (because it's powerful, as opposed to because it's late).
I think this is a misleading problem, like Omega's subcontracting problem. Our actions now do not affect early filters, even acausally, so we cannot force the filter to be late by being suicidal and creating a new filter now.
Let us construct an analogous situation: let D be a disease that people contract with 99% probability, and die of by age n at the latest (let us say n=20).
Assume that you are 25 years old and there exists no diagnostic test for the disease, but some scientist discovers that people can die of the disease as late as age 30. I don't know you, but in your place I'd call it bad news for you personally, since you will have to live in fear for 5 additional years.
On the other hand, it is good news in an abstract sense, since it means 99% of ...
A tangent: if we found extinct life on Mars, it would provide precious extra motivation to go there, which is a good thing.
Scenario 1 and Scenario 2 are not isomorphic: the former is Newcomb-like and the latter is Solomon-like (see Eliezer's paper on TDT for the difference), i.e. in the former you can pre-commit to choose the late filter four years from now if you survive, whereas in the latter there's no such possibility. I'm still trying to work out what the implications of this are, though...
Re: "believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom."
Maybe - if you also believe the great filter is likely to result in THE END OF THE WORLD.
If it is merely a roadblock - similar to the many roadblocks we have seen so far - DOOM doesn't necessarily follow - at least not for a loooong time.
— Nick Bostrom, in Where Are They? Why I hope that the search for extraterrestrial life finds nothing
This post is a reply to Robin Hanson's recent OB post Very Bad News, as well as Nick Bostrom's 2008 paper quoted above, and assumes familiarity with Robin's Great Filter idea. (Robin's server for the Great Filter paper seems to be experiencing some kind of error. See here for a mirror.)
Suppose Omega appears and says to you:
(Scenario 1) I'm going to apply a great filter to humanity. You get to choose whether the filter is applied one minute from now, or in five years. When the designated time arrives, I'll throw a fair coin, and wipe out humanity if it lands heads. And oh, it's not the current you that gets to decide, but the version of you 4 years and 364 days from now. I'll predict his or her decision and act accordingly.
I hope it's not controversial that the current you should prefer a late filter, since (with probability .5) that gives you and everyone else five more years of life. What about the future version of you? Well, if he or she decides on the early filter, that would constitute a time inconsistency. And for those who believe in multiverse/many-worlds theories, choosing the early filter shortens the lives of everyone in half of all universes/branches where a copy of you is making this decision, which doesn't seem like a good thing. It seems clear that, ignoring human deviations from ideal rationality, the right decision of the future you is to choose the late filter.
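To make the comparison concrete, here is a minimal sketch of the expected-value calculation, with an arbitrary placeholder standing in for the value of humanity surviving the coin toss (the numbers are illustrative assumptions, not part of the scenario):

```python
# Minimal sketch of the Scenario 1 comparison; V is an arbitrary placeholder for
# the (large) value of humanity surviving the coin toss entirely.
V = 1000

def expected_value(filter_delay_years, p_heads=0.5):
    # Heads: humanity is wiped out when the filter is applied, after `filter_delay_years`.
    # Tails: humanity survives the filter entirely.
    return p_heads * filter_delay_years + (1 - p_heads) * V

print(expected_value(0))   # early filter (one minute from now, ~0 years)
print(expected_value(5))   # late filter (five years from now)
# The late filter is strictly better: the survival probability is the same either way,
# plus humanity gets five extra years in the branches where the coin lands heads.
```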
Now let's change this thought experiment a little. Omega appears and instead says:
(Scenario 2) Here's a button. A million years ago I hid a doomsday device in the solar system and predicted whether you would press this button or not. Then I flipped a coin. If the coin came out tails, I did nothing. Otherwise, if I predicted that you would press the button, then I programmed the device to destroy Earth right after you press the button, but if I predicted that you would not press the button, then I programmed the device to destroy the Earth immediately (i.e., a million years ago).
It seems to me that this decision problem is structurally no different from the one faced by the future you in the previous thought experiment, and the correct decision is still to choose the late filter (i.e., press the button). (I'm assuming that you don't consider the entire history of humanity up to this point to be of negative value, which seems a safe assumption, at least if the "you" here is Robin Hanson.)
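One rough way to formalize that structural claim, assuming Omega's prediction matches your actual choice (and with placeholder utilities of my own, not anything specified in the scenario):

```python
# Rough sketch of the Scenario 2 payoffs, assuming Omega's prediction matches
# your actual choice. H and F are placeholder utilities introduced here:
# H = value of humanity's history up to now, F = value of the open future.
H = 100
F = 1000

def expected_utility(press, p_heads=0.5):
    if press:
        # Heads: the device fires right after you press -> history happened, no further future.
        # Tails: Omega did nothing -> history plus future.
        return p_heads * H + (1 - p_heads) * (H + F)
    else:
        # Heads: the device fired a million years ago -> no human history at all.
        # Tails: Omega did nothing -> history plus future.
        return p_heads * 0 + (1 - p_heads) * (H + F)

print(expected_utility(press=True))   # analogous to choosing the late filter
print(expected_utility(press=False))  # analogous to choosing the early filter
# Pressing wins by p_heads * H: exactly the value of the history that an
# "early filter" would have erased.
```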
So, if given a choice between an early filter and a late filter, we should choose a late filter. But then why do Robin and Nick (and probably most others who have thought about it) consider news that implies a greater likelihood of the Great Filter being late to be bad news? It seems to me that viewing a late Great Filter as worse news than an early Great Filter is another instance of the confusion and irrationality of SSA/SIA-style anthropic reasoning and subjective anticipation. If you anticipate anything, believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom.
(This paragraph was inserted to clarify in response to a couple of comments. These two scenarios involving Omega are not meant to correspond to any actual decisions we have to make, but just to establish that A) if we had a choice, it would be rational to choose a late filter instead of an early filter, therefore it makes no sense to consider the Great Filter being late to be bad news (compared to it being early), and B) human beings, working off subjective anticipation, would tend to incorrectly choose the early filter in these scenarios, especially scenario 2, which explains why we also tend to consider the Great Filter being late to be bad news. The decision mentioned below, in the last paragraph, is not directly related to these Omega scenarios.)
From an objective perspective, a universe with a late great filter simply has a somewhat greater density of life than a universe with an early great filter. UDT says, let's forget about SSA/SIA-style anthropic reasoning and subjective anticipation, and instead consider yourself to be acting in all of the universes that contain a copy of you (with the same preferences, memories, and sensory inputs), making the decision for all of them, and decide based on how you want the multiverse as a whole to turn out.
So, according to this line of thought, we're acting in both kinds of universes: those with early filters, and those with late filters. If, as Robin Hanson suggests, we were to devote a lot of resources to projects aimed at preventing possible late filters, then we would end up improving the universes with late filters, but hurting the universes with only early filters (because the resources would otherwise have been used for something else). But since copies of us occur more frequently in universes with late filters than in universes with early filters, such a decision (which Robin arrives at via SIA) can be justified on utilitarian grounds under UDT.
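As a toy illustration of this weighting (all the numbers below are invented for the sketch, not taken from Robin's or anyone else's estimates):

```python
# Toy sketch of the UDT-style calculation: weight each universe type by how many
# copies of us it contains, and pick the policy that maximizes the total.
# All numbers are invented for illustration.
universes = {
    "late filter":  dict(copies=10, invest=8, spend_elsewhere=3),
    "early filter": dict(copies=1,  invest=4, spend_elsewhere=6),
}

def total_utility(policy):
    # UDT: one decision, made by every copy of us, evaluated across every universe
    # that contains a copy.
    return sum(u["copies"] * u[policy] for u in universes.values())

print(total_utility("invest"))           # 10*8 + 1*4 = 84
print(total_utility("spend_elsewhere"))  # 10*3 + 1*6 = 36
# Because late-filter universes contain more copies of us, investing in late-filter
# prevention wins under this weighting, mirroring the conclusion Robin reaches via SIA.
```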