But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.

Conversely, if we discovered traces of some simple extinct life form—some bacteria, some algae—it would be bad news. If we found fossils of something more advanced, perhaps something looking like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life we found, the more depressing the news of its existence would be. Scientifically interesting, certainly, but a bad omen for the future of the human race.

— Nick Bostrom, in Where Are They? Why I hope that the search for extraterrestrial life finds nothing

This post is a reply to Robin Hanson's recent OB post Very Bad News, as well as Nick Bostrom's 2008 paper quoted above, and assumes familiarity with Robin's Great Filter idea. (Robin's server for the Great Filter paper seems to be experiencing some kind of error. See here for a mirror.)

Suppose Omega appears and says to you:

(Scenario 1) I'm going to apply a great filter to humanity. You get to choose whether the filter is applied one minute from now, or in five years. When the designated time arrives, I'll throw a fair coin, and wipe out humanity if it lands heads. And oh, it's not the current you that gets to decide, but the version of you 4 years and 364 days from now. I'll predict his or her decision and act accordingly.

I hope it's not controversial that the current you should prefer a late filter, since (with probability .5) that gives you and everyone else five more years of life. What about the future version of you? Well, if he or she decides on the early filter, that would constitute a time inconsistency. And for those who believe in multiverse/many-worlds theories, choosing the early filter shortens the lives of everyone in half of all universes/branches where a copy of you is making this decision, which doesn't seem like a good thing. It seems clear that, ignoring human deviations from ideal rationality, the right decision for the future you is to choose the late filter.
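(To make the comparison concrete, here is a minimal sketch of the expected-value arithmetic for Scenario 1. The population figure and the "one utility unit per person-year of survival" scale are illustrative assumptions, not anything specified in the scenario.)

```python
# Expected-value sketch for Scenario 1. Assumed: 7 billion people affected,
# one utility unit per person-year of survival. The 0.5 coin probability and
# the 0 vs. 5 year timings come from the scenario itself.

P_HEADS = 0.5        # Omega's fair coin: heads means humanity is wiped out
POPULATION = 7e9     # assumed number of people affected
YEARS_EARLY = 0      # filter applied one minute from now
YEARS_LATE = 5       # filter applied in five years

def expected_gain_from_late_filter():
    """Expected extra person-years from choosing the late filter.

    If the coin lands tails, humanity survives either way and the timing is
    irrelevant; if it lands heads, the late filter buys everyone five more
    years of life. So the gain is P(heads) * population * 5 years.
    """
    return P_HEADS * POPULATION * (YEARS_LATE - YEARS_EARLY)

print(expected_gain_from_late_filter())  # 1.75e10 extra person-years
```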

Now let's change this thought experiment a little. Omega appears and instead says:

(Scenario 2) Here's a button. A million years ago I hid a doomsday device in the solar system and predicted whether you would press this button or not. Then I flipped a coin. If the coin came out tails, I did nothing. Otherwise, if I predicted that you would press the button, then I programmed the device to destroy Earth right after you press the button, but if I predicted that you would not press the button, then I programmed the device to destroy the Earth immediately (i.e., a million years ago).

It seems to me that this decision problem is structurally no different from the one faced by the future you in the previous thought experiment, and the correct decision is still to choose the late filter (i.e., press the button). (I'm assuming that you don't consider the entire history of humanity up to this point to be of negative value, which seems a safe assumption, at least if the "you" here is Robin Hanson.)

So, if given a choice between an early filter and a late filter, we should choose a late filter. But then why do Robin and Nick (and probably most others who have thought about it) consider news that implies a greater likelihood of the Great Filter being late to be bad news? It seems to me that viewing a late Great Filter to be worse news than an early Great Filter is another instance of the confusion and irrationality of SSA/SIA-style anthropic reasoning and subjective anticipation. If you anticipate anything, believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom.

(This paragraph was inserted in response to a couple of comments, to clarify that these two scenarios involving Omega are not meant to correspond to any actual decisions we have to make, but only to establish that A) if we had a choice, it would be rational to choose a late filter instead of an early filter, and therefore it makes no sense to consider the Great Filter being late to be bad news (compared to it being early), and B) human beings, working off subjective anticipation, would tend to incorrectly choose the early filter in these scenarios, especially scenario 2, which explains why we also tend to consider the Great Filter being late to be bad news. The decision mentioned below, in the last paragraph, is not directly related to these Omega scenarios.)

From an objective perspective, a universe with a late great filter simply has a somewhat greater density of life than a universe with an early great filter. UDT says, let's forget about SSA/SIA-style anthropic reasoning and subjective anticipation, and instead consider yourself to be acting in all of the universes that contain a copy of you (with the same preferences, memories, and sensory inputs), making the decision for all of them, and decide based on how you want the multiverse as a whole to turn out.

So, according to this line of thought, we're acting in both kinds of universes: those with early filters, and those with late filters. If, as Robin Hanson suggests, we were to devote a lot of resources to projects aimed at preventing possible late filters, then we would end up improving the universes with late filters, but hurting the universes with only early filters (because the resources would otherwise have been used for something else). But since copies of us occur more frequently in universes with late filters than in universes with early filters, such a decision (which Robin arrives at via SIA) can be justified on utilitarian grounds under UDT.
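(A minimal sketch of this cross-universe bookkeeping, with invented numbers: the relative frequencies of copies of us and the per-universe utilities below are placeholders chosen only to illustrate how UDT's weighting can reproduce Robin's SIA-based conclusion, not estimates from either post.)

```python
# Toy UDT-style evaluation: one policy chosen by all copies of "us",
# scored by summing utility over both kinds of universes. All numbers
# are assumed for illustration.

FREQ_LATE = 3.0    # relative frequency of copies of us in late-filter universes
FREQ_EARLY = 1.0   # relative frequency in universes with only early filters

# Assumed utility per universe type under each policy.
UTILITY = {
    "fund_late_filter_defense": {"late": 60.0, "early": 90.0},
    "spend_elsewhere":          {"late": 40.0, "early": 100.0},
}

def udt_score(policy):
    """Weight each universe type by how often copies of us occur in it."""
    return (FREQ_LATE * UTILITY[policy]["late"]
            + FREQ_EARLY * UTILITY[policy]["early"])

for policy in UTILITY:
    print(policy, udt_score(policy))
# fund_late_filter_defense: 3*60 + 1*90  = 270
# spend_elsewhere:          3*40 + 1*100 = 220
# Defense wins here only because copies of us are weighted more heavily
# in late-filter universes, even though it hurts the early-filter ones.
```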

82 comments

It seems to me that viewing a late Great Filter to be worse news than an early Great Filter is another instance of the confusion and irrationality of SSA/SIA-style anthropic reasoning and subjective anticipation. If you anticipate anything, believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom.

Let's take this further: is there any reason, besides our obsession with subjective anticipation, to discuss whether a late great filter is 'good' or 'bad' news, over and above policy implications? Why would an idealized agent evaluate the utility of counterfactuals it knows it can't realize?

That is a good question, and one that I should have asked and tried to answer before I wrote this post. Why do we divide possible news into "good" and "bad", and "hope" for good news? Does that serve some useful cognitive function, and if so, how?

Without having good answers to these questions, my claim that a late great filter should not be considered bad news may just reflect confusion about the purpose of calling something "bad news".

cousin_it (12y, 6 points)
About the cognitive function of "hope": it makes evolutionary sense to become all active and bothered when a big pile of utility hinges on a single uncertain event in the near future, because that makes you frantically try to influence that event. If you don't know how to influence it (as in the case of a lottery), oh well, evolution doesn't care.
TheOtherDave (12y, 2 points)
Evolution might care. That is, systems that expend a lot of attention on systems they can't influence might do worse than systems that instead focus their attention on systems they can influence. But yes, either there weren't any of the second kind of system around to compete with our ancestors, or there were and they lost out for some other reason, or there were and it turns out that it's a bad design for our ancestral environment.
Rain (14y, 9 points)
If we expect a late great filter, we may need to strive significantly harder to avoid it. The policy implications are staggering.
Roko (14y, 1 point)
no, I don't think so. But if you strip subjective anticipation off a human, you might lose a lot of our preferences. We are who we are, so we care about subjective anticipation.
Wei Dai (14y, 3 points)
We are who we are, but who we are is not fixed. What we care about seems to depend on what arguments we listen to or think up, and in what order. (See my Shut Up and Divide post for an example of this.) While an ideally rational agent (according to our current best conception of ideal rationality) would seek to preserve its values regardless of what they are, some humans (including me, for example) actively seek out arguments that might change what they care about. Such "value-seeking" behavior doesn't seem irrational to me, even though I don't know how to account for it in terms of rationality. And while it seems impossible for a human to completely give up subjective anticipation, it does seem possible to care less about it.
JGWeissman (14y, 1 point)
I would say it is part of checking for reflective consistency. Ideally, there shouldn't be arguments that change your (terminal) values, so if there are, you want to know about them so you can figure out what is wrong and how to fix it.
Wei Dai (14y, 5 points)
I don't think that explanation makes sense. Suppose an AI thinks it might have a security hole in its network stack, so that if someone sends it a certain packet, it would become that person's slave. It would try to fix that security hole, without actually seeking to have such a packet sent to itself. We humans know that there are arguments out there that can change our values, but instead of hardening our minds against them, some of us actually try to have such arguments sent to us.
Amanojack (14y, 2 points)
In the deontological view of values this is puzzling, but in the consequentialist view it isn't: we welcome arguments that can change our instrumental values, but not our terminal values (A.K.A. happiness/pleasure/eudaimonia/etc.). In fact I contend that it doesn't even make sense to talk about changing our terminal values.
Roko (14y, 2 points)
It is indeed a puzzling phenomenon. My explanation is that the human mind is something like a coalition of different sub-agents, many of which are more like animals or insects than rational agents. In any given context, they will pull the overall strategy in different directions. The overall result is an agent with context-dependent preferences, i.e. irrational behavior. Many people just live with this. Some people, however, try to develop a "life philosophy" that shapes the disparate urges of the different mental subcomponents into an overall strategy that reflects a consistent overall policy. A moral "argument" might be a hypothetical that attempts to put your mind into a new configuration of relative power of subagents, so that you can re-assess the overall deal.
pjeby (14y, 1 point)
Congratulations, you just reinvented [a portion of] PCT. ;-)

[Clarification: PCT models the mind as a massive array of simple control circuits that act to correct errors in isolated perceptions, with consciousness acting as a conflict-resolver to manage things when two controllers send conflicting commands to the same sub-controller. At a fairly high level, a controller might be responsible for a complex value: like correcting hits to self-esteem, or compensating for failings in one's aesthetic appreciation of one's work. Such high-level controllers would thus appear somewhat anthropomorphically agent-like, despite simply being something that detects a discrepancy between a target and an actual value, and sets subgoals in an attempt to rectify the detected discrepancy. Anything that we consider of value potentially has an independent "agent" (simple controller) responsible for it in this way, but the hierarchy of control does not necessarily correspond to how we would abstractly prefer to rank our values -- which is where the potential for irrationality and other failings lies.]
Roko (14y, 2 points)
It does seem that something in this region has to be correct.

Yes there are different ways to conceive what news is good or bad, and yes it is good from a God-view if filters are late. But to those of us who already knew that we exist now, and have passed all previous filters, but don't know how many others out there are at a similar stage, the news that the biggest filters lie ahead of us is surely discouraging, if useful.

Wei Dai (14y, 0 points)
My post tried to argue that some of them are better than others. But is such discouragement rational? If not, perhaps we should try to fight against it. It seems to me that we would be less discouraged if we considered our situation and decisions from what you call the God-view.
RobinHanson (14y, 8 points)
Call me stuck in ordinary decision theory, with less than universal values. I mostly seek info that will help me now make choices to assist myself and my descendants, given what I already know about us. The fact that I might have wanted to commit before the universe began to rating universes as better if they had later filters is not very relevant for what info I now will call "bad news."
Wei Dai (14y, 5 points)
After I wrote my previous reply to you, I realized that I don't really know why we call anything good news or bad news, so I may very well be wrong when I claimed that a late great filter is not bad news, or that it's more rational to not call it bad news. That aside, ordinary decision theory has obvious flaws. (See here for the latest example.) Given human limitations, we may not be able to get ourselves unstuck, but we should at least recognize that it's less than ideal to be stuck this way.
Vladimir_Nesov (14y, 1 point)
The goal of UDT-style decision theories is making optimal decisions at any time, without needing to precommit in advance. Looking at the situation as if from before the beginning of time is argued to be the correct perspective from any location and state of knowledge, no matter what your values are.
timtyler (14y, -8 points)

All else being the same we should prefer a late coin toss over an early one, but we should prefer an early coin toss that definitely came up tails over a late coin toss that might come up either way. Learning that the coin toss is still ahead is bad news in the same way as learning that the coin came up tails is good news. Bad news is not isomorphic to making a bad choice. An agent that maximizes good news behaves quite differently from a rational actor.

Wei Dai (14y, 1 point)
So, according to your definition of "good news" and "bad news", it might be bad news to find that you've made a good decision, and good news to find that you've made a bad decision? Why would a rational agent want to have such a concept of good and bad news?

So, according to your definition of "good news" and "bad news", it might be bad news to find that you've made a good decision, and good news to find that you've made a bad decision? Why would a rational agent want to have such a concept of good and bad news?

If you wake up in the morning to learn that you got drunk last night, played the lottery, and won, then this is good news.

Let us suppose that, when you were drunk, you were computationally limited in a way that made playing the lottery (seem to be) the best decision given your computational limitations. Now, in your sober state, you are more computationally powerful, and you can see that playing the lottery last night was a bad decision (given your current computational power but minus the knowledge that your numbers would win). Nonetheless, learning that you played and won is good news. After all, maybe you can use your winnings to become even more computationally powerful, so that you don't make such bad decisions in the future.

Wei Dai (14y, 2 points)
Why is that good news, when it also implies that in the vast majority of worlds/branches, you lost the lottery? It only makes sense if, after learning that you won, you no longer care about the other copies of you that lost, but I think that kind of mind design is simply irrational, because it leads to time inconsistency.
wnoise (14y, 6 points)
Whether these copies exist or not, and their measure, could depend on details of the lottery's implementation. If it's a classical lottery, all the (reasonable) quantum branches from the point you decided could have the same numbers.
Tyrrell_McAllister (14y, 5 points)
I want to be careful to distinguish Many-Worlds (MW) branches from theoretical possibilities (with respect to my best theory). Events in MW-branches actually happen. Theoretical possibilities, however, may not. (I say this to clarify my position, which I know differs from yours. I am not here justifying these claims.)

My thought experiment was supposed to be about theoretical possibility, not about what happens in some MW-branches but not others. But I'll recast the situation in terms of MW-branches, because this is analogous to your scenario in your link. All of the MW-branches very probably exist, and I agree that I ought to care about them without regard to which one "I" am or will be subjectively experiencing. So, if learning that I played and won the lottery in "my" MW-branch doesn't significantly change my expectation of the measures of MW-branches in which I play or win, then it is neither good news nor bad news. However, as wnoise points out, some theoretical possibilities may happen in practically no MW-branches.

This brings us to theoretical possibilities. What are my expected measures of MW-branches in which I play and in which I win? If I learn news N that revises my expected measures in the right way, so that the total utility of all branches is greater, then N is good news. This is the kind of news that I was talking about, news that changes my expectations of which of the various theoretical possibilities are in fact realized.
Tyrrell_McAllister (14y, 0 points)
I'm very surprised that this was downvoted. I would appreciate an explanation of the downvote.
prase (14y, 1 point)
This is the point at which believing in many worlds and caring about other branches leads to a very suspicious way of perceiving reality. I know the absurdity heuristic isn't all that reliable, but still - would it make you really sad or angry or desperate if you realised that you had won a billion (in any currency) under the described circumstances? Would you really celebrate if you realised that the great filter, which wipes out a species 90% of the time, and which you previously believed we had already passed, is going to happen in the next 50 years? I am ready to change my opinion about this style of reasoning, but I probably need a more powerful intuition pump.
Nick_Tarleton (14y, 3 points)
Caring about other branches doesn't imply having congruent emotional reactions to beliefs about them. Emotions aren't preferences.
prase (14y, 1 point)
Emotions are not preferences, but I believe they can't be completely disentangled. There is something wrong with a person who feels unhappy after learning that the world has changed towards his/her preferred state.
BrandonReinhart (14y, 0 points)
I don't see how you can effectively apply social standards like "something wrong" to a mind that implements UDT. There are no human minds or non-human minds that I am aware of that perfectly implement UDT. There are no known societies of beings that do. It stands to reason that such a society would seem very other if judged by the social standards of a society composed of standard human minds. When discussing UDT outcomes you have to work around that part of you that wants to immediately "correct" the outcome by applying non-UDT reasoning.
prase (14y, 0 points)
That "something wrong" was not so much a social standard as an expression of an intuitive feeling of contradiction, which I wasn't able to specify more explicitly. I could anticipate general objections such as yours; however, it would help if you could be more concrete here.

The question is whether one can say he prefers a state of the world where he dies soon with 99% probability, even if he would in fact be disappointed after realising that it was really going to happen. I think we are now at risk of redefining a few words (like preference) to mean something quite different from what they used to mean, which I don't find good at all.

And by the way, why is this a question of decision theory? There is no decision in the discussed scenario, only a question of whether some news can be considered good or bad.
cupholder (14y, 2 points)
I don't know if this is exactly the kind of thing you're looking for, but you might like this paper arguing for why many-worlds doesn't imply quantum immortality and like-minded conclusions based on jumping between branches. (I saw someone cite this a few days ago somewhere on Less Wrong, and I'd give them props here, but can't remember who they were!)
Cyan (14y, 2 points)
It was Mallah, probably.
cupholder (14y, 1 point)
You're probably right - going through Mallah's comment history, I think it might have been this post of his that turned me on to his paper. Thanks Mallah!
RobinZ (14y, 0 points)
It's good news because you just gained a big pile of utility last night. Yes, learning that you're not very smart when drunk is bad news, but the money more than makes up for it.
wnoise (14y, 2 points)
Wei_Dai is saying that all the other copies of you that didn't win lost more than enough utility to make up for it. This is far from a universally accepted utility measure, of course.
RobinZ (14y, 0 points)
So Wei_Dai's saying the money doesn't more than make up for? That's clever, but I'm not sure it actually works.
Tyrrell_McAllister (14y, 0 points)
Had the money more than made up for it, it would have been rational from a normal expected-utility perspective to play the lottery. My scenario was assuming that, with sufficient computational power, you would know that playing the lottery wasn't rational.
RobinZ (14y, 1 point)
We're not disagreeing about the value of the lottery - it was, by stipulation, a losing bet - we are disagreeing about the proper attitude towards the news of having won the lottery. I don't think I understand the difference in opinion well enough to discover the origin of it.
Tyrrell_McAllister (14y, 0 points)
I must have misunderstood you, then. I think that we agree about having a positive attitude toward having won.

In the real world, we don't get to make any decision. The filter hits us or it doesn't.

If it hits early, then we shouldn't exist (good news: we do!). If it hits late, then WE'RE ALL GOING TO DIE!

In other words, I agree that it's about subjective anticipation, but would point out that the end of the world is "bad news" even if you got to live in the first place. It's just not as bad as never having existed.

Nick is wondering whether we can stop worrying about the filter (if we're already past it). Any evidence we have that complex life develops before the filter would then cause us to believe in the late filter, leaving it still in our future, and thus still something to worry about and strive against. Not as bad as an early filter, but something far more worrisome, since it is still to come.

FAWS (14y, 3 points)
Depends on what you mean by "find that you've made a good decision", but probably yes. A decision is either rational given the information you had available or it's not. Do you mean finding out you made a rational decision that you forgot about? Or making the right decision for the wrong reasons and later finding out the correct reasons? Or finding additional evidence that increases the difference in expected utility for making the choice you made?

Finding out you have a brain tumor is bad news. Visiting the doctor when you have the characteristic headache is a rational decision, and an even better decision in the third sense when you turn out to actually have a brain tumor. Finding a tumor would retroactively make a visit to the doctor a good decision in the second sense even if it originally was for irrational reasons. And in the first sense, if you somehow forgot about the whole thing in the meantime, I guess being diagnosed would remind you of the original decision.

Bad news is news that reduces your expectation of utility. Why should a rational actor lack that concept? If you don't have a concept for that, you might confuse things that change your expectation of utility with things that change utility, and accidentally end up just maximizing the expectation of utility when you try to maximize expected utility.
[anonymous] (14y, 0 points)
UPDATE: This comment clearly misses the point. Don't bother reading it.

Well, the worse you turn out to have done within the space of possible choices/outcomes, the more optimistic you should be about your ability to do better in the future, relative to the current trend. For example, if I find out that I am being underpaid for my time, while this may offend my sense of justice, it is good news about future salary relative to my prior forecast, because it means it should be easier than I thought to be paid more, all else equal.

Generally, if I find that my past decisions have all been perfect given the information available at the time, I can't expect to materially improve my future by better decisionmaking, while if I find errors that were avoidable at the time, then if I fix these errors going forward, I should expect an improvement. This is "good news" insofar as it expands the space of likely outcomes in a utility-positive direction, and so should raise the utility of the expected (average) outcome.

First of all, a late great filter may be total, while an early great filter (based on the fact of our existence) was not - a reason to prefer the early one.

Secondly, let's look at the problem from our own perspective. If we knew that there was an Omega simulating us, and that our decision would affect when a great filter happens, even in our past, then this argument could work.

But we have no evidence of that! Omega is an addition to the problem, that completely changes the situation. If I had the button in front of me that said "early partial great fi... (read more)

RobinZ (14y, 1 point)
Hah - two hours after you, and without reading your comment, I come to the same conclusion by analogy to your own post.* :)

* That's where your ten karma just came from, by the way.

We can't do anything today about any filters that we've already passed...

Wei Dai (14y, 0 points)
I'm not sure which part of my post you're responding to with that comment, but perhaps there is a misunderstanding. The two scenarios involving Omega are only meant to establish that a late great filter should not be considered worse news than an early great filter. They are not intended to correspond to any decisions that we actually have to make. The decision mentioned in the last paragraph, about how much resources to spend on existential risk reduction, which we do have to make, is not directly related to those two scenarios.

"The two scenarios involving Omega are only meant to establish that a late great filter should not be considered worse news than an early great filter."

I honestly think this would have been way, way, way clearer if you had dropped the Omega decision theory stuff, and just pointed out that, given great filters of equal probability, choosing an early great filter over a late great filter would entail wiping out the history of humanity in addition to the galactic civilization that we could build, which most of us would definitely see as worse.

Wei Dai (14y, 1 point)
Point taken, but I forgot to mention that the Omega scenarios are also meant to explain why we might feel that the great filter being late is worse news than the great filter being early: an actual human, faced with the decision in scenario 2, might be tempted to choose the early filter. I'll try to revise the post to make all this clearer. Thanks.
CronoDAS (14y, 3 points)
But, in universes with early filters, I don't exist. Therefore anything I do to favor late filters over early filters is irrelevant, because I can't affect universes in which I don't exist. (And by "I", I mean anything that UDT would consider "me".)

This seems centered around a false dichotomy. If you have to choose between an early and a late Great Filter, the latter may well be preferable. But that presupposes it must be one or the other. In reality, there may be no Great Filter, or there may be a great filter of such a nature that it only allows linear expansion, or some other option we simply haven't thought of. Or there may be a really late great filter. Your reasoning presumes an early/late dichotomy that is overly simplistic.

Wei Dai (14y, 0 points)
I made that assumption because I was responding to two articles that both made that assumption, and I wanted to concentrate on a part of their reasoning apart from that assumption.

I don't seem to understand the logic here. As I understand the idea of "a late Great Filter is bad news", it is simply about a Bayesian update of the probabilities of the hypotheses A = "Humanity will eventually come to Explosion" versus not-A. Say we have original probabilities for this, p = P(A) and q = 1-p. Now suppose we take the Great Filter hypothesis for granted, and we find on Mars the remnants of a great civilization, equal to ours or even more advanced. This means that we must update our probabilities of A/not-A so that P(A) decreases.

And I consider this really bad news. Either that, or the Great Filter idea has some huuuuge flaw I overlooked.

So, where am I wrong?
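(The update being described can be written out explicitly. A minimal sketch with made-up numbers; the prior and the two likelihoods below are placeholders, not estimates.)

```python
# Bayes update for A = "humanity eventually passes the filter and expands",
# given E = "remnants of an advanced civilization found on Mars".
# All probabilities below are assumed for illustration only.

p_A = 0.5              # prior P(A)
p_E_given_A = 0.01     # dead advanced neighbors are unlikely if the filter is behind us
p_E_given_notA = 0.10  # more likely if a late filter routinely stops civilizations like ours

p_E = p_E_given_A * p_A + p_E_given_notA * (1 - p_A)
p_A_given_E = p_E_given_A * p_A / p_E   # Bayes' rule

print(round(p_A_given_E, 3))  # 0.091 -- P(A) drops, which is the sense in which this is bad news
```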

In the scenario where the coin lands tails, nothing interesting happens, and the coin is logically independent of anything else, so let us assume that the coin lands heads. We are assuming (correct me if I'm wrong) that Omega is a perfect predictor. So, in the second scenario, we already know that the person will press the button even before he makes his decision, even before he does it, or else either (a) Omega's prediction is wrong (contradiction) or (b) the Earth was destroyed a million years ago (contradiction). The fact that we currently exist gives u... (read more)

Wei Dai (14y, 1 point)
I'm not sure how this relates to the main points of my post. Did you intend for it to be related (in which case please explain how), or is it more of a tangent? What I meant by the title is that the Great Filter being late is not bad news (compared to it being early). Perhaps I should change the title to make that clearer?
alyssavance (14y, 0 points)
"I'm not sure how this relates to the main points of my post. Did you intend for it to be related (in which case please explain how), or is it more of a tangent?"

You said: "It seems to me that this decision problem is structurally no different from the one faced by the future you in the previous thought experiment, and the correct decision is still to choose the late filter (i.e., press the button)."

This isn't a decision problem because the outcome is already known ahead of time (you will press the button).
[anonymous] (14y, 0 points)
Known to whom?
Mass_Driver (14y, 0 points)
Hang on.

Let W = "The Forces of Good will win an epic battle against the Forces of Evil."
Let C = "You will be instrumental in winning an epic battle against the Forces of Evil."
Let B = "There will be an epic battle between the Forces of Good and the Forces of Evil."

What "You are the Chosen One" usually means in Western fiction is: B is true, and W if and only if C.

Thus, if you are definitely the Chosen One, and you stay home and read magazines, and reading magazines doesn't help you win epic battles, and epic battles are relatively evenly-matched, then you should expect to observe (B ^ !C ^ !W), i.e., you lose an epic battle on behalf of the Forces of Good. Fate can compel you to the arena, but it can't make you win.
Unknowns (14y, 0 points)
If you are a heroic individual and a perfect predictor says that you will go on a dangerous quest, you will go on a dangerous quest even if there is a significant probability that you will not go. After all, many things happen that had low probabilities.
alyssavance (14y, 0 points)
Contradiction. If a perfect predictor predicts that you will go on a dangerous quest, then the probability of you not going on a dangerous quest is 0%, which is not "significant".
Unknowns (14y, 0 points)
There may be a significant probability apart from the fact that a perfect predictor predicted it. You might as well say that either you will go or you will not, so the probability is either 100% or 0%.
alyssavance (14y, 0 points)
"There may be a significant probability apart from the fact that a perfect predictor predicted it."

I do not understand your sentence.

"You might as well say that either you will go or you will not, so the probability is either 100% or 0%."

Exactly. Given omniscience about event X, the probability of event X is always either 100% or 0%. If we got a perfect psychic to predict whether I would win the lottery tomorrow, the probability of me winning the lottery would be either 100% or 0% after the psychic made his prediction.
Unknowns (14y, 2 points)
I was saying that taking into account everything you know except for the fact that a perfect predictor predicted something, there could be a significant probability.

I downvoted this because it seems to be missing a very obvious point - that the reason why an early filter would be good is because we've already passed it. If we hadn't passed it, then of course we want the filter as late as possible.

On the other hand, I notice that this post has 15 upvotes. So I am wondering whether I have missed anything - generally posts that are this flawed do not get upvoted this much. I read through the comments and thought about this post a bit more, but I still came to the conclusion that this post is incredibly flawed.

But since copies of us occur more frequently in universes with late filters than in universes with early filters, such a decision (which Robin arrives at via SIA) can be justified on utilitarian grounds under UDT.

But it seems like our copies in early-filter universes can eventually affect a proportionally greater share of the universe's resources.

Also, Robin's example of shelters seems mistaken: if shelters worked, some civilizations would already have tried them and colonized the universe. Whatever we try has to stand a chance of working against some u... (read more)

according to this line of thought, we're acting in both kinds of universes: those with early filters, and those with late filters.

Suppose that the reason for a late filter is a (complex) logical fact about the nature of technological progress; that there is some technological accident that it is almost impossible for an intelligent species to avoid, and that the lack of an early filter is a logical fact about the nature of life-formation and evolution. For the purposes of clarity, we might even think of an early filter as impossible and a late filter as certain.

Then, in what sense do we "exist in both kinds of universe"?

I agree. Note that the very concept of "bad news" doesn't make sense apart from a hypothetical where you get to choose to do something about determining this news one way or another. Thus CronoDAS's comment actually exemplifies another reason for the error: if the hypothetical decision is only able to vary the extent of a late great filter, as opposed to shifting the timing of a filter, it's clear that discovering a powerful great filter is "bad news" according to such a metric (because it's powerful, as opposed to because it's late).

Tyrrell_McAllister (14y, 3 points)
I don't think that that's the concept of "bad news" that Hanson and Bostrom are using. If you have background knowledge X, then a piece of information N is "bad news" if your expected utility conditioned on N & X is less than your expected utility conditioned on X alone. Let our background knowledge X include the fact that we have secured all the utility that we received up till now. Suppose also that, when we condition only on X, the Great Filter is significantly less than certain to be in our future. Let N be the news that a Great Filter lies ahead of us. If we were to learn N, then, as Wei Dai pointed out, we would be obliged to devote more resources to mitigating the Great Filter. Therefore, our expected utility over our entire history would be less than it is when we condition only on X. That is why N is bad news.
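(For concreteness, a minimal sketch of that definition with toy numbers; the utilities and the prior probability of a future filter are assumptions chosen only to satisfy the conditions stated above.)

```python
# "N is bad news" means E[U | N & X] < E[U | X].
# X = background knowledge (past utility secured, future filter less than certain);
# N = "a Great Filter lies ahead of us". All numbers are illustrative.

U_SECURED = 10.0            # utility already secured up to now (part of X)
U_FUTURE_NO_FILTER = 100.0  # assumed future utility if no filter lies ahead
U_FUTURE_FILTER = 20.0      # assumed future utility if one does (even with mitigation)
P_FILTER_GIVEN_X = 0.2      # on X alone, a future filter is significantly less than certain

eu_given_X = U_SECURED + (P_FILTER_GIVEN_X * U_FUTURE_FILTER
                          + (1 - P_FILTER_GIVEN_X) * U_FUTURE_NO_FILTER)
eu_given_N_and_X = U_SECURED + U_FUTURE_FILTER  # N makes the future filter certain

print(eu_given_X, eu_given_N_and_X)  # 94.0 30.0 -> expected utility falls, so N is bad news
```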

I think this is a misleading problem, like Omega's subcontracting problem. Our actions now do not affect early filters, even acausally, so we cannot force the filter to be late by being suicidal and creating a new filter now.

Let us construct an analogous situation: let D be a disease that people contract with 99% probability, and they die of it at the latest when they are n years old (let us say n=20).

Assume that you are 25 years old and there exists no diagnostic test for the disease, but some scientist discovers that people can die of the disease as late as age 30. I don't know you, but in your place I'd call it bad news for you personally, since you will have to live in fear for 5 additional years.

On the other hand, it is good news in an abstract sense, since it means 99% of ... (read more)

A tangent: if we found extinct life on Mars, it would provide precious extra motivation to go there, which is a good thing.

Scenario 1 and Scenario 2 are not isomorphic: the former is Newcomb-like and the latter is Solomon-like (see Eliezer's paper on TDT for the difference), i.e. in the former you can pre-commit to choose the late filter four years from now if you survive, whereas in the latter there's no such possibility. I'm still trying to work out what the implications of this are, though...

Re: "believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom."

Maybe - if you also believe the great filter is likely to result in THE END OF THE WORLD.

If it is merely a roadblock - similar to the many roadblocks we have seen so far - DOOM doesn't necessarily follow - at least not for a loooong time.

[anonymous] (14y, 0 points)
If the great filter is a roadblock, or a mere possibility of DOOM, then we already know damn well there are late great filters and could spend several pages just listing them.