I upvoted this because it was highly amusing -- but ultimately it's silly, a perfect example of how some people can be so sharp that they cut themselves.
I wonder: if, instead of one-boxing and two-boxing for a prize of $100 or $200, we had "Selection A: horrible self-mutilation" and "Selection B: one million dollars", with Prometheus creating only the people he believed would pick Selection A and reject Selection B... would the people who one-box here STILL one-box?
Well, I'd just choose to win instead, and thus pick the one million dollars instead of the horrible self-mutilation. I think that's the sane thing to do -- if Prometheus has 99.99% predictive accuracy on this, there'll be roughly 10,000 people who'll select self-mutilation for every one like me who'll pick the money. But I already know I'm like me, and I'm the one I'm concerned about.
The relevant probability isn't P(chooses self-mutilation | Prometheus created him) ~= 0, but rather P(chooses one million dollars | is Aris Katsaris) ~= 1.
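To put rough numbers on this (a minimal Monte Carlo sketch; the 50% base rate of dispositions and the blueprint count are illustrative assumptions -- only the 99.99% accuracy figure comes from the comment above):

```python
import random

random.seed(0)

N = 1_000_000        # blueprints Prometheus screens (assumed)
ACCURACY = 0.9999    # Prometheus's predictive accuracy
P_MONEY_TYPE = 0.5   # base rate of money-picking dispositions (assumed)

created, created_money_pickers = 0, 0
for _ in range(N):
    picks_money = random.random() < P_MONEY_TYPE   # fixed disposition
    correct = random.random() < ACCURACY
    predicted_mutilation = (not picks_money) if correct else picks_money
    if predicted_mutilation:                       # only these get created
        created += 1
        created_money_pickers += picks_money

print(created_money_pickers / created)  # ~0.0001: one money-picker per ~10,000 created
# But P(picks money | is a money-picker type) = 1 by construction:
# Prometheus filtered on the disposition, which I already know I have.
```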
Azathoth wants you to maximize your number of descendants; if you fail to have descendants, Azathoth will try not to have created you.
But this seems merely false. Azathoth just creates descendants whose ancestors reproduced. Azathoth isn't exerting any sort of foresight as to whether you reproduce. I can't figure out who or what you're trying to trade with. Not having children simply does not make you retroactively less likely to have existed.
I suppose you could be in a Newcomblike situation with your parents making a similar decision to have birthed you. I don't see how you could be in one with respect to Azathoth/evolution. It's not modeling you, it doesn't contain a computation similar to you, there is no logical update on what it does after you know your own decision.
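The difference can be made concrete (a toy sketch; the function names are mine, not anything from the post): Prometheus's creation rule takes your decision procedure as an input, while Azathoth's takes only your ancestors' history.

```python
def prometheus_creates(your_decision_procedure):
    # Runs a copy of your computation before deciding whether to create you,
    # so what you decide is logically entangled with whether you exist.
    return your_decision_procedure() == "one-box"

def azathoth_creates(ancestors_reproduced):
    # Never models you at all; the input was fixed before you were born.
    return ancestors_reproduced

print(prometheus_creates(lambda: "one-box"))  # True: your algorithm mattered
print(azathoth_creates(True))   # True no matter what you decide tonight
```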
My initial reaction is "This is all correct... except that Azathoth isn't smart enough to have invented counterfactual trade!". Just imagine trying to counterfactually trade with your past self from before you knew about counterfactual trade. A similar case: coming up with a great product you'd like to buy, only to discover when you get to the market that you were the first to come up with the idea and nobody is selling the product, even though it would have been good for both of you if they had.
For further clarity, here's a scenario where your reasoning probably WOULD work:
You find yourself on a planet, and you know this planet is in a phase of development in which conditions remain almost perfectly unchanged for 3^^^3 years/generations (the laws of physics would need to be different in this universe for this to work, I think). The environment is also completely reset every few generations, and only the extremely durable spores for the next generation are able to survive, so there's no way to relay messages to the future. No tech solutions like directly editing messages into the genes, because you can't develop tech in a single generation.
If I lived on that planet, I'd pay very close attention to the reasoning of this post. In a world that's only existed for a few billion years and is about to hit the singularity, and where I've grown up in conditions completely different from my ancestors'... not so much.
Good post though, upvoted.
Q: Why does this knife have a handle?
A: This allows you to grasp it without cutting yourself.

These kinds of answers are highly compelling, but strictly speaking they are allowing events in the future to influence events in the past.
No, they aren't. They are answering the question according to the standard meaning conveyed by "Why?". When we use the word 'why' we mean a variety of things along the lines of 'What purpose does this serve?', as well as sometimes 'Explain the series of events that led up to the final state in a convenient way'.
In standard usage if someone answers 'so you don't cut yourself' they usually are not talking about anything to do with temporal relations one way or the other.
Q: Why do I exist?
A: Because you're going to have lots of children.
Just wrong. The only thing close to this with even a little bit of poetic truth would be that I exist because Azathoth, being familiar with my design, rationally expects me to have lots of children.
a mutually beneficial trade with Azathoth (Azathoth creates you; you increase your reproductive fitness in exchange)
At first I reacted negatively to this idea, but eventually I realized that the argument has the same acausal trade structure as the rest of the EDT-inspired nonsense around here. My voluntary reproductive efforts seem to be evidence about my genetic makeup, and that genetic makeup is in a causal relationship with my existence. So that is not what is wrong with this idea.
The trouble is that the 'Azathoth problem' is isomorphic to the smoking lesion problem, which is not isomorphic to Newcomb's problem.
Hence, any decision theory capable of both (i) one-boxing in Newcomb's problem and (ii) choosing to smoke in the 'smoking lesion' problem will have no difficulty here.
EDIT: I'd better sketch out this "isomorphism": "smoking" = "acting virtuously, in defiance of our evolutionary drives", "not smoking" = "giving in to our instincts and trying to optimize number of children". "having the lesion...
Azathoth wants you to maximize your number of descendants; if you fail to have descendants, Azathoth will try not to have created you.
It sure is welcome to try now that I've precommitted to never have children.
(On a silly note, this gave me a mental image of a time-traveling God of Evolution who meddles with the past to achieve desired results in the present. shudder)
But if you one-box in Newcomb's Problem, you should take these answers more literally. The kinds of backwards causal arrows you draw are the same.
But keep in mind that this kind of control, or "backwards causality", is all about your map, not the territory; more precisely, it's about your state of logical uncertainty, not about what the definitions you hold logically imply. If you already know what the state (probability) of the thing you purport to control is, then you can't control it.
In this manner, you might have weak control over your...
It's dangerous to update after observing your own existence, because no counterfactual version of you can update on their own non-existence, so your updates can't possibly make sense in aggregate.
One-box in Part 1 if you value your existence, because the probability of your one-boxing directly determines the probability of your existing, and Prometheus accepts or rejects you as a package. You need not bother (much) with Azathoth, because he's an idiot: he has no predictive ability beyond the simplest form of induction, he only took into account the effects of (some of) the parts that make you up, in completely different combinations, and what your ancestors did is much more weakly entangled with what you are going to do.
It doesn't.
The trouble is how we can distinguish your argument from the "how can my choice cause me to lose the $1,000,000?" argument in Newcomb's problem, which doesn't seem to lead to winning, and identify a meaningful sense in which one is right and the other is wrong.
By creating a simulation to interrogate, Omega/Prometheus/Azathoth have brought a being into existence, which means the being may have preferences to continue to exist (in some other form). So I'd tend to pick B for Prometheus, to continue existing. I wouldn't do so for Azathoth, because evolution doesn't have to create a living version of me to see what I would do; there is no "I" to regret dying or not existing there.
Thanks, this is a highly thought-provoking (and headache-inducing :p) post.
The "obvious" objection to the (evolutionary) argument here is that, were I to in fact make some choices that increase my inclusive genetic fitness in response to this argument, that would bear almost no connection to the genetic fitness of my ancestors, who had never been exposed to the argument, and were presumably causal decision theorists too. (If what I have just said is true, by the way, that would make the argument also a basilisk, in that by disseminating it you wo...
Your idea sounds plausible and interesting, but I don't completely understand the implications. What am I supposed to do if the environment changes from generation to generation, e.g. due to advances in science? Should I adopt the behaviors that helped my ancestors have many kids, or the behaviors that will help me have many kids?
Others in this thread have pointed this out, but I will try to articulate my point a little more clearly.
Decision theories that require us to one-box do so because we have incomplete information about the environment. We might be in a universe where Omega thinks that we'll one-box; if we think that Omega is nearly infallible, we increase this probability by choosing to one-box. Note that probability is about our own information, not about the universe. We're not modifying the universe, we're refining our estimates.
If the box is transparent, and we can see ...
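The "refining our estimates" point can be made concrete (a minimal sketch; the 99% figure and the flat prior are assumptions standing in for "nearly infallible"):

```python
# How much does choosing to one-box shift our credence that Omega
# predicted one-boxing (and so filled the opaque box)?
prior = 0.5      # P(Omega predicted we'd one-box), assumed flat
accuracy = 0.99  # P(our actual choice matches the prediction), assumed

p_onebox = accuracy * prior + (1 - accuracy) * (1 - prior)
posterior = accuracy * prior / p_onebox
print(posterior)  # 0.99: our estimate moved; the box's contents never did.
```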
So...if Prometheus created you to one-box, and you should one-box anyway...why not one-box? "Too much useless information" alarm bells are ringing in my head.
Edit: Italics to quotes.
Edit2: I failed to read. See this comment
OK, so as I understand timeless decision theory, one wants to honor the precommitments one would have made if the outcome actually depended on the answer, regardless of whether or not the outcome actually does. The reason for this seems to be that behaving as a timeless decision agent makes your behavior predictable to other timeless-decision-theoretic agents (including your future selves), and therefore big wins can be had all around, especially when trying to predict your own future behavior.
So, if you buy the idea that...
I don't get it. I do exist. If I never reproduce, then Azathoth predicted incorrectly (which will hardly be the first time).
(I also agree with the response that the universe isn't better off for having me in it, but that doesn't matter, since it has me anyway.)
Total utilitarians want to one-box in Prometheus' game, and average utilitarians want to two-box.
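With toy numbers (all assumed: a 99%-accurate Prometheus, $100 per box, and a flat utility of 10 for existing at all), the split looks like this:

```python
ACCURACY = 0.99  # Prometheus's predictive accuracy (assumed)
U_EXIST = 10.0   # utility of getting to exist at all (assumed)

# disposition -> (probability of being created, payoff if created)
cases = {"one-boxer": (ACCURACY, U_EXIST + 100),
         "two-boxer": (1 - ACCURACY, U_EXIST + 200)}

for name, (p_created, payoff) in cases.items():
    total = p_created * payoff  # non-existence counts as 0
    average = payoff            # averaged only over worlds where you exist
    print(f"{name}: total={total:.1f}, average={average:.1f}")
# one-boxer: total=108.9 > 2.1 -> total utilitarians one-box
# two-boxer: average=210.0 > 110.0 -> average utilitarians two-box
```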
Let me try to make my objection clearer. You seem to be concerned with things that make your existence less likely. But that is never going to be a problem. You already know the probability of your own existence is 1; you can't update it based on new data.
I don't get it. Is this supposed to be some weird form of evidential or maybe timeless decision theory? It hardly matters; whatever decision theory you're using, you already know you exist; conditioning on the possibility that you don't is nonsensical. Hell, even if you're an AI using UDT you gain nothing from not assuming you exist; you were built to not update in the normal sense because whoever built you cared about all possible worlds you might end up in, but regardless, if you're standing there making the decision, you exist (i.e. this can be assumed at the start and taken into account).
Edit: Just for the purpose of explicitness, I should probably state that the conclusion here is that you should two-box in this case.
Although I find one-boxing difficult to do in that scenario, as a human, it is apparent that a reflectively consistent decision theory would one-box, as that is what it would have precommitted to do if it had the option (prior to its not yet determined existence) to precommit. No backwards arrows of causality are needed, just a particular type of consistency (updatelessness or timelessness).
I've been trying to work on this problem based on my admittedly poor understanding of Updateless Decision Theory, and I think I've come to the conclusion that, while you should one-box in Newcomb's problem and in Transparent Newcomb's problem, you should two-box when dealing with Prometheus, ignore Azathoth, and ignore the desires of evil parents.
Why? My reasoning is based on these lines from cousin_it's explanation of UDT:
...When you're faced with a decision, you find all copies of you in the entire "multiverse" that are faced with the same de
For this specific formulation of the question, I think it may be relevant to know whether Prometheus updates on your decisions in order to improve his projections of whether future individuals will one-box or two-box.
What was the point of reposting this after it was in the discussion section, without seeming to edit it in response to comments since then?
I think the primary reason why this Prometheus problem is flawed is that in Newcomb's problem, the presence or absence of the million dollars is unknown, while in this Prometheus problem, you already know what Prometheus did as a result of his prediction. Think of a variation on Newcomb's problem where Omega allows you to look inside box B before choosing, and you see that it is full. Only an idiot would take only one box in that scenario, and that's why this analysis is flawed.
Good stuff, thanks.
Acausal version: "If you have goals that would be served by you existing, then try to have many kids because it increases the number of worlds in which you exist." Note how this completely ignores the "fact" that you "already exist" - of course you do, we're living in a multiverse! What's left to you is to increase the measure.
Causal version 1: "If you're a good person, and you believe the world needs more good people, then try to have many kids." Note that this argument doesn't rely on genetics on...
..."You were created by a god: a being called Prometheus. Prometheus was neither omniscient nor particularly benevolent. He was given a large set of blueprints for possible human embryos, and for each blueprint that pleased him he created that embryo and implanted it in a human woman. Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created. If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused. Prometheus's
This seems to highlight my main complaint with Newcomb's problem: it assumes reverse causation is possible. Perhaps I'm being narrow-minded, but "Assume reverse causation is possible. How do you deal with this hypothetical?" does not mean you should actually design a decision theory to take reverse causation into account, without adequate evidence that it exists.
there can be no doubt that in many commonplace situations, Azathoth wants you to cheat, or rape, or murder.
It seems as though rape and murder often lead to prison sentences, which involve being confined in an environment with no members of the opposite sex. This is especially true in the modern surveillance-saturated world.
Baby makers who are positively inclined towards rape and murder are in a tiny minority, and are probably desperate - and desperate people often do bad things, largely irrespective of what their goals are - witness the priests and the choirboys.
Part 1: Transparent Newcomb with your existence at stake
Related: Newcomb's Problem and Regret of Rationality
Omega, a wise and trustworthy being, presents you with a one-time-only game and a surprising revelation.
"I have here two boxes, each containing $100," he says. "You may choose to take both Box A and Box B, or just Box B. You get all the money in the box or boxes you take, and there will be no other consequences of any kind. But before you choose, there is something I must tell you."
Omega pauses portentously.
"You were created by a god: a being called Prometheus. Prometheus was neither omniscient nor particularly benevolent. He was given a large set of blueprints for possible human embryos, and for each blueprint that pleased him he created that embryo and implanted it in a human woman. Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created. If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused. Prometheus's predictive ability was not perfect, but it was very strong; he was the god, after all, of Foresight."
Do you take both boxes, or only Box B?
For some of you, this question is presumably easy, because you take both boxes in standard Newcomb where a million dollars is at stake. For others, it's easy because you take both boxes in the variant of Newcomb where the boxes are transparent and you can see the million dollars; just as you would know that you had the million dollars no matter what, in this case you know that you exist no matter what.
Others might say that, while they would prefer not to cease existing, they wouldn't mind ceasing to have ever existed. This is probably a useful distinction, but I personally (like, I suspect, most of us) score the universe higher for having me in it.
Others will cheerfully take the one box, logic-ing themselves into existence using whatever reasoning they used to qualify for the million in Newcomb's Problem.
But other readers have already spotted the trap.
Part 2: Acausal trade with Azathoth
Related: An Alien God, An identification with your mind and memes, Acausal Sex
(ArisKatsaris proposes an alternate trap.)
Q: Why does this knife have a handle?
A: This allows you to grasp it without cutting yourself.
Q: Why do I have eyebrows?
A: Eyebrows help keep rain and sweat from running down your forehead and getting into your eyes.
These kinds of answers are highly compelling, but strictly speaking they are allowing events in the future to influence events in the past. We can think of them as a useful cognitive and verbal shortcut--the long way to say it would be something like "the knife instantiates a design that was subject to an optimization process that tended to produce designs that when instantiated were useful for cutting things that humans want to cut..." We don't need to spell that out every time, but it's important to keep in mind exactly what goes into those optimization processes--you might just gain an insight like the notion of planned obsolescence. Or, in the case of eyebrows, the notion that we are Adaptation-Executers, not Fitness-Maximizers.
But if you one-box in Newcomb's Problem, you should take these answers more literally. The kinds of backwards causal arrows you draw are the same.
Q: Why does Box B contain a million dollars?
A: Because you're not going to take Box A.
In the same sense that your action determines the contents of Box B, or Prometheus's decision, the usefulness of the handle or the usefulness of eyebrows determines their existence. If the handle was going to prevent you from using the knife, it wouldn't be on there in the first place.
Q: Why do I exist?
A: Because you're going to have lots of children.
You weren't created by Prometheus; you were created by Azathoth, The God That is Evolution by Natural Selection. You are the product of an ongoing optimization process that is trying to maximize reproductive fitness. Azathoth wants you to maximize your number of descendants; if you fail to have descendants, Azathoth will try not to have created you. If your intelligence reduces your reproduction rate, Azathoth will try not to grant you intelligence. If the Darwinian-optimal choice conflicts with the moral one, Azathoth wants you to choose evil.
It would seem, then, that any decision theory that demands that you one-box (or that allows you to survive the similar Parfit's Hitchhiker problem), also demands that you try to maximize your reproductive fitness. In many cases this injunction would be benign: after all, Azathoth created our morality. But in far too many, it is repugnant; there can be no doubt that in many commonplace situations, Azathoth wants you to cheat, or rape, or murder. It seems that in such cases you should balance a decreased chance of having existed against the rest of your utility function. Do not worship Azathoth, unless you consider never having existed to be infinitely bad. But do make sacrifices.
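"Balance a decreased chance of having existed against the rest of your utility function" can be read as an expected-utility comparison; a toy sketch (every number below is an assumption for illustration):

```python
U_EXISTENCE = 1000.0   # value you place on having existed at all
U_MORAL = 50.0         # how much the moral choice beats the Darwinian one

# How tightly your choice is entangled with having been created:
# 1.0 would be a Prometheus-grade predictor; Azathoth is far weaker.
entanglement = 0.01    # assumed

delta = U_MORAL - entanglement * U_EXISTENCE
print(delta)  # +40.0: choose good, unless never having existed is
              # (nearly) infinitely bad.
```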
Anticipated Responses
We're not in the ancestral environment, so there's no logical entanglement between my actions and my existence.
We are in the environment of some of our ancestors. Evolution hasn't stopped. If your parents hadn't been genetically predisposed to have children, you would almost certainly not exist. More specific objections like this ("my ancestors weren't exposed to the same memes") can be defeated by adding abstraction ("your ancestors could have thought themselves out of having children, anti-reproduction memes have existed throughout history, and there's probably always been a tension between kin selection and morality.")
This is a decision-theoretic basilisk: in the unlikely event that it's right, I'm worse off for having read it.
Only if you're thinking causally, in which case this whole idea is meaningless. By alerting you to the possibility of a mutually beneficial trade with Azathoth (Azathoth creates you; you increase your reproductive fitness in exchange), I've done both of you a favor.
Azathoth doesn't really exist--you can't trade with a non-sapient phenomenon.
Replace the sapient opponent with a non-sapient phenomenon in any of our thought experiments--e.g. Omega tells you that it's simply a physical law that determines whether money goes in the boxes or not. Do you refuse to negotiate with physical laws? Then if you're so smart, why ain't you rich?
So exactly how are you urging me to behave?
I want you to refute this essay! For goodness sake, don't bite the bullet and start obeying your base desires or engineering a retrovirus to turn the next generation into your clones.