Which is giving me hope that this entire problem is logically incoherent.
It's not that big a problem. Just make it so he makes you less happy.
I created this post because I wanted help making sure I don't have some weird acausal obligation to tile the universe with people...
As opposed to the perfectly normal causal obligation to tile the universe with people that we utilitarians have?
... if I'm ever able to do so.
You are at least able to have kids, and since you take after your parents, by trying to have kids you'd acausally decide for them to create you.
The way I framed it originally, your only choices were to create him or not.
I don't think we do. I think utilitarians have an obligation to create more people, but not a really large number of them. I think the counterintuitive implications of total and average utilitarianism are caused by the fact that having high total and high average levels of utility are both good things, and that ...
I was recently reading about the Transparent Newcomb with your Existence at Stake problem, which, to make a long story short, states that you were created by Prometheus, who foresaw that you would one-box on Newcomb's problem and wouldn't have created you if he had foreseen otherwise. The implication is that you might need to one-box just to exist. It's a disturbing problem, and as I read it, another, even more disturbing problem started to form in my head. However, I'm not sure it's logically coherent (I'm really hoping it's not) and wanted to know what the rest of you thought. The problem goes:
One day you start thinking about a hypothetical, nonexistent person named Bob, who is a real jerk. If he existed, he would make your life utterly miserable. However, if he existed, he would also want to make a deal with you: if he ever found himself existing in a universe where you had never existed, he would create you, on the condition that if you ever found yourself existing in a universe where he had never existed, you would create him. Hypothetical Bob is very good at predicting the behavior of other people, not quite Omega quality, but pretty darn good. Assume for the sake of the argument that you like your life and enjoy existing.
At first you dismiss the problem because of technical difficulties. Science hasn't advanced to the point where we can make people with such precision. Plus, there is a near-infinite number of far nicer hypothetical people who would make the same deal; when science reaches that point, you should give priority to creating them.
But then you see Omega drive by in its pickup truck. A large complicated machine falls off the back of the truck as it passes you by. Written on it, in Omega's handwriting, is a note that says "This is the machine that will create Bob the Jerk, a hypothetical person that [insert your name here] has been thinking about recently, if one presses the big red button on the side." You know Omega never lies, not even in notes to itself.
Do Timeless Decision Theory and Updateless Decision Theory say you have a counterfactual obligation to create Bob the Jerk, the same way you have an obligation to pay Omega in the Counterfactual Mugging, and the same way you might (I'm still not sure about this) have an obligation to one-box when dealing with Prometheus? Does this in turn mean that, once we develop the ability to create people from scratch, we should tile the universe with people who would make the counterfactual deal? Obviously it's that last implication that disturbs me.
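To make the question a little more concrete, here is a toy sketch of how an updateless-style comparison of the two policies might be set up. It's only an illustration: the utility values, the prior over whether you or Bob exists "first", and Bob's predictive accuracy are all numbers I made up, and the calculation is a caricature rather than a claim about what UDT actually prescribes.

```python
# Toy sketch (not part of the problem statement): score the two policies from
# a prior perspective, before knowing which branch you are in. All numbers are
# made-up placeholders.

U_EXIST_HAPPY = 10.0      # you exist and Bob never does
U_EXIST_MISERABLE = 2.0   # you exist but Bob makes your life miserable
U_NEVER_EXIST = 0.0       # you are never created at all

P_BOB_FIRST = 0.5         # assumed prior that the universe starts with Bob instead of you
BOB_ACCURACY = 0.9        # chance Bob correctly predicts your policy ("pretty darn good")


def policy_value(create_counterpart: bool) -> float:
    """Evaluate a policy across both the 'you first' and 'Bob first' branches."""
    if create_counterpart:
        # You exist first: you press the button, so Bob exists and you are miserable.
        you_first = U_EXIST_MISERABLE
        # Bob exists first: he creates you only when he correctly predicts this policy.
        bob_first = BOB_ACCURACY * U_EXIST_MISERABLE + (1 - BOB_ACCURACY) * U_NEVER_EXIST
    else:
        # You exist first: you refuse, so Bob never exists and you keep your pleasant life.
        you_first = U_EXIST_HAPPY
        # Bob exists first: if he correctly predicts your refusal, he never creates you.
        bob_first = (1 - BOB_ACCURACY) * U_EXIST_MISERABLE + BOB_ACCURACY * U_NEVER_EXIST
    return (1 - P_BOB_FIRST) * you_first + P_BOB_FIRST * bob_first


print("commit to creating Bob:", policy_value(True))   # 1.9 with these numbers
print("refuse:                ", policy_value(False))  # 5.1 with these numbers
```

With these particular numbers refusing comes out ahead, but changing the placeholder utilities or the prior can flip the result, which is part of why I can't tell on my own whether the counterfactual obligation is real.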
I can think of multiple reasons why it might not be rational to create Bob the Jerk:
Are any of these objections valid, or am I just grasping at straws? I find the problem extremely disturbing because of its wider implications, so I'd appreciate it if someone with a better grasp of UDT and TDT analyzed it. I'd very much like to be refuted.