I think it depends on how much you're willing to ask counterfactuals to do.
In the paper Anthropic Decision Theory for Self-Locating Agents, Stuart Armstrong says "ADT is nothing but the anthropic version of the far more general Updateless Decision Theory and Functional Decision Theory" -- suggesting that he agrees with the idea that a proposed solution to counterfactual reasoning gives a proposed solution to anthropic reasoning. The overall approach of that paper is to side-step the issue of assigning anthropic probabilities, instead addressing the question of how to make decisions in cases where anthropic questions arise. I suppose this might be said either to "solve anthropics" or to "side-step anthropics", and this choice would determine whether one took Stuart's view to answer "yes" or "no" to your question.
Stuart mentions in that paper that agents making decisions via CDT+SIA tend to behave the same as agents making decisions via EDT+SSA. This can be seen formally in Jessica Taylor's post about CDT+SIA in memoryless Cartesian environments, and Caspar Oesterheld's comment about the parallel for EDT+SSA. The post discusses the close connection to pure UDT (with no special anthropic reasoning). Specifically, CDT+SIA (and EDT+SSA) are consistent with the optimality notion of UDT, but don't imply it (UDT may do better, according to its own notion of optimality). This is because UDT (specifically, UDT 1.1) looks for the best solution globally, whereas CDT+SIA can have self-coordination problems (like hunting rabbit in a game of stag hunt with identical copies of itself).
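To make the coordination point concrete, here is a minimal sketch (my own toy payoffs, not from the post) of a stag hunt between two identical copies: optimizing the shared policy globally picks stag, while a copy reasoning locally, with the other copy's action held fixed, can also settle on rabbit.

```python
# Toy payoffs (my own numbers) for a stag hunt between two identical copies.
PAYOFF = {  # (my action, other copy's action) -> my payoff
    ("stag", "stag"): 3, ("stag", "rabbit"): 0,
    ("rabbit", "stag"): 1, ("rabbit", "rabbit"): 1,
}
ACTIONS = ("stag", "rabbit")

# Global (UDT 1.1-style) view: both copies run the same policy,
# so only the diagonal outcomes are reachable; pick the best one.
print(max(ACTIONS, key=lambda a: PAYOFF[(a, a)]))  # "stag"

# Local view: which actions are best responses to themselves, i.e. stable
# if each copy holds the other's action fixed? Both are, so a locally
# reasoning pair can get stuck hunting rabbit.
print([a for a in ACTIONS
       if PAYOFF[(a, a)] >= max(PAYOFF[(b, a)] for b in ACTIONS)])  # ['stag', 'rabbit']
```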
You could see this as giving a relationship between two different notions of counterfactual, with anthropic reasoning mediating the connection.
CDT and EDT are two different ways of reasoning about the consequences of actions. Both of them are "updateful": they make use of all information available in estimating the consequences of actions. We can also think of them as "local": they make decisions from the situated perspective of an information state, whereas UDT makes decisions from a "global" perspective considering all possible information states.
I would claim that global counterfactuals have an easier job than local ones, if we buy the connection between the two suggested here. Consider the transparent Newcomb problem: you're offered a very large pile of money if and only if you're the sort of agent who takes most, but not all, of the pile. It is easy to say from an updateless (global) perspective that you should be the sort of agent who takes most of the money. It is more difficult to face the large pile (an updateful/local perspective) and reason that it is best to take most-but-not-all; your counterfactuals have to say that taking all the money doesn't mean you get all the money. The idea is that you have to be skeptical of whether you're in a simulation; ie, your counterfactuals have to do anthropic reasoning.
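A minimal sketch of the "easy from the global perspective" half of this, with my own toy numbers: evaluate whole policies against the predictor's criterion and the answer falls out immediately. The hard part, as above, is recovering the same answer locally while staring at the pile.

```python
# Toy transparent-Newcomb numbers (my own, for illustration): the predictor
# offers the big pile only to agents whose policy is "take most, not all".
PILE = 1_000_000
LEFT_BEHIND = 1_000  # what a "take most" agent leaves on the table

def policy_payoff(policy: str) -> int:
    pile_is_offered = (policy == "take_most")  # predictor's criterion
    if not pile_is_offered:
        return 0                   # "take all" agents never see the pile
    return PILE - LEFT_BEHIND      # "take most" agents get it, minus the bit left behind

# Evaluating whole policies (the updateless/global perspective):
print(max(["take_all", "take_most"], key=policy_payoff))  # "take_most"
```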
In other words: you could factor the whole problem of logical decision theory in two different ways. Option 1: a global, updateless notion of counterfactual, evaluated from outside any particular information state. Option 2: a local, updateful notion of counterfactual which itself handles the anthropic reasoning.
With option 1, you side-step anthropic reasoning. With option 2, you have to tackle it explicitly. So, you could say that in option 1, you solve anthropic reasoning for free if you solve counterfactual reasoning; in option 2, it's quite the opposite: you might solve counterfactual reasoning by solving anthropic reasoning.
Recently, I've become more optimistic about option 2. I used to think that maybe we could settle for the most basic possible notion of logical counterfactual, ie, evidential conditionals, if combined with logical updatelessness. However, a good logically updateless perspective has proved quite elusive so far.
Anyway, this answers one of the key questions: whether it is worth working on anthropics or not. I put some time into reading about it (hopefully I get time to pick up Bostrom's book again at some point), but I got discouraged when I started wondering if the work on logical counterfactuals would make this all irrelevant. Thanks for clarifying this. Anyway, why do you think the second approach is more promising?
"The idea is that you have to be skeptical of whether you're in a simulation" - I'm not a big fan of that framing, though I suppose it's okay if you're clear that it is an analogy. Firstly, I think it is cleaner to seperate issues about whether simulations have consciousness or not from questions of decision theory given that functionalism is quite a controversial philosophical assumption (even though it might be taken for granted at MIRI). Secondly, it seems as though that you might be able to perfectly predict someone from h...
There is a "natural reference class" for any question X: it is everybody who asks the question X.
In the case of classical anthropic questions like the Doomsday Argument, such reasoning is very pessimistic: the class of people who know about the DA has existed only a short time, so its end should be very soon.
Members of the natural reference class could bet on the outcome of X, but the betting result depends on the betting procedure. If the betting outcome doesn't depend on the degree of truth (I am either right or wrong), then we get weird anthropic effects.
Such weird anthropic reasoning is net-winning in betting: the majority of the members of the DA-aware reference class do not live at the beginning of the world, so the DA can be used to predict the end of the world.
If we take into account the edge cases, which produce badly wrong results, they compensate for the net winning.
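A minimal sketch of the "net winning" point (my own illustration, with an arbitrary 3x threshold): if every member of the reference class bets that the class's total size will be under three times their own birth rank, roughly two thirds of them win no matter how large the class turns out to be, while the earliest members (the edge cases) lose badly.

```python
# My own illustration (arbitrary 3x threshold): each member of rank r bets
# that the total class size N will satisfy N < 3 * r. Whatever N is, roughly
# the last two thirds of members win this bet; the earliest third lose.
def fraction_of_winners(total_members: int) -> float:
    wins = sum(1 for rank in range(1, total_members + 1)
               if total_members < 3 * rank)
    return wins / total_members

for n in (10, 1_000, 1_000_000):
    print(n, fraction_of_winners(n))  # roughly two thirds in every case
```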
This supposedly "natural" reference class is full of weird edge cases, in the sense that I can't write an algorithm that finds "everybody who asks the question X". Firstly "everybody" is not well defined in a world that contains everything from trained monkeys to artificial intelligence's. And "who asks the question X" is under-defined as there is no hard boundary between a different way of phrasing the same question and slightly different questions. Does someone considering the argument in chinese fall int...
I had that idea at first, but of the people asking the question, only some of them actually know how to do anthropics. Others might be able to ask the anthropic question, but have no idea how to solve it, so they throw up their hands and ignore the entire issue, in which case it is effectively the same as them never asking it in the first place. Others may make an error in their anthropic reasoning which you know how to avoid; similarly, they aren't in your reference class because their reasoning process is disconnected from yours. Whenever you make a decision, you are implicitly making a bet. Anthropic considerations alter how the bet plays out, and in so far as you can account for this, you can account for anthropics.
No. If there's a coinflip that determines whether an identical copy of me is created tomorrow, my ability to perfectly coordinate the actions of all copies (logical counterfactuals) doesn't help me at all with figuring out if I should value the well-being of these copies with SIA, SSA or some other rule.
That sounds a lot like Stuart Armstrong's view. I disagree with it, although perhaps our differences are merely definitional rather than substantive. I consider questions of morality or axiology separate from questions of decision theory. I believe that the best way to model things is such that agents only care about their own overall utility function; however, this is a combination of direct utility (utility the agent experiences directly) and indirect utility (value assigned by the agent to the overall world state excluding the agent, but including other agents). So from my perspective this falls outside of the question of anthropics. (The only cases where this breaks down are Evil Genie-like problems where there is no clear referent for "you".)
"I consider questions of morality or axiology separate from questions of decision theory."
The claim is essentially that the specification of the anthropic principles an agent follows belongs to axiology, not decision theory. That is, the orthogonality thesis applies to the distinction, so that different agents may follow different anthropic principles in the same way that different stuff-maximizers may maximize different kinds of stuff. Some things discussed under the umbrella of "anthropics" seem relevant to decision theory, such as being able to function with most anthropic principles, but not, say, the choice between SIA and SSA.
(I somewhat disagree with the claim, as structuring values around instances of agents doesn't seem natural; maps/worlds are more basic than agents. But that is disagreement with emphasizing the whole concept of anthropics, perhaps even with emphasizing agents, not with where to put the concepts between axiology and decision theory.)
Hmm... interesting point. I've briefly skimmed Stuart Armstrong's paper, and the claim that different moralities end up as different anthropic theories (assuming that you care about all of your clones) seems to mistake a cool calculational trick for something with deeper meaning, which does not automatically follow without further justification.
On reflection, what I said above doesn't perfectly capture my views. I don't want to draw the boundary so that anything in axiology is automatically not a part of anthropics. Instead, I'm just trying to abstract out questions about how desirable other people and states of the world are, so that we can just focus on building a decision theory on top of this. On the other hand, I consider axiology relevant in so far as it relates directly to "you".
For example, in Evil Genie-like situations, you might find out that if you had chosen A instead of B, it would have contradicted your existence, and the task of trying to value this seems relevant to anthropics. And I still don't know precisely where I stand on these problems, but I'm definitely open to the possibility that this is orthogonal to other questions of value. PS. I'm not even sure at this stage whether Evil Genie problems most naturally fall into anthropics or a separate class of problems.
I also agree that structuring values around instances of agents seems unnatural, but I'd suggest discussing agent-instances instead of maps/worlds.
I'll register in advance that this story sounds too simplistic to me (I may add more detail later), but I suspect this question will be a good stimulus for kicking off a discussion.
From an agent's first-person perspective there is no reference class for himself, i.e. he is the only member of his reference class. A reference class containing multiple agents only exists if we employ an outsider view.
When Beauty wakes up in the experiment she can tell it is "today" and that she's experiencing "this awakening". That is not because she knows any objective differences between "today" and "the other day", or between "this awakening" and "the other awakening". It is because from her perspective "today" and "this awakening" are most immediate to her subjective experience, which makes them inherently unique and identifiable. She doesn't need to consider the other day(s) to specify today. "Today" is in a class of its own to begin with. But if we reason as an objective outsider and do not use any perspective center in our logic, then neither of the two days is inherently unique. To specify one of the two would require a selection process. For example, a day can be specified as "the earlier day of the two", "the hotter day of the two", or the old-fashioned "the randomly selected day of the two". (An awakening can similarly be specified among all awakenings in the same way.) It is this selection process from the outsider view that defines the reference class.
Paradoxes happen when we mix reasoning from the first-person perspective and the outsider's perspective in the same logical framework. "Today" becomes uniquely identifiable while at the same time belonging to a reference class of multiple days. The same can be said about "this awakening". This difference leads to the debate between SIA and SSA.
The importance of perspectives also means that when using a betting argument we need to repeat the experiment from the perspective of the agent as well. It also means that from an agent's first-person perspective, if his objective is simply to maximize his own utility, no other agent's decision needs to be considered.
Ok, imagine a Simple Beauty problem without the coin toss: she wakes up on both Monday and Tuesday. When she wakes up, she knows that it is "today", but "today" is an unknown variable which could be either Monday or Tuesday, and she doesn't know the day.
In that case she (or I in her place) will still use the reference-class logic to get a probability of 0.5 for Tuesday.
In this case Beauty still shouldn't use the reference-class logic to assign a probability of 0.5. I argue that in the Sleeping Beauty problem the probability of "today" being Monday/Tuesday is an incoherent concept, so it does not exist. To ask this question we must specify a day from the view of an outsider, e.g. "what's the probability that the hotter day is Monday?" or "what is the probability that the randomly selected day among the two is Monday?".
Imagine you participate in a cloning experiment. At night, while you are sleeping, a highly accurate clone of you with indistinguishable memory is created in an identical room. When you wake up there is no way to tell whether you are the old or the new one. It might be tempting to ask "what's the probability of 'me' being the clone?" I would guess your answer is 0.5 as well. But you can repeat the same experiment as many times as you want: fall asleep, let another clone of you be created, and wake up again. Each time you wake up you can easily tell "this is me", but there is no reason to expect that across these repetitions the "me" would be the new clone about half the time. In fact there is no reason the relative frequency of me being the clone would converge to any value as the number of repetitions increases. However, if instead of this first-person concept of "me" we use an outsider's specification, then the question is easily answerable. E.g. what is the probability that the randomly chosen version among the two is the clone? The answer is obviously 0.5. If we repeat the experiments and each time let an outsider randomly choose a version, then the relative frequency will obviously approach 0.5 as well.
On a side note, this also explains why double-halving is not un-Bayesian.
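A minimal sketch of the outsider half of the point above (my own, not from the comment): the outsider's question has a well-defined long-run frequency precisely because each repetition includes an explicit selection step, which the first-person "me" lacks.

```python
import random

# My own illustration: the outsider's question "is the randomly chosen
# version the clone?" comes with an explicit selection step, so its
# relative frequency is well defined and converges to 0.5.
trials = 100_000
clone_picked = sum(random.choice(("original", "clone")) == "clone"
                   for _ in range(trials))
print(clone_picked / trials)  # ~0.5

# The first-person "me" involves no such selection step, which is why
# (on this view) no analogous long-run frequency is defined for it.
```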
If the original is somehow privileged over its copies, then his "me" statistics will be different from the copies' statistics.
Not sure if I'm following. I don't see in what way the original is privileged over its copies. In each repetition, after waking up I could be the newly created clone, just like in the first experiment. The only privileged concepts are those due to my first-person perspective, such as here, now, this, or the "me" based on my subjective experience.
I would say that the concept of probability works fine in anthropic scenarios, or at least there is a well-defined number that is equal to probability in non-anthropic situations. This number is assigned to "worlds as a whole". Sleeping Beauty assigns 1/2 to heads and 1/2 to tails, and can't meaningfully split the tails case depending on the day. Sleeping Beauty is a functional decision theory agent. For each action A, they consider the logical counterfactual that the algorithm they are implementing returned A, then calculate the world's utility in that counterfactual. They then return whichever action maximizes utility.
In this framework, "which version am I?" is a meaningless question, you are the algorithm. The fact that the algorithm is implemented in a physical substrate give you means to affect the world. Under this model, whether or not your running on multiple redundant substrates is irrelivant. You reason about the universe without making any anthropic updates. As you have no way of affecting a universe that doesn't contain you, or someone reasoning about what you would do, you might as well behave as if you aren't in one. You can make the efficiency saving of not bothering to simulate such a world.
You might, or might not, have an easier time affecting a world that contains multiple copies of you.
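A minimal sketch of the decision rule described above, with an assumed toy bet (my own, not from the comment): at every awakening Beauty may pay $0.60 for a ticket worth $1 if the coin landed tails. Probability lives on whole worlds (1/2 each), and the tails world counts the ticket twice because the algorithm runs at two awakenings there.

```python
# Assumed toy bet (my own numbers): at each awakening Beauty may pay $0.60
# for a ticket that pays $1 if the coin landed tails.
TICKET_PRICE = 0.60
PAYOUT_IF_TAILS = 1.00

def world_utility(world: str, action: str) -> float:
    """Utility of a whole world in the counterfactual where Beauty's
    algorithm outputs `action` at every awakening in that world."""
    if action == "decline":
        return 0.0
    awakenings = 1 if world == "heads" else 2   # tails world runs the algorithm twice
    payout = PAYOUT_IF_TAILS if world == "tails" else 0.0
    return awakenings * (payout - TICKET_PRICE)

def decide() -> str:
    worlds = {"heads": 0.5, "tails": 0.5}       # probability over whole worlds
    def expected(action: str) -> float:
        return sum(p * world_utility(w, action) for w, p in worlds.items())
    return max(("buy", "decline"), key=expected)

print(decide())  # "buy": 0.5*(-0.60) + 0.5*2*(1.00-0.60) = 0.10 > 0
```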
"I would say that the concept of probability works fine in anthropic scenarios" - I agree that you can build a notion of probability on top of a viable anthropic decision theory. I guess I was making two points a) you often don't need to b) there isn't a unique notion of probability, but it depends on the payoffs (which disagrees with what you wrote, although the disagreement may be more definitional than substantive)
"As you have no way of affecting a universe that doesn't contain you, or someone reasoning about what you would do, you might as well behave as if you aren't in one" - anthropics isn't just about existence/non-existence. Under some models there will be more agents experiencing your current situation.
"You might, or might not have an easier time effecting a world that contains multiple copies of you" - You probably can, but this is unrelated to anthropics
One of the key problems with anthropics is establishing the appropriate reference class. When we attempt to calculate a probability accounting for anthropics, do we consider all agents or all humans or all humans who understand decision theory?
"If a tree falls on Sleeping Beauty" argues that probability is not ontologically basic and that the "probability" depends on how you count bets. In this vein, one might attempt to solve anthropics by asking whose decision to take a bet is linked to yours. You could then count up all the linked agents who observe A and all those who observe not-A, and then calculate the expected value of the bet. More generally, if you can solve bets, my intuition is that you can answer any other question you would like about the decision by reframing it as a bet.
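A minimal sketch of that counting approach, with my own toy numbers (a Sleeping-Beauty-flavoured setup where A = "tails"): tally the linked agents in A-worlds and not-A-worlds, weight by the prior over worlds, and the break-even stake is the "probability", which shifts with which agents you count as linked, i.e. with the reference class.

```python
def bet_ev(worlds, stake, payout):
    """worlds: list of (prior, A holds?, number of linked agents who take the bet).
    Returns the expected total payoff summed over the linked agents."""
    return sum(prior * n_linked * ((payout - stake) if a_holds else -stake)
               for prior, a_holds, n_linked in worlds)

# Toy numbers (mine): one linked awakening in the heads-world, two in the tails-world.
worlds = [(0.5, False, 1), (0.5, True, 2)]
print(bet_ev(worlds, stake=0.60, payout=1.00))  # 0.5*(-0.6) + 0.5*2*0.4 = 0.10
print(bet_ev(worlds, stake=0.70, payout=1.00))  # 0.5*(-0.7) + 0.5*2*0.3 = -0.05
# The break-even stake here is 2/3; counting a different set of linked agents
# (a different reference class) would shift it.
```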