Thanks for mentioning this. I know this wasn't put very nicely.
Imagine you were a very selfish person X who cares only about yourself. If I make a really good copy of X, call it X*, which is then placed 100 meters away from X, then this copy X* only cares about the spatiotemporal dots of what we define as X. Both agents, X and X*, are identical if we formalize their algorithms incorporating indexical information. If we don't do that, then a disparity remains, namely that X* differs from X in that, intrinsically, X only cares about the set of spatiotemporal dots constituting X. ...
It goes the other way round. An excerpt from my post (section "Newcomb's Problem's problem of free will"):
...Perceiving time without an inherent "arrow" is not new to science and philosophy, but still, readers of this post will probably need a compelling reason why this view would be more goal-tracking. Considering Newcomb's Problem, such a reason can be given: intuitively, the past seems much more "settled" to us than the future. But it seems to me that this notion is confounded, as we often know more about the past than we know about the future. This could tempt...
Look, HIV patients who get HAART die more often (because people who get HAART are already very sick). We don't get to see the health status confounder because we don't get to observe everything we want. Given this, is HAART in fact killing people, or not?
It is not that clear to me what we know about HAART in this game. For instance, if we know nothing about it and only observe logical equivalences (in fact, rather probabilistic tendencies) of the form "HAART" <--> "Patient dies (within a specified time interval)" and "...
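To make the confounding worry concrete, here is a toy simulation (all numbers made up) in which HAART lowers mortality within every health stratum, yet correlates with death in the observed data because sicker patients are more likely to receive it:

```python
# Toy model (hypothetical numbers): hidden health status confounds
# the observed association between HAART and death.
import random

random.seed(0)

def simulate(n=100_000):
    observed = []  # (got_haart, died); health status stays hidden to us
    for _ in range(n):
        very_sick = random.random() < 0.5
        # Sicker patients are far more likely to be given HAART.
        got_haart = random.random() < (0.9 if very_sick else 0.1)
        # HAART *reduces* mortality by 0.2 in both strata.
        p_death = (0.7 if very_sick else 0.3) - (0.2 if got_haart else 0.0)
        observed.append((got_haart, random.random() < p_death))
    return observed

data = simulate()
rate = lambda rows: sum(died for _, died in rows) / len(rows)
print("P(death | HAART)    =", round(rate([r for r in data if r[0]]), 3))      # ~0.46
print("P(death | no HAART) =", round(rate([r for r in data if not r[0]]), 3))  # ~0.34
```

So the raw frequencies alone can't settle whether HAART kills; that depends on what we believe about the hidden strata.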
I agree that it is challenging to assign forecasting power to a study, as we're uncertain about lots of background conditions. There is forecasting power to the degree that the set A of all variables involved with previous subjects allows for predictions about the set A' of variables involved in our case. But when we deal with Omega, who is defined to make true predictions, then we need to take this forecasting power into account, no matter what the underlying mechanism is. I mean, what if Omega in Newcomb's Problem was defined to make true predictions an...
If lots of subjects were using CDT or EDT, they would all be choosing ice cream independently of their soda, and we wouldn't see that correlation (except maybe by coincidence). So it doesn't have to be stated in the problem that other subjects aren't using evidential reasoning--it can be seen plainly from the axioms! To assume that they are reasoning as you are is to assume a contradiction.
If lots of subjects were using CDT or EDT, they would be choosing ice cream independently of their soda iff the soda has no influence on whether they argue according ...
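To illustrate one direction of that "iff", here is a toy simulation (mechanism and numbers entirely invented): if the soda can bias *which* decision procedure a subject ends up using, the soda/ice-cream correlation persists even though every subject decides by a procedure. For illustration I arbitrarily assume one procedure ("edt") picks chocolate ice cream and the other ("cdt") picks vanilla.

```python
# Toy sketch: correlation survives iff the soda influences which
# decision procedure a subject uses (assumed mechanism, made-up numbers).
import random

random.seed(1)

def subject(soda, soda_biases_reasoning):
    if soda_biases_reasoning:
        # Chocolate soda makes subjects more likely to reason the "edt" way.
        uses_edt = random.random() < (0.8 if soda == "chocolate" else 0.2)
    else:
        # Procedure choice is independent of the soda.
        uses_edt = random.random() < 0.5
    # Arbitrary illustrative mapping: "edt" -> chocolate, "cdt" -> vanilla.
    return "chocolate" if uses_edt else "vanilla"

for bias in (False, True):
    n = matches = 0
    for _ in range(100_000):
        soda = random.choice(["chocolate", "vanilla"])
        n += 1
        matches += subject(soda, bias) == soda
    print(f"soda biases reasoning={bias}: P(ice cream matches soda)={matches/n:.3f}")
# False -> ~0.5 (no correlation); True -> ~0.8 (correlation persists).
```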
Presumably, if you use E to decide in Newcomb's soda, the decisions of agents not using E are screened off, so you should only calculate the relevant probabilities using data from agents using E.
Can you show where the screening off would apply (like A screens off B from C)?
I claim EDT is irreparably broken on far less exotic problems than Parfit's hitchhiker. Problems like "should I give drugs to patients based on the results of this observational study?"
This seems to be a matter of screening off. Once we decide not to prescribe the drug because of evidential reasoning, we don't learn anything new about the health of the patient. I would withhold the drug only if a credible instance with forecasting power (for instance, Omega) showed me that generally healthy patients (who show suspicious symptoms) go to doctors who...
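A minimal numerical sketch of the screening-off claim (all probabilities invented): once we condition on the fact that the prescription comes from a fixed decision procedure R, rather than from a doctor who saw the patient's hidden state, the act A carries no further evidence about health H.

```python
# Screening off: P(H | R) == P(H | R, A) when A is a fixed function of R.
import itertools

# P(A | H, R): under the fixed "edt" policy the action ignores health;
# the study's doctors ("other") prescribed more often to the sick.
def p_action(a, h, r):
    if r == "edt":
        return 1.0 if a == "prescribe" else 0.0
    table = {("sick", "prescribe"): 0.9, ("sick", "withhold"): 0.1,
             ("healthy", "prescribe"): 0.2, ("healthy", "withhold"): 0.8}
    return table[(h, a)]

p_h = {"sick": 0.3, "healthy": 0.7}   # prior over hidden health H
p_r = {"edt": 0.5, "other": 0.5}      # who decides: fixed policy or doctor

# Joint distribution P(H, R, A) = P(H) * P(R) * P(A | H, R)
joint = {(h, r, a): p_h[h] * p_r[r] * p_action(a, h, r)
         for h, r, a in itertools.product(p_h, p_r, ("prescribe", "withhold"))}

def p_sick(cond):
    rows = {k: v for k, v in joint.items() if cond(*k)}
    return sum(v for (h, _, _), v in rows.items() if h == "sick") / sum(rows.values())

print(p_sick(lambda h, r, a: r == "edt"))                       # 0.3
print(p_sick(lambda h, r, a: r == "edt" and a == "prescribe"))  # 0.3 again
```

Here R screens off A from H: conditional on my decision procedure, my act tells me nothing new about the patient.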
My comment above strongly called into question whether CDT gives the right answers. Therefore I wouldn't try to reinvent CDT in a different language. For instance, in the post I suggest that we should care about "all" the outcomes, not only the one happening in the future. I first read about this idea in Paul Almond's paper on decision theory. An excerpt that might be of interest:
...Suppose the universe is deterministic, so that the state of the universe at any time completely determines its state at some later time. Suppose at the present...
In the post you can read that I am not endorsing plain EDT, as it seems to lose in problems like Parfit's hitchhiker or counterfactual mugging. But in other games, for instance in Newcomblike problems, the fundamental trait of evidential reasoning seems to give the right answers (as long as one knows how to apply conditional independencies!). It sounds to me like a straw man to claim that EDT must give foolish answers (passing over crucial issues like screening off, so that, for instance, we should waste money to make it more likely that we are rich if we don't know our...
Assuming i), I would rather say that when Omega tells me that if I choose carrots I'll have a heart attack, then almost certainly I'm not in a freak world but in a "normal" world where there is a causal mechanism (as common sense would call it). But the point stands that no causal mechanism is necessary for c) to be true and for the game to be coherent. (Again, this point only stands as long as one's definition of causal mechanism excludes the freak case.)
I think I agree. But I would formulate it differently:
i) Omega's predictions are true.
ii) Omega predicts that carrot-choosers have heart attacks.
c) Therefore, carrot-choosers have heart attacks.
As soon as you accept i), c) follows once we add ii). I don't know how you define "causal mechanism", but I can imagine a possible world where no biological mechanism connects carrot-choosing with heart attacks, yet where "accidentally" all the carrot-choosers have heart attacks (Let's imagine running worlds on a computer countless times. One day we ...
Those who pick a carrot after hearing Omega's prediction, or without hearing the prediction? Those are two very different situations, and I am not sure which one you meant.
That's a good point. I agree with you that it is crucial to keep those two situations apart. This is exactly what I was trying to address concerning Newcomb's Problem and Newcomb's Soda. What do the agents (previous study-subjects) know? It seems to me that the games aren't defined precisely enough.
Once we specify a game so that all the agents hear Omega's prediction (like in New...
I think it is more an attempt to show that a proper use of updating results in evidential reasoners giving the right answers in Newcomblike problems. Furthermore, it is an attempt to show that the medical version of Solomon's Problem and Newcomb's Soda aren't defined precisely enough, since it is not clear what the study-subjects were aware of. Another part tries to show that people get confused when thinking about Newcomb's Problem because they use a dated perception of time as well as a problematic notion of free will.
Thanks for the comment!
However, in the A,B-Game we assume that a specific gene makes people presented with two options choose the worse one -- please note that I have not mentioned Omega in this sentence yet! So the claim is not that Omega is able to predict something, but that the gene can determine something, even in the absence of Omega. It's no longer about Omega's superior human-predicting powers; Omega is there merely to explain the powers of the gene.
I think there might be a misunderstanding. Although I don't believe it to be impossible tha...
Correlation by itself without known connecting mechanisms or relationships does not imply causation.
The Bayesian approach would suggest that we assign a causation-credence to every correlation we observe. Of course, detecting confounders is very important, since it provides you with updates. However, a correlation without known connecting mechanisms does imply causation probabilistically. A Bayesian updater would prefer to talk about credences in causation, which can be shifted up and down. It would be a (sometimes dangerous)...
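A minimal sketch of that update, with made-up numbers: treat "X causes Y" as a hypothesis and an observed correlation as evidence that shifts its credence by Bayes' rule.

```python
# Bayes' rule: an observed correlation raises the credence in a causal
# link without forcing it to 1 (all probabilities here are invented).
def update(prior, p_corr_given_causal, p_corr_given_not):
    """Posterior P(causal | correlation observed)."""
    num = prior * p_corr_given_causal
    return num / (num + (1 - prior) * p_corr_given_not)

prior = 0.10        # credence in a causal link before seeing data
p_if_causal = 0.90  # a real link would very likely show up as correlation
p_if_not = 0.20     # confounding/coincidence can also produce correlation

print(update(prior, p_if_causal, p_if_not))  # ~0.33: shifted up, not to 1
```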
There is no contradiction in rejecting total utilitarianism and choosing torture.
For one thing, I compared choosing torture with the repugnant conclusion, not with total utilitarianism. For another, I didn't claim there was any contradiction. However, agents with intransitive dispositions are exploitable.
...You can also say, descriptively, that refusing total utilitarianism because of the repugnant conclusion is structurally equivalent to refusing deontology because we've realised that two deontological absolutes can contradict each other. Or, mor...
I generally don't see why the conclusion is considered repugnant not only as a gut reaction but also upon reflection, since we are simply dealing with another case of "dust specks vs. torture", an example that illustrates how our limbic system is not adapted to scale up emotions linearly and so cannot prevent intransitive dispositions.
We can imagine a world in which evolutionary mechanisms brought forth human brains that by some sort of limbic limitation simply cannot imagine the integer "17", whereas all the other n...
Hi! I've been lurking around on the blog. I look forward to engaging actively from now on. Generally, I'm strongly interested in AI research, rationality in general, Bayesian statistics, and decision problems. I hope that I will keep on learning a lot and will also contribute useful insights to this community, as what people here are trying to do is very valuable! So, see you on the "battlefield". Hi to everyone!
I agree. It seems to me that the special feature of Newcomb's Problem is that actions "influence" states, and that this is the reason why the dominance principle alone doesn't give the right answer. The same applies to this game. Your action (sim or not sim) determines the probability of which agent you have been all along and therefore "influences" the states of the game, whether you are X or X*. Many people dislike this use of the word "influence", but I think there are some good reasons in favour of a broader use of it (e.g. quantum entanglement).