I don't understand why the Smoking Lesion is a problem for evidential decision theory. I would simply accept that in the scenario given, you shouldn't smoke. And I don't see why you assert that this doesn't lessen your chances of getting cancer, except in the same sense that two-boxing doesn't lessen your chances of getting the million.
I would just say: in the scenario given, you should not smoke, and this will improve your chances of not getting cancer.
If you doubt this, consider if the correlation were known to be 100%; every person who ever smoked up til...
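The expected-utility comparison this comment is describing can be sketched in a few lines. Every number below (probabilities, utilities) is a made-up placeholder chosen only to illustrate the structure of the EDT calculation, not anything from the thread:

```python
# Sketch of the evidential decision theory (EDT) calculation for the
# Smoking Lesion. All probabilities and utilities are hypothetical.

# Conditional probabilities induced by the lesion's correlation with smoking:
p_cancer_given_smoke = 0.8     # hypothetical
p_cancer_given_abstain = 0.1   # hypothetical

U_SMOKING = 10    # pleasure of smoking (hypothetical utility)
U_CANCER = -100   # disutility of cancer (hypothetical utility)

def edt_expected_utility(p_cancer: float, smokes: bool) -> float:
    """EDT scores an act by the utility expected conditional on choosing it."""
    return (U_SMOKING if smokes else 0) + p_cancer * U_CANCER

eu_smoke = edt_expected_utility(p_cancer_given_smoke, smokes=True)
eu_abstain = edt_expected_utility(p_cancer_given_abstain, smokes=False)

print(eu_smoke)    # 10 + 0.8 * -100 = -70.0
print(eu_abstain)  #  0 + 0.1 * -100 = -10.0
# Abstaining has the higher conditional expected utility, so with these
# numbers EDT recommends not smoking -- the conclusion the comment accepts.
```

With a 100% correlation the effect only gets starker: `p_cancer_given_smoke` goes to 1, and smoking's conditional expected utility drops accordingly.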
I see you've moved this discussion off-site. FWIW, I commend you for trying to organize the various decision theory issues into a more accessible and organized sequence. I'd like to suggest that you take some of this and use it to improve the (almost comically sparse) decision theory articles on the LW Wiki. If that's really going to be the go-to place for LW knowledge, your efforts to summarize and present this info could really be useful there, and any redundancy with existing blog posts would be a non-issue.
I'm confused as to why you said you weren't continuing this on Less Wrong, then posted it on Less Wrong.
I've read the smoking lesion thing before, and what occurred to me is that even under EDT, the reasoning there is wrong. What I mean is that one shouldn't simply reason it out by comparing to the average stats, but take into account the fact that they're using EDT itself. I.e., they should ask "given that a person is using EDT, then what's the correlation between etc etc..."
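The point above can be made concrete with Bayes' rule. In the sketch below, all names and numbers are hypothetical: suppose the lesion drives the choice in ordinary agents, but an EDT agent's choice comes from deliberation, independent of the lesion. Then conditioning on "this person uses EDT" changes the relevant correlation:

```python
# Hypothetical illustration: P(lesion | smoke) computed separately for
# ordinary agents (whose choice tracks the lesion) and EDT agents (whose
# choice comes from deliberation, independent of the lesion).

p_lesion = 0.2  # hypothetical base rate of the lesion

def p_lesion_given_smoke(p_smoke_given_lesion: float,
                         p_smoke_given_no_lesion: float) -> float:
    """Bayes' rule: P(lesion | smoke) from the two likelihoods."""
    num = p_smoke_given_lesion * p_lesion
    return num / (num + p_smoke_given_no_lesion * (1 - p_lesion))

# Ordinary agents: the lesion (via a craving, say) almost determines smoking.
ordinary = p_lesion_given_smoke(0.95, 0.05)   # roughly 0.83

# EDT agents: deliberation outputs the choice independently of the lesion,
# so the likelihoods are equal and the correlation disappears.
edt = p_lesion_given_smoke(0.5, 0.5)          # back to the 0.2 base rate
```

So if an agent conditions on the fact that the decision is being produced by EDT reasoning rather than by the lesion, smoking stops being evidence of having the lesion, which is the correction the comment is suggesting.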
Worth referencing:
The Smoking Lesion on the wiki.
Timeless Decision Theory and Meta-Circular Decision Theory, where Eliezer discusses this problem (among others)
(By the way, your blog has some interesting posts!)
If we assume determinism, however, we might say this about any decision.
Not really. The lesion is a single aspect that completely determines a decision.
For most decisions, far more of the brain/mind than just one small, otherwise irrelevant, part can have some influence on the outcome.
But the lesion is clearly different, IF it has a 100% correlation.
Acknowledging some kind of fatalism is one thing, but injecting it into the middle of our decision processes seems to me to be asking for trouble.
When making a decision on something where I know my thought-process is irrelevant, why should I not be fatalistic? There is no decision-making process in the 100%-lesion case, the decision is MADE, it's right there in the lesion.
EDIT: Here's something analogous to the 100% lesion: you have a light attached to your head. If it blinks red, it'll make you feel happy, but it'll blow up in an hour. It's not linked to the rest of your brain at all. Should you try and make a decision about whether to have it blink red?
There is no decision-making process in the 100%-lesion case, the decision is MADE, it's right there in the lesion.
There is no decision-making process anyway, every decision is made, it's right there in the frontal/temporal/occipital/parietal lobe, right?
Here's something analogous to the 100% lesion: you have a light attached to your head. If it blinks red, it'll make you feel happy, but it'll blow up in an hour. It's not linked to the rest of your brain at all. Should you try and make a decision about whether to have it blink red?
The red light blink...
This is part of a sequence titled "An introduction to decision theory". The previous post was Newcomb's Problem: A problem for Causal Decision Theories
For various reasons I've decided to finish this sequence on a separate blog. This is principally because a large number of people seemed to feel that this sequence either wasn't up to the Less Wrong standard or was simply covering ground that had already been covered on Less Wrong.
The decision to post it on another blog rather than simply discontinuing it came down to the fact that other people seemed to feel that the sequence had value. Those people can continue reading it at "The Smoking Lesion: A problem for evidential decision theory".
Alternatively, there is a sequence index available: Less Wrong and decision theory: sequence index