fbacus
fbacus has not written any posts yet.

Yeah, you're correct--I shouldn't have conflated "outcomes" (things utilities are non-derivatively assigned to) with "objects of preference." Thanks for this.
As Richard Kennaway noted, it seems considerations about time are muddling things here. If we wanted to be super proper, then preferences should have as objects maximally specific ways the world could be, including the whole history and future of the universe, down to the last detail. Decision theory involving anything more coarse-grained than that is just a useful approximation--e.g. I might have a decision problem with only two outcomes being "You get $10" and "You lose $5," but we would just be pretending these are the only two ways the world can end up for practical purposes, which is a permissible simplification since in my actual circumstances my desire to have...
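To make the coarse-graining point concrete, here's a toy sketch: each of the two labeled outcomes stands in for an enormous set of maximally specific worlds, and we do expected-utility arithmetic over the labels as if they exhausted the possibilities. (The particular probabilities and utilities below are invented for illustration.)

```python
# Toy coarse-grained decision problem: pretend "win $10" and "lose $5"
# are the only ways the world can end up, each label standing in for a
# huge set of maximally specific possible worlds.
# (Probabilities and utilities are invented for illustration.)
outcomes = {"win $10": 10, "lose $5": -5}   # outcome label -> utility
probs    = {"win $10": 0.4, "lose $5": 0.6}  # outcome label -> probability

# Expected utility computed over the coarse-grained outcomes.
expected_utility = sum(probs[o] * u for o, u in outcomes.items())
print(expected_utility)  # 0.4*10 + 0.6*(-5) = 1.0
```

Nothing here depends on the labels really being maximally specific; that's exactly the "permissible simplification" being described.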
Could that domain not just be really small, such that the ratio of outcomes at which you'd accept the bet gets closer and closer to 1? It seems like the premise that the discounting rate stays constant over a large interval (which is what produces the extreme effects of exponential discounting) is doing the work in your argument, but I don't see how it's substantiated.
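To illustrate why the "large interval" premise matters (a toy computation; the 0.9-per-period discount factor is an invented example, not a figure from the discussion):

```python
# Toy illustration: a constant exponential discount factor is mild over
# a small interval but extreme when held constant over a large one.
def discounted(value, rate, periods):
    """Present value of `value` received `periods` steps from now,
    under a constant per-period discount factor `rate`."""
    return value * rate ** periods

# Over a short horizon the discount barely bites (100 -> ~81)...
print(discounted(100, 0.9, 2))
# ...but held constant over 100 periods it is extreme (100 -> ~0.003).
print(discounted(100, 0.9, 100))
```

If the domain over which the rate stays constant is small, the second kind of case never arises, which is the point of the question above.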
Thanks for the reply!
So, in my experience it's common for decision theorists in philosophy to take preferences to be over possible worlds and probability distributions over such worlds (the specification of which includes the past and future), and when coarse-graining they take outcomes to be sets of possible worlds. (What most philosophers do is, of course, irrelevant to the matter of how it's best to do things, but I just want to separate "my proposal" from what I (perhaps mistakenly) take to be common.) As you say, no agent remotely close to actual agents will have preferences where details down to the location of every particle in 10,000 BC make a difference, which...