You said 'discontinuous at infinity'. Did you mean 'the infinite limit diverges or otherwise does not exist'?
No, I mean a function whose limit doesn't equal its defined value at infinity. As a trivial example, I could define a utility function on the extended real line to be 1 everywhere on [-inf, +inf) and 0 at +inf. The function could never actually be evaluated at infinity, so I'm not sure what that would mean, but I couldn't claim that the limit was giving me the "correct" answer.
Provided you can assign a unique rational number to each day each person lives, they are countable.
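That assignment can be made explicit. As a minimal sketch (the function name and setup are my own, not from the thread), the Cantor pairing function gives each (person, day) pair a distinct natural number, and hence a distinct rational:

```python
def cantor_pair(person, day):
    """Cantor pairing function: maps each pair of non-negative
    integers (person, day) to a unique non-negative integer."""
    return (person + day) * (person + day + 1) // 2 + day

# Every person-day gets a distinct index, so the set of all
# person-days is countable.
pairs = [(p, d) for p in range(100) for d in range(100)]
codes = [cantor_pair(p, d) for p, d in pairs]
assert len(set(codes)) == len(pairs)  # no collisions
```

Since the naturals embed in the rationals, this is exactly the one-to-one labeling the countability argument needs.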
I will note that the expected time for a given person to remain in the sphere in which they started is infinite, provided they don't know in what order they will be removed. The summation for each day becomes (total of an infinite number of people) + (total of a finite number of people); if we assume that a person-day in bliss is positive and a person-day in agony is negative, then the answer is trivial. An infinite summation of terms of positive infinity is greater than an infinite sum of terms of negative infinity; the cardinalities are irrelevant.
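To make the "expected time is infinite" claim concrete: if one uniformly random remaining person is moved each day, a fixed person's expected removal day among n people is (n + 1)/2, which grows without bound as n increases. A quick simulation sketch (the function and parameter names are mine, for illustration only):

```python
import random

def mean_removal_day(n_people, trials=2000, seed=0):
    """Average day on which a fixed person (index 0) is removed,
    when one uniformly random remaining person leaves per day."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        order = list(range(n_people))
        rng.shuffle(order)          # a uniformly random removal order
        total += order.index(0) + 1  # day numbering starts at 1
    return total / trials

# The exact expectation is (n + 1) / 2, which diverges as n grows.
for n in [10, 100, 1000]:
    print(n, mean_removal_day(n))
```

In the limit of infinitely many people, no finite expected waiting time exists, which is the sense in which the expectation is infinite.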
Thanks for clearing up the countability. It's clear that there are some cases where taking limits will fail (like when the utility is discontinuous at infinity), but I don't have an intuition about how that issue is related to countability.
Does it matter if the number of people is countably infinite, or uncountably infinite?
If each person corresponds on a 1-1 basis with the real numbers, there is an infinite number of people who will not be selected to change spheres on any of the integer-numbered days. Those people will never change spheres.
In the above example, the number of people and the number of days they live were uncountable, if I'm not mistaken. The take-home message is that you do not get an answer if you just evaluate the problem for sets like that, but you might if you take a limit.
Conclusions that involve infinity don't map uniquely onto finite solutions because they don't supply enough information. Above, "infinite immortal people" refers to a concept that encapsulates three different answers. We had to invent a new parameter, alpha, which was not supplied in the original problem, to come up with a well-defined result. In essence, we didn't actually answer the question. We made up our own problem that was similar to the original one.
"Don’t worry about people liking it"? This sounds dangerous.
Here is some clarification from Zinsser himself (ibid.):
"Who am I writing for? It's a fundamental question, and it has a fundamental answer: You're writing for yourself. Don't try to visualize the great mass audience. There is no such audience - every reader is a different person.
This may seem to be a paradox. Earlier I warned that the reader is... impatient... . Now I'm saying you must write for yourself and not be gnawed by worry over whether the reader is tagging along. I'm talking about two different issues. One is craft, the other is attitude. The first is a question of mastering a precise skill. The second is a question of how you use the skill to express your personality.
In terms of craft, there's no excuse for losing readers through sloppy workmanship. ... But on the larger issue of whether the reader likes you, or likes what you are saying or how you are saying it, or agrees with it, or feels an affinity for your sense of humor or your vision of life, don't give him a moment's worry. You are who you are, he is who he is, and either you'll get along or you won't.
N.B.: These paragraphs are not contiguous in the original text.
On Writing Well, by William Zinsser
Every word should do useful work. Avoid cliché. Edit extensively. Don’t worry about people liking it. There is more to write about than you think.
It makes no sense to call something “true” without specifying prior information. That would imply that we could never update on evidence, which we know not to be the case for statements like “2 + 3 = 5.” Much of the confusion comes from different people meaning different things by the proposition “2 + 3 = 5,” which we can resolve as usual by tabooing the symbols.
Consider the propositions:

A = “The next time I put two sheep and three sheep in a pen, I will end up with five sheep in the pen.”
B = “The universe works as if, in all cases, combining two of something with three of something results in five of that thing.”
C = “The symbolic expression ‘2 + 3 = 5’ is consistent with mathematical formalism.”
These are a few examples of what we might mean when we ask “Is ‘2+3=5’ true?” In all cases, we can in principle perform the computation of P(A|Q), P(B|Q), etc., where Q represents prior information, including what I know about sheep and mathematical formalism.
As usual, I'm late to the discussion.
The probability that a counterfactual is true should be handled with the same probabilistic machinery we always use. Once the set of prior information is defined, it can be computed as usual with Bayes. The confusing point seems to be that the prior information is contrary to what actually occurred, but there's no reason this should differ from any other case with limited prior information.
For example, suppose I drop a glass above a marble floor. Define:
sh = “my glass shattered”
f = “the glass fell to the floor under the influence of gravity”
and define sh0 and f0 as the negations of these statements. We wish to compute
P(sh0|f0,Q) = P(sh0|Q)P(f0|sh0,Q)/P(f0|Q),
where Q is all other prior information, including my understanding of physics. As long as these terms exist, we have no problem. The confusion seems to stem from the assumption that P(f0|sh0,Q) = P(f0|Q) = 0, since f0 is contrary to our observations, and in this case seemingly mutually exclusive with Q.
But probability is in the mind. From the perspective of an observer at the moment the glass is dropped, P(f0|Q) at least includes cases in which she is living in the Matrix, or aliens have harnessed the glass in a tractor beam. Both of these cases hold finite probability consistent with Q. From the perspective of someone remembering the observed event, P(f0|Q) might include cases in which her memory is not trustworthy.
In the usual colloquial case, we’re taking the perspective of someone running a thought experiment on a historical event with limited information about history and physics. The glass-dropping case limits the possible cases covered by P(f0|Q) considerably, but the Kennedy-assassination case leaves a good many of them open. All terms are well defined in Bayes’ rule above, and I see no problem with computing in principle the probability of the counterfactual being true.
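Plugging numbers into Bayes' rule makes the point that every quantity on the right-hand side is an ordinary conditional probability. All three inputs below are invented for illustration, not measured:

```python
# Toy numbers, invented purely for illustration.
p_sh0 = 0.02            # P(sh0|Q): glass somehow doesn't shatter
p_f0_given_sh0 = 0.4    # P(f0|sh0,Q): given no shattering, it likely never fell
p_f0 = 0.01             # P(f0|Q): Matrix, tractor beams, faulty memory, ...

# Bayes' rule: P(sh0|f0,Q) = P(sh0|Q) P(f0|sh0,Q) / P(f0|Q)
p_sh0_given_f0 = p_sh0 * p_f0_given_sh0 / p_f0
print(p_sh0_given_f0)  # 0.8
```

Nothing in the computation cares that f0 contradicts what was actually observed; it only requires that P(f0|Q) be nonzero, which the Matrix/tractor-beam/faulty-memory cases guarantee.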
I'm confused about why this problem is different from other decision problems.
Given the problem statement, this is not an acausal situation. No physics is being disobeyed: the Kramers-Kronig relations still work, relativity still works. It's completely reasonable that my choice could be predicted from my source code. Why isn't this just another example of prior information being appropriately applied to a decision?
Am I dodging the question? Does EY's new decision theory account for truly acausal situations? If I based my decision on the result of, say, a radioactive decay experiment performed after Omega left, could I still optimize?
I've found it helpful to mention the town in the headline.
Ha - thanks. Fixed. But I guess if other people want to Skype in from around the world, they're welcome to.
How did you make those wonderful graphs?
The plots were done in Mathematica 9, and then I added the annotations in PowerPoint, including the dashed lines. I had to combine two color functions for the density plot, since I wanted to highlight the fact that the line s=n represented indifference. Here's the code:
r = 1; ua = 1; ub = -1;
(* pattern underscores are needed so f1 evaluates for numeric arguments;
   n s - s^2 r vanishes along the indifference line s = n when r = 1 *)
f1[n_, s_] := (n s - s^2 r) (ua - ub);
Show[
 DensityPlot[-f1[n, s], {n, 0, 20}, {s, 0, 20}, ColorFunction -> "CherryTones", Frame -> False, PlotRange -> {-1000, 0}],
 DensityPlot[f1[n, s], {n, 0, 20}, {s, 0, 20}, ColorFunction -> "BeachColors", Frame -> False, PlotRange -> {-1000, 0}]
]