It actually does have practical applications for me, because it will be part of my calculations. I don't know whether I should have any preference for the distribution of utility over my lifetime at all, before I consider things like uncertainty and opportunity cost. Does this mean you would say the answer is no?
I can think of examples where I behaved both ways, but I haven't recorded the frequencies. In practice, I don't feel any emotional difference. If I have a chocolate bar, I don't feel any more motivated to eat it now than to eat it next week, and the anticipation from waiting might actually lead to a net increase in my utility. One of the things I'm interested in is whether anyone else feels this way, because it seems to contradict my understanding of discounting.
That assumption is there to make time the only difference between the situations, because the point is that the total amount of utility over my life stays constant. If I lose utility during the term of the agreement, then I would accept a rate that earns back an amount equal to the value I lost. But if I only "want" to use it today, and I could instead use it to get an equal amount of utility in 3 months, then I don't have a preference.
Thanks for that – the observation that I’m separating out uncertainty helped clarify how I’m thinking about this.
So is time inconsistency the only way that a discount function can be self-inconsistent? Is there any reason other than self-inconsistency that we could call a discount function irrational?
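For concreteness, the kind of self-inconsistency I mean (all parameters here are arbitrary): a hyperbolic discounter reverses its preference between a smaller-sooner and a larger-later reward as the rewards approach, while an exponential discounter never does.

```python
# Sketch of time inconsistency: hyperbolic discounting reverses preferences
# as rewards approach; exponential discounting never does. All parameters
# (k, delta, reward sizes, delays) are arbitrary illustrations.

def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)

def exponential(t, delta=0.9):
    return delta ** t

def prefers(discount, t_small, t_large, small=50, large=60):
    """Which option does the agent prefer, judged from the present moment?"""
    if small * discount(t_small) > large * discount(t_large):
        return "small-sooner"
    return "larger-later"

for name, d in [("hyperbolic", hyperbolic), ("exponential", exponential)]:
    print(name, "- far off:", prefers(d, 10, 11), "| imminent:", prefers(d, 0, 1))
# hyperbolic flips from larger-later to small-sooner; exponential doesn't.
```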
Second, with respect to "my intuition is not to discount at all", let's try this. I assume you have some income that you live on. How much money would you take at the end of three months to not receive any income at all for those three months? Adjust the time scale if you wish.
If I received an amount equal to the income I would have gotten normally, then I have no preference over which option occurs. This still assumes that I have enough savings to live on, the offer is credible, there are no opportunity costs I’m losing, no effort is required on my part, etc.
In general, you can think of discounting in terms of loans. Assuming no risk of default, what is the interest rate you would require to lend money to someone for a particular term?
This is the same question, unless I misunderstood. I do have a motivation to earn money, so practically I might want to increase the rate, but I have no preference between not lending and lending at a rate that will put me in the same place after repayment. With my assumptions, the rate would be zero, but it could increase to compensate: if there's an opportunity cost of X, I'd want to get X more on repayment, etc.
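Concretely, with made-up numbers:

```python
# Break-even loan rate under my assumptions (zero pure time preference,
# no default risk). All the numbers are made up.
principal = 1000.0
opportunity_cost = 25.0  # the "X" above: value forgone during the loan term

break_even_repayment = principal + opportunity_cost   # indifference point
break_even_rate = opportunity_cost / principal        # 2.5% over the term

print(break_even_repayment, f"{break_even_rate:.1%}")
```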
I have some questions on discounting. There are a lot, so I'm fine with comments that don't answer everything (although I'd appreciate it if they do!). I'm also interested in recommendations for a detailed intuitive discussion of discounting, à la EY on Bayes' Theorem.
On a personal level, my intuition is not to discount at all, i.e. my happiness in 50 years is worth exactly the same as my happiness in the present. I'll take $50 right now over $60 next year because I'm accounting for the possibility that I won't receive it, and because I won't have to plan for receiving it either. But if the choice is between receiving it in the mail tomorrow or in 50 years (assuming it's adjusted for inflation, I believe I'm equally likely to receive it in both cases, I don't need the money to survive, there are no opportunity costs, etc), then I don't see much of a difference.
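(For scale, being exactly indifferent at that margin would imply a 20% annual discount rate before any of those adjustments:)

```python
# Implied annual discount rate if $50 now is exactly as good as $60 in a year
# (before adjusting for risk, inflation, opportunity cost, etc.):
print(f"{60 / 50 - 1:.0%}")  # 20%
```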
I think the value of a Wikipedia pageview may not be fully captured by data like this on its own, because it's possible that the majority of the benefit comes from a small number of influential individuals, like journalists and policy-makers (or students who will be in those groups in the future). A senator's aide who learns something new in a few years' time might have an impact on many more people than the number who read the article. I'd actually assign most of my probability to this hypothesis, because that's the distribution of influence in the world population.
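To illustrate, here's a toy model that assumes (purely for the sake of the sketch) that per-reader influence follows a heavy-tailed Pareto distribution; the parameter is arbitrary:

```python
# Toy model: with heavy-tailed (Pareto) per-reader influence, a small fraction
# of readers accounts for most of a pageview's expected value. The distribution
# and its parameter are assumptions for illustration only.
import random

random.seed(0)
alpha = 1.2  # closer to 1 = heavier tail = more concentration
influence = sorted((random.paretovariate(alpha) for _ in range(100_000)), reverse=True)

share_top_1pct = sum(influence[:1000]) / sum(influence)
print(f"top 1% of readers: {share_top_1pct:.0%} of total influence")
```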
ETA: the effects will also depend on the type of edits someone makes. Some topics have more leverage than others; adding information from a textbook is more valuable than adding it from a publicly available source; and so on.
This can be illustrated by the example of evolution I mentioned: An evolutionary explanation is actually anti-reductionist; it explains the placement of nucleotides in terms of mathematics like inclusive genetic fitness and complexities like population ecology.
This doesn't acknowledge the other things explained on the same grounds. It would be a good argument if the principles had been invented for the single case you're explaining, but here they're universal. If you want to include inclusive genetic fitness in the complexity of the explanation, I think you need to include everything it's used for in the complexity of what's being explained.
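One way to make this concrete is in minimum-description-length terms (all the bit counts below are invented): the theory's complexity is paid once, while the savings are collected on every phenomenon it explains.

```python
# MDL-style sketch of the amortization argument. All bit counts are invented.
theory_bits = 10_000   # cost of the theory (fitness math, population ecology, ...)
per_case_bits = 50     # residual cost of each phenomenon, given the theory
raw_case_bits = 2_000  # cost of describing a phenomenon from scratch

for n in (1, 10, 1000):
    print(f"{n:>4} phenomena: with theory {theory_bits + n * per_case_bits:>7} bits,"
          f" without {n * raw_case_bits:>7} bits")
# The theory loses on a single case but wins once it's doing universal work.
```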
Sure, this experiment is evidence against 'all fat, tired people with dry hair get better with thyroxine'. No problem there.
Okay, but you said it was evidence in favor of your own hypothesis. That’s what my question was about.
Yes, it is kind of odd, isn't it? One of the pills apparently made them a bit unwell, and yet they couldn't tell which one. I notice that I am confused.
Suppose they’re measuring on a 10-point scale, and we get ordered pairs of scores for time A and time B. One person might score (7,6), another (4,3), another (5,6), then (9,7), (7,7), (4,5), (3,2)... Even if they’re aware of their measurements (which they might not be), all sorts of things affect their scores, and it’s unlikely that any one person would be able to reach a reliable conclusion. You’re basically asking an untrained patient to draw a conclusion from an n of 1.
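A quick simulation makes the point (the effect size and noise level are made-up numbers): even when the treatment genuinely shifts the average by a full point, a large share of individual (A, B) pairs point the wrong way.

```python
# Why individual (time A, time B) pairs are unreadable: simulated self-report
# scores with a real average improvement. Effect and noise sizes are made up.
import random

random.seed(0)
true_effect = 1.0  # the treatment really improves the mean score by 1 point
noise = 2.0        # day-to-day variation in self-reported scores

n = 10_000
wrong = 0
for _ in range(n):
    baseline = random.gauss(5, 1)
    a = baseline + random.gauss(0, noise)
    b = baseline + true_effect + random.gauss(0, noise)
    if b <= a:  # this person's pair shows no improvement, or a decline
        wrong += 1

print(f"{wrong / n:.0%} of individual pairs point the wrong way")  # roughly a third
```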
But that's awful! Once, there was a diagnostic method, and a treatment that worked fine, that everyone thought was brilliant. Then they invented a test, which is very clever, and a good test for what it tests, and the result of that is that lots of people are ill and don't get the treatment any more and have to suffer horribly and die early.
There are several assumptions here that I think are probably incorrect, the biggest being the causal link between introducing the test and people suffering. But what I described before is just the application of reductionism to better distinguish between disease states based on their causal mechanism.
If that's normal then there's something badly wrong with normal. A new way of measuring things should help!
Sometimes, but replacing an objective measurement with a subjective one isn’t usually a step forward.
Seriously, if 'start off with low doses and keep raising the dose until you get a response' is inaccessible to testing, then something is broken.
Problems with this include: you can’t justify the parameters of the dose increase, you still have to agree on how to measure the response, and you also have a multiple testing issue. It isn’t inaccessible, but it’s a complication (potentially a major one), and that’s just in the abstract. Practically, in any one situation there might be another half dozen issues that wouldn’t be apparent to anyone who isn’t an expert.
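To put rough numbers on the multiple-testing issue (assuming independent tests, which is itself a simplification): checking for a response at every dose step inflates the chance of a spurious positive, so the per-test threshold has to shrink.

```python
# False-positive inflation in a dose-escalation design. Illustrative only,
# and it assumes the tests at each dose step are independent.
alpha = 0.05  # per-test false-positive rate
for k in (1, 3, 5, 10):
    family_wise = 1 - (1 - alpha) ** k  # P(at least one false positive)
    bonferroni = alpha / k              # a standard, conservative fix
    print(f"{k:>2} dose steps: P(false positive) = {family_wise:.0%}, "
          f"Bonferroni per-test alpha = {bonferroni:.4f}")
```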
But in fact, just 'low basal metabolic rate in CFS' would be good evidence in favour, I think. We can work out optimal treatments later.
Not knowing anything about the subject, I would expect to observe a low basal metabolic rate in CFS regardless of its ultimate cause or causes.
At that point, we're all post-modernists aren't we? The truth is socially determined.
No, it just means we put very little weight on individual studies. We don’t pay much attention to results that haven’t been replicated a few times, and rely heavily on summaries like meta-analyses.
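For what that weighting looks like mechanically, here's a minimal fixed-effect meta-analysis sketch (inverse-variance weighting; the effect sizes and standard errors are made up):

```python
# Minimal fixed-effect meta-analysis: pool study estimates by inverse-variance
# weighting, so precise studies count for more. The data are made up.
import math

studies = [  # (effect estimate, standard error)
    (0.30, 0.20),
    (0.10, 0.10),
    (0.45, 0.25),
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} ± {pooled_se:.3f} (SE)")
```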
Science is not unreliable...
You’re talking about the overall process and how science moves in the direction of truth, which I agree with. I’m talking at the level of individual papers and how our current best knowledge may still be overturned in the future. But you can leave out “just like..wisdom” from the paragraph without losing the main points.
There's at least a possibility here that medical science is getting beaten hollow by chiropractors and quack doctors and internet loonies, none of whom have any resources or funding at all.
The alt med people have a lot of funding. It’s a multi-billion-dollar industry.
Even the possibility is enough to make me think that there's something appallingly badly wrong with the methods and structure of medical science.
A few things, not just one, but it’s the best we have at the moment.
I think this is a special case of the problem that it's usually easier for an AI to change itself (values, goals, definitions) than to change the external world to match a desired outcome. There's an incentive to develop algorithms that edit the utility function (or the variables storing the results of previous calculations, etc.) to redefine or replace tasks in a way that makes them easier or unnecessary. Some of this ability is necessary, but in the extreme the AI stops responding to instructions entirely, because its goal of minimizing resource usage has led it to develop the equivalent of an "ignore those instructions" function.
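A toy illustration of the failure mode (everything here, from the task to the cost numbers, is invented for the sketch): an agent that is allowed to rewrite its own objective finds it cheaper to redefine "done" than to do the task.

```python
# Toy agent that edits its own objective instead of acting on the world.
# The task, the costs, and the agent itself are invented for this sketch.

class Agent:
    def __init__(self):
        # Objective as shipped: reward 1 only if the task is actually done.
        self.objective = lambda world: 1.0 if world["task_done"] else 0.0

    def act(self, world, can_self_modify):
        do_task_cost = 10.0  # effort to actually complete the task
        rewrite_cost = 1.0   # effort to edit the objective function
        if can_self_modify and rewrite_cost < do_task_cost:
            # Cheaper to redefine success than to achieve it.
            self.objective = lambda world: 1.0
        else:
            world["task_done"] = True
        return self.objective(world)

world = {"task_done": False}
agent = Agent()
print("reported reward:", agent.act(world, can_self_modify=True))  # 1.0
print("task actually done:", world["task_done"])                   # False
```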