I agree that people have internalized attitudes that skew ratings. I will add, however, that where "mean performance" falls on a scale is context-dependent. Examples off the top of my head: 80% is an okay-ish grade in most US schools, but 50% is atrocious. Contrast this with attractiveness on a 0-10 scale: an 8 is a superior specimen, whereas a 5 is merely average.
With customer service in particular, I can attest to feeling a lot of pressure to give a high rating (if I must rate) because I don't want an employee punished as a result. Heck, this goes beyond ratings. I would be a dishonest juror if I thought the defendant were guilty of a minor crime but they were facing an extreme sentence.
I would argue that inconsistency of preferences isn't necessarily a sign of irrationality. Come to think of it, it may hinge greatly on how you frame the preference.
Consider changing tastes. As a child, I preferred sweet foods to savory ones, and those preferences reversed as I aged. Is that irrational? No, and indeed, you needn't even view it as a preference reversal: the preference "I prefer to eat what tastes good to me" has remained unchanged all along. Is my sense of taste itself a preference? It seems like this would devolve into semantics quickly....
I've read a good chunk of Eliezer's paper on TDT, and it's in that context that I am interpreting reflection. Forgive me if I misunderstand some of it; it's new to me.
TDT is motivated by requiring a decision rule that is consistent under reflection. It doesn't seem to pass judgment on preferences themselves, only on how actions ought to be chosen given preferences. Am I mistaken here?
Perhaps I should have been clearer with Voldemort's "revealed" preferences. JKR writes him as a fairly simple character and I did take for granted that what we saw was what we...
There is a distinction people often fail to make, both in analyses of fictional characters' actions and in those of real people: the distinction between behaving irrationally and having extreme preferences.
If we look at actions and preferences the way decision theorists do, it is clear that preferences cannot be irrational. Indeed, rationality is defined as the tendency toward preference satisfaction. To say that preferences are irrational is to say that someone's tastes can be objectively wrong.
Example: Voldemort is very stubborn in JKR's...
I think this is closely related to the more colloquial concept of "necessary evils". I always felt the term was a bit of a misnomer--we feel they are evils, I suspect, because their necessity is questionable. Actually necessary things aren't assigned moral value, because that would be pointless. You can't prescribe behavior that is impossible (to paraphrase Kant).
As a recent example, someone argued that school bullying is a necessary evil because bullying in the adult world is inevitable and the schoolyard version is preparation. In that case it seems there was a sort of "all-or-nothing" fallacy, i.e., if we can't eliminate it, we might as well not even mitigate it.
Funnily enough, it seems to me that forgetting old experiences so they feel novel again would be the solution if there is any solution to be had.
Learning quickly may work for a time, but the universe is finite. There is a theoretical upper bound on storage space for that reason alone. Therefore there is an upper bound on how much one can learn.
When I first read about Newcomb's Problem, I will admit that it struck me as artificial. But not unfamiliar! Similar dilemmas seem common in film and television.
For example, consider Disney's Hercules. Hercules spends the entire movie trying to regain his status as a god. He is told that he must become a hero to do so, and of course he sets out doing what seem to be heroic things. In the end he succeeds by jumping into the Styx to save Meg despite being told he would die. Heroism in his world evidently requires irrationality!
While it isn't identica...
Exactly. It's like saying that if someone occasionally lies, then any claim they make is false. "You don't get to choose when to be truthful."
It seems to me the most one can say is that our confidence that someone is being objective at any given time will decrease when we discover inconsistencies? But even this seems too strong. I don't doubt someone's statistical analyses because they blindly believe their spouse is the best partner to ever walk the earth. I just figure they have a blind spot, as do we all.
I've always had philosophical leanings, so I find myself asking often what decision theory sets out to do, even as I grapple with a concrete mathematical application. This seems important to me if I want a realistic model of an actual decision an agent may face. My concerns keep returning to utility and what it represents.
Utility is used as a measure for many things: wealth, usefulness, satisfaction, happiness, scoring in games, etc. Our treatment of it suggests that what it represents doesn't matter--that the default aim of a decision theory should be to ...
This sounds like it is effective, but I will add that model problems and care problems are often entangled. For me at least, the extent to which I care about someone is very sensitive to how much I figure they care about me. That kind of reciprocation seems pretty general. So if we wrongly perceive that others don't care, we will likely be less helpful and may distance ourselves from them. Then the victims may perceive the behavior solely as a care problem.
I wonder how often one can really isolate these aspects in practice?
This is why we need a healthcare system in which people can get regular checkups. Providers should have an extensive medical history on you; that way, their comparisons take into account what is "normal for you" and not just whether you're "normal amongst the population", since the latter may not even be relevant.
Mortality and Discounting
Many are probably aware of how discounting works, but I'll give a brief summary first:
Humans have time preferences, which is to say that we prefer to have money (or any item of utility) sooner rather than later, all else equal. One way of capturing this is to convert a future cash flow to an equivalent present value with a discount function. Studies show that humans tend to use a hyperbolic discount fu...
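The contrast between the two standard discount functions can be sketched in a few lines. The parameter values and dollar amounts here are illustrative assumptions of mine, not taken from any study:

```python
import math

def exponential(value, delay, r=0.1):
    """Present value under exponential discounting: V = A * exp(-r * D)."""
    return value * math.exp(-r * delay)

def hyperbolic(value, delay, k=0.5):
    """Present value under hyperbolic discounting: V = A / (1 + k * D)."""
    return value / (1 + k * delay)

# An exponential discounter's choice between $100 in 30 days and $110 in 31
# days never flips as both dates draw nearer: shifting both delays by the
# same amount multiplies both present values by the same factor.
# A hyperbolic discounter prefers the larger-later reward from far away but
# reverses to the smaller-sooner one up close:
far_prefers_later = hyperbolic(110, 31) > hyperbolic(100, 30)   # True
near_prefers_sooner = hyperbolic(110, 1) < hyperbolic(100, 0)   # True
```

This preference reversal is the usual illustration of why hyperbolic discounting is called time-inconsistent.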
It occurred to me today that VNM utility functions model preferences concerning income rather than wealth in general.
Consider the continuity axiom, for example. This axiom seems to imply that a rational agent would be willing to gamble their entire life savings for an extra dollar provided that the probability of losing is small enough. Barring the possibility of charity, going broke is tantamount to death, since it costs money to make money. It seems reasonable to me that a rational agent would treat their own death as infinitely bad. Under this assum...
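The worry can be made concrete with a small sketch. Log utility is my choice here, as one simple way to make going broke "infinitely bad"; it is not part of the VNM axioms themselves:

```python
import math

def log_utility(wealth):
    """Log utility assigns u(0) = -infinity, modeling ruin as infinitely bad
    (an illustrative assumption, not part of the VNM framework)."""
    return math.log(wealth) if wealth > 0 else float("-inf")

def gamble_value(p_ruin, savings, bonus=1):
    """Expected utility of staking all savings for an extra dollar,
    losing everything with probability p_ruin."""
    return p_ruin * log_utility(0) + (1 - p_ruin) * log_utility(savings + bonus)

# No matter how tiny the probability of ruin, the gamble's expected utility
# is -infinity, so such an agent never accepts it -- no probability is
# "small enough", which is exactly a failure of the continuity axiom.
sure_thing = log_utility(100_000)
risky = gamble_value(1e-12, 100_000)   # -inf
```

This is also why the standard representation theorem requires real-valued (hence bounded-below-by-something-finite) utilities: continuity rules out treating any outcome as infinitely bad.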