Additional/complementary argument in favour (and against the “any difference you make is marginal” argument): one’s personal example of viable veganism increases the chances of others becoming vegan (or partially so, which is still a benefit). Under plausible assumptions this effect could be (potentially much) larger than the direct effect of personal consumption decisions.
I have to say that the claimed reductios here strike me as under-argued, particularly when there are literally decades of arguments articulating and defending various versions of moral anti-realism, which set out a range of ways in which the implications, though decidedly troubling, need not be absurd.
His 2018 lectures are also available on youtube and seem pretty good so far if anyone wants a complement to the book. The course website also has lecture notes and exercises.
To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that gives you a 0.5 chance of having an okay life.
FWIW, this conclusion is not clear to me. To return to one of my original points: I don't think you can dodge this objection by arguing from potentially idiosyncratic prefe...
So, I don't think your concern about keeping utility functions bounded is unwarranted; I'm just noting that it's part of a broader issue with aggregate consequentialism, not just with my ethical system.
Agreed!
you just need to make it so that the supremum of their values is 1 and the infimum is 0.
Fair. Intuitively though, this feels more like a rescaling of an underlying satisfaction measure than a plausible definition of satisfaction to me. That said, if you're a preferentist, I accept this is internally consistent, and likely an improvement on alternative versions of preferentism.
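For concreteness, here's roughly the kind of rescaling I have in mind (a minimal sketch; the logistic transform and the raw satisfaction numbers are just illustrative assumptions, not anything from the original proposal):

```python
import math

def rescale(raw_satisfaction):
    """Map an unbounded satisfaction score onto (0, 1).

    A strictly increasing transform like the logistic preserves the
    original ordering while making the supremum 1 and the infimum 0
    (neither is actually attained).
    """
    return 1.0 / (1.0 + math.exp(-raw_satisfaction))

# Illustrative raw scores: the ordering is preserved, but everything now lives in (0, 1).
for raw in [-1000, -1, 0, 1, 1000]:
    print(raw, rescale(raw))
```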
...One issue with only having boundedness above is that the expected life satisfaction for an arbitrary agent would probably often be undefined or in
In an infinite universe, there's already infinitely-many people, so I don't think this applies to my infinite ethical system.
YMMV, but FWIW allowing a system of infinite ethics to get finite questions (which should just be a special case) wrong seems a very non-ideal property to me, and suggests something has gone wrong somewhere. Is it really never possible to reach a state where all remaining choices have only finite implications?
I'll clarify the measure of life satisfaction I had in mind. Imagine if you showed an agent finitely-many descriptions of situations they could end up being in, and asked the agent to pick out the worst and the best of all of them. Assign the worst scenario satisfaction 0 and the best scenario satisfaction 1.
Thanks. I've toyed with similar ideas previously myself. The advantage, if this sort of thing works, is that it conveniently avoids a major issue with preference-based measures: that they're not unique and therefore incomparable across individuals. How...
Re boundedness:
It's important to note that the sufficiently terrible lives need to be really, really, really bad already. So much so that being horribly tortured for fifty years does almost exactly nothing to affect their overall satisfaction. For example, maybe they're already being tortured for more than 3^^^^3 years, so adding fifty more years does almost exactly nothing to their life satisfaction.
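To make the arithmetic concrete (a minimal sketch; the 1/(1 + T) satisfaction curve and the use of 10**100 as a stand-in for 3^^^^3 are purely illustrative assumptions):

```python
from fractions import Fraction

def satisfaction(years_tortured):
    """An assumed bounded satisfaction curve: 1 at zero years, approaching 0 as torture grows."""
    return Fraction(1, 1 + years_tortured)

T = 10**100  # stand-in for 3^^^^3, which is far too large to write down
change = satisfaction(T) - satisfaction(T + 50)
print(float(change))  # ~5e-199: fifty extra years barely moves overall satisfaction
```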
I realise now that I may have moved through a critical step of the argument quite quickly above, which may be why this quote doesn't seem to capture the core ...
Re the repugnant conclusion: apologies for the lazy/incorrect example. Let me try again with better illustrations of the same underlying point. To be clear, I am not suggesting these are knock-down arguments; just that, given widespread (non-infinitarian) rejection of average utilitarianisms, you probably want to think through whether your view suffers from the same issues and whether you are ok with that.
Though there's a huge literature on all of this, a decent starting point is here:
...However, the average view has very little support among moral phil
Fair point re use cases! My familiarity with DSGE models is about a decade out-of-date, so maybe things have improved, but a lot of the wariness then was that typical representative-agent DSGE isn't great where agent heterogeneity and interactions are important to the dynamics of the system, and/or agents fall significantly short of the rational expectations benchmark, and that in those cases you'd plausibly be better off using agent-based models (which has only become easier in the intervening period).
...I (weakly) believe this is mainly because econometrists
My point was more that, even if you can calculate the expectation, standard versions of average utilitarianism are usually rejected for non-infinitarian reasons (e.g. the repugnant conclusion) that seem like they would plausibly carry over to this proposal as well. I haven't worked through the details though, so perhaps I'm wrong.
Separately, while I understand the technical reasons for imposing boundedness on the utility function, I think you probably also need a substantive argument for why boundedness makes sense, or at least is morally acceptable. Bound...
Worth noting that many economists (including e.g. Solow, Romer, Stiglitz among others) are pretty sceptical (to put it mildly) about the value of DSGE models (not without reason, IMHO). I don't want to suggest that the debate is settled one way or the other, but do think that the framing of the DSGE approach as the current state-of-the-art at least warrants a significant caveat emptor. Afraid I am too far from the cutting edge myself to have a more constructive suggestion though.
This sounds essentially like average utilitarianism with bounded utility functions. Is that right? If so, have you considered the usual objections to average utilitarianism (in particular, re rankings over different populations)?
Have you read s1gn1f1cant d1g1t5?
There is no value to a superconcept that crosses that boundary.
This doesn't seem to me to argue in favour of using wording that's associated with the (potentially illegitimate) superconcept to refer to one part of it. Also, the post you were responding to (conf)used both concepts of utility, so by that stage, they were already in the same discussion, even if they didn't belong there.
Two additional things, FWIW:
(1) There's a lot of existing literature that distinguishes between "decision utility" and "experienced utility" (where "...
I'm hesitant to get into a terminology argument when we're in substantive agreement. Nonetheless, I personally find your rhetorical approach here a little confusing. (Perhaps I am alone in that.)
Yes, it's annoying when people use the word 'fruit' to refer to both apples and oranges, and as a result confuse themselves into trying to derive propositions about oranges from the properties of apples. But I'd suggest that it's not the most useful response to this problem to insist on using the word 'fruit' to refer exclusively to apples, and to proceed to make c...
While I'm in broad agreement with you here, I'd nitpick on a few things.
Different utility functions are not commensurable.
Agree that decision-theoretic or VNM utility functions are not commensurable - they're merely mathematical representations of different individuals' preference orderings. But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons. (And unless you accept the possibility ...
Are you familiar with the debate between John Harsanyi and Amartya Sen on essentially this topic (which we've discussed ad nauseam before)? In response to an argument of Harsanyi's that purported to use the VNM axioms to justify utilitarianism, Sen reaches a conclusion that broadly aligns with your take on the issue.
If not, some useful references here.
ETA: I worry that I've unduly maligned Harsanyi by associating his argument too heavily with Phil's post. Although I still think it's wrong, Harsanyi's argument is rather more sophisticated than Phil's, and w...
It wouldn't necessarily reflect badly on her: if someone has to die to take down Azkaban,* and Harry needs to survive to achieve other important goals, then Hermione taking it down seems like a non-foolish solution to me.
*This is hinted at as being at least a strong possibility.
Although I agree it's odd, it does in fact seem that there is gender information transferred / inferred from grammatical gender.
From Lera Boroditsky's Edge piece:
...Does treating chairs as masculine and beds as feminine in the grammar make Russian speakers think of chairs as being more like men and beds as more like women in some way? It turns out that it does. In one study, we asked German and Spanish speakers to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical g
My understanding of the relevant research* is that it's a fairly consistent finding that masculine generics (a) do cause people to imagine men rather than women, and (b) that this can have negative effects ranging from impaired recall, comprehension, and self-esteem in women, to reducing female job applications. (Some of these negative effects have also been established for men from feminine generics, which favours using they/them/their rather than she/her as replacements.)
* There's an overview of some of this here (from p.26).
Isn't the main difference just that they have a bigger sample (e.g. "4x" in the hardcore group)?
Isn't the claim in 6 (that there is a planning-optimal choice, but no action-optimal choice) inconsistent with 4 (a choice that is planning optimal is also action optimal)?
Laying down rules for what counts as evidence that a body is considering alternatives, is mess[y]
Agreed. But I don't think that means that it's not possible to do so, or that there aren't clear cases on either side of the line. My previous formulation probably wasn't as clear as it should have been, but would the distinction seem more tenable to you if I said "possible in principle to observe physical representations of" instead of "possible in principle to physically extract"? I think the former better captures my intended meaning.
I...
FWIW, the exact quote (from pp.13-14 of this article) is:
Far better an approximate answer to the right question, which is often vague, than the exact answer to the wrong question, which can always be made precise. [Emphasis in original]
Your paraphrase is snappier though (as well as being less ambiguous; it's hard to tell in the original whether Tukey intends the adjectives "vague" and "precise" to apply to the questions or the answers).
all of the above assumes a distinction I'm not convinced you've made
If it is possible, in principle, to physically extract the alternatives/utility assignments etc., wouldn't that be sufficient to ground the CSA--non-CSA distinction, without running afoul of either current technological limitations, or the pebble-as-CSA problem? (Granted, we might not always know whether a given agent is really a CSA or not, but that doesn't seem to obviate the distinction itself.)
The Snoep paper Will linked to measured the correlation for the US, Denmark and the Netherlands (and found no significant correlation in the latter two).
The monopolist religion point is of course a good one. It would be interesting to see what the correlation looked like in relatively secular, yet non-monopolistic countries. (Not really sure what countries would qualify though.)
We already have some limited evidence that conventionally religious people are happier
But see Will Wilkinson on this too (arguing that this only really holds in the US, and speculating that it's really about "a good individual fit with prevailing cultural values" rather than religion per se).
Thanks for the explanation.
The idea is that when you are listening to music, you are handicapping yourself by taking some of the attention of the aural modality.
I'd heard something similar from a friend who majored in psychology, but they explained it in terms of verbal processing rather than auditory processing more generally, which is why (they said) music without words wasn't as bad.
I'm not sure whether it's related, but I've also been told by a number of musically-trained friends that they can't work with music at all, because they can't help but ...
Sometimes, but it varies quite a lot depending on exactly what I'm doing. The only correlation I've noticed between the effect of music and work-type is that the negative effect of lyrics is more pronounced when I'm trying to write.
Of course, it's entirely possible that I'm just not noticing the right things - which is why I'd be interested in references.
If anyone does have studies to hand I'd be grateful for references.* I personally find it difficult to work without music. That may be habit as much as anything else, though I expect part of the benefit is due to shutting out other, more distracting noise. I've noticed negative effects on my productivity on the rare occasions I've listened to music with lyrics, but that's about it.
* I'd be especially grateful for anything that looks at how much individual variation there is in the effect of music.
Fair enough. My impression of the SWB literature is that the relationship is robust, both in a purely correlational sense, and in papers like the Frey and Stutzer one where they try to control for confounding factors like personality and selection. The only major catch is how long it takes individuals to adapt after the initial SWB spike.
Indeed, having now managed to track down the paper behind your first link, it seems like this is actually their main point. From their conclusion:
...Our results show that (a) selection effects appear to make happy people m
FWIW, this seems inconsistent with the evidence presented in the paper linked here, and most of the other work I've seen. The omitted category in most regression analyses is "never married", so I don't really see how this would fly.
Sorry for the delay in getting back to you (in fairness, you didn't get back to me either!). A good paper (though not a meta-analysis) on this is:
Stutzer and Frey (2006), "Does Marriage Make People Happy or Do Happy People Get Married?", Journal of Socio-Economics 35: 326-347.
The lit review surveys some of the other evidence.
I a priori doubt all the happiness research as based on silly questionnaires and naive statistics
I'm a little puzzled by this comment given that the first link you provided looks (on its face) to be based on exactly this sort of e...
this post infers possible causation based upon a sample size of 1
Eh? Pica is a known disorder. The sample size for the causation claim is clearly more than 1.
[ETA: In case anyone's wondering why this comment no longer makes any sense, it's because most of the original parent was removed after I made it, and replaced with the current second para.]
I for one comment far more on Phil's posts when I think they're completely misguided than I do otherwise. Not sure what that says about me, but if others did likewise, we would predict precisely the relationship Phil is observing.
Interesting. All the other evidence I've seen suggests that committed relationships do make people happier, so I'd be interested to see how these apparently conflicting findings can be resolved.
Part of the difference could just be the focus on marriage vs. stable relationships more generally (whether married or not): I'm not sure there's much reason to think that a marriage certificate is going to make a big difference in and of itself (or that anyone's really claiming that it would). In fact, there's some, albeit limited, evidence that unmarried couples a...
Me too. It gets especially embarrassing when you end up telling someone a story about a conversation they themselves were involved in.
Warning, nitpicks follow:
The sentence "All good sentences must at least one verb." has at least one verb. (It's an auxiliary verb, but it's still a verb. Obviously this doesn't make it good; but it does detract from the point somewhat.)
"2+2=5" is false, but it's not nonsense.
I was objecting to the subset claim, not the claim about unit equivalence. (Mainly because somebody else had just made the same incorrect claim elsewhere in the comments to this post.)
As it happens, I'm also happy to object to the claim about unit equivalence, whatever the wiki says. (On what seems to be the most common interpretation of utilons around these parts, they don't even have a fixed origin or scale: the preference orderings they represent are invariant to positive affine transforms of the utilons.)
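A quick sketch of what I mean by that invariance (the outcomes, probabilities, and utilon numbers below are made up purely for illustration):

```python
# Two lotteries over three outcomes, with made-up probabilities.
lottery_A = {"good": 0.5, "ok": 0.5, "bad": 0.0}
lottery_B = {"good": 0.4, "ok": 0.0, "bad": 0.6}

utilons = {"good": 10.0, "ok": 4.0, "bad": -2.0}

def expected_utility(lottery, u):
    return sum(p * u[outcome] for outcome, p in lottery.items())

def prefers_A(u):
    return expected_utility(lottery_A, u) > expected_utility(lottery_B, u)

# A positive affine transform of the utilons (new origin and scale)...
rescaled = {k: 3.0 * v + 7.0 for k, v in utilons.items()}

# ...leaves the preference ordering over gambles unchanged.
print(prefers_A(utilons), prefers_A(rescaled))  # True True
```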
To expand on this slightly, it seems like it should be possible to separate goal achievement from risk preference (at least under certain conditions).
You first specify a goal function g(x) designating the degree to which your goals are met in a particular world history, x. You then specify another (monotonic) function, f(g), that embodies your risk preference with respect to goal attainment (with concavity indicating risk-aversion, convexity risk-tolerance, and linearity risk-neutrality, in the usual way). Then you maximise E[f(g(x))].
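As a minimal sketch of that separation (the goal function, the square-root risk transform, and the two lotteries are all placeholder assumptions, just to show the mechanics):

```python
import math

def g(x):
    """Degree to which goals are met in world history x (here just a placeholder number)."""
    return x

def f(goal_level, risk_averse=True):
    """Risk preference over goal attainment: concave (sqrt) = risk-averse, linear = risk-neutral."""
    return math.sqrt(goal_level) if risk_averse else goal_level

def maximand(lottery, risk_averse):
    """E[f(g(x))] over a lottery given as (probability, world history) pairs."""
    return sum(p * f(g(x), risk_averse) for p, x in lottery)

safe = [(1.0, 49)]                # goals met to degree 49 for sure
risky = [(0.5, 0), (0.5, 100)]    # slightly higher expected goal attainment (50), but much riskier

# A risk-averse f prefers the safe option; a risk-neutral f prefers the higher mean.
print(maximand(safe, True), maximand(risky, True))    # 7.0 vs 5.0
print(maximand(safe, False), maximand(risky, False))  # 49.0 vs 50.0
```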
If g(x) is only ordin...
Utility means "the function f, whose expectation I am in fact maximizing".
There are many definitions of utility, of which that is one. Usage in general is pretty inconsistent. (Wasn't that the point of this post?) Either way, definitional arguments aren't very interesting. ;)
The interesting result is that if you're maximizing something you may be vulnerable to a failure mode of taking risks that can be considered excessive.
Your maximand already embodies a particular view as to what sorts of risk are excessive. I tend to the view that if ...
Crap. Sorry about the delete. :(
Redefining "utility" like this doesn't help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we're actually maximizing?
It wasn't intended to help with the problem specified in terms of f(x). For the reasons set out in the thread beginning here, I don't find the problem specified in terms of f(x) very interesting.
In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way
You're assuming the output of V(x) is ordinal. It...
The logic for the first step is the same as for any other step.
Actually, on rethinking, this depends entirely on what you mean by "utility". Here's a way of framing the problem such that the logic can change.
Assume that we have some function V(x) that maps world histories into (non-negative*) real-valued "valutilons", and that, with no intervention from Omega, the world history that will play out is valued at V(status quo) = q.
Omega then turns up and offers you the card deal, with a deck as described above: 90% stars, 10% skulls. S...
Interesting, I'd assumed your definitions of utilon were subtly different, but perhaps I was reading too much into your wording.
The wiki definition focuses on preference: utilons are the output of a set of vNM-consistent preferences over gambles.
Your definition focuses on "values": utilons are a measure of the extent to which a given world history measures up according to your values.
These are not necessarily inconsistent, but I'd assumed (perhaps wrongly) that they differed in two respects.
We can experience things other than pleasure.
I can see the appeal, but I worry that a metaphor where a single person is given a single piece of software, and has the option to rewrite it for their own and/or others’ purposes without grappling with myriad upstream and downstream dependencies, vested interests, and so forth, is probably missing an important part of the dynamics of real-world systems?
(This doesn’t really speak to moral obligations to systems, as much as practical challenges doing anything about them, but my experience is that the latter is a much more binding constraint.)