All of conchis's Comments + Replies

conchis130

I can see the appeal, but I worry that a metaphor where a single person is given a single piece of software, and has the option to rewrite it for their own and/or others' purposes without grappling with myriad upstream and downstream dependencies, vested interests, and so forth, is probably missing an important part of the dynamics of real-world systems?

(This doesn’t really speak to moral obligations to systems, as much as practical challenges doing anything about them, but my experience is that the latter is a much more binding constraint.)

2Rana Dexsin
Indeed. I impulsively wrote some continuation story in response—it's very rough, and the later sections kind of got away from me, but I've posted a scribble of “Bad Reasons Behind Different Systems and a Story with No Good Moral” which may be of relevance.

Additional/complementary argument in favour (and against the “any difference you make is marginal” argument): one’s personal example of viable veganism increases the chances of others becoming vegan (or partially so, which is still a benefit). Under plausible assumptions this effect could be (potentially much) larger than the direct effect of personal consumption decisions.

I have to say that the claimed reductios here strike me as under-argued, particularly when there are literally decades of arguments articulating and defending various versions of moral anti-realism, and which set out a range of ways in which the implications, though decidedly troubling, need not be absurd.

Alon’s Design Principles of Biological Circuits

His 2018 lectures are also available on YouTube and seem pretty good so far, if anyone wants a complement to the book. The course website also has lecture notes and exercises.

4johnswentworth
Meta-note: I'd usually recommend complementing a course with a book by someone else, in order to get a different perspective. However, some professors are uniquely good at teaching their particular thing, and I'd include both Uri Alon and Stephen Boyd (the convex optimization guy) in that list. In those cases it more often makes sense to use materials from the one professor.

To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that lets me have a 0.5 chance of having an okay life.

FWIW, this conclusion is not clear to me. To return to one of my original points: I don't think you can dodge this objection by arguing from potentially idiosyncratic prefe... (read more)

2Chantiel
Yes, that's correct. It's possible that there are some agents with consistent preferences that really would wish to get extraordinarily uncomfortable to avoid the torture. My point was just that this doesn't seem like it would be a common thing for agents to want. Still, it is conceivable that there are at least a few agents out there that would consistently want to opt for the 0.5 chance of being extremely uncomfortable, and I do suppose it would be best to respect their wishes. This is a problem that I hadn't previously fully appreciated, so I would like to thank you for bringing it up. Luckily, I think I've finally figured out a way to adapt my ethical system to deal with this. That is, the adaptation will allow agents to choose the extreme-discomfort-from-dust-specks option if that is what they wish, and for my ethical system to respect their preferences. To do this, allow the measure of satisfaction to include infinitesimals. Then, to respect the preferences of such agents, you just need to pick the right satisfaction measure. Consider an agent for which each 50 years of torture causes a linear decrease in their utility function. For simplicity, imagine torture and discomfort are the only things the agent cares about; they have no other preferences; also assume that the agent dislikes torture more than it dislikes discomfort, but only by a finite amount. Since the agent's utility function/satisfaction measure is linear, I suppose being tortured for an eternity would be infinitely worse for the agent than being tortured for a finite amount of time. So, assign satisfaction 0 to the scenario in which the agent is tortured for eternity. And if the agent is instead tortured for n∈R years, let the agent's satisfaction be 1−nϵ, where ϵ is whatever infinitesimal number you want. If my understanding of infinitesimals is correct, I think this will do what we want it to do in terms of having agents using my ethical system respect the agent's pr

So, I don't think your concern about keeping utility functions bounded is unwarranted; I'm just noting that these problems are part of a broader issue with aggregate consequentialism, not just with my ethical system.

Agreed!

you just need to make it so the supremum of their values is 1 and the infimum is 0.

Fair. Intuitively though, this feels more like a rescaling of an underlying satisfaction measure than a plausible definition of satisfaction to me. That said, if you're a preferentist, I accept this is internally consistent, and likely an improvement on alternative versions of preferentism.    

One issue with only having boundedness above is that the expected value of life satisfaction for an arbitrary agent would probably often be undefined or −∞

... (read more)
1Chantiel
Fair enough. So I'll provide a non-sadistic scenario. Consider again the scenario I previously described in which you have a 0.5 chance of being tortured for 3^^^^3 years, but also have the repeated opportunity to cause yourself minor discomfort in the case of not being tortured and as a result get your possible torture sentence reduced by 50 years. If you have an unbounded-below utility function in which each 50 years causes a linear decrease in satisfaction or utility, then to maximize expected utility or life satisfaction, it seems you would need to opt for living in extreme discomfort in the non-torture scenario to decrease your possible torture time by an astronomically small proportion, provided the expectations are defined. To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that lets me have a 0.5 chance of having an okay life. Oh, I see. And yes, you can have consistent preference orderings that aren't represented as a utility function. And such techniques have been proposed before in infinite ethics. For example, one of Bostrom's proposals to deal with infinite ethics is the extended decision rule. Essentially, it says to first look at the set of actions you could take that would maximize P(infinite good) - P(infinite bad). If there is only one such action, take it. Otherwise, take whatever action among these has the highest expected moral value given a finite universe. As far as I know, you can't represent the above as a utility function, despite it being consistent. However, the big problem with the above decision rule is that it suffers from the fanaticism problem: people would be willing to bear any finite cost, even 3^^^3 years of torture, to have even an unfathomabl

In an infinite universe, there's already infinitely-many people, so I don't think this applies to my infinite ethical system.

YMMV, but FWIW allowing a system of infinite ethics to get finite questions (which should just be a special case) wrong seems a very non-ideal property to me, and suggests something has gone wrong somewhere. Is it really never possible to reach a state where all remaining choices have only finite implications?

1Chantiel
For the record, according to my intuitions, average consequentialism seems perfectly fine to me in a finite universe. That said, if you don't like using average consequentialism in the finite case, I don't personally see what's wrong with just having a somewhat different ethical system for finite cases. I know it seems ad-hoc, but I think there really is an important distinction between finite and infinite scenarios. Specifically, people have the moral intuition that larger numbers of satisfied lives are more valuable than smaller numbers of them, which average utilitarianism conflicts with. But in an infinite universe, you can't change the total amount of satisfaction or dissatisfaction. But, if you want, you could combine both the finite ethical system and the infinite ethical system so that a single principle is used for moral deliberation. This might make it feel less ad-hocy. For example, you could have a moral value function of the form f(total amount of satisfaction and dissatisfaction in the universe) * expected value of life satisfaction for an arbitrary agent in this universe. And let f be some bounded function that is maximized at ∞ and approaches this maximum very slowly. For those who don't want this, they are free to use my total-utilitarian-infinite-ethical system. I think it just ends up as regular total utilitarianism in a finite world, or close to it.

I'll clarify the measure of life satisfaction I had in mind. Imagine if you showed an agent finitely-many descriptions of situations they could end up being in, and asked the agent to pick out the worst and the best of all of them. Assign the worst scenario satisfaction 0 and the best scenario satisfaction 1.

Thanks. I've toyed with similar ideas previously myself. The advantage, if this sort of thing works, is that it conveniently avoids a major issue with preference-based measures: that they're not unique and therefore incomparable across individuals. How... (read more)

1Chantiel
In addition to my previous response, I want to note that the issues with unbounded satisfaction measures are not unique to my infinite ethical system. Instead, they are common potential problems with a wide variety of aggregate consequentialist theories. For example, suppose you're a classical utilitarian with an unbounded utility measure per person. And suppose you know that the universe is finite and will consist of a single inhabitant whose utility's probability distribution follows a Cauchy distribution. Then your expected utilities are undefined, despite the universe being knowably finite. Similarly, imagine if you again used classical utilitarianism but instead have a finite universe with one utility monster and 3^^^3 regular people. Then, if your expected utilities are defined, you would need to give the utility monster what it wants, at the expense of everyone else. So, I don't think your concern about keeping utility functions bounded is unwarranted; I'm just noting that these problems are part of a broader issue with aggregate consequentialism, not just with my ethical system.
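To make the Cauchy point above concrete, here is a minimal sketch (an editorial illustration, not part of the original comment): with a distribution whose expectation is undefined, the running sample mean never settles down, no matter how many draws you take.

```python
import numpy as np

# Draw "utilities" from a standard Cauchy distribution, whose expectation is
# undefined, and track the running sample mean. For a distribution with a
# finite mean the running mean would converge; here it keeps lurching around
# as occasional enormous draws dominate the sum.
rng = np.random.default_rng(0)
samples = rng.standard_cauchy(1_000_000)
running_means = np.cumsum(samples) / np.arange(1, samples.size + 1)

for n in (10**2, 10**4, 10**6):
    print(f"mean of first {n:>9,} draws: {running_means[n - 1]:+.2f}")
```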
1Chantiel
As I said, you can allow for infinitely-many scenarios if you want; you just need to make it so the supremum of their values is 1 and the infimum is 0. That is, imagine there's an infinite sequence of scenarios you can come up with, each of which is worse than the last. Then just require that the infimum of the satisfaction of those scenarios is 0. That way, as you consider worse and worse scenarios, the satisfaction continues to decrease, but never gets below 0. One issue with only having boundedness above is that the expected value of life satisfaction for an arbitrary agent would probably often be undefined or −∞. For example, consider if an agent had a probability distribution like a Cauchy distribution, except that it assigns probability 0 to anything above the maximum level of satisfaction, and is then renormalized to have probabilities sum to 1. If I'm doing my calculus right, the resulting probability distribution's expected value doesn't converge. You could either interpret this as the expected utility being undefined or being −∞, since the Riemann sum approaches −∞ as the width of the columns approaches zero. That said, even if the expectations are defined, it doesn't seem to me that keeping the satisfaction measure bounded above but not below would solve the problem of utility monsters. To see why, imagine a new utility monster as follows. The utility monster feels an incredibly strong need to have everyone on Earth be tortured. For the next hundred years, its satisfaction will decrease by 3^^^3 for every second there's someone on Earth not being tortured. Thus, assuming the expectations converge, the moral thing to do, according to maximizing average, total, or expected-value-conditioning-on-being-in-this-universe life satisfaction, is to torture everyone. This is a problem both in finite and infinite cases. If I understand what you're asking correctly, you can indeed have consistent preferences over universes, even if you d
conchis*10

Re boundedness:

It's important to note that the sufficiently terrible lives need to be really, really, really bad already. So much so that being horribly tortured for fifty years does almost exactly nothing to affect their overall satisfaction. For example, maybe they're already being tortured for more than 3^^^^3 years, so adding fifty more years does almost exactly nothing to their life satisfaction.

I realise now that I may have moved through a critical step of the argument quite quickly above, which may be why this quote doesn't seem to capture the core ... (read more)

1Chantiel
To some extent, whether or not life satisfaction is bounded just comes down to how you want to measure it. But it seems to me that any reasonable measure of life satisfaction really would be bounded. I'll clarify the measure of life satisfaction I had in mind. Imagine if you showed an agent finitely-many descriptions of situations they could end up being in, and asked the agent to pick out the worst and the best of all of them. Assign the worst scenario satisfaction 0 and the best scenario satisfaction 1. For any other outcome w, set the satisfaction to p, where p is the probability at which the agent would be indifferent between getting satisfaction 1 with probability p and satisfaction 0 with probability 1 - p. This is very much like a certain technique for constructing a utility function from elicited preferences. So, according to my definition, life satisfaction is bounded by definition. (You can also take the limit of the agent's preferences as the number of described situations approaches infinity, if you want and if it converges. If it doesn't, then you could instead just ask the agent about its preferences with infinitely-many scenarios and require the infimum of satisfactions to be 0 and the supremum to be 1. Also you might need to do something special to deal with agents with preferences that are inconsistent even given infinite reflection, but I don't think this is particularly relevant to the discussion.) Now, maybe you're opposed to this measure. However, if you reject it, I think you have a pretty big problem you need to deal with: utility monsters. To quote Wikipedia: If you have some agents with unbounded measures of satisfaction, then I think that would imply you would need to be willing to cause arbitrarily large amounts of suffering of agents with bounded satisfaction in order to increase the satisfaction of a utility monster as much as possible. This seems pretty horrible to me, so I'm satisfied with keeping the measure of life satisfaction to be bo
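A minimal sketch of the probability-equivalent construction described above, assuming the agent is an expected-utility maximizer with some raw utility over the finitely many described scenarios (the numbers below are made up for illustration); for such an agent the elicitation reduces to an affine rescaling onto [0, 1].

```python
def satisfaction(raw_utilities):
    """Rescale raw utilities over the described scenarios into [0, 1].

    The worst scenario gets 0, the best gets 1, and any other scenario gets
    the indifference probability p between having it for sure and a best/worst
    gamble; for an expected-utility agent that p is just this affine map.
    """
    worst, best = min(raw_utilities), max(raw_utilities)
    return [(u - worst) / (best - worst) for u in raw_utilities]

print(satisfaction([-10.0, 0.0, 4.0, 6.0]))  # [0.0, 0.625, 0.875, 1.0]
```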
conchis*10

Re the repugnant conclusion: apologies for the lazy/incorrect example. Let me try again with better illustrations of the same underlying point. To be clear, I am not suggesting these are knock-down arguments; just that, given widespread (non-infinitarian) rejection of average utilitarianisms, you probably want to think through whether your view suffers from the same issues and whether you are ok with that. 

Though there's a huge literature on all of this, a decent starting point is here:

However, the average view has very little support among moral phil

... (read more)
1Chantiel
Thanks for the response. In an infinite universe, there's already infinitely-many people, so I don't think this applies to my infinite ethical system. In a finite universe, I can see why those verdicts would be undesirable. But in an infinite universe, there's already infinitely-many people at all levels of suffering. So, according to my own moral intuition at least, it doesn't seem that these are bad verdicts. You might have differing moral intuitions, and that's fine. If you do have an issue with this, you could potentially modify my ethical system to make it an analogue of total utilitarianism. Specifically, consider the probability distribution something would have if it conditions on ending up somewhere in this universe, but doesn't even know if it will be an actual agent with preferences or not. That is, it uses some prior that allows for the possibility of ending up as a preference-free rock or something. Also, make sure the measure of life satisfaction treats existences with neutral welfare and the existences of things without preferences as zero. Now, simply modify my system to maximize the expected value of life satisfaction, given this prior. That's my total-utilitarianism-infinite-analog ethical system. So, to give an example of how this works, consider the situation in which you can torture one person to avoid creating a large number of people with pretty decent lives. Well, the large number of people with pretty decent lives would increase the moral value of the world, because creating those people makes it more likely, under the prior, that something would end up as an agent with positive life satisfaction rather than as some inanimate object, conditioning only on being something in this universe. But adding a tortured creature would only decrease the moral value of the universe. Thus, this total-utilitarian-infinite-analogue ethical system would prefer creating the large number of people with decent lives to torturing one creature. Of course, i

Fair point re use cases! My familiarity with DSGE models is about a decade out-of-date, so maybe things have improved, but a lot of the wariness then was that typical representative-agent DSGE isn't great where agent heterogeneity and interactions are important to the dynamics of the system, and/or agents fall significantly short of the rational expectations benchmark, and that in those cases you'd plausibly be better off using agent-based models (which has only become easier in the intervening period).

I (weakly) believe this is mainly because econometrists

... (read more)
3johnswentworth
Yeah, this all sounds right. Personally, I typically assume both heterogenous utilities and heterogenous world-models when working with DSGE, at which point it basically becomes an analytic tool for agent-based models.
conchis*10

My point was more that, even if you can calculate the expectation, standard versions of average utilitarianism are usually rejected for non-infinitarian reasons (e.g. the repugnant conclusion) that seem like they would plausibly carry over to this proposal as well. I haven't worked through the details though, so perhaps I'm wrong.

Separately, while I understand the technical reasons for imposing boundedness on the utility function, I think you probably also need a substantive argument for why boundedness makes sense, or at least is morally acceptable. Bound... (read more)

1Chantiel
If I understand correctly, average utilitarianism isn't rejected due to the repugnant conclusion. In fact, it's the opposite: the repugnant conclusion is a problem for total utilitarianism, and average utilitarianism is one way to avoid the problem. I'm just going off what I read on The Stanford Encyclopedia of Philosophy, but I don't have particular reason to doubt what it says. Yes, I do think boundedness is essential for a utility function. The issue with unbounded utility functions is that the expected value according to some probability distributions will be undefined. For example, if your utility follows a Cauchy distribution, then the expected utility is undefined. Your actual probability distribution over utilities in an unbounded utility function wouldn't exactly follow a Cauchy distribution. However, I think that for whatever reasonable probability distribution you would use in real life, an unbounded utility function would still have an undefined expected value. To see why, note that there is a non-zero probability that your utility really will be sampled from a Cauchy distribution. For example, suppose you're in some simulation run by aliens, and to determine your utility in your life after the simulation ends, they sample from the Cauchy distribution. (This is supposing that they're powerful enough to give you any utility). I don't have any completely conclusive evidence to rule out this possibility, so it has non-zero probability. It's not clear to me why an alien would do the above, or that they would even have the power to, but I still have no way to rule it out with infinite confidence. So your expected utility, conditioning on being in this situation, would be undefined. As a result, you can prove that your total expected utility would also be undefined. So it seems to me that the only way you can actually have your expected values be robustly well-defined is by having a bounded utility function. In principle, I do think this could occur

Worth noting that many economists (including e.g. Solow, Romer, Stiglitz among others) are pretty sceptical (to put it mildly) about the value of DSGE models (not without reason, IMHO). I don't want to suggest that the debate is settled one way or the other, but do think that the framing of the DSGE approach as the current state-of-the-art at least warrants a significant caveat emptor. Afraid I am too far from the cutting edge myself to have a more constructive suggestion though. 

6johnswentworth
Two comments on this:
* First, DSGE models as actually used are usually pretty primitive. I (weakly) believe this is mainly because econometrists mostly haven't figured out that they can backpropagate through complex models, and therefore they can't fit the parameters to real data except in some special simple cases. From what I've seen, they usually make extremely restrictive assumptions (like Cobb-Douglas utilities) in order to simplify the models.
* Second, the use-case matters. We'd expect e.g. financial markets to be a much better fit for DSGE models than entire economies. And personally, I don't even necessarily consider economies the most interesting use-case - for instance, to the extent that a human is well-modelled as a collection of subagents, it makes sense to apply a DSGE model to a single human's preferences/decisions. (And same for other biological systems well-modelled as a collection of subagents.)

Anyway, the important point here is that I'm more interested in the cutting edge of mathematical-models-of-collections-of-agents than in forecasting-whole-economies (since that's not really my main use-case), and I do think DSGE models are the cutting edge in that.
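For readers unfamiliar with the term, a minimal sketch of the kind of restrictive functional form mentioned above (a Cobb-Douglas utility); the goods and exponents here are made-up illustrations, not taken from any particular DSGE model.

```python
import numpy as np

def cobb_douglas(quantities, exponents):
    """Cobb-Douglas utility U(x) = prod_i x_i ** a_i over a bundle of goods."""
    q = np.asarray(quantities, dtype=float)
    a = np.asarray(exponents, dtype=float)
    return float(np.prod(q ** a))

print(cobb_douglas([2.0, 3.0], [0.5, 0.5]))  # sqrt(2 * 3) ≈ 2.449
```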

This sounds essentially like average utilitarianism with bounded utility functions. Is that right? If so, have you considered the usual objections to average utilitarianism (in particular, re rankings over different populations)?

1Chantiel
Thank you for responding. I actually had someone else bring up the same point in a review; maybe I should have addressed this in the article. The average life satisfaction is undefined in a universe with infinitely-many agents of varying life-satisfaction. Thus a moral system using it suffers from infinitarian paralysis. My system doesn't worry about averages, and thus does not suffer from this problem.
Answer by conchis20

Have you read s1gn1f1cant d1g1t5?

1Klen Salubri
I mean, do you recommend reading it? What is this book in a nutshell?
5Klen Salubri
Not really. Should I?
conchis00

There is no value to a superconcept that crosses that boundary.

This doesn't seem to me to argue in favour of using wording that's associated with the (potentially illegitimate) superconcept to refer to one part of it. Also, the post you were responding to (conf)used both concepts of utility, so by that stage, they were already in the same discussion, even if they didn't belong there.

Two additional things, FWIW:

(1) There's a lot of existing literature that distinguishes between "decision utility" and "experienced utility" (where "... (read more)

conchis10

I'm hesitant to get into a terminology argument when we're in substantive agreement. Nonetheless, I personally find your rhetorical approach here a little confusing. (Perhaps I am alone in that.)

Yes, it's annoying when people use the word 'fruit' to refer to both apples and oranges, and as a result confuse themselves into trying to derive propositions about oranges from the properties of apples. But I'd suggest that it's not the most useful response to this problem to insist on using the word 'fruit' to refer exclusively to apples, and to proceed to make c... (read more)

5[anonymous]
"Fruit" is a natural category; apples and oranges share interesting characteristics that make it useful to talk about them in general. "Utility" is not. The two concepts, "that for which expectation is legitimate", and some quantity related to inter-agent preference aggregation do not share very many characteristics, and they are not even on the same conceptual abstraction layer. The VNM-stuff is about decision theory. The preference aggregation stuff is about moral philosophy. Those should be completely firewalled. There is no value to a superconcept that crosses that boundary. As for me using the word "utility" in this discussion, I think it should be unambiguous that I am speaking of VNM-stuff, because the OP is about VNM, and utilitarianism and VNM do not belong in the same discussion, so you can infer that all uses of "utility" refer to the same thing. Nevertheless, I will try to come up with a less ambiguous word to refer to the output of a "preference function".
conchis-20

While I'm in broad agreement with you here, I'd nitpick on a few things.

Different utility functions are not commensurable.

Agree that decision-theoretic or VNM utility functions are not commensurable - they're merely mathematical representations of different individuals' preference orderings. But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons. (And unless you accept the possibility ... (read more)

1[anonymous]
I ignore it because they are entirely different concepts. I also ignore aerodynamics in this discussion. It is really unfortunate that we use the same word for them. It is further unfortunate that even LWers can't distinguish between an apple and an orange if you call them both "apple". "That for which the calculus of expectation is legitimate" is simply not related to inter-agent preference aggregation.
conchis00

Are you familiar with the debate between John Harsanyi and Amartya Sen on essentially this topic (which we've discussed ad nauseam before)? In response to an argument of Harsanyi's that purported to use the VNM axioms to justify utilitarianism, Sen reaches a conclusion that broadly aligns with your take on the issue.

If not, some useful references here.

ETA: I worry that I've unduly maligned Harsanyi by associating his argument too heavily with Phil's post. Although I still think it's wrong, Harsanyi's argument is rather more sophisticated than Phil's, and w... (read more)

6[anonymous]
Oh wow. No, not at all. You can't derive mathematical results by playing word games. Even if you could, it doesn't even make sense to take the average utility of a population. Different utility functions are not commensurable. No. That is not at all how it works. A deterministic coin toss will end up the same in all Everett branches, but have subjective probability distributed between two possible worlds. You can't conflate them; they are not the same. Having your math rely on a misinterpreted physical theory is generally a bad sign... Really? Translate the axioms into statements about people. Do they still seem reasonable?
1. Completeness. Doesn't hold. Preferred by who? The fact that we have a concept of "Pareto optimal" should raise your suspicions.
2. Transitivity. Assuming you can patch Completeness to deal with Pareto-optimality, this may or may not hold. Show me the math.
3. Continuity. Assuming we let population frequency or some such stand in for probability. I reject the assumption that strict averaging by population is valid. So much for reasonable assumptions.
4. Independence. Adding another subpopulation to all outcomes is not necessarily a no-op.

Other problems include the fact that population can change, while the sum of probabilities is always 1. The theorem probably relies on this. Assuming you could construct some kind of coherent population-averaging theory from this, it would not involve utility or utility functions. It would be orthogonal to that, and would have to be able to take into account egalitarianism and population change, and varying moral importance of agents and such. Shocking indeed.
conchis00

It wouldn't necessarily reflect badly on her: if someone has to die to take down Azkaban,* and Harry needs to survive to achieve other important goals, then Hermione taking it down seems like a non-foolish solution to me.

*This is hinted at as being at least a strong possibility.

[This comment is no longer endorsed by its author]
conchis70

Although I agree it's odd, it does in fact seem that there is gender information transferred / inferred from grammatical gender.

From Lera Boroditsky's Edge piece

Does treating chairs as masculine and beds as feminine in the grammar make Russian speakers think of chairs as being more like men and beds as more like women in some way? It turns out that it does. In one study, we asked German and Spanish speakers to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical g

... (read more)
conchis20

My understanding of the relevant research* is that it's a fairly consistent finding that masculine generics (a) do cause people to imagine men rather than women, and (b) can have negative effects ranging from impaired recall, comprehension, and self-esteem in women, to reducing female job applications. (Some of these negative effects have also been established for men from feminine generics as well, which favours using they/them/their rather than she/her as replacements.)

* There's an overview of some of this here (from p.26).

1steven0461
I wonder if they tested whether individuals suffer similar negative effects from plural generics.
conchis00

Isn't the main difference just that they have a bigger sample (e.g. "4x" in the hardcore group)?

conchis00

Isn't the claim in 6 (that there is a planning-optimal choice, but no action-optimal choice) inconsistent with 4 (a choice that is planning optimal is also action optimal)?

0CarlShulman
4 is true if randomized strategies are allowed?
conchis10

Laying down rules for what counts as evidence that a body is considering alternatives, is mess[y]

Agreed. But I don't think that means that it's not possible to do so, or that there aren't clear cases on either side of the line. My previous formulation probably wasn't as clear as it should have been, but would the distinction seem more tenable to you if I said "possible in principle to observe physical representations of" instead of "possible in principle to physically extract"? I think the former better captures my intended meaning.

I... (read more)

1SilasBarta
Heh, I actually had a response half-written up to this position, until I decided that something like the comment I did make would be more relevant. So, let's port that over... The answer to your question is: yes, as long as you can specify what observations of the system (and you may of course include any physically-possible mode of entanglement) count as evidence for it having considered multiple alternatives. This criterion, I think, is what AnnaSalamon should be focusing on: what does it mean for "alternative-consideration" to be embedded in a physical system? In such a limited world as chess, it's easy to see the embedding. [Now begins what I hadn't written before.] I think that's a great example of what I'm wondering about: what is this possible class of intelligent algorithms that stands in contrast to CSAs? If there were a good chess computer that was not a CSA, what would it be doing instead? You could imagine one, perhaps, that computes moves purely as a function of the current board configuration. If bishop here, more than three pawns between here, move knight there, etc. The first thing to notice is that for the program to actually be good, it would require that some other process was able to find a lot of regularity to the search space, and compactly express it. And to find that regularity, it would have to interact with it. So, such a good "insta-evaluator" implicitly contains the result of previous simulations. Arguably, this, rather than(?) a CSA is what humans (mostly) are. Throughout our evolutionary history, a self-replicating process iterated through a lot of experiences that told it what "does work" and "doesn't work". The way we exist today, just the same as in the case of chess above, implicitly contains a compression of previous evaluations of "does work" and "doesn't work", known as heuristics, which together guide our behavior. Is a machine that acts purely this way, and without humans' ability to consciously consider alternatives, wha
conchis50

FWIW, the exact quote (from pp.13-14 of this article) is:

Far better an approximate answer to the right question, which is often vague, than the exact answer to the wrong question, which can always be made precise. [Emphasis in original]

Your paraphrase is snappier though (as well as being less ambiguous; it's hard to tell in the original whether Tukey intends the adjectives "vague" and "precise" to apply to the questions or the answers).

conchis00

all of the above assumes a distinction I'm not convinced you've made

If it is possible in principle, to physically extract the alternatives/utility assignments etc., wouldn't that be sufficient to ground the CSA--non-CSA distinction, without running afoul of either current technological limitations, or the pebble-as-CSA problem? (Granted, we might not always know whether a given agent is really a CSA or not, but that doesn't seem to obviate the distinction itself.)

3SilasBarta
Thanks for your reply. For the purposes of the argument I was making, "possible in principle to physically extract" is the same as "possible in principle to extract". For once you know the laws of physics, which supposedly you can learn from a pebble, you can physically extract data that is functionally equivalent to alternatives/utility assignments. For example, our knowledge of thermodynamics and chemistry tells us that a chemical would go to a lower energy state (and perhaps release heat) if it could observe certain other chemicals (which we call "catalysts"). It is our knowledge of science that justifies saying that there is this lower energy state that it "has a tendency" to want to go to, which is an "alternative" lacking "couldness" in the same sense of the proposed CSAs. Laying down rules for what counts as evidence that a body is considering alternatives, is messier than AnnaSalamon thinks.
conchis10

The Snoep paper Will linked to measured the correlation for the US, Denmark and the Netherlands (and found no significant correlation in the latter two).

The monopolist religion point is of course a good one. It would be interesting to see what the correlation looked like in relatively secular, yet non-monopolistic countries. (Not really sure what countries would qualify though.)

1taw
I'm going to completely ignore "statistical significance", as scientific papers are well known to have no idea how to do statistics properly with multiple hypotheses, and can be assumed to be doing it wrong until proven otherwise. If the null hypothesis held, the chance of almost all signs pointing in the same direction would be very low. As far as I can tell, what the paper finds is that religion is less effective in Denmark and the Netherlands than in the US, but it increases happiness, and it's extremely unlikely to be a false positive result due to chance.
conchis20

We already have some limited evidence that conventionally religious people are happier

But see Will Wilkinson on this too (arguing that this only really holds in the US, and speculating that it's really about "a good individual fit with prevailing cultural values" rather than religion per se).

1taw
That's a good counter-argument, but the linked post doesn't actually measure religion-happiness correlation within those other countries (which is the relevant factor), and it's very plausible that European monopolistic religions are far less effective than American freely competing religions for creating happiness.
conchis40

Thanks for the explanation.

The idea is that when you are listening to music, you are handicapping yourself by taking some of the attention of the aural modality.

I'd heard something similar from a friend who majored in psychology, but they explained it in terms of verbal processing rather than auditory processing more generally, which is why (they said) music without words wasn't as bad.

I'm not sure whether it's related, but I've also been told by a number of musically-trained friends that they can't work with music at all, because they can't help but ... (read more)

0thomblake
In a possibly-related anecdote, I can't listen to music I've played in Guitar Hero while working, as my mind switches into Guitar Hero mode and all I see are streams of colored buttons.
0gwern
I find that very interesting too, since I am in fact the opposite of your musically-trained friends: I am quite rubbish at anything musical, am hard-of-hearing, and have great difficulty analysing music & songs. (In part that's why I listen to so much J-pop: since I often can't understand the lyrics even if they're in English...)
conchis00

Sometimes, but it varies quite a lot depending on exactly what I'm doing. The only correlation I've noticed between the effect of music and work-type is that the negative effect of lyrics is more pronounced when I'm trying to write.

Of course, it's entirely possible that I'm just not noticing the right things - which is why I'd be interested in references.

8Vladimir_Nesov
The idea is that when you are listening to music, you are handicapping yourself by taking up some of the attention of the aural modality. If you are used to relying on it in your thinking, this makes you impaired. This is related to an experiment that Feynman describes in this video: Feynman 'Fun to Imagine' 11: Ways of Thinking. In the experiment, you need to count in your mind, while doing various activities. That attention was really paid to the counting is controlled by you first calibrating and then using the counting process to predict when exactly a minute has passed. Thus, you can't cheat, you have to really go on counting. Feynman himself says that he was unable to speak while counting, as he was "speaking" and "hearing" these numbers in his mind. Another man he asked to do the experiment had no difficulty speaking, but was unable to read: he then explained that he was counting visually. I tried it both ways, and the difference shows in different speeds of counting in these modes which are hard to synchronize (and so just switching between them doesn't work very well).
conchis10

If anyone does have studies to hand I'd be grateful for references.* I personally find it difficult to work without music. That may be habit as much as anything else, though I expect part of the benefit is due to shutting out other, more distracting noise. I've noticed negative effects on my productivity on the rare occasions I've listened to music with lyrics, but that's about it.

* I'd be especially grateful for anything that looks at how much individual variation there is in the effect of music.

0Vladimir_Nesov
Do you rely mostly on visual imagination?
conchis00

Fair enough. My impression of the SWB literature is that the relationship is robust, both in a purely correlational sense, and in papers like the Frey and Stutzer one where they try to control for confounding factors like personality and selection. The only major catch is how long it takes individuals to adapt after the initial SWB spike.

Indeed, having now managed to track down the paper behind your first link, it seems like this is actually their main point. From their conclusion:

Our results show that (a) selection effects appear to make happy people m

... (read more)
conchis00

FWIW, this seems inconsistent with the evidence presented in the paper linked here, and most of the other work I've seen. The omitted category in most regression analyses is "never married", so I don't really see how this would fly.

conchis10

Sorry for the delay in getting back to you (in fairness, you didn't get back to me either!). A good paper (though not a meta-analysis) on this is:

Stutzer and Frey (2006) Does Marriage Make People Happy or Do Happy People Get Married? Journal of Socio-Economics 35:326-347. links

The lit review surveys some of the other evidence.

I a priori doubt all the happiness research as based on silly questionnaires and naive statistics

I'm a little puzzled by this comment given that the first link you provided looks (on its face) to be based on exactly this sort of e... (read more)

0taw
Thanks. By doubt I just mean it's really, really easy to get it spectacularly wrong in a systemic way, in too many ways, so I'm only going to believe the result if it's robust across a wide variety of tests and situations. Not that there's no value in it.
conchis30

this post infers possible causation based upon a sample size of 1

Eh? Pica is a known disorder. The sample size for the causation claim is clearly more than 1.

[ETA: In case anyone's wondering why this comment no longer makes any sense, it's because most of the original parent was removed after I made it, and replaced with the current second para.]

1SforSingularity
EDIT: The claim that Pica is a known disorder is distinct from claims about what causes it. The only evidence given in the post is one personal experience. However, the Wikipedia article does, referencing a study, state that:
conchis60

I for one comment far more on Phil's posts when I think they're completely misguided than I do otherwise. Not sure what that says about me, but if others did likewise, we would predict precisely the relationship Phil is observing.

conchis00

Interesting. All the other evidence I've seen suggest that committed relationships do make people happier, so I'd be interested to see how these apparently conflicting findings can be resolved.

Part of the difference could just be the focus on marriage vs. stable relationships more generally (whether married or not): I'm not sure there's much reason to think that a marriage certificate is going to make a big difference in and of itself (or that anyone's really claiming that it would). In fact, there's some, albeit limited, evidence that unmarried couples a... (read more)

0taw
"All other evidence" being? I a priori doubt all the happiness research as based on silly questionnaires and naive statistics (and most other psychological research). Is there any good metaanalysis showing anything like that?
conchis00

Me too. It gets especially embarrassing when you end up telling someone a story about a conversation they themselves were involved in.

conchis30

Warning, nitpicks follow:

The sentence "All good sentences must at least one verb." has at least one verb. (It's an auxiliary verb, but it's still a verb. Obviously this doesn't make it good; but it does detract from the point somewhat.)

"2+2=5" is false, but it's not nonsense.

conchis20

I was objecting to the subset claim, not the claim about unit equivalence. (Mainly because somebody else had just made the same incorrect claim elsewhere in the comments to this post.)

As it happens, I'm also happy to object to claim about unit equivalence, whatever the wiki says. (On what seems to be the most common interpretation of utilons around these parts, they don't even have a fixed origin or scale: the preference orderings they represent are invariant to affine transforms of the utilons.)

-1timtyler
My original claim was about what the Wiki says. Outside that context we would have to start by stating definitions of Hedons and Utilons before there could be much in the way of sensible conversation.
conchis00

To expand on this slightly, it seems like it should be possible to separate goal achievement from risk preference (at least under certain conditions).

You first specify a goal function g(x) designating the degree to which your goals are met in a particular world history, x. You then specify another (monotonic) function, f(g) that embodies your risk-preference with respect to goal attainment (with concavity indicating risk-aversion, convexity risk-tolerance, and linearity risk-neutrality, in the usual way). Then you maximise E[f(g(x))].

If g(x) is only ordin... (read more)
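A minimal sketch of the E[f(g(x))] separation described above, with made-up lotteries over goal attainment g and an assumed concave f: a risk-neutral (linear) f is indifferent between a safe and a risky lottery with the same expected g, while a risk-averse (concave) f prefers the safe one.

```python
import math

def expected_value(f, outcomes, probs):
    """E[f(g)] for a lottery over goal-attainment levels g."""
    return sum(p * f(g) for g, p in zip(outcomes, probs))

safe = ([0.5], [1.0])              # g = 0.5 for sure
risky = ([0.0, 1.0], [0.5, 0.5])   # fair coin flip between g = 0 and g = 1

def f_neutral(g):   # linear f: risk-neutral
    return g

def f_averse(g):    # concave f: risk-averse
    return math.sqrt(g)

# Linear f: 0.5 vs 0.5 (indifferent). Concave f: ~0.707 vs 0.5 (prefers safe).
for name, f in (("neutral", f_neutral), ("averse", f_averse)):
    print(name, expected_value(f, *safe), expected_value(f, *risky))
```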

conchis10

Utility means "the function f, whose expectation I am in fact maximizing".

There are many definitions of utility, of which that is one. Usage in general is pretty inconsistent. (Wasn't that the point of this post?) Either way, definitional arguments aren't very interesting. ;)

The interesting result is that if you're maximizing something you may be vulnerable to a failure mode of taking risks that can be considered excessive.

Your maximand already embodies a particular view as to what sorts of risk are excessive. I tend to the view that if ... (read more)

1DanArmak
Yes, that was the point :-) On my reading of OP, this is the meaning of utility that was intended. Yes. Here's my current take: The OP argument demonstrates the danger of using a function-maximizer as a proxy for some other goal. If there can always exist a chance to increase f by an amount proportional to its previous value (e.g. double it), then the maximizer will fall into the trap of taking ever-increasing risks for ever-increasing payoffs in the value of f, and will lose with probability approaching 1 in a finite (and short) timespan. This qualifies as losing if the original goal (the goal of the AI's designer, perhaps) does not itself have this quality. This can be the case when the designer sloppily specifies its goal (chooses f poorly), but perhaps more interesting/vivid examples can be found.
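A minimal sketch of the dynamic described above, borrowing the 90%-star / 10%-skull odds from the card example elsewhere in this thread: each accepted deal doubles f with probability 0.9 and loses everything otherwise, so expected f grows without bound while the probability of not having already lost everything goes to zero.

```python
p_win = 0.9  # probability each accepted deal doubles f rather than zeroing it

for n in (1, 10, 50, 100):
    p_survive = p_win ** n                 # chance of never hitting the losing outcome
    expected_multiple = (2 * p_win) ** n   # expected multiple of the starting value of f
    print(f"deals={n:3d}  P(still alive)={p_survive:.2e}  E[f]/f_start={expected_multiple:.3g}")
```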
conchis00

Crap. Sorry about the delete. :(

conchis10

Redefining "utility" like this doesn't help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we're actually maximizing?

It wasn't intended to help with the the problem specified in terms of f(x). For the reasons set out in the thread beginning here, I don't find the problem specified in terms of f(x) very interesting.

In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way

You're assuming the output of V(x) is ordinal. It... (read more)

conchis30

The logic for the first step is the same as for any other step.

Actually, on rethinking, this depends entirely on what you mean by "utility". Here's a way of framing the problem such that the logic can change.

Assume that we have some function V(x) that maps world histories into (non-negative*) real-valued "valutilons", and that, with no intervention from Omega, the world history that will play out is valued at V(status quo) = q.

Omega then turns up and offers you the card deal, with a deck as described above: 90% stars, 10% skulls. S... (read more)

0DanArmak
Redefining "utility" like this doesn't help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we're actually maximizing? In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way (f is increasing in V(x)). Of course we can find an f such that a doubling in V translates to adding a constant to f, or if we like, even an infinitesimal increase in f. But all this means is that Omega is offering us the wrong thing, which we don't really value.
conchis00

Interesting, I'd assumed your definitions of utilon were subtly different, but perhaps I was reading too much into your wording.

The wiki definition focuses on preference: utilons are the output of a set of vNM-consistent preferences over gambles.

Your definition focuses on "values": utilons are a measure of the extent to which a given world history measures up according to your values.

These are not necessarily inconsistent, but I'd assumed (perhaps wrongly) that they differed in two respects.

  1. Preferences are simply a binary relation, that does
... (read more)
conchis00

since Hedons are a subset of Utilons

Not true. Even according to the wiki's usage.

-1timtyler
What the Wiki says is: "Utilons generated by fulfilling base desires are hedons". I think it follows from that that Utilons and Hedons have the same units. I don't much like the Wiki on these issues - but I do think it a better take on the definitions than this post.
conchis00

We can experience things other than pleasure.
