All of Beluga's Comments + Replies

Beluga110

Dear Americans,

While spending a holiday in the New Orleans and Mississippi region, I was baffled by the typical temperatures in air-conditioned rooms. The point of air conditioning is to make people feel comfortable, right? It is obviously very bad at achieving this. I saw shivering girls with blue lips waiting in the airport. I saw ladies carrying jackets that they put on as soon as they entered an air-conditioned room. The rooms were often so cold that I felt relieved the moment I left them and went back into the heat. Cooling down less than t... (read more)

4moridinamael
First, the temperature will be uneven throughout any given building. To ensure that the outskirts of a large building are adequately cooled, the interior may end up frigid. This effect is more pronounced in larger buildings. Please complain to your nearest HVAC contractor, not to us poor Texans.

Second, people who are just coming in from 115 F outdoor temperatures actually tend to want it to be nice and cold inside. Believe me.

Third, the outdoor temperature varies over the course of a day. A thermostat setting that results in an acceptable indoor temperature at noon might be causing very cold indoor temperatures at 6 PM, even though nobody touched the thermostat.

Fourth, colder air is drier, which causes sweat to evaporate faster. So there's a sweat-evaporating benefit along with the rapid-cooling benefit, which is widely appreciated when every single person entering a building is drenched in sweat.

The best way to learn these lessons is simply to live in Texas and observe your own behavior vis-a-vis air-conditioning preferences.
3bogus
There is no "optimally-comfortable temperature" - different folks want different temps! The optimum choice is in fact to have two rooms/environments, one of which cools down a bit more, the other less (or not at all). If you feel cold, just spend some time in the warmer room.
4Viliam
Some form of signalling? Air-conditioning is higher status than no air-conditioning. Higher-status people are more likely to live with air-conditioning; lower-status people are more likely to live without it. Lower-status people will feel more inconvenienced by too much air-conditioning, because it is a greater shock for them. Complaining about too much air-conditioning is thus evidence of lower status. People who want to seem higher-status will avoid complaining about air-conditioning (and maybe just dress warmer). If all high-status people agree that the air-conditioning is okay as it is, it will remain as it is, because higher-status people make the decisions.
1Richard Korzekwa
Yeah, this is a thing, and I hear plenty of Americans make baffled complaints about it as well. I don't know the answer, but this is my guess. A while back, there was a flurry of news sites talking about air conditioning being "sexist". The short version is that standards for climate control were all written when offices were full of men in suits. Times have changed, in terms of who's wearing what in which buildings, but things like building codes and temperature guidelines haven't caught up.
4shev
I think you get more of that in Texas and the southeast. It (by my observation - very much a stereotype) correlates with driving big trucks, eating big meals, liking steak dinners and soda and big desserts, obesity, not caring about the environment, and taking strong unwavering opinions on things. And with conservatism, but not exclusively. I distinctly remember driving in my high school band director's car once, maybe a decade ago, and he was blasting the AC at max when it maybe needed to be on the lowest setting, tops -- it seemed to reflect a mindset that "I want to get cold NOW" when it's hot, to the point of overreaction. Maybe a mindset that - if the sun is bright and on my face, I need a lot of cold air, even if the rest of me doesn't need it? Or maybe, 'it feels hot in the world so I want a lot of cold air'. Certainly there was no realization that it was excessive, and he didn't seem bothered by the unnecessary use of resources. I've noticed this same mindset a lot ever since, and I still don't understand it.
Beluga10

Not sure I understand your question, but:

  • I assume that each civilization only cares about itself. So one civilization succeeding does not "lead to large positive utilities for all future civilisations", only for itself. If civilization A assigns positive or negative value to civilization B succeeding, the expected utility calculations become more complicated.
  • You cannot "let the game end". The fact that the game ends when one player receives R only represents the fact that each player knows that no previous player has received R (i.e., we arguably know that no civilization so far has successfully colonized space in our neighborhood).
0habeuscuppus
Wouldn't it be more accurate to say that R represents an enduring multi-system technological civilization, and not a mere colonial presence? I don't think we can arguably claim that space in our stellar neighborhood has never been colonized, just that it does not appear to be colonized currently.
Beluga10

Thanks a lot for your comments, they were very insightful for me. Let me play the Advocatus Diaboli here and argue from the perspective of a selfish agent against your reasoning (and thus also against my own, less refined version of it).

"I object to the identification 'S = $B'. I do not care about the money owned by the person in cell B, I only do so if that person is me. I do not know whether the coin has come up heads or tails, but I do not care about how much money the other person that may have been in cell B had the coin come up differently would... (read more)

0lackofcheese
First of all, I think your argument from connection of past/future selves is just a specific case of the more general argument for reflective consistency, and thus does not imply any kind of "selfishness" in and of itself. More detail is needed to specify a notion of selfishness.

I understand your argument against identifying yourself with another person who might counterfactually have been in the same cell, but the problem here is that if you don't know how the coin actually came up, you still have to assign amounts of "care" to the possible selves that you could actually be. Let's say that, as in my reasoning above, there are two cells, B and C; when the coin comes up tails, humans are created in both cell B and cell C, but when the coin comes up heads, a human is created in either cell B or cell C, with equal probability. Thus there are 3 "possible worlds":

  1) p=1/2: human in both cells
  2) p=1/4: human in cell B, cell C empty
  3) p=1/4: human in cell C, cell B empty

If you're a selfish human and you know you're in cell B, then you don't care about world (3) at all, because there is no "you" in it. However, you still don't know whether you're in world (1) or (2), so you still have to "care" about both worlds. Moreover, in either world the "you" you care about is clearly the person in cell B, and so I think the only utility function that makes sense is S = $B. If you want to think about it in terms of either SSA-like or SIA-like assumptions, you get the same answer, because both in world (1) and world (2) there is only a single observer who could be identified as "you".

Now, what if you didn't know whether you were in cell B or cell C? That's where things are a little different. In that case, there are two observers in world (1), either of whom could be "you". There are basically two different ways of assigning utility over the two different "yous" in world (1): adding them together, like a total utilitarian, and averaging them, like an average utilitarian; the result
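The renormalization of "care" over the worlds that contain a "you" can be made concrete with a small numerical sketch (hypothetical code, just restating the three worlds and the selfish-agent reasoning above):

```python
# Prior weights of the three possible worlds described above.
priors = {"both_cells": 0.5, "human_in_B": 0.25, "human_in_C": 0.25}

# A selfish human who knows they are in cell B assigns zero care to the
# world where cell B is empty, then renormalizes over the remaining worlds.
care = {w: p for w, p in priors.items() if w != "human_in_C"}
total = sum(care.values())
care = {w: p / total for w, p in care.items()}

# care["both_cells"] = 2/3, care["human_in_B"] = 1/3
```

So conditional on being in cell B, the agent weights the two-human world twice as heavily as the one-human world, and in both the utility that matters is $B.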
Beluga00

The decision you describe is not stable under pre-commitments. Ahead of time, all agents would pre-commit to the $2/3. Yet they seem to change their mind when presented with the decision. You seem to be double counting, using both the Bayesian updating and the fact that their own decision is responsible for the other agent's decision as well.

Yes, this is exactly the point I was trying to make -- I was pointing out a fallacy. I never intended "lexicality-dependent utilitarianism" to be a meaningful concept, it's only a name for thinking in this fallacious way.

[This comment is no longer endorsed by its author]Reply
Beluga10

I elaborated on this difference here. However, I don't think this difference is relevant for my parent comment. By indexical utility functions I simply mean selfishness or "selfishness plus hating the other person if another person exists", while by lexicality-independent utility functions I mean total and average utilitarianism.

Beluga10

The broader question is "does bringing in gnomes in this way leave the initial situation invariant"? And I don't think it does. The gnomes follow their own anthropic setup (though not their own preferences), and their advice seems to reflect this fact (consider what happens when the heads world has 1, 2 or 50 gnomes, while the tails world has 2).

As I wrote (after your comment) here, I think it is prima facie very plausible for a selfish agent to follow the gnome's advice if a) conditional on the agent existing, the gnome's utility function agr... (read more)

2Stuart_Armstrong
The decision you describe is not stable under pre-commitments. Ahead of time, all agents would pre-commit to the $2/3. Yet they seem to change their mind when presented with the decision. You seem to be double counting, using both the Bayesian updating and the fact that their own decision is responsible for the other agent's decision as well.

In the terminology of the paper http://www.fhi.ox.ac.uk/anthropics-why-probability-isnt-enough.pdf , your agents are altruists using linked decisions with total responsibility and no precommitments, which is a foolish thing to do. If they were altruists using linked decisions with divided responsibility (or if they used precommitments), everything would be fine. (I don't like or use that old terminology - UDT does it better - but it seems relevant here.)

But that's detracting from the main point: I still don't see any difference between indexical and non-indexical total utilitarianism. I don't see why a non-indexical total utilitarian can't follow the wrong reasoning you used in your example just as well as an indexical one, if either of them can - and similarly for the right reasoning.
Beluga10

First scenario: there is no such gnome. The number of gnomes is also determined by the coin flip, so every gnome will see a human. Then if we apply the reasoning from http://lesswrong.com/r/discussion/lw/l58/anthropic_decision_theory_for_selfish_agents/bhj7 , this results in the gnome of a selfish human agreeing to x < $1/2.
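As a sanity check on that x < $1/2 figure, here is a sketch of the expected-utility calculation, assuming the betting setup from the linked post (a ticket costs $x and pays $1 if the coin came up tails; these payoffs are an assumption, not stated in this comment):

```python
# Assumed setup: heads -> one human in one cell; tails -> humans in both
# cells. Gnomes are created only in occupied cells, so every gnome sees a
# human and learns nothing from the observation: P(tails | human) = 1/2.
p_heads, p_tails = 0.5, 0.5

def expected_gain(x):
    # Ticket costs $x and pays $1 if the coin came up tails (assumed payoff).
    return p_tails * (1.0 - x) + p_heads * (-x)

# expected_gain(x) > 0 exactly when x < 1/2, matching the x < $1/2 above.
```

Because no gnome ever fails to see a human, there is no anthropic update, and the breakeven price stays at the naive $1/2.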

If the gnomes are created after the coin flip only, they are in exactly the same situation as the humans, and we cannot learn anything by considering them that we cannot learn from considering the humans alone.

Instead, let'

... (read more)
1Stuart_Armstrong
I'm still not clear why lexicality-independent utility functions are different from their equivalent indexical versions.
Beluga20

Thanks for your reply.

Ok, I don't like gnomes making current decisions based on their future values.

For the selfish case, we can easily get around this by defining the gnome's utility function to be the amount of $ in the cell. If we stipulate that this can only change through humans buying lottery tickets (and winning lotteries) and that humans cannot leave the cells, the gnome's utility function coincides with the human's. Similarly, we can define the gnome's utility function to be the amount of $ in all cells (the average amount of $ in those cells ... (read more)

1Stuart_Armstrong
The broader question is "does bringing in gnomes in this way leave the initial situation invariant"? And I don't think it does. The gnomes follow their own anthropic setup (though not their own preferences), and their advice seems to reflect this fact (consider what happens when the heads world has 1, 2 or 50 gnomes, while the tails world has 2).

I also don't see your indexical objection. The sleeping beauty could perfectly well have an indexical version of total utilitarianism ("I value my personal utility, plus that of the sleeping beauty in the other room, if they exist"). If you want to proceed further, you seem to have to argue that indexical total utilitarianism gives different decisions than standard total utilitarianism. This is odd, as it seems a total utilitarian would not object to having their utility replaced with the indexical version, and vice versa.
Beluga150

Not sure how much sense it makes to take the arithmetic mean of probabilities when the odds vary over many orders of magnitude. If the average is, say, 30%, then it hardly matters whether someone answers 1% or .000001%. Also, it hardly matters whether someone answers 99% or 99.99999%.

I guess the natural way to deal with this would be to average (i.e., take the arithmetic mean of) the order of magnitude of the odds (i.e., log[p/(1-p)], where p is someone's answer). Using this method, it would make a difference whether someone is "pretty certain" or "extremely certain" that a certain statement is true or false.
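A minimal sketch of the difference (hypothetical answers; natural-log odds, though any base gives the same ranking):

```python
import math

def logit(p):
    # Log of the odds ratio: the "order of magnitude of the odds".
    return math.log(p / (1.0 - p))

def inv_logit(x):
    # Map an average log-odds back to a probability.
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical survey answers to the same question.
answers = [0.30, 0.01, 0.000001]

mean_prob = sum(answers) / len(answers)
mean_log_odds = inv_logit(sum(logit(p) for p in answers) / len(answers))

# The arithmetic mean of probabilities is dominated by the 30% answer and
# barely notices whether the skeptic said 1% or 0.0001%; the log-odds mean
# is pulled much further toward the extreme answers.
```

Changing 0.000001 to 0.01 leaves mean_prob essentially unchanged but shifts mean_log_odds noticeably, which is exactly the sensitivity to "pretty certain" vs. "extremely certain" described above.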

Does anyone know what the standard way for dealing with this issue is?

2Eugine_Nier
Use medians and percentiles instead of means and standard deviations.
6Manfred
Yeah, log odds sounds like a good way to do it. Aggregating estimates is hard because people's estimates aren't independent, but averaging log odds will at least do better than averaging probabilities.
Beluga10

For the School Mark problem, the causal diagram I obtain from the description is one of these:

diagram

or

diagram

For the first of these, the teacher has waived the requirement of actually sitting the exam, and the student needn't bother. In the second, the pupil will not get the marks except by studying for and taking the exam. See also the decision problem I describe at the end of this comment.

I think it's clear that Pallas had the first diagram in mind, and his point was exactly that the rational thing to do is to study despite the fact that the ... (read more)

0Richard_Kennaway
My first diagram is scenario C and my second is scenario B. In the first diagram there is no (ETA: causal) dependence of the final mark on exam performance. I think pallas' intended scenario was more likely to be B: the mark does (ETA: causally) depend on exam performance and has been predicted. Since in B the mark depends on exam performance, it is necessary to study and take the exam.

In the real world, where teachers do not possess Omega's magic powers, teachers may very well be able to predict pretty much how their students will do. For that matter, the students themselves can predict how they will do, which transforms the problem into the very ordinary, non-magical one I gave at the end of my comment. If you know how well you will do on the exam, and want to do well on it, should you (i.e. is it the correct decision to) put in the work? Or for another example of topical interest, consider the effects of genes on character.

Unless you draw out the causal diagrams, Omega is just magic: an imaginary phenomenon with no moving parts. As has been observed by someone before on LessWrong, any decision theory can be defeated by suitably crafted magic: Omega fills the boxes, or whatever, in the opposite way to whatever your decision theory will conclude. Problems of that sort offer little insight into decision theory.
Beluga10

The results you quote are very interesting and answer questions I've been worrying about for some time. Apologies for bringing up two purely technical inquiries:

  1. Could you provide a reference for the result you quote? You referred to Eq. (34) in Everett's original paper in another comment, but this doesn't seem to make the link to the VNM axioms and decision theory.

  2. <>

That seems wrong to me. There has to be a formulation of the form: if the two initially perfectly entangled particles get only slightly entangled with other particles, then quantum ... (read more)