Someone probably does. I believe that the cultural practice of preferring coffee to tea began in the British colonies around the time the United States started to cease to be part of the British Empire, as a side effect of boycotting tea to avoid paying a tea tax. (This is a pretty well-known episode of American history within the United States.) I was boycotting the boycott. Refusing to drink coffee is a signaling thing in the United States to let people know that you are not in agreement with the government of the United States as to which side constitutes the actual enemy in most wars the United States fights. It more or less means "I was an anglophile on my route to becoming a Bob Dylan fan, and I make a point of singing at least the first verse of 'Chimes of Freedom' loudly and publicly every May 1, July 4, and September 2." By "more or less," I mean: I'm a musician, so that's how I now express some of the same things that I used to express by refusing to drink coffee, before I had enough confidence to just sing "flashing for the warrior, whose strength is not to fight; flashing for the refugee on the unarmed road of flight" whenever I see someone wearing a uniform that I deem offensive.

Relatedly, refusing to drink coffee while still drinking caffeine is a fairly radical refusal to participate in mainstream culture, one that an enormous number of second- and third-tier trendsetters recognize as a common signal used by first-tier trendsetters. For instance, most hipsters are at least vaguely aware that many of the most influential people who call the shots and set the trends in their subculture are some subset of the people who are not actually hipsters, but who interact with the fringes of hipster culture and who have also spent at least a few years saying, "I DO NOT DRINK COFFEE. i drink tea." ("No thanks, I drink tea," is completely different.) To become a first-tier trendsetter in hipster culture, you have to be a non-hipster who has learned how to do a super-hipster thing for the right reasons, and one of the most obvious and easy ways to do that is to express a disdain for Starbucks that is more menacing/intimidating than it is merely contemptuous (but is also at least as contemptuous as the typical hipster's expression of disdain). Hipsters are not formidable people, but they respect formidable people; and they disrespect people whose power is derived from social structures. There is at least one venue that I used to go to primarily to consume tea, where hipsters still go primarily to consume jazz.

The comment that you responded to mostly consisted of me cryptically calling a few shots. The comments I've posted today consist of cryptically taking victory laps for all the shots called in that comment ten years ago, while calling some shots for the next ten years. I occasionally interact with hipster culture to inform hipsters about what types of aesthetic preferences they are going to help spread in the next few years. All the minor celebrities I interact with respond to all the comments I direct towards them and ignore all the comments I make about them. For instance, Scott Siskind always replies to the comments I post on his blogs that I want him to respond to.

And when I go to Less Wrong meetups, I figure out who's worth talking to by saying, "I learned Scott's last name from the blog that I sort of vaguely remember as being named after an octopus, long before I confirmed it by asking 'how many jazz pianists who performed in Carnegie Hall can possibly have a brother named Scott who has practiced psychiatry in Michigan?'"

I'm just reading this, for random reasons, either for the first time or for the first time that I have a response. I think what I see differently from you is not happiness but motivation, and that at the time I wrote this, your process of making decisions was more future-oriented than mine was. (I believe I have converged towards you in how my motivation works in the ten years since I wrote this.) When I wrote the above, my past was clinging to me in many ways that were adverse to happiness. (Trauma.) What I wasn't quite saying in my previous comment is that I was at the time (like many other people I know) holding onto trauma in stupid ways, because I needed to hold onto it to make it feel like that part of my life had a reason to have happened beyond "worthless badness happens." Holding onto trauma was a core part of my identity and was a core part of many other people's identities.

At the silliest extreme of people holding onto trauma are the people who keep playing games that they expect to keep losing, with the hope of eventually winning in a way that makes up for all their past losses. (Before 2016, being a Cubs fan was a particularly lighthearted example of people behaving this way, whereas gambling addictions are a much less lighthearted one. Many gambling addicts are deeply aware that their hope to one day recover all they've lost through continuing to gamble is not founded in reality. I've never been a gambling addict. But I've had several relationships where I was holding onto the baseless hope that someone would change, or that someone's true colors were never the colors that actually came out, etc., and I think that's more or less psychologically the same thing as a very potent gambling addiction. I think the psychological mechanism of staying in an abusive relationship is almost exactly the same as the psychological mechanism of being addicted to playing slots, but much, much stronger, because abusive people are like intelligent slot machines that are studying you to make you maximally addicted to interacting with them. My overall thesis is that the brain has an enormous number of traits that seem more like bugs to me than like features, and that your article is a much more accurate description of how human motivation should work than an accurate explanation of how human motivation does work.) And I used silly examples to express this because I didn't want to talk about any of the real ones.

I haven't studied this in general, but I have read a decent amount about the history of a couple of cities, and based on those examples, I can say with confidence that no modern city comes remotely close to the density that people would choose absent regulations keeping density down.

Tokyo today is less densely populated per square meter of ground than late medieval Edo was, and late medieval Edo had no plumbing and basically no buildings taller than three stories. (I don't think there are historical examples of cities with no height restrictions and no density restrictions, because until 1885 nobody knew how to build a skyscraper, so height restrictions existed indirectly through the limitations of engineering -- technically, they still do.)

All of the evidence I'm familiar with suggests that people would choose to be very densely concentrated if it wasn't for regulations limiting their density.

The favelas of Brazil are generally considered a stepping stone towards urban living by their residents. Most of their residents don't live there because they need to; they live there because they would prefer to leave the places they came from (generally the countryside). There's pretty strong evidence globally and historically that, when given the option, people deliberately choose urban poverty over rural poverty. People migrate from villages to slums, and they don't move back. This is happening in Brazil, Kenya, Tibet, and India today. It happened historically in the United States and the U.K. This exhausts my knowledge of the history of human migration patterns, but I assume that the cases I don't know anything about are roughly consistent with the places I do know something about.

Air pollution from residential density is unlikely to ever be self-limiting. 19th-century London had far worse air pollution than any modern city, caused by coal-burning urban factories being everywhere, not to mention that everyone also burned coal for heat in the winter. (They lacked the technology to track air pollution back then, but it was bad enough that it effectively limited life expectancy to 30, so pretty bad. Incidentally, high-polluting urban factories were priced out of urban settings more than they were regulated out of them.) Most cities also end up having a high percentage of their residents primarily travel by not-car, because traffic gets to be horrendous everywhere but in the nimbyest of cities. Outside the U.S., most cities are also designed around encouraging people to get around by not-car.

Asian countries generally permit much higher urban density than Western countries, and this seems to greatly increase the percentage of people who prefer to live in urban settings, and to more or less prevent suburbs from developing. (I assume this happens because people are much less likely to be priced out of being able to live in a city, and because the preference for living outside of a city mainly comes from costs.)

Population density and price per square foot of livable space are highly correlated. I strongly suspect the density causes the increase in price; I'm pretty sure the increase in price doesn't cause the increase in density.

By the way, Bloomberg News has a section called "CityLab" that is primarily focused on urban planning. I highly recommend it to anyone interested in the subject.

If I were designing the experiment, I would have the control group play a different game instead of receiving maths instruction.

You generally don't want test subjects to know whether they are in the control condition. So if you're going to make the control maths instruction, you probably shouldn't tell them what the experiment is designed to test at all until you're debriefing them at the end. If you tell the people you are recruiting that you are testing the effects of playing computer games on statistical reasoning, then the people in the control condition never need to realize that what you're really testing is whether your RPG in particular helps people think about statistics. They can just play Half-Life 2 or whatever you pick for them to play for a few minutes, and then take your tests afterwards.

I find that playing the piano is a particularly useful technique for gauging my emotions when they are suppressed/muted. This works better when I'm just making stuff up by ear than it does when I'm playing something I know or reading music. (And learning to make stuff up is a lot easier than learning to read music if you don't already play.) Playing the piano does not help me feel the emotions any more strongly, but it does let me hear them -- I can tell that music is sad, happy, or angry regardless of its impact on my affect. Most people can.

Something that I don't do, but that I think would work (based partially on what Ariely says in The Upside of Irrationality, partially on what Norman says in Emotional Design, and partially on anecdotal experience), is to do something challenging/frustrating and see how long it takes you to give up or get angry. If you can do it for a while without getting frustrated, you're probably in a positive state of mind. If you give up feeling like it's futile, you're sad; if you start feeling an impulse to break something, you're frustrated/angry. The less time it takes you to give up or get angry, the stronger that emotion is. The huge downside to this approach is that it temporarily exacerbates negative emotions in order to gauge what you were feeling and how strongly.

The person proposing the bet is usually right.

This is a crucial observation if you are trying to use this technique to improve the calibration of your own accuracy! You can't just start making bets when no one else you associate with regularly is challenging you to bets.

Several years ago, I started taking note of all of the times I disagreed with other people and looking up the answer, but initially, I only counted myself as having "disagreed with other people" if they said something I thought was wrong and I attempted to correct them. Then I soon added in the cases where they corrected me and I argued back. During this period of time, I went from thinking I was about 90% accurate in my claims to believing I was way more accurate than that. I would go months without being wrong, and this was in college, so I was frequently getting into disagreements with people -- probably, on average, three a day during the school year. Then I started checking the times that other people corrected me just as much as I checked the times that I corrected other people (counting even the times that I made no attempt to argue), and my accuracy rate plummeted.

Another thing I would recommend to people starting out in doing this is that you should keep track of your record with individual people, not just your general overall record. My accuracy rate with a few people is way lower than my overall accuracy rate. My overall rate is higher than it should be because I know a few argumentative people who are frequently wrong. (This would probably change if we were actually betting money and we were only counting arguments when those people were willing to bet, so your approach adjusts for this better than mine.) There are several people with whom I'm close to 50%, and two people for whom I have several data points and my overall accuracy is below 50%. A sketch of what this kind of per-person bookkeeping might look like is below.
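For illustration only -- the names and entries here are hypothetical, not from my actual log -- a minimal Python sketch of a disagreement log broken down per person might look like this:

```python
from collections import defaultdict

# Hypothetical log entries: (person I disagreed with, whether I turned
# out to be right), counting the cases where they corrected me too.
disagreements = [
    ("Alice", True), ("Alice", False), ("Alice", False),
    ("Bob", True), ("Bob", True), ("Bob", False),
]

per_person = defaultdict(lambda: [0, 0])  # person -> [times right, total]
for person, i_was_right in disagreements:
    per_person[person][0] += int(i_was_right)
    per_person[person][1] += 1

total_right = sum(right for right, _ in per_person.values())
total = sum(n for _, n in per_person.values())
print(f"overall: {total_right}/{total} = {total_right / total:.0%}")
for person, (right, n) in sorted(per_person.items()):
    print(f"vs {person}: {right}/{n} = {right / n:.0%}")
```

The point of the per-person breakdown is exactly the failure mode above: a couple of frequently-wrong arguers can inflate the overall number while hiding the people against whom you're below 50%.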

There's one other point I think somebody needs to make about calibration, and that's that 75% accuracy when you disagree with other people is not the same thing as 75% accuracy overall. 75% information fidelity is atrocious; 95% information fidelity is not much better. Human brains are very defective in a lot of ways, but they aren't that defective! Except at doing math. Brains are ridiculously bad at math relative to how easily machines can be implemented to be good at math. For most intents and purposes, 99% isn't a very high percentage. I am not a particularly good driver, but I haven't gotten into a collision with another vehicle in well over 1,000 times driving. Percentages tend to have an exponential scale to them (or, more accurately, a logistic curve). You don't have to be a particularly good driver to avoid getting into an accident 99.9% of the time you get behind the wheel, because that is just a few orders of magnitude of improvement relative to 50%.
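To make the logistic-curve point concrete (this little arithmetic demo is my own illustration, not part of the original comment), converting probabilities into odds shows why each additional "nine" of reliability is roughly a tenfold improvement rather than a marginal one:

```python
# Each step from 90% to 99% to 99.9% multiplies the odds by roughly
# ten, which is why "99% isn't a very high percentage" on this scale.
for p in (0.5, 0.75, 0.9, 0.99, 0.999):
    odds = p / (1 - p)
    print(f"p = {p:<5}  ->  odds = {odds:,.0f} : 1")
```

This prints 1:1, 3:1, 9:1, 99:1, and 999:1 -- equal-looking percentage gains near the top of the scale are enormous gains in odds.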

Information fidelity differs from information retention. Discarding 25% or 95% or more of collected information is reasonable; corrupting information at that rate is what I'm saying would be horrendous. (Discarding information conserves resources, whereas corrupting information does not... except to the extent that you would consider compressing information with a lossy (as in "not lossless") compression scheme to be corrupting information, but I would still consider that to be discarding information. Episodic memory is either very compressed or very corrupted, depending on what you think it should be.)

In my experience, people are actually more likely to be underconfident about factual information than they are to be overconfident, if you measure confidence on an absolute scale instead of a relative-to-other-people scale. My family goes to trivia night, and we almost always get at least as many correct as we expect to get correct, usually more. However, other teams typically score better than we expect them to score too, and we win the round less often than we expect to.

Think back to grade school, when you actually had fill-in-the-blank and multiple-choice questions on tests. I'm going to guess that you probably were an A student and got around 95% right on your tests... because a) that's about what I did and I tend to project, b) you're on LessWrong so you were probably an A student, and c) you say you feel like you ought to be right about 95% of the time. I'm also going to guess (because I tend to project my experience onto other people) that you probably felt a lot less than 95% confident on average when you were taking the tests. There were more than a few tests I took in my time in school where I walked out thinking "I didn't know any of that; I'll probably get a 70 or better, just because a 70 would be horribly bad compared to what I usually do, but I really feel like I failed that"... and it was never 70. (Math was the one exception, in which I tended to be overconfident; I usually made more mistakes than I expected to make on my math tests.)

Where calibration really gets screwed up is when you deal with subjects that are way outside the domain of normal experience, especially if you know that you know more than your peer group about the domain. People are not good at thinking about abstract mathematics, artificial intelligence, physics, evolution, and other subjects that happen at a different scale from normal everyday life. When I was 17, I thought I understood quantum mechanics just because I'd read A Brief History of Time and The Universe in a Nutshell... Boy, was I wrong!

On LessWrong, we are usually discussing subjects that are way beyond the domain of normal human experience, so we tend to be overconfident in our understanding of these subjects... but part of the reason for this overconfidence is that we do tend to be correct about most of the things we encounter within the confines of routine experience.

Precisely for this reason, there was a time when I wrote in Elverson pronouns (basically, Spivak pronouns) for gender-ambiguous cases. So, if I was writing about Bill Clinton, I would use "he," and if I was writing about Grace Hopper, I would use "she," but if I was writing about somebody/anybody in general, I would use "ey" instead. This allows one to easily compile the pronouns according to preference without mis-attributing pronouns to actual people... I've always planned on getting around to hosting my own blog, running on my own code, which would include an option to let people set a cookie to store their gender preference so they could get "she by default," "he by default," "Spivak by default," or randomization between he and she -- with a gimmick option for switching between different sets of gender-neutral pronouns at random. The default default would be randomization between he and she. But I haven't gotten around to writing the website to host my stuff yet, and I just use unmodified Blogger, so for now I'm doing deliberate switching by hand as described above.

(I think I could write a script like that for Blogger too, but I haven't bothered looking into how to customize Blogger, because I keep planning to write my own website anyways, since there are a lot of things I want to do differently, and that's not necessarily the one at the top of my list.)
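For what it's worth, the rendering side of that cookie scheme could be quite small. Here's a minimal sketch of what I have in mind -- the placeholder tags and preference names are my own invention for illustration, not code I've actually written:

```python
import random

# Hypothetical pronoun sets, keyed by a reader preference that would be
# stored in a cookie. Posts would be written with placeholder tags.
PRONOUN_SETS = {
    "she": {"NOM": "she", "ACC": "her", "POS": "her"},
    "he": {"NOM": "he", "ACC": "him", "POS": "his"},
    "spivak": {"NOM": "ey", "ACC": "em", "POS": "eir"},
}

def render(text, preference="default"):
    """Substitute {NOM}/{ACC}/{POS} placeholders per reader preference."""
    if preference == "default":  # the proposed default: coin-flip he/she
        preference = random.choice(["she", "he"])
    for tag, word in PRONOUN_SETS[preference].items():
        text = text.replace("{" + tag + "}", word)
    return text

print(render("If {NOM} is wrong, politely correct {ACC}.", "spivak"))
# -> If ey is wrong, politely correct em.
```

A real version would also need to handle capitalization at the start of sentences and verb agreement, but the core substitution really is this simple.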

More jarring than that is if one set of gender pronouns gets used predominantly in negative examples, and the other set gets used predominantly in positive examples.

I try to deliberately switch based on context. If I write an example of someone being wrong and then someone being right, I will stick with the same gender for both cases, and then switch to the other gender when I move to the next example of someone being wrong, right, or indifferent.

Occasionally, something will be so inherently gendered that I cannot use the non-default gender and feel reasonable doing it. In these cases, I actually don't think I should. (Triggers: sexual violence. I was recently writing about violence, including rape, and I don't think I could reasonably alternate pronouns for referring to the rapist, because, while not all perpetrators are male, they are so overwhelmingly male that it would be unreasonable to use "she" in isolation. I mixed "he" with an occasional "he or she" for the extremely negative examples in those few paragraphs.)

I changed my mind midway through this post. Hopefully it still makes sense... I started out disagreeing with you based on the first two thoughts that came to mind, but I'm now beginning to think you may be right.

So it's hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.

I.

This statement doesn't really fit with the philosophy of morality. (At least as I read it.)

Consequentialism distinguishes itself from other moral theories by emphasizing terminal values more than other approaches to morality do. A consequentialist can have "no murder" as a terminal value, but that's different from a deontologist believing that murder is wrong or a virtue ethicist believing that virtuous people don't commit murder. A true consequentialist seeking to minimize the amount of murder that happens would be willing to commit murder to prevent more murder, but neither a deontologist nor a virtue ethicist would.

Contractualism is a framework for thinking about morality that presupposes that people have terminal values and that those values sometimes conflict with other people's terminal values. It describes morality as a negotiated system of adopting/avoiding certain instrumental goals, implicitly negotiated by people for their mutual benefit in attaining their terminal values. It says nothing about what kind of terminal values people should have.

II.

Discussions of morality focus on what people "should" do and what people "should" think, etc. The general idea of terminal values is that you have them and they don't change in response to other considerations. They're the fixed points that affect the way you think about what you want to accomplish with your instrumental goals. There's no point to discussing what kind of terminal values people "should" have. But in practice, people agree that there is a point to discussing what sorts of moral beliefs people should have.

III.

The psychological conditions that cause people to become immoral by most other people's standards have a lot to do with terminal values, but not anything to do with the kinds of terminal values that people talk about when they discuss morality.

Sociopaths are people who don't experience empathy or remorse. Psychopaths are people who don't experience empathy, remorse, or fear. Being able to feel fear is not the sort of thing that seems relevant to a discussion about morality... But that's not the same thing as saying that being able to feel fear is not relevant to a discussion about morality. Maybe it is.

Maybe what we mean by morality is having the terminal values that arise from experiencing empathy, remorse, and fear the way most people experience these things in relation to the people they care about. That sounds like a really odd thing to say to me... but it also sounds pretty empirically accurate for nailing down what people typically mean when they talk about morality.

Anti-epistemology is a more general model of what is going on in the world than rationalizations are,

Yes.

so it should all reduce to rationalizations in the end.

Unless there are anti-epistemologies that are not rationalizations.

The general concept of a taboo seems to me to be an example of a forceful anti-epistemology that is common in most moral ideologies and is different from rationalization. When something is tabooed, it is deemed wrong to do, wrong to discuss, and wrong to even think about. The tabooed thing is something that people deem wrong because they cannot think about whether it is wrong without in the process doing something "wrong," so there is no way for them to discover that they would find nothing wrong with the idea if they were to think about it and consider whether the taboo fits with or runs against their moral sense.

A similar anti-epistemology is when people believe it is right to believe something is morally right... on up through all the meta-levels of beliefs about beliefs, so that they would already be committing the sin of doubt as soon as they begin to question whether they should believe that continuing to hold their moral beliefs is actually something they are morally obliged to do. (For ease of reference, I'll call this anti-epistemology "faith".)

One thing that rationalization, taboos, and faith all have in common is that they are sufficiently general modes of thought to be applied to "is" propositions as well as "ought" propositions, and when these modes of thought are applied to objective propositions whose truth-values can be measured, they behave like anti-epistemologies. So, in the absence of evidence to the contrary, we should presume that they behave as anti-epistemologies for morality, art criticism, and other subjects -- even though the existence of something stable and objective to be known in these subjects is highly questionable. The modes of thought I just mentioned are inherently flawed in themselves; they are not simply flawed ways of thinking about morality in particular.

If you are looking for bad patterns of thought that deal specifically with ethics and cannot be applied to other subjects about which truth can be more objectively measured, the best objection (that I can think of) by which to call those modes of thought invalid is not to try to figure out why they are anti-epistemologies, but instead to reject them for their failure to put forward any objectively measurable claims. There are many more ways for a mode of thought to go wrong than to go right, so until some thought pattern has provided evidence of being useful for making accurate judgments about something, it should not be presumed to be a useful way to think about something for which the accuracy of statements is difficult or impossible to judge.
