I am not a professional evolutionary biologist. I only know a few equations, very simple ones by comparison to what can be found in any textbook on evolutionary theory with math, and on one memorable occasion I used one incorrectly. For me to publish an article in a highly technical ev-bio journal would be as impossible as corporations evolving. And yet when I'm dealing with almost anyone who's not a professional evolutionary biologist...
It seems to me that there's a substantial advantage in knowing the drop-dead basic fundamental embarrassingly simple mathematics in as many different subjects as you can manage. Not, necessarily, the high-falutin' complicated damn math that appears in the latest journal articles. Not unless you plan to become a professional in the field. But for people who can read calculus, and sometimes just plain algebra, the drop-dead basic mathematics of a field may not take that long to learn. And it's likely to change your outlook on life more than the math-free popularizations or the highly technical math.
In his recent excellent blog post, Yvain discusses a few "universal" (commonplace) human experiences that many people never notice they don't have, such as the ability to smell, see some colors, see mental pictures, and feel emotions. I was reminded of a longstanding argument I had with a friend. She always insisted that she would rather be blind than deaf. I could not understand how that was possible, since the visual world is so much richer and more interesting. We later found out that I can see an order of magnitude more colors than she can, but have subpar ability to distinguish tones. And I thought she was just being a contrarian for its own sake. I thought the experience of that many colors was universal, and had rarely seen evidence that challenged that belief.
More seriously, a good friend of mine went through the first three decades of his life without realizing that he suffered from a serious genetic disorder, one that caused him extreme body pain and terrible headaches whenever he became tired or dehydrated. He thought everyone felt that way, but considered it whiny to talk about it. He almost never mentioned it, and never realized what it was, until <bragging> I noticed how tense his expression became when he got tired, asked him about it, and put it together with some other unusual physical experiences I knew he had. </bragging>
This got me thinking about when we might be having unusual sensory experiences and not realize it for long periods of time. I am calling these "secretly secret experiences." Here are some factors that might increase the likelihood of having a secretly secret experience.
1) When they are rarely consciously examined: experiences such as the ability to distinguish subtle differences in shades of color are tested occasionally (when choosing paint or ripe fruit), but few people besides interior decorators think about how good their shade-distinguishing skills are. Other examples include the feeling of being in different moods or mental states, breathing, and sensing commonly encountered things (the look of roads, the sound of voices, etc.). Most of the examples from the blog post fall under this category. People might not notice that they over-, under-, or differently experience such feelings relative to others.
2) When they are rarely discussed in everyday life: If my experience of pooping feels very different from other people's, I may never know, because I don't discuss the experience in detail with anyone. If people talked about their experiences, I would probably notice if mine didn't match up, but that's unlikely to happen. The same might apply to other experiences that are taboo to discuss, such as masturbation, sex (in some cultures), anything considered gross or unhygienic, or socially awkward experiences (in some cultures).
3) When there is social pressure to experience something a certain way: it may be socially dangerous to admit you don't find members of the opposite sex attractive, or you didn't enjoy The Godfather or whatever. Depending on your sensitivity to social pressure (see 4) and the strength of the pressure, this could lead to unawareness about true rare preferences.
4) Sensitivity to external influences: Some people pick up on social cues more easily than others. Some notice social norms more readily, and some seem more or less willing to violate certain norms (partly because of how well they perceive them, plus other factors). I can imagine that a deeply autistic person might be influenced far less by mainstream descriptions of different experiences. Exceptionally socially attuned people might (perhaps) take social influences to heart and be less able to distinguish their own experiences from those they know about.
5) When skills are redundant or you have good substitutes: For example, if we live in a world with only fish and mammals, and all mammals are brown and warm and all fish are cold and silver, you might never notice that you can't feel temperature because you are still a perfectly good mammal and fish distinguisher. In the real world, it's harder to find clear examples, but I can think of substitutes for color-sightedness such as shade and textural cues that increase the likelihood of a color-blind person not realizing zir blindness. Similarly, empathy and social adeptness may increase someone's ability both to mask that ze is having a different experience than others, and the likelihood that ze will believe all others are good at hiding a different experience than the one they portray openly.
What else can people think of?
Special thanks to JT for his feedback and for letting me share his story.
If the future is determined by physics, how can anyone control it?
In Thou Art Physics, I pointed out that since you are within physics, anything you control is necessarily controlled by physics. Today we will talk about a different aspect of the confusion, the words "determined" and "control".
The "Block Universe" is the classical term for the universe considered from outside Time. Even without timeless physics, Special Relativity outlaws any global space of simultaneity, which is widely believed to suggest the Block Universe—spacetime as one vast 4D block.
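The simultaneity claim can be made precise with the Lorentz transformation (standard special relativity, added here as background rather than taken from the original post):

```latex
t' = \gamma\left(t - \frac{v x}{c^2}\right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

Two events that are simultaneous in one frame ($\Delta t = 0$) but spatially separated ($\Delta x \neq 0$) are separated in time by $\Delta t' = -\gamma v \Delta x / c^2 \neq 0$ in any frame moving at speed $v \neq 0$ along the separation, so no slicing of spacetime into "now" surfaces is frame-independent.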
When you take a perspective outside time, you have to be careful not to let your old, timeful intuitions run wild in the absence of their subject matter.
In the Block Universe, the future is not determined before you make your choice. "Before" is a timeful word. Once you descend so far as to start talking about time, then, of course, the future comes "after" the past, not "before" it.
When is it faster to rediscover something on your own than to learn it from someone who already knows it?
Sometimes it's faster to re-derive a proof or algorithm than to look it up. Keith Lynch re-invented the fast Fourier transform because he was too lazy to walk all the way to the library to get a book on it, although that's an extreme example. But if you have a complicated proof already laid out before you, and you are not Marc Drexler, it's generally faster to read it than to derive a new one. Yet I found a knowledge-intensive task where it would have been much faster to tell someone nothing at all than to tell them how to do it.
This is an extension of a comment I made that I can't find and also a request for examples. It seems plausible that, when giving advice, many people optimize for deepness or punchiness of the advice rather than for actual practical value. There may be good reasons to do this - e.g. advice that sounds deep or punchy might be more likely to be listened to - but as a corollary, there could be valuable advice that people generally don't give because it doesn't sound deep or punchy. Let's call this boring advice.
An example that's been discussed on LW several times is "make checklists." Checklists are great. We should totally make checklists. But "make checklists" is not a deep or punchy thing to say. Other examples include "google things" and "exercise."
I would like people to use this thread to post other examples of boring advice. If you can, provide evidence and/or a plausible argument that your boring advice actually is useful, but I would prefer that you err on the side of boring but not necessarily useful in the name of more thoroughly searching a plausibly under-searched part of advicespace.
Upvotes on advice posted in this thread should be based on your estimate of the usefulness of the advice; in particular, please do not vote up advice just because it sounds deep or punchy.
There is a lot of bad science and controversy in the realm of how to have a healthy lifestyle. Every week we are bombarded with new studies conflicting with older ones, telling us X is good or Y is bad. Eventually we reach our psychological limit, throw up our hands, and give up. I used to do this a lot. I knew exercise was good, I knew flossing was good, and I wanted to eat better. But I never acted on any of that knowledge. I would feel guilty when I thought about this stuff and then go back to what I was doing. Unsurprisingly, this didn't really cause me to make any positive lifestyle changes.
Instead of vaguely guilt-tripping you with potentially unreliable science news, this post aims to provide an overview of lifestyle interventions that have very strong evidence behind them and concrete ways to implement them.
[Highlights for the busy: debunking standard "Bayes is optimal" arguments; frequentist Solomonoff induction; and a description of the online learning framework. Note: cross-posted from my blog.]
Short summary. This essay makes many points, each of which I think is worth reading, but if you are only going to understand one point I think it should be "Myth 5" below, which describes the online learning framework as a response to the claim that frequentist methods need to make strong modeling assumptions. Among other things, online learning allows me to perform the following remarkable feat: if I'm betting on horses, and I get to place bets after watching other people bet but before seeing which horse wins the race, then I can guarantee that after a relatively small number of races, I will do almost as well overall as the best other person, even if the number of other people is very large (say, 1 billion), and their performance is correlated in complicated ways.
If you're only going to understand two points, then also read about the frequentist version of Solomonoff induction, which is described in "Myth 6".
Main article. I’ve already written one essay on Bayesian vs. frequentist statistics. In that essay, I argued for a balanced, pragmatic approach in which we think of the two families of methods as a collection of tools to be used as appropriate. Since I’m currently feeling contrarian, this essay will be far less balanced and will argue explicitly against Bayesian methods and in favor of frequentist methods. I hope this will be forgiven as so much other writing goes in the opposite direction of unabashedly defending Bayes. I should note that this essay is partially inspired by some of Cosma Shalizi’s blog posts, such as this one.
This essay will start by listing a series of myths, then debunk them one-by-one. My main motivation for this is that Bayesian approaches seem to be highly popularized, to the point that one may get the impression that they are the uncontroversially superior method of doing statistics. I actually think the opposite is true: I think most statisticians would for the most part defend frequentist methods, although there are also many departments that are decidedly Bayesian (e.g. many places in England, as well as some U.S. universities like Columbia). I have a lot of respect for many of the people at these universities, such as Andrew Gelman and Philip Dawid, but I worry that many of the other proponents of Bayes (most of them non-statisticians) tend to oversell Bayesian methods or undersell alternative methodologies.
If you are like me from, say, two years ago, you are firmly convinced that Bayesian methods are superior and that you have knockdown arguments in favor of this. If this is the case, then I hope this essay will give you an experience that I myself found life-altering: the experience of having a way of thinking that seemed unquestionably true slowly dissolve into just one of many imperfect models of reality. This experience helped me gain more explicit appreciation for the skill of viewing the world from many different angles, and of distinguishing between a very successful paradigm and reality.
On ChrisHallquist's post extolling the virtues of money, the top comment is Eliezer pointing out the lack of concrete examples. Can anyone think of any? This is not just hypothetical: if I think your suggestion is good, I will try it (and report back on how it went).
I care about health, improving personal skills (particularly: programming, writing, people skills), gaining respect (particularly at work), and entertainment (these days: primarily books and computer games). If you think I should care about something else, feel free to suggest it.
I am an early-twenties programmer living in San Francisco. In the interest of getting advice useful to more than one person, I'll omit further personal details.
If your idea requires significant ongoing time commitment, that is a major negative.
There's a core meme of rationalism that I think is fundamentally off-base. It's been bothering me for a long time — over a year now. It hasn't been easy for me, living this double life, pretending to be OK with propagating an instrumentally expedient idea that I know has no epistemic grounding. So I need to get this off my chest now: Our established terminology is not consistent with an evidence-based view of the Star Trek canon.
According to TV Tropes, a straw Vulcan is a character used to show that emotion is better than logic. I think a lot of people take "straw Vulcan rationality" to mean something like, "Being rational does not mean being like Vulcans from Star Trek."
This is not fair to Vulcans from Star Trek.
Central to the character of Spock — and something that it's easy to miss if you haven't seen every single episode and/or read a fair amount of fan fiction — is that he's being a Vulcan all wrong. He's half human, you see, and he's really insecure about that, because all the other kids made fun of him for it when he was growing up on Vulcan. He's spent most of his life resenting his human half, trying to prove to everyone (especially his father) that he's Vulcaner Than Thou. When the Vulcan Science Academy worried that his human mother might be an obstacle, it was the last straw for Spock. He jumped ship and joined Starfleet. Against his father's wishes.
Spock is a mess of poorly handled emotional turmoil. It makes him cold and volatile.
Real Vulcans aren't like that. They have stronger and more violent emotions than humans, so they've learned to master them out of necessity. Before the Vulcan Reformation, they were a collection of warring tribes who nearly tore their planet apart. Now, Vulcans understand emotions and are no longer at their mercy. Not when they apply their craft successfully, anyway. In the words of the prophet Surak, who created these cognitive disciplines with the purpose of saving Vulcan from certain doom, "To gain mastery over the emotions, one must first embrace the many Guises of the Mind."
Successful application of Vulcan philosophy looks positively CFARian.
There is a ritual called "kolinahr" whose purpose is to completely rid oneself of emotion, but it was not developed by Surak, nor, to my knowledge, was it endorsed by him. It's an extreme religious practice, and I think the wisest Vulcans would consider it misguided (1). Spock attempted kolinahr when he believed Kirk had died, which I take to be a great departure from cthia (the Vulcan Way) — not because he ultimately failed to complete the ritual (2), but because he tried to smash his problems with a hammer rather than applying his training to sort things out skillfully. If there ever were such a thing as a right time for kolinahr, that would not have been it.
So Spock is both a straw Vulcan and a straw man of Vulcans. Steel Vulcans are extremely powerful rationalists. Basically, Surak is what happens when science fiction authors try to invent Eliezer Yudkowsky without having met him.
1) I admit that I notice I'm a little confused about this. Sarek, Spock's father and a highly influential diplomat, studied for a time with the Acolytes of Gol, who are the masters of kolinahr. If I've ever known what came of that, I've forgotten. I'm not sure whether that's canon, though.
2) "Sorry to meditate and run, but I've gotta go mind-meld with this giant space crystal thing. ...It's complicated."