The First Step
The first step on the path to truth is superstition. We all start there, and should acknowledge that we start there.
Superstition is, contrary to our immediate feelings about the word, the first stage of understanding. Superstition is the attribution of unrelated events to a common (generally unknown or unspecified) cause - it could be called pattern recognition. The "supernatural" component generally included in the definition is superfluous: "supernatural" merely refers to that which isn't part of nature - that is, of reality - which is an elaborate way of describing something whose relationship to nature is either not yet understood or nonexistent. If we discovered that ghosts are real and identified an explanation - overlapping entities in a many-worlds universe, say - they'd cease to be supernatural and would merely be natural.
Just as the supernatural refers to unexplained or imaginary phenomena, superstition refers to unexplained or imaginary relationships, without the necessity of cause. If you designed a game AI which, after five rounds of being killed whenever it entered rooms with green walls, started avoiding rooms with green walls, you'd have developed a good AI. It is engaging in superstition: it has developed an incorrect understanding of the issue. But it hasn't gone down the wrong path - there is no wrong path in understanding, there is only the mistake of stopping. Superstition, like all belief, is only useful if you're willing to discard it.
The Next Step
Incorrect understanding is the first - and necessary - step to correct understanding. It is, indeed, every step towards correct understanding. Correct understanding is a path, not an achievement, and it is pursued, not by arriving at the correct conclusion in the first place, but by testing your ideas and discarding those which are incorrect.
No matter how intelligent you are, you cannot skip the "incorrect understanding" step of knowledge, because that is every step of knowledge. You must come up with wrong ideas in order to get at the right ones - which will always be one step further. You must test your ideas. And again, the only mistake is stopping - assuming that you have it right now.
Intelligence is never your bottleneck. The ability to think faster isn't necessarily the ability to arrive at the right answer faster, because the right answer requires many wrong ones, and more importantly, identifying which answers are indeed wrong, which is the slow part of the process.
Better answers are arrived at by the process of invalidating wrong answers.
The Winding Path
The process of becoming Less Wrong is the process of being, in the first place, wrong. It is the state of realizing that you're almost certainly incorrect about everything - but working on getting incrementally closer to an unachievable "correct". It is a state of anti-hubris, and requires a delicate balance between the idea that one can be closer to the truth, and the idea that one cannot actually achieve it.
The art of rationality is the art of walking this narrow path. If ever you think you have the truth - discard that hubris, for three steps from here you'll see it for superstition, and if you cannot see that, you cannot progress, and there your search for truth will end. That is the path of the faithful.
But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking. If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end. That is the path of the crank.
The path of rationality is winding and directionless. It may head towards beauty, then towards ugliness; towards simplicity, then complexity. The correct direction isn't the aesthetic one; those who head towards beauty may create great art, but do not find truth. Those who head towards simplicity might open new mathematical doors and find great and useful things inside - but they don't find truth, either. Truth is its own path, found only by discarding what is wrong. It passes through simplicity, it passes through ugliness; it passes through complexity, and also beauty. It doesn't belong to any one of these things.
The path of rationality is a path without destination.
Written as an experiment in the aesthetic of Less Wrong. I'd appreciate feedback on the aesthetic interpretation of Less Wrong, rather than on the sense of deep wisdom emanating from it (unless the deep wisdom damages the aesthetic).
I've made very extensive notes, along with my assessment, of Daniel Kahneman's Thinking, Fast and Slow, and have passed them around to aspiring rationalist friends who found them very useful. So I thought I would share these with the Less Wrong community by creating a Less Wrong Wiki article with these notes. Feel free to optimize the article based on your own notes as well. Hope this proves as helpful to you as it did to those others with whom I shared my notes.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
As those of you on the Less Wrong chat may know, Mark Manson is my favourite personal development author. I thought I'd share those articles that are most related to rationality, as I figured that they would have the greatest chance of being appreciated.
Immediately after writing this article, I realised that I left one thing unclear, so I'll explain it now. Why have I included articles discussing the terms "life purpose" and "finding yourself"? The reason is that I think that it is very important to provide linguistic bridges between some of the vague everyday language that people often use and the more precise language expected by rationalists.
“When looked at from this perspective, personal development can actually be quite scientific. The hypotheses are our beliefs. Our actions and behaviors are the experiments. The resulting internal emotions and thought patterns are our data. We can then take those and compare them to our original beliefs and then integrate them into our overall understanding of our needs and emotional make-up for the future.”
“You test those beliefs out in the real world and get real-world feedback and emotional data from them. You may find that you, in fact, don’t enjoy writing every day as much as you thought you would. You may discover that you actually have a lot more trouble expressing some of your more exquisite thoughts than you first assumed. You realize that there’s a lot of failure and rejection involved in writing and that kind of takes the fun out of it. You also find that you spend more time on your site’s design and presentation than you do on the writing itself, that that is what you actually seem to be enjoying. And so you integrate that new information and adjust your goals and behaviors accordingly.”
Mark Manson deconstructs the notion of “life purpose”, replacing it with a question that is much more tractable:
“Part of the problem is the concept of “life purpose” itself. The idea that we were each born for some higher purpose and it’s now our cosmic mission to find it. This is the same kind of shitty logic used to justify things like spirit crystals or that your lucky number is 34 (but only on Tuesdays or during full moons).
Here’s the truth. We exist on this earth for some undetermined period of time. During that time we do things. Some of these things are important. Some of them are unimportant. And those important things give our lives meaning and happiness. The unimportant ones basically just kill time.
So when people say, “What should I do with my life?” or “What is my life purpose?” what they’re actually asking is: “What can I do with my time that is important?””
While this isn’t the only way that the cliche of “finding yourself” can be broken down into something more understandable, it is quite a good attempt:
“Many people embark on journeys around the world in order to “find themselves.” In fact, it’s sort of cliché, the type of thing that sounds deep and important but doesn’t actually mean anything.
Whenever somebody claims they want to travel to “find themselves,” this is what I think they mean: They want to remove all of the major external influences from their lives, put themselves into a random and neutral environment, and then see what person they turn out to be.
By removing their external influences — the overbearing boss at work, the nagging mother, the pressure of a few unsavory friends — they’re then able to see how they actually feel about their life back home.
So perhaps a better way to put it is that you don’t travel to “find yourself,” you travel in order to get a more accurate perception of who you were back home, and whether you actually like that person or not.”
Mark Manson attacks one of the biggest myths in our society:
“In our culture, many of us idealize love. We see it as some lofty cure-all for all of life’s problems. Our movies and our stories and our history all celebrate it as life’s ultimate goal, the final solution for all of our pain and struggle. And because we idealize love, we overestimate it. As a result, our relationships pay a price.
When we believe that “all we need is love,” then like Lennon, we’re more likely to ignore fundamental values such as respect, humility and commitment towards the people we care about. After all, if love solves everything, then why bother with all the other stuff — all of the hard stuff?
But if, like Reznor, we believe that “love is not enough,” then we understand that healthy relationships require more than pure emotion or lofty passions. We understand that there are things more important in our lives and our relationships than simply being in love. And the success of our relationships hinges on these deeper and more important values.”
Edit: Read the warning in the comments
I included this article because of the discussion of the first habit.
"There’s this guy. His name is John Gottman. And he is like the Michael Jordan of relationship research. Not only has he been studying intimate relationships for more than 40 years, but he practically invented the field.
His “thin-slicing” process boasts a staggering 91% success rate in predicting whether newly-wed couples will divorce within 10 years — a staggeringly high result for any psychological research.
Gottman devised the process of “thin-slicing” relationships, a technique where he hooks couples up to all sorts of biometric devices and then records them having short conversations about their problems. Gottman then goes back and analyzes the conversation frame by frame looking at biometric data, body language, tonality and specific words chosen. He then combines all of this data together to predict whether your marriage sucks or not.
And the first thing Gottman says in almost all of his books is this: The idea that couples must communicate and resolve all of their problems is a myth."
I highly recommend these articles. They are based on research to an extent, but also upon his experiences, so they are not completely research based. If that's what you want, then you should try looking for a review article.
This summary was posted to LW Main on November 13th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
Irregularly scheduled Less Wrong meetups are taking place in:
- Prague Less Wrong Meetup: 02 December 2015 07:00PM
- San Francisco Meetup: Projects: 16 November 2015 06:15PM
- Warsaw November Meetup: 14 November 2015 04:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- [Moscow] FallacyMania game in Kocherga club: 25 November 2015 07:30PM
- NYC Solstice: 19 November 2015 06:30PM
- Seattle Solstice: 19 December 2015 05:00PM
- Tel Aviv: Black Holes after Jacob Bekenstein: 24 November 2015 08:00AM
- Vienna: 21 November 2015 04:00PM
- [Vienna] Five Worlds Collide - Vienna: 04 December 2015 08:00PM
- [West LA] Scrum: A Philosophy of Life?: 18 November 2015 07:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Epsilon walks up to you with two boxes, A and B, labeled in rather childish-looking handwriting written in crayon.
"In box A," he intones, sounding like he's trying to be foreboding, which might work better when he hits puberty, "I may or may not have placed a million of your human dollars." He pauses for a moment, then nods. "Yes. I may or may not have placed a million dollars in this box. If I expect you to open Box B, the million dollars won't be there. Box B will contain, regardless of what you do, one thousand dollars. You may choose to take one box, or both; I will leave with any boxes you do not take."
You've been anticipating this. He's appeared to around twelve thousand people so far. Out of eight thousand people who accepted both boxes, eighty found the million dollars missing, and walked away with $1,000; the other seven thousand nine hundred and twenty people walked away with $1,001,000 dollars. Out of the four thousand people who opened only box A, only four found it empty.
The agreement is unanimous: Epsilon is really quite bad at this. So, do you one-box, or two-box?
There are some important differences here from the original problem. First, Epsilon won't let you open either box until you've decided whether to take one or both; he leaves with any box you don't take. Second, while Epsilon's false positive rate is quite impressive - he wrongly penalizes one-boxers only 0.1% of the time - his false negative rate is dismal: he catches only 1% of two-boxers. Whatever heuristic he's using, he clearly prefers letting two-boxers slide to accidentally punishing one-boxers.
I'm curious to know whether anybody would two-box in this scenario and why, and particularly curious in the reasoning of anybody whose answer is different between the original Newcomb problem and this one.
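The track record above is enough to compare empirical expected values directly. A minimal sketch (the counts come straight from the story; nothing else is assumed):

```python
# Observed outcomes from Epsilon's track record.
two_boxers = 8000
two_boxers_caught = 80        # found box A empty, walked away with $1,000
one_boxers = 4000
one_boxers_misjudged = 4      # found box A empty, walked away with $0

# Empirical expected value of two-boxing: $1,001,000 unless caught.
ev_two_box = ((two_boxers - two_boxers_caught) * 1_001_000
              + two_boxers_caught * 1_000) / two_boxers

# Empirical expected value of one-boxing: $1,000,000 unless misjudged.
ev_one_box = (one_boxers - one_boxers_misjudged) * 1_000_000 / one_boxers

print(ev_two_box)  # 991000.0
print(ev_one_box)  # 999000.0
```

So despite Epsilon being "really quite bad at this," one-boxing still comes out $8,000 ahead on the historical numbers - which is part of what makes the two-boxers' reasoning interesting to probe.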
Suppose you are trying to create a list. It may be of the "best" popular science books, or most controversial movies of the last twenty years, tips for getting over a breakup or the most interesting cat gifs posted in the last few days.
There are many reasons for wanting to create one of these lists, but only a few main simple methods:
- Voting model - This is the simplest model, but popularity doesn't always equal quality. It is also particularly problematic for regularly updated lists (like Reddit), where a constantly changing audience can result in large amounts of duplicate content and where easily consumable content has an advantage.
- Curator model - A single expert can often do an admirable job of collecting high-quality content, but this is subject to their own personal biases. It is also effort-intensive to evaluate different curators to see if they have done a good job.
- Voting model with (content) rules - This can cut out the irrelevant or sugary content that is often upvoted, but creating good rules is hard. Often there is no objective line between high and low-quality content. These rules can often result in conflict.
- Voting model with sections - This addresses some of the limitations of the plain voting model and the rules-based model. Instead of declaring some content off-topic outright, it can be thrown into its own section. This is the optimal solution, but it is usually neglected.
- Voting model with selection - This covers any model where only certain people are allowed to vote. Sometimes selection is extraordinarily rigorous; however, it can still be very effective when it isn't. As an example, Metafilter charges a one-time $5 fee, and that is sufficient to keep the quality high.
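The advantage that "easily consumable content" enjoys under the plain voting model comes largely from time decay. As a concrete sketch, here is a simplified version of the widely documented Reddit-style "hot" score (the epoch and decay constant mirror published descriptions, but this is an illustration, not Reddit's production code):

```python
import math
from datetime import datetime, timezone

# Fixed reference point so scores only ever need to grow (Reddit's epoch).
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups, downs, posted_at):
    """Vote magnitude on a log scale, plus a recency bonus:
    each 45,000 seconds (12.5 hours) of age is worth one order
    of magnitude in net votes."""
    s = ups - downs
    order = math.log10(max(abs(s), 1))
    sign = 1 if s > 0 else -1 if s < 0 else 0
    seconds = (posted_at - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)
```

Because the recency term grows linearly while the vote term grows only logarithmically, a fresh post needs a tiny fraction of an older post's votes to outrank it - which is exactly why a constantly changing audience keeps re-upvoting duplicate, easily consumable content.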
Many of you have already seen Gwern's page on the topic of nicotine use. Nicotine is interesting because it's a stimulant, it may increase intelligence (I believe Daniel Kahneman said he was smarter back when he used to smoke), and it may be useful for habit formation.
Elaine Keller, president of the CASAA, pointed to other recently published research that she said shows outcomes in the “real world” as opposed to a laboratory. One study showed that smokers put on nicotine replacement therapy after suffering an acute coronary event like a heart attack or stroke had no greater risk of a second incident within one year than those who were not.
I managed to get ahold of the study in question, and it seems to me that it damns e-cigarettes by faint praise. Based on a quick skim, researchers studied smokers who recently suffered an acute coronary syndrome (ACS). The treatment group was given e-cigarettes for nicotine replacement therapy, while the control group was left alone. Given that baseline success rates in quitting smoking are on the order of 10-20%, it seems safe to say that the control group mostly continued smoking as they had previously. (The study authors say "tobacco use during follow-up could not be accurately assessed because of the variability in documentation and, therefore, was not included in the present analysis", so we are left guessing.)
29% of the nicotine replacement group suffered an adverse event in the year following the intervention, and 31% of the control group did--similar numbers. So one interpretation of this study is that if you are a smoker in your fifties and you have already experienced an acute coronary syndrome, switching from cigarettes to e-cigs will do little to help you avoid further health issues in the next year. Doesn't exactly inspire confidence.
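The headline gap here is two percentage points, and whether that is even statistically distinguishable from zero depends on group sizes the summary doesn't give. A rough two-proportion z-test sketch, assuming 400 patients per arm purely for illustration:

```python
import math

p1, p2 = 0.29, 0.31   # adverse-event rates: NRT group vs. control
n1, n2 = 400, 400     # hypothetical group sizes (the post doesn't give them)

# Pooled two-proportion z-test.
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

print(round(z, 2))  # well below the 1.96 threshold for p < 0.05
```

At sample sizes anywhere near this, a 29% vs. 31% split is statistically indistinguishable from noise - consistent with the "faint praise" reading above.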
Another more recent article states that older smokers should see health gains from quitting cigarettes, which hammers the nail in further for e-cigarettes. It also states:
More conclusive answers about how e-cigarettes affect the body long-term are forthcoming, Rose said. Millions in research dollars are being funneled toward this topic.
“There is some poor science,” Rose said. “Everybody is trying to get something out quick in order to get funding.”
So based on this very cursory analysis I'm inclined to hold off until more research comes in. But these are just a few data points--I haven't read this government review which claims "e-cigarettes are about 95% less harmful than tobacco cigarettes", for example.
The broad issue I see is that most e-cigarette literature is focused on whether switching from cigarettes to e-cigarettes is a good idea, not whether using e-cigarettes as a nonsmoker is a good idea. I'm inclined to believe the first is true, but I'd hesitate to use research that proves the first to prove the second (as exemplified by the study I took a look at).
Anyway, if you're in the US and you want to buy e-cigarette products it may be best to do it soon before they're regulated out of existence.
Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, consider the alternative, reaching our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer, a major newspaper (16th in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.
Throughout HPMOR, the author has included many fascinating details about how the real world works, and how to gain power. The Mirror of CEV seems like a lesson in what a true Friendly AI could look like and do.
I've got a weirder theory. (Roll for sanity...)
The entire story is plausible-deniability cover for explaining how to get the Law of Intention to work reliably.
(All quoted text is from HPMOR.)
This Mirror reflects itself perfectly and therefore its existence is absolutely stable.
"This Mirror" is the Mind, or consciousness. The only thing a Mind can be sure of is that it is a Mind.
The Mirror's most characteristic power is to create alternate realms of existence, though these realms are only as large in size as what can be seen within the Mirror
A Mind's most characteristic power is to create alternate realms of existence, though these realms are only as large in size as what can be seen within the Mind.
Showing any person who steps before it an illusion of a world in which one of their desires has been fulfilled.
The final property upon which most tales agree, is that whatever the unknown means of commanding the Mirror - of that Key there are no plausible accounts - the Mirror's instructions cannot be shaped to react to individual people...the legends are unclear on what rules can be given, but I think it must have something to do with the Mirror's original intended use - it must have something to do with the deep desires and wishes arising from within the person.
More specifically, the Mirror shows a universe that obeys a consistent set of physical laws. From the set of all wish-fulfillment fantasies, it shows a universe that could actually plausibly exist.
It is known that people and other objects can be stored therein
Actors store other minds within their own Mind. Engineers store physical items within their Mind. The Mirror is a Mind.
the Mirror alone of all magics possesses a true moral orientation
The Mind alone of all the stuff that exists possesses a true moral orientation.
If that device had been completed, the story claimed, it would have become an absolutely stable existence that could withstand the channeling of unlimited magic in order to grant wishes. And also - this was said to be the vastly harder task - the device would somehow avert the inevitable catastrophes any sane person would expect to follow from that premise.
An ideal Mind would grant wishes without creating catastrophes. Unfortunately, we're not quite ideal minds, even though we're pretty good.
Professor Quirrell made to walk away from the Mirror, and seemed to halt just before reaching the point where the Mirror would no longer have reflected him, if it had been reflecting him.
My self-image can only go where it is reflected in my Mind. In other words, I can't imagine what it would be like to be a philosophical zombie.
Most powers of the Mirror are double-sided, according to legend. So you could banish what is on the other side of the Mirror instead. Send yourself, instead of me, into that frozen instant. If you wanted to, that is.
Let's interpret this scene: We've got a Mind/consciousness (the Mirror), we've got a self-image (Riddle) as well as the same spirit in a different self-image (Harry), and we've got a specific Extrapolated Volition instance in the mind (Dumbledore shown in the Mirror). This Extrapolated Volition instance is a consistent universe that could actually exist.
It sounds like the Process of the Timeless trap causes some Timeless Observer to choose one side of the Mirror as the real Universe, trapping the universe on the other side of the mirror in a frozen instant from the Timeless Observer's perspective.
The implication: the Mind has the power to choose which Universes it experiences from the set of all possible Universes extending from the current point.
All right, screw this nineteenth-century garbage. Reality wasn't atoms, it wasn't a set of tiny billiard balls bopping around. That was just another lie. The notion of atoms as little dots was just another convenient hallucination that people clung to because they didn't want to confront the inhumanly alien shape of the underlying reality. No wonder, then, that his attempts to Transfigure based on that hadn't worked. If he wanted power, he had to abandon his humanity, and force his thoughts to conform to the true math of quantum mechanics.
There were no particles, there were just clouds of amplitude in a multiparticle configuration space and what his brain fondly imagined to be an eraser was nothing except a gigantic factor in a wavefunction that happened to factorize, it didn't have a separate existence any more than there was a particular solid factor of 3 hidden inside the number 6, if his wand was capable of altering factors in an approximately factorizable wavefunction then it should damn well be able to alter the slightly smaller factor that Harry's brain visualized as a patch of material on the eraser -
Had to see the wand as enforcing a relation between separate past and future realities, instead of changing anything over time - but I did it, Hermione, I saw past the illusion of objects, and I bet there's not a single other wizard in the world who could have.
This seems like another giant hint about magical powers.
"I had wondered if perhaps the Words of False Comprehension might be understandable to a student of Muggle science. Apparently not."
The author is disappointed that we don't get his hints.
If the conscious mind was in reality a wish-granting machine, then how could I test this without going insane?
The Mirror of Perfect Reflection has power over what is reflected within it, and that power is said to be unchallengeable. But since the True Cloak of Invisibility produces a perfect absence of image, it should evade this principle rather than challenging it.
A method to test this seems to be to become aware of one's own ego-image (stand in front of the Mirror), vividly imagine a different ego-image without identifying with it (bring in a different personality containing the same Self under an Invisibility Cloak), suddenly switch ego-identification to the other personality (swap the Invisibility Cloak in less than a second), and then become distracted so the ego-switch becomes permanent (Dumbledore traps himself in the Mirror).
I can't think of a way to test this without sanity damage. Comments?
**Thought experiment 1 – arbitrage opportunities in prediction market**
You’re Mitt Romney, biding your time before riding in on your white horse to win the US Republican presidential preselection (bear with me - I’m Australian and don’t know US politics). Anyway, you’ve had your run and you’re not too fussed, but some of the old guard want you back in the fight.
Playing out like the XKCD comic strip ‘Okay’, you scheme: ‘Maybe I can trump Trump at his own game and make a bit of dosh on the election.’
A data-scientist you keep on retainer sometimes talks about LessWrong and other dry things. One day she mentions that decentralised prediction markets are being developed, one of which is Augur. She says one can bet on the outcome of events such as elections.
You’ve made a fair few bucks in your day. You read the odd Investopedia page and a couple of random forum blog posts. And there’s that financial institute you run. Arbitrage opportunity, you think.
You don’t fancy your chances of winning the election - 40%, you reckon. So, you bet against yourself. Win the election, lose the bet; lose the election, win the bet. Losing the election doesn’t mean much to you, losing the bet doesn’t mean much to you, winning the bet doesn’t mean much to you - but winning the election means a lot to you. There ya go.
Let’s turn this into a probability weighted decision table (game theory):
Not participating in the prediction market:
- Election win: +2 value (probability 0.4)
- Election lose: -1 value (probability 0.6)
Cumulative probability-weighted value: (0.4 × 2) + (0.6 × -1) = +0.2

Participating in the prediction market:
- Election win, bet lose: +2 + 0 = +2 (probability 0.4)
- Election lose, bet win: -1 + 0 = -1 (probability 0.6)
Cumulative probability-weighted value: (0.4 × 2) + (0.6 × -1) = +0.2
They’re the same outcome!
Looks like my intuitions were wrong. Unless you value winning more than you disvalue losing, placing an additional bet - even in a different form of capital (cash vs. political capital, for instance) - isn't an arbitrage opportunity; it's just taking on additional risk.
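The distinction the table surfaces - hedging rather than arbitrage - can be checked with a quick expected-value and variance calculation. A sketch, using the same +2 / -1 election values and a hypothetical bet at fair odds (the stake size is made up for illustration):

```python
p_win = 0.4  # chance of winning the election

def ev(outcomes):
    """Expected value of a list of (probability, value) outcomes."""
    return sum(p * v for p, v in outcomes)

def var(outcomes):
    """Variance of the same outcome distribution."""
    m = ev(outcomes)
    return sum(p * (v - m) ** 2 for p, v in outcomes)

# No bet: win the election (+2) or lose it (-1).
base = [(p_win, 2), (1 - p_win, -1)]

# Bet 1 unit against yourself at fair odds: you lose the stake if you
# win the election, and gain stake * p/(1-p) = 2/3 if you lose it.
stake = 1
payout = stake * p_win / (1 - p_win)
hedged = [(p_win, 2 - stake), (1 - p_win, -1 + payout)]

print(ev(base), ev(hedged))    # identical expected values
print(var(base), var(hedged))  # but the hedged variance is far smaller
```

A fair-odds bet never changes the expected value - that's what "fair" means - so there is no arbitrage. What it buys is a narrower spread of outcomes, which is precisely the hedging that thought experiment 2 describes.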
For the record, Mitt Romney probably wouldn’t make this mistake - but then, what does this post suggest I know about prediction?
**Thought experiment 2 – insider trading**
Say you’re a C-level executive in a publicly listed enterprise. (The example doesn’t strictly require a publicly listed organisation, but it serves to illustrate my intuitions.) Say you have just been briefed by your auditors about massive fraud by a mid-level manager that will devastate your company. Ordinarily, you may not be able to safely dump your stock on the stock exchange, for several reasons - one of which is insider trading law.
Now, with a prediction market, the executive could retain their stock - thus not signalling distrust of the company (which is itself information the company may be legally obliged to disclose, since it materially influences the share price) - but place a bet on a prediction market on impending stock losses, thus hedging (not arbitraging, as demonstrated above) their position.
**Thought experiment 3 – market efficiency**
I’d expect that prediction opportunities will be most popular where individuals, weighted by their capital, believe they have private, market-relevant information. For instance, if a prediction opportunity is whether Canada’s prime minister says ‘I’m silly’ in his next TV appearance, many people might believe they know him personally well enough to judge the otherwise absurd-sounding proposition as slightly more likely - giving it a 0.2% chance rather than 0.1%. However, if you are the prime minister yourself, you could bet on this opportunity and make a quick, easy profit… I’m not sure where I was going with this anymore. But it was something about incentives to misrepresent how much relevant market information one has, and how much one's competing bettors (the people who bet WITH you) have.