Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

[Link] Second Insight: Repairing My "Repairs" or Aspiring to Rationality instead of "Rationality"

Bound_up 09 December 2016 02:27AM

[Link] The Ferrett: "The Day I Realized My Uncle Hung Around With Gay Guys"

2 CronoDAS 08 December 2016 04:26PM

[Link] Trusted Third Parties Are Security Holes

3 chron 08 December 2016 06:26AM

[Link] "What is Wrong With our Thoughts" - David Stove (1991)

4 NatashaRostova 08 December 2016 01:27AM

[Link] Self-Blackmail

1 Jack_LaSota 08 December 2016 01:15AM

Combining Prediction Technologies to Help Moderate Discussions

8 Wei_Dai 08 December 2016 12:19AM

I came across a 2015 blog post by Vitalik Buterin that contains some ideas similar to Paul Christiano's recent Crowdsourcing moderation without sacrificing quality. The basic idea in both is that it would be nice to have a panel of trusted moderators carefully pore over every comment and decide on its quality, but since that is too expensive, we can instead use some tools to predict moderator decisions, and have the trusted moderators look at only a small subset of comments in order to calibrate the prediction tools. In Paul's proposal the prediction tool is machine learning (mainly using individual votes as features), and in Vitalik's proposal it's prediction markets where people bet on what the moderators would decide if they were to review each comment.

It seems worth thinking about how to combine the two proposals to get the best of both worlds. One fairly obvious idea is to let people both vote on comments as an expression of their own opinions, and also place bets about moderator decisions, and use ML to set baseline odds, which would reduce how much the forum would have to pay out to incentivize accurate prediction markets. The hoped-for outcome is that the ML algorithm would make correct decisions most of the time, but people can bet against it when they see it making mistakes, and moderators would review the comments that have the greatest disagreements between ML and people, or between different bettors in general. Another part of Vitalik's proposal is that each commenter has to make an initial bet that moderators would decide that their comment is good. The article notes that such a bet can also be viewed as a refundable deposit. Such forced bets / refundable deposits would help solve a security problem with Paul's ML-based proposal.
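A minimal sketch of how the combined scheme might prioritize the moderators' queue. The scoring rule, field names, and all numbers below are invented for illustration; a real system would need payout mechanics and a trained model:

```python
# Hypothetical sketch of the combined proposal: an ML model sets baseline
# odds that a comment is "good", people bet against it, and moderators
# review the comments where disagreement (weighted by money at stake) is
# largest. All names and numbers here are made up for illustration.

def review_priority(ml_prob_good, market_prob_good, bet_volume):
    """Score a comment for moderator review: a large model/market
    disagreement backed by a lot of bets rises to the top."""
    return abs(ml_prob_good - market_prob_good) * bet_volume

comments = [
    {"id": 1, "ml": 0.95, "market": 0.90, "volume": 10},  # model and market agree
    {"id": 2, "ml": 0.80, "market": 0.20, "volume": 50},  # strong, well-funded disagreement
    {"id": 3, "ml": 0.30, "market": 0.45, "volume": 5},
]

queue = sorted(comments,
               key=lambda c: review_priority(c["ml"], c["market"], c["volume"]),
               reverse=True)
print([c["id"] for c in queue])  # comment 2 gets reviewed first
```

Under this toy rule, a comment the model and the market agree about almost never reaches a moderator, which is where the cost savings come from.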

Are there better ways to combine these prediction tools to help with forum moderation? Are there other prediction tools that can be used instead of, or in addition to, these?

[Link] Discussion of LW in Ezra Klein podcast [starts 47:40]

7 Yvain 07 December 2016 11:22PM

How I use Anki to learn mathematics

8 ArthurRainbow 07 December 2016 10:29PM
Here is my first Less Wrong post (after years spent blogging in French). I discovered Anki on this blog. I'm now sharing the tips I've been using for months to learn mathematics with Anki.

I'm a French researcher in fundamental computer science, which essentially means that I do mathematics all day long. But my biggest problem is that I'm really bad at learning mathematics. Let me give an example; hopefully you will be able to follow even if you don't know the domains I use in my examples. One day, I wanted to discover what category theory is. Another day, I wanted to read an introduction to number theory. Two subjects which seem really interesting. And the first pages of those books seem to be crystal clear. But after 10 or 20 pages, I have already forgotten the definitions given on the first page, and I am lost. The definitions are not hard; there are just too many of them.

Let me emphasize that I'm speaking of learning definitions you understand. As far as I know, it is mathematically useless to learn things you do not understand. (Except, maybe, if you are trying to learn the first digits of Pi, or a few things like that.)

In the category theory example, definitions are explained by showing how well-known properties of algebra or of topology are special cases of the category-theoretic definition being introduced. Those examples are really helpful for understanding the definitions, since they allow my mind to go from new notions to known notions. For example, I recall that Epi and Mono are generalizations of injective and surjective. Or of surjective and injective; I can't remember which is which. But at least I know that an arrow can be both Epi and Mono. And even now that I know that Epis in Set are the surjections, I still don't know which of the properties of surjections carry over. Which is a problem, since relying only on the Set example would suggest that Epi plus Mono implies Iso.

And, to solve this kind of problem, spaced-repetition software is wonderful. The only trouble I have is that the decks which already exist are not really interesting, because card creation usually did not follow any particular logic (some are scanned handwritten lecture notes). Furthermore, the existing decks cover mathematics I'm not interested in: I don't want to learn multiplication tables, nor calculus. In fact, the only community decks I use are the one for Rationality: From AI to Zombies and the one which teaches good learning practice. Hence, I'm now creating decks from books on topics I want to know.

The rules I follow


I create one deck per book. This way I will be able to give the decks to the community, stating each time exactly what belongs in the deck. It means I accept putting things I already know into the decks, and putting the same thing in different decks. (For example, the definition of open sets of course belongs to a topology deck, but it also belongs to the deck for a complex analysis book, where the definition is restricted to the case of metric spaces.) I don't think I'm really wasting my time: it lets me learn things I don't know perfectly but whose cards I would not have created otherwise. Furthermore, it will hopefully allow the community to begin with any book.

I must emphasize that the decks are meant to help with reading the book, not to be used instead of the book. Having read each proof at least once is often important, and my decks do not contain proofs (except when I'm reading a research paper and my goal is to understand complex proofs).

I don't create an entire deck at once. For example, I'm reading a set theory book, and right now I've only read the first chapter, because there are already too many definitions I do not remember correctly, which is confusing (type, cofinality, ...). Hence it would be too much trouble to read the second chapter yet. (Relatedly, I will need to tell users of my decks to suspend chapter n+1 until they know the content of chapter n.)

Conversely, I create many decks simultaneously, which is consistent with the way mathematics is studied at university. That is, even if I'm currently blocked in the complex analysis book, I can still read a graph theory book. And while I learn my analysis course, I can create the graph theory deck.

Kinds of cards

A basic Anki card is a question and an answer. Sometimes both sides of a card serve as both question and answer. This is useful for vocabulary: since I'm French, I want to know both the English and the French name of every definition I learn.

Otherwise, I only use cloze deletions: a basic text, with holes you must recall. (Note that cloze deletions can simulate basic cards, so once you have selected cloze deletion, you never need to switch back to the usual mode.)

Apart from the usual «front» and «extra» fields, I always have a «reference» field. In this field, I write the chapter, section, subsection and so on, as well as the theorem, lemma, corollary or definition number. I do not write down the page number, out of laziness. I think I should, because some books have long sections without any numbered results, and in those cases it takes minutes to find where a piece of information comes from. Having the theorem number is necessary because, in order to recall a result, it is usually helpful to remember its proof. And conversely, if you forget the theorem, it may be useful to read its proof again.

One last important fact: I told Anki not to erase the fields when I complete the creation of a card. Chapter and section numbers rarely change, so it is useful not to have to write them down again. As for the mathematical content, I noticed that successive cards are often similar. For example, in a linear algebra book, many results begin with «Let U, V be two vector spaces and T a morphism from U to V». It is helpful not to have to retype this in each card.


In a definition card, there are usually three deletions. The first is the name of the defined object, the second is its notation, and the third is the definition itself. If an object admits many equivalent definitions, all the definitions appear on the same card, each one as a different cloze deletion. Indeed, if your card is «A square is ...» and you answer «A diamond with a 90° angle», you don't want to be marked wrong because the card says «A rectangle with two adjacent sides of the same length». Therefore, the card is:
«{{c1::A square}} is: equivalently
-{{c2::A diamond with a 90° angle}} or
-{{c3::A rectangle with two adjacent sides of the same length}}»
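The rule above can be mechanized. Here is a small sketch (plain string building, no Anki library assumed) that turns a name plus its equivalent definitions into a single cloze text, each definition in its own deletion:

```python
# Sketch of the rule above: the name gets deletion c1 and each equivalent
# definition gets its own deletion (c2, c3, ...), so no single phrasing
# is "the" expected answer. Plain strings only; no Anki library is used.

def make_cloze(name, definitions):
    """Build an Anki cloze-deletion text for an object that admits
    several equivalent definitions."""
    lines = ["{{c1::%s}} is, equivalently:" % name]
    for i, definition in enumerate(definitions, start=2):
        lines.append("- {{c%d::%s}}" % (i, definition))
    return "\n".join(lines)

card = make_cloze("A square", [
    "a diamond with a 90 degree angle",
    "a rectangle with two adjacent sides of the same length",
])
print(card)
```

The resulting text can be pasted into a cloze note as-is, since Anki's cloze syntax is just the `{{cN::...}}` markers.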

Beware: sometimes it is better if the name and the notation belong to the same deletion. For example, if X is a subset of a vector space, it is easy to guess that «Aff(X)» is «the affine hull of X» and that «the affine hull of X» is denoted «Aff(X)». What you really want to recall is that «the set of vectors of the form Sum of x_i r_i, with x_i \in X and r_i in the field, where the sum of the r_i is 1» is «the affine hull of X».


A theorem usually admits two deletions: hypothesis and conclusion. It sometimes admits a third deletion, if the theorem has a classical name. For example, you may want to remember that the theorem stating «In a right triangle, the square of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the other two sides» is called «the Pythagorean theorem».

Beware: no hypothesis in a deletion should introduce an object. «If ..., then the center of P is not trivial» is hard to understand, whereas «Let P be a group. If ..., then the center of P is not trivial» is clearer.


A first important rule is to always write out all the hypotheses. Sometimes hypotheses are given at the beginning of a chapter and assumed to hold throughout it. But when the card appears, you won't recall those hypotheses (and you don't want to learn that, in this very book, in this very chapter, only complex vector spaces are considered).

It is also important to have empty deletions. In automata theory, some theorems deal with arbitrary monoids, and some deal only with finite monoids. If I write «If M is a {{c1::finite}} monoid» for theorems about finite monoids, and «If M is a monoid» for theorems about arbitrary monoids, the mere presence of a deletion gives away that M is assumed finite. Therefore, in the second case, I must write «If M is a {{c1::}} monoid». This is really simple to do, since Anki doesn't clear the fields when I create a card: when the hypothesis that M is finite is no longer required, I can keep the (now empty) deletion in the field while creating the new card.
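As concrete card texts, the empty-deletion rule looks like this. The two theorem statements are invented placeholders; the point is only the shape of the deletions:

```python
# Illustration of the empty-deletion rule with two invented card texts.
card_finite = "If M is a {{c1::finite}} monoid, then every element of M has an idempotent power."
card_arbitrary = "If M is a {{c1::}} monoid, then M has a neutral element."

# Both cards contain a c1 deletion, so seeing a blank in front of
# «monoid» no longer gives away whether M is assumed finite.
for card in (card_finite, card_arbitrary):
    assert "{{c1::" in card
```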

Multiple implications

This leads me to another problem. A hypothesis may have many implications, and a property may be implied by many hypotheses. Therefore, it is better to state:

«If {{c1::P1}} or {{c2::P2}} ... or {{c3::Pi}} then {{c4::Q}}», or «If {{c1::P}} then {{c2::Q1}} and {{c3::Q2}} ... and {{c4::Qi}}». Indeed, you don't want to be marked wrong because, of all the possible hypotheses, you thought of the wrong one for this very card. Here, each term in double braces represents a cloze deletion.

There would be a problem if P implied Q and R while O and P both implied Q. But this has not happened to me yet; it seems it does not happen often in math textbooks.

Order of the words

When a theorem is of the form «{{c1::The centralizer of A in G}} {{c2::divides}} {{c3::the normalizer of A in G}}», you should also have a card «The normalizer of A in G {{c1::is divided by}} the centralizer of A in G». Otherwise, you could guess the result by pattern matching, because you know the hole in the middle is either an equality or a divisibility statement. I have not yet figured out how to do this efficiently, but I should edit my old cards to satisfy this property.

Note that you always want to have a cloze deletion named c1. If you don't have it, you can't preview your card, and therefore you can't check whether your LaTeX compiles. And if you later choose to change c2 to c1, you will need to review the card again, because Anki thinks it is a new cloze deletion.


The last things I put in Anki are examples. Generally, they are of the form
«{{c1::A}}, {{c2::B}}.., {{c3::Z}} are examples of {{c4::P}}».

I used to believe that examples were not mathematics. As a Bourbakist, as a logician, I knew that an example does not belong to any theory; what matters is definitions, axioms, theorems, proofs. Since then, I have understood that examples serve at least three goals. Officially, they give intuition and show cases where theorems can be applied. Formally, they show that some set of hypotheses is not contradictory, hence that it is useful to consider those hypotheses. In practice, these very examples may be used when one wants to test statements which do not appear in the course. That is, examples are sometimes answers to exercises, or counterexamples to ideas one may have.


Once the cards are made, I ask Anki to generate all the images using LaTeX (which a perfect program would do immediately, but Anki does not). And I use synchronization to send all the cards to my smartphone (free applications, without ads; they are wonderful!) so I can study on public transport.

Of course, when I create cards, I make mistakes. Either my LaTeX doesn't compile, and Anki shows an error message on the card, or I forgot a word, or wrote one word instead of another. (For example, I always write «set» instead of «state».) If I believe there is a mistake, I suspend the card. Once at home, I synchronize and check on my computer whether it is really a mistake or not.
This is why I wrote the chapter and section number on each card: it allows me to check quickly whether the card and the book state the same thing.

Sometimes the mistake is a bad cloze deletion: it is possible that, when a part of the sentence is deleted, the sentence no longer makes sense. For example, this happens when I did not follow the rules mentioned above (as was the case for the first deck I created, before I had devised any rules). The same applies if I see a mistake so big that I no longer understand the question I asked.

Thanks to dutchie and to rhaps0dy, who corrected many typos in this post.

[Link] OpenAI releases Universe, an interface between AI agents and the real world

1 Gunnar_Zarncke 07 December 2016 10:04PM

[Link] If Prison Were a Disease, How Bad Would It Be?

3 sarahconstantin 07 December 2016 09:46PM

[Link] Mic-Ra-finance and the illusion of control

3 Benquo 07 December 2016 08:00PM

Land war in Asia

11 Apprentice 07 December 2016 07:31PM

Introduction: Here's a misconception about World War II that I think is harmful and I don't see refuted often enough.

Misconception: In 1941, Hitler was sitting pretty with most of Europe conquered and no huge difficulties on the horizon. Then, due to his megalomania and bullshit ideology, he decided to invade Russia. This was an unforced error of epic proportions. It proved his undoing, like that of Napoleon before him.

Rebuttal: In hindsight, we think of the Soviet Union as a superpower and military juggernaut which you'd be stupid to go up against. But this is not how things looked to the Germans in 1941. Consider World War I. In 1917-1918, Germany and Austria had defeated Russia at the same time as they were fighting a horrifyingly bloody war with France and Britain - and another devastating European war with Italy. In 1941, Italy was an ally, France had been subdued and Britain wasn't in much of a position to exert its strength. Seemingly, the Germans had much more favorable conditions than in the previous round. And they won the previous round.

In addition, the Germans were not crazy to think that the Red Army was a bit of a joke. The Russians had had their asses handed to them by Poland in 1920, and in 1939-1940 it had taken the Russians three months and a ridiculous number of casualties to conquer a small slice of Finland.

Nevertheless, Russia did have a lot of manpower and a lot of equipment (indeed, far more than the Germans had thought) and was a potential threat. The Molotov-Ribbentrop pact was obviously cynical and the Germans were not crazy to think that they would eventually have to fight the Russians. Being the first to attack seemed like a good idea and 1941 seemed like a good time to do it. The potential gains were very considerable. Launching the invasion was a rational military decision.

Why this matters: The idea that Hitler made his most fatal decision for irrational reasons feeds into the conception that evil and irrationality must go hand in hand. It's the same kind of thinking that makes people think a superintelligence would automatically be benign. But there is no fundamental law of the universe which prevents a bad guy from conquering the world. Hitler lost his war with Russia for perfectly mundane and contingent reasons like, “the communists had been surprisingly effective at industrialization.”

[Link] Unspeakable conversations

0 casebash 07 December 2016 03:24PM

[Link] On Philosophers Against Malaria

2 Benquo 07 December 2016 02:03AM

[Link] The Distribution of Users’ Computer Skills: Worse Than You Think

4 morganism 06 December 2016 10:42PM

Measuring the Sanity Waterline

4 moridinamael 06 December 2016 08:38PM

I've always appreciated the motto, "Raising the sanity waterline." Intentionally raising the ambient level of rationality in our civilization strikes me as a very inspiring and important goal.

It occurred to me some time ago that the "sanity waterline" could be more than just a metaphor, that it could be quantified. What gets measured gets managed. If we have metrics to aim at, we can talk concretely about strategies to effectively promulgate rationality by improving those metrics. A "rationality intervention" that effectively improves a targeted metric can be said to be effective.

It is relatively easy to concoct or discover second-order metrics. You would expect a variety of metrics to respond to the state of ambient sanity. For example, I would expect that, all else being equal, preventable deaths should decrease when overall sanity increases, because a sane society acts to effectively prevent the kinds of things that lead to preventable deaths. But of course other factors may also cause these contingent measures to fluctuate in either direction, so it's important to remember that they are only indirect measures of sanity.

The UN collects a lot of different types of data. Perusing their database makes it obvious that there are a lot of things that are probably worth caring about but which have only a very indirect relationship with what we could call "sanity". For example, one imagines that GDP would increase under conditions of high sanity, but that'd be a pretty noisy measure.

Take five minutes to think about how one might measure global sanity, and maybe brainstorm some potential metrics. Part of the prompt, of course, is to consider what we could mean by "sanity" in the first place.


This is my first pass at brainstorming metrics which may more-or-less directly indicate the level of civilizational sanity:

  • (+) Literacy rate
  • (+) Enrollment rates in primary/secondary/tertiary education
  • (-) Deaths due to preventable disease
  • (-) QALYs lost due to preventable causes
  • (+) Median level of awareness about world events
  • (-) Religiosity rate
  • (-) Fundamentalist religiosity rate
  • (-) Per-capita spent on medical treatments that have not been proven to work
  • (-) Per-capita spent on medical treatments that have been proven not to work
  • (-) Adolescent fertility rate
  • (+) Human development index
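To make "what gets measured gets managed" concrete, here is a toy sketch of folding such signed metrics into one composite number. The choice of metrics, every value, and the naive normalization are all invented; a real index would need careful weighting:

```python
# Toy composite "sanity index": (+) metrics count positively, (-) metrics
# negatively. All values are invented placeholders; a real index would
# need proper normalization, weighting, and data sources.

metrics = {
    "literacy_rate":          (+1, 0.86),  # (+), fraction of population
    "preventable_death_rate": (-1, 0.12),  # (-), fraction of deaths
    "fundamentalism_rate":    (-1, 0.20),  # (-)
    "tertiary_enrollment":    (+1, 0.70),  # (+)
}

def sanity_index(metrics):
    """Average the metrics with their marked signs, yielding a crude
    score in [-1, 1] that could be tracked over time."""
    return sum(sign * value for sign, value in metrics.values()) / len(metrics)

print(round(sanity_index(metrics), 2))  # 0.31
```

The absolute number is meaningless on its own; what a rationality intervention would aim at is moving it in a chosen direction.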

It's potentially more productive (and probably more practically difficult) to talk concretely about how best to improve one or two of these metrics via specific rationality interventions, than it is to talk about popularizing abstract rationality concepts.

Sidebar: The CFAR approach may yield something like "trickle down rationality", where the top 0.0000001% of rational people are selected and taught to be even more rational, and maybe eventually good thinking habits will infect everybody in the world from the top down. But I wouldn't bet on that being the most efficient path to raising the global sanity waterline.

As to the question of the meaning of "sanity", it seems to me that the word indicates a certain basic package of rationality.

In Eliezer's original post on the topic, he seems to suggest a platform that boils down to a comprehensive embrace of probability-based reasoning and reductionism, with enough caveats and asterisks applied to that summary that you might as well go back and read his original post to get his full point. The idea was that with a high enough sanity waterline, obvious irrationalities like religion would eventually "go underwater" and cease to be viable. I see no problem with any of the "curricula" Eliezer lists in his post.

It has become popular within the rationalsphere to push back against reductionism, positivism, Bayesianism, etc. While such critiques of "extreme rationality" have an important place in the discourse, I think for the sake of this discussion, we should remember that the median human being really would benefit from more rationality in their thinking, and that human societies would benefit from having more rational citizens. Maybe we can all agree on that, even if we continue to disagree on, e.g., the finer points of positivism.

"Sanity" shouldn't require dogmatic adherence to a particular description of rationality, but it must include at least a basic inoculation of rationality to be worthy of the name. The type of sanity that I would advocate promoting is this more "basic" kind, where religion ends up underwater, but people are still socially allowed to be contrarian in certain regards. After all, a sane society is aware of the power of conformity, and should actively promote some level of contrarianism within its population to encourage a diversity of ideas and therefore avoid getting stuck on local maxima.

My problems with Formal Friendly Artificial Intelligence work

2 whpearson 06 December 2016 08:31PM

I'm writing this to get information about the Less Wrong community and whether it is worth engaging with. I'm a bit out of the loop in terms of what the LW community is like and whether it can maintain multiple viewpoints (and how well known the criticisms are).

The TL;DR is that I have problems with treating computation in an overly formal fashion. The more pragmatic philosophy I suggest implies (but doesn't prove) that AI will not be as powerful as expected, because the physicality of computation is important and instantiating computation physically is expensive.

I think all the things I will talk about are interesting, but I don't see how they suffice when considering AI running in the real world on real computers.

continue reading »

[Link] The Internal Lawyer

2 ProofOfLogic 06 December 2016 05:00PM

Unfortunate Information

6 NatashaRostova 06 December 2016 05:31AM

I'm growing increasingly convinced that the unfortunate correlations between types of people and types of arguments lead to persistent biases in uncovering actual knowledge.

As an example, MR wrote this article (just linked to again today) on Ben Carson in 2015/11. Cowen's argument is that, while perhaps implausible (though it may have tenuous support), Carson's belief that the pyramids were used as grain storage isn't in any way more unrealistic than other religious beliefs. If anything, that singular belief is relatively realistic compared to more widely accepted miracles in Christianity or similar religions.

So why does he get so much flak for it? Cowen argues that he shouldn't, that the flak is unfounded and irrational/inconsistent. Is it? He obviously has a fair point. The downside is that, despite the belief not being particularly ridiculous when analyzed, we all have a shared expectation that people who hold this type of belief (let's call them Class B religious beliefs) ARE particularly ridiculous.

This then creates a new equilibrium, where only those people who take their Class B religious beliefs *very* seriously will share them. As a result, when Carson says the pyramids held grain, our impulse is "wacky!" But when Obama implies he believes Jesus rose from the dead, our impulse is "Boilerplate -- he probably doesn't give it much thought -- it's a typical belief, which he might not even hold."

As a result we get this constant mismatch between the type of person who holds a belief and the truth value of the belief itself. I don't mean to bring up only controversial examples, but it's no surprise that this is where these examples thrive. HBD is another common one. While there is something there, which after a fair amount of reading I suspect is overlooked, the type of person who is really passionate about HBD is (more often than not, with exceptions) not the type of person you want over for dinner.

This can suck for people like us. On one hand we want to evaluate individual pieces of information, models, or arguments based on how well they map to reality. On the other hand, if we advocate or argue for information that is correlated with an unsavory type of person, we are classified as that type of person. In this sense, for someone whose primary objectives are a good social standing and no risk to their career, it would be irrational to publicly blog about controversial topics. It's funny: Scott Alexander was retweeted by Ann Coulter for his SSC post on Trump. He was thrilled, but imagine if he were an aspiring professor? I think he would probably still be fine, because his unique level of genius would still shine through, but lately professors I know who have non-mainstream political views have stopped sharing them publicly for fear of controversy.

This is a topic I think about a lot, and one I now notice becoming a bigger issue in the US. And I wonder how exactly to respond. The contradiction between rationally evaluating an idea and the irrationality of sharing that analysis is growing.


Open thread, Dec. 05 - Dec. 11, 2016

3 MrMind 05 December 2016 07:52AM

If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] David Allen vs. Mark Forster

2 entirelyuseless 05 December 2016 01:04AM

Beware of identifying with schools of thought

9 ChristianKl 05 December 2016 12:30AM

As a child, I decided to take a philosophy course as an extracurricular activity. In it, the teacher explained to us the notion of schools of philosophical thought. According to him, classifying philosophers as adhering either to school A or to school B is typical of Anglo thought.

It deeply annoys me when Americans talk about Democrat and Republican political thought and suggest that you are either a Democrat or a Republican. The notion that allegiance to one political camp is supposed to dictate your political beliefs feels deeply wrong.

A lot of Anglo high schools do policy debating. The British do it a bit differently from the Americans, but in both cases it boils down to students having to defend an assigned side.

Traditionally there's nearly no debating at German high schools.

When writing political essays in German schools, there's a section where it's important to present your own view. Your own view isn't supposed to be one you simply copy from another person. Good thinking is supposed to provide a sophisticated perspective on the topic, a synthesis of arguments from different sources, rather than following a single source.

That's because German intellectual thought has the ideal of 'Bildung'. In Imprisoned in English, Anna Wierzbicka tells me that 'Bildung' is a particularly German construct and that the word isn't easily translatable into other languages. The nearest English word is 'education'; 'Bildung' can also be translated as 'creation'. It's about creating a sophisticated person, one more developed than the average person on the street who doesn't have 'Bildung'. Having 'Bildung' signals high status.

According to this ideal, you learn about different viewpoints and then develop a sophisticated opinion. Not having a sophisticated opinion is low class. In liberal social circles in the US, a person who agrees with everything the Democratic Party does would have a respectable political opinion. In German intellectual life, that person would be seen as a credulous, low-status idiot who failed to develop a sophisticated opinion. A low-status person isn't supposed to be able to fake being high status by memorizing the teacher's password.

If you ask me the political question "Do you support A or B?", my response is: "Well, I want neither A nor B. There are these reasons for A, and those reasons for B. My opinion is that we should do C, which solves those problems better and takes more concerns into account." There is no high-status option A such that I could signal status simply by saying I'm in favour of A.

How does this relate to non-political opinions? In Anglo thought, philosophical positions belong to different schools of thought, and members of one school are supposed to fight for their school being right and better than the other schools.

If we take the perspective of hardcore materialism, a statement like "One of the functions of the heart is to pump blood" wouldn't be a statement that can be objectively true, because it's teleology: the notion of function isn't made up of atoms.

From my perspective as a German, there's little to be gained by subscribing to the hardcore materialist perspective. It makes a lot of practical sense to say that such a statement can be objectively true. I have gained the more sophisticated view of the world that I want to have: not only statements about arrangements of atoms can be objectively true, but also statements about the functions of organs. That move is high status in German intellectual discourse, but it might be low status in Anglo discourse, because it can be seen as being a traitor to the school of materialism.

Of course, that doesn't mean that no Anglo accepts that the above statement can be objectively true. On the margin, German intellectual norms make it easier to accept the statement as objectively true. Following Hegel, you might say that thesis and antithesis come together in a synthesis, instead of thesis or antithesis winning the argument.

The German Wikipedia page for "continental philosophy" tells me that the term is commonly used in English philosophy, and mostly used derogatorily. From the German perspective, the battle between "analytic philosophy" and "continental philosophy" is not a focus of debate. The goal isn't to decide which school is right, but to develop sophisticated positions that describe the truth better than answers you could get by memorizing the teacher's password.

One classic example of an unsophisticated position that's common in analytic philosophy is the idea that all intellectual discourse is supposed to be based on logic. In Is semiotics bullshit?, PhilGoetz stumbles upon a professor of semiotics who claims: "People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis."

That's seen as a strong violation of how reasoning based on logical positivism is supposed to work. It violates the memorized teacher's password. But is it true? To answer that, we have to ask what 'logical basis' means. David Chapman analyzes the notion of logic in Probability theory does not extend logic. In it he claims that in academic philosophical discourse, the word 'logic' means predicate logic.

Predicate logic can make claims such as:

(a) All men are mortal.

(b) Socrates is a man.


(c) Socrates is mortal.

According to Chapman, the key trick of predicate logic is logical quantification: every claim has to be able to be evaluated as true or false without looking at the context.
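This quantification requirement can be illustrated with a small sketch (my own toy example in Python, not from Chapman's article): every claim is evaluated against a fixed domain of individuals, so each one gets a truth value without any reference to conversational context.

```python
# Toy predicate-logic evaluation over a fixed domain. Each claim is
# context-free: it depends only on the domain and the predicates.

domain = {"socrates", "plato", "a_rock"}
is_man = {"socrates", "plato"}
is_mortal = {"socrates", "plato", "a_rock"}

# (a) All men are mortal -- a universally quantified claim.
all_men_mortal = all(x in is_mortal for x in domain if x in is_man)

# (b) Socrates is a man.
socrates_is_man = "socrates" in is_man

# (c) Socrates is mortal -- follows from (a) and (b).
socrates_is_mortal = "socrates" in is_mortal

print(all_men_mortal, socrates_is_man, socrates_is_mortal)  # True True True
```

Note that nothing in this evaluation depends on who is asking or why; that context-independence is exactly the property that a claim like "Rats are like humans" lacks.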

We want to know whether a chemical substance is safe for human use. Unfortunately our ethical review board doesn't let us test the substance on humans. Fortunately they allow us to test the substance on rats. Hurray, the rats survive.

(a) The substance is safe for rats.

(b) Rats are like humans


(c) The substance is safe for humans.

The problem with `Rats are like humans` is that it isn’t a claim that’s simply true or false.

The truth value of the claim depends on what conclusions you want to draw from it. Predicate logic can only evaluate the statement as true or false; it can't judge whether it's an appropriate analogy, because that requires looking at the deeper meaning of the statement `Rats are like humans` to decide whether `Rats are like humans` in the context we care about.

Do humans sometimes make mistakes when they try to reason by analogy? Yes, they do. At the same time, they also come to true conclusions by reasoning through analogy. Saying "People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis" sounds fancy, but if we reasonably define the term 'logical basis' as being about predicate logic, it's true.

Does that mean that you should switch from the analytic school to the school of semiotics? No, that's not what I'm arguing. I argue that, just as you shouldn't let tribalism influence you in politics by identifying as a Democrat or Republican, you should keep in mind that philosophical debates, like policy debates, are seldom one-sided.

Daring to slay another sacred cow: maybe we also shouldn't go around thinking of ourselves as Bayesians. If you are on the fence on that question, I encourage you to read David Chapman's splendid article referenced above:


Probability theory does not extend logic

Some thoughts on double crux.

4 ProofOfLogic 04 December 2016 06:03PM

[Epistemic status: quite speculative. I've attended a CFAR workshop including a lesson on double crux, and found it more counterintuitive than I expected. I ran my own 3-day event going through the CFAR courses with friends, including double crux, but I don't think anyone started doing double crux based on my attempt to teach it. I have been collecting notes on my thoughts about double crux so as not to lose any; this is a synthesis of some of those notes.]

This is a continuation of my attempt to puzzle at Double Crux until it feels intuitive. While I think I understand the _algorithm_ of double crux fairly well, and I _have_ found it useful when talking to someone else who is trying to follow the algorithm, I haven't found that I can explain it to others in a way that causes them to do the thing, and I think this reflects a certain lack of understanding on my part. Perhaps others with a similar lack of understanding will find my puzzling useful.

Here's a possible argument for double crux as a way to avoid certain conversational pitfalls. This argument is framed as a sort of "diff" on my current conversational practices, which are similar to those mentioned by CCC. So, here is approximately what I do when I find an interesting disagreement:


  1. We somehow decide who states their case first. (Usually, whoever is most eager.) That person gives an argument for their side, while checking for understanding from the other person and looking for points of disagreement with the argument.
  2. The other person asks questions until they think they understand the whole argument; or, sometimes, skip to step 3 when a high-value point of disagreement is apparent before the full argument is understood.
  3. Recurse into step 1 for the most important-seeming point of disagreement in the argument offered. (Again the person whose turn it is to argue their case will be chosen "somehow"; it may or may not switch.)
  4. If that process is stalling out (the argument is not understood by the other person after a while of trying, or the process is recursing into deeper and deeper sub-points without seeming to get closer to the heart of the disagreement), switch roles; the person who has explained the least of their view should now give an argument for their side.

Steps 1-3 can have a range of possible results [using 'you' as the argument-giver and 'they' as the receiver]:
  • In the best case, they accept your argument, perhaps after a little recursion into sub-arguments to clarify.
  • In a very good case, the process finds a lot of common ground (in the form of parts of the argument which are agreed upon) and a precise point of disagreement, X, such that if either person changed their mind about X they'd change their mind about the whole. They can now dig into X in the same way they dug into the overall disagreement, with confidence that resolving X is a good way to resolve the disagreement.
  • In a slightly less good case, a precise disagreement X is found, but it turns out that the argument you gave wasn't your entire reason for believing what you believe. I.e., you've given an argument which you believe to be sufficient to establish the point, but not necessary. This means resolving the point of disagreement X only potentially changes their mind. You may also find that your argument fails, in which case you'd give another argument.
  • In a partial failure case, all the points of disagreement show up right away; i.e., you fail to find any common ground for arguments to gain traction. It's still possible to recurse into points of disagreement in this case, and doing so may still be productive, but often this is a sign that you haven't understood the other person well enough or that you've put them on the defensive so that they're biased to disagree.
  • In a failure case, you keep digging down into reasons why they don't buy one point after another, and never really get anywhere. You don't make contact with anything which would change their mind, because you're digging into your reasons rather than theirs. Your search for common ground is failing.
  • In a failure case, you've made a disingenuous argument which your motivated cognition thinks they'll have a hard time refuting, but which is unlikely to convince them. A likely outcome is a long, pointless discussion or an outright rejection of the argument without any attempt to point at specific points of disagreement with it.

I think double crux can be seen as an attempt to modify the process of 1-4 in a way which attempts to make the better outcomes more common. You can still give your same argument in double crux, but you're checking earlier to see whether it will convince the other person. Suppose you have an argument for the disagreement D:


A.

A implies B.

B implies C.

C implies D.

So, D.

In my algorithm, you start by checking for agreement with "A". You then check for agreement with "A implies B". And so on, until a point of disagreement is reached. In double crux, you are helping the other person find cruxes by suggesting cruxes for them. You can ask "If you believed C, would you believe D?" Then, if so, "If you believed B, would you believe D?" and so on. Going through the argument backwards like this, you only keep going for so long as you have some assurance that you've connected with their model of D. Going through the argument in the forward direction, as in my method, you may recurse into further and further sub-arguments starting at a point of disagreement like "B implies C" and find that you never make contact with something in their model which has very much to do with their disbelief of D. Also, looking for the other person's cruxes encourages honest curiosity about their thinking, which makes the whole process go better.
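The contrast between the two directions can be made concrete with a toy sketch (my own illustration in Python; the functions standing in for the other person's answers are hypothetical):

```python
# Checking an argument chain A -> B -> C -> D forwards vs. backwards.
# The callables stand in for asking the other person about a claim.

chain = ["A", "B", "C", "D"]

def forward_check(agrees):
    """Walk the argument forwards; return the first implication the
    other person disputes (you would then recurse into it)."""
    for i in range(len(chain) - 1):
        step = f"{chain[i]} implies {chain[i + 1]}"
        if not agrees(step):
            return step
    return None  # they accept the whole argument

def backward_crux_check(would_change_mind):
    """Walk backwards from the conclusion, asking 'If you believed X,
    would you believe D?'. Return the deepest claim still connected
    to their belief in D -- a candidate crux."""
    crux = None
    for claim in reversed(chain[:-1]):
        if would_change_mind(claim):
            crux = claim
        else:
            break  # we've lost contact with their model of D
    return crux
```

The forward check can recurse far into your own reasons without ever touching theirs; the backward check stops as soon as it loses contact with their model of D, which is the property the post is pointing at.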

Furthermore, you're looking for your own cruxes at the same time. So, you're more likely to think about arguments which are critical to your belief, and much less likely to try disingenuous arguments designed to be merely difficult to refute.

A quote from Feynman's Cargo Cult Science:

The first principle is that you must not fool yourself—and you are the easiest person to fool.  So you have to be very careful about that.  After you’ve not fooled yourself, it’s easy not to fool other scientists.  You just have to be honest in a conventional way after that. 


I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I’m not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being.  We’ll leave those problems up to you and your rabbi.  I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to do when acting as a scientist.  And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.


This kind of "bending over backwards to show how maybe you're wrong" (in service of not fooling yourself) is close to double crux. Listing cruxes puts us in the mindset of thinking about ways we could be wrong.

On the other hand, I notice that in a blog post like this, I have a hard time really explaining how I might be wrong before I've explained my basic position. It seems like there's still a role for making arguments forwards, rather than backwards. In my (limited) experience, double crux still requires each side to explain themselves (which then involves giving some arguments) before/while seeking cruxes. So perhaps double crux can't be viewed as a "pure" technique, and really has to be flexible, mixed with other approaches including the one I gave at the beginning. But I'm not sure what the best way to achieve that mixture is.

[Link] [Secular Solstice UK] We all have a part to play

2 Raemon 04 December 2016 05:51PM

[Link] A Few Billionaires Are Turning Medical Philanthropy on Its Head

0 ike 04 December 2016 03:08PM

[Link] Construction of practical quantum computers radically simplified

0 morganism 03 December 2016 11:49PM

[Link] This AI Boom Will Also Bust

4 username2 03 December 2016 11:21PM

[Link] Crowdsourcing moderation without sacrificing quality

8 paulfchristiano 02 December 2016 09:47PM

[Link] When companies go over 150 people......

2 NancyLebovitz 02 December 2016 07:57PM

[Link] Contra Robinson on Schooling

4 Vaniver 02 December 2016 07:05PM

Weekly LW Meetups

0 FrankAdamek 02 December 2016 04:47PM

Question about metaethics

4 pangel 02 December 2016 10:21AM

In a recent Facebook post, Eliezer said:

You can believe that most possible minds within mind design space (not necessarily actual ones, but possible ones) which are smart enough to build a Dyson Sphere, will completely fail to respond to or care about any sort of moral arguments you use, without being any sort of moral relativist. Yes. Really. Believing that a paperclip maximizer won't respond to the arguments you're using doesn't mean that you think that every species has its own values and no values are better than any other.

And so I think part of the metaethics sequence went over my head.

I should re-read it, but I haven't yet. In the meantime I want to give a summary of my current thinking and ask some questions.

My current take on morality is that, unlike facts about the world, morality is a question of preference. The important caveats are :

  1. The preference set has to be consistent. Until we develop something akin to CEV, humans are probably stuck with a pre-morality where they behave and think over time in contradictory ways, and at the same time believe they have a perfectly consistent moral system.
  2. One can be mistaken about morality, but only in the sense that, unknown to them, they actually hold values different from what the deliberative part of their mind thinks it holds. An introspection failure or a logical error can cause the mistake. Once we identify ground values (not that it's effectively feasible), "wrong" is a type error.
  3. It is OK to fight for one's morality. Just because it's subjective doesn't mean one can't push for it. So "moral relativism" in the strong sense isn't a consequence of morality being a preference. But "moral relativism" in the weak, technical sense (it's subjective) is.

I am curious about the following :

  • How does your current view differ from what I've written above?
  • How exactly does that differ from the thesis of the metaethics sequence? In the same post, Eliezer also said: "and they thought maybe I was arguing for moral realism...". I did kind of think that, at times.
  • I specifically do not understand this : "Believing that a paperclip maximizer won't respond to the arguments you're using doesn't mean that you think that every species has its own values and no values are better than any other.". Unless "better" is used in the sense of "better according to my morality", but that would make the sentence barely worth saying.


[Link] Optimizing the news feed

9 paulfchristiano 01 December 2016 11:23PM

Which areas of rationality are underexplored? - Discussion Thread

13 casebash 01 December 2016 10:05PM

There seems to actually be real momentum behind this attempt at reviving Less Wrong. One of the oldest issues on LW has been the lack of content. For this reason, I thought it might be worthwhile to open a thread where people can suggest how we can expand the scope of what people write about, in order for us to have sufficient content.

Does anyone have any ideas about which areas of rationality are underexplored? Please only list one area per comment.

Making intentions concrete - Trigger-Action Planning

22 Kaj_Sotala 01 December 2016 08:34PM

I'll do it at some point.

I'll answer this message later.

I could try this sometime.

For most people, all of these thoughts have the same result. The thing in question likely never gets done - or if it does, it's only after remaining undone for a long time and causing a considerable amount of stress. Leaving the "when" ambiguous means that there isn't anything that would propel you into action.

What kinds of thoughts would help avoid this problem? Here are some examples:

  • When I find myself using the words "later" or "at some point", I'll decide on a specific time when I'll actually do it.
  • If I'm given a task that would take under five minutes, and I'm not in a pressing rush, I'll do it right away.
  • When I notice that I'm getting stressed out about something that I've left undone, I'll either do it right away or decide when I'll do it.
Picking a specific time or situation to serve as the trigger of the action makes it much more likely that it actually gets done.

Could we apply this more generally? Let's consider these examples:
  • I'm going to get more exercise.
  • I'll spend less money on shoes.
  • I want to be nicer to people.
These goals all have the same problem: they're vague. How will you actually implement them? As long as you don't know, you're also going to miss potential opportunities to act on them.

Let's try again:
  • When I see stairs, I'll climb them instead of taking the elevator.
  • When I buy shoes, I'll write down how much money I've spent on shoes this year.
  • When someone does something that I like, I'll thank them for it.
These are much better. They contain both a concrete action to be taken, and a clear trigger for when to take it.
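Since the post calls TAPs "when-then" rules ("if-then", for you programmers), here is a toy sketch of that framing (my own analogy in Python, not from the cited literature): each rule pairs a concrete, checkable trigger with a concrete action, and a vague goal simply never fires.

```python
# TAPs as a list of (trigger, action) rules. The triggers are concrete
# predicates over the current situation, mirroring the examples above.

taps = [
    (lambda situation: "stairs" in situation,
     "climb the stairs instead of taking the elevator"),
    (lambda situation: "buying shoes" in situation,
     "write down how much I've spent on shoes this year"),
    (lambda situation: "someone did something I like" in situation,
     "thank them for it"),
]

def act_on(situation):
    """Return the actions fired by a situation. A vague goal like
    'get more exercise' has no trigger here, so it can never fire."""
    return [action for trigger, action in taps if trigger(situation)]

print(act_on("I see stairs ahead"))
# -> ['climb the stairs instead of taking the elevator']
```

The point of the analogy is only that a rule can fire automatically when its trigger is a specific observable condition; "I'm going to get more exercise" corresponds to a rule with no trigger at all.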

Turning vague goals into trigger-action plans

Trigger-action plans (TAPs; known as "implementation intentions" in the academic literature) are "when-then" ("if-then", for you programmers) rules used for behavior modification [i]. A meta-analysis covering 94 studies and 8461 subjects [ii] found them to improve people's ability to achieve their goals [iii]. The goals in question included ones such as reducing the amount of fat in one's diet, getting exercise, using vitamin supplements, carrying on with a boring task, determination to work on challenging problems, and calling out racist comments. Many studies also allowed the subjects to set their own, personal goals.

TAPs were found to work both in laboratory and real-life settings. The authors of the meta-analysis estimated the risk of publication bias to be small, as half of the studies included were unpublished ones.

Designing TAPs

TAPs work because they help us notice situations where we could carry out our intentions. They also help automate the intentions: when a person is in a situation that matches the trigger, they are much more likely to carry out the action. Finally, they force us to turn vague and ambiguous goals into more specific ones.

A good TAP fulfills three requirements [iv]:
  • The trigger is clear. The "when" part is a specific, visible thing that's easy to notice. "When I see stairs" is good, "before four o'clock" is bad (when before four exactly?). [v]
  • The trigger is consistent. The action is something that you'll always want to do when the trigger is fulfilled. "When I leave the kitchen, I'll do five push-ups" is bad, because you might not have the chance to do five push-ups each time you leave the kitchen. [vi]
  • The TAP furthers your goals. Make sure the TAP is actually useful!
However, there is one group of people who may need to be cautious about using TAPs. One paper [vii] found that people who ranked highly on so-called socially prescribed perfectionism did worse on their goals when they used TAPs. These kinds of people are sensitive to other people's opinions about them, and are often highly critical of themselves. Because TAPs create an association between a situation and a desired way of behaving, it may make socially prescribed perfectionists anxious and self-critical. In two studies, TAPs made college students who were socially prescribed perfectionists (and only them) worse at achieving their goals.

For everyone else however, I recommend adopting this TAP:

When I set myself a goal, I'll turn it into a TAP.

Origin note

This article was originally published in Finnish at kehitysto.fi. It draws heavily on CFAR's material, particularly the workbook from CFAR's November 2014 workshop.


[i] Gollwitzer, P. M. (1999). Implementation intentions: strong effects of simple plans. American psychologist, 54(7), 493.

[ii] Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta‐analysis of effects and processes. Advances in experimental social psychology, 38, 69-119.

[iii] Effect size d = .65, 95% confidence interval [.6, .7].

[iv] Gollwitzer, P. M., Wieber, F., Myers, A. L., & McCrea, S. M. (2010). How to maximize implementation intention effects. Then a miracle occurs: Focusing on behavior in social psychological theory and research, 137-161.

[v] Wieber, Odenthal & Gollwitzer (2009; unpublished study, discussed in [iv]) tested the effect of general and specific TAPs on subjects driving a simulated car. All subjects were given the goal of finishing the course as quickly as possible, while also damaging their car as little as possible. Subjects in the "general" group were additionally given the TAP, "If I enter a dangerous situation, then I will immediately adapt my speed". Subjects in the "specific" group were given the TAP, "If I see a black and white curve road sign, then I will immediately adapt my speed". Subjects with the specific TAP managed to damage their cars less than the subjects with the general TAP, without being any slower for it.

[vi] Wieber, Gollwitzer, et al. (2009; unpublished study, discussed in [iv]) tested whether TAPs could be made even more effective by turning them into an "if-then-because" form: "when I see stairs, I'll use them instead of taking the elevator, because I want to become more fit". The results showed that the "because" reasons increased the subjects' motivation to achieve their goals, but nevertheless made TAPs less effective.

The researchers speculated that the "because" might have changed the mindset of the subjects. While an "if-then" rule causes people to automatically do something, "if-then-because" leads people to reflect upon their motives and takes them from an implementative mindset to a deliberative one. Follow-up studies testing the effect of implementative vs. deliberative mindsets on TAPs seemed to support this interpretation. This suggests that TAPs are likely to work better if they can be carried out as consistently and with as little thought as possible.

[vii] Powers, T. A., Koestner, R., & Topciu, R. A. (2005). Implementation intentions, perfectionism, and goal progress: Perhaps the road to hell is paved with good intentions. Personality and Social Psychology Bulletin, 31(7), 902-912.

Downvotes temporarily disabled

16 Vaniver 01 December 2016 05:31PM

This is a stopgap measure until admins get visibility into comment voting, which will allow us to find sockpuppet accounts more easily.


The best place to track changes to the codebase is the github LW issues page.

[Link] Hate Crimes: A Fact Post

7 sarahconstantin 01 December 2016 04:25PM

December 2016 Media Thread

4 ArisKatsaris 01 December 2016 07:41AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] "Decisions" as thoughts which lead to actions.

1 ProofOfLogic 01 December 2016 12:47AM

[Link] What they don’t teach you at STEM school

8 RomeoStevens 30 November 2016 07:20PM
