
Comment author: Douglas_Knight 01 January 2017 11:42:19PM 0 points [-]

Most questions don't have a preferred direction. Look at Scott's predictions. Which direction should you point each one?

Most people don't make enough predictions to get a statistically significant difference between the two sides of the scale. And even if they do, their bias to the extremes ("overconfidence") swamps the effect.

Comment author: Unnamed 02 January 2017 12:57:47AM *  0 points [-]

Just looking at the 50% questions, here is how I would classify each one: 1) whether either direction is an event rather than the default, and 2) whether either direction is probably preferred by Scott:

US unemployment to be lower at end of year than beginning: 50%

Neither direction is an event, Yes is preferred.

SpaceX successfully launches a reused rocket: 50%

Yes is an event, Yes is preferred.

California’s drought not officially declared over: 50%

No is an event, No is preferred.

At least one SSC post > 100,000 hits: 50%

Yes is an event, Yes is preferred.

UNSONG will get > 1,000,000 hits: 50%

Yes is an event, Yes is preferred.

UNSONG will not miss any updates: 50%

No is an event, Yes is preferred.

I will be involved in at least one published/accepted-to-publish research paper by the end of 2016: 50%

Yes is an event, Yes is preferred.

[Over] 10,000 Twitter followers by end of this year: 50%

Yes is an event, Yes is preferred.

I will not get any new girlfriends: 50%

No is an event, perhaps No is preferred.

I will score 95th percentile or above in next year’s PRITE: 50%

Yes is an event, Yes is preferred.

I will not have any inpatient rotations: 50%

No is an event, perhaps Yes is preferred.

I get at least one article published on a major site like Huffington Post or Vox or New Statesman or something: 50%

Yes is an event, Yes is preferred.

I don’t attend any weddings this year: 50%

No is an event, perhaps No is preferred.

Scott would know better than I do, and he also could have marked a subset that he actually cared about.

Including the "perhaps"es, I count that 7/12 happened in the preferred direction, and 5/11 of the events happened. With this small sample there's no sign of optimism bias, and he's also well-calibrated on whether a non-default event will happen. Obviously you'd want to do this with the full set of questions and not just the 50% ones to get a more meaningful sample size.

Comment author: Douglas_Knight 01 January 2017 06:27:04PM *  1 point [-]

You can measure both calibration and accuracy. You can start with predictions of arbitrary granularity and then force them into whatever boxes you want.

For calibration, it isn't very useful to score events at 50%. Instead of making boxes of 50, 60, 70, 80, 90, 95, 99%, you should do something like 55, 70, 80, 90, 95, 99%. Taking an event that you "really" think is 50/50 and forcing yourself to choose a side to make it 45/55 is no worse than taking an event that you think is 45/55 and forcing it to be either 50 or 60%.

Also, the jump from 95 to 99 is pretty big. Better to add an intermediate category of 97 or 98. Or just replace 99 with 98.


I think 60, 80, 90, 95, 98 would be a good set of bins for beginners.
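
A quick sketch of what that binning could look like in code (the snap-to-nearest-bin rule and the sample predictions are my own illustrative assumptions, not something specified in the comment):

```python
# Illustrative sketch: snap free-form probabilities to the suggested
# 60/80/90/95/98% bins and tally per-bin hit rates for calibration.
from collections import defaultdict

BINS = [0.60, 0.80, 0.90, 0.95, 0.98]

def to_bin(p):
    """Flip sub-50% predictions onto the high side, then snap to the nearest bin."""
    said_yes = p >= 0.5
    q = p if said_yes else 1 - p
    return said_yes, min(BINS, key=lambda b: abs(b - q))

def calibration(predictions):
    """predictions: list of (probability_of_yes, yes_happened) pairs."""
    tallies = defaultdict(lambda: [0, 0])  # bin -> [hits, total]
    for p, happened in predictions:
        said_yes, b = to_bin(p)
        hit = happened if said_yes else not happened
        tallies[b][0] += int(hit)
        tallies[b][1] += 1
    return {b: (hits, total) for b, (hits, total) in sorted(tallies.items())}

# Hypothetical predictions, just to show the output shape.
print(calibration([(0.55, True), (0.97, True), (0.30, True)]))
```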

Comment author: Unnamed 01 January 2017 07:45:24PM 2 points [-]

50% predictions can be useful if you are systematic about which option you count as "yes". e.g., "I estimate a 50% chance that I will finish writing my book this year" is a meaningful prediction. If I am subject to standard biases, then we would expect this to have less than a 50% chance of happening, so the outcomes of predictions like this provide a meaningful test of my prediction ability.

2 conventions you could use for 50% predictions: 1) pose the question such that "yes" means an event happened and "no" is the default, or 2) pose the question such that "yes" is your preferred outcome and "no" is the less desirable outcome.

Actually, it is probably better to pick one of these conventions and use it for all predictions (so you'd use the whole range from 0-100, rather than just the top half of 50-100). "70% chance I will finish my book" is meaningfully different from "70% chance I will not finish my book"; we are throwing away information about possible miscalibration by treating them both merely as 70% predictions.

Even better, you could pose the question however you like and also note when you make your prediction 1) which outcome (if either) is an event rather than the default and 2) which outcome (if either) you prefer. Then at the end of the year you could look at 3 graphs, one which looks at whether the outcome that you considered more likely occurred, one that looks at whether the (non-default) event occurred, and one which looks at whether your preferred outcome occurred.
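
A minimal sketch of that bookkeeping in code (the field names, the treatment of exactly-50% predictions in the "more likely" view, and the sample resolutions are all illustrative assumptions, not an established tool):

```python
# Illustrative sketch: record each prediction with metadata about which side
# is the non-default event and which side is preferred, then score the three
# views separately at the end of the year.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    text: str
    p_yes: float                      # stated probability that "yes" happens
    yes_is_event: Optional[bool]      # is "yes" the non-default event? (None if neither side is)
    yes_is_preferred: Optional[bool]  # is "yes" the preferred outcome? (None if no preference)
    outcome: bool                     # did "yes" happen?

def rate(flags):
    """Fraction of flags that are True, or None if there is nothing to count."""
    return sum(flags) / len(flags) if flags else None

def three_views(preds):
    favored = rate([p.outcome == (p.p_yes >= 0.5) for p in preds])
    events = rate([p.outcome == p.yes_is_event for p in preds if p.yes_is_event is not None])
    preferred = rate([p.outcome == p.yes_is_preferred for p in preds if p.yes_is_preferred is not None])
    return {"favored side happened": favored,
            "non-default event happened": events,
            "preferred outcome happened": preferred}

# Hypothetical resolutions, purely for illustration.
preds = [
    Prediction("SpaceX successfully launches a reused rocket", 0.5, True, True, True),
    Prediction("UNSONG will not miss any updates", 0.5, False, True, False),
]
print(three_views(preds))
```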

Comment author: Unnamed 05 December 2016 10:09:35PM *  3 points [-]

(This is Dan from CFAR)

Here are a few examples of disagreements where I'd expect double crux to be an especially useful approach (assuming that both people hit the prereqs that Duncan listed):

2 LWers disagree about whether attempts to create "Less Wrong 2.0" should try to revitalize Less Wrong or create a new site for discussion.

2 LWers disagree on whether it would be good to have a norm of including epistemic effort metadata at the start of a post.

2 EAs disagree on whether the public image of EA should make it seem commonsense and relatable or if it should highlight ways in which EA is weird and challenges mainstream views.

2 EAs disagree on the relative value of direct work and earning to give.

2 co-founders disagree on whether their startup should currently be focusing most of its efforts on improving their product.

2 housemates disagree on whether the dinner party that they're throwing this weekend should have music.

These examples share some features:

  • The people care about getting the right answer because they (or people that they know) are going to do something based on the answer, and they really want it to go well.
  • The other person's head is one of the better sources of information that you have available. You can't look these things up on Wikipedia, and the other person's opinion seems likely to reflect some relevant experiences/knowledge/skills/models/etc. that you haven't fully incorporated into your own thinking.
  • Most likely, neither person came into the conversation with a clear, detailed model of the reasoning behind their own conclusion.

So digging into your own thinking and the other person's thinking on the topic is one of the more promising options available for making progress towards figuring out something that you care about. And focusing on cruxes can help keep that conversation on track so that it can engage with the most relevant differences between your models and have a good chance of leading to important updates.

There are other cases where double crux is also useful which don't share all of these features, but these 6 examples are in what I think of as the core use case for double crux. I think it helps to have these sorts of examples in mind (ideally ones that actually apply to you) as you're trying to understand the technique, rather than just trying to apply double crux to the general category of "disagreement".

Comment author: MrMind 01 December 2016 02:10:52PM *  0 points [-]

Correct me if I'm wrong. You are searching for a sentence B such that:

1) if B then A
2) if not B, then not A. Which implies if A then B.

Which implies that you are searching for an equivalent argument. How can an equivalent argument have explanatory power?

Comment author: Unnamed 05 December 2016 09:41:07PM 2 points [-]

(This is Dan from CFAR)

This is easier to think about in the context of specific examples, rather than as abstract logical propositions. You can generally tell when statement B is progress towards making the disagreement about A more concrete / more tractable / closer to the underlying source of disagreement.

I typically think of the arrows as causal implication between beliefs. For example, my belief that school uniforms reduce bullying causes me to believe that students should wear uniforms. With logical implication the contrapositive is equivalent to the original statement (as you say). With causal implication, trying to do the contrapositive would give us something like "If I believed that students should not wear uniforms, that would cause me to believe that uniforms don't reduce bullying" which is not the sort of move that I want to make in my reasoning.

Another way to look at this, while sticking to logical implication, is that we don't actually have B-->A. Instead we have (B & Q & R & S & T ... & Z) --> A. For example, I believe that students should wear uniforms because uniforms reduce bullying, and uniforms are not too expensive, and uniforms do not reduce learning, and uniforms do not cause Ebola, etc. If you take the contrapositive, you get ~A --> (~B or ~Q or ~R or ~S or ~T ... or ~Z). Or, in English, I have many cruxes for my belief that students should wear uniforms, and changing my mind about that belief could involve changing my mind about any one of those cruxes.
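
In notation, that is just (with De Morgan's law turning the negated conjunction into a disjunction):

$$(B \land Q \land R \land \dots \land Z) \rightarrow A \quad\Longleftrightarrow\quad \neg A \rightarrow (\neg B \lor \neg Q \lor \dots \lor \neg Z)$$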

Comment author: MrMind 02 December 2016 10:12:32AM 0 points [-]

I think the unofficial, undercover ban on basilisks should be removed. Acausal trade is an important topic and should be openly discussed.


Comment author: Unnamed 03 December 2016 03:06:50AM 4 points [-]

As of October 2015, "The Roko's basilisk ban isn't in effect anymore."

Comment author: scarcegreengrass 01 December 2016 06:23:18PM 0 points [-]

I find the terms System 1 and System 2 difficult to memorize. Are there existing synonyms for these?

Comment author: Unnamed 01 December 2016 08:24:47PM 2 points [-]

System 1 came first, in evolutionary terms.

Comment author: mk54 28 November 2016 06:56:17PM *  1 point [-]

The theory of comedy that I find most convincing is that the things we find "funny" are non-threatening violations of social mores. According to that theory, being funny isn't so much about being rational as about understanding the unwritten rules that govern society. More specifically, it's about understanding when breaking social rules is actually acceptable. It's kind of like speeding: it's theoretically illegal to go 26 in a 25 mph zone, but as a practical matter no cop is going to pull you over for it. I'm not sure that an especially detailed understanding of social norms is directly useful to becoming more rational, except maybe to the extent that it makes you more consciously aware of them and how they influence your thinking.

Comment author: Unnamed 29 November 2016 09:14:56AM 1 point [-]

"Non-threatening violations of social mores" seems to underspecify what things are funny. Most non-threatening norm violations lead to other reactions like cringe, annoyance, sympathy, contempt, confusion, or indifference rather than comedy. Curb Your Enthusiasm and Mr. Bean had lots of funny scenes which involved norm violations, but if their creators were less talented then people would've cringed instead of laughing (and some people do that anyways). I don't think their talent consists primarily of 'finding ways to violate social mores' or 'figuring out how to make that benign'.

"Norm violations" and "non-threatening" also seem like generalizations that aren't true of all humor. "The crows seemed to be calling his name, thought Caw" and referencing movies don't seem like norm violations. Gallows humor and bullies laughing at their victim don't seem threat-free.

Comment author: paulfchristiano 28 November 2016 01:11:30AM 0 points [-]

The relevant comparison is if you know you are going to arrive at either 12:15 or 12:05 equiprobably---do you say "12:10" or "12:07"? Or, if you are giving a distribution, do you say that the two are equiprobable, or claim a 2/3 chance of 12:05?

Consciously, I am thinking "Let's think this through together to figure out if it's worth doing," not "how can I convince him to approve this?" I'm not at all convinced that the difficulty of lying extends to the difficulty of maintaining a mismatch between conscious reasoning and various subconscious processes that feed into estimates.

Comment author: Unnamed 28 November 2016 05:09:08AM *  1 point [-]

> Consciously, I am thinking "Let's think this through together to figure out if it's worth doing," not "how can I convince him to approve this?" I'm not at all convinced that the difficulty of lying extends to the difficulty of maintaining a mismatch between conscious reasoning and various subconscious processes that feed into estimates.

I'm imagining signs during the conversation like: If it starts to look like some other project would be more valuable than the idea you came in with, do you seem excited or frustrated? Or: If a new consideration comes up which might imply that your project idea is not worth doing, do you pursue that line of thought with the same sort of curiosity and deftness that you bring to other topics?

These are different from the kinds of tells that a person gives when lying, but they do point to the general rule of thumb that one's mental processes are typically neither perfectly opaque nor perfectly transparent to others. They do seem to depend on the processes that are actually driving your behavior; merely thinking "Let's think this through together" will probably not make you excited/curious/etc. if your subconscious processes aren't in accord with that thought.

> The relevant comparison is if you know you are going to arrive at either 12:15 or 12:05 equiprobably---do you say "12:10" or "12:07"? Or, if you are giving a distribution, do you say that the two are equiprobable, or claim a 2/3 chance of 12:05?

These are subtle enough differences that I don't have clear intuitions on which ETA would lead me to have the most positive impression of the person who showed up late.

I agree with your broader point that there are social incentives which favor various sorts of inaccuracy, and that accuracy won't always create the best impression. My broader point is that there are also social incentives for accuracy, and various indicators of whether a person is seeking accuracy, and it's possible to build a community that strengthens those relative to the incentives for inaccuracy.

Comment author: Jacobian 09 October 2016 12:09:34PM 0 points [-]

I am very much in favor of "expanding the circle of empathy". My thesis is that this consists of supplanting your emotional empathy (who your heart beats in harmony with naturally) with cognitive empathy (who your brain tells you is worthy of empathy even if you don't really feel their Tajik feelings).

Comment author: Unnamed 27 November 2016 10:42:24PM 0 points [-]

I think that "supplant" is not the right move. I do agree that having a wide circle does not require going around feeling lots of emotional empathy for everyone, but I think that emotional empathy helps with getting the circle to expand. A one-time experience of emotional empathy (e.g., from watching a movie about an Iranian family) can lead to a permanent expansion in the circle of concern (e.g., thinking of the Tajiks as people who count, even if you don't actively feel emotional empathy for them in the moment).

A hypothesis: counterfactual emotional empathy is important for where you place your circle of concern. If I know that I would feel emotional empathy for someone if I took the time to understand their story from their perspective, then I will treat them as being inside the circle even if I don't actually go through the effort to get their point of view and don't have the experience of feeling emotional empathy for them.
