Nominated Posts for the 2019 Review

Posts need at least 2 nominations to continue into the Review Phase.
Nominate posts that you have personally found useful and important.
Calibrating With Cards ([anonymous])
Blackmail

2019 Review Discussion

Epistemic status: Tentative. I’ve been practicing this on-and-off for a year and it’s seemed valuable, but it’s the sort of thing I might look back on and say “hmm, that wasn’t really the right frame to approach it from.”

In double crux, the focus is on “what observations would change my mind?”

In some cases this is (relatively) straightforward. If you believe the minimum wage helps workers, or that it harms them, there are some fairly obvious experiments you might run: “Which places have instituted minimum wage laws? What happened to wages? What happened to unemployment? What happened to worker migration?”

The details will matter a lot. The results of the experiment might be weird and confusing. If I ran the experiment myself I’d probably get a lot of things wrong, misuse statistics and...


It keeps baffling me how much easier having a concept for something makes thinking about it.

Previously: Keeping Beliefs Cruxy


When disagreements persist despite lengthy good-faith communication, the problem may not just be factual disagreement: the people involved may be operating in entirely different frames, i.e. different ways of seeing, thinking, and/or communicating.

If you can’t notice when this is happening, or you don’t have the skills to navigate it, you may waste a lot of time.

Examples of Broad Frames

Gears-oriented Frames

Bob and Alice’s conversation is about cause and effect. Neither of them is planning to take direct action based on their conversation; they’re each just interested in understanding a particular domain better.

Bob has a model of the domain that includes gears A, B, C and D. Alice has a model that includes gears C, D and F. They’re able to exchange information, and...
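(A toy sketch of that exchange, not from the post: if you treat each person’s model as a set of named gears, the letters following the example above, the useful structure of the conversation falls out of simple set operations.)

```python
# Toy sketch of a gears-level exchange, assuming each person's model
# can be summarized as a set of named "gears" (components with known
# cause-and-effect roles). Gear names follow the example above.
bob_gears = {"A", "B", "C", "D"}
alice_gears = {"C", "D", "F"}

shared = bob_gears & alice_gears           # gears they can discuss directly
bob_can_teach = bob_gears - alice_gears    # gears Alice might learn about
alice_can_teach = alice_gears - bob_gears  # gears Bob might learn about

print(f"Shared gears: {sorted(shared)}")
print(f"Bob can explain: {sorted(bob_can_teach)}")
print(f"Alice can explain: {sorted(alice_can_teach)}")
```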

One I've noticed: well-intentioned "woke" people are more "lived experiences" oriented, while well-intentioned "rationalist" people are more "strong opinions, weakly held." Honestly, if your only goal is truth-seeking (admitting I'm rationalist-biased when I say this, and that this is simplified), the "woke" frame is better at breadth and the "rationalist" frame is better at depth. But ohmygosh, these arguments can spiral, and neither side realizes their meta has broken down. The rationalist thinks low-confidence opinions are humility; the woke thinks "I am ope...

Epistemic Status: I've really spent some time wrestling with this one. I am highly confident in most of what I say, but my confidence differs from section to section; I'll put more specific epistemic statuses at the end of each section.

Some of this post is generated from mistakes I've seen people make (or, heard people complain about) in applying conservation-of-expected-evidence or related ideas. Other parts of this post are based on mistakes I made myself. I think that I used a wrong version of conservation-of-expected-evidence for some time, and propagated some wrong conclusions fairly deeply; so, this post is partly an attempt to work out the right conclusions for myself, and partly a warning to those who might make the same mistakes.

All of the mistakes I'll argue against...
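For readers who want the identity spelled out: conservation of expected evidence says the prior must equal the probability-weighted average of the posteriors, so any anticipated update toward a hypothesis must be balanced, in expectation, by an update away from it. A minimal numeric sketch (the probabilities are illustrative, not taken from the post):

```python
# Conservation of expected evidence, illustrated with made-up numbers.
# For hypothesis H and possible evidence E:
#   P(H) = P(H|E) * P(E) + P(H|~E) * P(~E)
p_e = 0.3              # P(E): chance the evidence shows up
p_h_given_e = 0.9      # P(H|E): posterior if it does
p_h_given_not_e = 0.2  # P(H|~E): posterior if it doesn't

prior = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)

# The expected shift in belief is zero: the anticipated update toward H
# given E is exactly offset by the update away from H given ~E.
expected_shift = (p_h_given_e - prior) * p_e \
               + (p_h_given_not_e - prior) * (1 - p_e)
assert abs(expected_shift) < 1e-12
print(f"prior P(H) = {prior:.3f}, expected belief shift = {expected_shift:.1e}")
```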

Fair. I think the analysis I was giving could be steel-manned as: pretenders are only boundedly sophisticated; they can't model the genuine mindset perfectly. So saying what is actually on your mind (e.g., calling out the incentive issues that are making honesty difficult) can be a good strategy.

However, the "call out" strategy is not one I recall using very often; I think I wrote about it because other people have mentioned it, not because I've had success with it myself.

Thinking about it now, my main concerns are:
1. If the other person is being genuine, an...
