yeah no worries be as late as you like :P
"Hmm. I didn't interpret a hypothetical apostasy as the fiercest critique, but rather the best critique--i.e. weight the arguments not by "badness if true" but by something like badness times plausibility."
See http://www.amirrorclear.net/academic/papers/risk.pdf. Plausibility depends on your current model/arguments/evidence. If the badness of being wrong, times the probability that your model is wrong, dwarfs your original badness-times-plausibility estimate, you must account for it.
Upvoted.
N=1: Time to stop self-identifying with thoughts was less than 5 total hours of meditation practice (scattered across months, but still). This was especially helpful in diminishing neurotic behavior - the thoughts are still there, just not engaged with.
Corrected, thank you
Yes, a blank spot and one that makes everything else near-useless. This needs to be figured out.
That automation makes sense, thank you. Trying to think of how to generalize it, and how to merge it with the first suggestion.
Anki doesn't work for me on this, agreed. The above suggestion seems to dominate this one.
In response to this post: http://www.overcomingbias.com/2013/02/which-biases-matter-most-lets-prioritise-the-worst.html
Robert Wiblin got the following data (tallied by a dear friend of mine):
89 Confirmation bias
54 Bandwagon effect
50 Fundamental attribution error
44 Status quo bias
39 Availability heuristic
38 Neglect of probability
37 Bias blind spot
36 Planning fallacy
36 Ingroup bias
35 Hyperbolic discounting
29 Hindsight bias
29 Halo effect
28 Zero-risk bias
28 Illusion of control
28 Clustering illusion
26 Omission bias
25 Outcome bias
25 Neglect of prior base rates effect
25 Just-world phenomenon
25 Anchoring
24 System justification
24 Dunning-Kruger effect
23 Projection bias
23 Mere exposure effect
23 Loss aversion
22 Overconfidence effect
19 Optimism bias
19 Actor-observer bias
18 Self-serving bias
17 Texas sharpshooter fallacy
17 Recency effect
17 Outgroup homogeneity bias
17 Gambler's fallacy
17 Extreme aversion
16 Irrational escalation
15 Illusory correlation
15 Congruence bias
14 Self-fulfilling prophecy
13 Lake Wobegon effect
13 Selective perception
13 Impact bias
13 Choice-supportive bias
13 Attentional bias
12 Observer-expectancy effect
12 False consensus effect
12 Endowment effect
11 Rosy retrospection
11 Information bias
11 Conjunction fallacy
11 Anthropic bias
10 Focusing effect
10 Déformation professionnelle
08 Positive outcome bias
08 Ludic fallacy
08 Egocentric bias
07 Pseudocertainty effect
07 Primacy effect
07 Illusion of transparency
06 Trait ascription bias
06 Hostile media effect
06 Ambiguity effect
04 Unit bias
04 Post-purchase rationalization
04 Notational bias
04 Contrast effect
03 Subadditivity effect
03 Von Restorff effect
02 Illusion of asymmetric insight
01 Reminiscence bump
How do you correct your mistakes?
For example, I recently found out I did something wrong at a conference. In my bio, under areas of expertise I should have written what I can teach, and under areas of interest what I want to be taught. That seems to maximize value for me. How do I keep that mistake from happening again? I don't know when the next conference will happen. Do I put it in Anki and memorize it as a failure mode?
More generally, when you recognize a failure mode in yourself how do you constrain your future self so that it doesn't repeat this failure mode? How do you proceduralize and install the solution?
The point in the first paragraph is well made, but in a way that might be interpreted as undervaluing the content, which is in fact very good. It shifts the means from "Find the right advice" to "Figure out how to implement the advice you already know is right", which is a very notable change.
Excellent post, OP.