Epistemic Effort: Thought seriously for 5 minutes about it. Thought a bit about how to test it empirically. Spelled out my model a little bit. I'm >80% confident this is worth trying and seeing what happens. Spent 45 min writing post.

I've been pleased to see "Epistemic Status" hit a critical mass of adoption - I think it's a good habit for us to have. In addition to letting you know how seriously to take an individual post, it sends a signal about what sort of discussion you want to have, and helps remind other people to think about their own thinking.

I have a suggestion for an evolution of it - "Epistemic Effort" instead of status. Instead of "how confident you are", it's more of a measure of "what steps did you actually take to make sure this was accurate?" with some examples including:

  • Thought about it musingly
  • Made a 5 minute timer and thought seriously about possible flaws or refinements
  • Had a conversation with other people you epistemically respect and who helped refine it
  • Thought about how to do an empirical test
  • Thought about how to build a model that would let you make predictions about the thing
  • Did some kind of empirical test
  • Did a review of relevant literature
  • Ran a Randomized Controlled Trial

[Edit: the intention with these examples is for it to start with things that are fairly easy to do to get people in the habit of thinking about how to think better, but to have it quickly escalate to "empirical tests, hard to fake evidence and exposure to falsifiability"]

A few reasons I think this is worth trying (most of these reasons are "things that seem likely to me" but which I haven't made any formal effort to test - they come from some background in game design and from reading some books on habit formation, most of which weren't very well cited):

  • People are more likely to put effort into being rational if there's a relatively straightforward, understandable path to do so
  • People are more likely to put effort into being rational if they see other people doing it
  • People are more likely to put effort into being rational if they are rewarded (socially or otherwise) for doing so.
  • It's not obvious that people will get _especially_ socially rewarded for doing something like "Epistemic Effort" (or "Epistemic Status"), but there are mild social rewards just for doing something you see other people doing, and a mild personal reward simply for doing something you believe to be virtuous (I wanted to say "dopamine reward" but then realized I honestly don't know if that's the mechanism - call it a "small internal brain happy feeling")
  • Less Wrong etc is a more valuable project if more people involved are putting more effort into thinking and communicating "rationally" (i.e. making an effort to make sure their beliefs align with the truth, and making sure to communicate so other people's beliefs align with the truth)
  • People range in their ability / time to put a lot of epistemic effort into things, but if there are easily achievable, well-established "low end" efforts that are easy to remember and do, this reduces the barrier for newcomers to start building good habits. Having a nice range of recommended actions can provide a pseudo-gamified structure where there's always another slightly harder step available to you.
  • In the process of writing this very post, I actually went from planning a quick, 2-paragraph post to the current version, when I realized I should really eat my own dogfood and make a minimal effort to increase my epistemic effort here. I didn't have that much time, so I did a couple of simpler techniques. But even that, I think, provided a lot of value.

Results of thinking about it for 5 minutes:

  • It occurred to me that explicitly demonstrating the results of putting epistemic effort into something might be motivational both for me and for anyone else thinking about doing this, hence this entire section. (This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.')
  • One failure mode is that people end up putting minimal, token effort into things (e.g. randomly tried something on a couple of double-blinded people and called it a Randomized Controlled Trial).
  • Another is that people might end up defaulting to whatever the "common" sample efforts are, instead of thinking more creatively about how to refine their ideas. I think the benefit of providing a clear path for people who weren't thinking about this at all outweighs the risk of some people ending up less agenty about their epistemology, but it seems like something to be aware of.
  • I don't think it's worth the effort to run a "serious" empirical test of this, but I do think it'd be worth the effort, if a number of people started doing this on their posts, to run an informal followup survey asking "Did you do this? Did it work out for you? Do you have feedback?"
  • A neat nice-to-have, if people actually started adopting this and it proved useful, might be for it to automatically appear at the top of new posts, along with a link to a wiki entry that explained what the deal was.

Next actions, if you found this post persuasive:

Next time you're writing any kind of post intended to communicate an idea (whether on Less Wrong, Tumblr or Facebook), try adding "Epistemic Effort: " to the beginning of it. If it was intended to be a quick, lightweight post, just write it in its quick, lightweight form.

After the quick, lightweight post is complete, think about whether it'd be worth doing something as simple as "set a 5 minute timer and think about how to refine/refute the idea". If not, just write "thought about it musingly" after "Epistemic Effort:". If so, start thinking about it more seriously and see where it leads.

While thinking about it for 5 minutes, some questions worth asking yourself:

  • If this were wrong, how would I know?
  • What actually led me to believe this was a good idea? Can I spell that out? In how much detail?
  • Where might I check to see if this idea has already been tried/discussed?
  • What pieces of the idea might I peel away or refine to make the idea stronger? Are there individual premises I might be wrong about? Do they invalidate the idea? Does removing them lead to a different idea?


I think this is an intriguing idea. It reminds me of the discussion of vague language in Superforecasting: the intelligence community put a lot of effort into optimizing the language in its reports, such as "possibly", "likely", "almost certainly", etc., only to later realize that they didn't know what those words meant (in terms of probabilities) even after discussing word choice quite a bit. Someone went around asking analysts what was meant by the words and got very different probabilities from different people. Similarly, being careful about describing epistemic status is likely better than not doing so, but the words may not have as clear a meaning as you think; describing what you actually did seems like a good way to keep yourself honest.

This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.'

This seems like an important failure mode. People may not be so interested in writing if they also have to indicate their amount of effort. :p

Another problem I see is: "epistemic effort" may not play as well with signalling games as "epistemic status". Putting your specific efforts out there rather than a degree of confidence can make something that is actually well-conceived look scatter-brained. For example, "thought about it for 5 minutes" on your post doesn't indicate the degree of support the idea has from background knowledge and experience. Your actual post indicates that. But the real reasons you think something will work will often be hard to summarize in a small blurb and will instead go into the content of the post itself.

I think what I'll do is keep using the "epistemic status" tag, starting with a vague status such as "confident" or "speculative", and then providing more detail with the notion of "epistemic effort" in mind.

Yeah. One thing that'd be very counterintuitive for many is that "thought seriously for 5 minutes" is actually a surprisingly high bar. (i.e. most people do not do that at all).

I also wonder if it might be better to eschew vague labels like "confident" and instead issue more concrete statements like "80% confident this will be useful for X", in the interest of avoiding the problem you list in the first paragraph.

Integration with existing signaling games is an important concern. I do think it'd be valuable to shift our status norms to reflect "what useful labor actually looks like." For example, when someone says "I will think seriously about that for 5 minutes", I actually now have very positive associations with that - I take it to mean that, while it's not their top priority, they bothered (at all) to evaluate it in "serious mode."

Those norms may or may not be shiftable, but I think ideally our cultural norms / internal status games should help us learn what actually works, and give more transparency into how much time people actually spend thinking about things.

I agree. My knee-jerk reaction of "does not play well with signaling games" has a lot to do with how "thought about it for five minutes" looks to someone not familiar with the LW meme about that. This might address my other point as well: perhaps if people were used to seeing things like "thought for 5 minutes" and "did one google search" and so on, they would feel comfortable writing those things and it wouldn't make people self-conscious. Or maybe not, if (like me) they also think about how non-community-members would read the labels.

I think some beliefs I have that others may not share are:

a) for the Less Wrong project to succeed, we'll need to develop a lot of cultural tools that are different from how mainstream society does things, and that may mean it'll necessarily look weird to outsiders.

b) the Less Wrong brand is, frankly, already pretty thoroughly ruined. Not enough effort was put into PR concerns in the early days. By now, it's well known as a pretty weird place, and trying to salvage that reputation seems like wasted effort to me. This is almost convenient though, because it means we can now focus mostly on doing what is effective rather than worrying (overly much, anyhow) about what looks weird.

(Epistemic effort: have not actually done anything to validate either of these assumptions, they are just how it seems to me)

So I think, as far as posts on Less Wrong itself go, it's totally fine to do things that don't necessarily interface with outside status games.

I do think it's also handy to develop cultural tools that are accessible to the rest of the world. On your facebook wall, it'd be nice to have status-tags that other people might want to adopt. Where possible, I do agree that we should cultivate norms on Less Wrong that work well in the rest of the world. But I don't think we should completely shy away from norms that look weird to outsiders.

I personally don't have any intuitive sense of "thought about it for 5 minutes" being a bad thing (especially for the reasons WhySpace describes - it tells people what to expect). And if you're publishing a major essay that you want to be taken seriously, it's important that you put more than 5 minutes of thought into it; "actually put in more work that sounds genuinely impressive" is always an option.

My reaction was the complete opposite: an excellent signaling tool.

If I just made a connection between 2 things, and want to bounce ideas off people, I can just say "Epistemic effort: Thought about it musingly, and wanted to bounce the idea off a few people" and no one will judge me for having a partially formed idea. Perhaps more importantly, anyone not interested in such things will skip the article, instead of wasting their time and feeling the need to discourage my offending low-quality post.

I'm not a fan of "brainstorming" in particular, but there really does seem to be a problem that brainstorming is trying to solve, and I think this would help solve it. Refining a diamond in the rough doesn't have to be a solitary activity; it can be a community task.

"Epistemic status" metadata plays two roles: first, it can be used to suggest to a reader how seriously they should consider a set of ideas. Second, though, it can have an effect on signalling games, as you suggest. Those who lack social confidence can find it harder to contribute to discussions, and having the ability to qualify statements with tags like "epistemic status: not confident" makes it easier for them to contribute without feeling like they're trying to be the center of attention.

"Epistemic effort" metadata fulfills the first of these roles, but not the second; if you're having a slow day and take longer to figure something out or write something than normal, then it might make you feel bad to admit that it took you as much effort as it did to produce said content. Nudging social norms towards use of "epistemic effort" over "epistemic status" provides readers with the benefit of having more information, at the potential cost of discouraging some posters.

Generally: articles should have (important) metadata.

  • epistemic effort
  • trigger warnings
  • declarations of conflict of interest
  • "the author is an expert on this topic, because he/she works for this institution"

There are probably more examples, some of them already used in practice. It could be interesting to have them all listed in one place, where authors could look before writing an article.

Yeah. I'm wary about the metadata getting too big (if your metadata ends up being almost as large as your post content you probably have too much), but hopefully any given post doesn't require all of those things unless it's a longer piece anyway.

When people agree on the meaning of metadata, or if there is a place to hyperlink to, metadata could be short. For example:

How to teach cats programming

  • epistemic effort: I spent an afternoon trying to teach my cat Java, didn't read any literature on teaching animals
  • trigger warnings: Java, animal abuse
  • conflict of interest: my neighbor's cousin volunteers in an animal shelter
  • level of expertise: 10 years working as a Java programmer

(...the text of the article...)

I spent an afternoon trying to teach my cat Java... trigger warnings: Java, animal abuse

I lol'ed at the idea that teaching a cat specifically Java constitutes animal abuse!

However, I am wondering whether LW really needs trigger warnings as part of a standard set of article metadata. IMO most trigger warnings are infantilizing, and I've never seen anything on LW that would benefit from a trigger warning. I suppose that if someone is putting forward a Roko's-Basilisk-like thought experiment and feels so inclined, he/she could add a "memetic hazard" warning. But making it a standard part of a recommended set of metadata is a bad idea IMO.

I think "content note" has overtaken Trigger Warning as the word-of-choice, for good reason: trigger warning originally referred to a particular bad thing that might happen if someone read something that ywas psychologically triggering, but there's a wide variety of reasons you might want to warn people about your content beyond literal triggering in the classical sense.

Then, "standard" shouldn't mean mandatory, only "if you want to express something like this, use this keyword; also look at the list of keywords to find things that may be useful for your article".

So, if your article doesn't need trigger warnings, don't use them. The word is there because (1) it is sometimes useful, (2) it may be useful to remind authors to consider whether it is useful, and (3) so that all authors who want to express this kind of idea use the same label for it, making it easier for readers familiar with the convention.

Perhaps people will now think: "Hm, I wasn't going to write a post about this because I didn't feel that it would be good enough for LessWrong. And I'm not sure how confident exactly I am, so I'm not sure what I'd write for epistemic status. But with epistemic effort, I can just describe the efforts I made, and readers will draw the right conclusions regarding how much weight to assign my article. And I don't have to worry about misleading anyone. I otherwise would worry about misleading readers because I get the sense that, implicitly, a post on LessWrong implies high epistemic effort and high epistemic status."

If people think this, my impression is that it would be largely a good thing.

Main worry I have: Cognitive ability is really unevenly distributed, and an idea from Robin Hanson about anything related to Economics that he thought about for 5 minutes is probably going to be much more thought-out than a 5-hour research project from an average Econ undergraduate. Epistemic effort seems to indicate that time spent on a post is more important than other factors, though time spent actually doesn't rank very highly in my model of what makes a good post.

Certainly agree - the question is "what's the best social norm we can establish here?"

The alternatives are:

1) No disclaimer - just say what you think.

2) Epistemic Status (as currently practiced), where Robin Hanson and Joe Somebody both say "I'm pretty confident in this" or "just a musing" or whatever.

3) Epistemic Effort, where Robin Hanson and Joe Somebody both say "I thought about this for 5 minutes", and depending on how well you know Robin Hanson, you might consider that equal, or you might have past experience which causes you to weight Hanson's (or Somebody's) 5 minutes more.

Of those options, I think 3 is best, especially since it lets you know things like how Robin Hanson goes about validating his own claims (I currently have no idea, and honestly I have not actually been especially impressed with Robin Hanson as compared to many other thinkers in our blogosphere).

If you have suggestions for an improvement over #3 or think #1 or #2 are better, I'd be interested in hearing them though. :)

Interesting idea. I think epistemic effort might be useful for solving this kind of problem. (Summary: Morendil offers a case study in a sort of "academic urban legend", where people keep using a citation to support an assertion that it actually provides flimsy evidence for.)

(Epistemic effort: I'm recalling Morendil's post from memory, it's possible I'm summarizing it incorrectly.)

It could be argued that describing the evidence ought to be a significant focus of the body of a post, if you are trying to persuade someone who might not otherwise be persuaded. It's certainly a useful concept though, particularly when you just want to quickly share an idea that you expect will not require significant persuasive effort to be well received (to avoid people being overconfident in your idea).

Yeah - my intent was that Epistemic Effort be a short summary of how seriously you tried to examine an idea. So you might say "ran a small informal experiment" or "did a serious literature review" in the Epistemic Effort section, and then later in the post you'd expound on what that actually meant.

I think this is conflating two different things: how much effort you spent (e.g. "Made a 5 minute timer and thought seriously about possible flaws or refinements") and what you did to empirically test the idea (e.g. "Ran a Randomized Controlled Trial"). These two are somewhat correlated, hopefully, but it's possible both to engage in very complex and effortful flights of fancy without any connection to the empirics, and to start with simple and basic actual tests without thinking about the problem too hard or too long.

I think I'd rather see people state the Falsifiability Status of their idea, say, on a scale ranging from trivial to never. For example:

  • Trivial: most anyone could do it in a few minutes at most
  • Easy: many people could do it with a modest investment of time
  • Moderate: amateurs could do it but it would require effort
  • Difficult: doable by professionals with a budget; amateurs will have huge difficulties
  • Very hard: probably doable, but requires large teams and a lot of money and effort
  • Potentially possible: probably doable in the future, but not at the current technology level
  • Never: not falsifiable

I do think that's a fair point (I certainly don't want to encourage people to think hard but non-usefully).

The way I attempted to approach that problem was to have the lower/easier things on the scale be optimized for being easy to get started on (I think the difference between "think seriously for 5 minutes" or "have an actual model of why you think this makes sense" and not doing those things at all is fairly extreme, and people who aren't even doing that should start there).

But I also intended to have it quickly escalate to "think about how to empirically test it, and/or actually empirically test it if possible." (Not sure if this addresses your concern but I edited the post to make that a bit more clear)


I'd split it up a bit differently. "How much effort" versus "What actual reason does anyone else have to agree with this?". The latter isn't quite the same as "what empirical testing has it had?" but I think it's the more important question.

However, "Epistemic effort" as proposed here (1) probably does correlate pretty well with "how much reason to agree?", (2) also gives information about the separate question "how seriously is this person taking this discussion?" and (3) is probably easier to give an accurate account of than "what actual reason ...".

"What actual reason does anyone else have to agree with this?"

I think this formulation is a bit iffy, since the monkey-mind might easily have a LOT of reasons to agree with something without any of these reasons being connected to correctly reflecting reality.
