A year ago, I started trying to deliberately practice skills that would "help people figure out the answers to confusing, important questions." I experimented with Thinking Physics questions, GPQA questions, Puzzle Games, Strategy Games, and a stupid twitchy reflex game I had struggled to beat for 8 years[1]. Then I went back to my day job and tried figuring stuff out there too.

The most important skill I was trying to learn was Metastrategic Brainstorming – the skill of looking at a confusing, hopeless situation, and nonetheless brainstorming useful ways to get traction or avoid wasted motion. 

Normally, when you want to get good at something, it's great to stand on the shoulders of giants and copy all the existing techniques. But this is challenging if you're trying to solve important, confusing problems, because there probably isn't (much) established wisdom on how to solve them. You may need to discover techniques that haven't been invented yet, or synthesize multiple approaches that haven't previously been combined. At the very least, you may need to find an existing technique buried somewhere on the internet – one that hasn't been linked to your problem with easy-to-search keywords – without anyone to help you.

In the process of doing this, I found a few skills that came up over and over again.

I didn't invent the following skills, but I feel like I "won" them in some sense via a painstaking "throw myself into the deep end" method. I feel slightly wary of publishing them in a list here, because I think it was useful to me to have to figure out for myself that they were the right tool for the job. And they seem like kinda useful "entry level" techniques that you're more likely to successfully discover for yourself.

But, I think this is hard enough, and forcing people to discover everything for themselves seems unlikely to be worth it.

The skills that seemed most general, in both practice and on my day job, are:

  1. Taking breaks/naps
  2. Working Memory facility
  3. Patience
  4. Knowing what confusion/deconfusion feels like
  5. Actually Fucking Backchain
  6. Asking "what is my goal?"
  7. Having multiple plans

There were other skills I was already tracking, like Noticing, or Focusing. There were also somewhat more classic "How to Solve It" style tools for breaking down problems. And there's a host of skills I need when translating this all into my day job, like "setting reminders for myself" and "negotiating with coworkers."

But the skills listed above feel like they stood out in some way as particularly general, and particularly relevant for "solve confusing problems."

Taking breaks, or naps

Difficult intellectual labor is exhausting. During the two weeks I was working on solving Thinking Physics problems, I worked for like 5 hours a day and then was completely fucked up in the evenings. Other researchers I've talked to report similar things. 

During my workshops, one of the most useful things I recommended to people was: "actually go take a nap. If you don't think you can take a real nap because you can't sleep, go into a pitch black room and lie down for a while; worst case, your brain will mull over the problem in a somewhat more spacious/relaxed way."

Practical tips: Get yourself a sleeping mask, noise machine (I prefer a fan or air purifier), and access to a nearby space where you can rest. Leave your devices outside the room. 

Working Memory facility

Often a topic feels overwhelming. Usually this is because it's just too complicated to grasp with your raw working memory. But there are various tools (paper, spreadsheets, larger monitors, etc.) that can improve this. And you can develop the skill of noticing "okay, this isn't fitting in my head, or even on my big monitor – what would let it fit in my head?".

The "eye opening" example of this for me was trying to solve a physics problem that included 3 dimensions (but one of the dimensions was "time"). I tried drawing it out but grasping the time-progression was still hard. I came up with the idea of using semi-translucent paper, where I would draw a diagram of what each step looked like on separate pages, and then I could see where different elements were pointed.

I've also found "spreadsheet literacy" a recurring skill – Google Sheets is very versatile, but you have to know what all the functions are, have a knack for arranging elements in an easy-to-parse way, etc.

Practical tips: Have lots of kinds of paper, whiteboards, and writing supplies around.

On Google Sheets:

  • You can make collapsible sections, which let you build complex models while hiding away the complexity of sub-parts you aren't currently modeling (hotkey: Alt+Shift+Right Arrow).
  • You can switch between "display formulas" mode and the default "display the result" mode (hotkey: Ctrl+backtick).

Patience

If I'm doing something confusingly hard, there are times when it feels painful to sit with it, and I'm itchy to pick some solution and get moving. This comes up in two major areas:

  • Deliberate/purposeful practice. A key thing here is to be practicing the form perfectly, which requires somehow slowing things down such that you have time to get each moment correct. The urge to rush can undo the practice you just did, by training mistakes, or prevent you from actually successfully practicing at all.
  • Launching into a plan, or declaring yourself done, when you are still confused. Sitting with the discomfort feels very itchy. But vague plans can be completely wrong, resting on confused assumptions.

There is of course a corresponding virtue of "just get moving, build up momentum and start learning through iteration." The wisdom to tell the difference between "I'm still confused and need to orient more" and "I need to get moving" is important. But, an important skill there is at least being capable of sitting with impatient discomfort, in the situations where that's the right call.

Practical tips: I dunno, I still kinda suck at this one, but I find it helps to take deep breaths and deliberately remind myself "slow is smooth, smooth is fast."

Know what deconfusion, or "having a crisp understanding", feels like

A skill from both Thinking Physics and Baba is You. 

When I first started Thinking Physics, I would get to a point where "I dunno, I feel pretty sure, and I can't think of more things to do to resolve my confusion", and then impatiently roll the dice on checking the answer. Sometimes I'd be right; more often I'd be wrong.

Eventually I had a breakthrough where I came up with a crisp model of the problem, and was like "oh, man, now it would actually be really surprising if any of the other answers were true." From then on... well, I'd still sometimes get things wrong (mostly due to impatience). But I could tell when I still had pieces of my model that were vague and unprincipled.

Similarly in Baba is You: when people don't have a crisp understanding of the puzzle, they tend to grasp at straws and motivatedly reason their way into accepting sketchy-sounding premises. But, the true solution to a level often feels very crisp and clear and inevitable.

Learning to notice this difference in qualia is quite valuable.

Practical tips: This is where Noticing and Focusing come in – they're worthwhile for helping you notice subtle differences in how an idea feels in your mind.

Try making explicit numerical predictions about whether you've solved an exercise before you look up the answer; or write down a qualitative sentence like "I feel like I really deeply understand the answer" or "this seems probably right but I feel some niggling doubts."
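If you want to make the numerical version concrete, here's a minimal sketch (my own illustration, not something from the post – the log format and confidence buckets are arbitrary choices) of tracking stated confidence against actual outcomes in plain Python:

```python
# Minimal sketch: track "how confident did I feel?" vs. "how often was I right?"
# Each record is (stated probability my answer is correct, actual outcome).
records = [
    (0.9, True),
    (0.7, False),
    (0.9, True),
    (0.6, True),
    (0.8, False),
]

def calibration(records, buckets=((0.5, 0.7), (0.7, 0.9), (0.9, 1.01))):
    """For each confidence bucket, compare stated confidence to the hit rate."""
    for lo, hi in buckets:
        outcomes = [correct for p, correct in records if lo <= p < hi]
        if outcomes:
            hit_rate = sum(outcomes) / len(outcomes)
            print(f"{lo:.0%}-{min(hi, 1.0):.0%} stated: "
                  f"{hit_rate:.0%} correct across {len(outcomes)} answers")

calibration(records)
```

If your 90%-confident answers turn out right only 60% of the time, that's exactly the "pieces of my model are still vague and unprincipled" signal this section is pointing at.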

Actually Fucking Backchain

From Baba is You, I got the fear of god put in me by seeing how easy it was to spin my wheels, tinkering around with stuff that was nearby/accessible/easy-to-iterate-with, and how that often turned out to not be at all relevant to beating a level.

I had much less wasted motion when I thought through "What would the final stages of beating this level need to look like? What are the stages just before those?", and focused my attention on things that could help me get to that point.

One might say "well, Baba is You is a game optimized for being counterintuitive and weird." I think for many people with a goal like "build a successful startup", it can sometimes be fine to just forward-chain on stuff that feels promising, rather than trying to backchain from complex goals.

But when I eyeball the real-world problems I'm contending with (i.e. x-risk), it really does seem like there's a relatively narrow set of victory conditions that plausibly work. And many of the projects I feel tempted to start don't actually seem that relevant.

(I also think great startup founders are often doing a mix of forward and backward chaining. i.e. I bet Jeff Bezos was like "okay I bet I could make an online bookstore that worked", while also thinking "but, what if I ultimately wanted the Everything Store? What are the obstacles I'd eventually need to deal with?")

Practical tips: First, come up with at least one concrete story of what the world would look like if you succeeded at your goals. Try hard to come up with 2 other worlds, so you aren't too anchored on your first idea.

Then, try to concretely imagine the steps that would come a little bit earlier in the chain from the end.

Don't worry about mapping out all the different possible branches of the future (that's impossible). But, for a complex goal, have at least one end-to-end plan that connects all the dots from the resources you have now to the victory condition at the end.
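As a toy illustration of the mechanical move (my sketch – the planning graph below is entirely made up, not an actual level): write the victory condition and its hypothesized prerequisites down explicitly, then walk backwards until you hit actions you can already take.

```python
# Toy backchaining sketch: start from the victory condition and recurse
# backwards through hypothesized prerequisites until reaching actions
# that are available right now. The graph is invented for illustration.
prerequisites = {
    "beat the level": ["key touches the door", "Baba is still You"],
    "key touches the door": ["push key down the corridor"],
    "push key down the corridor": [],  # doable right now
    "Baba is still You": [],           # already true
}

def backchain(goal, graph, depth=0):
    """Print the chain end-state first, working back toward current resources."""
    print("  " * depth + goal)
    for step in graph.get(goal, []):
        backchain(step, graph, depth + 1)

backchain("beat the level", prerequisites)
```

The code itself is trivial; the point is that writing the graph down forces you to notice which of your current activities don't appear on any path to the end state.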

Meanwhile, while doing most of your work, notice when it starts to feel like you've lost the plot (try just making a little tally-mark whenever you notice yourself rabbitholing in a way that feels off). And ask "what is my goal? Is what I'm currently doing helping?"

Ask "What's My Goal?"

Actually, having just written the previous section, I'm recalling a simpler, more commonly useful skill, which is simply to ask "what is my goal?". 

Often, doing this throws into relief that you're not sure what your goal is. Sometimes, asking the question immediately prompts me to notice a key insight I'd been glossing over.

If you're not sure what your goal is, try babbling some things that seem like they might be a goal, and then ask yourself "does this feel like what I'm most trying to achieve right now?"

It's okay if it turns out your goal is different or more embarrassing-sounding than you thought. You might say "Actually, you know what? I do care more about showing off and sounding smart, than actually learning something right now." (But, you might also realize "okay, I separately care about learning something and sounding smart", and then be more intentional about finding a tactic that accomplishes both)

Once you remember (or figure out) your goal, as you brainstorm strategies, ask yourself "would I be surprised if this didn't help me achieve my goals?", and then prioritize strategies that you viscerally expect to work.

Always[2] try to have 3 hypotheses

This one is important enough to be its own post. (I guess, probably most of these are important enough to be a full post? But, this one especially.)

But, listing it here for completeness:

Whether you are solving a puzzle, or figuring out how to solve a puzzle, or deciding what your team should do next week, try to have multiple hypotheses. (I usually say "try to have at least 3 plans", but a plan is basically a special case – a hypothesis about "doing X is the best way to achieve goal Y"). 

They each need to be a hypothesis you actually believe in.

I say "at least 3", because I think it gets you "fully intellectually agile." If you only have one plan, it's easy to get tunnel vision on it and not notice that it's doomed. Two ideas helps free up your mind, but then you might still evaluate all evidence in terms of "does this support idea 1 or idea 2?". If you have 3 different hypotheses, it's much more natural to keep generating more hypotheses, and to pivot around in a multiple dimensional space of possibility.

 

  1. This wasn't practice for "solving confusing problems", but it was practice for "accomplish anything at all through purposeful practice." It took 40 hours despite me being IMO very fucking clever about it.

  2. Okay, not literally always, but whenever you're about to spend a large chunk of time on a project or figuring something out.

Comments
aysja:

Similarly in Baba is You: when people don't have a crisp understanding of the puzzle, they tend to grasp at straws and motivatedly reason their way into accepting sketchy-sounding premises. But, the true solution to a level often feels very crisp and clear and inevitable.

A few of the scientists I’ve read about have realized their big ideas in moments of insight (e.g., Darwin for natural selection, Einstein for special relativity). My current guess about what’s going on is something like: as you attempt to understand a concept you don’t already have, you’re picking up clues about what the shape of the answer is going to look like (i.e., constraints). Once you have these constraints in place, your mind is searching for something which satisfies all of them (both explicitly and implicitly), and insight is the thing that happens when you find a solution that does.

At least, this is what it feels like for me when I play Baba is You (i.e., when I have the experience you’re describing here). I always know when a fake solution is fake, because it’s really easy to tell that it violates one of the explicit constraints the game has set out (although sometimes in desperation I try it anyway :p). But it’s immediately clear when I've landed on the right solution (even before I execute it), because all of the constraints I’ve been holding in my head get satisfied at once. I think that’s the “clicking” feeling.

Darwin’s insight about natural selection was also shaped by constraints. His time on the Beagle had led him to believe that “species gradually become modified,” but he was pretty puzzled as to how the changes were being introduced. If you imagine a beige lizard that lives in the sand, for instance, it seems pretty clear that it isn’t the lizard itself (its will) which causes its beigeness, nor is it the sand that directly causes the coloring (as in, physically causes it within the lizard’s lifetime). But then, how are changes introduced, if not by the organism, and not by the environment directly? He was stuck on this for a while, when: “I can remember the very spot in the road, whilst in my carriage, when to my joy the solution occurred to me.”

There’s more going on in Darwin’s story than that, but I do think it has elements of the sort of thing you're describing here. Jeff Hawkins also describes insight as a constraint satisfaction problem pretty explicitly (I might’ve gotten this idea from him), and he experienced it when coming up with the Thousand Brains idea.

Anyway, I don’t have a strong sense of how crucial this sort of thing is to novel conceptual inquiry in general, but I do think it’s quite interesting. It seems like one of the ways that someone can go from a pre-paradigmatic grasping around for clues sort of thing to a fully formed solution.

The wisdom to tell the difference between "I'm still confused and need to orient more" and "I need to get moving" is important.

My stock advice on this: if you don't even know what the hard parts are, then you should just dive in and try stuff in order to gather object-level data. Once you understand enough to have a decent idea of what the hard parts/bottlenecks are and why they're hard, you're past the point where "just dive in" has much value, and you should hold off on proposing solutions/plans.

I bet Jeff Bezos was like "okay I bet I could make an online bookstore that worked", while also thinking "but, what if I ultimately wanted the Everything Store? What are the obstacles I'd eventually need to deal with?"

 

I've heard Jeff Bezos was aiming for the Everything Store from the beginning, and started with books because they have a limited range of sizes.

Yeah pretty much. In more detail:

Bezos explained why he chose to only sell books on his website — at least, at first — in a “lost” video interview recorded at a Special Libraries Association conference in June 1997, which resurfaced in 2019 when it was posted online by entrepreneur Brian Roemmele.

Out of all the different products you might be able to sell online, books offered an “incredibly unusual benefit” that set them apart, Bezos said.

“There are more items in the book category than there are items in any other category, by far,” said Bezos. “Music is No. 2 — there are about 200,000 active music CDs at any given time. But in the book space, there are over 3 million different books worldwide active in print at any given time across all languages, [and] more than 1.5 million in English alone.”

When Bezos launched Amazon in 1994, the internet and e-commerce industry were still in their earliest stages. He knew it would take some time before online shopping became ubiquitous, he said, so he wanted to start with a concept that couldn’t be replicated by a seller with only physical locations.

“When you have that many items, you can literally build a store online that couldn’t exist any other way,” he explained. “That’s important right now, because the web is still an infant technology. Basically, right now, if you can do things using a more traditional method, you probably should do them using the traditional method.”

Still, Bezos hinted at the company’s potential for expansion, noting that “we’re moving forward in so many different areas.”

“This is Day 1,” he added. “This is the very beginning. This is the Kittyhawk stage of electronic commerce.”

Yeah I had vaguely remembered this story but not the details.

Ruby:

Curated. I think Raemon's been doing a lot of work in the last year pushing this stuff, and this post pulls together in one place a lot of good ideas/advice/approach.

I would guess that because of the slow or absent feedback loops, people don't realize how bad human reasoning and decision-making are when operating outside the familiar and without quick feedback. That's many domains, but certainly the whole AI situation. Ray is going after the hard stuff here.

At the same time, this stuff ends up feeling like the "eat your vegetables" of reasoning and decision-making. It's not sexy, or at least it's not that fun to sit down and e.g. try to brainstorm further plans when you already have one that's appealing, or backchain from your ostensible goal. I think we'd be in a better place if these skills and practices were normalized, in the sense of there being a norm that you do these things, and if you don't, then you're probably screwing up.

I might get time to write more detail in the future, but wanted to say I found this helpful.

Here are some other lessons I learned over the last few months from doing alignment research on trying to find the right ontology for modelling (my) cognition:

  • make examples: if you have an abstract goal or abstract hypothesis/belief/model/plan, clarify on an example what it predicts.
    • e.g. given thought "i might want to see why some thoughts are generated" -> what does that mean more concretely? -> more concrete subcases:
      • could mean noticing a common cognitive strategy
      • could mean noticing some suggestive concept similarity
      • maybe other stuff like causal inference (-> notice i'm not that clear on what i mean by that -> clarify and try come up with example):
        • basically i mean that maybe sometimes a thought pops into my mind because it is a causal consequence of some other event i modeled in my mind
          • e.g. "i imagine hiking a longer path" -> "i imagine missing the call i have in the evening"
    • (NOTE: feel free to skip this.) e.g. for proposal "i might want to try to train lessons by seeing how i could've applied it to other recent problems i attacked": 
      • -> trigger make example -> ok what lesson could i pick -> ok let's use "don't just ask what your goal is but also why you want to achieve the goal"
        • -> see how i could've applied the lesson to recent problems i attacked -> what are recent problems i attacked? (-> darn i'm in studying phase and don't have recent problems super clearly on my mind -> ok let's pick an upcoming problem -> i guess i had planned recent because then i might be better able to evaluate whether it would've actually helped but nvm now) -> upcoming problem: "plan how to setup initial deliberate practice plan for training myself to better model what is happening in my mind"
          • --apply-lesson-> why do i want to do this? -> basically want to train introspection to get better data for forming models of my mind, but thought that 'introspection' is not an atomic skill but comes from training to see particular patterns or so -> also better modelling my mind might help me to notice when i ought to apply some lesson -> also want to better understand how i make progress to review and improve ---> (ok could go further here but i guess this is enough since it's just a sub-example of sth else)
        • --> review: "was this useful" ->
          • i guess applying the lesson for the upcoming problem is a good idea
          • i guess for training lessons i need to focus more on the trigger part and not just go through problems and apply it
          • i guess considering that i originally wanted to just make an example for how to come up with an example for seeing whether a hypothesis is true i derailed a lot in a way that the takeaway will be the lesson "don't just ask what your goal is but also why you want to achieve the goal" or the lesson "i might want to try to train lessons i learn from review by seeing how i could've applied it to other recent problems i attacked" instead of "if you have an abstract hypothesis/belief/model/plan, clarify on an example what it predicts" -> OOPS
            • -> but i did learn that "i guess for training lessons i need to focus more on the trigger part and not just go through problems and apply it" from actually imagining a concrete example of how it would look like if i "train lessons by seeing how i could've applied it to other recent problems i attacked".
      • (yes it's often annoying and not easy, especially in the beginning)
      • (if you can't you're still confused.)
  • generally be very concrete. also Taboo your words and Replace the Symbol with the Substance.
  • I want to highlight the "what is my goal" part
    • also ask "why do i want to achieve the goal?"
      • (-> minimize goodhart)
    • clarify your goal as much as possible.
      • (again Taboo your words...)
      • clarify your goal on examples
        • when your goal is to understand something, how will you be able to apply the understanding on a particular example?
        • (NOTE: feel free to skip this.) e.g. say my goal is "become able to model what is happening in my mind (especially when doing research)"
          • => goal on example: "become able to model what happened in my mind when i came up with the above bullet point (the one that starts with 'when your goal is to understand something')"
          • => clarify goal on example: "well i don't know the right ontology yet for modelling processes in my mind, but here's an example of how it could look like (though it won't look like that, i'm only trying to get clearer on the shape of the answer): 
            • 'short-term-memory context: previous recalled models on "why do i want to achieve the goal" and some other bunch -> loaded query "get example for 'clarify your goal on examples'" -> parse how goal might look like -> think "perhaps i want to understand sth" -> adjust query to "get example for 'clarify your goal to understand sth on examples'" -(unconscious-in-parallel)-> background process also updates "why do i want to achieve the goal?" to "why do i want to achieve the goal to understand sth?" -(unconscious)-> suggests answer that i can better model particular cases that come up -> match the active "example" concept to "particular cases" -> try apply this -> ...'. 
              • (tbc this example for what might have happened in my mind is totally made up and not grounded in observations. (i didn't try to introspect there.) (in this case it was actually probably more of a cached thought.)
          • ...and well actually it's maybe not that much of a chain of thoughts but more like what mini-goals are being attacked or what models are loaded. and perhaps not actually in that much detail for some time to come. but when i have the right frames it might be easier to compress introspective observations into it. (...?)"
          • (yeah sry i maybe ought to have used a simpler example.)
  • try to extract the core subproblems/subgoals.
    • e.g. for corrigibility a core subproblem is the shutdown problem
    • e.g. for "solve unbounded diamond maximizer propsoal" a core problem is "understand what kind of low-level structure can correspond to high-level abstractions".
    • (for both examples above one needs to get even more precise core subproblems recursively.)
    • (NOTE: bad initial example, feel free to skip.) e.g. for "solve alignment to a pivotal level" (which is actually a bad example because it doesn't factor neatly) a not-incredibly-awful initial breakdown for my approach might be:
      • find the right ontology for modelling cognition; find some way we could understand how smart AIs work.
      • solve ontology identification
      • solve subsystem alignment; figure out how to design a robust goal slot into the AI
      • solve corrigibility 
      • find what pivotal act to aim for
    • i guess make sure you think concretely and list subproblems and summarize the core ones and iterate. follow up on confusions where problems still seem sorta mixed up. let your mind find the natural clusters. (not sure if that will be sufficient for you.)
  • tie yourself closely to observations.
  • drop all assumptions. apply generalized hold off on proposing solutions.
    • in particular, try not to make implicit non-well-founded assumptions about what the ontology looks like, e.g. by asking questions like "how can i formalize concepts" or "what are thoughts". just see the observations as directly as possible and try to form a model of the underlying process that generates those.
  • first form a model about concrete narrow cases and only later generalize
    • e.g. first study precisely what thoughtchains you had on particular combinatorics problems before hypothesizing what kind of general strategies your mind uses.
    • special case: (first) plan how to solve specific research subproblems rather than trying to come up with good general methodology for the kinds of problems you are attacking.
  • don't overplan and rather try stuff and review how it's going and replan and iterate.
    • this is sorta an application of "get concrete" where you get concrete through actually trying the thing rather than imagining how it will look like if you attack it.
  • often review how you made progress and see how to improve.
  • (also generally lots of other lessons from the sequences (and HPMoR): notice confusion, noticing mysterious answers, know how an actual reduction looks like, and probably a whole bunch more)

Tbc those are sorta advanced techniques. Most alignment researchers are working on lines of hope that pretty obviously won't work while thinking they have a decent chance of working, and I wouldn't expect those techniques to be much use for them.
There is this quite foundational skill of "noticing when you're not making progress / when your proposals aren't actually good" which is required for further improvement, and I do not know how to teach this. It's related to being very concrete and noticing mysterious answers, or noticing when you're too abstract or still confused. It might sorta be what Eliezer calls security mindset.

(Also other small caveat: I did not yet get very clear great results out of my research, but I do think I am making faster progress (and I'm setting myself a very high standard). I'd guess the lessons can probably be misunderstood and misapplied, but idk.)

go into a pitch black room and lie down for a while; worst case, your brain will mull over the problem in a somewhat more spacious/relaxed way

I wonder if this is more or less effective than a short session of mindfulness meditation.

There are two problems I often encounter: lack of (1) attention or (2) energy. Interestingly, I've found mindfulness meditation to be highly effective with (1) but I haven't seen an obvious improvement to (2).

Perhaps, "lie down in dark room" could help for (2). I'll have to test it out!

If you have 3 different hypotheses, it's much more natural to keep generating more hypotheses, and to pivot around in a multidimensional space of possibilities.

The way I imagine this playing out—though I'm not sure how literal this is—is that three hypotheses plus the starting state generate a three-dimensional vector basis when they're in general position. A corollary would be that you want neither all three nor two alongside the starting state to be collinear.
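One way to cash this out (my formalization of the metaphor, not the commenter's – the symbols are my own):

```latex
% Treat the start state s and hypotheses h_1, h_2, h_3 as points in idea-space.
% "General position" means the difference vectors are linearly independent:
\[
  \{\, h_1 - s,\; h_2 - s,\; h_3 - s \,\}\ \text{linearly independent}
  \iff
  \dim\,\operatorname{span}(h_1 - s,\; h_2 - s,\; h_3 - s) = 3.
\]
% If any two of these directions are collinear (or one is zero), the span
% collapses to dimension two or less: you can only pivot within a plane.
```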

Yeah. I'm not entirely sure what you mean by the metaphor, but I use a similar metaphor. 

I think, ideally, you'll have at least 3 plans and at least 3 different frames for "what are your plans trying to accomplish". i.e. the 3 plans are solving somewhat different problems. And this is specifically getting at the "being able to move around freely in 3 dimensions" thingy.

The actual top-level blogpost I plan to write is probably going to be called "3 plans, 2 frames, and a crux", which is based on what actually feels reasonable to ask of people in most circumstances. i.e:

  • Come up with at least 3 different plans. 
  • At least two of them should look at the problem differently.
  • If it's obvious which plan is your favorite, figure out what observations you could make later on that might change your mind towards one of the other two.

When you're fluent at this sort of thing, you should have lots of different plans and backup plans in mind such that you have way more than 3, but, this felt like a reasonable bid given that people will probably find all these steps cumbersome and annoying at first.

I've also found "spreadsheet literacy" a recurring skill

What exactly do you use spreadsheets for? Any examples?

I want to acknowledge that Fluent, Cruxy Predictions isn't on this list, despite me being really into it. I am currently making a bet that this will turn out to be useful as part of a mature rationality practice, and is my current mechanism for tracking whether any of this stuff is really paying off to the best of my awareness. But it's still very much a less mature skill that I'm not quite sure how to best execute.

I too have found Google Sheets to be an incredibly useful tool in recent years. One tip: ChatGPT is really good at writing very complex scripts and formulas for spreadsheets. Usually all you need to do is tell it what your goal is. Sometimes you have to take a couple cracks at it to get it just right, but now I can figure out some pretty amazing things with them.

Always try to have 3 hypotheses

This one is important enough to be its own post.

Duncan Sabien's Split and Commit would seem to cover (large?) parts of that.

Yeah I do think it is basically a reformulation of that idea, but tailored for a different cluster of problems. (I also think Leave a Line of Retreat and some other sequences posts cover similar ground).

A random extra tip on naps: try yoga nidra, or "non-sleep deep rest." You don't have to fall asleep to get the benefits of a nap+. It also has some extra growth hormone release and dopamine generation afterwards. (Huberman bro, out)
