Eliezer draws a connection between recognizing what a locally valid proof step is in mathematics, knowing that there are bad arguments even for true conclusions, and understanding that for civilization to hold together, people need to apply rules impartially even when it feels like doing so costs them in a particular instance. He fears that our society is losing its appreciation for these points.
A coordination problem arises when everyone is taking some action A, and we'd all rather be taking action B, but it's bad if we don't all move to B at the same time. Common knowledge is the name for the epistemic state we're collectively in when we all know we can start choosing action B, and can trust everyone else to do the same.
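The structure above can be sketched as a stag-hunt-style payoff model. This is my own illustration, not from the original post, and the payoff numbers are arbitrary assumptions chosen to show why no one moves to B without trust that everyone else will:

```python
# Hypothetical payoffs: B is better than A, but only if everyone
# switches together; a unilateral switcher does worse than the status quo.

def payoff(my_choice, others):
    """Payoff for one agent, given the choices of everyone else."""
    if my_choice == "B":
        # B pays off only if all other agents also chose B.
        return 3 if all(c == "B" for c in others) else 0
    return 1  # A is safe regardless of what others do

n = 5
everyone_A = ["A"] * (n - 1)
everyone_B = ["B"] * (n - 1)

print(payoff("A", everyone_A))  # status quo: 1
print(payoff("B", everyone_A))  # unilateral switch: 0 (worse than A)
print(payoff("B", everyone_B))  # coordinated switch: 3 (best outcome)
```

With these payoffs, switching alone is strictly worse than staying put, which is why the group needs common knowledge, not just individual preference for B, before anyone moves.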
How do human beings produce knowledge? When we describe rational thought processes, we tend to think of them as essentially deterministic, deliberate, and algorithmic. After some self-examination, however, Alkjash came to think that his process is closer to babbling many random strings and later filtering by a heuristic.
In this post, Alkjash explores the concept of Babble and Prune as a model for thought generation. Babble refers to generating many possibilities with a weak heuristic, while Prune involves using a stronger heuristic to filter and select the best options. He discusses how this model relates to creativity, problem-solving, and various aspects of human cognition and culture.
Babble is our ability to generate ideas. Prune is our ability to filter those ideas. For many people, Prune is too strong, so they don't generate enough ideas. This post explores how to relax Prune to let more ideas through.
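The Babble-and-Prune model can be rendered as a toy generate-and-filter loop. This is my own illustration, not Alkjash's code; the "vowel count" heuristic is a deliberately crude stand-in for whatever filter Prune actually applies:

```python
# Toy Babble/Prune: a weak generator proposes many random candidates,
# then a stronger heuristic filters the few that survive.
import random

random.seed(0)  # deterministic for illustration

def babble(n, length=5):
    """Weak heuristic: propose n random lowercase strings."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    return ["".join(random.choice(letters) for _ in range(length))
            for _ in range(n)]

def prune(candidates, threshold=2):
    """Stronger heuristic: keep strings with enough vowels
    (a crude stand-in for 'sounds like a real idea')."""
    vowels = set("aeiou")
    return [c for c in candidates
            if sum(ch in vowels for ch in c) >= threshold]

ideas = babble(1000)
kept = prune(ideas)
print(f"{len(kept)} of {len(ideas)} candidates survive pruning")
```

The point of the model survives the toy: most of the work is in the cheap, noisy generation step, and if the filter (`threshold` here) is set too high, almost nothing gets through.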
Eliezer explores a dichotomy between "thinking in toolboxes" and "thinking in laws".
Toolbox thinkers are oriented around a "big bag of tools that you adapt to your circumstances." Law thinkers are oriented around universal laws, which might or might not be useful tools, but which help us model the world and scope out problem-spaces. There seems to be confusion when toolbox and law thinkers talk to each other.
Comparing your own Fermi estimates with other people's is sort of cool, but what's far more interesting is when they share the variables and models they used to reach the estimate. That lets you actually update your own model in a deeper way.
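Making the variables explicit is what enables that deeper comparison. As a hedged illustration (the classic piano-tuners question; every number below is a rough assumption, which is the point), a Fermi estimate laid out this way lets someone challenge a specific input rather than just the final answer:

```python
# Each named variable is an assumption someone else could dispute.
population = 9_000_000          # people in the city (assumed)
people_per_household = 2        # assumed
households_with_piano = 1 / 20  # assumed fraction owning a piano
tunings_per_piano_per_year = 1  # assumed
tunings_per_tuner_per_day = 4   # assumed
workdays_per_year = 250         # assumed

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tunings_per_tuner_per_day * workdays_per_year
tuners = tunings_needed / tunings_per_tuner
print(round(tuners))  # → 225, i.e. a couple hundred, order of magnitude
```

If two people's final answers differ by 10x, a layout like this shows immediately whether they disagree about piano ownership, tuner productivity, or something else entirely.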
Scott Alexander reviews and expands on Paul Graham's "hierarchy of disagreement" to create a broader and more detailed taxonomy of argument types, from the most productive to the least. He discusses the difficulty and importance of avoiding lower levels of argument, and the value of seeking "high-level generators of disagreement" even when they don't lead to agreement.
There are problems with the obvious-seeming "wizard's code of honesty" aka "never say things that are false". Sometimes, even exceptionally honest people lie (such as when hiding fugitives from an unjust regime). If "never lie" is unworkable as an absolute rule, what code of conduct should highly honest people aspire to?
Some people claim that aesthetics don't mean anything, and are resistant to the idea that they could. After all, aesthetic preferences are very individual.
Sarah argues that the skeptics have a point, but they're too epistemically conservative. Colors don't have intrinsic meanings, but they do have shared connotations within a culture. There's obviously some signal being carried through aesthetic choices.
By default, humans are a kludgy bundle of impulses. But we have the ability to reflect on our decision-making, and its implications, and derive better overall policies. You might want to become a more robust, coherent agent, particularly if you're operating in an unfamiliar domain where common wisdom can't guide you.