Overcoming the mind-killer
I've been asked to start a thread in order to continue a debate I started in the comments of an otherwise-unrelated post. I started to write a post on that topic, found myself introducing my work by way of explanation, and then realized that this was a sub-topic all its own, one of substantial relevance to at least one of the replies to my comments in that post -- and a much better topic for a first-ever post/thread.
So I'm going to write that introductory post first, and then start another thread specifically on the topic under debate.
Individual vs. Group Epistemic Rationality
It's common practice in this community to differentiate forms of rationality along the axes of epistemic vs. instrumental, and individual vs. group, giving rise to four possible combinations. I think our shared goal, as indicated by the motto "rationalists win", is ultimately to improve group instrumental rationality. Generally, improving each of these forms of rationality also tends to improve the others, but sometimes conflicts arise between them. In this post I point out one such conflict between individual epistemic rationality and group epistemic rationality.
We place a lot of emphasis here on calibrating individual levels of confidence (i.e., subjective probabilities), and on the idea that rational individuals will tend to converge toward agreement about the proper level of confidence in any particular idea as they update on available evidence. But I argue that from a group perspective, it's sometimes better to have a spread of individual confidence levels around the individually rational level. Perhaps paradoxically, disagreements among individuals can be good for the group.
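A toy simulation can make the flavor of this claim concrete. Everything below is my own construction with assumed numbers (the 5% true probability, the 10% action threshold, the Gaussian spread), not an argument from the post itself: researchers only investigate a long-shot hypothesis when their personal confidence clears an action threshold, so a group whose confidences are all pinned to the calibrated value never investigates, while a group with some spread occasionally does.

```python
import random

# Toy model with assumed numbers: a hypothesis is true 5% of the time, and a
# researcher only bothers investigating it if their confidence exceeds 10%.
TRUE_PROB = 0.05            # calibrated, "individually rational" confidence
ACTION_THRESHOLD = 0.10     # assumed cost/benefit cutoff for investigating
N_RESEARCHERS = 100
N_TRIALS = 10_000

def group_succeeds(spread, rng):
    """One trial: does anyone investigate, and does the hypothesis pan out?"""
    confidences = (rng.gauss(TRUE_PROB, spread) for _ in range(N_RESEARCHERS))
    anyone_investigates = any(c > ACTION_THRESHOLD for c in confidences)
    return anyone_investigates and rng.random() < TRUE_PROB

rng = random.Random(0)
for spread in (0.0, 0.05):
    wins = sum(group_succeeds(spread, rng) for _ in range(N_TRIALS))
    print(f"confidence spread {spread}: group finds the truth in {wins}/{N_TRIALS} trials")

# With zero spread, every researcher is perfectly calibrated and nobody ever
# investigates; with a spread, a few overconfident members carry the group.
```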
Epistemic Luck
Who we learn from and with can profoundly influence our beliefs. There's no obvious way to compensate. Is it time to panic?
During one of my epistemology classes, my professor admitted (I can't recall the context) that his opinions on the topic would probably be different had he attended a different graduate school.
What a peculiar thing for an epistemologist to admit!
Of course, on the one hand, he's almost certainly right. Schools have their cultures, their traditional views, their favorite literature providers, their set of available teachers. These have a decided enough effect that I've heard "X was a student of Y" used to mean "X holds views basically like Y's". And everybody knows this. And people still show a distinct trend of agreeing with their teachers' views, even the most controversial - not an unbroken trend, but still an obvious one. So it's not at all unlikely that, yes, had the professor gone to a different graduate school, he'd believe something else about his subject, and he's not making a mistake in so acknowledging...
But on the other hand... but... but...
But how can he say that, and look so undubiously at the views he picked up this way? Surely the truth about knowledge and justification isn't correlated with which school you went to - even a little bit! Surely he knows that!
Value Uncertainty and the Singleton Scenario
In January of last year, Nick Bostrom wrote a post on Overcoming Bias about his and Toby Ord’s proposed method of handling moral uncertainty. To abstract away a bit from their specific proposal, the general approach was to convert a problem involving moral uncertainty into a game of negotiation, with each player’s bargaining power determined by one’s confidence in the moral philosophy represented by that player.
Robin Hanson suggested in his comments to Nick’s post that moral uncertainty should be handled the same way we're supposed to handle ordinary uncertainty, by using standard decision theory (i.e., expected utility maximization). Nick’s reply was that many ethical systems don’t fit into the standard decision theory framework, so it’s hard to see how to combine them that way.
In this post, I suggest we look into the seemingly easier problem of value uncertainty, in which we fix a consequentialist ethical system, and just try to deal with uncertainty about values (i.e., utility function). Value uncertainty can be considered a special case of moral uncertainty in which there is no apparent obstacle to applying Robin’s suggestion. I’ll consider a specific example of a decision problem involving value uncertainty, and work out how Nick and Toby’s negotiation approach differs in its treatment of the problem from standard decision theory. Besides showing the difference in the approaches, I think the specific problem is also quite important in its own right.
The problem I want to consider is this: suppose we believe that a singleton scenario is very unlikely, but could have very high utility if it were realized; should we focus most of our attention and effort on trying to increase its probability and/or improve its outcome? The main issue here is (putting aside uncertainty about what will happen after a singleton scenario is realized) uncertainty about how much we value what is likely to happen.
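To see how the two approaches can come apart, here is a minimal sketch with made-up numbers. The utility functions, the confidences, the action names, and the asymmetric-Nash formalization of the negotiation idea are all my assumptions, not Nick and Toby's actual scheme:

```python
# Two candidate utility functions the agent might really have, held with
# 80% / 20% confidence; the action names and payoffs are purely illustrative.
confidence = {"U1": 0.8, "U2": 0.2}
utilities = {
    "U1": {"status_quo": 1.0, "push_for_singleton": 0.5},
    "U2": {"status_quo": 1.0, "push_for_singleton": 10_000.0},
}

def expected_utility(action):
    """Robin's suggestion: treat value uncertainty like ordinary uncertainty
    and maximize the probability-weighted sum of utilities."""
    return sum(confidence[u] * utilities[u][action] for u in confidence)

def bargaining_score(action, disagreement_point=0.0):
    """One possible formalization of the negotiation approach: an asymmetric
    Nash product in which bargaining power equals confidence. This is a guess
    at the spirit of the proposal, not its exact mechanics."""
    score = 1.0
    for u in confidence:
        score *= (utilities[u][action] - disagreement_point) ** confidence[u]
    return score

for action in ("status_quo", "push_for_singleton"):
    print(f"{action}: EU={expected_utility(action):.1f}, "
          f"bargain={bargaining_score(action):.2f}")

# Expected utility is dominated by U2's huge singleton payoff despite its low
# probability; the confidence-weighted product rewards the long shot far less.
```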
Sorting Out Sticky Brains
tl;dr: Just because it doesn't seem like we should be able to have beliefs we acknowledge to be irrational, doesn't mean we don't have them. If this happens to you, here's a tool to help conceptualize and work around that phenomenon.
There's a general feeling that by the time you've acknowledged that some belief you hold is not based on rational evidence, it has already evaporated. The very act of realizing it's not something you should believe makes it go away. If that's your experience, I applaud your well-organized mind! It's serving you well. This is exactly as it should be.
If only we were all so lucky.
Brains are sticky things. They will hang onto comfortable beliefs that don't make sense anymore, view the world through familiar filters that should have been discarded long ago, see significances and patterns and illusions even if they're known by the rest of the brain to be irrelevant. Beliefs should be formed on the basis of sound evidence. But that's not the only mechanism we have in our skulls to form them. We're equipped to come by them in other ways, too. It's been observed [1] that believing contradictions is only bad because it entails believing falsehoods. If you can't get rid of one belief in a contradiction, and that's the false one, then believing a contradiction is the best you can do, because then at least you have the true belief too.
The mechanism I use to deal with this is to label my beliefs "official" and "unofficial". My official beliefs have a second-order stamp of approval. I believe them, and I believe that I should believe them. Meanwhile, the "unofficial" beliefs are those I can't get rid of, or am not motivated to try really hard to get rid of because they aren't problematic enough to be worth the trouble. They might or might not outright contradict an official belief, but regardless, I try not to act on them.
The Power of Positivist Thinking
Related to: No Logical Positivist I, Making Beliefs Pay Rent, How An Algorithm Feels From Inside, Disguised Queries
Call me non-conformist, call me one man against the world, but... I kinda like logical positivism.
The logical positivists were a dour, no-nonsense group of early 20th-century European philosophers. Indeed, the phrase "no-nonsense" seems almost invented to describe the Positivists. They liked nothing better than to reject the pet topics of other philosophers as being untestable and therefore meaningless. Is the true also the beautiful? Meaningless! Is there a destiny to the affairs of humankind? Meaningless! What is justice? Meaningless! Are rights inalienable? Meaningless!
Positivism became stricter and stricter, defining more and more things as meaningless, until someone finally pointed out that positivism itself was meaningless by the positivists' definitions, at which point the entire system vanished in a puff of logic. Okay, it wasn't that simple. It took several decades and Popper's falsificationism to seal its coffin. But vanish it did. It remains one of the least lamented theories in the history of philosophy, because if there is one thing philosophers hate it's people telling them they can't argue about meaningless stuff.
But if we've learned anything from fantasy books, it is that any cabal of ancient wise men destroyed by their own hubris at the height of their glory must leave behind a single ridiculously powerful artifact, which in the right hands gains the power to dispel darkness and annihilate the forces of evil.
The positivists left us the idea of verifiability, and it's time we started using it more.
The ethic of hand-washing and community epistemic practice
by Steve Rayhawk and Anna Salamon. (Joint authorship; there's currently no way to notate that in the Reddit code base.)
Related to: Use the Native Architecture
When cholera moves through countries with poor drinking water sanitation, it apparently becomes more virulent. When it moves through countries that have clean drinking water (more exactly, countries that reliably keep fecal matter out of the drinking water), it becomes less virulent. The theory is that cholera faces a tradeoff between rapidly copying within its human host (so that it has more copies to spread) and keeping its host well enough to wander around infecting others. If person-to-person transmission is cholera’s only means of spreading, it will evolve to keep its host well enough to spread it. If it can instead spread through the drinking water (and thus spread even from hosts who are too ill to go out), it will evolve toward increased lethality. (Critics here.)
I’m stealing this line of thinking from my friend Jennifer Rodriguez-Mueller, but: I’m curious whether anyone’s gotten analogous results for the progress and mutation of ideas, among communities with different communication media and/or different habits for deciding which ideas to adopt and pass on. Are there differences between religions that are passed down vertically (parent to child) vs. horizontally (peer to peer), since the former do better when their bearers raise more children? Do mass media such as radio, TV, newspapers, or printing presses decrease the functionality of the average person’s ideas, by allowing ideas to spread in a manner that is less dependent on their average host’s prestige and influence? (The intuition here is that prestige and influence might be positively correlated with the functionality of the host’s ideas, at least in some domains, while the contingencies determining whether an idea spreads through mass media instruments might have less to do with functionality.)
Extending this analogy -- most of us were taught as children to wash our hands. We were given the rationale, not only of keeping ourselves from getting sick, but also of making sure we don’t infect others. There’s an ethic of sanitariness that draws from the ethic of being good community members.
Chaotic Inversion
I was recently having a conversation with some friends on the topic of hour-by-hour productivity and willpower maintenance—something I've struggled with my whole life.
I can avoid running away from a hard problem the first time I see it (perseverance on a timescale of seconds), and I can stick to the same problem for years; but to keep working on a timescale of hours is a constant battle for me. It goes without saying that I've already read reams and reams of advice; and the most help I got from it was realizing that a sizable fraction of other creative professionals had the same problem, and couldn't beat it either, no matter how reasonable all the advice sounds.
"What do you do when you can't work?" my friends asked me. (Conversation probably not accurate, this is a very loose gist.)
And I replied that I usually browse random websites, or watch a short video.
"Well," they said, "if you know you can't work for a while, you should watch a movie or something."
"Unfortunately," I replied, "I have to do something whose time comes in short units, like browsing the Web or watching short videos, because I might become able to work again at any time, and I can't predict when—"
And then I stopped, because I'd just had a revelation.
Aiming at the Target
Previously in series: Belief in Intelligence
Previously, I spoke of that very strange epistemic position one can occupy, wherein you don't know exactly where Kasparov will move on the chessboard, and yet your state of knowledge about the game is very different than if you faced a random move-generator with the same subjective probability distribution - in particular, you expect Kasparov to win. I have beliefs about where Kasparov wants to steer the future, and beliefs about his power to do so.
Well, and how do I describe this knowledge, exactly?
In the case of chess, there's a simple function that classifies chess positions into wins for black, wins for white, and drawn games. If I know which side Kasparov is playing, I know the class of chess positions Kasparov is aiming for. (If I don't know which side Kasparov is playing, I can't predict whether black or white will win - which is not the same as confidently predicting a drawn game.)
More generally, I can describe motivations using a preference ordering. When I consider two potential outcomes, X and Y, I can say that I prefer X to Y; prefer Y to X; or find myself indifferent between them. I would write these relations as X > Y; X < Y; and X ~ Y.
Suppose that you have the ordering A < B ~ C < D ~ E. Then you like B more than A, and C more than A. {B, C}, belonging to the same class, seem equally desirable to you; you are indifferent between which of {B, C} you receive, though you would rather have either than A, and you would rather have something from the class {D, E} than {B, C}.
When I think you're a powerful intelligence, and I think I know something about your preferences, then I'll predict that you'll steer reality into regions that are higher in your preference ordering.
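As a concrete rendering of the notation above (the outcome names and rankings are just the example ordering A < B ~ C < D ~ E, not anything from the original post), a preference ordering can be represented as ranked indifference classes, and the prediction "steer reality into the highest-preferred reachable region" falls out directly:

```python
# Encode the example ordering A < B ~ C < D ~ E as ranks;
# outcomes with the same rank sit in the same indifference class.
ranks = {"A": 0, "B": 1, "C": 1, "D": 2, "E": 2}

def compare(x, y):
    """Return '>', '<', or '~' for outcomes x and y under the ordering."""
    if ranks[x] > ranks[y]:
        return ">"
    if ranks[x] < ranks[y]:
        return "<"
    return "~"

def predicted_outcomes(reachable):
    """If I believe the agent is powerful and I know its preferences, I predict
    it steers reality into the highest-ranked reachable class. Ties within an
    indifference class stay unpredicted."""
    best = max(ranks[o] for o in reachable)
    return {o for o in reachable if ranks[o] == best}

print(compare("B", "A"))                         # '>'
print(compare("B", "C"))                         # '~'
print(predicted_outcomes({"A", "B", "C", "D"}))  # {'D'}
```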
Belief in Intelligence
Previously in series: Expected Creative Surprises
Since I am so uncertain of Kasparov's moves, what is the empirical content of my belief that "Kasparov is a highly intelligent chess player"? What real-world experience does my belief tell me to anticipate? Is it a cleverly masked form of total ignorance?
To sharpen the dilemma, suppose Kasparov plays against some mere chess grandmaster Mr. G, who's not in the running for world champion. My own ability is far too low to distinguish between these levels of chess skill. When I try to guess Kasparov's move, or Mr. G's next move, all I can do is try to guess "the best chess move" using my own meager knowledge of chess. Then I would produce exactly the same prediction for Kasparov's move or Mr. G's move in any particular chess position. So what is the empirical content of my belief that "Kasparov is a better chess player than Mr. G"?