Roles are Martial Arts for Agency
A long time ago I thought that Martial Arts simply taught you how to fight – the right way to throw a punch, the best technique for blocking and countering an attack, etc. I thought training consisted of recognizing these attacks and choosing the correct responses more quickly, as well as simply faster/stronger physical execution of same. It was later that I learned that the entire purpose of martial arts is to train your body to react with minimal conscious deliberation, to remove “you” from the equation as much as possible.
The reason is of course that conscious thought is too slow. If you have to think about what you’re doing, you’ve already lost. It’s been said that if you had to think about walking to do it, you’d never make it across the room. Fighting is no different. (It isn’t just fighting either – anything that requires quick reaction suffers when exposed to conscious thought. I used to love Rock Band. One day when playing a particularly difficult guitar solo on expert I nailed 100%… except “I” didn’t do it at all. My eyes saw the notes, my hands executed them, and nowhere was I involved in the process. It was both exhilarating and creepy, and I basically dropped the game soon after.)
You’ve seen how long it takes a human to learn to walk effortlessly. That's a situation with a single constant force, an unmoving surface, no agents working against you, and minimal emotional agitation. No wonder it takes hundreds of hours, repeating the same basic movements over and over again, to attain even a basic level of martial mastery. To make your body react correctly without any thinking involved. When Neo says “I Know Kung Fu” he isn’t surprised that he now has knowledge he didn’t have before. He’s amazed that his body now reacts in the optimal manner when attacked without his involvement.
All of this is simply focusing on pure reaction time – it doesn’t even take into account the emotional terror of another human seeking to do violence to you. It doesn’t capture the indecision of how to respond, the paralysis of having to choose between outcomes which are all awful and you don’t know which will be worse, and the surge of hormones. The training of your body to respond without your involvement bypasses all of those obstacles as well.
This is the true strength of Martial Arts – eliminating your slow, conscious deliberation and acting while there is still time to do so.
Roles are the Martial Arts of Agency.
When one is well-trained in a certain Role, one defaults to certain prescribed actions immediately and confidently. I’ve acted as a guy standing around watching people faint in an overcrowded room, and I’ve acted as the guy telling people to clear the area. The difference was in one I had the role of Corporate Pleb, and the other I had the role of Guy Responsible For This Shit. You know the difference between the guy at the bar who breaks up a fight, and the guy who stands back and watches it happen? The former thinks of himself as the guy who stops fights. They could even be the same guy, on different nights. The role itself creates the actions, and it creates them as an immediate reflex. By the time corporate-me is done thinking “Huh, what’s this? Oh, this looks bad. Someone fainted? Wow, never seen that before. Damn, hope they’re OK. I should call 911.” enforcer-me has already yelled for the room to clear and whipped out a phone.
Roles are the difference between Hufflepuffs gawking when Neville tumbles off his broom (Protected), and Harry screaming “Wingardium Leviosa” (Protector). Draco insulted them afterwards, but it wasn’t a fair insult – they never had the slightest chance to react in time, given the role they were in. Roles are the difference between Minerva ordering Hagrid to stay with the children while she forms troll-hunting parties (Protector), and Harry standing around doing nothing while time slowly ticks away (Protected). Eventually he switched roles. But it took Agency to do so. It took time.
Agency is awesome. Half this site is devoted to becoming better at Agency. But Agency is slow. Roles allow real-time action under stress.
Agency has a place of course. Agency is what causes us to decide that Martial Arts training is important, that has us choose a Martial Art, and then continue to train month after month. Agency is what lets us decide which Roles we want to play, and practice the psychology and execution of those roles. But when the time for action is at hand, Agency is too slow. Ensure that you have trained enough for the next challenge, because it is the training that will see you through it, not your agenty conscious thinking.
As an aside, most major failures I’ve seen recently are when everyone assumed that someone else had the role of Guy In Charge If Shit Goes Down. I suggest that, in any gathering of rationalists, they begin the meeting by choosing one person to be Dictator In Extremis should something break. Doesn’t have to be the same person as whoever is leading. Would be best if it was someone comfortable in the role and/or with experience in it. But really there just needs to be one. Anyone.
cross-posted from my blog
Confused as to usefulness of 'consciousness' as a concept
Years ago, before I had come across many of the power tools in statistics, information theory, algorithmics, decision theory, or the Sequences, I was very confused by the concept of intelligence. Like many, I was inclined to reify it as some mysterious, effectively-supernatural force that tilted success at problem-solving in various domains towards the 'intelligent', and which occupied a scale imperfectly captured by measures such as IQ.
Realising that 'intelligence' (as a ranking of agents or as a scale) was a lossy compression of an infinity of statements about the relative success of different agents in various situations was part of dissolving the confusion; the reason that those called 'intelligent' or 'skillful' succeeded more often was that there were underlying processes that had a greater average tendency to output success, and that greater average success caused the application of the labels.
Any agent can be made to lose by an adversarial environment. But for a fixed set of environments, there might be some types of decision processes that do better over that set of environments than other processes, and one can quantify this relative success in any number of ways.
It's almost embarrassing to write that since put that way, it's obvious. But it still seems to me that intelligence is reified (for example, look at most discussions about IQ), and the same basic mistake is made in other contexts, e.g. the commonly-held teleological approach to physical and mental diseases or 'conditions', in which the label is treated as if—by some force of supernatural linguistic determinism—it *causes* the condition, rather than the symptoms of the condition, in their presentation, causing the application of the labels. Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics.
For the sake of brevity, even when we realise these approximations, we often use them without commenting upon or disclaiming our usage, and in many cases this is sensible. Indeed, in many cases it's not clear what the exact, decompressed form of a concept would be, or it seems obvious that there can in fact be no single, unique rigorous form of the concept, but that the usage of the imprecise term is still reasonably consistent and correlates usefully with some relevant phenomenon (e.g. tendency to successfully solve problems). Hearing that one person has a higher IQ than another might allow one to make more reliable predictions about who will have the higher lifetime income, for example.
However, widespread use of such shorthands has drawbacks. If a term like 'intelligence' is used without concern or without understanding of its core (i.e. tendencies of agents to succeed in varying situations, or 'efficient cross-domain optimization'), then it might be used teleologically; the term is reified (the mental causal graph goes from "optimising algorithm->success->'intelligent'" to "'intelligent'->success").
In this teleological mode, it feels like 'intelligence' is the 'prime mover' in the system, rather than a description applied retroactively to a set of correlations. But knowledge of those correlations makes the term redundant; once we are aware of the correlations, the term 'intelligence' is just a pointer to them, and does not add anything to them. Despite this, it seems to me that some smart people get caught up in obsessing about reified intelligence (or measures like IQ) as if it were a magical key to all else.
Over the past while, I have been leaning more and more towards the conclusion that the term 'consciousness' is used in similarly dubious ways, and today it occurred to me that there is a very strong analogy between the potential failure modes of discussions of 'consciousness' and those of discussions of 'intelligence'. In fact, I suspect that the perils of 'consciousness' might be far greater than those of 'intelligence'.
~
A few weeks ago, Scott Aaronson posted to his blog a criticism of integrated information theory (IIT). IIT attempts to provide a quantitative measure of the consciousness of a system. (Specifically, a nonnegative real number phi). Scott points out what he sees as failures of the measure phi to meet the desiderata of a definition or measure of consciousness, thereby arguing that IIT fails to capture the notion of consciousness.
What I read and understood of Scott's criticism seemed sound and decisive, but I can't shake a feeling that such arguments about measuring consciousness are missing the broader point that all such measures of consciousness are doomed to failure from the start, in the same way that arguments about specific measures of intelligence are missing a broader point about lossy compression.
Let's say I ask you to make predictions about the outcome of a game of half-court basketball between Alpha and Beta. Your prior knowledge is that Alpha always beats Beta at (individual versions of) every sport except half-court basketball, and that Beta always beats Alpha at half-court basketball. From this fact you assign Alpha a Sports Quotient (SQ) of 100 and Beta an SQ of 10. Since Alpha's SQ is greater than Beta's, you confidently predict that Alpha will beat Beta at half-court.
Of course, that would be wrong, wrong, wrong; the SQ's are encoding (or compressing) the comparative strengths and weaknesses of Alpha and Beta across various sports, and in particular that Alpha always loses to Beta at half-court. (In fact, *not even that much* information is encoded if other combinations of results could lead to the same scores.) So to just look at the SQ's as numbers and use that as your prediction criterion is a knowably inferior strategy to looking at the details of the case in question, i.e. the actual past results of half-court games between the two.
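(A toy illustration of that compression loss, for the programmatically inclined. The match results and SQ numbers below are made up to mirror the example above:)

```python
# Hypothetical head-to-head record between Alpha and Beta.
record = {
    "tennis": "Alpha",
    "golf": "Alpha",
    "swimming": "Alpha",
    "archery": "Alpha",
    "half-court basketball": "Beta",
}

# A lossy scalar summary derived from that record (e.g. from overall win counts).
sq = {"Alpha": 100, "Beta": 10}

def predict_from_sq(sport):
    # Ignores the details: whoever has the higher SQ is predicted to win.
    return max(sq, key=sq.get)

def predict_from_record(sport):
    # Uses the uncompressed information the SQ was computed from.
    return record[sport]

print(predict_from_sq("half-court basketball"))      # Alpha -- wrong
print(predict_from_record("half-court basketball"))  # Beta  -- right
```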
Since measures like this fictional SQ or actual IQ or fuzzy (or even quantitative) notions of consciousness are at best shorthands for specific abilities or behaviours, tabooing the shorthand should never leave you with less information, since a true shorthand, by its very nature, does not add any information.
When I look at something like IIT, which (if Scott's criticism is accurate) assigns a superhuman consciousness score to a system that evaluates a polynomial at some points, my reaction is pretty much, "Well, this kind of flaw is pretty much inevitable in such an overambitious definition."
Six months ago, I wrote:
"...it feels like there's a useful (but possibly quantitative and not qualitative) difference between myself (obviously 'conscious' for any coherent extrapolated meaning of the term) and my computer (obviously not conscious (to any significant extent?))..."
Mark Friedenbach replied recently (so, a few months later):
"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"
I feel like if Mark had made that reply soon after my comment, I might have had a hard time formulating why, but that I would have been inclined towards disputing that my computer is conscious. As it is, at this point I am struggling to see that there is any meaningful disagreement here. Would we disagree over what my computer can do? What information it can process? What tasks it is good for, and for which not so much?
What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?
Even if we did disagree (or at least did not agree) over, say, an average human's ability to detect and avoid ultraviolet light without artificial aids and modern knowledge, this lack of agreement would not feel like a messy, confusing philosophical one. It would feel like one tractable to direct experimentation. You know, like, blindfold some experimental subjects, control subjects, and experimenters, and compare how the experimental subjects react to ultraviolet light with how the control subjects react to other light. Just like if we were arguing about whether Alpha or Beta is the better athlete, there would be no mystery left over once we'd agreed about their relative abilities at every athletic activity. At most there would be terminological bickering over which scoring rule over athletic activities we should be using to measure 'athletic ability', but not any disagreement for any fixed measure.
I have been turning it over for a while now, and I am struggling to think of contexts in which consciousness really holds up to attempts to reify it. If asked why it doesn't make sense to politely ask a virus to stop multiplying because it's going to kill its host, a conceivable response might be something like, "Erm, you know it's not conscious, right?" This response might well do the job. But if pressed to cash out this response, what we're really concerned with is the absence of the usual physical-biological processes by which talking at a system might affect its behaviour, so that there is no reason to expect the polite request to increase the chance of the favourable outcome. Sufficient knowledge of physics and biology could make this even more rigorous, and no reference need be made to consciousness.
The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is *defined* (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'. But then it's not clear why we should call this moral criterion 'consciousness'; insomuch as consciousness is about information processing or understanding an environment, it's not obvious what connection this has to moral worth. And insomuch as consciousness is the Magic Token of Moral Worth, it's not clear what it has to do with information processing.
If we relabelled zxcv=conscious and rewrote, "We shouldn't eat chickens because they're zxcv," then this makes it clearer that the explanation is not entirely satisfactory; what does zxcv have to do with moral worth? Well, what does consciousness have to do with moral worth? Conservation of argumentative work and the usual prohibitions on equivocation apply: You can't introduce a new sense of the word 'conscious', then plug it into a statement like "We shouldn't eat chickens because they're conscious" and dust your hands off as if your argumentative work is done. That work is done only if one's actual values and the information-processing definition of consciousness already exactly coincide, and this coincidence is known. But it seems to me like a claim of any such coincidence must stem from confusion rather than actual understanding of one's values; valuing a system commensurate with its ability to process information is a fake utility function.
When intelligence is reified, it becomes a teleological fake explanation; consistently successful people are consistently successful because they are known to be Intelligent, rather than their consistent success causing them to be called intelligent. Similarly consciousness becomes teleological in moral contexts: We shouldn't eat chickens because they are called Conscious, rather than 'these properties of chickens mean we shouldn't eat them, and chickens also qualify as conscious'.
So it is that I have recently been very skeptical of the term 'consciousness' (though I grant that it can sometimes be a useful shorthand), and hence my question to you: Have I overlooked any counts in favour of the term 'consciousness'?
Too good to be true
A friend recently posted a link on his Facebook page to an informational graphic about the alleged link between the MMR vaccine and autism. It said, if I recall correctly, that out of 60 studies on the matter, not one had indicated a link.
Presumably, with 95% confidence.
This bothered me. What are the odds, supposing there is no link between X and Y, of conducting 60 studies of the matter, and of all 60 concluding, with 95% confidence, that there is no link between X and Y?
Answer: .95 ^ 60 = .046. (Use the first term of the binomial distribution.)
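(If you want to check the arithmetic, here is a minimal sketch in Python; the numbers are just the ones from the graphic and the standard 95%/5% convention:)

```python
from math import comb

# Under the null of no link, a study run at 95% confidence has a 5% chance
# of a (false) positive finding, so a 95% chance of finding nothing.
p_negative = 0.95
n_studies = 60

# Probability that all 60 studies find nothing: the k = 0 term of the
# binomial distribution, C(60, 0) * 0.05**0 * 0.95**60.
p_all_negative = comb(n_studies, 0) * (0.05 ** 0) * (p_negative ** n_studies)
print(round(p_all_negative, 3))  # ~0.046
```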
So if it were in fact true that 60 out of 60 studies failed to find a link between vaccines and autism at 95% confidence, this would prove, with 95% confidence, that studies in the literature are biased against finding a link between vaccines and autism.
Confound it! Correlation is (usually) not causation! But why not?
It is widely understood that statistical correlation between two variables ≠ causation. But despite this admonition, people are routinely overconfident in claiming correlations to support particular causal interpretations and are surprised by the results of randomized experiments, suggesting that they are biased & systematically underestimating the prevalence of confounds/common-causation. I speculate that in realistic causal networks or DAGs, the number of possible correlations grows faster than the number of possible causal relationships. So confounds really are that common, and since people do not think in DAGs, the imbalance also explains overconfidence.
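A rough way to poke at that intuition is simulation. The sketch below is an illustrative toy model (not the analysis from the full article linked underneath): it samples random sparse DAGs and, assuming faithfulness, reports what fraction of marginally dependent ("correlated") pairs actually have one variable as a causal ancestor of the other.

```python
import random
from itertools import combinations

def random_dag(n, avg_degree=2.0):
    # Orienting every edge from lower to higher index guarantees acyclicity.
    p = min(1.0, avg_degree / n)
    return {(i, j) for i, j in combinations(range(n), 2) if random.random() < p}

def ancestors(node, edges):
    # The node itself plus everything with a directed path into it.
    anc, stack = {node}, [node]
    while stack:
        cur = stack.pop()
        for i, j in edges:
            if j == cur and i not in anc:
                anc.add(i)
                stack.append(i)
    return anc

def causal_fraction(n, trials=300):
    causal = dependent = 0
    for _ in range(trials):
        edges = random_dag(n)
        anc = {v: ancestors(v, edges) for v in range(n)}
        for x, y in combinations(range(n), 2):
            if anc[x] & anc[y]:                 # shared ancestor: marginally dependent
                dependent += 1
                if x in anc[y] or y in anc[x]:  # one is actually a cause of the other
                    causal += 1
    return causal / dependent if dependent else float("nan")

for n in (5, 10, 20, 40):
    print(n, round(causal_fraction(n), 2))
```

Since a causal pair always shares an ancestor, the fraction is at most 1; whatever it comes out to for a given network size, the remainder is the share of correlated pairs that are correlated only through confounding.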
Full article: http://www.gwern.net/Causality
Steelmanning Inefficiency
When considering writing a hypothetical apostasy or steelmanning an opinion I disagreed with, I looked around for something worthwhile, both for me to write and others to read. Yvain/Scott has already steelmanned Time Cube, which cannot be beaten as an intellectual challenge, but probably didn't teach us much of general use (except in interesting dinner parties). I wanted something hard, but potentially instructive.
So I decided to steelman one of the anti-sacred cows (sacred anti-cows?) of this community, namely inefficiency. It was interesting to find that it was a little easier than I thought; there are a lot of arguments already out there (though they generally don't come out explicitly in favour of "inefficiency"), it was a question of collecting them, stretching them beyond their domains of validity, and adding a few rhetorical tricks.
The strongest argument
Let's start strong: efficiency is the single most dangerous thing in the entire universe. Then we can work down from that:
A superintelligent AI could go out of control and optimise the universe in ways that are contrary to human survival. Some people are very worried about this; you may have encountered them at some point. One big problem seems to be that there is no such thing as a "reduced impact AI": if we give a superintelligent AI a seemingly innocuous goal such as "create more paperclips", then it would turn the entire universe into paperclips. Even if it had a more limited goal such as "create X paperclips", then it would turn the entire universe into redundant paperclips, methods for counting the paperclips it has, or methods for defending the paperclips it has - all because these massive transformations allow it to squeeze just a little bit more expected utility from the universe.
The problem is one of efficiency: of always choosing the maximal outcome. The problem would go away if the AI could be content with almost accomplishing its goal, or with being almost certain that its goal was accomplished. Under those circumstances, "create more paperclips" could be a viable goal. It's only because a self-modifying AI drives towards efficiency that we have the problem in the first place. If the AI accepted being inefficient in its actions, even a little bit, the world would be much safer.
So the first strike against efficiency is that it's the most likely thing to destroy the world, humanity, and everything of worth and value in the universe. This could possibly give us some pause.
Double Illusion of Transparency
Followup to: Explainers Shoot High, Illusion of Transparency
My first true foray into Bayes For Everyone was writing An Intuitive Explanation of Bayesian Reasoning, still one of my most popular works. This is the Intuitive Explanation's origin story.
In December of 2002, I'd been sermonizing in a habitual IRC channel about what seemed to me like a very straightforward idea: How words, like all other useful forms of thought, are secretly a disguised form of Bayesian inference. I thought I was explaining clearly, and yet there was one fellow, it seemed, who didn't get it. This worried me, because this was someone who'd been very enthusiastic about my Bayesian sermons up to that point. He'd gone around telling people that Bayes was "the secret of the universe", a phrase I'd been known to use.
So I went into a private IRC conversation to clear up the sticking point.
A Dialogue On Doublethink
Followup to: Against Doublethink (sequence), Dark Arts of Rationality, Your Strength as a Rationalist
Doublethink
It is obvious that the same thing will not be willing to do or undergo opposites in the same part of itself, in relation to the same thing, at the same time. --Book IV of Plato's Republic
Can you simultaneously want sex and not want it? Can you believe in God and not believe in Him at the same time? Can you be fearless while frightened?
To be fair to Plato, this was meant not as an assertion that such contradictions are impossible, but as an argument that the soul has multiple parts. It seems we can, in fact, want something while also not wanting it. This is awfully strange, and it led Plato to conclude the soul must have multiple parts, for surely no one part could contain both sides of the contradiction.
Often, when we attempt to accept contradictory statements as correct, it causes cognitive dissonance--that nagging, itchy feeling in your brain that won't leave you alone until you admit that something is wrong. Like when you try to convince yourself that staying up just a little longer playing 2048 won't have adverse effects on the presentation you're giving tomorrow, when you know full well that's exactly what's going to happen.
But it may be that cognitive dissonance is the exception in the face of contradictions, rather than the rule. How would you know? If it doesn't cause any emotional friction, the two propositions will just sit quietly together in your brain, never mentioning that it's logically impossible for both of them to be true. When we accept a contradiction wholesale without cognitive dissonance, it's what Orwell called "doublethink".
When you're a mere mortal trying to get by in a complex universe, doublethink may be adaptive. If you want to be completely free of contradictory beliefs without spending your whole life alone in a cave, you'll likely waste a lot of your precious time working through conundrums, which will often produce even more conundrums.
Suppose I believe that my husband is faithful, and I also believe that the unfamiliar perfume on his collar indicates he's sleeping with other women without my permission. I could let that pesky little contradiction turn into an extended investigation that may ultimately ruin my marriage. Or I could get on with my day and leave my marriage intact.
It's better to just leave those kinds of thoughts alone, isn't it? It probably makes for a happier life.
Against Doublethink
Suppose you believe that driving is dangerous, and also that, while you are driving, you're completely safe. As established in Doublethink, there may be some benefits to letting that mental configuration be.
There are also some life-shattering downsides. One of the things you believe is false, you see, by the law of non-contradiction. In point of fact, it's the one that goes "I'm completely safe while driving". Believing false things has consequences.
Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear. You won't have to put up with the inconvenience of a seatbelt. You will be happily unconcerned for a day, a week, a year. Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb. Or paralyzed from the neck down. Or dead. It's not inevitable, but it's possible; how probable is it? You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in. --Eliezer Yudkowsky, Doublethink (Choosing to be Biased)
What are beliefs for? Please pause for ten seconds and come up with your own answer.
Ultimately, I think beliefs are inputs for predictions. We're basically very complicated simulators that try to guess which actions will cause desired outcomes, like survival or reproduction or chocolate. We input beliefs about how the world behaves, make inferences from them to which experiences we should anticipate given various changes we might make to the world, and output behaviors that get us what we want, provided our simulations are good enough.
My car is making a mysterious ticking sound. I have many beliefs about cars, and one of them is that if my car makes noises it shouldn't, it will probably stop working eventually, and possibly explode. I can use this input to simulate the future. Since I've observed my car making a noise it shouldn't, I predict that my car will stop working. I also believe that there is something causing the ticking. So I predict that if I intervene and stop the ticking (in non-ridiculous ways), my car will keep working. My belief has thus led to the action of researching the ticking noise, planning some simple tests, and will probably lead to cleaning the sticky lifters.
If it's true that solving the ticking noise will keep my car running, then my beliefs will cash out in correctly anticipated experiences, and my actions will cause desired outcomes. If it's false, perhaps because the ticking can be solved without addressing a larger underlying problem, then the experiences I anticipate will not occur, and my actions may lead to my car exploding.
Doublethink guarantees that you believe falsehoods. Some of the time you'll call upon the true belief ("driving is dangerous"), anticipate future experiences accurately, and get the results you want from your chosen actions ("don't drive three times the speed limit at night while it's raining"). But some of the time, if you actually believe the false thing as well, you'll call upon the opposite belief, anticipate inaccurately, and choose the last action you'll ever take.
Without any principled algorithm determining which of the contradictory propositions to use as an input for the simulation at hand, you'll fail as often as you succeed. So it makes no sense to anticipate more positive outcomes from believing contradictions.
Contradictions may keep you happy as long as you never need to use them. Should you call upon them, though, to guide your actions, the debt on false beliefs will come due. You will drive too fast at night in the rain, you will crash, you will fly out of the car with no seat belt to restrain you, you will die, and it will be your fault.
Against Against Doublethink
What if Plato was pretty much right, and we sometimes believe contradictions because we're sort of not actually one single person?
It is not literally true that Systems 1 and 2 are separate individuals the way you and I are. But the idea of Systems 1 and 2 suggests to me something quite interesting with respect to the relationship between beliefs and their role in decision making, and modeling them as separate people with very different personalities seems to work pretty darn well when I test my suspicions.
I read Atlas Shrugged probably about a decade ago. I was impressed with its defense of capitalism, which really hammers home the reasons it’s good and important on a gut level. But I was equally turned off by its promotion of selfishness as a moral ideal. I thought that was *basically* just being a jerk. After all, if there’s one thing the world doesn’t need (I thought) it’s more selfishness.
Then I talked to a friend who told me Atlas Shrugged had changed his life. That he’d been raised in a really strict family that had told him that ever enjoying himself was selfish and made him a bad person, that he had to be working at every moment to make his family and other people happy or else let them shame him to pieces. And the revelation that it was sometimes okay to consider your own happiness gave him the strength to stand up to them and turn his life around, while still keeping the basic human instinct of helping others when he wanted to and he felt they deserved it (as, indeed, do Rand characters). --Scott of Slate Star Codex in All Debates Are Bravery Debates
If you're generous to a fault, "I should be more selfish" is probably a belief that will pay off in positive outcomes should you install it for future use. If you're selfish to a fault, the same belief will be harmful. So what if you were too generous half of the time and too selfish the other half? Well, then you would want to believe "I should be more selfish" with only the generous half, while disbelieving it with the selfish half.
Systems 1 and 2 need to hear different things. System 2 might be able to understand the reality of biases and make appropriate adjustments that would work if System 1 were on board, but System 1 isn't so great at being reasonable. And it's not System 2 that's in charge of most of your actions. If you want your beliefs to positively influence your actions (which is the point of beliefs, after all), you need to tailor your beliefs to System 1's needs.
For example: The planning fallacy is nearly ubiquitous. I know this because for the past three years or so, I've gotten everywhere five to fifteen minutes early. Almost every single person I meet with arrives five to fifteen minutes late. It is very rare for someone to be on time, and only twice in three years have I encountered the (rather awkward) circumstance of meeting with someone who also arrived early.
Before three years ago, I was also usually late, and I far underestimated how long my projects would take. I knew, abstractly and intellectually, about the planning fallacy, but that didn't stop System 1 from thinking things would go implausibly quickly. System 1's just optimistic like that. It responds to, "Dude, that is not going to work, and I have a twelve point argument supporting my position and suggesting alternative plans," with "Naaaaw, it'll be fine! We can totally make that deadline."
At some point (I don't remember when or exactly how), I gained the ability to look at the true due date, shift my System 1 beliefs to make up for the planning fallacy, and then hide my memory that I'd ever seen the original due date. I would see that my flight left at 2:30, and be surprised to discover on travel day that I was not late for my 2:00 flight, but a little early for my 2:30 one. I consistently finished projects on time, and only disasters caused me to be late for meetings. It took me about three months before I noticed the pattern and realized what must be going on.
I got a little worried I might make a mistake, such as leaving a meeting thinking the other person just wasn't going to show when the actual meeting time hadn't arrived. I did have a couple close calls along those lines. But it was easy enough to fix; in important cases, I started receiving Boomeranged notes from past-me around the time present-me expected things to start that said, "Surprise! You've still got ten minutes!"
This unquestionably improved my life. You don't realize just how inconvenient the planning fallacy is until you've left it behind. Clearly, considered in isolation, the action of believing falsely in this domain was instrumentally rational.
Doublethink, and the Dark Arts generally, applied to carefully chosen domains is a powerful tool. It's dumb to believe false things about really dangerous stuff like driving, obviously. But you don't have to doublethink indiscriminately. As long as you're careful, as long as you suspend epistemic rationality only when it's clearly beneficial to do so, employing doublethink at will is a great idea.
Instrumental rationality is what really matters. Epistemic rationality is useful, but what use is holding accurate beliefs in situations where that won't get you what you want?
Against Against Against Doublethink
There are indeed epistemically irrational actions that are instrumentally rational, and instrumental rationality is what really matters. It is pointless to believe true things if it doesn't get you what you want. This has always been very obvious to me, and it remains so.
There is a bigger picture.
Certain epistemic rationality techniques are not compatible with dark side epistemology. Most importantly, the Dark Arts do not play nicely with "notice your confusion", which is essentially your strength as a rationalist. If you use doublethink on purpose, confusion doesn't always indicate that you need to find out what false thing you believe so you can fix it. Sometimes you have to bury your confusion. There's an itsy bitsy pause where you try to predict whether it's useful to bury.
As soon as I finally decided to abandon the Dark Arts, I began to sweep out corners I'd allowed myself to neglect before. They were mainly corners I didn't know I'd neglected.
The first one I noticed was the way I responded to requests from my boyfriend. He'd mentioned before that I often seemed resentful when he made requests of me, and I'd insisted that he was wrong, that I was actually happy all the while. (Notice that in the short term, since I was probably going to do as he asked anyway, attending to the resentment would probably have made things more difficult for me.) This self-deception went on for months.
Shortly after I gave up doublethink, he made a request, and I felt a little stab of dissonance. Something I might have swept away before, because it seemed more immediately useful to bury the confusion than to notice it. But I thought (wordlessly and with my emotions), "No, look at it. This is exactly what I've decided to watch for. I have noticed confusion, and I will attend to it."
It was very upsetting at first to learn that he'd been right. I feared the implications for our relationship. But that fear didn't last, because we both knew the only problems you can solve are the ones you acknowledge, so it is a comfort to know the truth.
I was far more shaken by the realization that I really, truly was ignorant that this had been happening. Not because the consequences of this one bit of ignorance were so important, but because who knows what other epistemic curses have hidden themselves in the shadows? I realized that I had not been in control of my doublethink, that I couldn't have been.
Pinning down that one tiny little stab of dissonance took great preparation and effort, and there's no way I'd been working fast enough before. "How often," I wondered, "does this kind of thing happen?"
Very often, it turns out. I began noticing and acting on confusion several times a day, where before I'd been doing it a couple times a week. I wasn't just noticing things that I'd have ignored on purpose before; I was noticing things that would have slipped by because my reflexes slowed as I weighed the benefit of paying attention. "Ignore it" was not an available action in the face of confusion anymore, and that was a dramatic change. Because there are no disruptions, acting on confusion is becoming automatic.
I can't know for sure which bits of confusion I've noticed since the change would otherwise have slipped by unseen. But here's a plausible instance. Tonight I was having dinner with a friend I've met very recently. I was feeling a little bit tired and nervous, so I wasn't putting as much effort as usual into directing the conversation. At one point I realized we had stopped making any progress toward my goals, since it was clear we were drifting toward small talk. In a tired and slightly nervous state, I imagine that I might have buried that bit of information and abdicated responsibility for the conversation--not by means of considering whether allowing small talk to happen was actually a good idea, but by not pouncing on the dissonance aggressively, and thereby letting it get away. Instead, I directed my attention at the feeling (without effort this time!), inquired of myself what precisely was causing it, identified the prediction that the current course of conversation was leading away from my goals, listed potential interventions, weighed their costs and benefits against my simulation of small talk, and said, "What are your terminal values?"
(I know that sounds like a lot of work, but it took at most three seconds. The hard part was building the pouncing reflex.)
When you know that some of your beliefs are false, and you know that leaving them be is instrumentally rational, you do not develop the automatic reflex of interrogating every suspicion of confusion. You might think you can do this selectively, but if you do, I strongly suspect you're wrong in exactly the way I was.
I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques--things like willful doublethink--that I double-thought myself into believing accuracy was not so great.
But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.
Dark side epistemology prevents total dedication to continuous improvement in epistemic rationality. Though individual dark side actions may be instrumentally rational, the patterns of thought required to allow them are not. Though instrumental rationality is ultimately the goal, your instrumental rationality will always be limited by your epistemic rationality.
That was important enough to say again: Your instrumental rationality will always be limited by your epistemic rationality.
It only takes a fraction of a second to sweep an observation into the corner. You don't have time to decide whether looking at it might prove problematic. If you take the time to protect your compartments, false beliefs you don't endorse will slide in from everywhere through those split-second cracks in your art. You must attend to your confusion the very moment you notice it. You must be relentless and unmerciful toward your own beliefs.
Excellent epistemology is not the natural state of a human brain. Rationality is hard. Without extreme dedication and advanced training, without reliable automatic reflexes of rational thought, your belief structure will be a mess. You can't have totally automatic anti-rationalization reflexes if you use doublethink as a technique of instrumental rationality.
This has been a difficult lesson for me. I have lost some benefits I'd gained from the Dark Arts. I'm late now, sometimes. And painful truths are painful, though now they are sharp and fast instead of dull and damaging.
And it is so worth it! I have much more work to do before I can move on to the next thing. But whatever the next thing is, I'll tackle it with far more predictive power than I otherwise would have--though I doubt I'd have noticed the difference.
So when I say that I'm against against against doublethink--that dark side epistemology is bad--I mean that there is more potential on the light side, not that the dark side has no redeeming features. Its fruits hang low, and they are delicious.
But the fruits of the light side are worth the climb. You'll never even know they're there if you gorge yourself in the dark forever.
Willpower Depletion vs Willpower Distraction
I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (A propos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that willpower depletion "is" glucose depletion in the prefrontal cortex, but some more recent experiments have failed to replicate this, e.g. by finding that the mere taste of sugar is enough to "replenish" willpower faster than the time it takes blood to move from the mouth to the brain:
Carbohydrate mouth-rinses activate dopaminergic pathways in the striatum–a region of the brain associated with responses to reward (Kringelbach, 2004)–whereas artificially-sweetened non-carbohydrate mouth-rinses do not (Chambers et al., 2009). Thus, the sensing of carbohydrates in the mouth appears to signal the possibility of reward (i.e., the future availability of additional energy), which could motivate rather than fuel physical effort.-- Molden, D. C. et al, The Motivational versus Metabolic Effects of Carbohydrates on Self-Control. Psychological Science.
Stanford's Carol Dweck and Greg Walton even found that hinting to people that using willpower is energizing might actually make them less depletable:
When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.-- Dweck and Walton, Willpower: It’s in Your Head? New York Times.
While these are all interesting empirical findings, there’s a very similar phenomenon that’s much less debated and which could explain many of these observations, but I think gets too little popular attention in these discussions:
Willpower is distractible.
Indeed, willpower and working memory are both strongly mediated by the dorsolateral prefrontal cortex, so “distraction” could just be the two functions funging against one another. To use the terms of Stanovich popularized by Kahneman in Thinking: Fast and Slow, "System 2" can only override so many "System 1" defaults at any given moment.
So what’s going on when people say "willpower depletion"? I’m not sure, but even if willpower depletion is not a thing, the following distracting phenomena clearly are:
- Thirst
- Hunger
- Sleepiness
- Physical fatigue (like from running)
- Physical discomfort (like from sitting)
- That specific-other-thing you want to do
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
... and "willpower depletion" might be nothing more than mental distraction by one of these processes. Perhaps it really is better to think of willpower as power (a rate) than energy (a resource).
If that’s true, then figuring out what processes might be distracting us might be much more useful than saying “I’m out of willpower” and giving up. Maybe try having a sip of water or a bit of food if your diet permits it. Maybe try reading lying down to see if you get nap-ish. Maybe set a timer to remind you to call that friend you keep thinking about.
The last two bullets,
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
are also enough to explain why being told willpower depletion isn’t a thing might reduce the effects typically attributed to it: we might simply be less distracted by anxiety or indignation about doing “too much” willpower-intensive work in a short period of time.
Of course, any speculation about how human minds work in general is prone to the "typical mind fallacy". Maybe my willpower is depletable and yours isn’t. But then that wouldn’t explain why you can cause people to exhibit less willpower depletion by suggesting otherwise. But then again, most published research findings are false. But then again the research on the DLPFC and working memory seems relatively old and well established, and distraction is clearly a thing...
All in all, more of my chips are falling on the hypothesis that willpower “depletion” is often just willpower distraction, and that finding and addressing those distractions is probably a better strategy than avoiding activities altogether in order to "conserve willpower".
On Terminal Goals and Virtue Ethics
Introduction
A few months ago, my friend said the following thing to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”
That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless faction because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’
(Tris did have a concept of some future world-outcomes being better than others, and wanting to have an effect on the world. But that wasn't the causal reason why she chose Dauntless; as far as I can tell, it was unrelated.)
My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.
You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.
Virtue Ethics
I find the different houses of the HPMOR universe to be a very compelling metaphor. It’s not because they suggest actions to take; instead, they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character.’ Courage and bravery for Gryffindor, for example. It also suggests the idea that different people can focus on different virtues–diversity is a useful thing to have in the world. (I'm probably mangling the concept of virtue ethics here, not having any background in philosophy, but it's the closest term for the thing I mean.)
I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued.
By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.
Terminal Goals
The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–where you start from a terminal goal, like “saving the world”, and break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. And it doesn't feel like it would work.
There are some bad reasons why it might feel wrong–i.e. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism, as a thing that other people do–I could talk about my terminal goals, and get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences on, for example, whether my plans actually worked.
Practicing the art of rationality
Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I've had over the past two years, where other rationalists have asked me "so what are your terminal goals/values?" and I've stammered something and then gone to hide in a corner and try to come up with some.
In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent - and not what they were originally, and therefore false. Or they'd be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”
I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!” it may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think it was broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works, that I get things done and people find me useful to have around and this is how I am. For now. I'm not going to rule out future attempts to hack my brain, because Growth Mindset, and maybe some other reasons will convince me that it's important enough, but if I do it, it'll be on my terms. Other people are welcome to have their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.
Why write this post?
It would be an awfully surprising coincidence if mine was the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape, and break their motivational system. Or they might decide that rationality is stupid and walk away.
Being Foreign and Being Sane
I've been reading Less Wrong for a while now, and have recently been casting about for suitable topics to write on. I've decided to break the ice now with an essay on what living and working abroad in Korea has taught me which carries over into studying rationality. While more personal than technical, this inaugural post contains generalizable lessons that I think will be of interest to anyone trying to improve their thinking.
You may be skeptical, so let me briefly make my case that traveling offers something to the aspiring rationalist. Many have written about the benefits of traveling, but for our purposes here is what matters:
Being abroad can make certain important concepts in rationality a part of you in ways studying can't match.
It's easy to read -- and to really believe -- that the map is not the territory, say, without it changing how you actually act. Information often gathers dust on the shelves in your frontal lobe without ever making it into the largely unconscious bits of your brain where so much of your deciding takes place.
With this in mind travel can be seen as part of the class of efforts to learn rationality without directly studying the science, instead doing something like playing Go or poker, for example. I don't know for sure, but such efforts could hold the promise of teaching us to incorporate insights into emotional attachment, statistical probabilities, strategy, maximizing utility, and the like -- things we've known for a long time -- into our instincts, deep down where they can actually change how we behave.
I say all this because what living in a foreign country has given me is not so much a software update which has remade me into a paragon of rationality, but rather a hearty appreciation for certain facts which might make my thought-improvement efforts more fruitful. No doubt many of you have already long-ago internalized all of this, and for you I won't be saying anything very profound.
Nevertheless, here is what I've learned:
1) You are vastly more complicated than you think you are.
The proposal for the Dartmouth conference of 1956, considered by some to be the birth of the field of AI research, had this to say:
An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
Not to deny that considerable progress has been made in the past half century, but I think we can all agree that this thinking was just a tad bit optimistic.
I'm not an expert on AI research history, but it seems reasonable to assume that these proto-AI researchers perhaps didn't appreciate how complex humans are. You look at a triangle and you see a triangle; you reach for a coffee cup and grasp it; you start speaking a sentence and finish it with only the occasional pause. What could be simpler? We all forget our car keys sometimes, and some of us know a little bit about bizarre neurological problems like aphasia, but still. In general we function so well that it never occurs to us that the things we do might actually be difficult to implement.
The problem runs deeper than this, though, because there doesn't seem to be much in the way of techniques for elucidating this complexity from the inside. If there were, neuroscience might've been discovered a millennium ago in East Asia by Buddhist adepts. But instead our efforts at aiming the introspective flashlights on the machinery of our minds are thwarted by their presence totally outside our conscious awareness.
Well, if you ever feel like you're not fully appreciating the intricacies of your wetware, sit in a coffee shop or at a bus stop in a foreign country and eavesdrop on people whose effortless bantering could not be more inscrutable, and you'll have it impressed upon you. Alternatively, try to explain to someone with little-to-no English what a word like "simple" or a phrase like "almost all of" means. Even without a bit of neuroscience training you'll start to get a grasp on the vastness of the gears and levers that make every utterance possible.
For me, at least, this insight seems to creep into the rest of one's thinking life, though in my case it's hard to tell because I've always pondered things like this. It isn't a far leap from here to see the potential value of research into topics like Friendly AI. If human language and vision are complicated, what are the chances that human value systems are simple? If you never managed to notice your retinal blind spot or the mechanisms by which you conjugate verbs in your native tongue, what are the chances that you aren't at least a little mistaken about your true goals and desires and how best to achieve them? Exactly. So maybe it's time to start reading those sequences, eh?
2) Don't be bewitched by words
Obviously if you go to a country where English -- or another language you're already fluent in -- is spoken, this won't apply as much. But my experience has shown me that living in a foreign country and learning its language bestows several valuable insights on those intrepid enough to stick with it. Simply put, a sufficiently reflective and intelligent person could independently figure out about half of the sequence A Human's Guide to Words just by being in a foreign country and thinking hard about the experience.
First you'd have to go through the shocking revelation that so much of what you say is a fairly arbitrary set of language conventions, and then you'd begin to relearn how to communicate. You'd come to realize that words are mental paintbrush handles with which you guide the attention of other humans to certain clusters in thingspace, and that they are often disguised queries with hidden connotations. This will be triply reinforced by the fact that you'd often have to resort to empiricism to get your point across -- accompanying the word 'red' or 'chair' by actually pointing to red things or chairs. If you're spending time with natives the inverse will happen, and they will have to point to the parts of the world that words stand for in order to communicate with you. You'll have a head start in replacing the symbol with the substance because you'll be playing taboo with nearly every word you know. Since you'll be doing this with low-level language, it'll take some elbow grease to port the skill into your native tongue when discussing topics like free will. But if you can avoid slipping into cached thoughts, the training you received as a foreigner will likely prove useful.
Beyond this, however, is the tantalizing possibility that we may be more rational when we think in a foreign language, perhaps because doing so increases reliance on the slow, analytic System 2 at the expense of the rapid-fire, emotional System 1. Psychologists from the University of Chicago tested this idea using English speakers proficient in Japanese, Korean speakers proficient in English, and English speakers proficient in French (Keysar, Hayakawa, & An, 2011). [NOTE: I'm aware this study has been mentioned before on Less Wrong, but I believe this is the first actual discussion of the experiment and its methodology.] In the first few experiments participants were randomly sorted into two groups, one of which took the test in their native language and one of which took it in the foreign language. The tests were designed to elicit a well-known tendency for humans' risk preferences to differ depending on how a situation is framed.
Here's how it works: imagine that you turn on the news today to find out that an exotic new disease is ravaging Asia, with an expected final death toll of 600,000. The governments of the world have decided that the best solution is to design two separate drugs, and then to randomly select one reader of Less Wrong to decide between the two. Your number came up, and now you have a choice to make.
Drug A is guaranteed to save 200,000 people. Drug B has a one-in-three chance of saving everyone and a two-in-three chance of saving no one.
This is called the gain framing, because what's emphasized is how many lives you'll save, or gain. When framed this way, people often prefer to administer Drug A. But studies find that if the same problem is loss-framed -- that is, if with Drug A it is guaranteed that 400,000 people die, while with Drug B there is a one-in-three chance that no one will die and a two-in-three chance that everyone will -- far fewer people prefer Drug A, even though the results of using the drugs are identical.
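To make the equivalence concrete, here's a quick Python sketch -- my own illustration, not anything from the study -- computing the expected number of lives saved under each description:

```python
# Expected lives saved out of 600,000 under both framings of the same choice.
TOTAL = 600_000

# Gain framing: Drug A saves 200,000 for sure;
# Drug B saves everyone with probability 1/3 and no one with probability 2/3.
drug_a_gain = 200_000
drug_b_gain = (1 / 3) * TOTAL + (2 / 3) * 0

# Loss framing: with Drug A, 400,000 die for sure;
# with Drug B, no one dies with probability 1/3 and everyone dies with probability 2/3.
drug_a_loss = TOTAL - 400_000
drug_b_loss = (1 / 3) * (TOTAL - 0) + (2 / 3) * (TOTAL - TOTAL)

print(drug_a_gain, drug_b_gain)  # 200000 200000.0
print(drug_a_loss, drug_b_loss)  # 200000 200000.0
```

Every option saves 200,000 lives in expectation; only the wording changes. That's why a shift in preference between the two versions counts as a framing effect rather than a genuine difference in outcomes.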
Besides being randomly assigned to the native- or foreign-language version of the test, participants were also randomly assigned either the gain framing or the loss framing. Participants tested in their native language showed the predicted bias, but when tested in the foreign language, roughly equal numbers preferred Drug A and Drug B.
An additional study found the same effect of foreign language on reasoning, but using a different bias. People tend to be loss averse, weighing a loss more heavily than a gain of an identical (or even slightly larger) amount. This means that people will often turn down an even bet offering the possibility of gaining $12 against the possibility of losing $10, even though the bet has positive expected value. As with the other studies, Korean speakers proficient in English showed this tendency more often when reasoning in their native language than when reasoning in the foreign one, especially for larger bets.
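The expected-value claim is easy to check (a one-liner of my own, assuming "even bet" means a 50/50 coin flip):

```python
# A 50/50 bet: win $12 on heads, lose $10 on tails.
expected_value = 0.5 * 12 + 0.5 * (-10)
print(expected_value)  # 1.0 -- positive, so over many such bets you come out ahead
```

Turning the bet down anyway is loss aversion at work: the prospective $10 loss looms larger than the $12 gain, even though the bet is favorable on average.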
There are a million reasons to learn a foreign language, but it'd be a very costly way to improve rationality. With that said, for anyone willing to invest the time and effort, better thinking could be the outcome. But even if you don't go to the trouble, simply trying to communicate with people who don't speak the same language as you will teach you a lot about how cognition and communication work.
3) The Zen of the Unfamiliar
Living in another culture can make you aware of so many things that you previously failed to notice at all. I remember not long after I got to Korea, I was in my kitchen and noticed that my sink was different from any of the ones I'd seen back in the States. It was a single open pit sunk into the counter, with a strange spinning mechanism where the drain usually is. After investigating for a while, I realized two things: one, the spinning mechanism was actually a multi-part contraption meant to catch food before it went down the drain (no idea why it could spin), and two, I'd just spent 100 times longer thinking about sinks than I had in the rest of my life combined.
To successfully live in a foreign country you'll have to master the art of noticing things fairly quickly. You'll start to watch how people dress, how they talk, how close they stand to each other, how often they make eye contact, how they chew their food, and the order in which people get served drinks. You'll learn to read the environment to figure out where to stand in line, where to catch the bus, where and how to buy things, which door is the exit and which the entrance, whether or not certain places are likely to be safe, and so on.
You'll accomplish most of this by gathering evidence, forming hypotheses, using induction and deduction, and updating on new evidence. The things you've been reading about on Less Wrong will be put to use in finding food and shelter; the tools of rationality will be your compass in a world where you can't read what's written on signs or buildings and most people can't understand your questions. So there's a box on your wall with three buttons, two dials, and a bunch of lights, and you're pretty sure it can make hot water come out of the shower? Not a word of English anywhere on it, you say? Well then you'll have to change one variable at a time and take note of the results, like any good scientist would.
Being immersed in a set of shared cultural and linguistic norms that you don't understand makes almost every aspect of your life an experiment. It's exhausting, and one of the most informative experiences I've ever had. On an emotional level, it will teach you to be more at ease with partial understanding, frustration, and confusion. With your comfort zone an ocean away, you'll either persevere and think on your feet, or you'll end up sleeping in the rain.
__
As with learning a foreign language, there are many reasons to travel abroad and experience another culture. And of course, a plane ticket alone is not enough to make you a better thinker. But if you know what to look for and are actively seeking to grow from the experience, I can attest that being foreign for a little while is one way to become a bit more sane.