Epistemic Status: Long piece written because the speed premium was too high to write a short one. Figured I should get my take on this out quickly.

(Note: This has been edited to reflect that I originally mixed up UpToDate with WebMD (since it’s been years since I’ve used either and was writing quickly) and gave the impression that WebMD was a useful product and that my wife liked it or used it. I apologize for the mix-up, and affirm that WebMD is mostly useless, but UpToDate is awesome.)

Scott Alexander has a very high opinion of my ability to figure things out. It’s quite the endorsement to be called the person that first comes to mind as likely to get things right. Scott even thinks it would hypothetically be great if I were the benevolent dictator or secretary of health, as my decisions would do a lot of good.

In turn, I have a very high opinion of Scott Alexander. If you get to read one person on the internet going either forwards or backwards, I’d go with Scott. Even as others are often in awe of my level (and sometimes quality) of output, I have always been in awe of his level and quality of output.

His core explanation in one paragraph: I have a much easier task than those in charge. All I have to do is get the right answer, without worrying (anything like as much) about liability or politics or leadership, or being legible, or any number of other things those with power and responsibility have to worry about, lest they lose that power and responsibility and/or get sued into oblivion. Those with power have to optimize for seeking power and play the game of Moloch. We need to pick a selection process that makes this the least destructive we can, and thus can rely only on legible expertise – and we actually do a kind of decent job of it.

In that spirit, I’d like to welcome everyone coming here from Astral Codex Ten, flesh out and make more explicit my model of the dynamics involved, and point out some of the ways in which I think Scott’s model of the situation is wrong or incomplete. Which in turn will be partly wrong, and only be some of the ways in which it is wrong or incomplete, as my model is also doubtless wrong and incomplete. 

The core disagreement between my model and Scott’s model is that Scott’s model implicitly assumes people with power have goals and are maximizing for success given those goals. I do not believe this.

More broadly, Scott assumes a generalized version of the efficient market hypothesis, or that people are efficiently producing a bundle of goods at the production possibilities frontier, because to do better in some way they’d have to sacrifice in another – if you want better policies you’ll have to pay political capital for it, and you only have so much political capital, whereas there aren’t free wins lying around or someone would have taken them. 

Again, I think this is incorrect. There’s treasure everywhere. 

The other core disagreement is revealed most explicitly in the comments, when Scott is asked what the mysterious ‘force of power’ is that would work to take me down if Biden decided to put me in charge. Scott’s answer is that ‘someone like but not as high quality as Dr. Fauci’ would take me out, and such a person is a plausible figurehead of such an effort, but I think the real answer is most everyone with power, implicitly acting together.

I’ve divided this post into sections that correspond to Scott’s sections, so my Section I comments on Scott’s Section I.

I

I think Scott’s diagnosis of WebMD is mostly spot on.

I know this because my wife is a psychiatrist and when I wrote the original version of this I remembered ‘yeah, the online website she uses is great’ and somehow thought it was WebMD. Which it wasn’t. UpToDate is the useful website that actually works and manages to provide useful information. WebMD is useless and terrible, and she was quite cross when she realized I’d messed this up and was singing its praises. When I was running MetaMed we found UpToDate fantastically useful all the time. It’s a wealth of great information.  Whereas WebMD… isn’t.

WebMD is instead fantastically unhelpful and useless, especially things like side effect warnings, but the rest of it as well. Given who it is aimed at, that’s actually not so surprising. Even on UpToDate, you’d get great info in the earlier sections, then a mass of side effect warnings that looked like every other mass of side effect warnings, and if you wanted to know what the real side effect risks and risk levels were, you had to ask someone who would know, so they could tell you.

If you wanted to know which of various drugs was better, or the right way to use a particular drug, UpToDate will give you good information to help you figure that out, and even tell you the standard of care, but actually figuring out the real correct answer is still your problem. If you look at WebMD, well, good luck with that.

That all lines up with how Scott’s model works in later sections.

Compared to a version of WebMD that had all the info but also levels with you the way a knowledgeable doctor would in private, WebMD is useless. Scott can (at least for now) produce a subset of WebMD that levels with you in that way, and it’s wonderful, but it can’t replace WebMD. 

Scott points directly at the core issue here, that such authorities are held liable for certain consequences and not for others. Which is exactly Asymmetric Justice, used as a weapon against any and all meaningful action. Whatever is not, in context, considered an action becomes the default action, and the default action mostly becomes what happens.

The FDA faces the most extreme form of Asymmetric Justice, where even potentially looking like it’s not putting in enough good and proper delays gets people to trot out reminders of the ‘children of thalidomide.’ It’s not a great situation from which to do a proper cost/benefit analysis or act in accordance with a large speed premium.

That’s nothing like the complete story, but it’s a central piece of the story.

II-a

Once again, Scott has a very high opinion of me. In one way, too high. In particular, he says: “When Zvi asserts an opinion, he has only one thing he’s optimizing for – being right – and he does it well.”

In terms of simulacra levels, this is a claim that I’m acting purely on level 1, ignoring higher-level considerations. 

I do my best to keep that as close to true as often as possible, but it’s definitely not entirely true.

I absolutely think about the higher-level consequences of what I’m writing. If someone believed what I said, as they would interpret it, what would they likely do? What does this statement signal about group affiliation, and who does it give attention to, or lower or raise in status? How does this strategically impact the overall playing field? How does this interplay with being good writing or being useful or with the norms I’ve created for myself, especially around politics? Does this make me seem too angry or crazy, or just make no sense or seem unjustified until I figure out how to communicate things I don’t yet know how to communicate? Would this create cancellation risk, or risk of blameworthiness in other ways, for myself or others? 

I do my best to let the what-is-true/right considerations win out as often as possible. When other considerations do come into play, I do my best to let that only do things like softening statements, adding qualifications or at worst not mentioning something at all, with a hard rule against lying or misrepresentation. And if something is important I do my best to say it anyway. But there are plenty of things I don’t say because of other considerations. 

That doesn’t mean I have a problem of similar difficulty to that of Dr. Fauci or the Director of the CDC, or the problem I would have if I became Director of the CDC or were given the role of Dr. Fauci. Different orders of magnitude are dramatically different.

Yet they are not so different. Even most people in positions like mine make deep compromises to the wishes of power, and mostly care about different things than I care about. A lot of what makes these decisions so hard is that this isn’t a game with known rules and consequences; rather, there’s always the temptation to do what your power-seeking and status-seeking instincts want to do, no matter how small the relevant stakes. This mostly isn’t a rational cost-benefit analysis on anyone’s part; rather, it’s people executing the systems that got them to where they are.

As I’ve said before, I believe Dr. Fauci is special because he is a Level 2 agent who cares about making other people believe things, rather than someone who primarily plays the coalition game at Level 3. He does what he needs to do to placate power, but that’s not everything he does all the time. His goal is to get people to believe what he’s saying and modify their world models in ways that improve outcomes. He doesn’t think being right or honest much matters, except perhaps insofar as one must maintain credibility. Thus, the lies about masks. 

II-b

Before I go deeper, I’d like to address a question Scott raises: are the experts worse in 2021 than they were in 2015 or 2005?

I think the answer is yes. Part of that, in my model, is the continued growth of maze levels, part of that is a systematic war on experts and a systematic war on clarity and usefulness, and likely also partly deliberate sabotage of the relevant government agencies. And part of that is that expert incentives have become more perverse, as they bow more and more to considerations that are not what we’d like them to be, for reasons I will choose not to explore further here.

As the experts get worse, do some of us get better, or did we mostly get more confident slash notice that the threshold for usefulness went down? My guess is this is mostly the latter, but that there’s been some important learning and experience by those who are interested, about how to use social media and blogs to actually figure things out. 

Assume for the moment I’m as good at producing right answers as Scott thinks I am. 

Could the people in charge do better than me at identifying those right answers, if they wanted to?

I have a few complementary answers to that.

The first answer is that I have the unfair advantage of Zeroing Out, and the unfair advantage of playing offense while those in charge have to play defense.

The idea of Zeroing Out is that you can do better by taking a market price or model output as a baseline, then taking into account a bias or missing consideration. Thus, you can be much worse at creating the right answer from scratch, and yet do better than the consensus answer.

Think of this as trying someone’s fifteen-step chocolate chip cookie recipe, then noticing that it would be better with more chocolate chips. You might have better cookies, but you should not then claim that you are the superior baker.
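To make Zeroing Out concrete, here is a toy numeric sketch. All the numbers, and the mean_abs_error helper, are invented for illustration; nothing below comes from the post:

```python
# Toy illustration of Zeroing Out (all numbers invented).
# You don't rebuild the consensus model; you take its output as a baseline
# and apply one correction for a bias you believe you have spotted.

truth = [0.35, 0.55, 0.15]      # what actually happens (hypothetical)
consensus = [0.30, 0.50, 0.10]  # the authority's baseline estimates
known_bias = -0.05              # belief: the consensus runs ~5 points low

zeroed_out = [c - known_bias for c in consensus]

def mean_abs_error(preds, actual):
    """Average absolute gap between predictions and outcomes."""
    return sum(abs(p - a) for p, a in zip(preds, actual)) / len(actual)

print(mean_abs_error(consensus, truth))   # ~0.05
print(mean_abs_error(zeroed_out, truth))  # ~0.00 -- beats the consensus
                                          # without rebuilding it, if and
                                          # only if the bias is real
```

The correction only wins because the baseline already did the hard work, which is the cookie analogy’s point.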

If you are the Director of the CDC, in some important sense you have to model everything that is happening. Everyone else is looking to you to see everything that is happening, as the Responsible Authority Figure. 

Then, you post 99 true statements and 1 false statement, because getting everything right is hard and occasionally everyone messes up, and everyone sees the one false statement and calls you an idiot. 

Or, I see your 99 true statements, I identify one more true statement you hadn’t noticed yet, add it to your 99 statements, and now I have a better model than you do. And there will always be something missing, because the world is super complicated, and also changing, and also you can’t say everything you know all the time, and so forth.

The true beauty of Zeroing Out is that I can take all the analysis of the CDC as given, find one place where they are biased or making a mistake or missing a piece of information, and then reliably do better than the CDC by otherwise taking the CDC’s answers.

None of that makes me better than the CDC. But it could make it wise to look to me for answers, if I am incorporating all the CDC’s information properly, but the CDC isn’t incorporating mine.

Consider Advanced Chess. A human and a computer decide together what move to play. The computer on its own makes much better moves than the human on their own, but there was a period where the human could still improve the computer’s play, although I believe that is no longer the case. 

Along similar lines, consider an AI diagnostic tool that analyzes X-rays, versus a human who does the same. There’s a famous study where the AI not only beats the humans, but beats the humans when the humans know what the AI thinks because the humans think the AI is dumb and overrule it. If you showed the humans that study and then re-ran the study, the humans would likely do better because they’d trust the AI more. They’d presumably not fully believe it and still do worse, but one can easily imagine a human who trusts the AI 99.9% of the time, but every now and then something really weird pops out, and the human usefully intervenes. 

One can also imagine the reverse. The human overruling the AI is net harmful, but what would happen if the human diagnosis was one input in the AI’s training data, and we told the AI what the humans thought? I’ve never seen this tried, but I’d be very interested to see what happens. My assumption is that this would backfire if the training was done badly (the AI would overfit to human opinion) but if done right it would usefully improve the AI, because the AI would care about the human decision when the AI was uncertain slash in the types of places the humans were useful, and ignore the humans when the humans were clearly being dumb.

(If you wanted, you could presumably also introduce a penalty function for disagreeing with the human, so the diagnosis was less accurate but also less upsetting to the humans, and thus more likely to be accepted. Then hopefully you could slowly take that penalty away.)
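To make that parenthetical concrete, here is a minimal sketch of such a penalty term. This is my own toy construction, not any real diagnostic system; the function name, the weighting scheme and every number are assumptions:

```python
import numpy as np

# Toy sketch: score a diagnostic model on accuracy, plus a penalty for
# disagreeing with the human reader (my construction; numbers invented).

def penalized_loss(model_probs, labels, human_probs, penalty_weight):
    """Cross-entropy against ground truth, plus a term punishing
    divergence from the human's diagnosis. All arrays have shape (n,)."""
    eps = 1e-9
    accuracy_loss = -np.mean(
        labels * np.log(model_probs + eps)
        + (1 - labels) * np.log(1 - model_probs + eps)
    )
    disagreement = np.mean((model_probs - human_probs) ** 2)
    return accuracy_loss + penalty_weight * disagreement

labels      = np.array([1.0, 0.0, 1.0])  # ground-truth diagnoses
model_probs = np.array([0.9, 0.2, 0.8])  # the AI is right on case 3
human_probs = np.array([0.8, 0.1, 0.3])  # the human is not

print(penalized_loss(model_probs, labels, human_probs, penalty_weight=1.0))
print(penalized_loss(model_probs, labels, human_probs, penalty_weight=0.0))
# 'Slowly taking the penalty away' is just annealing penalty_weight toward
# zero once the humans have learned to trust the outputs.
```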
 

Combine that with my ability to play offense while others play defense. This is about more than merely being legible. I get to attack every mistake the CDC makes, and the CDC doesn’t get to look for my mistakes in return. 

Thus, when I get to see the answers others are giving, and also have an idea what their biases and outside motivations might be, and also only have to point out the places we disagree,  I have a huge unfair advantage. 

II-b-2

The second answer is Leaders of Men.

Leaders of Men points out that there are many things that make one a good baseball manager, or in this case a good Director of the CDC. 

Some of those things are very important to your success. 

In particular, you need to be a good manager, to be a leader of those around you, and know how to hire, fire, promote, delegate and motivate. You also need to understand the most important things that determine everyone’s success, and how to help people do their best on those fronts. 

These skills are super valuable, and the strongest forms of them are rare. Even between two people with very good such skills, the one with better skills will usually do better.

On top of that, there are a host of other considerations before one can be a candidate for such a job. You need the right pedigree and credentials, and to be at the right place at the right time, and to want the job. 

Thus, you only have so many choices, and the bigger considerations like leadership mostly have to trump smaller considerations like knowing that bunting is terrible. Yes, bunting is terrible, but it might cost you one game a season, whereas if your players hate you and stop caring that will cost you five or ten. Yes, being better at evaluating science will help you get the right answers faster, but hiring well for your deputies matters much more.

Even if there’s something smaller that is purely helpful in doing one’s job, say knowing to almost never bunt in baseball or being a better evaluator of scientific claims when you read scientific papers as CDC director, that advantage only goes so far. 

In this part of the model, it’s not that the CDC director wants to form or assert a true opinion but there are things pushing against it. It’s that it’s better to pick a CDC director who can lead and manage (and ‘play politics’ in some senses) despite not being so good at what I do, whereas what I do is my primary consideration, so it shouldn’t be that surprising that there exist people in my position who are better at being-right than the CDC director. 

You can also look at this as a misunderstanding of what it means to be an “expert.”

Usually people say ‘expert’ when they mean ‘person with appropriate credential’ or ‘person who has experience in this field.’ 

Matt Yglesias, in an interesting recent podcast with Julia Galef, breaks this out as: yes, experts are still experts, but he’s learning to be much more careful about what exactly your expertise is in. Being an expert in biology is one thing. Being an expert in leadership or politics, or the politics of biologists, is another. Being an expert in Bayesian reasoning or statistics, or in making good decisions under uncertainty, is an entirely different thing. Being an expert in ‘bioethics’ arguably doesn’t even parse as actual expertise – what would that even mean?

II-b-3

The third answer is to extend Leaders of Men more explicitly to selection effects.

It’s true that if you want an effective leader who will do the best job, you might have to compromise on a lot of less important considerations, and let those considerations suffer.

It’s also true that the process for appointing leaders, at all levels, is not optimizing primarily for the effective leader who will do the best job. The primary consideration is power.

The government is like any other moral maze. If you want to succeed, you modify yourself to be someone who instinctively plays the political game of success, seeks power and forms an implicit coalition with others who seek power. You implicitly reward power seekers and those with power, and punish those without power and who do not seek power, without thinking about it. If you didn’t, the others in the game would notice you thinking about it, or worse notice you failing to act on it, and punish you accordingly.

You instinctively know that you must continuously demonstrate your commitment to power seeking, and to rewarding your allies and being with the program, or else you won’t be a reliable person who can be trusted to do what is required. You must avoid motive ambiguity, and make it clear that you are not going to sacrifice considerations of power to improve physical world outcomes or otherwise do the ‘right thing,’ or to assert the true answer to a question simply because it is true.

That doesn’t mean you have to always do the wrong thing. Success and failure do matter, and you probably still have a preference for better outcomes over worse ones all things being equal. But to act outside the usual course of events in order to do the right thing, you’ll need a good excuse, so you can claim you’re doing it for other reasons. “My boss ordered me to do that” is the gold standard, as is “the people demand it.” Doing it because of a (de facto) bribe from special interests isn’t the best public look, but suffices to satisfy your fellow bureaucrats and power seekers. Importantly, if you would be blameworthy for failure quickly enough, typically under a time horizon of about two weeks, then you can use physical considerations as an excuse. 

Thus, by the time anyone gets to the level of Director of the CDC via normal channels (e.g. anyone Biden would ever consider appointing) that person will be someone who has leadership and management skills, and who also instinctively plays the games of politics and power. They will be part of the implicit coalition. 

This is where my model and Scott’s diverge. The earlier sections I imagine Scott mostly agreeing with, and they don’t contradict anything Scott is saying. Here, we have a disagreement. Consider Scott’s model of the CDC Director.

Scott’s running assumption is that everyone involved should be presumed to be handling their problems the best they can under the circumstances, and is likely optimizing for good outcomes given the restrictions around them. I don’t think that’s a good or useful assumption. Nor do I think it even makes sense to meaningfully defend or excuse the CDC here at all, any more than it would be to attack or blame them. This is what you get when you put the CDC and its people into this situation in this civilization under these selective pressures. Our goal is to create a model that correctly understands what the CDC will do with a given set of inputs, to help us better find out what is happening and is likely to happen, and to know how we might change inputs to get different results.

When I use blame language, it’s partly because it’s easy to slip into, partly because it’s hard to avoid when describing what is physically happening because human brains automatically link these concepts together, and partly because the concept of blame is useful. It’s useful to make it clear that changing the things that happen at the CDC/FDA could dramatically improve physical results, and it’s important because imminent blameworthiness for failure to act can, under this model, cause action. That’s exactly where blame becomes useful, and where making excuses gets in the way. 

I also don’t think ‘corruption’ is the right word here, as its connotations are more misleading than helpful. It wouldn’t shock me if those involved were corrupt and taking bribes or helping friends far more centrally than I’d expect, but I don’t think that’s the central thing at all. Corruption is usually net harmful, but it’s clear (for example) that there is limited ability at best to ‘grease the wheels’ of the system to get faster service via paying money, or various people and especially corporations would have done it. At most, corruption is only being used to stop action.

In Scott’s model, Rochelle Walensky (the Director of the CDC) is a utility maximizer with a utility function F(p, r), where p = power and r = being right, and chooses to produce along the production possibilities frontier, making tradeoffs where she can be less right to gain power, so she can in other places sacrifice some power to say more things that are right.
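For concreteness, here is one way to render that model in symbols. The notation (the frontier constraint g and budget C) is my own sketch of Scott’s framing, not anything Scott writes:

```latex
% My rendering of Scott's framing, not his notation: the Director chooses
% power p and rightness r to maximize utility, subject to a production
% possibilities frontier that forces a tradeoff between the two.
\[
  \max_{p,\,r} \; F(p, r)
  \quad \text{subject to} \quad g(p, r) \le C .
\]
% With F increasing in both arguments the frontier binds at the optimum,
% so giving up a little rightness buys a little power, and vice versa.
```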

Standard disclaimer: All I know about Rochelle Walensky is that she’s the new head of the CDC. I know nothing about her personally or about her history.

In my model, that’s not how someone in her position thinks at all. She has no coherent utility function. She doesn’t have one because, to the extent she ever did have one, it was trained out of her long ago, by people who were rewarding lack of utility functions and punishing those who had coherent utility functions with terms for useful things. The systems and people around her kept rewarding instinctive actions and systems, and punishing intentional actions and goals. 

Thus, she does what seems like the thing to do at the time rather than thinking about things in terms of trade-offs. Sometimes that does a reasonable job mimicking trade-offs and making reasonable decisions, sometimes it doesn’t. Often it seems to mean ‘implement whatever Biden’s latest executive order says and call it CDC guidance.’  

Certainly the idea of pressure from others with power is a real and important part of this story. But to the extent that the true answer to ‘is there airborne transmission?’ impacts her answer to that question, it is because of pressures being brought upon her, or that she anticipates will be brought in the future, as a result of that truth, rather than because of any particular desire to be accurate.

When the CDC issues its guidelines it seems very clear that those guidelines are designed to let those involved tell stories about being safe and following proper procedure, and they have at best vague bearing on whether things are actually safe. They’re certainly not even attempts to create a correct physical model of anything. They’re giving people a chance to avoid blame and to show adherence to power, while themselves avoiding blame and showing their adherence to power. They are also chosen so that there’s a fig leaf that these guidelines make sense, so they have some bearing on reality via that vector.

I do not think there is much scientific-paper-reading that is informing Walensky’s decisions, nor does it make that much sense even in an ideal world for her to be reading papers. She’s super busy. Ideally, she should have someone she trusts read the papers, and give her the key information. In practice, it’s more like having someone who keeps an eye out for what people will probably claim in the near future to be the scientific consensus, so she can avoid blame for clashing with that.

I think Scott’s description of the dynamic is exactly what is happening, except it’s more like ‘the director was selected in part for her ability to,’ and for ‘mental health’ we should substitute ‘power.’

II-c

So what would happen if Biden put me in charge of the CDC?

It is important that Biden won’t do this, and won’t consider doing this. 

The system that picks Rochelle Walensky to be CDC Director also picked Joe Biden to be president. Part of the reason it picked Biden to be president is knowing he’s not the type of person who might throw a wrench in things by picking me for director of the CDC (or FDA, which is the post I’d want a lot more). 

That’s the first and only truly necessary line of defense against someone like me. 

But what would happen if, somehow, Biden decided to do it anyway? 

The first problem is that I don’t know the players inside the CDC, and I also lack the management expertise, political skills and a lot of necessary background knowledge. Just because I can do my job better than Rochelle Walensky could do it – and part of that involves sometimes saying that the CDC made a dumb decision and needs to change that decision – doesn’t mean that I can do her job.

So the first thing I’d have to do is find someone who did have those skills, who could work closely with me to make sure things didn’t blow up in my face right away. This isn’t a story (at least not yet) about ‘Zvi told bold truths and people fought back’; it’s a story of ‘Zvi really doesn’t have the skill set to be CDC director, and if he’s not very careful and quick about it, people are going to notice quickly.’

But let’s assume for the moment that I manage to overcome that with good advisers, and can do the other parts of the job well enough to keep the trains running on time – effectively, I convince someone like Walensky to be Shadow Director and keep things running in the background, but take orders from me when needed. What happens next?

Next is that the bureaucrats refuse to cooperate. They’re told to modify their guidelines and procedures, and to approve things that need to be approved and do it yesterday, but their incentives are all backwards and their training and selection effects are for those who prevent action and avoid blame, and do so by avoiding any visibility whatsoever. So in general I’d expect them to push back, and push back hard. The CIA sabotage manual would most definitely be followed. 

This would at least force me to prioritize which things to force through, as I’d only have so much political capital and so much ability to lean on people, and yes this is a sense in which I’d be ‘pissing people off.’ 

Various special interests and lobbyists would care about decisions one way or another, and various other bureaucrats would get mad about having to adapt to policies that are different from old policies and not designed to make their lives easier. So there’d be a more general pushback and revolt by such folks. Deep state. There’d be general grumbling and pressure, and again, battles would have to be picked.

The next stage would be the media getting involved, and those with power in general. They’d notice that someone was running around trying to be helpful despite considerations of power pushing against it, and doing so in ways that would contrast with what others were doing in ways that might make others look bad. They’d look to ‘get’ me, to point out how awful and harmful and unethical everything was, to dig into my past for ways to attack me. They’d point out my lack of credentials, all my points of view or isolated sentences they didn’t like from anywhere.  There would be blood in the water. Thus, the pressure on Biden to fire me slash on me to resign.

If this went on long enough it would be increasingly distracting from Biden’s agenda, and eventually Biden would have had enough. In theory perhaps they might go after Biden outright for it, but not until long after he’d have given up many times over.

II-d

Or at least, that’s The Fear.

The Fear does a lot of the work. 

It’s not that such things would actually definitely happen. It’s that there’s some chance they might happen, and thus no one dares find out. 

Thus, there’s an additional level to Asymmetric Justice. It’s not only that you’re responsible for what goes wrong while not getting credit for what goes right. It’s that you’re responsible and blameworthy for the fact that something might go wrong, and that you might then be held blameworthy for it – even if it would come together with lots of bigger good things, and even if it never actually happens. You’re blameworthy just for the fact that it could have happened!

Every parent knows about the whole ‘he could have been killed!’ problem – nothing went wrong, but it looked from outside like something bad might have happened, so despite doing a good thing, now you’re in deep trouble. And you’d better not trust anyone who is foolish enough to get into such a position, or you’ll be in such a position yourself. The hole goes deep.

Meanwhile, when you’re considering changing from policy X to policy Y, you’re being judged against the standard of policy X, so any change is blameworthy. How irresponsible to propose First Doses First.

But…

A good test is to ask, when right things are done on the margin, what happens? When we move in the direction of good policies or correct statements, how does the media react? How does the public react?

This does eventually happen on almost every issue.

The answer is almost universally that the change is accepted. The few who track such things praise it, and everyone else decides to memory hole that we ever claimed or advocated anything different. We were always at war with Eastasia. Whatever the official policy is becomes the null action, so it becomes the Very Serious Person line, which also means that anyone challenging that is blameworthy for any losses and doesn’t get credit for any improvements.

This is a pure ‘they’ll like us when we win.’ Everyone’s defending the current actions of the powerful in deference to power. Change what power is doing, and they’ll change what they defend. We see it time and again. Social distancing. Xenophobia. Shutdowns. Masks. Better masks. Airborne transmission. Tests. Vaccines. Various prioritization schemes. First Doses First. Schools shutting down. Schools staying open. New strains. The list goes on. 

There are two strategies. We can do what we’re doing now, and change elite or popular opinion to change policy. Or we can change policy, and in so doing change opinion leaders and opinion. 

So actually what I think would happen, provided I could navigate the technical job requirements with sufficient help, is… it would work.

The same thing would happen with less risk if you made me an advisor, slash went ahead and implemented the policies I’ve been advocating.

People would love to see it. The decisions would be popular. Once it was clear the decisions were popular, and thus good for the decision maker’s power, that would provide retroactive justification for those decisions, as they could be seen as motivated by their future popularity, thus erasing resistance. Political capital would increase, not decrease. Then the virtuous cycle could continue.

III

I agree with the thought here. 

If you’re going to put me in charge of the coronavirus response, the best defense is to not tell anyone that I’m in charge of the response. All you have to do is call me, ask me what to do in as much or as little detail as you’d like, and then tell me not to tell anyone. That way, I wouldn’t have to face lobbyists, or activists or politicians; no one would try to cancel me or threaten me or bribe me. I could be The Ruler of the Universe, only more self-aware. Which would suit me better anyway; the trappings of power are nothing but trouble.

Democratic process is great at one very important thing, which is making sure that what’s being done isn’t something too transparently horrible. If you don’t use democracy, suddenly even more horrible things than usual end up satisfying the governing coalition. These days, they’ve got a workaround by forcing us to choose between two different packages, each of which paints the other as increasingly horrible, thus allowing them to propose something less horrible as the alternative. It’s working so well that the majority of you think that description is unfair because it is symmetrical – which doesn’t matter, because the mechanism works whether or not it’s symmetrical or fair.

On smaller things, the process does similar things. If you’re worried about making a truly horrid decision, you need a democratic check on things, but exposing decisions like pandemic response to general public feedback is not going to help matters, because now everything becomes level-3 status games and decisions are made on the basis of slogans. Better to keep the slogans at a higher level. The system of ‘every few years, throw the bums out if things suck’ does a reasonable job of getting those in charge to have at least some interest in things not sucking, with the unfortunate downside that those not in power have increasingly figured out they have at least some interest in things not not sucking, which is causing issues.

IV

This gets us into the legibility issue. It’s an interesting word to use. 

In some sense, it’s exactly right. I have zero ‘legible’ credentials, and the only way to know if I can figure things out is to work through hundreds of thousands of words of me figuring things out and decide if you want to engage with that process. Whereas being an MD/PhD with a fancy title and tenure or a directorship is highly legible.

In another sense, it’s exactly wrong. WebMD and Dr. Fauci are legibly legible, sure. We can agree they’re the Relevant Authority Figures, but that only makes them legibly okay (let alone legibly good) because we’re trusting that giant neon arrow that says “TRUST THIS GUY” despite it being kind of suspicious any time there’s a giant neon arrow suggesting anything at all. When we say these people are legible it’s because we’ve agreed that they are, but none of this is much evidence that they’re right. It could even be strong evidence that they’re not right in important senses, because they are now known to have bad incentives and have been subject to all the bad selection effects, whereas someone without such powers wouldn’t have those problems.

The problem of choosing the ‘right Zvi’ to put on the throne versus the wrong ones is a real issue. So is the fact that no one in a position to put me on a throne would know I was the one to put on it, or would want to put me on it even if they knew (and Scott’s view of me was correct). Then there’s the issue of the power to stay on that throne, when all powerful people would instinctively want to take down anyone in power that wasn’t optimizing for power. 

I do still think it would be worth trying this. It could be tried top down, where someone in power wakes up and decides to shake off their programming and try to generate a positive feedback loop. To some extent, ‘do some of the obviously correct things not being done that have no real downside’ is the Biden game plan and to the extent it was tried it seems popular so far. Who knows how far that could go? 

It could also be tried bottom up, in either of two ways. 

In theory, one could actually run a campaign on the basis of doing Level-1-good things and hope people voted for you. I know some people who are trying to do this in New York City, with City Council seats and even some higher offices, thanks to a set of campaign financing rules that makes running remarkably practical, and a lack of other candidates for office. If you’ve been a New York City resident for five years, are an adult and that appeals to you, I’d be happy to connect you. And every now and then some celebrity comes in and tries to do it as a Governor somewhere, with mixed results.

Alternatively, one can do this without having power as such, as a kind of influencer. There aren’t that many people engaging in original thought, and largely game recognizes game. The world is rather small. It’s also getting more confusing, reducing the number of people who can make direct sense of the world. 

V

Whenever someone proposes you as the source of right answers and as a good choice for a benevolent dictator, it’s hard for that not to become one’s focus. But that’s not actually the important question here. As fun (and scary) as it is to contemplate, that’s not one of our realistic options.

The important question is how to understand and improve the decision making process we have. What marginal improvements can we hope to make? What large changes would be good?

As one commenter points out, I have a concept called the Delenda Est Club. I have frequently used the terms ‘FDA Delenda Est’ and then, after things went too far, also ‘CDC Delenda Est.’  

When an organization is sufficiently a moral maze and bureaucratic mess, I do not think ‘reform’ is a good solution. Such cultures tend to ratchet up their maze levels and how much they fight against useful action, and bitterly resist attempts to ratchet such levels downwards. The people in place will reinforce each other, and hire more people to reinforce each other, and as public servants will be impossible to dislodge. 

Thus, you essentially have to burn the whole thing down and start over with new organizations. The typical failure mode is that the organization changes names, but keeps doing what it was doing before. In private enterprise, this happens because a new upstart beats you if you’re sufficiently dysfunctional for too long. In the public sphere, that’s not available, so often you have to wait for a revolution. That’s a problem, because revolutions have a lot of bad side effects, so I’d strongly prefer to find an alternative. Then again, if we don’t change the incentive problems, then any sufficiently large organization will quickly get back to the current state once again, no matter how fresh the start.

For now, I think our best chance is to create an informal information cascade system designed to put proper pressure quickly onto elite institutions. 

We’re already doing that, but we can be more intentional about it, and make it more effective.

One can also think of this as a way to translate what Scott is calling illegible greatness into legible goodness. 

This serves as my best answer to hnau’s question number two on what my proposals are meant to do. In general, I’m not trying to say what I’d do as a sufficiently powerful benevolent dictator, but rather what I think could actually be done on the margin – that if we put pressure and assigned blame based on failure to adopt policy X, then movement towards X would be good, and fully doing X would be, if not the first best solution, at least a large improvement.

(And of course, a lot of the advice I post is directed at individuals for their own lives. But there I have to do more of the esoteric between-the-lines thing, because of the same reasons WebMD says everything causes cancer, but I hope that anyone who reads through my posts can figure out their implications on that level.)

Or as another comment suggested, if we go 2mph instead of 1mph, I try to both praise going 2 instead of 1, and then call for going 10 instead of 2, but only to the extent that I think 10 is achievable. 

Anna Salamon suggested a model of a rising sanity water line, but in the sense that the rising water makes it harder to stay above it and thus remain directly sane. There’s a small and decreasing number of people who are still capable of synthesizing information and creating new hypotheses and interpretations.

Then there are those who are mostly no longer capable of doing that – things got too complicated and weird, and they can’t keep up – but they can read the first group and meaningfully distinguish between people claiming to be in it, and between their individual claims, ask questions and help provide feedback. To them, the first group is legible. This forms a larger second group that can synthesize the points from the first group, and turn it into something that can be read as an emerging new consensus, which in turn can be legible to a third much larger group.

This third group can then be legible to the general public slash general elites, who learn that this is where good new ideas come from. Then the Responsible Authority Figures can feel under public pressure, or see what the emerging new ideas are, and run with the ball from there, and the loop continues.

The filtering process also acts as a selection for feasibility, as the second layer picks up things from the first that it thinks it can present legibly to the third, and so on.

The formal division here is not necessary, especially as none of this is formal. Being formal would not help matters. Anyone can ‘step up’ and provide new ideas any time, and anyone can help synthesize material from elsewhere. 

This is an attempt to sidestep having to make those with power care about ‘being right’ since they don’t much care about being right. Instead, systematically align their incentives, and let nature take its course. And an attempt to sidestep appointing people who do care about being right to positions of power, since those in charge of positions of power won’t do that.

Of course, if anyone with the power to improve matters did want my advice on how to do that, I’d be honored to offer it, and tell no one that you asked. You know how to find me.

VI

As a bonus, Scott also has a short related post about Journalism and Legible Expertise. It seems journalists want to write worthy Covid-19 articles, but can’t, because no legible experts will talk to journalists and say anything useful, because journalists keep taking everything anyone says to them out of context and making their subjects look however the journalists feel like making them look that day. At a minimum, the nuance of your position is going to get stripped away.

In my experience, you have a basic choice. You can either trust that the piece is going to be net good for you and let it happen because you need/want the publicity or to get the word out or what not, or you can do the normally sensible thing and mostly not talk to reporters. 

The reporter could of course change that! In at least this case, they could commit to giving the ‘expert’ in question a chance to review the quotes to be used, and if not satisfied with the context let them ask to remove them, or something like that. Unless trust is so botched that even that offer wouldn’t be credible. 

So there are two problems. The first is that you need a Resident Official Legible Expert, and John Hodgman isn’t always available. The second is that journalists so systematically act against the interests of the people around them that no one trusts them. You have to solve either one or the other, I suppose.

In the meantime, I’ll keep writing weekly Covid-19 posts, and occasionally other stuff, as my time permits.

Comments

Speaking of public pressure to adopt better policies, let's form a twitter campaign to #unclogthefda. We're campaigning to decrease FDA red tape and accelerate vaccination approvals using tried-and-tested healthcare reform organizing techniques! You can read and comment on the plan here https://www.lesswrong.com/posts/QYkMWMZqQg49SrTdf/unclogthefda-a-twitter-storm-to-approve-vaccines

Zvi:

Attempted to edit manually, but it won't let you switch to the doc editor, which makes it a mess to do this, and reimporting might make sense. Changes are near the top; I'd confused WebMD with UpToDate, and made it sound like I thought WebMD was useful or (even worse) that my wife found it useful, which is completely false. WebMD is useless, whereas UpToDate is great (but still has basically the same problems Scott describes with WebMD).

It’s quite the endorsement to be called the person most likely to get things right.

I couldn't find such an endorsement in Scott Alexander's linked post. The closest thing I could find was:

I can't tell you how many times over the past year all the experts, the CDC, the WHO, the New York Times, et cetera, have said something (or been silent about something in a suggestive way), and then some blogger I trusted said the opposite, and the blogger turned out to be right. I realize this kind of thing is vulnerable to selection bias, but it's been the same couple of bloggers throughout, people who I already trusted and already suspected might be better than the experts in a lot of ways. Zvi Mowshowitz is the first name to come to mind, though there are many others.

If I'm missing something please let me know. I downvoted the OP and wrote this comment because I think and feel that such inaccuracies are bad (even if not intentional) and I don't want them to occur on LW.

Does seem kinda important to get this right. My guess is it's an honest mistake, but still one I would like to see corrected, and think is worth investing some effort into avoiding.

Zvi:

Corrected the wording to ensure it is definitely accurate. Speed premium among a lot of very strong claims that definitely happened and all that, but yeah, more careful would have been better.

Scott writes a comment response on the SSC subreddit

They’d look to ‘get’ me, to point out how awful and harmful and unethical everything was, to dig into my past for ways to attack me. They’d point out my lack of credentials, all my points of view or isolated sentences they didn’t like from anywhere.  There would be blood in the water

I'm fascinated how well this thought experiment parallels the story of Rasputin – a self-proclaimed healer who got into the inner circle of the Czar's family by helping their sick child and then worked to expand his influence. In the end, a group of nobles decided that the influence of Rasputin was threatening the empire, so they poisoned, shot and drowned him. Blood in the water indeed.

Notably, Rasputin really was threatening the empire – many historians consider him a significant contributor to the revolution. The suggestion here is that trying to change the working system from the inside – being a Donald Trump – can lead to a completely different system replacing it. The replacing system might be worse in a completely different way, or it might be better, but either way it's going to cause a lot of pain in the meantime. Be careful what you try – destroying the FDA or the CDC may just lead to more people refusing to get vaccinated or wear masks, while simultaneously leading to snake oil or to a resurgence of other diseases. A gain in one area leads to being much worse off in others.

In my model, that’s not how someone in her position thinks at all. She has no coherent utility function. She doesn’t have one because, to the extent she ever did have one, it was trained out of her long ago, by people who were rewarding lack of utility functions and punishing those who had coherent utility functions with terms for useful things. The systems and people around her kept rewarding instinctive actions and systems, and punishing intentional actions and goals.

IMO a more accurate model is: such people do have a utility function, but how to use your brain's CPU cycles is part of your strategy. If you're in an environment where solving complex politics is essential to survival, you will spend all your cycles on solving complex politics. Moreover, if your environment gives you little slack then you have to do it myopically because there's no time for long-term planning while you're parrying the next sword thrust. At some point you don't have enough free cycles to re-evaluate your strategy of using cycles, and then you'll keep doing this even if it's no longer beneficial.

Having a strategy is not the same thing as having a utility function.

I think this is a fantastically clear analysis of how power and politics work, that made a lot of things click for me. I agree it should be shorter but honestly every part of this is insightful. I find myself confused even how to review it, because I don't know how to compare this to how confusing the world was before this post. This is some of the best sense-making I have read about how governmental organizations function today.

There's a hope that you can just put the person who's most obviously right in charge. This post walks through the basic things that would break, and explains some reasons he is in an advantageous position relative to the person in charge (because Zvi can just optimize for being right, whereas the person in charge has to handle politics and leadership). It then walks through how the internals of power actually work, what sort of person is selected for (and shapes themselves to be), and also some counterintuitive reasons why it might work to put an outsider in charge (because the status quo is always right, and if handled well it would soon become the status quo).

Somehow the post could be better, it's hard for me to see the whole picture at once, because the post discusses a number of separate dynamics all occurring at the same time in an organization. Nonetheless I give this a +9.

The reporter could of course change that! In at least this case, they could commit to giving the ‘expert’ in question a chance to review the quotes to be used, and if not satisfied with the context let them ask to remove them, or something like that. Unless trust is so botched that even that offer wouldn’t be credible. 

Yes, individual reporters have a lot of room. An individual reporter also has to write more than a single article on COVID-19.

At the start, the reporter would go to an expert who's willing to speak with him. Then he can make sure that the expert is happy with the interaction. In addition to reviewing quotes, the reporter might even send them the full article before publication to review.

Then, after publishing the article, they might ask the expert to tweet it while saying that the expert feels well represented. Afterwards the reporter can ask the same expert again for another COVID-19 article, and can also refer back to the old article and the tweet saying that the expert felt well represented.

Good reporters build relationships with sources of expertise. 

This was an important 'discovery' I made after I started reading news mostly through a feed reader – there is a huge variance in quality among reporters/journalists, and outlet reputation carries basically no information about it.

There’s a small and decreasing number of people who are still capable of synthesizing information and creating new hypotheses and interpretations.

Then there are those who are mostly no longer capable of doing that – things got too complicated and weird, and they can’t keep up – but they can read the first group and meaningfully distinguish between people claiming to be in it, and between their individual claims, ask questions and help provide feedback. To them, the first group is legible. This forms a larger second group that can synthesize the points from the first group, and turn it into something that can be read as an emerging new consensus, which in turn can be legible to a third much larger group.

This third group can then be legible to the general public slash general elites, who learn that this is where good new ideas come from. Then the Responsible Authority Figures can feel under public pressure, or see what the emerging new ideas are, and run with the ball from there, and the loop continues.

Somewhat tangential, this is four levels from "people who figure things out" to "general public". I wonder how recently it would have been three. But also, I'm not sure how to tell how many levels there are; if you'd said there were three levels, or five, I don't think that would have seemed particularly off to me.

Free association: this is in some sense "one level of middle management" (the second group). I don't remember back to moral mazes very well, but I feel like that might have been where things start to go really wrong. (It might have been two levels of MM instead.) I'm not sure if this is a meaningful connection to make - if mapping "people who figure things out" to CEOs and "general public" to employees gives us any insights - but if it is... honestly I'm not sure what to do with that.

Standard disclaimer: All I know about Rochelle Walensky is that she’s the new head of the CDC. I know nothing about her personally or about her history.

In a piece she coauthored, they write:

More treatments for SARS-CoV-2 are urgently needed. Emergency Use Authorization is a necessary tool that can be used to make promising interventions available to those infected, treat patients earlier in the disease, and avert hospitalizations. Shifting the attention from inpatient to outpatient management of COVID-19 requires transparency in clinical efficacy, widespread equitable distribution of novel therapeutics, and controls on cost. 

That sounds to me like she's of the opinion that Emergency Use Authorization was given out too freely under the Trump administration and should come with more regulation for transparency, equity and cost control.

I'm very confused by the notion of "not having a utility function". My understanding of utility function is that it's impossible not to have one, even if the function is implicit, subconscious, or something that wouldn't be endorsed if it could be stated explicitly.

It seems like when you're saying the CDC chair doesn't have a utility function, you mean something like "the politics term in the utility function dominates all other terms". But perhaps I've misunderstood you, or I misunderstand the meaning of "utility function" in this context.

My understanding of utility function is that it's impossible not to have one, even if the function is implicit, subconscious, or something that wouldn't be endorsed if it could be stated explicitly.

My understanding is that a utility function implies consistent and coherent preferences. Humans definitely don't have that; our preferences are inconsistent and subject to, for instance, framing effects.

TAG:

That's the correct definition, but rationalists have got into the habit of using "utility function" to mean preferences, leading to considerable confusion.

I've been interpreting 'utility function' along the lines of 'coherent extrapolated volition', i.e. something like 'the most similar utility function' that's both coherent and consistent and best approximates 'preferences'.

The intuition is that there is, in some sense, an adjacent or nearby utility function, even if human behavior isn't (perfectly) consistent or coherent.

I think Zvi is drawing on this informal distinction in The Blue-Minimizing Robot:

  • "Behavior-executor": acts on reflex, producing a fixed action in response to a fixed stimulus (regardless of how this corresponds to outcomes).
  • "Utility-maximizer": chooses actions based on their expected outcomes; makes long-term plans, and completely changes behavior if new information comes in suggesting their old behavior patterns aren't helping produce the desired outcomes.

Thanks, this (and the sister comment by Unnamed) makes perfect sense.