All of TheMajor's Comments + Replies

My previous go-to for understanding why we didn't adopt nuclear power on a massive scale is https://rootsofprogress.org/devanney-on-the-nuclear-flop (even citing some of the same sources and using the same charts). Note that the post summarizes Devanney's book, and the post author does not necessarily agree with the conclusion of the book.

Devanney places a lot of the blame with regulators, in particular the Linear No Threshold model, ALARA legislation and regulator incentives. Do you think this is inaccurate and/or overblown?

1Marius Hobbhahn
We discuss this in our post. We think it is a plausible story, but it's not clear whether it's decisive. Most people who are critical of the status quo don't see regulation as the primary issue, but I'm personally sympathetic to it.

If your colleagues are regularly giving unrealistically optimistic estimates, and you are judged worse for giving realistic ones, clearly your superiors don't care much about the accuracy of the estimates. You're trying to play a fair game in a situation where you will be rewarded for doing the opposite.

Personally I've had good mileage out of offering to lie to the people asking for estimates. When asked for estimates during a sprint or the like, and if I sufficiently trust the people involved, I would say something like "You are askin... (read more)

A shot in the dark, but the Malthusian theory of population suggests war is beneficial to local officials and leaders when they think the younger generation is growing at a sufficiently rapid pace that they are about to be replaced ('vent the testosterone', so to speak). The absence of such a growth spike is a mark against this explanation.

More generously: if the birth rate is below replacement, losing young people in a war has drastic consequences for the population ~20 years from now, since it will at least for a while drop far below replacement. If the birth rate is higher the consequences of losing a fraction of your youngest people are, in the long run, less severe.

At this moment in time >99% of humans are not at Malthusian limits, and the majority of wars of the past 100-200 years have been fought between societies not at Malthusian limits.

The simple story that wars are started by a small group of elite insiders driven by ideological commitments, perhaps fanned by broader nationalistic/jingoistic/militaristic/etc. sentiments in the larger populace, seems far more plausible.

3[anonymous]
I still don't get the logic here. It's not like modern wars cost millions of lives (unless it goes nuclear, in which case nothing matters); how can birth rates ever be a factor?

The first example seems to be an issue of legibility, not fungibility.

1hawkebia
Could you elaborate? The point was that desires are not always fungible - they don't neatly add up or cancel out to give you a single satisfaction score. Your decision making math would still pick the suburb because its convenience value outweighs its lack of restaurants. But you don't suddenly stop caring about restaurants because of that. Convenience isn't fungible with it.

I think the section on Don't Look Up, in particular the comments on the relationship between science and policy, misses the mark in very important ways. The naive model of [science discovers how the world works] --> [policymakers use this to make policy to improve the world (for themselves, or their constituents, or everybody, or whatever)] does not give enough weight to the reverse action - where the policy is fixed, and the science that supports it is promoted until the policy is Scientific(TM). I think most science-that-determines-policy is selected ... (read more)

The link on severity of Omicron infections (https://www.nrk.no/urix/tall-fra-danmark_-omikron-forer-til-like-mange-innleggelser-som-delta-1.15769977) raises an interesting question. They deduce the severity by comparing the number of hospitalisations from Omicron with the spread of the variant 5 to 6 days prior to hospitalisation, which is the correct thing to do if we assume it takes 5 days from infection to developing symptoms severe enough to be admitted to the hospital. My two questions:

  • Are other news sources doing this consistently as well? If they ar
... (read more)
1tkpwaeub
On the other hand, everyone seems to have given up on tracking full recoveries, so maybe the two things cancel one another out and we end up just using the most naive approach possible?

The paper seems to describe the Delta variant and classify its properties compared to older strains. I'm not an expert so I might well be misunderstanding it, but the paper seems to classify and compare two wild strains, not modify them? Maybe I'm missing something, but what is the relation to Omicron?

I thought the fact that South Africa does far more sequencing than other countries in that part of the world (for example, check the reported Delta sequences by country, where South Africa is listed as 25th globally with 11,004 sequenced samples, and the n... (read more)

2ESRogs
I think the conjecture is that the virus would have mutated on its own, in the presence of antibodies. But on second thought, maybe that's relatively unrealistic (as a possible explanation for Omicron), because we'd expect much more mutation in the wild where there are much higher quantities of the virus?

Would you prefer that the FDA involve itself rather than stand on the sidelines?

This seems correct to me, but I don't immediately see the importance/relevancy of this? At any rate the escape is speculative at this point.

3Steven Byrnes
Oh, I was responding to this part: I'm suggesting that population inhomogeneity weakens this argument, and that immune escape + population inhomogeneity would seem to be a plausible explanation of how omicron appears so much more infectious than delta. I wasn't assuming immune escape / erosion, I was arguing for it being more likely / less unlikely given what we know.

There have also been a dozen or so instances when new variants dominated some country and subsequently fizzled out

I completely failed to notice this, whoops. Do you have some more information on this?

Yea. Gamma was the dominant strain in Brazil, Luxembourg, Chile, Argentina, and a few other places in early-to-mid 2021, but never became the dominant strain in the US. Similarly for Lambda in Peru, Mu in Colombia, 20B/S:732A in Mexico, 20A/S:439K in Slovenia, 20E in Lithuania, and a handful of other strains in some other countries. This is all from Covariants.org. Many of these countries do roughly as much sequencing as SA does, so it seems like an appropriate reference class for thinking about Omicron.

The Dutch festival actually was a 2-day event with a total capacity of 10,000 people per day. But it is reasonable to assume that some people attended both days, so the total number of distinct participants is lower than 20,000 and correspondingly the rate of infection is unknown but somewhere between 5% and 10%.

Just wanted to confirm you have accurately described my thoughts, and I feel I have a better understanding of your position as well now.

I agree with your reading of my points 1, 2, 4 and 5, but think we are not seeing eye to eye on points 3 and 6. It also saddens me that you condensed the paragraph on how I would like to view the how-much-should-we-trust-science landscape to its least important sentence (point 4), at least from my point of view.

As for point 3, I do not want to make a general point about the reliability of science at all. I want to discuss what tools we have to evaluate the accuracy of any particular paper or claim, so that we can have more appropriate confidence across the bo... (read more)

3DirectedEvolution
This makes a lot of sense, actually. You're focused on mechanisms that a good thinker could use to determine whether or not a particular scientific finding is true. I'm worried about the ways that the conversation around skepticism can and does go astray.

Perhaps I read some of the quotes from the papers uncharitably. Silberzahn asks "What if scientific results are highly contingent on subjective decisions at the analysis stage?" I interpreted this question, in conjunction with the paper's conclusion, as pointing to a line of thinking that goes something like this:

1. What if scientific results are highly contingent on subjective decisions at the analysis stage?
2. Some scientific results are highly contingent on subjective decisions at the analysis stage.
3. What if ALL scientific results are highly contingent on subjective decisions at the analysis stage across the board???!!!

But a more charitable version for the third step is:

3. This method helped us uncover one such case, and might help us uncover more. Also, it's a reminder to avoid overconfidence in published research, especially in politically charged and important issues where good evidence is hard to come by.

I spent the last ten years teaching children, and so my default mode is one of "educating the young and naive to be a little more sophisticated." Part of my role was to sequence and present ideas with care in order to increase the chance that an impressionable and naive young mind absorbed the healthy version of an idea, rather than a damaging misinterpretation. Maybe that informs the way I perceive this debate.

I've upvoted you for the clear presentation. Most of the points you state are beliefs I held several years ago, and sounded perfectly reasonable to me. However, over time the track record of this view worsened and worsened, to the point where I now disagree not so much on the object level as with the assumption that this view is valuable to have. I hope you'll bear with me as I try to give explaining this a shot.

I think the first, major point of disagreement is that the target audience of a paper like this is the "level 1" readers. To me it seems like the ... (read more)

2DirectedEvolution
I pulled out your positive statements and beliefs, and give my response to each. Even if we set aside my concerns about how an audience with low trust in science might interpret this stuff, I still think that my points stand. We should be careful about extrapolation, and read these studies with the same general skepticism we should be applying to other studies. That might seem "misleading and irrelevant" to you, but I really don't understand why. They're good basic reminders for any discussion of science.

That is very interesting, mostly because I do think exactly that: people are putting too much faith in textbook science. I'm also a little bit uncomfortable with the suggested classification.

I have high confidence in claims that I think are at low risk of being falsified soon, not because it is settled science but because this sentence is a tautology. The causality runs the other way: if our confidence in the claim is high, we provisionally accept it as knowledge.

By contrast, I am worried about the social process of claims moving from unsettled to settled s... (read more)

5DirectedEvolution
I have another way of stating my concern with the rhetoric and thought here. People start as "level 1" readers of science, and they may end up leveling up as they read more. One of the "skill slots" they can improve on is their skepticism. This means understanding intuitively how much confidence to place in a claim, and why.

To me, this line of argument is mainly aimed at those "level 1" readers. The message is "Hey, there's a lot of junk out there, and some of it even makes it into textbooks! It's hard to say how much, but watch out!" That sentence is useful to its audience if it builds more accurate intuitions about how to interpret science. And it's clear that it might well have that effect in a nonzero number of cases. However, it seems to me that it could also build worse intuitions about how to read science in "level 1" readers, by causing them to wildly overcorrect.

For example, I have a friend who is deep into intelligent design, and has surrounded himself with other believers in ID (who are PhD-holding scientists). He views them as mentors. They've taught him not only about ID, but also a robust set of techniques for absolutely trashing every piece of research into evolution that he gets his hands on. It's a one-sided demand for rigor, to be sure, but it's hard to see or accept that when your community of practice has downleveled your ability to read scientific literature.

I spend quite a bit of time reading the output of the online rationalist and rat-adjacent community. I see almost no explicit writing on when and why we should sometimes believe the contents of scientific literature, and a gigantic amount of writing on why we should be profoundly skeptical that it has truth content. I see a one-sided demand for rigor in this, on a community-wide level. It's this problem that I am trying to correct for, by being skeptical of the skepticism, using its own heuristics:

1. We should be careful before we extrapolate.
2. There is a range of appropr

It seems to me that we should be really careful before extrapolating from the specific datasets, methods, and subfields these researchers are investigating into others. In particular, I'd like to see some care put into forecasting and selecting research topics that are likely or unlikely to stand up to a multi-team analysis.

I think this is good advice, but only when taken literally. In my opinion there is more than sufficient evidence to suggest that the choices made by researchers (pick any of the descriptions you cited) have a significant impact on the co... (read more)

4DirectedEvolution
Let me reframe this. Settled science is the set of findings that we think are at low risk of being falsified in the future. These multi-team analyses are basically showing that unsettled science is just that - unsettled... which is exactly what readers should have been expecting all along. So not only should we be careful about extrapolating from these findings to the harder sciences, we should be careful not to extrapolate from the fragility of unsettled science to assume that settled science is similarly fragile. Just because the stuff that makes it into a research paper might be overturned doesn’t mean that the contents of your textbooks are likely to be overturned.

This is why the very broad “science” framing concerns me. I think these scientists are doing good work, but the story some of them are telling is a caution that their readers shouldn’t require. But maybe I have too high an opinion of the average reader of scientific literature?

I've seen calls to improve all the things that are broken right now: <list>

I think this is a flaw in and of itself. There are many, many ways to go wrong, and the entire standard list (p-hacking, selective reporting, multiple stopping criteria, you name it) should be interpreted more as symptoms than as causes of a scientific crisis.

The crux of the whole scientific approach is that you empirically separate hypothetical universes. You do this by making your universe-hypotheses spit out predictions, and then verifying them. It seems to me that by and larg... (read more)

Thank you for the wonderful links, I had no idea that (meta)research like this was being conducted. Of course it doesn't do to draw conclusions from just one or two papers like that; we would need a bunch more to be sure that we really need a bunch more before we can accept the conclusion.

Jokes aside, I think there is a big unwarranted leap in the final part of your post. You correctly state that just because the outcome of research seems to not replicate we should not assume evil intent (subconscious or no) on the part of the authors. I agree, but also fra... (read more)

5justinpombrio
Yeah, that was my reaction too: regardless of intentions, the scientific method is, in the "soft" sciences, frequently not arriving at the truth. The follow-up question should of course be: how can we fix it? Or more pragmatically, how can you identify whether a study's conclusion can be trusted?

I've seen calls to improve all the things that are broken right now: reduce p-hacking and publication bias, aim for lower p values, spread better knowledge of statistics, do more robustness checks, etc. This post adds to the list of things that must be fixed before studies are reliable.

But one thing I've wondered is: what about focusing more on studies that find large effects? There are two advantages: (i) it's harder to miss large effects, making the conclusion more reliable and easier to reproduce, and (ii) if the effect is small, it doesn't matter as much anyway. For example, I trust the research on the planning fallacy more because the effect is so pronounced. And I'm much more interested to know about things that are very carcinogenic than about things that are just barely carcinogenic enough to be detected.

So, has someone written the book "Top 20 Largest Effects Found in [social science / medicine / etc.]"? I would buy it in a heartbeat.
5DirectedEvolution
I think this is a really important, serious point. This post links five meta-studies:

* Huntington-Klein et al. (2021)
* Silberzahn et al. (2018)
* Breznau et al. (2021)
* Botvinik-Nezer, R., Holzmeister, F., Camerer, C.F. et al. (2020)
* Bastiaansen et al. (2020)

Respectively, their fields are microeconomics, psychology, political science, neuroscience imaging, and psychology again. That's not a representative sample of "science" by field.

Furthermore, the same assumptions about biased analyses apply to these meta-studies. It seems possible to me that these researchers targeted specific topics they thought would be likely to produce these results. That can be extremely valuable when the goal is to reveal which specific topics have inadequate methodologies. If instead the goal is to argue that science as a whole is subject to a "hidden universe of uncertainty," then we need to find some way to control for this topic-selection bias in meta-studies. Perhaps specialists in various subfields could submit a range of datasets and hypotheses, with their own forecasts for the chance that a meta-study would turn up a consistent directional effect. Without such control measures, these meta-studies seem vulnerable to the same publication bias as the fields they investigate.

Do these studies purport to be investigating particular methods, or are they themselves arguing that their results should inform our uncertainty level for all of science?

* I can't access Huntington-Klein, as Sci-Hub isn't cataloguing 2021 publications at the moment.
* Silberzahn asks, "What if scientific results are highly contingent on subjective decisions at the analysis stage?"
* Breznau states, "We argue that outcome variability across researchers is also a product of a vast but sometimes hidden universe of analytical flexibility," apparently referring to researchers in general across the scientific enterprise.
* Botvinik-Nezer refers to "researcher degrees of freedom" in many areas of sc

What do you mean 'problem'? Everybody involved wants the inspection to go well, the correlation between the outcome of the inspection and the quality of the school/firm's books is incidental at best.

4Bucky
The problem is the notice given, which results in the low correlation you mention. (By audit I don't really mean financial audits, as I don't have experience of those - I'm thinking more of quality audits.)

This is a very good point, and in my eyes explains the observations pretty much completely. Thanks!

(yet it was contained in the UK, which is great and suggests I'm talking BS)

I continue to be extremely surprised by the UK decline in numbers. The Netherlands is reporting a current estimated R of 1.1-1.2 for the English strain and 0.8-0.9 for the wild types. They furthermore estimate that just over half of all newly reported cases are English strain by now. But the UK daily cases have dropped by 80% in 40 days, which at a reproduction time of 6 days would mean R = 0.79 throughout.
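
A quick check of that arithmetic (my own reconstruction, not part of the original comment): an 80% drop means cases were multiplied by 0.2 over $40/6 \approx 6.7$ generations, so $R \approx 0.2^{6/40} \approx 0.79$.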

In the past I suggested a few potential, not mutually exclusive, explanation... (read more)

8jsteinhardt
I think if you account for undertesting, then I'd guess 30% or more of the UK was infected during the previous peak, which should reduce R by more than 30% (the people most likely to be infected are also most likely to spread further), and that is already enough to explain the drop.
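
For reference, a minimal baseline for this claim (my own sketch, not from the comment): under homogeneous mixing, an immune fraction $s$ gives $R_t = (1 - s)\,R_0$, so $s = 0.3$ cuts $R$ by exactly 30%; the parenthetical argues that because high-contact individuals are infected first, the realized cut exceeds this baseline.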

The loss of life and health of innocent people who got suckered into a political issue without considering the ramifications?

I mean, the group of people who hold out on getting a vaccine as long as possible will definitely be harder to convince than the average citizen. But with these numbers (death rate, long-term health conditions, effectiveness of vaccines), are you seriously suggesting trying to help them is not cost-effective? From the post I think you're talking about tens of millions of people in the USA alone, if not 100M+.

1kjz
By now, everyone has had a year to consider the ramifications of their decisions. People are free to make their own choices about the vaccine and their response to covid in general. If they make their choices based on their political affiliation or in-group signaling, so be it.

I am seriously suggesting it is not cost-effective for me to try to influence others to get the vaccine. Most of the people I know have either already decided to get the vaccine at their first opportunity, or decided they will never get it. In November/December, as the vaccines were starting to get approved, I had some discussions with my few friends who I thought might be on the fence, but they weren't moved much by my arguments. I don't actually think I know anyone that I could convince at this point.

On a population level, I agree it is worthwhile and most likely cost-effective to continue to encourage people to get vaccinated. But that is almost entirely beyond my ability to influence. And I reject any blame for observing this situation and commenting on it without completely fixing it.
1cistran
Trying to help them how? Education? Financial incentives to vaccinate? Social disincentives to hold out? At least some forms of trying will not be cost-effective.
4PatrickDFarley
Help them by living your life and demonstrating the advantages of vaccination. What actions are you advocating instead of that?

I personally have a very tough time fitting your interpretation into my model of the world. To me the popularity and actions of Facebook et al. are mostly disconnected from our ability to communicate with family and close friends.

In my opinion the timeline seems to be a little more as follows:

  • People are on Facebook and Twitter and other social media platforms both to stay in touch with friends and to complain about the outgroup.
  • COVID-19 hit, significantly reducing quality of life everywhere. People realign their political discussions and notions of outgrou
... (read more)

You are correct, but the hope is that the probabilities involved stay low enough that a linear approximation is reasonable. Using, for example, https://www.microcovid.org/, typical events like a shopping trip carry infection risks well below 1% (depending on location, duration of activity, precautions, etc.).
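
As a worked example of that approximation (my own arithmetic; the 0.5% figure is illustrative): for independent events with small risks $p_i$, the combined risk satisfies $1 - \prod_i (1 - p_i) \approx \sum_i p_i$. Ten activities at 0.5% each give an exact combined risk of $1 - 0.995^{10} \approx 4.89\%$, versus 5% for the linear sum; the approximation only degrades once individual risks grow large.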

I meant after the first shot, sorry for the confusion.

2jefftk
10-14d after the first shot you're still not very protected. Protected enough that I think a strategy of focusing on first doses probably makes more sense, but not protected enough for most of this post to apply.

I think ojno has a point. Furthermore, to the best of my knowledge the protection from the vaccines takes a bit of time (10 days? 14 days?) to kick in after the vaccination. Arguably "proceed with the same caution as before" is a better message than "go nuts, dance and hug and visit all your friends" in this period, and for simplicity's sake this has become the default message.

Who am I kidding, this is of course because we don't want vaccination to be unfair. If you get social benefits from being vaccinated (by not having to abide by some of the restrictio... (read more)

2jefftk
FDA is using 7d for Pfizer (https://www.fda.gov/media/144245/download) and 14d for Moderna (https://www.fda.gov/media/144434/download). I'm using "two weeks after the second shot" in the post to be on the safe side.

MathOverflow has a discussion on it. In short:

  • This area definition is equivalent to the standard definition, although this was (to me) not immediately obvious.
  • Some statements (linearity of integrals, for example) are obvious from the one definition, while others (the Monotone Convergence Theorem) are obvious from the other definition. Unfortunately, proving that the two definitions are equivalent is pretty much the proof for these statements (assuming the other definition).
  • The general approach of "given a claim, test it on indicator functions, then simple fu
... (read more)
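
For reference, the two definitions being compared (my own reconstruction of the standard statements, not taken from the linked thread): for a measure space $(X, \mu)$ and measurable $f \ge 0$, the standard definition is
$$\int f \, d\mu = \sup\left\{ \int s \, d\mu \;:\; 0 \le s \le f,\ s \text{ simple} \right\},$$
while the area definition measures the region under the graph with the product measure,
$$\int f \, d\mu = (\mu \otimes \lambda)\bigl(\{(x, t) : 0 \le t < f(x)\}\bigr),$$
where $\lambda$ is Lebesgue measure on $\mathbb{R}$.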

It was pointed out to me that it is really not accurate to consider the UK daily COVID numbers as a single data point. There could be any number of possible explanations for the decrease in the numbers. Some possible explanations include:

  1. The current lockdown and measures are sufficient to bring the English variant to R<1.
  2. The current measures bring the English variant to an R slightly above 1, and the wild variants to an R well below 1, and because nationally the English variant is not dominant yet (even though it is in certain regions) this gives a nationa
... (read more)

To what extent does 'positive PCR test' equate to 'infectious'? Or is there some other good indicator? I know most health authorities say something like "if you have been in contact with a person who tested positive, then from the point they are no longer symptomatic/have their first negative test, you have to be careful for X days", so I assumed they are (somewhat) related.

To the best of my knowledge there are four evil inaccurate but not-completely-moronic reasons for sticking with a 2-dose vaccination plan. Just to be clear: none of these arguments convincingly suggest that 2-dose will be a better method to combat the pandemic.

  1. Many officials may be convinced that "no Proper Scientific Procedure has investigated this" is identical to "there is no knowledge". In non-pandemic times, if you squint juuust right, this looks like a cost-benefit analysis of delaying medical research versus endorsing crackpot pharmaceutics. I find
... (read more)

Oh, it’s so much worse than that. What happens when the central planner combines threats to those who don’t distribute all the vaccine doses they get, with other threats to those who let someone ‘jump the line’? Care to solve for the equilibrium? 

You conclude that vaccination facilities will reduce their orders so they are guaranteed to be able to distribute them all. I think in practice it is much easier to cook the books and/or destroy vaccines as necessary.

More pressingly, this is the first mention I've run into of the potential seriousness of the South... (read more)

2TheSimplestExplanation
If you have the choice it seems simpler and safer to order no vaccine. Especially if you're hardly getting paid anyway.
3Zvi
Definitely need more data, and data would be easy to get. Just need to do the tests. Your alternative equilibrium... Isn't better...

There has been previous discussion about this on LessWrong. In particular, this is precisely the focus of Why the tails come apart, if I'm not mistaken.

If I remember correctly that very post caused a brief investigation into an alleged negative correlation between chess ability and IQ, conditioning on very high chess ability (top 50 or something). Unfortunately I don't remember the conclusion.

Edit: and now I see Mo Nastri already pointed this out. Oops.

Your point on alternative hypotheses is well taken, I only mentioned the superspreader one since that was considered the main possibility for strong relative growth of one variant over another without increased infectiousness. Could you expand on the likelihood of any of these being true/link to discussion on them?

2Douglas_Knight
If you make any of these hypotheses precise enough to calculate, then I don't think that they are likely enough to be worth calculating. The point was just to suggest how big the space of unknown unknowns is. I think you need an outside view to estimate it. You might hope to get that from the virologists, but they are dismissing it as a "founder effect", which is even more specific, rather than accepting the ignorance of an outside view. I think I got them all from Francois Balloux, though I'm not sure what he was saying and I may have interpolated a lot of detail. I got 2a and maybe 1 from here. 2b is from here, a response to the first thread. Added: actually, I think I got 2a from the "Does it matter" video, which was generally hostile to reason and knowledge of epidemiology, but did suggest something like this at the end.

I also thought this, but was told this was not the case (without sources though). If you are right then the scaling assumption is probably close to accurate. I tried briefly looking for more information on this but found it too complicated to judge (for example, papers summarizing contact tracing results in order to determine the relative importance of superspreader events are too complicated for me to undo their selection effects - in particular the ones I saw limited to confirmed cases, or sometimes even confirmed cases with known source).

EDIT: if I chec... (read more)

I agree that this means particular interactions would have a larger risk increase than the 70% cited (again, or whatever average you believe in).

In the 24-minute video in Zvi's weekly summary Vincent Racaniello makes the same point (along with many other good points), with the important additional fact that he is an expert (as far as I can tell?). The problem is that this leaves us in the market for an alternative explanation of the UK data, both their absolute increase in cases as well as the relative growth of this particular variant as a fraction of all... (read more)

I had a long discussion on this very topic, and wanted to share my thoughts somewhere. So why not here.

Disclaimer: I am not an expert on any of this.

The scaling assumption (if the new strain has an R of 1.7 when the old one has an R of 1, then we need countermeasures pulling the old one down to 0.6 to get the new one to 0.6 × 1.7 ≈ 1) is almost certainly too pessimistic an estimate, but I have no clue by how much. A lot of high-risk events (going to a concert, partying with 10+ people in a closed room for an entire night, having a multiple-hour Christmas d... (read more)
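
Spelling the scaling assumption out (my own restatement of the arithmetic): if countermeasures multiply every strain's R by the same factor $c$, then $R_{\text{new}} = 1.7\,c$ when $R_{\text{old}} = c$, so pushing the new strain to $R_{\text{new}} \le 1$ requires $c \le 1/1.7 \approx 0.59$. The point about high-risk events is exactly the reason this proportionality can fail.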

7Jacob Falkovich
I don't think it was that easy to get to the saturated end with the old strain. As I remember, the chance of catching COVID from a sick person in your household was only around 20-30%, and at superspreader events it was still just a small minority of total attendees that were infected.
4Venusian
Suppose, as you say, some of this nonlinearity is already factored into the 70% estimate; that would imply that the 'real' number is even higher. For some interactions, like having a face-to-face conversation without any protection, the probability of an infection may have increased by 100% or even more. I'm also not an expert. Intuitively this seems like a big step with just a handful of mutations.

My father sent me this video (24 min) that makes the case for all of this being mostly a nothingburger. Or, to be more precise, he says he has only low confidence instead of moderate confidence that the new strain is substantially more infectious, which therefore means don’t be concerned. Which is odd, since even low confidence in something this impactful should be a big deal! It points to the whole ‘nothing’s real until it is proven or at least until it is the default outcome’ philosophy that many people effectively use.

I think this is a great video, it e... (read more)

Good point, I'm likely misinterpreting the Nextstrain website then.

Answer by TheMajor*100

I can answer this one, or more specifically the PHE can. The tl;dr of this technical briefing is that the new strain tests positive on two assays (N, ORF1ab) and negative on a third (S), and that up to some noise this is currently the only strain to do so. So the number of PCR tests that are both S-negative and COVID-positive is a good indication of the spread of the new strain, without the need for genome sequencing. This document makes this argument precise, and then produces a painful graph on page 8 showing the 'S dropout' proportion at the Milton Key... (read more)

8Oskar Mathiasen
85 seems incongruent with this source https://covidcg.org/?tab=global_sequencing  

I've been trying to understand this discussion (and I agree that this is one of the central questions for the model of how things will progress from here, in particular if March-style lockdowns will be sufficient or not to halt the spread of this strain). But now I'm mainly confused - isn't such a dramatic increase in Rt incompatible with the slower increase in the graph, as pointed out by CellBioGuy?

 

Edit: I've read yesterday's PHE investigation report, and they do explicitly confirm it is an increase of over +0.5 to the Rt under the conditions in En... (read more)

I certainly expect status games, above and beyond power games. Actually, saying 'power games' was the wrong choice of words in my comment. Thank you for pointing this out!

That being said, I don't think the situation you describe is fully accurate. You describe group meetings as an arena for status (in the office), whereas I think instead they are primarily a tool for forcing cooperation. The social aspect still dominates the decision making aspect*, but the meeting is positive sum in that it can unify a group into acting towards a certain solution, even if ... (read more)

2Mathisco
As another commenter noted, there exists an alternative strategy: organize a lot of one-on-one meetings to build consensus, and then use a single group meeting to demonstrate that consensus and polarize the remaining minority. This may be a more efficient way to enforce cooperation. Anyway, I wonder if there is a good method to find out the dominant forces at play here.

I'm gonna pull a Hanson here. What makes you think group meetings are about decision making?

 

I think the primary goal of many group meetings is not to find a solution to a difficult problem, but to force everybody in the meeting to publicly commit to the action that is decided on. This both cuts off the opportunity for future complaining and disobedience ('You should have brought that up in the meeting!') and spreads the blame if the idea doesn't work ('Well, we voted on it/discussed it'). Getting to the most effective solutions to your problems is se... (read more)

0Ian David Moss
I don't dispute that the phenomenon you're describing is real, but purely as a data point I'd offer that in the majority of my recent experiences working with organizations as a consultant, managers have not explicitly sought to use meetings this way, and in a few cases they have proactively pushed for input from others. It's certainly possible that the sample of organizations I'm working with is biased both because a) they are mostly nonprofits and foundations, and b) if they are working with me it's a signal that they're unusually attentive to their decision-making process. But I don't want people reading this thread to be under the impression that all managers are this cynical.

How about another angle.

Most meetings are not just power games. They are pure status games. Only in such group meetings can you show off. Power plays are one way to show off.

You will speak quickly and confidently, while avoiding making any commitment to action. If you attend someone else's meeting, you quickly interrupt and share your arguments in order to look confident and competent.

The low status meeting participants are mainly there to watch. They will try to quickly join the highest-status viewpoints to avoid losing more status, thereby causing casc... (read more)

Why not the Total Variation norm? KS distance is also a good candidate.

1Maxwell Peterson
I think I didn't like the supremum part of the KS distance (which it looks like Total Variation has too) - felt like using just the supremum was using too little information. But it might have worked out anyway.
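
For concreteness, a minimal numerical sketch of the two distances on discretized distributions (my own example; the grid and the two Gaussians are illustrative assumptions, not anything from the thread):

```python
import numpy as np

# Two discretized distributions on a shared grid (illustrative Gaussians).
xs = np.linspace(-4, 4, 400)
p = np.exp(-xs**2 / 2)
p /= p.sum()                      # normalize to a probability mass function
q = np.exp(-(xs - 0.5)**2 / 2)
q /= q.sum()

# Total variation distance: half the L1 distance between the mass functions,
# i.e. the largest possible disagreement over any event.
tv = 0.5 * np.abs(p - q).sum()

# Kolmogorov-Smirnov distance: the single largest gap between the two CDFs.
ks = np.abs(np.cumsum(p) - np.cumsum(q)).max()

print(f"TV = {tv:.3f}, KS = {ks:.3f}")
```

The KS statistic is always at most the total variation distance, since it only examines events of the form $(-\infty, x]$; in that sense both are "supremum" distances, but TV takes the supremum over far more events.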

I usually just hope the Twitter links aren't that important/interesting.

I think your early analysis is accurate, but connecting this to 'reliable information sources about COVID' is completely off the mark. I don't know how to explain properly why I think this is so completely wrong - or at least, not without delving into a few-month sequence based on the material of https://samzdat.com. The 1-minute version goes something like:

There are many possible steps that all need to go right before appropriate collective action is taken to combat a national or global threat. This is especially true if we have shared responsibility, and... (read more)

I am not able to make it because of a one-off other appointment (a flight, actually). So I don't think this is very informative for the sake of planning. Usually my Sundays are unclaimed.

I really would have loved to attend, but won't be able to make it at that time. Will you (with permission of the participants, I imagine) record the meeting, or maybe write some possibly anonymised summary of the discussion after?

2Ben Pace
Thanks! I think this time I will probably not record it, while we're getting used to it all, because on the margin people don't feel comfortable being videoed. But probably we'll make some notes in a google doc during it that can be shared. Out of interest, can you not make it because of time zone or because you're generally busy Sundays? 12-2 PT is the time I always pick when I want something to work internationally, so am interested to know why people can't make it.

I definitely agree that there is a bias in this community for technological solutions over policy solutions. However, I don't think that this bias is the deciding factor for judging 'trying to induce policy solutions on climate change' to not be cost-effective. You (and others) already said it best: climate change is far more widely recognised than other topics, with a lot of people already contributing. This topic is quite heavily politicized, and it is very difficult to distinguish "I think this policy would, despite the high costs, be a great benefit to... (read more)

I completely agree, and would like to add that I personally draw a clear line between "the importance of climate change" and "the importance of me working on/worrying about climate change". All the arguments and evidence I've seen so far suggest solutions that are technological, social(/legal), or some combination of both. I have very little influence on any of these, and they are certainly not my comparative advantage.

If OP has a scheme where my time can be leveraged to have a large (or, at least, more than likely cost-effective) impact on climate change ... (read more)

1lincolnquirk
Regarding one’s ability to effect social change: it seems like the standard arguments about small-probability, high-impact paths apply. I think a lot of STEM types default to shying away from policy change, not because of comparative advantage (which would often be a good reason) but because of some blind spot in the way technologists talk about how to get things done in society. I think for historical reasons (the way the rationality community has grown) we tend to be biased towards technical solutions and away from policy ones.
Emiya120

I'm trying to reply as little as possible to the comments of this post to avoid influencing the future replies I'll get, but in this case I felt it was better to do so, since this point is likely an important one for determining the interest users will have in this subject, and consequently how many replies I'll get.

I'm aware that it wouldn't be very useful to make a post exclusively aimed at making the users of this site feel more worried about climate change.

What the individual users of this site can do about it, considering the cost-eff... (read more)
