All of Yandong Zhang's Comments + Replies

https://www.sanders.senate.gov/wp-content/uploads/Masks-for-All-2022-Final-Text.pdf

Omicron Makes Biden’s Vaccine Mandates Obsolete

There is no evidence so far that vaccines are reducing infections from the fast-spreading variant.

By Luc Montagnier and Jed Rubenfeld

Jan. 9, 2022 5:20 pm ET
(WSJ)

 

I doubted that the users of this website were really about reason, given how this post was received.

It should be a smaller R0; however, I chose not to fix it. It took 22 months for the CDC to start considering recommending N95s, and some areas (Salt Lake City) have started giving out free N95s.

People who did not understand the richness, speed, and unpredictability of COVID's mutation could not appreciate my conclusion two years ago.

To reach this conclusion, you do not need an "accurate" model with many dependence assumptions.

Clearly the exponential function dominates the linear function (the benefit of vaccines/reinfection immunity) in the UK.

https://twitter.com/DrEricDing/status/1473752247376961543

1Yandong Zhang
To reach this conclusion, you do not need an "accurate" model with many dependence assumptions.

Another reason is that all extra dependent hypotheses will be explored roughly equally at the early stage of a research topic. In brief, most trash papers compete with each other at the early stage, and only after some time does a dominant theory/model become established. The competition process is actually very similar to the process of virus evolution. At the early stage, there is no reason to assume a dominant new model yet. Thus, no heterogeneity should be assumed.

There is no reason to assume heterogeneity, as COVID is so new and the knowledge about its mutation direction is still very shallow.

https://www.reuters.com/business/healthcare-pharmaceuticals/omicron-cases-doubling-15-3-days-areas-with-local-spread-who-2021-12-18/

 

With a doubling time of 1.5-3 days, I did not believe other details/aspects played any significant role. Only controlling and observing transmission matters for the real outcome.

Let's assume there are many mutated COVID variants. What is the best model for the average spreading path of all those mutations? It is the SIR model, as it has the fewest dependencies. More "accurate" models have more assumptions, hypotheses, and dependent conditions, which are not reliable. In brief, any other model looks more or less like the result of the SIR model; the differences cancel out.
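As a rough sketch of the SIR model referenced above (my illustration, not part of the original comment; the transmission and recovery rates are arbitrary assumptions):

```python
# Minimal SIR sketch with assumed, illustrative parameters (not fitted to data).
def simulate_sir(beta=0.6, gamma=0.2, s=0.999, i=0.001, days=120, dt=0.1):
    """Forward-Euler integration of dS = -bSI, dI = bSI - gI, dR = gI."""
    r = 0.0
    trajectory = []
    for step in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        trajectory.append((step * dt, s, i, r))
    return trajectory

if __name__ == "__main__":
    traj = simulate_sir()
    t_peak, _, i_peak, _ = max(traj, key=lambda row: row[2])
    print(f"R0 = {0.6 / 0.2:.1f}, peak infected fraction {i_peak:.1%} around day {t_peak:.0f}")
```

The point is that this model needs only two rate parameters and no assumptions about heterogeneity or network structure.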

1JBlack
That's a strong claim. Do you have any evidence for it?
1Yandong Zhang
https://www.reuters.com/business/healthcare-pharmaceuticals/omicron-cases-doubling-15-3-days-areas-with-local-spread-who-2021-12-18/   With a doubling time of 1.5-3 days, I did not believe other details/aspects played any significant role. Only controlling and observing transmission matters for the real outcome.

https://twitter.com/DrEricDing/status/1469723185084084225?s=20

 

Please check the calculation part. I hope the health system will not be stressed out by Omicron.

"the chance of contracting disease at all compared with those who are not vaccinated (~40-70% for Delta, reduced to maybe ~10-30% for Omicron);"

 

Do you have a link to peer-reviewed papers about the above item?

"6 months ago I wrote about how 30-year-olds should basically go back to normal and no longer take many COVID precautions."

Will the hospital system be stressed again in many states because people did not control transmission? We will see soon. I just do not understand why so many people do not understand the power of exponential functions.

Currently, Omicron doubles every 3-4 days (German and British data). Let's assume the vaccines reduce the severity to the level of swine flu. Now, what would a swine flu that doubles every 4 days lead to? Simple math tells us it is UNACCEPTABLE.
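To make the "simple math" explicit (my illustration; the starting caseload is an arbitrary assumption):

```python
# Growth under a 4-day doubling time (assumed starting caseload, not real data).
# Saturation and population limits are ignored; this only shows the trend.
daily_cases = 10_000
doubling_time_days = 4
for week in range(1, 7):
    growth = 2 ** (7 * week / doubling_time_days)
    print(f"week {week}: ~{daily_cases * growth:,.0f} daily cases ({growth:,.0f}x)")
```

Even if each case is as mild as a flu case, a caseload that multiplies by roughly a thousand in six weeks is what stresses hospitals.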

2Connor_Flexman
Yes, it definitely will, and yes that will be unacceptable. Will that be because of vaccinated scrupulous LessWrong-reading mask-wearing 30yos during the holidays? No. That will contribute much less harm-to-benefit than many, many other actions.

If people in their 30s live as normal, transmission will not be controlled and the health system will be stressed even further.

https://www.news5cleveland.com/news/continuing-coverage/coronavirus/local-coronavirus-news/gov-dewine-orders-ohio-national-guard-to-help-understaffed-hospitals-with-rising-covid-cases

2Connor_Flexman
First, I think we are all still pretty far from living as normal. Many things in our past lives would have been more than 1k microcovids. Second, even the most informal versions of test and trace (telling your friends if you develop any symptoms, so they can tell their friends) can significantly reduce transmission rate. Third, all this is in the context of the holidays. Fourth, 30yos are not the only segment of society. Fifth, the health care system is not yet close to capacity in almost all places (if your local hospitals are overwhelmed, obviously do not act normally). Etc

I am surprised that most people have not studied virus spreading dynamics even after two years of COVID. For any large-scale plague, transmission causes most of the loss of life. Assume the severity of COVID is reduced to the same level as a flu, and ignore long COVID.

Now think about a swine flu that is ten times more transmissible. Individuals tend to think it is acceptable. However, it could cause millions of deaths in the US alone.

It is not possible that 100% will get it. 

https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology
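The compartmental framework linked above makes the "not 100%" point quantitative through the standard SIR final-size relation, A = 1 - exp(-R0 * A). A small sketch (my illustration; the R0 values are assumptions):

```python
# Final attack rate A solves A = 1 - exp(-R0 * A); illustrative R0 values only.
import math

def final_attack_rate(r0, iterations=200):
    a = 0.5  # initial guess; fixed-point iteration converges for R0 > 1
    for _ in range(iterations):
        a = 1 - math.exp(-r0 * a)
    return a

for r0 in (1.5, 3.0, 6.0, 10.0):
    print(f"R0 = {r0:4.1f} -> final attack rate ~ {final_attack_rate(r0):.1%}")
```

Even for very large R0 the attack rate stays below 100%, though it can easily exceed 50%.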

1cistrane
It is possible that over 50% will get it.
1Yandong Zhang
https://twitter.com/DrEricDing/status/1469723185084084225?s=20   Please check the calculation part. I hope the health system will not be stressed out by Omicron.
1JBlack
I was presuming that we (and many other readers) are already familiar with such simplistic models. I don't know why you are asking me to do calculations using them when my post explicitly notes some of the errors in the assumptions of such models, and how the actual spread of infectious diseases does not follow such models as scale increases.
1Yandong Zhang
"the chance of contracting disease at all compared with those who are not vaccinated (~40-70% for Delta, reduced to maybe ~10-30% for Omicron);"   Do you have a link to the peer review papers about the above item? 

A simple calculation suggests that the transmission rate contributes much more to the loss of life than the mortality rate does. Any measure that increases transmission will cancel the vaccines' linear contribution to the death rate. The first priority of a vaccine should be preventing transmission, not lowering the mortality rate.
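A rough version of that calculation (my illustration; the growth rates and fatality ratios are assumed, not estimates of real parameters): over a fixed window, transmission acts on the exponent, while the fatality ratio only scales the result linearly.

```python
# Compare an exponential lever (doubling time) with a linear lever (IFR).
# All numbers are assumptions for illustration; saturation is ignored.
def deaths(initial_cases, doubling_days, ifr, horizon_days=60):
    cases = initial_cases * 2 ** (horizon_days / doubling_days)
    return cases * ifr

baseline   = deaths(1_000, doubling_days=4, ifr=0.005)
halved_ifr = deaths(1_000, doubling_days=4, ifr=0.0025)  # linear: cuts deaths 2x
slower     = deaths(1_000, doubling_days=6, ifr=0.005)   # exponential: cuts ~32x

print(f"baseline          : {baseline:,.0f}")
print(f"IFR halved        : {halved_ifr:,.0f}")
print(f"doubling 4d -> 6d : {slower:,.0f}")
```

Under these assumed numbers, slowing the doubling time from 4 to 6 days does far more over the window than halving the fatality ratio.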

After the two horrible years, any new thoughts?

We need to think ahead, before it is too late. Nobody can exclude the possibility that COVID will last for tens of years. I have some knowledge of genetic algorithms and understand the power of small mutations. I hope more research can be done to control/predict the consequences of mutation, as the mutation itself is not predictable.

The vaccine does not prevent transmission (to the best of my knowledge, for both Delta and Omicron; I am not anti-vaccine). A simple calculation suggests that the linear contribution of the vaccines (reducing the death rate, etc.) is dominated by the exponential contribution of Omicron's increased R0. It looks like the only equilibrium is still universal N95s/other PPE, in theory.

2Dagon
I wonder if we mean different things by the word "equilibrium".  I think it means a somewhat stable outcome as a balance of opposing forces.  Universal proper masks isn't happening and isn't sustainable (people hate it), so will never be an equilibrium. Vaccines that bring individual risk down to tolerable levels, and some amount of NPI theater, so that the disease continues to infect people but most people aren't severely harmed could be a possible equilibrium, but I don't think it's been going on very widely for long enough to call it one yet.
2Dagon
I'd have won my bet on a technicality - I see almost zero proper use of quality masks, though pretty pervasive mask theater.  Seems prior infection is slightly less long-term effective than hoped, and vaccination about as expected.   We're closest to equilibrium #1+2 (vaccines fairly effective at preventing serious problems, but not at eliminating the virus entirely), but we haven't seen a variant that really tests our responsiveness, so I'd argue it's not a long-term equilibrium yet.  Nobody talks about herd immunity any longer, as far as I can tell.  
Answer by Yandong Zhang10

After two years, should this post get more upvotes? 

1Yandong Zhang
We need to think ahead, before it is too late. Nobody can exclude the possibility that COVID will last for tens of years. I have some knowledge of genetic algorithms and understand the power of small mutations. I hope more research can be done to control/predict the consequences of mutation, as the mutation itself is not predictable.

Any new thoughts? It seems that the mutation of RNA is too fast. 

Unfortunately, no other possible equilibrium point of the COVID evolution has been observed so far. On the contrary, more COVID variants have appeared. I guess, sooner or later, people as a whole will learn to use masks, and then better protective gear.

https://orgmode.org/worg/org-gtd-etc.html

I would like to recommend Emacs' Org mode and some discussion of its relationship with the GTD technique.

Even if (1), (2), and (3) were proven true in the future, it would not be an apocalyptic scenario. People would only need to wear serious respirators while not at home. It is not a big deal, in my opinion.

I understand that there would be strong opposition to serious respirators. A picture of kids wearing scary respirators is kind of unthinkable to me. However, it is the only equilibrium point in which I do not see any scientific uncertainties.

Besides the theoretical considerations, in reality mine workers have used respirators to protect their lungs for years.

1Yandong Zhang
Any new thoughts? It seems that the mutation of RNA is too fast. 

Herd immunity may not be reachable, since we do not know how long the immune effects last for infected people.

3Dagon
Correct. It's not clear which long-term equilibria are likely. Early evidence of mutations ( https://www.scmp.com/news/china/science/article/3080771/coronavirus-mutations-affect-deadliness-strains-chinese-study ) could help or hinder some of these - if mutations are common and large enough to make previous antibodies ineffective, the immunity and immunization ones get tougher. If mutations change the lethality, but not the immune response effectiveness, then the pervasive low-level infection of less-lethal variants become more possible. I will bet a lot against full-time serious mask (well-fitted N95 or better) use for a majority of people for more than a few months.

It is not a joke; also, English is not my mother tongue. However, the above proof is the only proof of a possible ending of COVID-19, since (as I posted in another topic):

"Everybody wearing a respirator could be one of the equilibrium point of the social evolution under the COVID-19, though may be not the only one. Unfortunately, I did not figure other equilibrium point yet. To my best knowledge, nobody gives other end point of the social evolution in a rigors way. "

If anybody could propose another equilibrium point of the social evolution, I would be more than happy.

1Yandong Zhang
Unfortunately, no other possible equilibrium point of the COVID evolution has been observed so far. On the contrary, more COVID variants have appeared. I guess, sooner or later, people as a whole will learn to use masks, and then better protective gear.

The look of kids with respirators is kind of scary, as seen in HK. I do not see any other issue with this strategy.

Everybody wearing a respirator could be one of the equilibrium points of the social evolution under COVID-19, though maybe not the only one. Unfortunately, I have not figured out other equilibrium points yet. To the best of my knowledge, nobody has given another end point of the social evolution in a rigorous way.

3jmh
Agree. My comments can certainly be seen as suggesting a starting approach and then refining that approach. However, I would actually expect to see a lot of zigs and zags, and possible multiple types of solutions that will work in different settings. About the only thing I would say I have any high confidence about is that we need to start doing something. Masks, just like telecommuting for some, is one starting point. It might just end up being the crutch used to get over some hurdle. So I see things more as a stage where we will likely do some things that could be called trial and error approaches (but the error needs to be more on the "we cannot do this because I cannot stand doing X" side and not "we cannot do it this way because we just tripled R0!" side)

On Amazon, the 3M 2097 filters are more expensive than before, but still available.

https://www.amazon.com/3m-2097/s?k=3m+2097

2Decius
Not much more expensive; less than 2x the price that ordinary users would have paid before. Grainger still lists them for regular price, and doesn't claim that every p100 filter is sold out yet, just the less-expensive ones. If it's worth the extra cost to pay for an Olive/Black/Magenta filter when you just want a Magenta one... https://www.grainger.com/category/safety/respiratory-protection/cartridges-and-filters

If a researcher were given 1000x more data and 1000x CPU power, would he switch to a brute-force approach? I do not see the connection between "data and computation power" and brute-force models.

2johnswentworth
A simple toy model: I roll a pair of dice many, many times. If we have a sufficiently large amount of data and computational power, then we can brute-force fit the distribution of outcomes - i.e. we can count how many times each pair of numbers is rolled, estimate the distribution of outcomes based solely on that, and get a very good fit to the distribution. By contrast, if we have only a small amount of data/compute, we need to be more efficient in order to get a good estimate of the distribution. We need a prior which accounts for the fact that there are two dice whose outcomes are probably roughly independent, or that the dice are probably roughly symmetric. Leveraging that model structure is more work for the programmer - we need to code that structure into the model, and check that it's correct, and so forth - but it lets us get good results with less data/compute.

So naturally, given more data/compute, people will avoid that extra modelling/programming work and lean towards more brute-force models - especially if they're just measuring success by fit to their data. But then, the distribution shifts - maybe one of the dice is swapped out for a weighted die. Because our brute force model has no internal structure, it doesn't have a way to re-use its information. It doesn't have a model of "two dice", it just has a model of "distribution of outcomes" - there's no notion of some outcomes corresponding to the same face on one of the two dice. But the more principled model does have that internal structure, so it can naturally re-use the still-valid subcomponents of the model when one subcomponent changes.

Conversely, additional data/compute doesn't really help us make our models more principled - that's mainly a problem of modelling/programming which currently needs to be handled by humans. To the extent that generalizability is the limiting factor to usefulness of models, additional data/compute alone doesn't help much - and indeed, despite the flagship applic
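A sketch of the dice example above (my code, not the commenter's; the weighted-die probabilities and sample sizes are arbitrary):

```python
# Brute-force joint-distribution fit vs. a structured model of two independent
# dice, and what happens when one die is swapped for a weighted one.
import numpy as np

rng = np.random.default_rng(0)

def roll(n, weights_b=None):
    """Roll die A (fair) and die B (fair or weighted) n times."""
    pb = np.array(weights_b, dtype=float) / sum(weights_b) if weights_b else np.full(6, 1 / 6)
    return rng.integers(0, 6, size=n), rng.choice(6, size=n, p=pb)

def brute_force_fit(a, b):
    """Count all 36 outcomes directly: 35 free parameters, no internal structure."""
    joint = np.zeros((6, 6))
    np.add.at(joint, (a, b), 1)
    return joint / joint.sum()

def structured_fit(a, b):
    """Assume two independent dice: two 6-cell marginals, 10 free parameters."""
    pa = np.bincount(a, minlength=6) / len(a)
    pb = np.bincount(b, minlength=6) / len(b)
    return np.outer(pa, pb)

true_joint = np.full((6, 6), 1 / 36)
a, b = roll(100)  # small sample: the structured model typically fits better
print("brute-force error:", round(float(np.abs(brute_force_fit(a, b) - true_joint).sum()), 3))
print("structured error :", round(float(np.abs(structured_fit(a, b) - true_joint).sum()), 3))

# Distribution shift: die B becomes weighted. The structured model re-uses the
# already-learned marginal for die A and only re-estimates die B; the
# brute-force table has no sub-components to re-use and must start over.
a2, b2 = roll(100, weights_b=[1, 1, 1, 1, 1, 5])
pa_reused = np.bincount(a, minlength=6) / len(a)
pb_new = np.bincount(b2, minlength=6) / len(b2)
shifted_structured = np.outer(pa_reused, pb_new)
print("after the shift, only die B's marginal had to be re-learned.")
```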

Such a great article! I thought AlexNet, which led to the recent AI breakthrough, could be viewed as a discontinuity too. The background and some statistical results are well summarized in the link below.

https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/

The graph evolution systems are:

[a] easy to state

[b] Turing complete


Conway's Game of Life also has the above two properties.
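To illustrate "easy to state" (my sketch, using the standard Game of Life rules; it is not from the post):

```python
# One step of Conway's Game of Life over a set of live cells. The whole rule is
# a few lines, yet the system is Turing complete.
from collections import Counter

def step(live):
    """live: set of (x, y) live cells; returns the next generation."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider returns to its own shape, shifted diagonally, every four generations.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    state = step(state)
print(sorted(state))  # the same glider, shifted by (1, 1)
```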

Answer by Yandong Zhang90

From the perspective of mathematical logic, string rewriting systems can be as powerful as a fully functional computer. The proposed graph evolution systems have the same power. The author described many well-explained good features of the system, and I was persuaded to try thinking about some science topics from the viewpoint of "graph evolution".

If in the future the author or others can obtain new physics findings by using this system, then that would be evidence that the new "fundamental ontology" has some advantages.

However, at this moment, I did ... (read more)

Can we reduce the issue of “we can't efficiently compute that update” by adding sensors?

What if we could get more data? When facing this type of difficulty, I would ask that question first.

1Kenny
No, he's referring to something like performing a Bayesian update over all computable hypotheses – that's incomputable (i.e. even in theory). It's infinitely beyond the capabilities of even a quantum computer the size of the universe. Think of it as a kind of (theoretical) 'upper bound' on the problem. None of the actual computable (i.e. on real-world computers built by humans) approximations to AIXI are very good in practice.
3johnswentworth
Yeah, the usual mechanism by which more data reduces computational difficulty is by directly identifying the values of some previously-latent variables. If we know the value of a variable precisely, then that's easy to represent; the difficult-to-represent distributions are those where there's a bunch of variables whose uncertainty is large and tightly coupled.

When training new graduates from computer science programs, I often asked them to develop a simple website to predict the UP/DOWN probability of tomorrow's S&P index (closing price), using any machine learning model. Then, if the website reported a number very close to 50%, I would say the website worked well, since the S&P index is very close to a random walk. "What is the meaning of this work?" most of them would ask angrily. "50% of visitors will be impressed by your website."

I apologize if you feel the story is irrelevant. In my opinion, 50% prediction de

... (read more)
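A minimal sketch of that exercise (my code; it uses a synthetic random-walk series rather than real S&P prices, so only the qualitative point carries over): any pattern-based "prediction" of the next day's direction drifts toward 50%.

```python
# On a random walk, conditional UP/DOWN frequencies all converge to ~50%.
# Synthetic i.i.d. returns only; no real S&P data is used here.
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, size=5000)   # stand-in for daily log returns
up = (returns > 0).astype(int)

def conditional_up_probabilities(history, pattern_len=3):
    """P(next day UP | last `pattern_len` days' directions), by counting."""
    outcomes = {}
    for t in range(pattern_len, len(history)):
        key = tuple(history[t - pattern_len:t])
        outcomes.setdefault(key, []).append(history[t])
    return {k: float(np.mean(v)) for k, v in outcomes.items()}

probs = conditional_up_probabilities(up)
print("average predicted P(UP):", round(float(np.mean(list(probs.values()))), 3))
```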

Below is a simplified COVID-19 framework:

data acquisition ---> social engineering based on a model ---> better result


Yes, a better model would definitely be helpful. However (as someone else pointed out indirectly earlier), to the best of my knowledge there is no good, robust model for large-lag dynamic systems. Such models can easily lead to chaos and random-looking results. Thus, I believed that increasing the data-acquisition capability was the key (South Korea's approach).

[This comment is no longer endorsed by its author]
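A hedged illustration of the claim above that large-lag systems can easily produce chaotic, random-looking behavior (my example; the Mackey-Glass delay equation with its textbook parameters stands in for a generic lagged system and is not a COVID model):

```python
# Delayed feedback (Mackey-Glass, Euler discretization). Two nearly identical
# initial histories drift far apart, giving random-looking behaviour.
def mackey_glass(x0, steps=30000, dt=0.1, tau=17.0, beta=0.2, gamma=0.1, n=10):
    delay = int(tau / dt)
    x = [x0] * (delay + 1)                    # constant initial history
    for _ in range(steps):
        lagged = x[-delay - 1]
        dx = beta * lagged / (1 + lagged ** n) - gamma * x[-1]
        x.append(x[-1] + dx * dt)
    return x

a = mackey_glass(1.2)
b = mackey_glass(1.2 + 1e-6)                  # tiny perturbation of the history
gap = max(abs(u - v) for u, v in zip(a[-2000:], b[-2000:]))
print(f"late-time gap between trajectories: {gap:.3g}")  # far larger than 1e-6
```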

(1)

A wrong model can be useful if the action (based on the model) can compensate for the model's error effectively. Usually, you need to know some properties of the model's error.

(2)

Even a wrong model can be very useful. For example, "the Earth is flat." That wrong model set up the question correctly, so that people could start thinking about the shape of the Earth.

[This comment is no longer endorsed by its author]

Also, people can design a specific application environment to reduce the bad effects resulting from the error of the model.

A lesson from the last 30 years of AI development: data and computational power are the key factors of improvement.

Thus, IMHO, for obtaining a better model, the most reliable approach is to get more data.

1Kenny
A lot of AI development has been in relatively 'toy' domains – compared to modeling the Earth's climate! Sometimes what is needed beyond just more data (of the same type) is a different type of data.
4johnswentworth
I'd push back on this pretty strongly: data and computation power, devoid of principled modelling, have historically failed very badly to make forward-looking predictions, especially in economics. That was exactly the topic of the famous Lucas critique. The main problem is causality: brute-force models usually just learn distributions, so they completely fail when distributions shift.
2Pattern
This would make a good answer.