The U.S. Food and Drug Administration (FDA)'s current advice on what to do about covid-19 is still pretty bad.
Hand-washing and food safety seem to just be wrong: as far as we can tell, covid-19 is almost entirely transmitted through the air, not via hands or food. Hand-washing is a good thing to do, but it won't help against covid-19, and talking about it displaces talk about things that actually do help.
6 feet of distance is completely irrelevant inside, but superfluous outside. Inside, distance doesn't matter - time does. Outside is so much safer than inside that you don't need to think about distance; you need to think about spending less time inside [in a space shared with other people] and more time outside.
Cloth face coverings are suboptimal compared to N95 or P100 masks, and you shouldn't wear a cloth face covering unless you are in a dire situation where an N95 or P100 isn't available. Of course a cloth covering is better than not wearing a mask, but that is a very low standard.
Donating blood is just irrelevant right now; we need to eliminate the virus. Yes, it's nice to help people, but talking about blood donation crowds out information that will help eliminate the virus.
Reporting fake tests is not exactly the most important thing for ordinary people to be thinking about. Sure, if you happen to come across a fake test, report it. But this is a distraction that displaces talk about what actually works.
Essentially every item on the FDA graphic is wrong.
In fact the CDC is still saying not to use N95 masks, in order to prevent supply shortages. This is incredibly stupid - we are a whole year into covid-19, there is no excuse for supply shortages, and if people are told not to wear them then there will never be an incentive to make more of them.
6 feet of distance is completely irrelevant inside, but superfluous outside.
That seems like a bold claim. Do you have a link to a page that goes into more detail on the evidence for it?
In fact the CDC is still saying not to use N95 masks, in order to prevent supply shortages. This is incredibly stupid - we are a whole year into covid-19, there is no excuse for supply shortages, and if people are told not to wear them then there will never be an incentive to make more of them.
Here in Germany, Bavaria decided as a first step to require N95 masks on public transport and in shops, and more German states may adopt this policy as time goes on.
One weird trick for estimating the expectation of Lognormally distributed random variables:
If you have a variable X that you think is somewhere between 1 and 100 and is Lognormally distributed, you can model it as a random variable with distribution ~ Lognormal(1,1) to base 10 - that is, log10(X) has distribution ~ Normal(1,1), with mean 1 and standard deviation 1.
What is the expectation of X?
Naively, you might say that since the expectation of log10(X) is 1, the expectation of X is 10^1, or 10. That seems to make sense: 10 is the midpoint of 1 and 100 on a log scale.
This is wrong, though: 10 is the median of X, not the mean. The possibility of much larger values dominates the expectation of X.
But how can you estimate that correction? It turns out that the rule you need gives 10^(1 + 1.15*1^2) ≈ 141.
In general, if X ~ Lognormal(a, b) where we are working to base 10 rather than base e - that is, log10(X) ~ Normal(a, b) with mean a and standard deviation b - this is the rule you need:
E(X) = 10^(a + 1.15*b^2)
The 1.15 is actually ln(10)/2 ≈ 1.1513.
For a product of several independent lognormals, you can just multiply these expectations together, which means adding in the exponent. If you have 2 or 3 factors which are all lognormal, the variance-associated corrections can easily add up to quite a lot.
Remember: add 1.15 times the sum of the squared log-standard-deviations!
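As a sanity check, here is a quick Monte Carlo simulation of the rule (a sketch in Python/NumPy, using the base-10 parameterization above; the exact constant ln(10)/2 ≈ 1.1513 is used in place of 1.15):

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ Lognormal(1, 1) to base 10: log10(X) ~ Normal(mean=1, sd=1)
a, b = 1.0, 1.0
x = 10 ** rng.normal(a, b, size=10_000_000)

print(x.mean())                           # Monte Carlo estimate of E(X), ~142
print(10 ** (a + np.log(10) / 2 * b**2))  # rule of thumb 10^(a + 1.15*b^2) ~= 141.7

# Product of two independent lognormals: the corrections add in the exponent.
a2, b2 = 0.5, 0.5
y = 10 ** rng.normal(a2, b2, size=10_000_000)
print((x * y).mean())                                      # Monte Carlo estimate
print(10 ** ((a + a2) + np.log(10) / 2 * (b**2 + b2**2)))  # add the a's and the 1.15*b^2 terms
```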
Regrettably, I have forgotten (or never knew) the proof, but it is on Wikipedia:
https://en.wikipedia.org/wiki/Log-normal_distribution
I suspect that it is some fairly low-grade integral/substitution trick.
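It is - a quick sketch in the notation above: with Y = log10(X) normal with mean a and standard deviation b, the expectation is the moment-generating function of a normal evaluated at t = ln(10), which is computed by completing the square in the Gaussian integral:

$$\mathbb{E}(X) = \mathbb{E}\left(e^{(\ln 10)\,Y}\right) = \exp\left(a \ln 10 + \tfrac{1}{2} b^2 (\ln 10)^2\right) = 10^{\,a + \frac{\ln 10}{2} b^2}.$$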
There may be no animal welfare gain to veganism
I remain unconvinced that there is any animal welfare gain to vegetarianism/veganism: farm animals have a strong desire to exist, and if we stopped eating them they would stop existing.
Vegetarianism/veganism exists for reasons of signalling; it would be surprising if it had any large net benefits other than signalling.
On top of this, the cost of mitigating most of the aspects of farming that animals disprefer is likely vastly smaller than the harm that giving up meat does to human health.
My back-of-the-envelope calculation is that making farming highly preferable to nonexistence for beef cattle raises the price by 25%-50%. I have some sources saying that ethically raised beef cattle have a cost of production of slightly more than $4.17/lb. Chicken has an ethical cost of production of $2.64/lb vs $0.87/lb (from the same source). But, taking into account various ethics-independent overheads, the consumer will not see those prices. For example, I cannot buy chicken for $0.87/lb; I pay about $3.25/lb. So I suspect that the true difference the consumer would see is in the 25%-50% range. The same source gives a smaller gap for pork: $6.76/lb vs $5.28/lb.
So, we could pay about 33% more for ethical meat that gives animals lives that are definitely preferable to nonexistence. The average consumer apparently spends about $1000/year on meat, so that's about 70 years × $333/year ≈ $23,000 over a lifetime.
Now, if we conservatively assume that vegetarianism/veganism costs, say, 2 quality-adjusted years of life expectancy due to nutritional deficiencies (ignoring the pleasure of eating meat here, and also ignoring the value to the animals of their own lives) - with a statistical value of life of $10 million, that's a cost of about $300,000.
If we value animal lives at say k% of a human life per unit time, and for simplicity assume that a person eats only $1000 of beef per year ≈ 200 lb ≈ 1/2 a cow, then each person causes the existence of about 0.75 cows on a permanent basis (1/2 a cow per year, each living for about 18 months), which is valued at 0.75 · k% · $10M. Vegans do not usually give an explicit value for k. Is an animal's life worth the same as a human's per year? 1/10th? 1/20th? 1/100th? In any case, it doesn't really matter what you pick here: any positive k counts in favour of the animals existing, so the conclusion is overdetermined.
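To make the arithmetic easy to check, here is the whole back-of-the-envelope in one place (a sketch: every figure is an assumption quoted from the text above, and the 1% value for k is purely illustrative):

```python
# Back-of-the-envelope cost-benefit of veganism vs paying extra for ethical meat.
# All numbers are assumptions from the text, not data.
years = 70                # consuming lifetime, years
meat_spend = 1000         # $/year spent on meat
premium = 0.33            # ~33% surcharge for ethically farmed meat
vsl = 10_000_000          # statistical value of a human life, $
qaly_loss = 2             # assumed quality-adjusted life-years lost to veganism

ethical_meat_premium = years * meat_spend * premium  # ~$23,000 per lifetime
veganism_health_cost = qaly_loss * vsl / years       # ~$286,000 (~$300,000 above)

# Value of the cattle's own existence: 1/2 a cow eaten per year, each cow
# living ~18 months, keeps ~0.75 cows permanently in existence.
k = 0.01   # illustrative: an animal's life valued at 1% of a human's per unit time
cattle_existence_value = 0.75 * k * vsl              # ~$75,000, forgone by vegans

print(f"ethical-meat premium:         ${ethical_meat_premium:,.0f}")
print(f"health cost of veganism:      ${veganism_health_cost:,.0f}")
print(f"cattle existence value lost:  ${cattle_existence_value:,.0f}")
```

On these assumptions the lifetime premium for ethical meat is an order of magnitude cheaper than the estimated health cost of veganism, and any positive k only widens the gap.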
So, based on these assumptions, veganism fails cost-benefit analysis compared to the alternative of simply paying a bit extra for farming techniques that give animals lives preferable to nonexistence.
Of course you could argue that veganism is good for human health, but I believe that is wrong due to bias and confounding (there are many similar screwups where a confounding effect, caused by something being popular with the upper class, swamps a causal effect in the other direction). There are, as far as I am aware, no good RCTs on veganism.
In summary, veganism is a signalling game that fails rational cost-benefit analysis.
This sounds to me like: "freeing your slaves is virtue signaling, because abolishing slavery is better". I agree with the second part, but it can be quite difficult for an individual or a small group to abolish slavery, while freeing your slaves is something you can do right now (and then suffer the economic consequences).
If I had a magical button that would change all meat factories into humane places, I would press it.
If there was a referendum on making humane farms mandatory, I would vote yes.
In the meanwhile, I can contribute a tiny bit to the reduction of animal suffering by reducing my meat consumption.
You may call it virtue signaling, I call it taking the available option, instead of dreaming about hypothetically better options that are currently not available.
I think this doesn't make sense any more, now that veganism is such a popular movement - one that shapes government policy and has huge influence over culture.
But a slightly different version of this argument is that because there's no signalling value in a collective decision to impose welfare standards, such a decision is very hard to turn into a political movement. So we may be looking at a heavily constrained system.
Nitpick: you did not prove that veganism is a signalling game. It might be, but it doesn't follow. People might be vegan for many reasons, e.g. taste, a different ethical framework, different key assumptions, habit, ...
Yes, I didn't address that here. But I think anyone who is vegan for non-signalling reasons is sort of mistaken.
Like, I cannot buy chicken for $0.87/lb, I pay about $6.50/lb
I'm sorry, what? I can in fact go buy boneless chicken thighs for $6.50/lb at Whole Foods in the Bay Area, but that is not what the average consumer is paying. Prices are in fact more like $1/lb for drumsticks, $1.50/lb for whole birds, and $3/lb for boneless thighs/breasts.
The Contrarian 'AI Alignment' Agenda
Overall Thesis: technical alignment is generally irrelevant to outcomes, yet almost everyone in the AI Alignment field is stuck on the opposite, incorrect assumption, working on technical alignment of LLMs.
(1) aligned superintelligence is provably logically realizable [already proved]
(2) aligned superintelligence is not just logically but also physically realizable [TBD]
(3) ML interpretability/mechanistic interpretability cannot possibly be logically necessary for aligned superintelligence [TBD]
(4) ML interpretability/mechanistic interpretability cannot possibly be logically sufficient for aligned superintelligence [TBD]
(5) given certain minimal intelligence, minimal emulation ability of humans by AI (e.g. the AI understands common-sense morality and cause and effect) and of AI by humans (humans can do multiplication, etc.), the internal details of AI models cannot possibly make a difference to the set of realizable good outcomes, though they can make a difference to the ease/efficiency of realizing them [TBD]
(6) given near-perfect or perfect technical alignment (= the AI will do what its creators ask of it, with correct intent), awful outcomes are a Nash Equilibrium for rational agents [TBD]
(7) small or even large alignment deviations make no fundamental difference to outcomes - the boundary between good and bad is determined by game theory, mechanism design and initial conditions, subject only to a satisficing condition on alignment fidelity, a bar which is below the level of alignment of current humans (and AIs) [TBD]
(8) There is no such thing as superintelligence anyway because intelligence factors into many specific expert systems rather than one all-encompassing general purpose thinker. No human has a job as a “thinker” - we are all quite specialized. Thus, it doesn’t make sense to talk about “aligning superintelligence”, but rather about “aligning civilization” (or some other entity which has the ability to control outcomes) [TBD]
No human has a job as a scribe, because literacy is 90%+.
I don't think that unipolar/multipolar scenarios differ greatly in outcomes.
No human has a job as a scribe
Yes, correct. But people have jobs as copywriters, secretaries, etc. People specialize, because that is the optimal way to get stuff done.
it doesn’t make sense to talk about “aligning superintelligence”, but rather about “aligning civilization” (or some other entity which has the ability to control outcomes)
The key insight here is that
(1) "Entities which do in fact control outcomes"
and
(2) "Entities which are near-optimal at solving the specific problem of grabbing power and wielding it"
and
(3) "Entities which are good at correctly solving a broad range of information processing/optimization problems"
are three distinct sets of entities, which the Yudkowsky/Bostrom/Russell paradigm of AI risk has smooshed into one ("The Godlike AI will be (3), so therefore it will be (2), so therefore it will be (1)!"). But reality may simply not work like that: if you look at the real world, (1), (2) and (3) really do come apart.
The gap between (3) and (2) is the advantage of specialization. Problem-solving is not a linear scale of goodness, it's an expanding cone where advances in some directions are irrelevant to other directions.
The gap between (1) and (2) - the difference between being best at getting power and actually having the most power - is the advantage of the incumbent. Powerful incumbents can be highly suboptimal and still win because of things like network effects, agglomeration effects, defender's advantage and so on.
There is also another gap here. It's the gap between making entities that are generically obedient, and making a power-structure that produces good outcomes. What is that gap? Well, entities can be generically obedient but still end up producing bad outcomes because of:
(a) coordination problems (see World War I)
(b) information problems (see things like the promotion of lobotomies or HRT for middle-aged women)
(c) political economy problems (see things like NIMBYism, banning plastic straws, TurboTax corruption)
Problems of type (a) happen when everyone wants a good outcome, but they can't coordinate on it, and defection strategies are dominant, so people end up in the bad Nash Equilibrium.
Problems of type (b) happen when everyone obediently walks off a cliff together. Supporting things like HRT for middle-aged women, or drinking a glass of red wine per week, was backed by science, but the science was actually bunk. People like to copy each other, and obedience makes this worse because dissenters are punished more. They're being disobedient, you see!
Problems of type (c) happen because a small group of people actually benefit from making the world worse, and it often turns out that that small group are the ones who get to decide whether to perpetuate that particular way of making the world worse!
For an example of the crushing advantage of specialization, see this tweet about how a tiny LLM with specialized training for multiplication of large numbers is better at it than cutting-edge general purpose LLMs.
I am proud to announce that I just solved* AI Alignment.
https://transhumanaxiology.substack.com/p/a-nonconstructive-existence-proof
*(some implementation details were left as an exercise for the reader)