All of samshap's Comments + Replies

There are, but what does having a length below 10^90 have to do with the Solomonoff prior? There's no upper bound on the length of programs.

Yes, you are missing something.

Any DEADCODE that can be added to a 1kb program can also be added to a 2kb program. The net effect is a wash, and you will end up with a  ratio over priors

4Lucius Bushnaq
Why aren’t there 2^{1000} less programs with such dead code and a total length below 10^{90} for p_2, compared to p_1?

Thirder here (with acknowledgement that the real answer is to taboo 'probability' and figure out why we actually care)

The subjective indistinguishability of the two Tails wakeups is not a counterargument - it's part of the basic premise of the problem. If the two wakeups were distinguishable, being a halfer would be the right answer (for the first wakeup).

Your simplified examples/analogies really depend on that fact of distinguishability. Since you didn't specify whether or not your examples have it, and it changes the payoff structure, they're ambiguous.

I'll al... (read more)

Thanks for sharing that study. It looks like your team is already well-versed in this subject!

You wouldn't want something that's too hard to extract, but I think restricting yourself to a single encoder layer is too conservative - LLMs don't have to be able to fully extract the information from a layer in a single step.

I'd be curious to see how much closer a two-layer encoder would get to the ITO results.
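To make that concrete, here's a minimal sketch of the kind of two-layer encoder I have in mind (PyTorch; the dimensions, ReLU choices, and names are placeholder assumptions, not anyone's actual architecture):

```python
import torch
import torch.nn as nn

class TwoLayerEncoderSAE(nn.Module):
    """Sketch: an SAE whose encoder has one extra hidden layer; the decoder stays linear."""
    def __init__(self, d_model=768, d_enc_hidden=2048, d_dict=16384):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_model, d_enc_hidden),
            nn.ReLU(),
            nn.Linear(d_enc_hidden, d_dict),
            nn.ReLU(),  # keep feature activations non-negative
        )
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        f = self.encoder(x)        # sparse feature activations
        x_hat = self.decoder(f)    # linear reconstruction from features
        return x_hat, f
```

The decoder stays linear, so features are still directions in activation space; only the inference map from activations to features gets more expressive.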

Here's my longer reply.

I'm extremely excited by the work on SAEs and their potential for interpretability. However, I think there is a subtle misalignment between the SAE architecture and loss function on the one hand, and the actual desired objective on the other.

The SAE loss function is:

$\mathcal{L} = \lVert x - \hat{x} \rVert_2^2 + \lambda \lVert f \rVert_1$, where $\lVert \cdot \rVert_1$ is the $L_1$-norm,

or, writing the decoder explicitly,

$\mathcal{L} = \lVert x - W_{dec} f \rVert_2^2 + \lambda \lVert f \rVert_1$.

I would argue, however, that what you are actually trying to solve is the sparse coding problem:

$\min_{f} \lVert x - W_{dec} f \rVert_2^2 \text{ s.t. } \lVert f \rVert_0 \leq k$

where, imp... (read more)
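To spell out the contrast, here's a toy sketch of the two objectives (PyTorch; the dictionary D, the penalty weight, and the gradient-based inference loop are illustrative assumptions, not anything from the post):

```python
import torch
import torch.nn.functional as F

def sae_loss(x, x_hat, f, lam=1e-3):
    # SAE training loss: reconstruction error plus an L1 penalty on the
    # encoder's one-shot feature activations f.
    return F.mse_loss(x_hat, x) + lam * f.abs().sum(dim=-1).mean()

def sparse_code(x, D, lam=1e-3, n_steps=200, lr=0.1):
    # Sparse coding inference: optimize the codes f directly for each x,
    # rather than trusting a single forward pass through an encoder.
    # D: (n_features, d_model) dictionary of feature directions.
    f = torch.zeros(x.shape[0], D.shape[0], requires_grad=True)
    opt = torch.optim.Adam([f], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = F.mse_loss(f @ D, x) + lam * f.abs().sum(dim=-1).mean()
        loss.backward()
        opt.step()
    return f.detach()
```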

2Neel Nanda
Interesting! You might be interested in a post from my team on inference-time optimization It's not clear to me what the right call here is though, because you want f to be something the model could extract. The encoder being so simple is in some ways a feature, not a bug - I wouldn't want it to be eg a deep model, because the LLM can't easily extract that!

This is great work. My recommendation: add a term in your loss function that penalizes features with high cosine similarity.

I think there is a strong theoretical underpinning for the results you are seeing.

I might try to reach out directly - some of my own academic work is directly relevant here.
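Roughly the kind of term I mean, as a sketch (PyTorch; the squared penalty and the weight are arbitrary choices on my part, not a tested recipe):

```python
import torch
import torch.nn.functional as F

def cosine_similarity_penalty(W_dec, weight=1e-2):
    # W_dec: (n_features, d_model) decoder weights, one direction per feature.
    W = F.normalize(W_dec, dim=-1)                     # unit-norm feature directions
    sims = W @ W.T                                     # pairwise cosine similarities
    off_diag = sims - torch.eye(W.shape[0], device=W.device)
    n = W.shape[0]
    return weight * (off_diag ** 2).sum() / (n * (n - 1))  # penalize similar pairs
```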

1Bart Bussmann
Interesting! I actually did a small experiment with this a while ago, but never really followed up on it. I would be interested to hear about your theoretical work in this space, so I sent you a DM :)

This is one of those cases where it might be useful to list out all the pros and cons of taking the 8 courses in question, and then thinking hard about which benefits could be achieved by other means.

Key benefits of taking a course (vs. independent study) beyond the signaling effect might include:

  • precommitting to learning a certain body of knowledge
  • curation of that body of knowledge by an experienced third party
  • additional learning and insight from partnerships / teamwork / office hours

But these depend on the courses and your personality. The precommi... (read more)

Instead of demanding orthogonal representations, just have them obey the restricted isometry property.

Basically, instead of requiring $\langle f_i, f_j \rangle = 0$ for all $i \neq j$, we just require $|\langle f_i, f_j \rangle| \leq \epsilon$.

This would allow a polynomial number of sparse shards while still allowing full recovery.
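A quick numerical illustration (numpy; the dimensions and counts are arbitrary). Pairwise incoherence, which is the standard sufficient condition for the restricted isometry property, lets you pack far more than d nearly-orthogonal features into d dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 512, 2000                      # far more features than dimensions
feats = rng.standard_normal((n, d))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

gram = feats @ feats.T
np.fill_diagonal(gram, 0.0)
print(f"max |<f_i, f_j>| over {n} features in {d} dims: {np.abs(gram).max():.3f}")
# Exact orthogonality would cap you at d features; bounding the pairwise
# inner products by a small epsilon does not.
```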

I think the success or failure of this model really depends on the nature and number of the factions. If interfactional competition gets too zero-sum (this might help us, but it helps them more, so we'll oppose it) then this just turns into stasis.

During ordinary times, vetocracy might be tolerable, but it will slowly degrade state capacity. During a crisis it can be fatal.

Even in America, we only see this factional veto in play in a subset of scenarios - legislation under divided government. Plenty of action at the executive level or in state governments doesn't have to worry about this.

You switch positions throughout the essay, sometimes in the same sentence!

"Completely remove efficacy testing requirements" (Motte) "... making the FDA a non-binding consumer protection and labeling agency" (Bailey)

"Restrict the FDA's mandatory authority to labeling" logically implies they can't regulate drug safety, and can't order recalls of dangerous products. Bailey! "... and make their efficacy testing completely non-binding" back to Motte again.

"Pharmaceutical manufactures can go through the FDA testing process and get the official “approved’ label i... (read more)

This is a Motte and Bailey argument.

The Motte is 'remove the FDA's ability to regulate drugs for efficacy'.

The Bailey is 'remove the FDA's ability to regulate drugs at all'.

The FDA doesn't just regulate drugs for efficacy, it regulates them for safety too. This undercuts your arguments about off-label prescriptions, which were still approved for use by the FDA as safe.

Relatedly, I'll note you did not address Scott's point on factory safety.

If you actually want to make the hardline position convincing, you need to clearly state and defend that the FDA should not regulate drugs for safety.

0Maxwell Tabarrok
It's not a Motte and Bailey because I don't switch between positions. My definition of the hardline position is to "restrict the FDA’s mandatory authority to labeling and make their efficacy testing completely non-binding."  I could have made an argument for removing FDA safety testing as well but I didn't. I am arguing only for the Motte against Scott's plan to expand supplements and experimental drugs.

The differentiation between CDT as a decision theory and FDT as a policy theory is very helpful at dispelling confusion. Well done.

However, why do you consider EDT a policy theory? It's just picking actions with the highest conditional utility. It does not model a 'policy' in the optimization equation.

Also, the ladder analogy here is unintuitive.

1Cole Wyeth
I suggest the paper I mentioned on sequential extensions of causal and evidential decision theory. Sequential policy evidential decision theory is definitely a policy theory. But sequential action evidential decision theory is a decision theory making slightly weaker assumptions than CDT. So it's not clear where the general category EDT should go; I think I'll update the post to be more precise about that.

This doesn't make sense to me. Why am I not allowed to update on still being in the game?

I noticed that in your problem setup you deliberately removed n=6 from being in the prior distribution. That feels like cheating to me - it seems like a perfectly valid hypothesis.

After seeing the first chamber come up empty, that should definitively update me away from n=6. Why can't I update away from n=5?
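Here's the update I have in mind, as a small sketch (the flat prior is just a placeholder):

```python
import numpy as np

n_vals = np.arange(7)          # hypotheses: n bullets in a 6-chamber revolver
prior = np.full(7, 1 / 7)      # flat prior that keeps n=6 on the table
p_empty = 1 - n_vals / 6       # chance the first chamber is empty under each n

posterior = prior * p_empty
posterior /= posterior.sum()
for n, p in zip(n_vals, posterior):
    print(f"n={n}: {p:.3f}")
# n=6 goes to exactly zero, and n=5 is down-weighted relative to n=0..4 -
# the same mechanism, just less extreme.
```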

3dr_s
Yes, the n=6 case is special. I didn't mean to "cheat" but I simply excluded it because it's trivial. But past the certainty that the game isn't rigged that much, you can't gain anything else. If you didn't condition on the probability of observing the sequence, nothing would actually change anyway. Your probability distribution would be P(n) ∝ (1 − n/6)^N (properly normalized, of course). This skews the distribution ever further towards low values of n, irrespective of any information about the actual gun. In other words, if you didn't quit at the beginning, this will never make you quit - you will think you're safer and safer by sheer virtue of playing longer, irrespective of whether you actually are. So, what use are you getting out of this information? None at all. If you are in a game that is worth playing, you gain zero; you would have played anyway. If you are not in a game that is worth playing, you lose in expectation the difference V − W_PLAY. So either way, this information is worthless. The only information that is useful is one that behaves differently (again, in expectation) between a world in which the optimal strategy is to play, and one in which the optimal strategy is to quit, and allows you to make better decisions. But there is no such useful information you can gain in this game upstream of your decision. Also please notice that in the second game, the one with the blanks, my criterion allows you to define a distribution of belief that actually you can get some use out of. But if we consistently applied your suggested criterion, and did not normalize over observable paths, then the belief after E empty chambers would just be P(b; E) = (E+1)b^E which behaves exactly like the function above. It's not really affected by your actual trajectory, it will simply convince you that playing is safer every time an empty chamber comes up, and can't change your optimal strategy. Which means, again, you can't get any usefulness out of it. This for an example of a gam

Counterpoint, robotaxis already exist: https://www.nytimes.com/2023/08/10/technology/driverless-cars-san-francisco.html

You should probably update your priors.

6Daniel Kokotajlo
From the OP:  I agree that robotaxis are pretty close. I think that AGI is also pretty close. 

Nope.

According to the CDC pulse survey you linked (https://www.cdc.gov/nchs/covid19/pulse/long-covid.htm), the metrics for long covid are trending down. This includes the 'currently experiencing', 'any limitations', and 'significant limitations' categories.

How is this in the wrong place?

Nice. This also matches my earlier observation that the epistemic failure is in not anticipating one's change in values. If you do anticipate it, you won't agree to this money pump.

I agree that the type of rationalization you've described is often practically rational. And it's at most a minor crime against epistemic rationality. If anything, the epistemic crime here is not anticipating that your preferences will change after you've made a choice.

However, I don't think this case is what people have in mind when they critique rationalization.

The more central case is when we rationalize decisions that affect other people; for example, Alice might make a decision that maximizes her preferences and disregards Bob's, but after the fact s... (read more)

2Kevin Dorst
Nice point. Yeah, that sounds right to me—I definitely think there are things in the vicinity and types of "rationalization" that are NOT rational.  The class of cases you're pointing to seems like a common type, and I think you're right that I should just restrict attention. "Preference rationalization" sounds like it might get the scope right. Sometimes people use "rationalization" to by definition be irrational—like "that's not a real reason, that's just a rationalization".  And it sounds like the cases you have in mind fit that mold. I hadn't thought as much about the cross of this with the ethical version of the case.  Of course, something can be (practically or epistemically) rational without being moral, so there are some versions of those cases that I'd still insist ARE rational even if we don't like how the agent acts. 
1Sweetgum
Yes, and another meaning of "rationalization" that people often talk about is inventing fake reasons for your own beliefs, which may also be practically rational in certain situations (certain false beliefs could be helpful to you) but it's obviously a major crime against epistemic rationality. I'm also not sure rationalizing your past personal decisions isn't an instance of this; the phrase "I made the right choice" could be interpreted as meaning you believe you would have been less satisfied now if you chose differently, and if this isn't true but you are trying to convince yourself it is to be happier then that is also a major crime against epistemic rationality.

I can use a laptop to hammer in a nail, but it's probably not the fastest or most reliable way to do so.

I don't see how this is more of a risk for a shutdown-seeking goal, than it is for any other utility function that depends on human behavior.

If anything, the right move here is for humans to commit to immediately complying with plausible threats from the shutdown-seeking AI (by shutting it down). Sure, this destroys the immediate utility of the AI, but on the other hand it drives a very beneficial higher level dynamic, pushing towards better and better alignment over time.

3Aaron_Scher
Yes, it seems like AI extortion and threat could be a problem for other AI designs. I'll take for example an AI that wants shut-down and is extorting humans by saying "I'll blow up this building if you don't shut me down" and an AI that wants staples and is saying "I'll blow up this building if you don't give me $100 for a staples factory." Here are some reasons I find the second case less worrying:  Shutdown is disvaluable to non-shutdown-seeking AIs (without other corrigibility solutions): An AI that values creating staples (or other non-shut-down goals) gets disvalue from being shut off, as this prevents it from achieving its goals; see instrumental convergence. Humans, upon being threatened by this AI, will aim to shut it off. The AI will know this and therefore has a weaker incentive to extort because it faces a cost in the form of potentially being shut-down. [omitted sentence about how an AI might deal with this situation]. For a shut-down seeking AI, humans trying to diffuse the threat by shutting off the AI is equivalent to humans giving in to the threat, so no additional cost is incurred.  From the perspective of the human you have more trust that the bargain is held up for a shut-down-seeking AI. Human action, AI goal, and preventing disvalue are all the same for shut-down-seeking AI. The situation with shut-down-seeking AI posing threats is that there is a direct causal link between shutting down the AI and reducing the harm it's causing (you don't have to give in to its demands and hope it follows through). For non-shut-down-seeking AI if you give in to extortion you are trusting that upon you e.g., helping it make staples, it will stop producing disvalue; these are not as strongly coupled as when the AI is seeking shut-down.    To the second part of your comment, I'm not sure what the optimal thing to do is; I'll leave it to the few researchers focusing on this kind of thing. I will probably stop commenting on this thread because it's plausibly bad

That assumption literally changes the nature of the problem, because the offer to bet is information that you are using to update your posterior probability.

You can repair that problem by always offering the bet and ignoring one of the bets on tails. But of course that feels like cheating - I think most people would agree that if the odds makers are consistently ignoring bets on one side, then the odds no longer reflect the underlying probability.

Maybe there's another formulation that gives 1:1 odds, but I can't think of it.
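To make the bet-counting issue explicit, here's a small simulation sketch (the stake of 1 per awakening and the payout conventions are my assumptions):

```python
import random

def expected_value(payout, count_both_tails_bets, n_trials=200_000):
    # Beauty stakes 1 on heads at each awakening and receives `payout`
    # (total, per counted bet) if the coin landed heads.
    total = 0.0
    for _ in range(n_trials):
        if random.random() < 0.5:     # heads: one awakening, one winning bet
            total += payout - 1
        else:                         # tails: two awakenings
            total -= 2 if count_both_tails_bets else 1
    return total / n_trials

print(expected_value(3, True))    # ~0: 2:1 odds break even if both tails bets count
print(expected_value(2, False))   # ~0: 1:1 odds break even if one tails bet is ignored
```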

3tgb
You're right that my construction was bad. But the number of bets does matter. Suppose instead that we're both undergoing this experiment (with the same coin flip simultaneously controlling both of us). We both wake up and I say, "After this is over, I'll pay you 1:1 if the coin was a heads." Is this deal favorable and do you accept? You'd first want to clarify how many times I'm going to pay out if we have this conversation two days in a row. (Does promising the same deal twice mean we just reaffirmed a single deal, or that we agreed to two separate, identical deals? It's ambiguous!) But which one is the correct model of the system? I don't think that's resolved. I do think phrasing it in terms of bets is useful: nobody disagrees on how you should bet if we've specified exactly how the betting is happening, which makes this much less concerning of a problem. But I don't think that specifying the betting makes it obvious how to resolve the original question absent betting.

To the second point, because humans are already general intelligences.

But more seriously, I think the monolithic AI approach will ultimately be uncompetitive with modular AI for real life applications. Modular AI dramatically reduces the search space. And I would contend that prediction over complex real life systems over long-term timescales will always be data-starved. Therefore being able to reduce your search space will be a critical competitive advantage, and worth the hit from having suboptimal interfaces.

Why is this relevant for alignment? Because y... (read more)

I take issue with the initial supposition:

  • How could the AI gain practical understanding of long-term planning if it's only trained on short time scales?
  • Writing code, how servers work, and how users behave seem like very different types of knowledge, operating with very different feedback mechanisms and learning rules. Why would you use a single, monolithic 'AI' to do all three?
4paulfchristiano
Existing language models are trained on the next word prediction task, but they have a reasonable understanding of the long-term dynamics of the world. It seems like that understanding will continue to improve even without increasing horizon length of the training. Why would you have a single human employee do jobs that touch on all three? Although they are different types of knowledge, many tasks involve understanding of all of these (and more), and the boundaries between them are fuzzy and poorly-defined such that it is difficult to cleanly decompose work. So it seems quite plausible that ML systems will incorporate many of these kinds of knowledge. Indeed, over the last few years it seems like ML systems have been moving towards this kind of integration (e.g. large LMs have all of this knowledge mixed together in the same way it mixes together in human work). That said, I'm not sure it's relevant to my point.

My weak prediction is that adding low levels of noise would change the polysemantic activations, but not the monosemantic ones.

Adding L1 to the loss allows the network to converge on solutions that are more monosemantic than otherwise, at the cost of some estimation error. Basically, the network is less likely to lean on polysemantic neurons to make up small errors. I think your best bet is to apply the L1 loss on the hidden layer and the output layer activations.
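Concretely, something like the following sketch (the regression loss and the penalty weights are placeholder assumptions about the toy-model setup, not a tuned recipe):

```python
import torch.nn.functional as F

def loss_with_activation_l1(model, x, y, lam_hidden=1e-3, lam_out=1e-3):
    # Assumes model(x) returns (hidden_acts, output_acts); purely illustrative.
    hidden, out = model(x)
    loss = F.mse_loss(out, y)                        # the original task loss
    loss = loss + lam_hidden * hidden.abs().mean()   # L1 on hidden-layer activations
    loss = loss + lam_out * out.abs().mean()         # L1 on output-layer activations
    return loss
```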

I've been thinking along very similar lines, and would probably generalize even further:

Hypothesis: All DNNs thus far developed are basically limited to system-1 like reasoning.

Great stuff!

Do you have results with noisy inputs?

The negative bias lines up well with previous sparse coding implementations: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=JHuo2D0AAAAJ&citation_for_view=JHuo2D0AAAAJ:u-x6o8ySG0sC

Note that in that research, the negative bias has a couple of meanings/implications:

  • It should correspond to the noise level in your input channel.
  • Higher negative biases directly contribute to the sparsity/monosemanticty of the network.

Along those lines, you might be able to further improve m... (read more)
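As a sketch of the mechanism (the -noise_sigma scaling of the bias is an illustrative assumption on my part, not a tuned rule):

```python
import torch

def thresholded_features(x, W, noise_sigma):
    # x: (batch, d_in) inputs, W: (n_features, d_in) feature directions.
    # ReLU with a negative bias acts like soft-thresholding: activations that
    # don't clear the threshold are treated as input noise and zeroed out,
    # which both denoises the reconstruction and increases sparsity.
    bias = -noise_sigma * torch.ones(W.shape[0])
    return torch.relu(x @ W.T + bias)
```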

2Adam Jermyn
Nope! Do you have predictions for what noise might do here? Oooo I'll definitely take a look. This looks very relevant. We don't have any noise, but we do think that the bias is serving a de-interference role (because features get packed together in a space that's not big enough to avoid interference). Can you say more about why? We know that L1 regularization on the activations (or weights) increases monosemanticity, but why do you think this would happen when done as part of the task loss?

Yes, but that was decades ago, when Yeltsin was president! The 'union state' has been moribund since the early aughts.

I have some technical background in neuromorphic AI.

There are certainly things that the current deep learning paradigm is bad at which are critical to animal intelligence: e.g. power efficiency, highly recurrent networks, and complex internal dynamics.

It's unclear to me whether any of these are necessary for AGI. Something, something executive function and global workspace theory?

I once would have said that feedback circuits used in the sensory cortex for predictive coding were a vital component, but apparently transformers can do similar tasks using purel... (read more)

If the world's governments decided tomorrow that RL was top-secret military technology (similar to nuclear weapons tech, for example), how much time would that buy us, if any? (Feel free to pick a different gateway technology for AGI, RL just seems like the most salient descriptor).

2aogara
Interesting question. As far as what government could do to slow down progress towards AGI, I'd also include access to high-end compute. Lots of RL is knowledge that's passed through papers or equations, and it can be hard to contain that kind of stuff. But shutting down physical compute servers seems easier. 
1plex
Depends whether they considered it a national security issue to win the arms race, and if they did how able they would be to absorb and keep the research teams working effectively.

In my model, Chevron and the US military are probably open to AI governance, because: 1 - they are institutions traditionally enmeshed in larger cooperative/rule-of-law systems, AND 2 - their leadership is unlikely to believe they can do AI 'better' than the larger AI community.

My worry is instead about criminal organizations and 'anti-social' states (e.g. North Korea) because of #1, and big tech because of #2.

Because of location, EA can (and should) make decent connections with US big tech. I think the bigger challenge will be tech companies in other countries, especially China.

I published an article on induction (https://www.lesswrong.com/posts/7x4eGxXL5DMwRwzDQ/commensurable-scientific-paradigms-or-computable-induction) of decent length/complexity that seems to have gotten no visibility at all, which I found very discouraging for my desire to ever do so again. I could only find it by checking my user profile!

3Ben Pace
Alas. I think re-posting in a week or so is fine.

I'm downvoting this, not because it's wrong or because of weak epistemics, but because politics is the mind killer, and this article is deliberately structured to make that worse.

I believe politically sensitive topics like this can be addressed on LessWrong, but the inflammatory headline and first sentence here are just clickbait.

3ebrodey
I understand that perspective and will try and do better to work within LW's guidelines. Thanks for the feedback. I think the first sentence was justified given the humanitarian costs associated with the sanctions.

Articles are hard! I was lucky enough to be raised bilingual, so I'm somewhat adept at navigating between different article schemes. I won't claim these are hard and fast rules in English, but:

1 - 'Curiosity' is an abstract noun (e.g. liberty, anger, parsimony). These generally don't have articles, unless you need some reason to distinguish between subcategories (e.g. 'the liberty of the yard' vs. 'the liberty of the French')

2 - 'Context' can refer to either a specific context (e.g. 'see in the proper context'), in which case the articles are included, or... (read more)

I'm confused.

In the counterfactual where lesswrong had the epistemic and moderation standards you desire, what would have been the result of the three posts in question, say three days after they were first posted? Can you explain why, using the standards you elucidated here?

(If you've answered this elsewhere, I apologize).

Full disclosure: I read all three of those posts, and downvoted the third post (and only that one), influenced in part by some of the comments to that post.

The three posts would all exist.

The first one would be near zero, karmawise, and possibly slightly negative.  It would include a substantial disclaimer, up front, noting and likely apologizing for the ways in which the first draft was misleading and underjustified.  This would be a result of the first ten comments containing at least three highly upvoted ones pointing that out, and calling for it.

The second post would be highly upvoted; Zoe's actual writing was well in line with what I think a LWer should upvote.  The comments would contain ... (read more)

"However there’s definitely an additional problem, which is that the fees are going to the city."

Money which the city could presumably use to purchase scarce and vital longshoreman labor.

The city is getting a windfall because it owns a scarce resource. Would you consider this a problem if the port were privately owned?

What Ryan is calling punishment is just an ECON 101 cost increase.

7Zvi
It's what's called a hold-up problem. LA+LB together are 40% of shipping, so they have a ton of pricing power even medium-term, and short-term they can effectively take more than all of your profits because the alternative is even worse. The cities could extract a substantial percentage of the value of international shipping, but the deadweight loss triangle involved would be gigantic, and the cost pass-through might destabilize the entire economy if they got too aggressive. Yes, you want to do enough of this to allocate via price, but there's the temptation to do far more than that in order to transfer wealth.  If the money is being used to improve the port, then I think I explicitly note that I'm fine with that, and I'm noting that I mostly disagree with Ryan on the fees being bad - I simply want the fees not to create incentives for the city that go against the public good.  If it was a privately owned port, it would be a different situation in many ways, and hopefully long term enterprise value would keep the port doing mostly the right things if it was truly private (and also it would have been made much more efficient by now and be implementing all these solutions!) but anything that big with this kind of leverage that was privately owned in 2021 is effectively not so private. 

I'm actually ok with the social pressures inherent in the activity. It's a subtle reminder of the real influence of this community. The fact that this community would enforce a certain norm makes me more likely to be a conscientious objector in contexts with the opposite norm. (This is true of historical C.O.s, who often come from religious communities).

I'd highly recommend 'The Bomber Mafia' by Malcolm Gladwell on this subject, which details the internal debates of the US Army Air Corps generals during WWII.

One of the key questions was whether to use the bombers to target strategic industries, or just for general attrition (i.e. firebombing of civilians). Obviously the first one would have been preferable from a humanitarian perspective (and likely would have ended the European War sooner), but it was very difficult to execute in practice.

I think the Bob example is very informative! I think there's an intuitive and logical reason why we think Bob and Edward are worse off. Their happiness is contingent on the masquerade continuing, which has a probability less than one in any plausible setup.

(The only exception to this would be if we're analyzing their lives after they are dead)

Yes, I was completely turned off from 'debate' as a formal endeavor as a high schooler, despite my love for informal debate.

One of the main problems is that debate contests are usually formulated as zero sum, whereas the typical informal debate I engage in is not.

Do you know of any formats for nonzero sum debate competitions where the competitors argue points they actually believe in? e.g. both debaters get more points if they identify a double-crux, and you win by having more points in the tournament as a whole, not by beating your opponent.

I believe that determinism and free will are both good models of reality, albeit at different conceptual levels.

Human brains are high dimensional chaotic systems. I believe that if you put a very smart human in a task that demands creativity and insight, it will be extremely difficult to predict what they'll do, even if you precisely knew their connectome and data inputs. Maybe that's not the same thing as a philosophical "free will", but I don't see how it would result in a different end experience. 

This chapter would make a great movie.

Russia's' has an extra quote.

Alice's explanation of the Bayesian model sounds like technobabble. Unless that was the intent, it could use a bit more elaboration.

1lsusr
Fixed the extra quote. Thanks.

Depends on the environment. My assumption is that the venue is sufficiently crowded that the tamperer would never be alone with the drink, and the main protection is their risk of being spotted.

A tamper proof solution would likely be far more costly to implement.

Answer by samshap200

Lids and straws. Presumably this would make slipping a drug in way more obvious.

8gilch
Also consider taping the lids on as a matter of course.
1Maxwell Peterson
Totally! I think that's the leader right now - someone mentioned a felt thing that fits in a pocket crumpled up but can be spread decently over a drink opening, so I ordered a few of those. But really I guess a lid is a normal thing that other places already provide, so maybe it's not that complicated!

"Miriam placed poker her hand against" should be "Miriam placed her hand" or "poked her hand"

4lsusr
Fixed. Thanks.

I think I agree. I hadn't realized the UK vaccination rates were so high. In that case I'll lean towards the pockets of unvaccinated reaching herd immunity + shorter incubation period hypothesis.

I agree that this seems to explain it, but it raises a new question: how did the antibody rate get so high? Is it possible that part of Delta's contagiousness is that it has a lot more carriers who don't get sick?

9theme_arrow
I actually don't think the high level of antibodies should be such a surprise. I updated my original comment to clarify, but much of that is from vaccination, not from natural infection. Between high rates of vaccination plus historical infections, it's not surprising to me that such a high fraction of adults in the UK have antibodies.

Good point! I'll edit my fermi analysis to reflect that.

Even in a scenario where all unvaccinated people were infected with covid, I would expect none of the Georgetown undergraduates to die from covid or get covid longer than 12 weeks.

Here's my fermi analysis:

  • in your 20s, covid CFR is .0001, compared to .01 for population as a whole.
  • covid longer than 12 weeks is .03 for covid population as a whole.
  • assume really long covid scales similarly to death and hospitalization
  • mRNA reduces these both by .9.

That gives us .03 x .01 x .1, for a case really long covid rate of .00003. .00003 x 6532 = .2 really long cov... (read more)
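The arithmetic, spelled out (same numbers as above):

```python
baseline_long_covid = 0.03       # P(symptoms > 12 weeks), whole covid population
age_scaling = 0.0001 / 0.01      # CFR in your 20s relative to overall CFR
vaccine_factor = 1 - 0.9         # mRNA assumed to cut risk by 90%
undergrads = 6532

rate = baseline_long_covid * age_scaling * vaccine_factor
print(rate, rate * undergrads)   # 3e-05 per case, ~0.2 expected cases
```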

3Bucky
and mRNA vaccines don't decrease hospitalisation by .9 given someone has become infected, they decrease it by .9 compared to an unvaccinated person given typical community exposure. So I think your calculation is more like "Even if all the Georgetown undergraduates were exposed to Delta in a way which would be sufficient to infect them if they were unvaccinated". I would estimate that to get back to your original scenario we probably have to multiply by 3-5 (depending on how much of the resistance to hospitalisation you think is purely resistance to getting infected in the first place).
-1VCM
Even if that is true, you would still get a) a lot of sickness & suffering, and b) infect a lot of other people (who infect further). So some people would be seriously ill and some would die as a result of this experiment.
2Neel Nanda
This doesn't at all feel obvious to me? At least, I'd put a decent (>20%) chance that this is not true. Eg Long COVID isn't that correlated with hospitalisation

He recommends that for communities, which presumably include significant numbers of unvaccinated folks. Which, if targeted to N95 or better masks, and actually enforced, could have a substantial effect!

But having members of the least infectious subpopulation voluntarily mask is pretty much useless.

As to your second point, there is strong evidence that is not the case: https://pubmed.ncbi.nlm.nih.gov/34250518/ Vaccinated individuals who get infected have substantially lower viral loads, and thus are substantially less contagious.

You reach the opposite conclusion from Tomas Pueyo (who seems to be your primary reference):

"If you’re vaccinated, you’re mostly safe, especially with mRNA vaccines. Keep your guard up for now, avoid events that might become super-spreaders, but you don’t need to worry much more than that."

Checking your math, I think your biggest error is equating long covid (at least one symptom still present after 28 days) with lifelong CFS. The vast majority seem to clear up in the next 8 weeks: https://www.nature.com/articles/s41591-021-01292-y

I believe the 64% reducti... (read more)

2Aaron Bergman
If long COVID usually clears up after eight weeks, that would definitely weaken my point (which would be good news!). I haven’t decided if it would change my overall stance on masking though.
5James Andrix
He also says: "Masks indoors and in crowds should be mandatory." Probably because: "If vaccinated people that end up having symptoms are as infectious as unvaccinated people with symptoms, you end up in a situation where even full vaccination won’t stop the epidemic, and you need a Delta-specific vaccine boost to stop it."
1AnthonyC
Came here to say exactly this, glad someone beat me to it. Also, I can't quite tell if the OP is recommending wearing masks or mandating wearing masks.