Haha fair — I wasn’t trying to overhaul the entire dating market with one LW shortform. I just noticed this helped a few people I know, so I figured I’d share it in case it resonates with others too.
Something I’ve noticed in dating apps that I think is actually useful for a majority of people: relying on incoming likes gives you much lower-quality matches. I’ve had >100 conversations and met ~5 people from my incoming likes. Nice people, but the chemistry just wasn’t there.
When I ignore all of that and only message profiles that feel genuinely high-potential to me, the matches are immediately better. Maybe 1 in 10 of my messages are responded to, but the funny thing is: it doesn’t feel like rejection at all. I forget the ones who don’t answer. The only things that stick are the good matches — and those almost always come from actively reaching out.
Example: someone had a line about valuing silence on their profile. I wrote “Silent first date?” and we actually did a completely silent first date. Super fun. That kind of thing is very unlikely to come from passively waiting on incoming likes.
If you want better matches, volume and proactiveness matter. Don’t rely on who shows up: go after who you actually want.
You’re absolutely right that, in principle, you want to think about both: how costly early action is and how often it turns out to be a false alarm. In a fully explicit model, you’d compare “how much harm do I avert if this really is bad news?” to “how often am I going to spend those costs for nothing?”
This note is deliberately staying one level up from that, and just looking at the training data people’s guts get. In everyday life, most of us accumulate a lot of “big scary thing that turned out fine” and “I waited and it was fine” stories, and very few vivid “I waited and that was obviously a huge mistake” stories.
In a world where some rare events can permanently uproot you or kill you, it can actually be fine – even optimal – to tolerate a lot of false alarms. My worry is that our intuitions don’t just learn “signals are noisy”; they slide into “waiting is usually safe”, which can push people’s personal thresholds higher than they’d endorse if they were doing the full cost–benefit tradeoff explicitly.
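To put a toy number on that tradeoff (the symbols and figures below are purely illustrative, not drawn from any of the cases in the note): let c be the cost of acting on a false alarm, L the harm averted if the warning turns out to be real, and p the probability that it is real. Acting early is worth it in expectation roughly when

$$p \cdot L > c \;\Longleftrightarrow\; p > \frac{c}{L}$$

So if a false alarm costs you 500 units of hassle and the averted harm is worth 500,000 units, the threshold is p > 0.001: you could act on a thousand alarms, have only one of them turn out to be real, and still roughly break even. That is the sense in which tolerating many false alarms can be optimal, even though each individual “it was fine” experience nudges the gut the other way.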
Thanks — I agree that early action can genuinely prevent disasters, and Y2K may well be a case where large-scale remediation averted serious failures. That’s an important distinction, and I’m not trying to deny it.
Instead, I am deliberately setting prevention mostly aside (though I can make that clearer), because the level I’m focusing on in this note is one step down from that system view: what things look like to a reasonably informed non-expert in advance, under uncertainty, before the outcome is known. For the purposes of this text, accounting for prevention would not change my conclusion. In 1998–1999, it wasn’t obvious to most people outside the remediation teams whether Y2K fixes were sufficient or even well coordinated. Expert assessments diverged, public information was mixed, and there was no way for a layperson to “test” the fix ahead of time. Some people responded to that murky situation by preparing early.
Afterwards, when the rollover produced no visible breakdowns, it became easy to reframe Y2K as a non-event or a clean mitigation success. But foresight and hindsight operate on different information. From the point of view of a typical person in 1999, you couldn’t know whether early preparation would turn out to be prudent or would later look unnecessary — that only becomes clear after the fact. A similar pattern shows up in nuclear brinkmanship: diplomats may succeed in preventing escalation, but families deciding whether to leave Washington or New York during a crisis have to act under incomplete information. They cannot rely on knowing in advance that prevention efforts will succeed.
In that sense, I actually think your point strengthens the mechanism I’m interested in. If someone now looks back at Y2K and sees it as a mitigation success (“the system handled it”), then their lived lesson is still “I waited and it was fine; professionals took care of it.” For many others who barely tracked the details and just remember that nothing bad seemed to happen where they lived, the felt lesson is similar: “I waited and it was fine.” Either way, doing nothing personally seemed to have worked, regardless of what beliefs, if any, people held about why there was no disaster, and that is exactly the kind of training signal I’m worried about for future timing decisions.
So I fully agree there can be real, competent prevention at the system level. My claim is about what these episodes teach individuals making timing choices under uncertainty. I’ll make that foresight–hindsight and system–individual distinction clearer in the Y2K section so readers don’t bounce off in the way you describe. Thanks for flagging it; this comment helps me see where the draft was under-explained. And none of my examples are completely clear-cut: the Gunnison example is actual system-level prevention, though at a near-individual level. I think that is generally the case when trying to split the actual, messy, complex world into cleanly delineated classes.
Side note: As I discuss in the note, one complication for future decisions is that institutional early-warning capacity may be weakening in some areas, while emerging technologies (especially in bio and AI) could create faster, harder-to-mitigate risks. So even if Y2K was ultimately a case where system-level remediation succeeded, that doesn’t guarantee the same dynamic will hold for future threats. But that’s a separate point from the hindsight/foresight issue you raised here.
Or take it one step further and just write comments like this one on popular posts! As someone said "My karma comes from thousands of comments, not from meaningful articles."
Just a note here - I am not sure that, e.g., a 5-log reduction would be much less expensive. The counterintuitive design with serial filtration fed into a positively pressurized bubble is already cheap even at the >10-log level. The reductions in cost from removing logs would stem from:
- Lower power demands, meaning one might get away with a somewhat smaller power system and/or a smaller-dimension air supply. However, this is nothing like a 50% cost reduction, more like 5-10%.
- One would need to buy fewer filters. But these are not extremely expensive; I would guess removing one filter would decrease overall cost by <5%.
Said differently, the "performance-cost curve" is kind of jumpy: below 3-5 logs it is very cheap, like just a regular HEPA air cleaner in your room and some sealant at windows and doors. Then the next step is this bubble, with relatively flat costs from 3-5 logs up to 13-16 logs. After that, I think one is looking at something markedly different and much more expensive, if such log levels even make physical sense.
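As a rough illustration of why the curve is flat in that middle range (the helper functions and the 99.95% per-filter efficiency below are my own illustrative assumptions, not figures from the post): with serial filtration the penetrations of the stages multiply, so their log reductions simply add, and each extra filter stage buys roughly another 3 logs for only the marginal filter and pressure-drop cost.

```python
import math

def logs_per_stage(efficiency: float) -> float:
    """Log10 reduction of one filter stage, e.g. 99.95% capture -> ~3.3 logs."""
    return -math.log10(1.0 - efficiency)

def total_logs(n_stages: int, efficiency: float = 0.9995) -> float:
    """Serial stages multiply penetration, so their log reductions add."""
    return n_stages * logs_per_stage(efficiency)

# Illustrative: HEPA-class filters assumed at 99.95% capture each
for n in range(1, 5):
    print(f"{n} stage(s): ~{total_logs(n):.1f} logs")
# 1 stage(s): ~3.3 logs
# 2 stage(s): ~6.6 logs
# 3 stage(s): ~9.9 logs
# 4 stage(s): ~13.2 logs
```

Dropping from ~13 logs to ~5 logs therefore only removes two or three filter stages plus some fan power, which is roughly the <5% per filter and 5-10% power savings mentioned above; the fixed cost of the pressurized bubble, air supply, and power system dominates either way.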
I might not have emphasized this sufficiently in the post, but the aim is not to achieve near 100% robustness. Instead, the goal is to provide people with a fair chance of survival in a subset of crisis scenarios. This concept is inspired by established systems like Nordic civilian defense against nuclear threats or lifeboats on ships. Neither of these protections guarantees survival for everyone—lifeboats, for instance, are not designed to save lives in every conceivable disaster, such as an airplane crash into shallow water at high speed.
The shelters are similarly intended to offer a reasonable chance of survival under specific catastrophic scenarios, recognizing that perfection is neither feasible nor necessary.
Determining the appropriate performance threshold will require ongoing dialogue and input from various stakeholders, including potential users. There are several considerations:
My initial intuition is that even if 70% of the units function effectively in a crisis, this would be a success. However, these thresholds should not be set arbitrarily—they should involve input from a wide range of stakeholders, particularly those who might depend on these shelters for survival.
For the current production, we plan to use certified components to ensure reliability. For example, the Camfil CamCube AC is certified and tested to Leakage Class C, meaning that the overall ductwork-filter assembly performs at least as well as the filter alone. This level of quality control significantly reduces the likelihood of leaks in the system.
It’s true that during a large-scale crisis, the luxury of certified components might not always be available. Your suggestion of using permanent bonds could indeed be a practical solution in such cases. As mentioned elsewhere, there is still time to prepare for scaling up production, which includes exploring how to adapt to components of varying sizes, qualities, and production environments. Ensuring robust performance across diverse conditions will be an important part of this preparation.
Hi Florin,
Thank you for raising these points. I’m breaking my responses into separate comments to ensure we tackle each thoroughly. Here, I’ll address your concerns about testing:
Testing for these shelters involves two distinct stages, each addressing a different challenge:
Stage 1 (design validation): This stage focuses on whether the design meets theoretical and engineering requirements for contamination prevention.
The good news is that we have time to carry out these tests thoroughly before shelters need to be deployed. This stage is about getting even higher certainty around core physics and engineering principles in a deliberate and methodical way.
Stage 2 (production quality): This stage ensures that individual shelters and suits perform to spec once they are mass-produced.
For the first stage, we already have time to test the fundamental design and physics—this is a well-defined engineering problem, albeit a challenging one. For the second stage, time and conditions are more constrained, especially in a sudden crisis. Scaling production while maintaining quality will be a major logistical challenge, which is why starting now (with prototypes and small-scale runs) is critical.
In summary, the feasibility of shelters rests on both validating the design (theoretical and physical testing) and ensuring that production methods consistently meet those validated standards. I’m cautiously optimistic about the first and focused on mitigating risks for the second through early preparation - this is exactly the type of work we now have time to perform at relatively low cost, and it might also be relevant for cleanroom work and related fields.
While rigorous testing will enhance confidence and could refine the design, the significant likelihood that the shelters will work as-is—supported by Los Alamos results and cleanroom precedent—suggests that they could prudently be deployed even without exhaustive testing if a crisis emerges and the above testing is not completed. This approach is not a matter of desperation but rather a strategic gamble with decent odds—akin to the logic behind Nordic nuclear bunkers, where survival is not guaranteed for every individual but the overall precaution substantially increases the chance of saving lives.
By leveraging existing knowledge and technology, we can make an informed decision to move forward under high-risk conditions, understanding that the alternative—inaction—could have catastrophic consequences. This dual approach balances the urgency of mitigating existential risks with the need for further refinement and testing where time allows.
I’d be interested to hear your thoughts on this distinction and whether it addresses your concerns. Looking forward to discussing your next point in detail!
Note for future work:
Look at roles or institutions with explicit early-action triggers — for example nuclear early-warning / launch-on-warning systems, where early action is pre-approved and procedurally mediated because delay is irrecoverable.
Not making a claim — just flagging this in case follow-on pieces explore how early-action systems are actually set up in practice.