All of khafra's Comments + Replies

khafra40

When there's little incentive against classifying harmless documents, and immense cost to making a mistake in the other direction, I'd expect overclassification to be rampant in these bureaucracies.

Your analysis of the default incentives is correct. However, if there is any institution that has noticed the mounds of skulls, it is the DoD. Overclassification, and classification for inappropriate reasons (explicitly enumerated in written guidance: avoiding embarrassment, covering up wrongdoing), are not allowed, and the DoD carries out audits of classified dat... (read more)

khafra*2010

As someone who has been allowed access into various private and government systems as a consultant, I think the near mode view for classified government systems is different for a reason. 


E.g., data is classified as Confidential when its release could cause damage to national security. It's Secret if it could cause serious damage to national security, and it's Top Secret if it could cause exceptionally grave damage to national security. 
People lose their jobs for accidentally putting a classified document onto the wrong system, even if it's still... (read more)

1Shankar Sivarajan
When there's little incentive against classifying harmless documents, and immense cost to making a mistake in the other direction, I'd expect overclassification to be rampant in these bureaucracies. And having documents basically be classified by default is handy if you're doing embarrassing things you'd rather not be public (or susceptible to FOIA requests). The claims that sidestepping procedural hurdles to enact significant reform of the system poses a serious threat to national security or whatever strike me as self-serving.  
2Purplehermann
I very much hope the computers brought in were vetted and kept airgapped. You keep systems separate, yes. For some reason I assumed that write permissions were on user in the actual system/secure network and any data exporting would be into secured systems. If they created a massive security leak for other nations to exploit, that's a crux for me on whether this was reckless.   Added: what kind of idiot purposely puts data in the wrong system? The DOGE guys doing this could somehow make sense, but governmental workers??
7robo
Sure, I think that's a fair objection!  Maybe, for a business, it may be worth paying the marginal security costs of giving 20 new people admin accounts, but for the federal government that security cost is too high. Is that what people are objecting to? I'm reading comments like this: And, I just don't think that's the case.  I think this is pretty-darn-usual and very normal in the management consulting / private equity world. I don't think foreign coups are a very good model for this?  Coups don't tend to start by bringing in data scientists. What I'm finding weird is...this was the action people thought worrying enough to make it to the LessWrong discussion.  Cutting red tape to unblock data scientists in cost-cutting shakeups -- that sometimes works well!  Assembling lists of all CIA officers and sending them emails, or trying to own the Gaza strip, or <take your pick>.  I'm far mode on these, have less direct experience, but they seem much more worrying.  Why did this make the threshold?
khafra110

The quoted paragraph is a reference to a CS Lewis essay about living under the threat of global thermonuclear war. The euphony and symmetry with the original quote is damaged by making it slightly more accurate by using that phrase instead of "if we are going to be destroyed by Zizianism."

khafra40

This is the most optimistic believable scenario I've seen in quite a while!

khafra2-2

And yet it behaves remarkably sensibly. Train a one-layer transformer on 80% of possible addition-mod-59 problems, and it learns one of two modular addition algorithms, which perform correctly on the remaining validation set. It's not a priori obvious that it would work that way! There are other possible functions on ℤ/59ℤ compatible with the training data.

Seems like Simplicia is missing the worrisome part--it's not that the AI will learn a more complex algorithm which is still compatible with the training data; it's that the simple... (read more)
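The combinatorial point can be made concrete. A rough count (my own back-of-the-envelope sketch, not from the original dialogue) of how many functions agree exactly with an 80% training split of the mod-59 addition table:

```python
import math

# Inputs are pairs (a, b) with a, b in Z/59; outputs are in Z/59.
n = 59
total_inputs = n * n                       # 3481 possible addition problems
seen = math.floor(0.8 * total_inputs)      # ~80% used for training
unseen = total_inputs - seen               # held-out inputs

# A function consistent with the training data may still take any of the
# 59 possible values on each unseen input, so the number of compatible
# functions is 59 to the power of the number of unseen inputs:
compatible = n ** unseen
digits = len(str(compatible))

print(f"{unseen} unseen inputs -> 59^{unseen}, a number with {digits} digits")
```

So gradient descent is selecting one function out of an astronomically large set that all fit the training data equally well.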

4Noosphere89
While @Zack_M_Davis handled the practical part, I'm not sure the claim that the simplest algorithms compatible with the training data kill all humans even OOD holds in something like Solomonoff induction. The reason is that I'm much more skeptical that "The Solomonoff Prior is Malign" is actually a valid argument, and I have a comment on where I think the argument goes wrong (in summary: it incorrectly assumes that simulating solipsist universes is cheaper than simulating non-solipsist universes, combined with the inability to get information on the average values of the civilization across the entire multiverse by simulating something; also, the probability distribution may not even exist in the most general case, if you accept the axiom of choice): https://www.lesswrong.com/posts/tDkYdyJSqe3DddtK4/alexander-gietelink-oldenziel-s-shortform#w2M3rjm6NdNY9WDez
6Zack_M_Davis
Simplicia: But how do you know that? Obviously, an arbitrarily powerful expected utility maximizer would kill all humans unless it had a very special utility function. Obviously, there exist programs which behave like a webtext-next-token-predictor given webtext-like input but superintelligently kill all humans on out-of-distribution inputs. Obviously, an arbitrarily powerful expected utility maximizer would be good at predicting webtext. But it's not at all clear that using gradient descent to approximate the webtext next-token-function gives you an arbitrarily powerful expected utility maximizer. Why would that happen? I'm not denying any of the vNM axioms; I'm saying I don't think the vNM axioms imply that.
khafra40

AFAICT, in the Highwayman example, if the would-be robber presents his ultimatum as "give me half your silk or I burn it all," the merchant should burn it all, same as if the robber says "give me 1% of your silk or I burn it all." 
But a slightly more sophisticated highwayman might say "this is a dangerous stretch of desert, and there are many dangerous, desperate people in those dunes. I have some influence with most of the groups in the next 20 miles. For x% of your silk, I will make sure you are unmolested for that portion of your travel." 
Then the merchant actually has to assign probabilities to a bunch of events, calculate Shapley values, and roll some dice for his mixed strategy. 
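A stripped-down version of that calculation (invented numbers, and ignoring the Shapley bookkeeping and mixed strategies): the merchant should pay a demanded fraction x only when the probability p that refusal actually costs him the cargo exceeds x.

```python
def expected_cargo(x, p, value=1.0):
    """Expected silk kept when paying a fraction x versus refusing,
    where refusal leads to total loss with probability p."""
    pay = (1 - x) * value       # hand over fraction x, keep the rest for sure
    refuse = (1 - p) * value    # keep everything unless the threat fires
    return max(pay, refuse), ("pay" if pay > refuse else "refuse")

# A 5% demand against a 20% chance the dunes really are dangerous: pay.
kept, choice = expected_cargo(x=0.05, p=0.20)

# A 50% demand against the same 20% risk: refuse and take your chances.
kept2, choice2 = expected_cargo(x=0.50, p=0.20)
```

This is only the first layer; the sophisticated highwayman's pitch works precisely because it moves p away from "certainly a bluff" toward something the merchant must actually estimate.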

khafra72

Tangentially to Tanagrabeast's "least you can do" suggestion, as a case report: I came out to my family as an AI xrisk worrier over a decade ago, when one could still do so in a fairly lighthearted way.  They didn't immediately start donating to MIRI and calling their senators to request an AI safety Manhattan Project, but they did agree with the arguments I presented, and check up with me, on occasion, about how the timelines and probabilities are looking. 

I have had two new employers since then, and a few groups of friends; and with each, when ... (read more)

khafra50

See also Steven Kaas' aphorisms on twitter:

> First Commandment of the Church of Tautology: Live next to thy neighbor  
And  
> "Whatever will be will be" is only the first secret of the tautomancers.
 

khafra120

The story I read about why neighbor polling is supposed to correct for bias in specifically the last few presidential elections is that some people plan to vote for Trump, but are ashamed of this, and don't want to admit it to people who aren't verified Trump supporters. So if you ask them who they plan to vote for, they'll dissemble. But if you ask them who their neighbors are voting for, that gives them permission to share their true opinion non-attributively. 

4DanielFilan
Yeah but a bunch of people might actually answer how their neighbours will vote, given that that's what the pollster asked - and if the question is phrased as the post assumes, that's going to be a massive issue.
5Linda Linsefors
If people are ashamed to vote for Trump, why would they let their neighbours know?
khafra20

In the late 80's, I was homeschooled, and studied calligraphy (as well as cursive); but I considered that more of a hobby than preparation for entering the workforce of 1000 years ago. 

I also learned a bit about DOS and BASIC, after being impressed with the fractal-generating program that the carpenter working on our house wrote, and demonstrated on our computer. 

khafra20

Your definition seems like it fits the Emperor of China example--by reputation, they had few competitors for being the most willing and able to pessimize another agent's utility function; e.g. 9 Familial Exterminations. 
And that seems to be a key to understanding this type of power, because if they were able to pessimize all other agents' utility functions, that would just be an evil mirror of bargaining power. Being able to choose a sharply limited number of unfortunate agents, and punish them severely pour encourager les autres, seems like it might ... (read more)

2tailcalled
Can you explain what this coordination would look like?
khafra30

Clarifying question: If A>B on the dominance hierarchy, that doesn't seem to mean that A can always just take all B's stuff, per the Emperor of China example. It also doesn't mean that A can trust B to act faithfully as A's agent, per the cowpox example. 

If all that dominance hierarchies control is who has to signal submission to whom, dominance seems only marginally useful for defense, law, taxes, and public expenditure; mostly as a way of reducing friction toward the outcome that would have happened anyway.

It seems like, with intelligence too ch... (read more)

2johnswentworth
I think that conclusion is basically correct.
6tailcalled
I think John Wentworth and I are modelling it in different ways and that may be the root of your confusion. To me, dominance is something like the credible ability and willingness to impose costs targeted at particular agents, whereas John Wentworth is more using the submission signalling definition.
khafra30

Note also that there are several free parameters in this example. E.g., I just moved to Germany, and now have wimpy German burners on my stove. If I put on a large container with 6L or more of water, and I do not cover it, the water will never go beyond bubble formation into a light simmer, let alone a rolling boil. If I cover the container at this steady state, it reaches a rolling boil in about another 90s. 
 

khafra1614

Is Patrick McKenzie (@patio11) another Matt Levine of fintech? Or is he something else? I know several people outside of the industry (including myself) who read pretty much everything he writes, which includes a lot of technical detail written very accessibly.

khafra82

I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn't have a leader they trust and respect.

Catholic EA: You have a leader you trust and respect, and defer to their judgement.

Sola Fide EA: You read 80k hours and Givewell, but you keep your own spreadsheet of EV calculations. 

Elizabeth104

This is a good point. In my ideal movement, it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don't want to be closer, and that I don't respect them), while being the sort of thing that requires leaders. 

khafra20

I'd be interested to know what the numbers on UV in ductwork look like over the past 5 years. When I had to get a new A/C system installed in 2020, they asked whether I wanted a UVC light installed in the air handler. I had, before then, been using a 70w UVC corn light I bought on Amazon to sterilize the exterior of groceries (back when we thought fomites might be a major transmission vector), and in improvised ductwork with fans and cardboard boxes taped together.
Getting a proper bulb--an optimal wavelength source--seemed like a big upgrade. Hard to come ... (read more)

khafra92

This is great! Everybody loves human intelligence augmentation, but I've never seen a taxonomy of it before, offering handholds for getting started. 

I'd say "software exobrain" is less "weaksauce," and more "80% of the peak benefits are already tapped out, for conscientious people who have heard of OneNote or Obsidian." I also am still holding out for bird neurons with portia spider architectural efficiency and human cranial volume; but I recognize that may not be as practical as it is cool.

khafra83

It's very standard advice to notice when a sense of urgency is being created by a counterparty in some transaction; and to reduce your trust in that counterparty as well as pausing.

It feels like a valuable observation, to me, that the counterparty could be internal--some unendorsed part of your own values, perhaps.

khafra*31

(e.g. in the hypothetical ‘harbinger tax’ world, you actively want to sabotage the resale value of everything you own that you want to actually use).

"Harberger tax," for anyone trying to look that up.

khafra60

If you can pay the claimed experts enough to submit to some testing, you could use Google's new doubly-efficient debate protocol to make them either spend some time colluding, or spend a lot more time in their efforts at deception: https://www.lesswrong.com/posts/79BPxvSsjzBkiSyTq/agi-safety-and-alignment-at-google-deepmind-a-summary-of

7johnswentworth
I haven't looked much at that work, but I strongly expect that it does not-at-all address the main difficult problems of outsourcing cognition. The problem isn't "figure out which experts are right about some legible falsifiable facts", the problem is "figure out which questions we should be asking and which stuff we should be paying attention to in the first place".
khafra73

This could exclude competent evaluators without other income--this isn't Dath Ilan, where a bank could evaluate evaluators and front them money at interest rates that depended on their probability of finding important risks--and their shortage of liquidity could provide a lever for distortion of their incentives. 

On Earth, if someone's working for you, and you're not giving them a salary commensurate with the task, there's a good chance they are getting compensation in other ways (some of which might be contrary to your goals).

khafra20

Thanks! Just what I was looking for.

khafra233

Some cities have dedicated LW/ACX Discord servers, which is pretty neat. Many of the cities hosting meetups over the next month are too small to have much traffic to such a server, were it set up. A combined, LW meetup oriented Discord server for all the smaller cities in the world, with channels for each city and a few channels for common small-meetup concerns, seems like a $20 bill on the sidewalk. So I’m checking whether such a thing exists here, before I start it.

8ChristianKl
Yes, such a thing does exist: https://discord.gg/mSek4Mmz
khafra100

I think the cruxes here are whether Aldi forced out small  retailers like Walmart did; and how significant the difference between Walmart and Aldi is, compared to the difference between Aldi and large, successful retail orgs in wentworthland or christiankiland. 

(my experience in German shopping is that most grocery stores are one of a half-dozen chains, most hardware stores are Bauhaus or OBI, but there isn't a dominant "everything" store like Walmart; Müller might be closest but its market dominance and scale is more like K-mart in the 90's than Walmart today.)

Answer by khafra20

An existing subgenre of this with several examples is the two-timer date. As I recall, it was popular in 90's sitcoms. Don't expect INT 18 tier scheming, but it does usually show the perspective of the people frantically trying to keep the deception running.

khafra20

Here's the intuition that's making me doubt the utility of provably correct system design to avoiding bridge crashes: 

I model the process leading up to a ship that doesn't crash into a bridge as having many steps. 

1. Marine engineers produce a design for a safe ship
2. Management signs off on the design without cutting essential safety features
3. Shipwrights build it to spec without cutting any essential corners
4. The ship operator understands and follows the operations and maintenance manuals, without cutting any essential corners
5. Nothing out-o... (read more)

9Steve_Omohundro
I totally agree in today's world! Today, we have management protocols which are aimed at requiring testing and record keeping to ensure that boats and ships are in the state we would like them to be. But these rules are subject to corruption and malfeasance (such as the 420 Boeing jets which incorporated defective parts and yet which are currently flying with passengers: https://doctorow.medium.com/https-pluralistic-net-2024-05-01-boeing-boeing-mrsa-2d9ba398bd54 ) But it appears we are rapidly moving to a world in which much of the physical labor will be done by robots and in which each physical system will have a corresponding "digital twin" (e.g. https://www.nvidia.com/en-us/omniverse/solutions/digital-twins/ ).  In that world, we can implement provable formal rules governing every system, from raw materials, to manufacture, to supply chain, to operations, and to maintenance.  In an AI world, much more sophisticated malfeasance can occur. Formal models of domains with proofs of adherence to rules and protection against adversaries are the only way to ensure our systems are safe and effective.
Answer by khafra30

https://space.mit.edu/home/tegmark/crazy.html Large universes put some subtleties into the meaning of "real" that aren't present in its common usage. 

Decision theory-wise, caring about versions of yourself that are inexorably about to dissolve into thermal noise doesn't seem useful. As a more general principle, caring about the decisions you make seems useful to the extent that those decisions can predictably change things.

 

My dreams have none of the consistency that allowed smart people to figure out the laws of nature over the millennia. It migh... (read more)

2Logan Zoellner
"I only care about conscious states where smart people are doing physics" has to be the most LessWrong take possible.
khafra90

That example seems particularly hard to ameliorate with provable safety. To focus on just one part, how could we prove the ship would not lose power long enough to crash into something? If you try to model the problem at the level of basic physics, it's obviously impossible. If you model it at the level of a circuit diagram, it's trivial--power sources on circuit diagrams do not experience failures. There's no obviously-correct model granularity; there are Schelling points, but what if threats to the power supply do not respect our Schelling points?

It seem... (read more)

In general, we can't prevent physical failures. What we can do is to accurately bound the probability of them occurring, to create designs which limit the damage that they cause, and to limit the ability of adversarial attacks to trigger and exploit them. We're  advocating for humanity's entire infrastructure to be upgraded with provable technology to put guaranteed bounds on failures at every level and to eliminate the need to trust potentially flawed or corrupt actors. 

In the case of the ship, there are both questions about the design of that s... (read more)
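The "accurately bound the probability" approach can be illustrated with a toy redundancy calculation (all numbers invented for illustration): if each of n independent power supplies fails in a given interval with probability p, the chance that all fail together is p to the n.

```python
def all_fail_probability(p_single, n):
    """Probability that every one of n power supplies fails at once,
    assuming independent failures. A real proof obligation would also
    have to argue independence (no common cause like fuel contamination)."""
    return p_single ** n

# One supply failing 1% of the time, versus three redundant supplies:
single = all_fail_probability(0.01, 1)   # 0.01
triple = all_fail_probability(0.01, 3)   # about one in a million
```

The hard part, as the comment above notes, is that the independence assumption is exactly the kind of thing an adversary (or a Schelling-point-ignoring failure mode) attacks.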

khafra61

I guess this is common knowledge, but I missed it: What is with the huge dip in CPI before 2020? I'm confused, especially because the 2008 crash barely shows up. A cursory googling and asking ChatGPT failed me.

6AnthonyC
IIRC it was a 2018 change in the calculation methodology. How they account for geographic variation and how they account for quality change over time, possibly among other things. So those might not be directly comparable numbers. That change in CPI price level does not show up if you look at y-o-y inflation. I assume they back-calculated the 2017 (and a few more years) numbers with the new methodology to do inflation calculations, but maybe actual price level data is harder to find to make a graph with?
khafra151

Anecdotally*, IPL/laser therapy seems to do all of these except increasing  dermal capillaries, which it instead reduces. This makes it ideal for people with rosacea or other inflammatory problems, and fair skin (which often accompanies these problems). 

*And with a few references: Effective treatment of rosacea using intense pulsed light systems - PubMed (nih.gov)
IPL irradiation rejuvenates skin collagen via the bidirectional regulation of MMP-1 and TGF-β1 mediated by MAPKs in fibroblasts - PubMed (nih.gov)
some studies find no significant effect ... (read more)

khafra61

You'll be happy to know that standards bodies have noticed the "entropy reduction from excessive rules" problem. The latest version of NIST Special Publication 800-63B says to disallow four password categories like "already in a breach database" and "aaaaa," but goes on to direct verifiers to not impose any other rules on password composition.
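A minimal checker in the spirit of that guidance (the breach list here is a toy stand-in; 800-63B points verifiers at real compromised-password corpora): reject only blocklisted or trivially repetitive passwords, and impose no composition rules.

```python
# Toy stand-in for a breach-database lookup.
BREACHED = {"password", "123456", "qwerty"}

def acceptable(password: str) -> bool:
    """Sketch of NIST SP 800-63B style verification: a minimum length,
    a blocklist check, and a repeated-single-character check -- but no
    'must contain a digit and a symbol' composition rules."""
    if len(password) < 8:                 # 800-63B minimum length
        return False
    if password.lower() in BREACHED:      # known-breached passwords
        return False
    if len(set(password)) == 1:           # "aaaaaaaa" and friends
        return False
    return True
```

Note that a long passphrase of common words sails through, which is the point: entropy comes from length and unpredictability, not forced character classes.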

As for me, I just choose the first four digits of the busy beaver numbers--1621--as my PIN. As a noncomputable number, it's guaranteed to be the most random choice possible. 

khafra30

One unstated, load-bearing assumption is that whatever service or good humans can trade to ASI will be of equal or greater worth to it than our subsistence income.  

khafra30

Land Value Tax would solve this.

(Sort of--funding UBI from a 100% LVT would solve it for the case of literal rent seeking, because if landlords increased the rent, that additional money would be taxed back into the UBI pool. To make it a general solution, you'd have to identify all instances of rent-seeking, and tax the underlying asset with a metaphorical 100% LVT).

2Gordon Seidoh Worley
In that case UBI seems like a bad policy in isolation, as it seems like it may only be effective if rent seeking is effectively curtailed.
khafra40

Sure, that's fair enough. I was thinking in the context of "formal verification that would have prevented this outage."

khafra20

It would specifically be impossible to prove the Crowdstrike driver safe because, by necessity, it regularly loads new data provided by Crowdstrike threat intelligence, and changes its behavior based on those updates.

Even if you could get the CS driver to refuse to load new updates without proving certain attributes of those updates, you would also need some kind of assurance of the characteristics of every other necessary part of the Windows OS, in every future update.

2mishka
No, let's keep in mind the Aegis fire control for missile defense example. This is a highly variable situation, the "enemy action" can come in many forms, from multiple directions at once, the weather can change rapidly, the fleet to defend might have a variety of compositions and spatial distributions, and so on. So one deals with a lot of variable and unpredictable factors. Yet, they were able to formally establish some properties of that software, presumably to satisfaction of their DoD customers. It does not mean that they have a fool-proof system, but the reliability is likely much better because of that effort at formal verification of software. With Windows, who knows. Perhaps it is even more complex than that. But formal methods are often able to account for a wide range of external situations and data. For a number of reasons, they nevertheless don't provide full guarantee (there is this trap of thinking, "formally verified => absolutely safe", it's important not to get caught into that trap; "formally verified" just means "much more reliable in practice"). I was trying to address a general point of whether a provably correct software is possible (obviously yes, since it is actually practiced occasionally for some mission-critical systems). I don't know if it makes sense to have that in the context of Windows kernels. From what people recently seem to say about Windows is that Microsoft is saying that the European regulator forced them not to refuse CrowdStrike-like updates (so much for thinking, "what could or should be done in a sane world").
khafra40

I feel like it's still Moloch to blame, if a sufficient bribe to landowners would cost less than indefinitely continued rent-seeking. 

I don't have any calculations to offer in support; but I would generally expect an individual landowner's time preference to be lower than society's as a whole, so I suspect this is indeed the case.

So the actual reason is that landowners don't want to be seen taking a bribe, because that would involve acknowledging they have been knowingly rent-seeking since 1879; and the government doesn't want to openly bribe them for moral hazard whatever; so even though everyone would be better off by their own lights it can't happen. And that's fairly moloch-flavored.

3Dagon
Note that if made in public, for legitimately-owned assets, and voluntarily accepted, the term is no longer "bribe" but "purchase".  A whole ton of my objections go away if the government (or even a private entity) is buying land and then figuring out the best use for it, charging optimal rent to the people who own the improvements separately.
3Gunnar_Zarncke
Would it work if the tax was raised very slowly? Like 10% points per generation. Would it work if the tax sets in only after death for privately owned property? That might significantly reduce resistance by individual land owners.
khafra12

Twitter has announced a new policy of deleting accounts which have had no activity for a few years. I used the Wayback Machine to archive Grognor's primary twitter account here. Hal Finney's wife is keeping his account alive. 
I do not know who else may have died, or cryo-suspended, over the years of LW; nor how long the window of action is to preserve the accounts.

khafra20

Or A*, which is a much more computationally efficient and deterministic way to minimize the distance to finish the maze, if you have an appropriate heuristic. I don't have an argument for it, but I feel like finding a good heuristic and leveraging it probably works very well as a generalizable strategy. 
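For concreteness, a bare-bones grid A* with a Manhattan-distance heuristic (which is admissible on a 4-connected maze, so the first time the goal is popped the path length is optimal):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; grid[r][c] == 1 is a wall.
    Returns the optimal number of steps, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]      # (f = g + h, g, position)
    best = {start: 0}
    while frontier:
        f, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
steps = astar(maze, (0, 0), (0, 2))
```

The heuristic is doing exactly the "minimize distance to the finish" work, but only as a tie-breaking guide inside an exact search, which is why it stays deterministic and correct.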

khafra50

Iran is an agent, with a constrained amount of critical resources like nuclear engineers, centrifuges, etc.

AI development is a robust, agent-agnostic process that has an unlimited number of researchers working in adjacent areas who could easily cross-train to fill a deficit, an unlimited number of labs which would hire researchers from DeepMind and OpenAI if they closed, and an unlimited amount of GPUs to apply to the problem. 

Probably efforts at getting the second-tier AI labs to take safety more seriously, in order to give the top tier more slack, w... (read more)

lc*170

Iran is an agent...

Iran is a country, not an agent. Important distinction and I'm not being pedantic here. Iran's aggressive military stance towards Israel is not quite the result of a robust, agent-agnostic process but it's not the result of a single person optimizing for some goal either.

with a constrained amount of critical resources like nuclear engineers, centrifuges, etc. AI development is a robust, agent-agnostic process that has an unlimited number of researchers working in adjacent areas who could easily cross-train to fill a deficit...and an unli

... (read more)
khafra30

For personal communications, meta-conversations seem fine.

If you're setting up an organization, though, you should consider adopting some existing, time-tested system for maintaining secrets. For example, you could classify secrets into categories--those which would cause exceptionally grave harm to the secret's originator's values (call this category, say, "TS"); those which would cause serious harm ("S"), and those which would cause some noticeable harm ("C"). Set down appropriate rules for the handling of each type of secret--for example, you might not ... (read more)

khafra40

The answer I came up with, before reading, is that the proper maxent distribution obviously isn't uniform over every planck interval from here until protons decay; it's also obviously not a gaussian with a midpoint halfway to when protons decay. But the next obvious answer is a truncated normal distribution. And that is not a thought conducive to sleeping well.
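For reference (standard maximum-entropy results, not spelled out in the comment): on a bounded support $[0, T]$, where $T$ is the proton-decay horizon, the maxent distribution under successively stronger constraints is

```latex
% No constraints beyond the support: uniform.
p(t) = \frac{1}{T}, \qquad 0 \le t \le T
% Known mean \mu: truncated exponential, with rate \lambda set by \mathbb{E}[t] = \mu.
p(t) \propto e^{-\lambda t}, \qquad 0 \le t \le T
% Known mean and variance: truncated normal.
p(t) \propto \exp\!\left(-\frac{(t-\mu)^2}{2\sigma^2}\right), \qquad 0 \le t \le T
```

so "truncated normal" is the maxent answer once you claim to know both a central estimate and a spread, which is what makes it the "next obvious answer."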

3wunan
If it's a normal distribution, what's the standard deviation?
khafra20

I've used Eliezer's prayer to good effect, but it's a bit short. And I have considered The Sons of Martha, but it's a bit long.

Has anyone, in their rationalist readings, found something that would work as a Thanksgiving invocation of a just-right length?

2Yoav Ravid
Perhaps you can adapt something from Dennet's THANK GOODNESS!
khafra60

Robin Hanson said,  with Eliezer eventually concurring, that "bets like this will just recover interest rates, which give the exchange rate between resources on one date and resources on another date."

E.g., it's not impossible to bet money on the end of the world, but it's impossible to do it in a way substantially different from taking a loan.
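The confounding can be shown in two lines (numbers invented): the market only observes one ratio, and different (interest rate, survival probability) pairs explain it equally well.

```python
def repayment_ratio(interest_rate, p_survive):
    """Alice takes A now and repays B later only if the world survives.
    A lender indifferent at market rate r needs A * (1 + r) = B * p_survive,
    so the observable ratio B / A only pins down (1 + r) / p_survive."""
    return (1 + interest_rate) / p_survive

# A 5% rate with certain survival is observationally identical to a
# 0% rate with ~95.24% survival odds:
a = repayment_ratio(0.05, 1.00)
b = repayment_ratio(0.00, 1 / 1.05)
```

That's the sense in which the price "recovers interest rates": the apocalypse probability and the discount rate enter only as a product and can't be separated from the bet's terms alone.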

2philh
Oh, thanks for the pointer. I confess I wish Robin was less terse here. I'm not sure I even understand the claim, what does it mean to "recover interest rates"? Is Robin claiming any such bet will either 1. Have payoffs such that [the person receiving money now and paying money later] could just take out a loan at prevailing interest rates to make this bet; or 2. Have at least one party who is being silly with money? ...oh, I think I get it, and IIUC the idea that fails is different from what I was suggesting. The idea that fails is that you can make a prediction market from these bets and use it to recover a probability of apocalypse. I agree that won't work, for the reason given: prices of these bets will be about both [probability of apocalypse] and [the value of money-now versus money-later, conditional on no apocalypse], and you can't separate those effects. I don't think this automatically sinks the simpler idea of: if Alice and Bob disagree about the probability of an apocalypse, they may be able to make a bet that both consider positive-expected-utility. And I don't think that bet would necessarily just be a combination of available market-rate loans? At least it doesn't look like anyone is claiming that.
khafra80

I built a thing.

UVC lamps deactivate viruses in the air, but harm skin, eyes, and DNA. So I made a short duct out of cardboard, with a 60W UVC corn bulb in a recessed compartment, and put a fan in it. 

I plan to run it whenever someone other than my wife and I visits my house. 

https://imgur.com/a/QrtAaUz

khafra20

Note that Mortal Engines--that steampunk movie with the mobile, carnivorous cities--was released halfway between the original publishing of this essay and today.

Given the difficulties people have mentioned with moving high-density housing between and through cities, maybe we need small cities on SMTs?

khafra00

These were some great questions. I doubt a few of the answers, however. For example:

My estimate of how far off LEV is with 50% probability started out at 25 years 15 or so years ago, and is now 17 years, so let’s use round numbers and say 20 years. Those estimates have always been explicitly "post-money", though - in other words, when I say the money would make 10 years of difference, I mean that without the money, it would be 30 years. I think $1B is enough to remove that factor of 2-3 that you mentioned in the previous question, i.e. to
... (read more)
1Bugle
I suspect he chose the bearded look originally because he looked young without it, many people (i.e. Alan Moore) explicitly choose it for those reasons. Now, I suspect he might shave it off altogether one day if he has a big breakthrough to publicize. The contrast would be quite impressive, even absent any actual technological intervention. tl;dr it's the beard
4Mati_Roy
I think you're underestimating how addictive alcohol is for some people
4emanuele ascani
He would probably say that he doesn't care (he works for others, not for himself) and that alcohol doesn't affect him, since people already kind of noted this and the answers were these. But tbh, this whole thing is not that interesting to me, and I would classify it as weak evidence for what he believes or not. Usually it is mainly gossip.
khafra20

The 100% efficacy for a middle filter layer that's had a saltwater + surfactant sprayed onto it sounds really good; but I wonder how tight the filter material has to be, for that level of efficacy. I also wonder how much air resistance the salt coat adds.

A HEPA filter + carbon would be less restrictive if the carbon part were salted than if the HEPA filter itself were salted, but that might not deactivate all of the virus.

khafra20

If virus exposure mid-illness worsens your symptoms, doesn't that mean being indoors is harmful? It would be far healthier to spend as much time outdoors as possible. Perhaps on a net hammock if you have to lie down, so your face isn't lying on a cloth full of the virus you're exhaling? Surely this effect would be so large that clinical studies would have noticed by now, people recovering much faster when they're not in a hospital room, or in a room at all.

On a gears-level, it seems like illness severity would be heavily dose-dependent... (read more)

khafra20

How many dimensions is inference space? How many duck-sized horses do we need, to have a 2/3 chance of taking those steps? And are they being modeled as duck-sized monkeys with typewriters, or are they closer to a proper mini-Einstein, who is likely to go the correct direction?

khafra40

I live in a hot region, and have a car parked outside. I've been putting non-heat-sensitive packages in there for a day, since interior temperatures should be going above 130F / 55C, and easily killing any viruses.

3Patrick Long
Ooh, good idea. Amazon Fresh seems to be working OK again. I didn't realize Pantry was separate from it until today, but that helps too since Pantry can just schedule to deliver 3 weeks out instead of having to keep going back looking for unavailable Fresh delivery times. I don't have a basement, but I do have a car. I stored my initial stockpile there just to differentiate it from normal food and not start eating it too soon. But contactless delivery+leaving stuff in there for longer than the virus can survive on its packaging makes this a lot easier.