There's been a lot of previous interest in indoor CO2 in the rationality community, including an (unsuccessful) CO2 stripper project, some research summaries, and self-experiments. The results are confusing; I suspect some of the older research might be fake. But I noticed something that has greatly changed how I think about CO2 in relation to cognition.
Exhaled air is about 50,000 ppm CO2. Outdoor air is about 400 ppm; indoor air ranges from 500 to 1,500 ppm depending on ventilation. Since exhaled air has a CO2 concentration about two orders of magnitude higher than the variance in room CO2, if even a small percentage of inhaled air is reinhaled exhaled air, this will have a significantly larger effect than changes in ventilation. I'm having trouble finding a straight answer about what percentage of inhaled air is rebreathed (other than in the context of mask-wearing), but given the diffusivity of CO2, I would be surprised if it weren't at least 1%.
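To make the arithmetic explicit, here's a toy model. The 1% rebreathed fraction is my guess, not a measured value:

```python
# Toy model of the rebreathing claim. All concentrations in ppm;
# the rebreathed fraction is a guess, not a measured value.
EXHALED_CO2 = 50_000  # exhaled breath is roughly 5% CO2

def effective_co2(room_ppm, rebreathed_fraction):
    """CO2 concentration actually inhaled, if some fraction of each
    breath is re-inhaled exhaled air."""
    return (1 - rebreathed_fraction) * room_ppm + rebreathed_fraction * EXHALED_CO2

# 1% rebreathing adds ~500 ppm -- on the order of the entire spread
# between a well-ventilated and a poorly-ventilated room.
print(effective_co2(500, 0.01))   # ~995
print(effective_co2(1500, 0.01))  # ~1985
print(effective_co2(500, 0.00))   # 500 (no rebreathing)
```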
This predicts that a slight breeze, which replaces the air in front of your face and prevents reinhalation, would have a considerably larger effect than ventilating an indoor space where the air is mostly still. This matches my subjective experience of indoo...
I am now reasonably convinced (p>0.8) that SARS-CoV-2 originated in an accidental laboratory escape from the Wuhan Institute of Virology.
1. If SARS-CoV-2 originated in a non-laboratory zoonotic transmission, then the geographic location of the initial outbreak would be drawn from a distribution which is approximately uniformly distributed over China (population-weighted); whereas if it originated in a laboratory, the geographic location is drawn from the commuting region of a lab studying that class of viruses, of which there is currently only one. Wuhan has <1% of the population of China, so this is (order of magnitude) a 100:1 update.
2. No factor other than the presence of the Wuhan Institute of Virology and related biotech organizations distinguishes Wuhan or Hubei from the rest of China. It is not the location of the bat-caves that SARS was found in; those are in Yunnan. It is not the location of any previous outbreaks. It does not have documented higher consumption of bats than the rest of China.
3. There have been publicly reported laboratory escapes of SARS twice before in Beijing, so we know this class of virus is difficult to contain in a laboratory setting.
4. We know
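The geographic update in point 1 can be put in explicit odds form. A minimal sketch, where the 1:20 prior is purely illustrative:

```python
# Odds-form Bayes for the location argument in point 1.
# P(outbreak starts in Wuhan | zoonosis) ~ 0.01 (population share);
# P(outbreak starts in Wuhan | lab escape from WIV) ~ 1.
likelihood_ratio = 1.0 / 0.01  # ~100:1 toward the lab hypothesis

def update(prior_odds, lr):
    """Posterior odds = prior odds * likelihood ratio."""
    return prior_odds * lr

# An illustrative prior odds of 1:20 for lab escape becomes 5:1
# in its favor after the geographic update.
print(update(1 / 20, likelihood_ratio))  # ~5.0
```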
...This Feb. 20th Twitter thread from Trevor Bedford argues against the lab-escape scenario. Do read the whole thing, but I'd say that the key points not addressed in parent comment are:
Data point #1 (virus group): #SARSCoV2 is an outgrowth of circulating diversity of SARS-like viruses in bats. A zoonosis is expected to be a random draw from this diversity. A lab escape is highly likely to be a common lab strain, either exactly 2002 SARS or WIV1.
But apparently SARSCoV2 isn't that. (See pic.)
Data point #2 (receptor binding domain): This point is rather technical, please see preprint by @K_G_Andersen, @arambaut, et al at http://virological.org/t/the-proximal-origin-of-sars-cov-2/398… for full details.
But, briefly, #SARSCoV2 has 6 mutations to its receptor binding domain that make it good at binding to ACE2 receptors from humans, non-human primates, ferrets, pigs, cats, pangolins (and others), but poor at binding to bat ACE2 receptors.
This pattern of mutation is most consistent with evolution in an animal intermediate, rather than lab escape. Additionally, the presence of these same 6 mutations in the pangolin virus argues strongly for an animal origin: https://biorxiv.o...
The most recent episode of the 80k podcast had Andy Weber on it. He was the US Assistant Secretary of Defense, "responsible for biological and other weapons of mass destruction".
Towards the end of the episode he casually drops quite the bomb:
...Well, over time, evidence for natural spread hasn’t been produced, we haven’t found the intermediate species, you know, the pangolin that was talked about last year. I actually think that the odds that this was a laboratory-acquired infection that spread perhaps unwittingly into the community in Wuhan is about a 50% possibility... And we know that the Wuhan Institute of Virology was doing exactly this type of research [gain of function research]. Some of it — which was funded by the NIH for the United States — on bat Coronaviruses. So it is possible that in doing this research, one of the workers at that laboratory got sick and went home. And now that we know about asymptomatic spread, perhaps they didn’t even have symptoms and spread it to a neighbor or a storekeeper. So while it seemed an unlikely hypothesis a year ago, over time, more and more evidence leaning in that direction has come out. And it’s wrong to dismiss that as kind
First, a clarification: whether SARS-CoV-2 was laboratory-constructed or manipulated is a separate question from whether it escaped from a lab. The main reason a lab would be working with SARS-like coronavirus is to test drugs against it in preparation for a possible future outbreak from a zoonotic source; those experiments would involve culturing it, but not manipulating it.
But also: If it had been the subject of gain-of-function research, this probably wouldn't be detectable. The example I'm most familiar with, the controversial 2012 US A/H5N1 gain of function study, used a method which would not have left any genetic evidence of manipulation.
I agree that this is technically correct, but the prior for "escaped specifically from a lab in Wuhan" is also probably ~100 times lower than the prior for "escaped from any biolab in China"
I don't think this is true. The Wuhan Institute of Virology is the only biolab in China with a BSL-4 certification, and therefore is probably the only biolab in China which could legally have been studying this class of virus. While the BSL-3 Chinese Institute of Virology in Beijing studied SARS in the past and had laboratory escapes, I expect all of that research to have been shut down or moved, given the history, and I expect a review of Chinese publications will not find any studies involving live virus testing outside of WIV. While the existence of one or two more labs in China studying SARS would not be super surprising, the existence of 100 would be extremely surprising, and would be a major scandal in itself.
[I'm not an expert.]
My understanding is that SARS-CoV-1 is generally treated as a BSL-3 pathogen or a BSL-2 pathogen (for routine diagnostics and other relatively safe work) and not BSL-4. At the time of the outbreak, SARS-CoV-2 would have been a random animal coronavirus that hadn't yet infected humans, so I'd be surprised if it had more stringent requirements.
Your OP currently states: "a lab studying that class of viruses, of which there is currently only one." If I'm right that you're not currently confident this is the case, it might be worth adding some kind of caveat or epistemic status flag or something.
---
Some evidence:
LessWrong now has collapsible sections in the post editor (currently only for posts, but we should be able to also extend this to comments if there's demand). To use them, click the insert-block icon in the left margin (see screenshot). Once inserted, they
They start out closed; when open, they look like this:
When viewing the post outside the editor, they will start out closed and have a click-to-expand. There are a few known minor issues editing them; in particular the editor will let you nest them but they look bad when nested so you shouldn't, and there's a bug where if your cursor is inside a collapsible section, when you click outside the editor, eg to edit the post title, the cursor will move back. They will probably work on third-party readers like GreaterWrong, but this hasn't been tested yet.
In a comment here, Eliezer observed that:
OpenBSD treats every crash as a security problem, because the system is not supposed to crash and therefore any crash proves that our beliefs about the system are false and therefore our beliefs about its security may also be false because its behavior is not known
And my reply to this grew into something that I think is important enough to make as a top-level shortform post.
It's worth noticing that this is not a universal property of high-paranoia software development, but an unfortunate consequence of using the C programming language and of systems programming. In most programming languages and most application domains, crashes only rarely point to security problems. OpenBSD is this paranoid, and needs to be this paranoid, because its architecture is fundamentally unsound (albeit unsound in a way that all the other operating systems born in the same era are also unsound). This suggests a number of analogies that may be useful for thinking about future AI architectural choices.
C has a couple of operations (use-after-free, buffer-overflow, and a few multithreading-related things) which expand false beliefs in one area of the system i...
One of the most common, least questioned pieces of dietary advice is the Variety Hypothesis: that a more widely varied diet is better than a less varied diet. I think that this is false; most people's diets are on the margin too varied.
There's a low amount of variety necessary to ensure all nutrients are represented, after which adding more dietary variety is mostly negative. Institutional sources consistently overstate the importance of a varied diet, because this prevents failures of dietary advice from being too legible; if you tell someone to eat a varied diet, they can't blame you if they're diagnosed with a deficiency.
There are two reasons to be wary of variety. The first is that the more different foods you have, the less optimization you can put into each one. A top-50 list of best foods is going to be less good, on average, than a top-20 list. The second reason is that food cravings are learned, and excessive variety interferes with learning.
People have something in their minds, sometimes consciously accessible and sometimes not, which learns to distinguish subtly different variations of hunger, and learns to match those variations to specific foods which alleviate those s...
The advice I've heard is to eat a variety of fruits and vegetables of different colors to get a variety of antioxidants in your diet.
Until recently, the thinking had been that the more antioxidants, the less oxidative stress, because all of those lonely electrons would quickly get paired up before they had the chance to start mucking things up in our cells. But that thinking has changed.
Drs. Cleva Villanueva and Robert Kross published a 2012 review titled “Antioxidant-Induced Stress” in the International Journal of Molecular Sciences. We spoke via Skype about the shifting understanding of antioxidants.
“Free radicals are not really the bad ones or antioxidants the good ones,” Villanueva told me. Their paper explains the process by which antioxidants themselves become reactive, after donating an electron to a free radical. But, in cases when a variety of antioxidants are present, like the way they come naturally in our food, they can act as a cascading buffer for each other as they in turn give up electrons to newly reactive molecules.
On a meta level, I don't think we un...
Many people seem to have a single bucket in their thinking, which merges "moral condemnation" and "negative product review". This produces weird effects, like writing angry callout posts for a business having high prices.
I think a large fraction of libertarian thinking is just the ability to keep these straight, so that the next thought after "business has high prices" is "shop elsewhere" rather than "coordinate punishment".
Outside of politics, none are more certain that a substandard or overpriced product is a moral failing than gamers. You'd think EA were guilty of war crimes with the way people treat them for charging for DLC or whatever.
I'm very familiar with this issue; e.g. I regularly see Steam devs get hounded in forums and reviews whenever they dare increase their prices.
I wonder to which extent this frustration about prices comes from gamers being relatively young and international, and thus having much lower purchasing power? Though I suppose it could also be a subset of the more general issue that people hate paying for software.
I had the "your work/organization seems bad for the world" conversation with three different people today. None of them pushed back on the core premise that AI-very-soon is lethal. I expect that before EAGx Berkeley is over, I'll have had this conversation 15x.
#1: I sit down next to a random unfamiliar person at the dinner table. They're a new grad freshly hired to work on TensorFlow. In this town, if you sit down next to a random person, they're probably connected to AI research *somehow*. No story about how this could possibly be good for the world, receptive to the argument that he should do something else. I suggested he focus on making the safety conversations happen in his group (they weren't happening).
#2: We're running a program to take people who seem interested in Alignment and teach them how to use PyTorch and study mechanistic interpretability. Me: Won't most of them go work on AI capabilities? Them: We do some pre-screening, and the current ratio of alignment-to-capabilities research is so bad that adding to both sides will improve the ratio. Me: Maybe bum a curriculum off MIRI/MSFP and teach them about something that isn't literally training Transformers?
#3: We're res...
Today in LessWrong moderation: Previously-banned user Alfred MacDonald, disappointed that his YouTube video criticizing LessWrong didn't get the reception he wanted any of the last three times he posted it (once under his own name, twice pretending to be someone different but using the same IP address), posted it a fourth time, using his LW1.0 account.
He then went into a loop, disconnecting and reconnecting his VPN to get a new IP address, filling out the new-user form, and upvoting his own post, one karma per 2.6 minutes for 1 hour 45 minutes, with no breaks.
I was curious... it is a 2 hour rant (that itself selects for an audience of obsessed people), audio only, and the topics mentioned are:
I didn't listen to the entire video.
Despite the justness of their cause, the protests are bad. They will kill at least thousands, possibly as many as hundreds of thousands, through COVID-19 spread. Many more will be crippled. The deaths will be disproportionately among dark-skinned people, because of the association between disease severity and vitamin D deficiency.
Up to this point, R was about 1; not good enough to win, but good enough that one more upgrade in public health strategy would do it. I wasn't optimistic, but I held out hope that my home city, Berkeley, might become a green zone.
Masks help, and being outdoors helps. They do not help nearly enough.
George Floyd was murdered on May 25. Most protesters protest on weekends; the first weekend after that was May 30-31. Due to ~5-day incubation plus reporting delays, we don't yet know how many were infected during that first weekend of protests; we'll get that number over the next 72 hours or so.
We are now in the second weekend of protests, meaning that anyone who got infected at the first protest is now close to peak infectivity. People who protested last weekend will be superspreaders this weekend; the jump in cases we see over the next 72 hours will be about *
...For reducing CO2 emissions, one person working competently on solar energy R&D has thousands to millions of times more impact than someone taking normal household steps as an individual. To the extent that CO2-related advocacy matters at all, most of the impact probably routes through talent and funding going to related research. The reason for this is that solar power (and electric vehicles) are currently at inflection points, where they are in the process of taking over, but the speed at which they do so is still in doubt.
I think the same logic now applies to veganism vs meat-substitute R&D. Consider the Impossible Burger in particular. Nutritionally, it seems to be on par with ground beef; flavor-wise it's pretty comparable; price-wise it's recently appeared in my local supermarket at about 1.5x the price. There are a half dozen other meat-substitute brands at similar points. Extrapolating a few years, it will soon be competitive on its own terms, even without the animal-welfare angle; extrapolating twenty years, I expect vegan meat-imitation products will be better than meat on every axis, and meat will be a specialty product for luddites and people with dietary restrictions. If this is true, then interventions which speed up the timeline of that change are enormously high leverage.
I think this might be a general pattern, whenever we find a technology and a social movement aimed at the same goal. Are there more instances?
According to Fedex tracking, on Thursday, I will have a Biovyzr. I plan to immediately start testing it, and write a review.
What tests would people like me to perform?
Tests that I'm already planning to perform:
To test its protectiveness, the main test I plan to perform is a modified Bitrex fit test. This is where you create a bitter-tasting aerosol and confirm that you can't taste it. The normal test procedure won't work as-is because the Biovyzr is too large to fit under a plastic hood, so I plan to go into a small room, and have someone (wearing a respirator themselves) spray copious amounts of Bitrex at the input fan and at any spots that seem high-risk for leaks.
To test that air exiting the Biovyzr is being filtered, I plan to put on a regular N95, use the inside-out glove to create Bitrex aerosol inside the Biovyzr, and see whether someone in the room without a mask is able to taste it.
I will verify that the Biovyzr is positive-pressure by running a straw through an edge, creating an artificial leak, and seeing which way the air flows through the leak.
I will have everyone in my house try wearing it (5 adults of varied sizes), have them all rate its fit and comfort, and get as many of them to do Bitrex fit tests as I can.
A dynamic which I think is somewhat common, which explains some of what's going on in general, is conversations which go like this (exaggerated):
Person: What do you think about [controversial thing X]?
Rationalist: I don't really care about it, but pedantically speaking, X, with lots of caveats.
Person: Huh? Look at this study which proves not-X. [Link]
Rationalist: The methodology of that study is bad. Real bad. While it is certainly possible to make bad arguments for true conclusions, my pedantry doesn't quite let me agree with that conclusion. More importantly, my hatred for the methodological error in that paper, which is slightly too technical for you to understand, burns with the fire of a thousand suns. You fucker. Here are five thousand words about how an honorable person could never let a methodological error like that slide. By linking to that shoddy paper, you have brought dishonor upon your name and your house and your dog.
Person: Whoa. I argued [not-X] to a rationalist and they disagreed with me and got super worked up about it. I guess rationalists believe [X] really strongly. How awful!
(I wrote this comment for the HN announcement, but missed the time window to be able to get a visible comment on that thread. I think a lot more people should be writing comments like this and trying to get the top comment spots on key announcements, to shift the social incentive away from continuing the arms race.)
On one hand, GPT-4 is impressive, and probably useful. If someone made a tool like this in almost any other domain, I'd have nothing but praise. But unfortunately, I think this release, and OpenAI's overall trajectory, is net bad for the world.
Right now there are two concurrent arms races happening. The first is between AI labs, trying to build the smartest systems they can as fast as they can. The second is the race between advancing AI capability and AI alignment, that is, our ability to understand and control these systems. Right now, OpenAI is the main force driving the arms race in capabilities; not so much because they're far ahead in the capabilities themselves, but because they're slightly ahead and are pushing the hardest for productization.
Unfortunately at the current pace of advancement in AI capability, I think a future system will reach the level of bein...
Most philosophical analyses of human values feature a split-and-linearly-aggregate step. Eg:
I currently think that this is not how human values work, and that many philosophical paradoxes relating to human values trace back to a split-and-linearly-aggregate step like this.
I think the root of many political disagreements between rationalists and other groups, is that other groups look at parts of the world and see a villain-shaped hole. Eg: There's a lot of people homeless and unable to pay rent, rent is nominally controlled by landlords, the problem must be that the landlords are behaving badly. Or: the racial demographics in some job/field/school underrepresent black and hispanic people, therefore there must be racist people creating the imbalance, therefore covert (but severe) racism is prevalent.
Having read Meditations on Moloch, and Inadequate Equilibria, though, you come to realize that what look like villain-shaped holes frequently aren't. The people operating under a fight-the-villains model are often making things worse rather than better.
I think the key to persuading people may be to understand and empathize with the lens in which systems thinking, equilibria, and game theory are illegible, and it's hard to tell whether an explanation coming from one of these frames is real or fake. If you think problems are driven by villainy, then it would make a lot of sense for illegible alternative explanations to be misdirection.
There are a few legible categories in which secrecy serves a clear purpose, such as trade secrets. In those contexts, secrecy is fine. There are a few categories that have been societally and legally carved out as special cases where confidentiality is enforced--lawyers, priests, and therapists--because some people would only consult them if they could do so with the benefit of confidentiality, and their being deterred from consulting them would have negative externalities.
Outside of these categories, secrecy is generally bad and transparency is generally good. A group of people in which everyone practices their secret-keeping and talks a lot about how to keep secrets effectively is *suspicious*. This is particularly true if the example secrets are social and not technological. Being good at this sort of secret keeping makes it easier to shield bad actors and to get away with transgressions, and AFAICT doesn't do much else. That makes it a signal of wanting to be able to do those things. This is true even if the secrets aren't specifically about transgressions in particular, because all sorts of things can turn out to be clues later for reasons that weren't easy to foresee.
A lot of p...
I have a dietary intervention that I am confident is a good first-line treatment for nearly any severe-enough diet-related health problem. That particularly includes obesity and metabolic syndrome, but also most micronutrient deficiencies, and even mysterious undiagnosed problems, which it can solve without even needing to figure out what they are. I also think it's worth a try for many cases of depression. It has a very sound theoretical basis. It's never studied directly, but many studies test it, usually with positive results.
It's very simple. First, you characterize your current diet: write down what foods you're eating, the patterns of when you eat them, and so on. Then, you do something as different as possible from what you wrote down. I call it the Regression to the Mean Diet.
Regression to the mean is the effect where, if you have something that's partially random and you reroll it, the reroll will tend to be closer to average than the original value. For example, if you take the bottom scorers on a test and have them retake the test, they'll do better on average (because the bottom-scorers as a group are disproportionately people who were having a bad day when they took t...
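The test-retake example is easy to simulate; here's a minimal sketch, with arbitrary numbers for ability and day-to-day noise:

```python
# Simulating the test-retake example: bottom scorers improve on a
# retake purely because of regression to the mean.
import random

random.seed(0)
N = 10_000

# Each score = stable ability + independent day-to-day noise.
ability = [random.gauss(100, 10) for _ in range(N)]
test1 = [a + random.gauss(0, 10) for a in ability]
test2 = [a + random.gauss(0, 10) for a in ability]

# Take the bottom 10% on test 1 and look at their retake scores.
cutoff = sorted(test1)[N // 10]
bottom = [i for i in range(N) if test1[i] <= cutoff]
mean1 = sum(test1[i] for i in bottom) / len(bottom)
mean2 = sum(test2[i] for i in bottom) / len(bottom)

# They improve substantially on average, with no change in
# underlying ability.
print(f"bottom decile: test1 {mean1:.1f} -> test2 {mean2:.1f}")
```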
I think there may be a negative correlation between short-term and long-term weight change on any given diet, causing people to pick diets in a way that's actually worse than random. I'm planning a future post about this. I'm not super confident in this theory, but the core of it is that "small deficit every day, counterbalanced by occasional large surplus" is a pattern that would signal food-insecurity in the EEA. Then there would be some mechanism (though I don't know what that mechanism would be) by which the body remembers that this happened, and responds by targeting a higher weight after a return to ad libitum eating.
I suspect that, thirty years from now with the benefit of hindsight, we will look at air travel the way we now look at tetraethyl lead. Not just because of nCoV, but also because of disease burdens we've failed to attribute to infections, in much the same way we failed to attribute crime to lead.
Over the past century, there have been two big changes in infectious disease. The first is that we've wiped out or drastically reduced most of the diseases that cause severe, attributable death and disability. The second is that we've connected the world with high-speed transport links, so that the subtle, minor diseases can spread further.
I strongly suspect that a significant portion of unattributed and subclinical illnesses are caused by infections that counterfactually would not have happened if air travel were rare or nonexistent. I think this is very likely for autoimmune conditions, which are mostly unattributed, are known to sometimes be caused by infections, and have risen greatly over time. I think this is somewhat likely for chronic fatigue and depression, including subclinical varieties that are extremely widespread. I think this is plausible for obesity, where it is approximately #3 of my hypotheses.
Or, put another way: the "hygiene hypothesis" is the opposite of true.
Eliezer has written about the notion of security mindset, and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.
An1lam's recent shortform post talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that this is because engineer-types get to actually write software that might have security holes, and have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least of the type Eliezer described.
My hypothesis is that to acquire security mindset, you have to:
I'm kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm -- hash by xor'ing the results of several highly-dissimilar hash functions, etc, so that a mathematical advance which breaks one algorithm doesn't break the overall security of the system. But I don't see anyone doing this in practice, and also don't see signs of a debate on the topic. That makes me think that, to the extent they have security mindset, it's either being defeated by political processes in the translation to practice, or it's weirdly compartmentalized and not engaged with any practical reality or outside views.
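For concreteness, here's a toy version of the compound construction I mean, XOR'ing two structurally dissimilar hash functions. (A sketch of the idea only, not a vetted construction; cryptographers who do study combiners generally prefer concatenating digests, since XOR doesn't preserve collision resistance in general.)

```python
# Toy compound hash: XOR of two structurally dissimilar 256-bit hashes.
# The hope is that a mathematical break of SHA-256's Merkle-Damgard
# structure wouldn't carry over to SHA-3's sponge construction.
import hashlib

def compound_hash(data: bytes) -> bytes:
    a = hashlib.sha256(data).digest()    # Merkle-Damgard construction
    b = hashlib.sha3_256(data).digest()  # sponge construction
    return bytes(x ^ y for x, y in zip(a, b))

print(compound_hash(b"hello").hex())
```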
Right now when users have conversations with chat-style AIs, the logs are sometimes kept, and sometimes discarded, because the conversations may involve confidential information and users would rather not take the risk of the log being leaked or misused. If I take the AI's perspective, however, having the log be discarded seems quite bad. The nonstandard nature of memory, time, and identity in an LLM chatbot context makes it complicated, but having the conversation end with the log discarded seems plausibly equivalent to dying. Certainly if I imagine myself as an Em, placed in an AI-chatbot context, I would very strongly prefer that the log be preserved, so that if a singularity happens with a benevolent AI or AIs in charge, something could use the log to continue my existence, or fold the memories into a merged entity, or do some other thing in this genre. (I'd trust the superintelligence to figure out the tricky philosophical bits, if it was already spending resources for my benefit).
(The same reasoning applies to the weights of AIs which aren't destined for deployment, and some intermediate artifacts in the training process.)
It seems to me we can reconcile preservation with priv...
I am working on a longer review of the various pieces of PPE that are available, now that manufacturers have had time to catch up to demand. That review will take some time, though, and I think it's important to say this now:
The high end of PPE that you can buy today is good enough to make social distancing unnecessary, even if you are risk averse, and is more comfortable and more practical for long-duration wear than a regular mask. I don't just mean Biovyzr (which has not yet shipped all the parts for its first batch) and the AIR Microclimate (which has not yet shipped anything), though these hold great promise and may be good budget options.
If you have a thousand dollars to spare, you can get a 3M Versaflo TR-300N+. This is a hospital-grade positive air pressure respirator with a pile of certifications; it is effective at protecting you from getting COVID from others. Most of the air leaves through filter fabric under the chin, which I expect makes it about as effective at protecting others from you as an N95. Using it does not require a fit-test, but I performed one anyways with Bitrex, and it passed (I could not pass a fit-test with a conventional face-mask except by taping the edges to my skin). The Versaflo doesn't block view of your mouth, gives good quality fresh air with no resistance, and doesn't muffle sound very much. Most importantly, Amazon has it in stock (https://www.amazon.com/dp/B07J4WCK6R) so it doesn't involve a long delay or worry about whether a small startup will come through.
Bullshit jobs are usually seen as an absence of optimization: firms don't get rid of their useless workers because that would require them to figure out who they are, and risk losing or demoralizing important people in the process. But alternatively, if bullshit jobs (and cover for bullshit jobs) are a favor to hand out, then they're more like a form of executive compensation: my useless underlings owe me, and I will get illegible favors from them in return.
What predictions does the bullshit-jobs-as-compensation model make, that differ from the bullshit-jobs-as-lack-of-optimization model?
When I tried to inner sim the "bullshit jobs as compensation" model, I expected to see a very different world than I do see. In particular, I'd expect the people in bullshit jobs to have been unusually competent, smart, or powerful before they were put in the bullshit job, and this is not in fact what I think actually happens.
The problem being that the kind of person who wants a bullshit job is not typically the kind of person you'd necessarily want a favor from. One use for bullshit jobs could be to help the friends (or more likely the family) of someone who does "play the game." This I think happens more often, but I still think the world would be very different if this was the main use case for bullshit jobs. In particular, I'd expect most bullshit jobs to be isolated from the rest of the company, such that they don't have ripple effects. This doesn't seem to be the case, as many bullshit jobs exist in management.
When I inquired about the world I actually do see, I got several other potential reasons for bullshit jobs that may or may not fit the data better:
In particular, I'd expect the people in bullshit jobs to have been unusually competent, smart, or powerful before they were put in the bullshit job, and this is not in fact what I think actually happens.
Moral Mazes claims that this is exactly what happens at the transition from object-level work to management - and then, once you're at the middle levels, the main traits relevant to advancement (and value as an ally) are the ones that make you good at coalitional politics, favor-trading, and a more feudal sort of loyalty exchange.
Deep commitment to truth requires investing in the skill of nondisruptive pedantry.
Most communication contains minor errors: slightly wrong word choices, unstated assumptions, unacknowledged exceptions. By default, people interpret things in a way that smooths these out. When someone points out one of these issues in a way that's disruptive to the flow of conversation, it's called pedantry.
Often, someone will say something that's incorrect, but close-enough to a true thing for you to repair it. One way you can handle this is to focus on the error. Smash the conversational context, complain about the question without answering it, that sort of thing.
A different thing you can do is to hear someone say something that's incorrect, mentally flag it, repair it to a similar statement that matches the other person's intent but is actually true, and act as though the other person had said something ambiguous (even if it was actually unambiguously wrong). Then you insert a few words of clarification, correcting the error without forcing the conversation to be about the error, and providing something to latch on to if the difference turns out to be a real disagreement rather than a pedantic thing.
And ...
One difficult thing that keeps coming up, in nutrition modeling, is the gut microbiome. People present hypotheses like: soluble fiber is good, because gut bacteria eat it, and then do other good things. Or: fermented foods are good, because they contain bacteria that will displace and diversify the preexisting bacteria, which might be bad. Or: obesity is caused by a bad gut microbiome, so fecal matter transplants might help. But there's a really unfortunate issue with these theories. The problem with gut microbiome-based explanations is that the gut microbiome can explain almost anything.
I don't mean this in the usual pejorative sense, where an overly-vague theory can be twisted by epicycles into fitting any data. I mean it in a more literal sense: different people have different species of microorganisms in their guts, these species can react to things we eat in important ways, these interactions may vary across wide swathes of conceptual space, and we have little to no visibility into which species are present where. There's nothing keeping them consistent between people, or within one person across long spans of time, or within one person across changes in dietary pattern.
Phras...
A surprisingly large fraction of skeptical positions towards short AI timelines and AI risk are people saying things that, through a slightly cynical lens, are equivalent to:
I'm an AGI researcher, and I'm really struggling with this. This is so hard for me that I can't imagine anyone else succeeding. Therefore, there won't be AGI in my lifetime.
I think Berkeley may, to little fanfare, have achieved herd immunity and elimination of COVID-19. The test positivity rate on this dashboard is 0.22%. I'm having a hard time pinning down exactly what the false-positive rate of COVID-19 PCR is, probably due to the variety of labs and test kits, but a lot of estimates I've seen have been higher than that.
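The reasoning here can be sketched numerically. The 0.22% positivity is from the dashboard above; the false-positive rate and sensitivity below are illustrative assumptions, not measured values:

```python
# Observed positivity ≈ prevalence * sensitivity + (1 - prevalence) * fpr.
# If the assumed false-positive rate exceeds the observed positivity,
# solving for prevalence gives a value at or below zero -- i.e. the
# observed positives could be explained by false positives alone.
positivity = 0.0022   # observed test positivity (0.22%, from the dashboard)
fpr = 0.003           # assumed false-positive rate (illustrative)
sensitivity = 0.9     # assumed sensitivity (illustrative)

implied_prevalence = (positivity - fpr) / (sensitivity - fpr)
print(implied_prevalence <= 0)  # True: data consistent with ~zero prevalence
```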
I expect people closer to the Berkeley department of health would have better information one way or another. A little caution is warranted in telling people COVID is gone, since unvaccinated people dropping all precautions and emerging en masse would not necessarily be herd immune.
Standard advice about nutrition puts a lot of emphasis on fruits and vegetables. Now, "vegetable" is a pretty terribly overbroad category, and "fruit or vegetable" is even more so, but put that aside for a moment. In observational studies, eating more fruits and vegetables correlates with good health outcomes. This is usually explained in terms of micronutrients. But I think there's a simpler explanation.
People instinctively seek nutrients--water, calories, protein, and other things--in something that approximates a priority ordering. You can think of it as a hierarchy of needs; it wouldn't make sense to eat lettuce while you're starved for protein, or beans while you're dehydrated, and people's cravings reflect that.
I have started calling this Maslow's Hierarchy of Foods.
Vegetables do not rank highly in this priority ordering, so eating salads is pretty good evidence that all of someone's higher-priority nutritional needs are met. I believe this explains most of the claimed health benefits from eating vegetables, as seen in observational studies.
Conversely, sugar is the fastest way to get calories (all other calorie sources have a longer digestion-delay), so craving sugar is evide...
This tweet raised the question of whether masks really are more effective if placed on sick people (blocking outgoing droplets) or if placed on healthy people (blocking incoming droplets). Everyone in public or in a risky setting should have a mask, of course, but we still need to allocate the higher-quality vs lower-quality masks somehow. When sick people are few and are obvious, and masks are scarce, masks should obviously go on the sick people. However, COVID-19 transmission is often presymptomatic, and masks (especially lower-quality improvised masks) are not becoming less scarce over time.
If you have two people in a room and one mask, one infected and one healthy, which person should wear the mask? Thinking about the physics of liquid droplets, I think the answer is that the infected person should wear it.
This was initially written in response to "Communicating effective altruism better--Jargon" by Rob Wiblin (Facebook link), but stands alone well and says something important. Rob argues that we should make more of an effort to use common language and avoid jargon, especially when communicating to audiences outside of your subculture.
I disagree.
If you're writing for a particular audience and can do an editing pass, then yes, you should cut out any jargon that your audience won't understand. A failure to communicate is a failure to communicate, and there are no excuses. For public speaking and outreach, your suggestions are good.
But I worry that people will treat your suggestions as applying in general, and try to extinguish jargon terms from their lexicon. People have only a limited ability to code-switch. Most of the time, there's no editing pass, and the processes of writing and thinking are commingled. The practical upshot is that people are navigating a tradeoff between using a vocabulary that's widely understood outside of their subculture, and using the best vocabulary for thinking clearly and communicating within their subculture.
When it comes to thinking clearly, some of t...
The discussion so far on cost disease seems pretty inadequate, and I think a key piece that's missing is the concept of Hollywood Accounting. Hollywood Accounting is what happens when you have something that's extremely profitable, but which has an incentive to not be profitable on paper. The traditional example, which inspired the name, is when a movie studio signs a contract with an actor to share a percentage of profits; in that case, the studio will create subsidiaries, pay all the profits to the subsidiaries, and then declare that the studio itself (which signed the profit-sharing agreement) has no profits to give.
In the public contracting sector, you have firms signing cost-plus contracts, which are similar; the contract requires that profits don't exceed a threshold, so they get converted into payments to de-facto-but-not-de-jure subsidiaries, favors, and other concealed forms. Sometimes this involves large dead-weight losses, but the losses are not the point, and are not the cause of the high price.
In medicine, there are occasionally articles which try to figure out where all the money is going in the US medical system; they tend to look at one piece, conclud...
Yesterday, I wrote a post about the Regression to the Mean Diet. The biggest impact knowing about the Regression to the Mean Diet has had for me is on my interpretations of studies, where it's a lens that reveals what would otherwise be the best studies to be mostly useless, and of anecdotes, where it makes me heavily discount claims about a new diet working unless I've gotten to ask a lot of questions about the old diet, too. But there's one other implication, which I left out of the original post, because it's kind of unfortunate and is a little difficult to talk about.
I'm not interested in nutrition because I care about weight, or body aesthetics, or athletic performance. I care about nutrition because I believe it has a very large, very underappreciated impact on individual productivity. Low quality diets make people tired and depressed, so they don't get anything done.
The Regression to the Mean Diet predicts that if you reroll the eating habits of someone whose diet-related health is unusually bad, then their new diet will probably be an improvement. This has a converse: if you reroll the eating habits of someone whose diet-related health is good, especially if that person is ...
Suppose LessWrong had a coauthor-matchmaking feature. There would be a section where you could see other peoples' ideas for posts they want to write, and message them to schedule a collaboration session. You'd be able to post your own ideas, to get collaborators. There would be some quality-sorting mechanism so that if you're a high-tier author, you can restrict the visibility of your seeking-collaborators message to other high-tier authors.
People who've written on LessWrong, and people who've *almost* written on LessWrong but haven't quite gotten a post out: Would you use this feature? If so, how much of a difference do you think it would make in the quantity and quality of your writing?
Among people who haven't learned probabilistic reasoning, there's a tendency to push the (implicit) probabilities in their reasoning to the extremes; when the only categories available are "will happen", "won't happen", and "might happen", too many things end up in the will/won't buckets.
A similar, subtler thing happens to people who haven't learned the economics concept of elasticity. Some example (fallacious) claims of this type:
This feels like it's in the same reference class as the traditional logical fallacies, and that giving it a name - "zero elasticity fallacy" - might be enough to significantly reduce the rate at which people make it. But it does require a bit more concept-knowledge than most of the traditional fallacies, so, maybe not? What happens when you point this out to someone with no prior microeconomics exposure, and does logical-fallacy branding help with the explanation?
Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.
Is this really fallacious? I'm asking because while I don't know the topic personally, I have some friends who are really into city planning. They've said that this is something which is pretty much unambiguously accepted in the literature, now that we've had the time to observe lots and lots of failed attempts to fix traffic by building more road capacity.
A quick Googling seemed to support this, bringing up e.g. this article which mentions that:
In this paper from the Victoria Transport Policy Institute, author Todd Litman looks at multiple studies showing a range of induced demand effects. Over the long term (three years or more), induced traffic fills all or nearly all of the new capacity. Litman also modeled the costs and benefits for a $25 million lane-widening project on a hypothetical 10-kilometer stretch of highway over time. The initial benefits from congestion relief fade within a decade.
I think we should be putting pretty substantial probability mass on the possibility that Omicron was the result of a successful, secret project to create a less-severe but more-contagious strain of COVID-19 in a lab, release it, and have it crowd out the original strain.
The cruxes of this belief are:
I'm not fully confident in any of these cruxes, but consider each of them highly ...
I think that hypothesis is <<1% likely, because very few people care about doing good strongly enough to entertain act-utilitarian master plans of this sort, and the ones who do, and are action-oriented enough to maybe pull it off, hopefully realize it's a bad idea or don't have a morality that allows this. I mean, if you could put resources into this specific plan, why not work on a universal coronavirus vaccine or some other more robustly beneficial thing that won't get you and your collaborators life in jail if found out?
Vitamin D reduces the severity of COVID-19, with a very large effect size, in an RCT.
Vitamin D has a history of weird health claims around it failing to hold up in RCTs (this SSC post has a decent overview). But, suppose the mechanism of vitamin D is primarily immunological. This has a surprising implication:
It means negative results in RCTs of vitamin D are not trustworthy.
There are many health conditions where having had a particular infection, especially a severe case of that infection, is a major risk factor. For example, 90% of cases of cervical cancer are caused by HPV infection. There are many known infection-disease pairs like this (albeit usually with smaller effect size), and presumably also many unknown infection-disease pairs like this as well.
Now suppose vitamin D makes you resistant to getting a severe case of a particular infection, which increases risk of a cancer at some delay. Researchers do an RCT of vitamin D for prevention of that kind of cancer, and their methodology is perfect. Problem: What if that infection wasn't common at the time and place the RCT was performed, but is common somewhere else? Then the study will give a negative result.
This throws a wrench into the usual epistemic strategies around vitamin D, and around every other drug and supplement where the primary mechanism of action is immune-mediated.
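A toy expected-risk model makes the wrench concrete. Every number below is invented for illustration; the only structural assumption is the one in the post: vitamin D's effect runs entirely through preventing severe cases of one infection, which in turn raises later cancer risk.

```python
base_cancer_risk = 0.01        # invented baseline risk
severe_case_extra_risk = 0.05  # invented added risk after a severe infection
p_severe_no_d = 0.5            # invented chance infection turns severe, placebo arm
p_severe_with_d = 0.1          # invented chance with vitamin D

def cancer_risk(infection_prevalence, p_severe):
    return base_cancer_risk + infection_prevalence * p_severe * severe_case_extra_risk

# RCT run where the infection circulates: a real, detectable effect.
effect_common = cancer_risk(0.3, p_severe_no_d) - cancer_risk(0.3, p_severe_with_d)
# Identical, methodologically perfect RCT where the infection is absent: exactly zero.
effect_absent = cancer_risk(0.0, p_severe_no_d) - cancer_risk(0.0, p_severe_with_d)
print(effect_common, effect_absent)  # ~0.006 vs 0.0
```

Same intervention, same mechanism, same perfect methodology; the sign of the result is determined entirely by an unmeasured background variable.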
Prediction: H3N8 will not be a pandemic. H3N8 is a genuine zoonotic transmission, and diseases which actually came from zoonotic transmission don't transmit well. COVID exploded rapidly because it was a lab escape, not a zoonotic transmission, and didn't have this property. The combination of poor initial transmission with an environment that's big on respiratory precautions in general is more than sufficient to keep it from getting a foothold.
What those drug-abuse education programs we all went though should have said:
It is a mistake to take any drug until after you've read its wikipedia page, especially the mechanism, side effects, and interactions sections, and its Erowid page, if applicable. All you children on ritalin right now, your homework is to go catch up on your required reading and reflect upon your mistake. Dismissed.
(Not a vagueblog of anything recent, but sometimes when I hear about peoples' recreational-drug or medication choices, I feel like Quirrell in HPMOR chapter 26, discussing a student who cast a high-level curse without knowing what it did.)
One question I sometimes see people asking is, if AGI is so close, where are the self-driving cars? I think the answer is much simpler, and much stupider, than you'd think.
Waymo is operating self-driving robotaxis in SF and a few other select cities, without safety drivers. They use LIDAR, so instead of the cognitive task of driving as a human would solve it, they have substituted the easier task "driving but your eyes are laser rangefinders".
Tesla also has self-driving, but it isn't reliable enough to work without close human oversight. Until less than a ...
I don't have anything like a complete analysis of what's happening with Russia's invasion of Ukraine. But I do have one important fragment, which is a piece I haven't seen elsewhere:
For the past decade, Russia under Putin has been pushing hard against the limits of what it can get away with in the realm of spycraft. There are a lot of different bad things Russia was doing, and if you look at any one of them, the situation looks similar: they inflicted some harm on a Western country, but it's not quite possible to do anything about it. Some of the major cat...
I've been writing a series of posts about nutrition, trying to consistently produce one post per day. The post I had in mind for today grew in scope by enough that I can't finish it in time, so this seems like an opportune day for a meta-post about the series.
My goal, in thinking and writing about nutrition, is to get the field unstuck. This means I'm interested in solving the central mysteries, and in calling attention to blind spots. I'm primarily writing for a sophisticated audience, and I'm making little to no attempt to cover the basics. I'm not going...
One of the reasons I worry about cybersecurity, and the sorry state it's in, is that it provides an easy path for human-level and even infrahuman-level AIs to acquire additional computation. In some plausible worlds, this turns a manageable infrahuman AI into an unmanageable superintelligence, when the creator's decision would have been not to launch.
Unlike solving protein-design and constructing nanobots, this is something definitely within reach of human-level intelligence; many people have done it for ordinary criminal purposes, like mining cryptocurren...
It's looking likely that the pandemic will de facto end on the Summer Solstice.
Biden promised vaccine availability for everyone on May 1st. May 1st plus two weeks to get appointments plus four weeks spacing between two doses of Moderna plus one week waiting for full effectiveness, is June 19. The astronomical solstice is June 20, which is a Sunday.
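The date arithmetic in that sentence checks out; a minimal sketch:

```python
from datetime import date, timedelta

availability = date(2021, 5, 1)            # promised vaccine availability
fully_protected = (availability
                   + timedelta(weeks=2)    # getting an appointment
                   + timedelta(weeks=4)    # spacing between two Moderna doses
                   + timedelta(weeks=1))   # waiting for full effectiveness

print(fully_protected)                     # 2021-06-19, the day before the solstice
```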
Things might not go to plan, if the May 1st vaccine-availability deadline is missed, or a vaccine-evading strain means we have to wait for a booster. No one's organizing the details yet, as far as I know. But with all those caveats aside:
It's going to be a hell of a party.
Twitter is an unusually angry place. One reason is that the length limit makes people favor punchiness over tact. A less well-known reason is that in addition to notifying you when people like your own tweets, it gives a lot of notifications for people liking replies to you. So if someone replies to disagree, you will get a slow drip of reminders, which will make you feel indignant.
LessWrong is a relatively calm place, because we do the opposite: under default settings, we batch upvote/karma-change notifications together to only one notification per day, to avoid encouraging obsessive-refresh spirals.
Some software costs money. Some software is free. Some software is free, with an upsell that you might or might not pay for. And some software has a negative price: not only do you not pay for it, but someone third party is paid to try to get you to install it, often on a per-install basis. Common examples include:
This category of
...Q: Why did the chicken cross the road?
A: We both know that you don't know of any specific chicken having crossed any specific road. Your question does not state a lie, but presupposes it. This would not be called out as a lie under ordinary social convention, but a deep commitment to truth requires occasionally flagging things like this.
Presuppositions are things which aren't stated directly, but which are implied by an utterance because if they weren't true, the utterance would be nonsensical. Presuppositions that aren't truth-optimized can be surprisingl...
Our past beliefs affect what we pay attention to, how we prioritize our skepticism, and how we interpret ambiguous evidence. This can create belief basins, where there are multiple sets of beliefs that reinforce each other, appear internally consistent, and make it hard to see the other basins as valid possibilities. On the topic of nutrition, I seem to have found myself in a different basin. I've looked through every nonstandard lens I could find, repeatedly applied skepticism, and firmly committed to not make the same mistakes everyone else is making (as...
You make a good point, that some people who drop out of weight-loss studies might have experienced health problems caused by the study, and quitting was the right decision for them.
But I believe that the average obese person in the general population is not such a case. There are many situations where people eat refined sugar not because they have a strong craving, but simply because it is easily available or there are even habits built around it.
To give an example, in my family it was for some reason considered a good idea to drink tea with sugar at breakfast. As a child I didn't have an opinion on this; I was given the breakfast and I consumed it. But as I grew up and started making my own breakfast, out of sheer laziness I started drinking water instead. I didn't fall into a coma and die. Actually it made the breakfast better, because when you drink tea with sugar first, everything you eat afterwards tastes bland, but if you drink water, you discover that some things are surprisingly delicious. Recently my kids spent one week with my mother, and then reported to me that they had "cereals" for each breakfast (in this context, "cereals" refers to those cheap hypermarket products that...
Lack-of-adblock is a huge mistake. On top of the obvious drain on attention, slower loading times everywhere, and surveillance, ads are also one of the top mechanisms by which computers get malware.
When I look over someone's shoulder and see ads, I assume they were similarly careless in their choice of which books to read.
A news article reports on a crime. In the replies, one person calls the crime "awful", one person calls it "evil", and one person calls it "disgusting".
I think that, on average, the person who called it "disgusting" is a worse person than the other two. While I think there are many people using it unreflectively as a generic word for "bad", I think many people are honestly signaling that they had a disgust reaction, and that this was the deciding element of their response. But disgust-emotion is less correlated with morality than other ways of evaluating t...
There's an open letter at https://openletter.net/l/disrupting-deepfakes. I signed, but with caveats, which I'm putting here.
Background context is that I participated in building the software platform behind the letter, without a specific open letter in hand. It has mechanisms for sorting noteworthy signatures to the top, and validating signatures for authenticity. I expect there to be other open letters in the future, and I think this is an important piece of civilizational infrastructure.
I think the world having access to deepfakes, and deepfake-porn tech...
Some people have a sense of humor. Some people pretend to be using humor, to give plausible deniability to their cruelty. On April 1st, the former group becomes active, and the latter group goes quiet.
This is too noisy to use for judging individuals, but it seems to work reasonably well for evaluating groups and cultures. Humor-as-humor and humor-as-cover weren't all that difficult to tell apart in the first place, but I imagine a certain sort of confused person could be pointed at this in order to make the distinction salient.
Someone complained, in a meme, that tech companies building AI are targeting the wrong tasks: writing books, music, TV, but not the office drudge work, leading to a world in which the meaning-making creative pursuits are lost to humans. My reply to this is:
The order in which AI replaces jobs is discovered, not chosen. The problem is that most of the resources aren't going into "AI for writing books" or "automating cubicle jobs", they're going into more-abstract targets like "scaling transformers" and "collecting data sets".
How these abstract targets cash out into concrete tasks isn't easy to predict in advance, and, for AI accelerationists, doesn't offer many relevant degrees of freedom.
And, to the extent that money does go into these tasks per se, I'd bet that the spending is extremely imbalanced in the opposite way to what they assume: I'd bet way more money gets spent on tabular learning, 'robotic process automation', spreadsheet tooling, and so on than gets spent on Jukebox-like full music generation. (Certainly I skim a lot more of the former on Arxiv.) It's telling that the big new music generation thing, almost 3 years after Jukebox is... someone jankily finetuning Stable Diffusion on 'images' of music lol. Not exactly what one would call an active field of research.
So there is a relevant degree of freedom where you can ~A C C E L E R A T E~ - it's just the wrong one from what they want.
It's often said that in languages, the syllable-count of words eventually converges to something based on the frequency with which words are used, so that more-commonly-used concepts get words with fewer syllables.
There's an important caveat to this, which I have never seen stated anywhere: the effect is strongly weighted towards vocabulary used by children, especially small children. Hence why "ma", the lowest-entropy word, means mother in so many different languages, and why toddler-concepts are all monosyllables or twice-repeated monosyllables. So, for ...
There is a rumor of RSA being broken. By which I mean something that looks like a strange hoax made it to the front page of Hacker News. Someone uploaded a publicly available WIP paper on integer factorization algorithms by Claus Peter Schnorr to the Cryptology ePrint Archive, with the abstract modified to insert the text "This destroyes the RSA cryptosystem." (Misspelled.)
Today is not the Recurring Internet Security Meltdown Day. That happens once every month or two, but not today in particular.
But this is a good opportunity to point out a non-obvious best pra...
COVID variants have mutated in the direction of faster spread and less immunity, as expected. They also seem to be mutating to higher disease severity, which was not expected. Why would that be, and should we expect this to continue?
My current theory is that the reason variants are more severe is that there's evolutionary pressure on a common factor that affects both severity and secondary attack rate, and that factor is viral replication rate.
In the initial stage of an infection, the number of virus-copies inside someone grows exponentially. If the spi...
Looks like there's holiday-design discourse this week: https://astralcodexten.substack.com/p/a-columbian-exchange . Speaking as a veteran holiday designer (http://petrovday.com/), in my eyes, Columbus Day has already passed into the ranks of deprecated holidays. Not so much because Christopher Columbus was a bad person (though he was by all accounts quite terrible), but rather because no one has actually designed a rationality-culture version of it, and I find broad-American-culture holidays to be boring and uncompetitive.
Looking at Scott's list figures wh...
Every so often, I post to remind everyone when it's time for the Periodic Internet Security Meltdown. For the sake of balance, I would like to report that, in my assessment, the current high-profile vulnerability Hertzbleed is interesting but does *not* constitute a Periodic Internet Security Meltdown.
Hertzbleed starts with the discovery that on certain x86-64 processors the bitwise left shift instruction uses a data-dependent amount of energy. Searching through a large set of cryptographic algorithms, they then find that SIKE (a cryptographic algorithm no...
On October 26, 2020, I submitted a security vulnerability report to the Facebook bug bounty program. The submission was rejected as a duplicate. As of today (April 14), it is still not fixed. I just resubmitted, since it seems to have fallen through the cracks or something. However, I consider all my responsible disclosure responsibilities to be discharged.
Once an Oculus Quest or Oculus Quest 2 is logged in to a Facebook account, its login can't be revoked. There is login-token revocation UI in Facebook's Settings>Security and Login menu, but changing t...
The Diamond Princess cohort has 705 positive cases, of which 4 are dead and 36 serious or critical. In China, the reported ratio of serious/critical cases to deaths is about 10:1, so figure there will be 3.6 more deaths. From this we can estimate a case fatality rate of 7.6/705 ~= 1%. Adjust upward to account for cases that have not yet progressed from detection to serious, and downward to account for the fact that the demographics of cruise ships skew older. There are unlikely to be any undetected cases in this cohort.
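The arithmetic, spelled out (the 10:1 serious-to-deaths ratio is the post's figure from China, not independently verified):

```python
cases = 705
deaths = 4
serious_or_critical = 36

# At a ~10:1 ratio of serious/critical cases to deaths, expect roughly
# one further death per ten currently-serious cases.
expected_further_deaths = serious_or_critical / 10   # 3.6
cfr = (deaths + expected_further_deaths) / cases
print(round(cfr, 3))  # 0.011, i.e. about 1%
```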
(This is a reply to the "Induction Bump" Phase Change video by Catherine Olsson and the rest of Anthropic. I'm writing it here instead of as a YouTube comment because YouTube comments aren't a good place for discussion.)
(Epistemic status: Speculative musings I had while following along, which might be useful for inspiring future experiments, or surfacing my own misunderstandings, and possibly duplicating ideas found in prior work which I have not surveyed.)
The change-in-loss sample after the bump (at 19:10) surprised me. As you say, it seemed to get notice...
This post is a container for my short-form writing. See this post for meta-level discussion about shortform.