I think this post underrates two general rationalist skills, and overlooks some relevant empirical facts. First, the two skills.

  1. Avoiding the fallacy of the one-sided wager. The post talks about cost-benefit analysis, but a complete cost-benefit analysis has to weigh the risks of both choices on offer, not just one. The post takes specific notice of the default course of action's risks (money, tears, side effects) but focuses less on the risks of the alternative (e.g. toddlers winding up in the ER because they're shitting themselves half to death from rotavirus).

  2. Trying to look things up. I'll pick this point up briefly below.

The rest of this comment is going to be scattershot, as it just runs through the relevant facts that different bits of the post inspired me to check or dig up.

I grew up in the US in the 80s and I don't remember getting nearly this many. Is my memory faulty?

Probably not; there's a simpler alternative explanation: adults remember basically nothing from before age 3 or so. However, we don't even need that explanation, because...

I'm pretty sure it was more like 12 back in those days.

...the CDC actually did recommend fewer vaccines in the 1980s. Though this wouldn't address whatever local or state-level vaccine program you might've also experienced as a kid.

Is this all really necessary? Nobody likes getting shots, especially not children. What changed, anyway?

Scientists and clinicians developed and tested newer and better vaccines. Seriously! (I think this is an example of how people, even very educated people, tend not to understand on a gut level how much of microbiology's progress was made just in the past 40 years.)

The CDC's 1989 vaccination schedule and current schedule for normal children have only 3 vaccines in common: DTP/DTaP, HbCV/Hib, and MMR. That leaves 7 vaccines which appear on the current schedule but not the 1989 schedule. I looked each of the 7 up online and discovered the following.

  • A patent on hepatitis B vaccine was filed in 1969, but the earliest actual vaccine appears to have come only in the 1970s. It was shown effective in 1980 and made available in 1981, but the vaccine wasn't ideal for mass vaccination because it came directly from carriers' purified blood and was hard to mass produce. A superior recombinant vaccine came along only in 1986, the first of its kind for humans.

  • Rotavirus vaccines didn't even get to the point of testing until the 1980s, and the first publicly introduced vaccine arrived only in 1998. And was then promptly withdrawn due to concern over a potential side effect — clinicians & manufacturers do keep an eye open for side effects!

  • Pneumococcal vaccines have been tested in people for about a century, but they were relatively ineffective and poorly understood, and their popularity waned with the rise of penicillin. Modern tests began again in 1968 and continued into the 1970s, resulting in US approval for a new vaccine in 1977. However, that vaccine covered only 14 variants of pneumococcus; an improved 23-variant version "covering about 87% of bacteremic pneumococcal disease in the US" came out in 1983 and was recommended for routine vaccination only in 1984 (and then just in older adults).

  • Inactivated poliovirus wasn't new (Salk famously developed it in the 1950s) but in the current CDC schedule it merely replaces the oral polio vaccine (OPV) used in the 1980s. The inactivated poliovirus vaccine is safer than the OPV in that children who receive the OPV can crap the live, active virus back out.

  • Influenza vaccines are even older, dating to the 1930s.

  • The first varicella vaccine was developed in Japan in the early 1970s, but its safety and worthiness were controversial. Clinical trials took place in the 1980s and the vaccine was licensed for use in Japan in 1986. The US followed suit in 1995.

  • Hepatitis A vaccine went on the market in the early 1990s. Based on playing with Google Scholar, I think the key human studies were done in the late 1980s and early 1990s.

So we have a mundane explanation for most of the vaccines newly introduced for healthy young children: today's vaccines simply weren't ready before the '80s.

Now, I'm not an expert on immunology or epidemiology so I expect diving into the literature isn't going to be fruitful; I won't be able to ante up decades of education and experience fast enough.

Don't do yourself down! A lot of material written by clinicians & researchers is out there, some of it deliberately targeted at laypeople, and you can often get some understanding even of technical material just by reading, recalling high-school biology, doing arithmetic, and looking things up in medical dictionaries. You won't learn everything, but if the topic is important to you, you can discover a lot by spending a few weekends with Google. (There are topics that are hard for a layperson to get a handle on, but it's hard to know whether a topic is that difficult without trying.)

Here's how many shots each nation's health care system recommends by the time children turn 5.

US: 37
UK: 25

I thought I'd take a closer look at these two countries (they're both Anglophone, so they're the easiest for me to check). I get somewhat different numbers: 32 or 33 for the US/CDC (count the yellow boxes, remembering to count the annual flu vaccine 5 times) and 19 for the UK/NHS (only 4 anti-flu injections here; we don't start them until age 2).

Also, while there's a clear UK-US difference in the number of injections, it's exaggerated by the UK lumping multiple vaccines together into one injection. The UK bundles the DTP, polio vaccine, Hib and hepatitis B vaccines; if I broke those out separately I'd get 29 injections instead of just 19 (and then I'd get 30 if I split the combined Hib/MenC vaccine). The numbers of distinct exposures to microbes are similar in the two countries.
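
To make the two counting conventions explicit, here's a toy sketch in Python. The mini-schedule in it is invented purely for illustration; it is not the actual NHS schedule.

```python
# Two ways to count a vaccination schedule: injections given vs. distinct
# vaccine components received. The schedule below is MADE UP for illustration.
toy_schedule = [
    {"DTP", "polio", "Hib", "hepB"},  # one combined injection bundling 4 vaccines
    {"DTP", "polio", "Hib", "hepB"},  # a repeat dose of the same combination
    {"Hib", "MenC"},                  # a combined Hib/MenC booster
    {"MMR"},
    {"flu"},
]

injections = len(toy_schedule)                        # headline "number of shots"
components = sum(len(shot) for shot in toy_schedule)  # component doses delivered

print(injections, components)  # 5 injections, but 12 component doses
```

Counting injections and counting bundled components give quite different totals, which is exactly the gap between the UK's 19 and the unbundled 29-30 above.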

When it comes to cultural and environmental differences, I have a hard time imagining that the orthodoxy varies because Hep A is a much bigger deal in the US. I presume the calculus changes based on your geographic neighbors, but is it a meaningful difference?

Probably the prevalence of hepatitis A in the US itself plays a bigger role. Summarizing hepatitis A's prevalence across countries is a bit of a pain, because prevalence varies a lot by age and cohort as well as place, but I did find a couple of kinda representative studies of the prevalence of hep. A antibodies in the US and UK. A US national survey conducted immediately before vaccine licensing (1988-1994) found a prevalence of 32%, while a nationwide UK study from around 2002 found 12% among unvaccinated individuals.

On the flip side of this argument: so what if we vaccinate kids against more diseases than other countries? Well, they're not free. [...] Those other nations (presumably) ran cost-benefit analyses too and came to different conclusions. It would be nice if each country showed their work.

At least 3 of the 5 countries you discuss have shown work. See the US's CDC, the UK's Joint Committee on Vaccination and Immunisation, and Germany's Standing Vaccination Committee at its Robert Koch Institute. Granted, I couldn't find any dedicated webpages for Denmark or Sweden in a few minutes of searching, but that may just be because I don't know Danish or Swedish.

Upvoted for asking an interesting question, but my answer would be "probably not". Whether patents are a good idea even as is is debatable — see Michele Boldrin and David Levine's Against Intellectual Monopoly — and I expect beefing them up to be bad on the margin.

I'm unclear on whether the proposed super-patents would

  1. be the same as normal patents except fileable before the work of sketching a plausible design has been done, or

  2. be even more powerful, by also allowing the filer to monopolize a market in which they carry out e.g. "market research, product development and building awareness", even if that involves no original design work,

but in any case the potential downsides strike me as more obvious than the potential upsides.

Item 1 would likely lead to more patents being filed "just in case", even without any genuine intention of bringing a product to market. This would then discourage other profit-seeking people/organizations from investigating the product area, just as existing patents do.

Item 2 seems to take us beyond the realm of patents and intellectual work; it's about compensating a seller for expenses which produce positive spillovers for other sellers. As far as I know, that's not usually considered a serious enough issue to warrant state intervention, like granting a seller a monopoly. I suspect that when The Coca-Cola Company runs an advert across the US, Wal-Mart sells more of its own knockoff colas, but the US government doesn't subsidize Coca-Cola or its advertising on those grounds!

I believe the following is a comprehensive list of LW-wide surveys and their turnouts. Months are those when the results were reported.

  1. May 2009, 166
  2. December 2011, 1090
  3. December 2012, 1195
  4. January 2014, 1636
  5. January 2015, 1503
  6. May 2016, 3083

And now in the current case we have "about 300" responses, although results haven't been written up and published. I hope they will be. If the only concern is sample size, well, 300 beats zero!

I found the same article on an ad-blocker-friendly website. And here's a direct link to the academic article in Complexity.

I think in January I read you as amplifying James_Miller's point, giving "tariff and other barriers" as an example of something to slot into his "Government regulations" claim (which is why I thought my comment was germane). But in light of your new comment I probably got your original intent backwards? In which case, fair enough!

I hope this is a joke.

Yeah — scurvy's no fun!

Did Kuhn (or Popper or Lakatos) spell out substantial implications of the analogy? A lot of the interest would come from that, rather than the fact of the analogy in itself.

Let's say two AIs want to go to war for whatever reason. Then they can agree to some other procedure that predicts the outcome of war (e.g. war in 1% of the universe, or simulated war) and precommit to accept the outcome as binding. It seems like both would benefit from that.

My (amateur!) hunch is that an information deficit bad enough to motivate agents to sometimes fight instead of bargain might be an information deficit bad enough to motivate agents to sometimes fight instead of precommitting to exchange info and then bargain.

Coming up with an extensive form game might not help, because what if the AIs use a different extensive form game?

Certainly, any formal model is going to be an oversimplification, but models can be useful checks on intuitive hunches like mine. If I spent a long time formalizing different toy games to try to represent the situation we're talking about, and I found that none of my games had (a positive probability of) war as an equilibrium strategy, I'd have good evidence that your view was more correct than mine.

There's been pretty much no progress on this in a decade; I don't see any viable attack.

There might be some analogous results in the post-Fearon, rational-choice political science literature; I don't know it well enough to say. And even if not, it might be possible to build a relevant game incrementally.

Start with a take-it-or-leave-it game. Nature samples a player's cost of war from some distribution and reveals it only to that player. (Alternatively, Nature randomly assigns a discrete, privately known type to a player, where the type reflects the player's cost of war.) That player then chooses between (1) initiating a bargaining sub-game and (2) issuing a demand to the other player, triggering war if the demand is rejected. This should be tractable, since standard, solvable models exist for two-player bargaining.

So far we have private information, but no precommitment. But we could bring precommitment in by adding extra moves to the game: before making the bargain-or-demand choice, players can mutually agree to some information-revealing procedure followed by bargaining with the newly revealed information in hand. Solving this expanded game could be informative.
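
As a starting point, here's a minimal numerical sketch of the base demand-or-war game, in the spirit of Fearon's risk-return model. Everything in it is an assumption for illustration: the parameter values, the uniform cost distribution, and my putting the uninformed player in the proposer role (the standard tractable variant, with the roles flipped relative to my description above).

```python
# Crisis-bargaining sketch: players split a pie of size 1. Player 1 makes a
# take-it-or-leave-it demand x, keeping x and offering 1 - x to player 2.
# Player 2's cost of war c2 is private, drawn uniformly from [0, C_MAX];
# player 2 accepts iff 1 - x >= P2_WIN - c2, and rejection means war.

P2_WIN = 0.5   # player 2's probability of winning a war (common knowledge)
C1     = 0.1   # player 1's cost of war (common knowledge)
C_MAX  = 0.4   # upper bound of player 2's privately known war cost

def accept_prob(x):
    """P(player 2 accepts demand x) = P(c2 >= P2_WIN - (1 - x))."""
    threshold = P2_WIN - (1.0 - x)  # the c2 below which player 2 prefers war
    return 1.0 - min(max(threshold, 0.0), C_MAX) / C_MAX

def expected_payoff(x):
    """Player 1's expected payoff from demanding x."""
    p = accept_prob(x)
    war_payoff = (1.0 - P2_WIN) - C1  # player 1's payoff if war breaks out
    return p * x + (1.0 - p) * war_payoff

# Grid-search player 1's optimal demand.
grid = [i / 1000 for i in range(1001)]
x_star = max(grid, key=expected_payoff)
print(f"optimal demand x* = {x_star:.3f}")                         # ~0.650
print(f"war probability at x* = {1.0 - accept_prob(x_star):.3f}")  # ~0.375
```

With these made-up numbers, the optimal demand risks war about 37% of the time even though any demand of 0.5 or less would guarantee peace. That's the kind of equilibrium-war result my hunch predicts; the interesting test is whether adding the information-revealing precommitment moves makes it go away.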

The amount of wastage from bitcoin mining pales compared to the GDP spent on traditional forms of trust. Think banking isn't contributing to global warming? Well, all those office buildings have lights and electricity and back-room servers, not to mention the opportunity costs.

That provoked me to do a Fermi estimate comparing banking's power consumption to Bitcoin's. Posting it in case anyone cares.

Estimated energy use of banking

The service sector uses 7% of global power and produces 68% of global GDP. Financial services make up about 17% of global GDP, hence about 25% of global services' contribution to GDP. If financial services have the same energy intensity as services in general, they use about 25% × 7% = 1.8% of global power. World energy consumption is of order 15 TW, so financial services use about 260 GW. Rounding down semi-arbitrarily (because financial services include things like insurance & pension services, as well as banking), the relevant power consumption might be something like 200 GW.
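
For anyone who wants to tweak the assumptions, here's the banking estimate as a few lines of Python (all figures as quoted above):

```python
# Fermi estimate: power consumption of financial services worldwide.
SERVICES_POWER_SHARE = 0.07   # services' share of global power use
SERVICES_GDP_SHARE   = 0.68   # services' share of global GDP
FINANCE_GDP_SHARE    = 0.17   # financial services' share of global GDP
WORLD_POWER_TW       = 15.0   # rough global power consumption, terawatts

finance_share_of_services = FINANCE_GDP_SHARE / SERVICES_GDP_SHARE       # ~25%
finance_power_share = finance_share_of_services * SERVICES_POWER_SHARE   # ~1.8%
finance_power_gw = finance_power_share * WORLD_POWER_TW * 1000           # ~260 GW
print(f"{finance_power_share:.1%} of world power = {finance_power_gw:.0f} GW")
```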

Estimated energy use of Bitcoin

A March blog post estimates that the Bitcoin network uses 0.774 GW to do 3250 petahashes per second. Scaling the power estimate up to the network's current hash rate (5000 petahashes/s, give or take) makes it 1.19 GW. So Bitcoin is a couple of orders of magnitude short of overtaking banking.
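
And the corresponding arithmetic for Bitcoin, compared against the rounded-down 200 GW banking figure:

```python
# Scale the March power estimate to the current hash rate, then compare.
MARCH_POWER_GW   = 0.774   # estimated network draw at the March hash rate
MARCH_HASHRATE   = 3250    # petahashes per second in March
CURRENT_HASHRATE = 5000    # petahashes per second now, give or take
BANKING_GW       = 200.0   # rounded-down banking figure from above

bitcoin_gw = MARCH_POWER_GW * CURRENT_HASHRATE / MARCH_HASHRATE  # ~1.19 GW
print(f"Bitcoin: {bitcoin_gw:.2f} GW; "
      f"banking/Bitcoin ratio: {BANKING_GW / bitcoin_gw:.0f}x")
```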
