A Cost-Benefit Analysis of Immunizing Healthy Adults Against Influenza

14 Fluttershy 11 November 2014 04:10AM

As of 11:30CST, 11/11/14, this cost-benefit analysis has been revised, in order to address concerns raised in the comments. See http://lesswrong.com/r/discussion/lw/l8k/expansion_on_a_previous_costbenefit_analysis_of/ for more on how the cost-benefit analysis was carried out, and on how varying certain parameters affected the determined expected value of receiving a flu shot.

Overview

The purpose of this post is to provide readers of LessWrong with a summary of what the literature has to say about the efficacy and safety of influenza vaccinations, and to weigh the costs of yearly flu vaccination against the benefits which healthy adults gain from it. As illustrated in the "Cost-Benefit Analyses" section of this report, the expected value of receiving flu vaccinations is positive for healthy adults. A further motivation for this post, then, is the hope that it will encourage LessWrong readers who have not yet been vaccinated this flu season to get vaccinated promptly.

Introduction and Review of Literature

Several meta-analyses on the efficacy and safety of live-attenuated influenza vaccines, trivalent inactivated influenza vaccines, and tetravalent inactivated influenza vaccines have been published within the last two years (see Coleman et al., Demicheli et al., Osterholm et al.). These meta-analyses reached broadly similar conclusions regarding the efficacy of flu vaccines, which groups were most at risk of being infected with influenza, the safety of being vaccinated, and the magnitude of social harm caused yearly by influenza. However, the articles disagreed over whether vaccination of healthy adults against influenza should be pursued as a public health policy. Specifically, the Demicheli paper (wrongly) found "no evidence for the utilization of vaccination against influenza in healthy adults as a routine public health measure". The issue of whether healthy adults should receive flu shots will be examined in the "Cost-Benefit Analyses" section of this report.

While the severity of flu seasons varies greatly year-to-year, an average of 24,000 deaths from the flu occur yearly in the US (NCIRD); approximately 90% of these deaths are in people of at least 65 years of age (NCIRD, CDC Key Facts). For all flu seasons between 1976 and 2007, an average of 2,385 adults of ages 19-64 died each year from flu and flu-related causes in the US (Thompson). Between 5 and 20 percent of the US population becomes infected with flu virus each flu season (CDC Q&A).

The *efficacy* of a vaccine is the relative reduction in infection risk that it confers: if half of a population of 2,000,000 people were given a vaccine with 60% efficacy, and 100,000 of the 1,000,000 unvaccinated people got sick, then only 40,000 of the 1,000,000 vaccinated people would be expected to get sick. Many sources report the average efficacy of the flu vaccine across the US population to be 60% (Demicheli et al.) or 59% (Osterholm et al., Coleman et al.). The CDC reports that the flu vaccine is more efficacious in young adults (70-90% efficacy, depending on how closely circulating viruses match the ones included in the vaccines manufactured in a given season), and less efficacious in those over 65 years of age (NCIRD). This has led to increased efforts to target healthcare workers, nursing home attendants, and others in frequent contact with elderly persons for yearly vaccination.
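The worked example above can be checked numerically. This is a minimal sketch; the population sizes and the 10% attack rate are the illustrative figures from the paragraph, not data from any of the cited studies:

```python
# Vaccine efficacy: the relative reduction in attack rate among the
# vaccinated, compared with the unvaccinated.
efficacy = 0.60

unvaccinated = 1_000_000
vaccinated = 1_000_000

sick_unvaccinated = 100_000                     # 10% attack rate, as in the example
attack_rate = sick_unvaccinated / unvaccinated

# A 60%-efficacious vaccine multiplies the attack rate by (1 - 0.60).
sick_vaccinated = vaccinated * attack_rate * (1 - efficacy)

print(int(sick_vaccinated))  # 40000
```

The same arithmetic explains why reported efficacy varies by age group: the relative reduction is measured against whatever the unvaccinated attack rate happens to be for that group.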

While some health agencies recommend flu shots only for the elderly, infants, healthcare workers, pregnant women, and adults with certain medical complications, such as respiratory diseases, the CDC recommends that all people 6 months and older get a flu shot every year (CDC Key Facts). Such at-risk individuals gain more value from being immunized against the flu than healthy adults do. Certain individuals with extremely rare conditions, such as Guillain-Barré Syndrome (GBS), or people who may experience life-threatening allergic reactions to components of the flu shot, should not receive a flu shot. A healthcare professional will be able to tell you whether it is safe for you to receive a flu shot before you are immunized.

None of the meta-reviews examined in this report found any evidence that receiving an influenza vaccine can cause serious adverse responses in patients (Coleman et al., Osterholm et al., Demicheli et al.). Receiving the influenza vaccine is safe, and it is not at all possible to catch the flu from an influenza vaccine (CDC). Flu shots can cause arm pain or soreness, as well as headache, mild fever, and muscle pain (Coleman et al., Demicheli et al.).

Cost-Benefit Analyses

Estimates of the expected monetary values of possible flu-related outcomes were calculated relative to the value of not getting sick despite not receiving a flu shot, which was defined as having a utility of 0 USD. All payoffs shown are the payoffs which an average individual would derive from experiencing particular outcomes, rather than the value which either society, employers, or other parties would gain from a given individual either getting sick or not. Probabilities were assigned to each outcome, as shown in Figure 1, and a calculation of the expected value of receiving or not receiving a flu shot in a given year was carried out. The motivation for simplifying the calculation of the expected value of receiving a flu shot by restricting the outcome space as shown in Figure 1 was to demonstrate that, despite using conservative estimates and ignoring certain benefits of vaccination in the model, the expected value of vaccination is still positive for healthy adults. Since other demographics are expected to benefit even more from receiving flu vaccinations than healthy adults benefit from receiving flu vaccinations, the fact that healthy adults would benefit from receiving yearly flu vaccinations strongly suggests that all individuals above 6 months of age would benefit from receiving flu shots, excepting e.g. patients with GBS or allergies to components of the flu shot.

The cost of getting a flu shot was calculated as being 30 USD, given that it costs around 20 USD to receive a flu shot out of pocket, and given that it takes around 30 minutes to get a flu shot at a clinic. I have estimated the value of one's time as being 20 USD/hour for this calculation.

The value of not feeling sick for 3-10 days was subjectively estimated at 200 USD; equivalently, catching the flu without having received a flu shot carries a payoff of -200 USD. The outcome in which one catches the flu despite receiving a flu shot was given a payoff of -230 USD, calculated by adding the cost of being vaccinated against the flu to the cost of feeling sick, as above.

The value a given individual would gain from not dying was estimated as being 5,000,000 USD.
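The Figure 1 calculation can be sketched in code using the payoffs defined above. The attack rate and the flu-mortality probability below are illustrative placeholders of my own choosing (the attack rate sits inside the 5-20% range quoted earlier from the CDC), not figures taken from the post's actual decision tree:

```python
# Payoffs in USD, relative to "no shot, no flu" = 0 as defined in the post.
COST_SHOT = -30           # ~20 USD out of pocket plus ~30 minutes of time
COST_SICK = -200          # disvalue of feeling sick for 3-10 days
VALUE_OF_LIFE = 5_000_000

# Illustrative probabilities (assumptions, not the post's exact parameters):
p_flu_no_shot = 0.10      # within the 5-20% CDC range
efficacy = 0.60           # vaccination multiplies flu risk by (1 - 0.60)
p_flu_shot = p_flu_no_shot * (1 - efficacy)
p_death_given_flu = 1e-4  # hypothetical mortality rate for a healthy adult

def expected_value(p_flu):
    """Expected payoff of the sickness branch, given a flu probability."""
    cost_if_sick = COST_SICK - p_death_given_flu * VALUE_OF_LIFE
    return p_flu * cost_if_sick

ev_no_shot = expected_value(p_flu_no_shot)
ev_shot = COST_SHOT + expected_value(p_flu_shot)

print(ev_no_shot, ev_shot)  # the shot comes out ahead under these assumptions
```

Note how sensitive the comparison is to the mortality term: with the death risk set to zero, the 30 USD cost of the shot outweighs the reduction in sick days under these particular numbers, which is why the full decision tree (and the linked parameter-variation post) matters.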

Figure 1. Decision Tree for Assessing the Impact of Immunization in Healthy Adults

Although the costs of the possible outcomes shown in Figure 1 were calculated under the assumption that the individual receiving the flu shot was uninsured, having an insurance policy increases the expected value of receiving a flu shot, as many insurance companies will completely cover the cost of the shot. Some governmental health insurance programs do not cover the cost of flu vaccinations. If one has insurance which covers the cost of the flu vaccine, the expected value of being vaccinated against the flu rises by 20 USD.

Several benefits of receiving flu shots have not been included in the above model. In particular, being vaccinated against the flu protects others in your community from becoming sick; this effect is known as herd immunity. Also, the above analysis assumed that an individual would not lose income from missing work due to being sick with the flu; the effect of this assumption on the cost-benefit analysis is examined in the link given in the first paragraph of this post. Lastly, receiving the flu vaccine provides a small degree of protection against influenza-like infections (Coleman et al., Demicheli et al.); this positive effect was not considered in the above assessment of the costs and benefits of healthy adults receiving the flu vaccine.

Again, the above analysis of the expected utility of receiving the flu vaccine each year was conducted with conservative estimates and a simple model which did not take into account all of the benefits of receiving the flu vaccine; this was done to show that the expected gain from receiving a flu shot is positive in the general case, given uncharitable assumptions.

Author's Reflections

I only read the "Methods", "Findings", and "Interpretation" sections of the Lancet article, as I did not have access to the full text of this paper.

Before writing this article and conducting the research it required, I would have estimated the prior probability of elderly people, infants, pregnant women, and asthmatics receiving a net benefit from vaccination as very high, and the prior probability of healthy adults receiving a net benefit from influenza vaccination as moderately high.

I was raised in a family which, in general, valued being healthy, and, in particular, valued the practice of keeping up to date on one's vaccinations. However, I do not believe that the conclusions of this report would have been different if I had not come from such a culture.

Further Considerations

While this report is complete, I could have been more thorough. Part of why I am publishing this post now, rather than conducting more research before doing so, is that I expect that conducting additional research would be very unlikely to cause me to change any of the major conclusions of this report. To say the same thing from a decision-theoretic standpoint, information which has a very low chance of making one change their mind about something has little value, and I think that reading more papers on this topic would have a very low chance of changing any of my opinions on this topic.

References

1. Centers for Disease Control and Prevention. Key Facts About Seasonal Flu Vaccine. http://www.cdc.gov/flu/protect/keyfacts.htm (accessed 11/9, 2014).

2. Centers for Disease Control and Prevention. Seasonal Influenza Q&A. http://www.cdc.gov/flu/about/qa/disease.htm (accessed 11/9, 2014).

3. Coleman, B.; Cochrane, L.; Colas, L. Literature Review on Quadrivalent Influenza Vaccines. Public Health Agency of Canada 2014.

4. Demicheli, V.; Jefferson, T.; Al-Ansary, L.; Ferroni, E. Vaccines for preventing influenza in healthy adults. Cochrane Library 2014.

5. Milenkovic, M.; Russo, A.; Elixhauser, A. Hospital Stays for Influenza, 2004. Agency for Healthcare Research and Quality 2006.

6. National Center for Immunization and Respiratory Diseases. Epidemiology and Prevention of Vaccine-Preventable Diseases. http://www.cdc.gov/vaccines/pubs/pinkbook/flu.html (accessed 11/9, 2014).

7. Osterholm, M. T.; Kelley, N. S.; Sommer, A.; Belongia, E. A. Efficacy and effectiveness of influenza vaccines: a systematic review and meta-analysis. The Lancet infectious diseases 2012, 12, 36-44.

 

8. Thompson, M.; Shay, D.; Zhou, H.; Bridges, C. Estimates of Deaths Associated with Seasonal Influenza - United States, 1976-2007. 2010.

 

 

On Walmart, And Who Bears Responsibility For the Poor

13 ChrisHallquist 27 November 2013 05:08AM

Note: Originally posted in Discussion, edited to take comments there into account.


Yes, politics, boo hiss. In my defense, the topic of this post cuts across usual tribal affiliations (I write it as a liberal criticizing other liberals), and has a couple strong tie-ins with main LessWrong topics:

  • It's a tidy example of a failure to apply consequentialist / effective altruist-type reasoning. And while it's probably true that the people I'm critiquing aren't consequentialists by any means, it's a case where failing to look at the consequences leads people to say some particularly silly things.
  • I think there's a good chance this is a political issue that will become a lot more important as more and more jobs are replaced by automation. (If the previous sentence sounds obviously stupid to you, the best I can do without writing an entire post on that is vaguely gesturing at gwern on neo-luddism, though I don't agree with all of it.)

The issue is this: recently, I've seen a meme going around to the effect that companies like Walmart that have a large number of employees on government benefits are the "real welfare queens" or somesuch, with the implied message that all companies have a moral obligation to pay their employees enough that they don't need government benefits. (I mention Walmart because it's the most frequently named villain in this meme, but others, like McDonald's, get mentioned.)

My initial awareness of this meme came from it being all over my Facebook feed, but when I went to Google to track down examples, I found it coming out of the mouths of some fairly prominent congresscritters. For example Alan Grayson:

In state after state, the largest group of Medicaid recipients is Walmart employees. I'm sure that the same thing is true of food stamp recipients. Each Walmart "associate" costs the taxpayers an average of more than $1,000 in public assistance.

Or Bernie Sanders:

The Walmart family... here's an amazing story. The Walmart family is the wealthiest family in this country, worth about $100 billion, owning more wealth than the bottom 40 percent of the American people, and yet here's the incredible fact.

Because their wages and benefits are so low, they are the major welfare recipients in America, because many, many of their workers depend on Medicaid, depend on food stamps, depend on government subsidies for housing. So, if the minimum wage went up for Walmart, it would be a real cut in their profits, but it would be a real savings, by the way, for taxpayers, who would not have to subsidize Walmart employees because of their low wages.

Now here's why this is weird: consider Grayson's claim that each Walmart employee costs the taxpayers on average $1,000. In what sense is that true? If Walmart fired those employees, it wouldn't save the taxpayers money: if anything, it would increase the strain on public services. Conversely, it's unlikely that cutting benefits would force Walmart to pay higher wages: if anything, it would make people more desperate and willing to work for low wages. (Cf. this excellent critique of the anti-Walmart meme).

Or consider Sanders' claim that it would be better to raise the minimum wage and spend less on government benefits. He emphasizes that Walmart could take a hit in profits to pay its employees more. It's unclear to what degree that's true (see again the previous link), and unclear whether there's a practical way for the government to force Walmart to do that, but even ignoring those issues, it's worth pointing out that you could also just raise taxes on rich people generally to increase benefits for low-wage workers. The idea seems to be that, morally, Walmart employees should be primarily Walmart's responsibility, and not so much the responsibility of (the more well-off segment of) the population in general.

But the idea that employing someone gives you a general responsibility for their welfare (beyond, say, not tricking them into working for less pay or under worse conditions than you initially promised) is also very odd. It suggests that if you want to be virtuous, you should avoid hiring people, so as to keep your hands clean and avoid the moral contagion that comes with employing low-wage workers. Yet such a policy doesn't actually help the people who might want jobs from you. This is not to deny that, plausibly, wealthy owners of Walmart stock have a moral responsibility to the poor. What's implausible is that non-owners of Walmart stock have significantly less responsibility to the poor.

This meme also worries me because I lean towards thinking that the minimum wage isn't a terrible policy, but that we'd be better off replacing it with a guaranteed basic income (or an otherwise more lavish welfare state). And a guaranteed basic income could be a really important policy to have as more and more jobs are replaced by automation (again, see gwern if that seems crazy to you). I worry that this anti-Walmart meme could lead to an odd left-wing resistance to a GBI or a more lavish welfare state, since the policy would be branded as a subsidy to Walmart.

Links: so-called "knockout game" a "myth" and a "bogus trend."

-11 ChrisHallquist 25 November 2013 04:21PM

When I started seeing stories about the "knockout game" (supposedly, teenagers playing a game where they try to knock out random strangers) a few days ago, I immediately resolved to avoid paying attention to them, because it sounded like a classic case of people taking a few isolated incidents and blowing them up into a big scary trend.

And then this morning, I see this blog post, which links back to an article from two years ago titled: "Knockout King: Kids call it a game. Academics call it a bogus trend. Cops call it murder." Turns out my knowledge of human biases has served me well... and it's especially significant that the article is from two years ago; this is not the first time the media has tried to get people scared about this "trend." From the article (emphasis added):

Mike Males, a research fellow at the nonprofit Center on Juvenile and Criminal Justice who runs the website YouthFacts.org, says the media have made a habit of cherry-picking isolated instances of "knockout games" in order to gin up sensational stories that demonize youth. "This knockout-game legend is a fake trend," Males contends.

Given that 4.3 million violent attacks were reported by U.S. citizens in 2009, according to the National Crime Victimization Survey, Males says reporters should know better than to highlight a handful of random attacks by kids and call it journalism. It's the same thing as plucking a few instances of attackers with Jewish surnames who beat up non-Jews and declaring it a "troubling new trend," he argues.

Still, over the years a handful of reports of "knockout" have emerged from cities in Missouri, Illinois, Massachusetts and New Jersey. And most criminologists and youth experts agree that unprovoked attacks by teenagers on strangers are a real, if extremely rare, phenomenon.

Less Wrong’s political bias

-6 Sophronius 25 October 2013 04:38PM

(Disclaimer: This post refers to a certain political party as being somewhat crazy, which got some people upset, so sorry about that. That is not what this post is *about*, however. The article is instead about Less Wrong's social norms against pointing certain things out. I have edited it a bit to try and make it less provocative.)

 

A well-known post around these parts is Yudkowsky’s “Politics is the Mind-Killer”. That article makes an important point: people tend to go funny in the head when discussing politics, as politics is largely about signalling tribal affiliation. The conclusion drawn from this by the Less Wrong crowd seems simple: don’t discuss political issues, or at least keep it as fair and balanced as possible when you do. However, I feel that there is a very real downside to treating political issues in this way, which I shall try to explain here. Since this post is (indirectly) about politics, I will try to put this as gently as possible so as to avoid mind-kill. As a result this post is a bit lengthier than I would like it to be, so I apologize for that in advance.

I find that a good way to examine the value of a policy is to ask in which of all possible worlds this policy would work, and in which worlds it would not. So let’s start by imagining a perfectly convenient world: In a universe whose politics are entirely reasonable and fair, people start political parties to represent certain interests and preferences. For example, you might have the kitten party for people who like kittens, and the puppy party for people who favour puppies. In this world Less Wrong’s unofficial policy is entirely reasonable: There is no sense in discussing politics, since politics is only about personal preferences, and any discussion of this can only lead to a “Yay kittens, boo dogs!” emotivism contest. At best you can do a poll now and again to see what people currently favour.

Now let’s imagine a less reasonable world, where things don’t have to happen for good reasons and the universe doesn’t give a crap about what’s fair. In this unreasonable world, you can get a “Thrives through Bribes” party or an “Appeal to emotions” party or a “Do stupid things for stupid reasons” party as well as more reasonable parties that actually try to be about something. In this world it makes no sense to pretend that all parties are equal, because there is really no reason to believe that they are.

As you might have guessed, I believe that we live in the second world. As a result, I do not believe that all parties are equally valid/crazy/corrupt, and as such I like to be able to identify which are the most crazy/corrupt/stupid. Now I happen to be fairly happy with the political system where I live. We have a good number of more-or-less reasonable parties here, and only one major crazy party that gives me the creeps. The advantage of this is that whenever I am in a room with intelligent people, I can safely say something like “That crazy racist party sure is crazy and racist”, and everyone will go “Yup, they sure are, now do you want to talk about something of substance?” This seems to me the only reasonable reply.

The problem is that Less Wrong seems primarily US-based, and in the US… things do not go like this. In the US, it seems to me that there are only two significant parties, one of which is flawed and which I do not agree with on many points, while the other is, well… can I just say that some of the things they profess do not so much sound wrong as they sound crazy? And yet, it seems to me that everyone here is being very careful to not point this out, because doing so would necessarily be favouring one party over the other, and why, that’s politics! That’s not what we do here on Less Wrong!

And from what I can tell, based on the discussion I have seen so far and participated in on Less Wrong, this introduces a major bias. Pick any major issue of contention, and chances are that the two major parties will tend to have opposing views on the subject. And naturally, the saner party of the two tends to hold the more reasonable view, because they are less crazy. But you can’t defend the more reasonable point of view now, because then you’re defending the less-crazy party, and that’s politics. Instead, you can get free karma just by saying something trite like “well, both sides have important points on the matter” or “both parties have their own flaws” or “politics in general are messed up”, because that just sounds so reasonable and fair, and who doesn’t like things to be reasonable and fair? But I don’t think we live in a reasonable and fair world.

It’s hard to prove the existence of such a bias, so this is mostly just an impression I have. But I can give a couple of points in support of this impression. Firstly, there are the frequent accusations of groupthink towards Less Wrong, which I am increasingly though reluctantly prone to agree with. I can’t help but notice that posts which remark on, for example, *retracted* being a thing tend to get quite a few downvotes, while posts that take care to express the nuance of the issue get massive upvotes regardless of whether there really are two sides to the issue. Then there are the community poll results, which show that, for example, 30% of Less Wrongers favour a particular political allegiance even though only 1% of voters vote for the most closely corresponding party. I sincerely doubt that this skewed representation is the result of honest and reasonable discussion on Less Wrong that has convinced members to follow what is otherwise a minority view, since I have never seen any such discussion. So without necessarily criticizing the position itself, I have to wonder what causes this skewed representation. I fear that this “let’s not criticize political views” stance is causing Less Wrong to shift towards holding more and more eccentric views, since a lack of criticism can be taken as tacit approval. What especially worries me is that giving the impression that all sides are equal automatically lends credibility to the craziest viewpoint, as proponents of that side can now say that sceptics take their views seriously, which benefits them the most. This seems to me literally the worst possible outcome of any politics debate.

I find that the same rule holds for politics as for life in general: You can try to win or you can give up and lose by default, but you can’t choose not to play.

A Gamification Of Education: a modest proposal based on the Universal Decimal Classification and RPG skill trees

13 Ritalin 07 July 2013 06:27PM

While making an inventory of my personal library and applying the Universal Decimal Classification to it, I found myself discovering a systematized classification of fields of knowledge, nested and organized and intricate, many of which I didn't even know existed. I couldn't help but compare how information was therein classified with how it was imparted to me in engineering school. I also thought about how, often, software engineers and computer scientists are mostly self-taught, with even college mostly consisting of "here's a problem: go forth and figure out a way to solve it". This made me wonder whether another way of certified and certifiable education couldn't be achieved, and a couple of ideas sort of came to me.

It's pretty nebulous in my mind so far, but the crux of the concept would be a modular structure of education, where the academic institution essentially establishes precisely what information you need from each module and lets you get on with the activity of learning, with periodic exams that you can sign up for, which will certify your level and area of proficiency in each module.

A recommended tree of learning can be established, but it should be possible to skip intermediate tests if passing the final test demonstrates that you would have passed all the ones behind it (this would allow people coming from different academic systems to certify their knowledge quickly and easily, thus avoiding the classic "Doctor in Physics from the Former Soviet Union, current taxi driver in New York" scenario).

Thus, a universal standard of how much you have proven to know about what topics can be established.
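The prerequisite-skipping rule above can be sketched as a tiny data structure. The module names and the shape of the tree here are invented purely for illustration:

```python
# Hypothetical skill tree: each module maps to the modules it presupposes.
PREREQUISITES = {
    "calculus": ["algebra"],
    "algebra": ["arithmetic"],
    "arithmetic": [],
}

def certified_by(passed_module):
    """Passing a module's final test also certifies everything behind it."""
    certified = set()
    stack = [passed_module]
    while stack:
        module = stack.pop()
        if module not in certified:
            certified.add(module)
            stack.extend(PREREQUISITES[module])
    return certified

# Someone arriving from another academic system passes only the "calculus"
# exam and is thereby certified in the whole chain beneath it.
print(sorted(certified_by("calculus")))  # ['algebra', 'arithmetic', 'calculus']
```

A real tree would of course be far larger and module levels would be graded rather than binary, but the certification-by-closure idea is the same.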

Employers would then be free to request profiles in the format of such a tree. It need not be a binary "you need to have done all these courses and only these courses to work for us", they could be free to write their utility function for this or that job however they would see fit, with whichever weights and restrictions they would need.
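An employer's weighted profile with hard restrictions could look something like the following sketch; the modules, weights, and threshold are all invented for illustration:

```python
# Hypothetical job profile: per-module weights plus hard requirements,
# scored against a candidate's certified proficiency levels (0-5).
JOB_PROFILE = {
    "weights": {"railway engineering": 3.0, "management": 1.0, "rhetoric": 0.5},
    "required": {"railway engineering": 3},  # minimum certified level
}

def score(candidate_levels, profile):
    """Return None if a hard requirement fails, else the weighted score."""
    for module, minimum in profile["required"].items():
        if candidate_levels.get(module, 0) < minimum:
            return None
    return sum(weight * candidate_levels.get(module, 0)
               for module, weight in profile["weights"].items())

candidate = {"railway engineering": 4, "management": 2, "rhetoric": 1}
print(score(candidate, JOB_PROFILE))  # 3.0*4 + 1.0*2 + 0.5*1 = 14.5
```

The point of the sketch is that nothing forces the utility function to be binary: an employer can mix strict cutoffs with soft preferences however they see fit.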

Students and other learners would be free to advance in whichever tree they required, depending on what kind of profile they want to end up with at what age or point in time. One would determine what to learn based on statistical studies of what elements are, by and large, most desired by employers of/predictors of professional success in a certain field you want to work in.

One would find, for example, that mastering the peculiar field of railway engineering is essential to being a proficient railway engineer, but also that having studied, say, subjects involving people skills (from rhetoric to psychology to management) correlates positively with success in that field.

Conversely, a painter may find that learning about statistics, market predictions, web design, or cognitive biases correlates with a more successful career (whether in terms of income, copies sold, or public exposure... each person may optimize their own learning according to their own criteria).

One might even be able to calculate whether such complementary education is actually worth one's time, and which courses are the most cost-efficient.

I would predict that such a system would help society overall optimize how many people know what skills, and facilitate the learning of new skills and the updating of old ones for everyone, thus reducing structural unemployment, and preventing pigeonholing and other forms of professional arthritis.

I would even dare to predict that, given the vague, statistical, cluster-ish nature of this system, people would be encouraged to learn quite a lot more, and across a much wider range of fields, than they do now, when one must jump through a great many hoops and endure a great many constraints in space and time and coin to get access to some types of education (and to the acknowledgement of having acquired them).

Acquiring access to the actual sources of knowledge, a library (virtual or otherwise), lectures (virtual or otherwise), and so on, would be a private matter, up to the learner:

  • some of them already have the knowledge and just need to get it certified,
  • others can actually buy the books they want/need, especially if keeping them around as reference will be useful to them in the future,
  • others can subscribe to one or many libraries, of the on-site sort or by correspondence
  • others can buy access to pre-recorded lectures, peruse lectures that are available for free, or enroll in academic institutions whose ostensible purpose is to give lectures and/or otherwise guide students through learning, more or less closely
  • the same applies to finding study groups with whom you can work on a topic together: I can easily imagine dedicated social networks being created for that purpose, helping people pair up based on mutual distance, predicted personal affinity, shared goals, backgrounds, and so on. Who knows what amazing research teams might be born of the intellectual equivalent of OkCupid.

A thing that I would like very much about this system is that it would free up the strange conflicts of interest that hamper the function of traditional educational institutions.

When the ones who teach you are also the ones who grade you, the effort they invest in you can feel like a zero-sum game, especially if they are only allowed to let a percentage of you pass.

When the ones who teach you have priorities other than teaching (usually research, but some teachers are also involved in administrative functions, or even private interests completely outside the university's ivory tower), this can and often does reduce the energy and dedication they can or will allocate to the actual function of teaching, as opposed to the others.

By separating these functions, and the contradictory incentives they provide, the organizations performing them are free to optimize for each: 

  • Testing is optimized for predicting current and future competence in a subject: the testers whose tests are the most reliable have more employers requiring their certificates, and thus more people requesting that they test them
  • Teaching is optimized for getting the knowledge through whatever the heck the students want, whether it be to succeed at the tests or to simply master the subject (I don't know much game theory, but I'd naively guess that the spontaneous equilibrium between the teaching and testing institutions would lead to both goals becoming identical).
  • Researching is optimized for research (researchers are not teachers; dang it, those are very different skill-sets!). However, researchers and other experts get to have a pretty big say in what the tests test for and how, because their involvement makes the tests more trustworthy for employers, and because they, too, are employers.
  • And of course entire meta-institutions can spring from this, whose role is to statistically verify, over the long term,
    • how good a predictor of professional success in this or that field is passing the corresponding test,
    • how good a predictor of passing the test is being taught by this or that teaching institution, and
    • how good a predictor of the test being reliable is the input of these or those researchers and experts.
  • It occurs to me now that, if one wished to be really nitpicky about who watches the watchmen, there would probably be institutions testing the reliability of those meta-institutions, and so on and so forth... When does it stop? How do we avoid vested interests and little cheats and manipulations pulling an academic equivalent of the AAA certification of sub-prime junk debt in 2008?
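
The kind of verification those meta-institutions would perform is, at bottom, statistical: gather outcomes and measure how strongly one binary variable (passed the test) predicts another (succeeded professionally). Here is a minimal sketch in Python, with an invented toy cohort and a deliberately crude measure (the phi coefficient); a real meta-institution would need longitudinal data and far more careful statistics:

```python
# Hypothetical sketch: scoring how well passing a certification test
# predicts later professional success. All data here is invented.

def predictive_validity(passed, succeeded):
    """Phi coefficient between two binary outcomes: 1.0 means passing
    perfectly predicts success, 0.0 means no relationship at all."""
    a = sum(1 for p, s in zip(passed, succeeded) if p and s)
    b = sum(1 for p, s in zip(passed, succeeded) if p and not s)
    c = sum(1 for p, s in zip(passed, succeeded) if not p and s)
    d = sum(1 for p, s in zip(passed, succeeded) if not p and not s)
    denom = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return (a * d - b * c) / denom if denom else 0.0

# Toy cohort: most who passed the test went on to succeed.
passed    = [True, True, True, True, False, False, False, False]
succeeded = [True, True, True, False, True, False, False, False]
print(predictive_validity(passed, succeeded))  # → 0.5
```

The same function, fed (taught-by-institution-X, passed-the-test) pairs instead, would answer the second bullet as well.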

Another discrepancy I'd like to see solved is the difference between the official time it is supposed to take to obtain this or that degree, to learn this or that subject, and the actual statistical distribution of that time. Nowadays, a degree that's supposed to take you five years ends up taking up eight or ten years of your life. You find yourself having to go through the most difficult subjects again and again, because they are explained in an extremely rushed way, the materials crammed into a pre-formatted time. Other subjects are so exceedingly easy and thinly spread that you find that going to class is a waste of time, and that you're better off preparing one week before finals. Now, after having written all of the above, my mind is quite spent, and I don't feel capable of either anticipating the effect of my proposed idea on this particular problem, or of offering any solutions. Nevertheless, I wish to draw attention to it, so I'm leaving this paragraph in until I can amend it to something more useful/promising.

I hereby submit this idea to the LW community for screening and sounding out. I apologize in advance for taking your time, just in case this idea appears to be flawed enough to be unsalvageable. If you deem the concept good but flawed, we could perhaps work on ironing out those kinks together. If, afterwards, this seems to you like a good enough idea to implement, know that good proposals are a dime a dozen; if there is any interest in seeing something like this happen, we will need to move on to properly understanding the current state of secondary/higher education, and to figuring out what incentives/powers/leverages are needed to actually get it implemented.

1By ivory tower I simply mean the protected environment where professors teach, researchers research, and students study, with multiple buffers between it and the ebb and flow of political, economic, and social turmoil. No value judgement is intended.

EDIT: And now I look upon the title of this article and realize that, though I had comparisons to games in mind, I never got around to writing them down. My inspirations here were mostly Civilization's Research Trees, RPG Skill Scores and Perks, and, in particular, Skyrim's skills and perks tree.

Basically, your level at whatever skill improves by studying and by practising it rather than merely by levelling up, and, when you need to perform a task that's outside your profile, you can go and learn it without having to commit to a class. Knowing the right combination of skills at the right level lets you unlock perks or access previously-unavailable skills and applications. What I like the most about it is that there's a lot of freedom to learn what you want and be who you want to be according to your own tastes and wishes, but, overall, it sounds sensible and is relatively well-balanced. And of course there's the fact that it allows you to keep a careful tally of how good you are at what things, and the sense of accomplishment is so motivating and encouraging!
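
As a toy illustration of the skill-and-perk mechanic described above (all skill names, perk names, and level thresholds are invented for the example, not a proposal for an actual curriculum):

```python
# A minimal sketch of the Skyrim-style mechanic applied to learning:
# skills level up through practice, and perks (advanced topics)
# unlock once prerequisite skill levels are met.

class Learner:
    # perk -> {skill: minimum level required}; invented examples
    PERKS = {
        "linear regression": {"algebra": 3, "statistics": 2},
        "machine learning":  {"statistics": 4, "programming": 3},
    }

    def __init__(self):
        self.levels = {}  # skill -> current level

    def practice(self, skill, sessions=1):
        """Each practice session raises the skill by one level."""
        self.levels[skill] = self.levels.get(skill, 0) + sessions

    def unlocked_perks(self):
        """Perks whose every prerequisite level is currently met."""
        return [perk for perk, reqs in self.PERKS.items()
                if all(self.levels.get(s, 0) >= lvl
                       for s, lvl in reqs.items())]

learner = Learner()
learner.practice("algebra", 3)
learner.practice("statistics", 2)
print(learner.unlocked_perks())  # → ['linear regression']
```

The appeal of the design is that prerequisites are data rather than policy: anyone can inspect exactly which practice unlocks which perk.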

Speaking of which, several networks' and consoles' Achievement systems also strike me as motivators for keeping track of what one has achieved so far, to look back and be able to say "I've come a long way" (in an effect similar to that of gratitude journals), and also to accomplish a task and have this immediate and universal acknowledgement that you did it, dammit (and, for those who care about that kind of thing, the chance to rub it in the faces of those who haven't).

I would think our educational systems could benefit from this kind of modularity and from this ability to keep track of things in a systematic way. What do you guys think?

Near-Term Risk: Killer Robots a Threat to Freedom and Democracy

10 Epiphany 14 June 2013 06:28AM

A new TED talk video just came out by Daniel Suarez, author of Daemon, explaining how autonomous combat drones with a capability called "lethal autonomy" pose a threat to democracy.  Lethal autonomy is what it sounds like - the ability of a robot to kill a human without requiring a human to make the decision.

He explains that a human decision-maker is not a necessity for combat drones to function.  This has potentially catastrophic consequences, as it would allow a small number of people to concentrate a very large amount of power, ruining the checks and balances of power between governments and their people and the checks and balances of power between different branches of government.  According to Suarez, about 70 countries have begun developing remotely piloted drones (like predator drones), the precursor to killer robots with lethal autonomy.

Daniel Suarez: The kill decision shouldn't belong to a robot

One thing he didn't mention in this video is that there's a difference in obedience levels between human soldiers and combat drones.  Drones are completely obedient, but humans can revolt.  Because they can rebel, human soldiers present an obstacle to the power that would-be tyrants could otherwise obtain.  Drones won't provide this type of protection whatsoever.  Obviously, relying on human decision-making is not perfect.  Someone like Hitler can manage to convince people to make poor ethical choices - but still, they need to be convinced, and that requirement may play a major role in protecting us.  Consider this: it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is to seize total power and murder all who oppose him.  It is not at all unthinkable, however, that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risking a mutiny.  The amount and variety of power grabs that a tyrant with a sufficiently powerful robot army could get away with is unlimited.

Something else he didn't mention is that because we can optimize technologies more easily than we can optimize humans, it may be possible to produce killer robots in less time than it takes to raise armies of human soldiers, and with less expense than training and paying those soldiers.  Considering the salaries and benefits paid to soldiers and the 18-year lead time on human development, it is possible that an overwhelmingly large army of killer robots could be built more quickly than human armies and with fewer resources.

Suarez's solution is to push for legislation that makes producing robots with lethal autonomy illegal.  There are, obviously, pros and cons to this method.  Another method (explored in Daemon) is that if the people have 3-D printers, then the people may be able to produce comparable weapons which will then check and balance their government's power.  This method has pros and cons as well. I came up with a third method which is here.  I think it's better than the alternatives but I would like more feedback.

As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI (MIRI is interested in the existential risks posed by AGI).  That means it's up to us - the people - to develop our understanding of this subject and spread the word to others.  Of all the forums on the internet, LessWrong is one of the most knowledgeable when it comes to artificial intelligence, so it's a logical place to fire up a discussion on this.  I searched LessWrong for terms like "checks and balances" and "Daemon" and I just don't see evidence that we've done a group discussion on this issue.  I'm starting by proposing and exploring some possible solutions to this problem and some pros and cons of each.

To keep things organized, let's put each potential solution, pro and con into a separate comment.

A Viable Alternative to Typing

2 fowlertm 06 June 2013 05:38AM

I'm thinking about writing a more substantive post about how humans work and how we can work better, a little like this one.  As is common with these sorts of things, once I started to do research and pull on various threads, it turned out that the field was pretty deep and would require time to understand.  But in the meantime, I just thought I would link to this video of someone programming using only their voice. 

As I suffer from symptoms of carpal tunnel syndrome, this is of particular interest to me.  Once I watched it, I decided to start looking at different voice recognition software so that I could still get some work done while typing less.  I'm happy to say that even the default speech recognition software that came with Windows is actually very capable and accurate.  I dictated almost this entire post using that software.

As far as I can tell, Dragon Naturally Speaking is the gold standard in voice recognition software.  It does come with a pretty hefty price tag, but it may be worth it if you have serious repetitive stress injuries, or as a preventative measure if you're someone who spends a lot of time at their computer.  And if that doesn't work, chances are good your computer has adequate software pre-installed.  

The autopilot problem: driving without experience

23 Stuart_Armstrong 13 May 2013 12:42PM

Consider a mixed system, in which an automated system is paired with a human overseer. The automated system handles most of the routine tasks, while the overseer is tasked with looking out for errors and taking over in extreme or unpredictable circumstances. Examples of this could be autopilots, cruise control, GPS direction finding, high-frequency trading – in fact nearly every automated system has this feature, because they nearly all rely on humans "keeping an eye on things".

But often the human component doesn't perform as well as it should do – doesn't perform as well as it did before part of the system was automated. Cruise control can impair driver performance, leading to more accidents. GPS errors can take people far more off course than following maps did. When the autopilot fails, pilots can crash their planes in rather conventional conditions. Traders don't understand why their algorithms misbehave, or how to stop this.

There seem to be three factors at work here:

  1. Firstly, if the automation performs flawlessly, the overseers will become complacent, blindly trusting the instruments and failing to perform basic sanity checks. They will have far less procedural understanding of what's actually going on, since they have no opportunity to exercise their knowledge.
  2. This goes along with a general deskilling of the overseer. When the autopilot controls the plane for most of its trip, pilots get far less hands-on experience of actually flying the plane. Paradoxically, less efficient automation can help with both these problems: if the system fails 10% of the time, the overseer will watch and understand it closely.
  3. And when the automation does fail, the overseer will typically lack situational awareness of what's going on. All they know is that something extraordinary has happened, and they may have the (possibly flawed) readings of various instruments to guide them – but they won't have a good feel for what happened to put them in that situation.

So, when the automation fails, the overseer is generally dumped into an emergency situation, whose nature they are going to have to deduce, and, using skills that have atrophied, they are going to have to take on the task of the automated system that has never failed before and that they have never had to truly understand.

And they'll typically get blamed for getting it wrong.

Similarly, if we design AI control mechanisms that rely on the presence of a human in the loop (such as tools AIs, Oracle AIs, and, to a lesser extent, reduced impact AIs), we'll need to take the autopilot problem into account, and design the role of the overseer so as not to deskill them, and not count on them being free of error.

Post ridiculous munchkin ideas!

55 D_Malik 15 May 2013 10:27PM

Thus spake Eliezer:

A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells.  Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else.  Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death.  Or figures out how to build the real-life version of the cycle of infinite wish spells.

It seems that many here might have outlandish ideas for ways of improving our lives. For instance, a recent post advocated installing really bright lights as a way to boost alertness and productivity. We should not adopt such hacks into our dogma until we're pretty sure they work; however, one way of knowing whether a crazy idea works is to try implementing it, and you may have more ideas than you're planning to implement.

So: please post all such lifehack ideas! Even if you haven't tried them, even if they seem unlikely to work. Post them separately, unless some other way would be more appropriate. If you've tried some idea and it hasn't worked, it would be useful to post that too.

Googling is the first step. Consider adding scholarly searches to your arsenal.

19 Tenoke 07 May 2013 01:30PM

Related to: Scholarship: How to Do It Efficiently

There has been a slightly increased focus on the use of search engines lately. I agree that using Google is an important skill - in fact, I believe that for years I have come across as significantly more knowledgeable than I actually am, just by quickly looking for information when I am asked something.

However, there are obviously some types of information that are more accessible via Google and some that are less so. For example, distinct characteristics, specific dates of events, etc. are easily googleable1, and you can expect to quickly find accurate information on the topic. On the other hand, if you want to find out more ambiguous things, such as the effect of having more friends on weight, or even something like the negative and positive effects of a substance, then googling might leave you with contradictory results, inaccurate information, or at the very least it will likely take you longer to get to the truth.

I have observed that in the latter case (when the topic is less 'googleable'), most people, even those knowledgeable about search engines and 'science', will just stop searching for information after not finding anything on Google, or even before2, unless they are actually willing to devote a lot of time to it. This is where my recommendation comes in: consider doing a scholarly search, like the one provided by Google Scholar.

And, no, I am not suggesting that people should read a bunch of papers on every topic that they discuss. By using some simple heuristics we can easily gain a pretty good picture of the relevant information on a large variety of topics in a few minutes (or less in some cases). The heuristics are as follows:

1. Read only, or mainly, the abstracts. This is what saves you time while still giving you a lot of information in return, and it is the key to the most cost-effective way to quickly find information from a scholarly search. Often you won't have immediate access to the full paper anyway, but you can almost always read the abstract. If you follow the other heuristics, you will still be looking at relatively 'accurate' information most of the time. On the other hand, if you want more information and have access to the full paper, the discussion and conclusion sections are usually the second-best thing to look at; and if you are unsure about the quality of the study, you should also look at the methods section to identify its limitations.3

2. Look at the number of citations for an article. The higher the better. Fewer than 10 citations usually means you can find a better paper.

3. Look at the date of the paper. More recent is often better. However, you should expect fewer citations for more recent articles and adjust accordingly. For example, if an article came out in 2013 but has already been cited 5 times, that is probably a good sign. For new articles, the subheuristic I use is to evaluate the 'accuracy' of the article by judging the author's general credibility instead - an argument from authority.

4. Meta-analyses/Systematic Reviews are your friend. This is where you can get the most information in the least amount of time!

5. If you cannot find anything relevant fiddle with your search terms in whatever ways you can think of (you usually get better at this over time by learning what search terms give better results).

That's the gist of it. By reading a few abstracts in a minute or two, you can effectively search for information on our scientific knowledge of a subject with almost the same speed as searching for specific information on topics I dubbed googleable. In my experience, scholarly searches on pretty much anything can be really beneficial. Do you believe that drinking beer is bad but drinking wine is good? Search on Google Scholar! Do you think that it is a fact that social interaction is correlated with happiness? Google Scholar it! Sure, it might seem obvious to you that X, but it doesn't hurt to search Google Scholar for a minute just to be able to cite a decent study on the topic to the X disbelievers.
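
Heuristics 2 and 3 can even be combined mechanically. Below is a hedged sketch (the scoring rule and the example papers are invented, and real scholarly search tools expose citation data differently): rank results by citations per year since publication, so a recent paper isn't automatically buried under older ones that have simply had more time to accumulate citations.

```python
# Invented illustration of heuristics 2 and 3: rank papers by
# citations per year, so recent papers aren't penalized for having
# had less time to be cited. The paper data below is made up.

def paper_score(citations, year, current_year):
    age = max(current_year - year, 1)  # avoid dividing by zero
    return citations / age

papers = [
    ("Old classic, 200 cites", 200, 2000),
    ("Recent paper, 5 cites", 5, 2013),
    ("Stale paper, 8 cites", 8, 2005),
]
ranked = sorted(papers,
                key=lambda p: paper_score(p[1], p[2], current_year=2013),
                reverse=True)
print([title for title, _, _ in ranked])
# → ['Old classic, 200 cites', 'Recent paper, 5 cites', 'Stale paper, 8 cites']
```

Note that the recent paper with only 5 citations outranks the older one with 8, which is exactly the adjustment heuristic 3 asks for.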

This post might not be useful to some people but it is my belief that scholarly searches are the next step of efficient information seeking after googling and that most LessWrongers are not utilizing this enough. Hell, I only recently started doing this actively and I still do not do it enough. Furthermore I fully agree with this comment by gwern:

My belief is that the more familiar and skilled you are with a tool, the more willing you are to reach for it. Someone who has been programming for decades will be far more willing to write a short one-off program to solve a problem than someone who is unfamiliar and unsure about programs (even if they suspect that they could get a canned script copied from StackExchange running in a few minutes). So the unwillingness to try googling at all is at least partially a lack of googling skill and familiarity.

A lot of people will be reluctant to start doing scholarly searches because they have barely done any, or have never done one. I want to tell those people to still give it a try. Start by searching for something easy, maybe something you already know from LessWrong or from somewhere else. Read a few abstracts; if you do not understand a given abstract, try finding other papers on the topic - some authors adopt a more technical style of writing, others focus mainly on statistics, etc., but you should still be able to find some good information if you read multiple abstracts and identify the main points. If you cannot find anything relevant, then move on and try another topic.

P.S. In my opinion, once you are comfortable enough to have scholarly searches as part of your arsenal, you will rarely have days when there is nothing to look up. If you are doing one scholarly search per month, for example, you are most probably not fully utilizing this skill.

1. By googleable I mean that the search terms are Google-friendly - you can relatively easily and quickly find relevant and accurate information.
2. If the people in question have developed a sense for what type of information is more accessible via Google, then they might not even try to google the less accessible type of thing.
3. If you want a better and more accurate view of the topic in question, you should read the full paper. The heuristic of mainly focusing on abstracts is cost-effective, but it invariably results in a loss of information.
