Less Wrong is a community blog devoted to refining the art of human rationality.
When is it faster to rediscover something on your own than to learn it from someone who already knows it?
Sometimes it's faster to re-derive a proof or algorithm than to look it up. Keith Lynch re-invented the fast Fourier transform because he was too lazy to walk all the way to the library to get a book on it, although that's an extreme example. But if you have a complicated proof already laid out before you, and you are not Marc Drexler, it's generally faster to read it than to derive a new one. Yet I found a knowledge-intensive task where it would have been much faster to tell someone nothing at all than to tell them how to do it.
A Gamification Of Education: a modest proposal based on the Universal Decimal Classification and RPG skill trees
While making an inventory of my personal library and applying the Universal Decimal Classification to it, I found myself discovering a systematized classification of the fields of knowledge, nested and organized and intricate, many of which I didn't even know existed. I couldn't help but compare how information was classified there with how it was imparted to me in engineering school. I also thought about how software engineers and computer scientists are often mostly self-taught, with even college largely consisting of "here's a problem: go forth and figure out a way to solve it." This made me wonder whether another way of certified and certifiable education couldn't be achieved, and a couple of ideas came to me.
It's still pretty nebulous in my mind, but the crux of the concept is a modular structure of education, where the academic institution establishes precisely what information you need from each module and lets you get on with the activity of learning, with periodic exams you can sign up for that certify your level and area of proficiency in each module.
A recommended tree of learning could be established, but it should be possible to skip intermediate tests when passing the final test proves you would have passed all the prerequisites behind it (this would let people coming from different academic systems certify their knowledge quickly and easily, avoiding the classic "Doctor of Physics from the former Soviet Union, current taxi driver in New York" scenario).
Thus, a universal standard of how much you have proven to know about what topics can be established.
Employers would then be free to request profiles in the format of such a tree. It need not be a binary "you must have done all these courses and only these courses to work for us": they would be free to write their utility function for this or that job however they saw fit, with whichever weights and restrictions they needed.
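To make the idea concrete, here is a minimal sketch of what such an employer "utility function" over certified profiles might look like. The module names, levels, weights, and threshold are all invented for illustration; nothing here comes from any real certification standard.

```python
# A hypothetical employer utility function over certified skill profiles.
# Module names, levels, and weights are illustrative assumptions.

def score_candidate(profile, required, weights):
    """Return None if a hard requirement is unmet, else a weighted score.

    profile:  {module: certified_level}
    required: {module: minimum_level}    -- hard restrictions
    weights:  {module: weight_per_level} -- soft preferences
    """
    for module, minimum in required.items():
        if profile.get(module, 0) < minimum:
            return None  # fails a hard restriction, candidate filtered out
    return sum(w * profile.get(m, 0) for m, w in weights.items())

# Two made-up candidate profiles.
alice = {"signal-processing": 4, "rhetoric": 2, "statistics": 3}
bob = {"signal-processing": 1, "statistics": 5}

required = {"signal-processing": 2}             # must hold at least level 2
weights = {"statistics": 2.0, "rhetoric": 1.0}  # per-level preference weights

print(score_candidate(alice, required, weights))  # 8.0
print(score_candidate(bob, required, weights))    # None: requirement unmet
```

The point is that the employer's criteria need not be a fixed course list: hard restrictions and soft weights can be mixed freely per job.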
Students and other learners would be free to advance along whichever tree they required, depending on what kind of profile they want to end up with at what age or point in time. One would determine what to learn based on statistical studies of which elements are, by and large, most desired by employers in, or most predictive of professional success in, the field one wants to work in.
One would find, for example, that mastering the peculiar field of railway engineering is essential to being a proficient railway engineer, but also that having studied things involving people skills (from rhetoric to psychology to management) correlates positively with success in that field.
Conversely, a painter may find that learning about statistics, market predictions, web design, or cognitive biases correlates with a more successful career (whether in terms of income, of copies sold, or of public exposure... each may optimize their own learning according to their own criteria).
One might even be able to calculate whether such complementary education is actually worth one's time, and which options are the most cost-efficient.
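The cost-efficiency calculation itself can be sketched as a back-of-the-envelope expected-value comparison. Every number below is invented purely for illustration; a real calculation would plug in the statistical estimates described above.

```python
# A toy cost-efficiency comparison for complementary modules.
# All figures (uplift, hours, hourly value) are invented assumptions.

def roi(expected_income_uplift, study_hours, hourly_value):
    """Expected yearly income uplift per unit of opportunity cost."""
    opportunity_cost = study_hours * hourly_value
    return expected_income_uplift / opportunity_cost

modules = {
    "statistics":       roi(3000, 120, 20),  # 3000 / 2400 = 1.25
    "web-design":       roi(1500,  80, 20),  # 1500 / 1600 = 0.9375
    "cognitive-biases": roi( 800,  40, 20),  #  800 /  800 = 1.0
}

best = max(modules, key=modules.get)
print(best)  # statistics
```

Under these made-up numbers, statistics wins; the interesting part is that the learner, not the institution, chooses what to plug into the formula.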
I would predict that such a system would help society overall optimize how many people know what skills, and facilitate the learning of new skills and the updating of old ones for everyone, thus reducing structural unemployment, and preventing pigeonholing and other forms of professional arthritis.
I would even dare to predict that, given the vague, statistical, cluster-ish nature of this system, people would be encouraged to learn quite a lot more, and across a much wider range of fields, than they do now, when one must jump through a great many hoops and endure a great many constraints of space, time, and coin to get access to some types of education (and to acknowledgement of having acquired them).
Acquiring access to the actual sources of knowledge (a library, virtual or otherwise; lectures, virtual or otherwise; and so on) would be a private matter, up to the learner:
- some already have the knowledge and just need to get it certified;
- others can buy the books they want or need, especially if keeping them around as reference will be useful in the future;
- others can subscribe to one or many libraries, of the on-site sort or by correspondence;
- others can buy access to pre-recorded lectures, peruse lectures that are available for free, or enroll in academic institutions whose ostensible purpose is to give lectures and/or otherwise guide students through learning, more or less closely;
- the same applies to finding study groups to work on a topic with: I can easily imagine dedicated social networks created for that purpose, pairing people up based on mutual distance, predicted personal affinity, shared goals, backgrounds, and so on. Who knows what amazing research teams might be born of the intellectual equivalent of OkCupid?
A thing I would like very much about this system is that it would dissolve the strange conflicts of interest that hamper the functioning of traditional educational institutions.
When the ones who teach you are also the ones who grade you, the effort they invest in you can feel like a zero-sum game, especially if they are only allowed to let a percentage of you pass.
When the ones who teach you have priorities other than teaching (usually research, but some teachers are also involved in administrative functions, or even in private interests entirely outside the university's ivory tower1), this can and often does reduce the energy and dedication they can or will allocate to the actual function of teaching.
By separating these functions, and the contradictory incentives they provide, the organizations performing them are free to optimize for each:
- Testing is optimized for predicting current and future competence in a subject: the testers whose tests are the most reliable have more employers requiring their certificates, and thus more people requesting to be tested by them.
- Teaching is optimized for getting the knowledge through whatever the heck the students want, whether it be to succeed at the tests or to simply master the subject (I don't know much game theory, but I'd naively guess that the spontaneous equilibrium between the teaching and testing institutions would lead to both goals becoming identical).
- Researching is optimized for research (researchers are not teachers, dang it; those are very different skill-sets!). However, researchers and other experts get a pretty big say in what the tests test for and how, both because their involvement makes the tests more trustworthy to employers and because they, too, are employers.
- And of course entire meta-institutions can spring from this, whose role is to statistically verify, over the long term:
  - how good a predictor of professional success in this or that field passing the corresponding test is,
  - how good a predictor of passing the test being taught by this or that teaching institution is, and
  - how good a predictor of a test's reliability the input of these or those researchers and experts is.

It occurs to me now that, if one wished to be really nitpicky about who watches the watchmen, there would presumably be institutions testing the reliability of those meta-institutions, and so on and so forth... Where does it stop? How do we keep vested interests and little cheats and manipulations from pulling an academic equivalent of the AAA certification of sub-prime junk debt in 2008?
Another discrepancy I'd like to see resolved is the difference between the official time it is supposed to take to obtain this or that degree, or to learn this or that subject, and the actual statistical distribution of that time. Nowadays, a degree that's supposed to take five years ends up taking eight or ten years of your life. You find yourself going through the most difficult subjects again and again, because they are explained in an extremely rushed way, the materials crammed into a pre-formatted time slot. Other subjects are so exceedingly easy and thinly spread that going to class is a waste of time, and you're better off preparing for them the week before finals.

Now, after having written all of the above, my mind is quite spent, and I don't feel capable of anticipating the effect of my proposed idea on this particular problem, nor of offering any solutions. Nevertheless, I wish to draw attention to it, so I'm leaving this paragraph in until I can amend it into something more useful or promising.
I hereby submit this idea to the LW community for screening and sounding-board feedback. I apologize in advance for taking your time, just in case this idea turns out to be flawed enough to be unsalvageable. If you deem the concept good but flawed, we could perhaps work on ironing out those kinks together. If, afterwards, this seems like a good enough idea to implement, know that good proposals are a dime a dozen; if there is any interest in seeing something like this happen, we will need to move on to properly understanding the current state of secondary and higher education, and figuring out what incentives, powers, and leverage are needed to actually get it implemented.
1By "ivory tower" I simply mean the protected environment where professors teach, researchers research, and students study, with multiple buffers between it and the ebb and flow of political, economic, and social turmoil. No value judgement is intended.
EDIT: And now I look upon the title of this article and realize that, though I had comparisons to games in mind, I never got around to writing them down. My inspirations here were mostly Civilization's Research Trees, RPG Skill Scores and Perks, and, in particular, Skyrim's skills and perks tree.
Basically, your level at whatever skill improves by studying and by practising it rather than merely by levelling up, and, when you need to perform a task that's outside your profile, you can go and learn it without having to commit to a class. Knowing the right combination of skills at the right level lets you unlock perks or access previously-unavailable skills and applications. What I like the most about it is that there's a lot of freedom to learn what you want and be who you want to be according to your own tastes and wishes, but, overall, it sounds sensible and is relatively well-balanced. And of course there's the fact that it allows you to keep a careful tally of how good you are at what things, and the sense of accomplishment is so motivating and encouraging!
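The skill-and-perk mechanic described above can be sketched as a tiny data structure: skills with levels, and perks that unlock once the right combination of levels is met. The skill names, levels, and perk prerequisites below are invented examples, not a proposed curriculum.

```python
# A minimal skill-tree sketch in the spirit of Skyrim-style perks.
# Skill names, levels, and perk prerequisites are invented assumptions.

PERKS = {
    # perk name -> required skill levels that must all be met
    "signal-analysis": {"calculus": 3, "programming": 2},
    "circuit-design":  {"physics": 2, "calculus": 2},
}

def unlocked_perks(skills):
    """Return the perks whose every prerequisite skill level is met."""
    return {
        perk
        for perk, reqs in PERKS.items()
        if all(skills.get(skill, 0) >= level for skill, level in reqs.items())
    }

student = {"calculus": 3, "programming": 2, "physics": 1}
print(unlocked_perks(student))  # {'signal-analysis'}
```

Raising a single skill level (here, physics from 1 to 2) would unlock a new perk without re-taking anything else, which is exactly the "learn one thing without committing to a whole class" property described above.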
Speaking of which, several networks' and consoles' Achievement systems also strike me as motivators: for keeping track of what one has achieved so far, to look back and be able to say "I've come a long way" (in an effect similar to that of gratitude journals), and also to accomplish a task and have immediate, universal acknowledgement that you did it, dammit (and, for those who care about that kind of thing, the chance to rub it in the face of those who haven't).
I would think our educational systems could benefit from this kind of modularity and from this ability to keep track of things in a systematic way. What do you guys think?
Alright, guys. The main complaint about the discussion article was simply "hoax," yelled as loudly or as quietly as the user felt about it. Hopefully this won't get the same treatment.
We have been evaluating educational, grant-funded programs for 20 years. Throughout these years, we have witnessed a slow change in how students are selected for academic services. Traditionally, students were targeted for academic services and opportunities based on demographic characteristics—usually race and, until recently, family income status (based on free or reduced-price lunch). Wealthier, white students are given challenging lessons and tracked into the advanced courses, while their non-white and poorer peers are tracked low and given remediation services. The latter students are often referred to as "at-risk," though we are finding more and more that the greatest risk these students face is being placed into inappropriate remedial courses which eventually bar them from access to advanced courses. After students have been labeled "at-risk," and then tracked inappropriately and provided unnecessary (and often harmful) remediation, their downward trajectory continues throughout their education. The demographic gap this creates continues to expand, despite the lip service and the excessive tax and grant funds spent to eliminate—or at least lessen—this very gap. This "at-risk" model of assigning services is slowly being replaced by a "pro-equity" model. The driving force behind this change is the availability and use of data.
The literature is full of documentation that certain demographic groups have traditionally had less access to advanced math and science courses than equally scoring students belonging to demographic groups thought to be “not at risk.” Some examples from research follow.
• Sixth grade course placement is the main predictor of eighth grade course placement, and social factors, mainly race, are key predictors of sixth grade course placement (O'Connor, Lewis, & Mueller, 2007).
• Among low-income students, little is done to assess which are high achievers. Few programs are aimed at them, and their numbers are lumped in with “adequate” achievers in No Child Left Behind reporting. As a result, little is known about effective practices for low-income students (Wyner, Bridgeland, & DiIulio Jr., 2007).
• In a California school district, researchers found that of students who demonstrated the ability to be admitted to algebra, 100% of the Asians, 88% of the whites, 51% of the Blacks, and 42% of the Latinos were admitted (Stone & Turba, 1999).
• Tracking has been described as “a backdoor device for sorting students by race and class.” Many researchers agree (Abu El-Haj & Rubin, 2009).
• When course grades are used to determine placement, studies show that some students’ grades “matter” more than others. Perceptions of race and social class are often used to determine placement (Mayer, 2008).
• Studies show that when schools allow students the freedom to choose which track they’ll take, teachers and counselors discourage many previously lower tracked students from choosing the higher track (Yonezawa, Wells, & Serna, 2002).
• The sequence of math students take in middle school essentially determines their math track for high school. In North Carolina, this is true because of math prerequisites for higher level math (North Carolina Department of Public Instruction, 2009).
We are seeing a move toward using objective data for placement into gateway courses, such as 8th grade algebra. Many school districts are beginning to use the Education Value-Added Assessment System (EVAAS) and other data systems whose scores predict success in 8th grade algebra as criteria for enrollment. This pro-equity model is replacing the traditional at-risk model, which relied on professional judgment. One example is Wake County, North Carolina, where Superintendent Tony Tata attributed a 44% increase in the number of students enrolled in algebra to the use of the predictive software EVAAS to identify students likely to be successful. The success rate in the course increased with the addition of these students (KeungHu, 2012).
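The placement rule at the heart of the pro-equity model is simple enough to sketch: enroll every student whose predicted score clears a cutoff, ignoring demographic labels entirely. The field names, scores, and threshold below are hypothetical, not the actual output of EVAAS or any district's data system.

```python
# A sketch of data-driven ("pro-equity") course placement: admit every
# student whose predicted score meets the cutoff, regardless of demographics.
# Field names, scores, and the threshold are invented assumptions.

def place_in_algebra(students, threshold=70.0):
    """Return the names of students whose predicted score meets the cutoff."""
    return [s["name"] for s in students if s["predicted_score"] >= threshold]

roster = [
    {"name": "A", "predicted_score": 82.0, "low_income": True},
    {"name": "B", "predicted_score": 65.0, "low_income": False},
    {"name": "C", "predicted_score": 74.5, "low_income": True},
]

print(place_in_algebra(roster))  # ['A', 'C']
```

Note that the `low_income` field never enters the decision; that is the whole contrast with the at-risk model, where demographic labels drove placement.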
Although the pro-equity model of using objective data to assign students to more rigorous courses has proven successful, many people resist it, clinging to the at-risk model and dismissing the objective data as inconclusive. According to old-school staff, the overlooked students who were predicted to succeed yet placed in lower tracks (disproportionately minorities) are "weaker," and allowing them into the gateway 8th-grade algebra course would be a disservice to them. (Not allowing them into this course ensures their bleak academic future.) Review of the data showed that strong students were being overlooked, and this objective use of data helps identify them (Sanders, Rivers, Enck, Leandro, & White, 2009).
The changes in education began with concern for aligning academic services with academic need. Aligning opportunities for rigor and enrichment is only just beginning. In the past, a large proportion of federal grant funds were for raising proficiency rates. In the at-risk model, grant funds were provided for services to the minority and poor demographic groups with the goals of raising academic proficiency rates. When we first started evaluating grant-funded programs, most federal grants were entirely in the at-risk model. The students were targeted for services based on demographic characteristics. The goals were to deliver the services to this group. Staff development was often designed to help staff understand children in poverty and what their lives are like, rather than helping them learn how to deliver an effective reading or math intervention. The accountability reports we were hired to write consisted of documentation that the correct demographic group was served, the program was delivered, and staff received their professional development. Proficiency rates were rarely a concern.
In 2004, the federal government developed the Program Assessment Rating Tool (PART) to provide accountability for grant-funded programs by rating their effectiveness. The PART system assigned scores to programs based on services being related to goals, the goals being appropriate for the individuals served, and student success being measured against quality standards and assessments. Programs that could not demonstrate whether they had been effective, because of a lack of data or clear performance goals, were given the rating "Results Not Demonstrated" (U.S. Office of Management and Budget and Federal Agencies, n.d., "The Program Assessment Rating Tool"). In 2009, nearly half (47%) of the U.S. Department of Education grant programs rated by the government received this rating, illustrating the difficulty of the transition to outcome-based accountability (U.S. Office of Management and Budget and Federal Agencies, n.d., "Department of Education programs").

The earliest changes were in accountability, not in program services or how students were targeted. Accountability reports began asking for pre- and post-comparisons of academic scores. For example, if funds were for raising proficiency rates in reading, then evaluation reports were required to compare pre- and post-reading scores. This was a confusing period: programs still targeted students based on demographic information and provided services that often had no research basis linking them to academic achievement, and professional development often remained focused on empathizing with children in poverty, even though the goals and objectives were now written in terms of participants raising their academic achievement to proficiency. We evaluators were often called in at the conclusion of programs to compare pre- and post-academic scores and determine whether participants had improved to grade-level proficiency.
We often saw the results of capable students being treated like low-achievers, assumed to have no self-esteem, and given remedial work. Such treatment damaged participants who had scored at or above proficient prior to services.
A typical narrative of an evaluation might read:
The goal of the program was to raise the percentage of students scoring proficient in reading. The program targeted and served low-income and minority students. Staff received professional development on understanding poor children. Services offered to students included remedial tutorials and esteem-building activities. When the program ended, pre-reading scores were obtained and compared with post-scores to measure progress toward the program objective. At that time, it was discovered that a large percentage of participants were proficient prior to receiving services.
Rather than cite our own evaluations, we found many examples from the school districts reporting on themselves.
Accelerated Learning Program
The following is a direct quote from a school system in North Carolina:
. . . Although ALP [Accelerated Learning Program] was designed primarily to help students reach proficiency as measured by End-of-Grade (EOG) tests, only 41.1% of those served showed below-grade-level scores on standard tests before service in literacy. In mathematics, 73.3% of students served had below-grade-level scores. ALP served about 40% of students who scored below grade level within literacy and within mathematics, with other services supporting many others. . . . Compared to those not served, results for Level I-II students were similar, but results for Level III-IV students were less positive. One third of non-proficient ALP mathematics students reached proficiency in 2008, compared to 42.1% of other students. (Lougee & Baenen, 2009).
Foundations of Algebra
This program was designed for students who fit specific criteria, yet it served many students who did not. Students who were below proficient or almost proficient were to be placed in courses to eventually prepare them for Algebra I. When criteria for placement are not met, determining program effectiveness is difficult, if not impossible. Students were likely entered into the program based on teacher recommendations, which were subsequently based on demographic factors such as race. The teachers “mistook” these students for below-proficient students when they were not. Had objective data, such as actual proficiency scores, been consulted, the proper students could have been served. The report indicates a success, as a higher percentage of these students than similar students who were not served enrolled in Algebra I. However, it is not known if this comparison group includes only students who actually meet the criteria, or if they are a heterogeneous mix of students of varying abilities. Missing data also makes program effectiveness evaluation difficult (Paeplow, 2010).
Partnership for Educational Success (PES)
This program was purportedly for students who are “at risk,” which is defined as students who scored below grade level on EOG (below proficiency) and have been “identified by the PES team as having family issues that interfere with school success.” What is meant by “family issues” is unclear. The majority of students served are Economically Disadvantaged (ED) (91.3%) and Black (71.5%). More than half the students served, according to the evaluation, were at or above grade level on their EOGs when they began the program, thus making program effectiveness difficult to judge. The family component is an integral part of the program, and outside agencies visit families. Many community organizations are involved. But if the staff could miss so easy a datum as EOG scores for so many students, one has to wonder about such a subjective criterion as “family issues.” The program appears to have targeted ED students, with little regard to prior performance data. Data for many students (43.5%) was missing. Teachers indicate that parents of the targeted families have become more involved in the school, but little else has changed (Harlow & Baenen, 2004).
Helping Hands

Helping Hands was initiated based on data indicating that Black males lag behind other groups in academic achievement. The program is supposed to serve Black males, and most of the participants fit these criteria. The program is also designed to improve academics, and to curtail absenteeism and suspensions. Although the percentage of selected participants who needed improvement in these areas was higher than it was for the overall population of the students served, not all students served demonstrated a need for intervention. Many students were at grade level, were not chronically absent, and had not been suspended. Yet they were served because they were Black and male (Paeplow, 2009).
Supplemental Educational Services at Hodge Road Elementary School

At Hodge Road Elementary School, students were tutored with remedial work in an after-school program. The only criterion the students had to meet to be allowed into the program was the inability to pay full price for their lunch. Their academic performance was irrelevant. (To be fair, these criteria were instituted by No Child Left Behind, and not the school system.) Most students were already reading and doing math at or above grade level (the two subjects for which tutoring was provided). The evaluation shows that giving remedial coursework to students who are at or above grade level, as if they were below grade level, can actually harm them. In the final statistics, 11.1% of Level III & IV 3rd through 5th graders scored below grade level after being served, compared with only 2% of a comparable group who were not served. An astonishing 23% of students in kindergarten through 2nd grade served who were at or above grade level prior to the tutoring scored below grade level afterward, compared with 8% of comparable students who were not served (Paeplow & Baenen, 2006).
AVID

AVID is a program designed for students who may be the first in their families to attend college and who are average academic performers. The program, developed in the 1980s, maintains that by providing support while holding students to high academic standards, the achievement gap will narrow as students succeed academically and go on to complete higher education. Fidelity of implementation is often violated, which, as proponents admit on AVID's own website (www.AVID.org), may compromise the entire program. Student participants must have a GPA of 2.0-3.5. We were asked to evaluate the Wake County Public School System's AVID program. Many students chosen for the program, however, did not fit the criteria (Lougee & Baenen, 2008). Because AVID requirements were not met, a meaningful evaluation was not possible.
This AVID program was implemented with the goal of increasing the number of under-represented students in 8th grade algebra. This was at a time when no criteria for enrollment in 8th grade algebra existed (i.e., there was no target to help the students reach), and high-scoring students in this very group were not being referred for enrollment in algebra. Under these conditions, the program makes no sense. In summary, the goal of this program was to enroll more low-income and minority students, and students whose parents didn't go to college, in 8th grade algebra. Only students recommended by teachers could enroll in 8th grade algebra, and the data showed that very high-scoring, low-income and minority students were not being recommended. Why do we think that students whose parents didn't go to college can't enroll in 8th grade algebra without first being in an intervention program? (How it is determined that the students' parents did not attend college is also not addressed.) The program is for low-average students; it served high-average students, and then still didn't recommend them for 8th grade algebra. The program is very expensive. We have evaluated it in many school districts, and we typically find the same results as in this report.
During this era, the interventions typically were not related to the desired outcomes by any research. For example, self-esteem-building activities were often provided to increase the odds of passing a math class or to improve reading scores. Sometimes programs were academic, but the claims of success were not research-based, nor was the relationship between the activities and the desired outcomes. Although many interventions were at least related to the academic subject area the program was trying to impact, it was not unheard of to see relaxation courses alone offered to increase math test scores, or makeovers and glamour shots to raise self-esteem, which in turn would allegedly raise reading scores.
During the last decade, education has slowly moved toward requiring accountability in terms of comparing pre- and post-scores. We saw this causing confusion and fear rather than clarity. More than once, when we reported to school districts that they had served significant numbers of students who were already at or above proficiency levels, they thought we were saying they had served high-income students instead of their target population of low-income students. We have seen many school systems assess their own programs, write evaluation reports like the examples above, and then continue to implement the programs without any changes. We have also worked with educators whose eyes were opened to the misalignment of services and needs, who learned to use data, to identify appropriate interventions, and to keep records that make accountability possible. We have seen these innovators close their achievement gaps while raising achievement at the top. But those around them didn't see this as replicable.
Race to the Top will affect the rate of change from the at-risk to the pro-equity model. Teacher and principal evaluations are going to include measures of growth in student learning (White House Office of the Press Secretary, 2009). EVAAS will be used to compare predicted scores with observed scores. If high-achieving students who are predicted to succeed in 8th grade algebra are tracked into the less rigorous 9th grade algebra, they are not likely to make their predicted growth.
We are moving out of this era, and the pace of change toward identifying student needs using appropriate data is picking up. North Carolina’s newly legislated program, Read to Achieve, mandates that reading interventions for students in K-3 be aligned to the literacy skills the students struggle with, and that data be used to determine whether students are struggling with literacy skills. Schools must also keep records for accountability. Although this approach seems logical, it is quite innovative compared with the past reading interventions that targeted the wrong students (North Carolina State Board of Education; Department of Public Instruction, n.d.).
Education Grant programs are now requiring that applicants specify what data they will use to identify their target population, and how the intervention relates to helping the participants achieve the program goals. Staff development must relate to delivering the services well, and accountability must show that these things all happened correctly, while documenting progress toward the program objectives. It is a new era. We are not there yet, but it is coming.
Harlow, K., & Baenen, N. (2004). E & R Report No. 04.09: Partnership for Educational Success 2002-03: Implementation and outcomes. Raleigh, NC: Wake County Public School System. Retrieved from http://www.wcpss.net/evaluation-research/reports/2004/0409partnership_edu.pdf
KeungHu. (2012). Wake County Superintendent Tony Tata on gains in Algebra I enrollment and proficiency. Retrieved from http://blogs.newsobserver.com/wakeed/wake-county-superintendent-tony-tata-on-gains-in-algebra-i-enrollment-and-proficiency
Lougee, A., & Baenen, N. (2008). E & R Report No. 08.07: Advancement Via Individual Determination (AVID): WCPSS Program Evaluation. Retrieved from http://www.wcpss.net/evaluation-research/reports/2008/0807avid.pdf
Lougee, A., & Baenen, N. (2009). E&R Report No. 09.27: Accelerated Learning Program (ALP) grades 3-5: Evaluation 2007-08. Retrieved from http://www.wcpss.net/evaluation-research/reports/2009/0927alp3-5_2008.pdf
Mayer, A. (2008). Understanding how U.S. secondary schools sort students for instructional purposes: Are all students being served equally? American Secondary Education, 36(2), 7–25.
North Carolina Department of Public Instruction. (2009). Course and credit requirements. Retrieved from http://www.ncpublicschools.org/curriculum/graduation
North Carolina State Board of Education; Department of Public Instruction. (n.d.). North Carolina Read to Achieve: A guide to implementing House Bill 950/S.L. 2012-142 Section 7A. Retrieved from https://eboard.eboardsolutions.com/Meetings/Attachment.aspx?S=10399&AID=11774&MID=783
O’Connor, C., Lewis, A., & Mueller, J. (2007). Researching “Black” educational experiences and outcomes: Theoretical and methodological considerations. Educational Researcher. Retrieved from http://www.sociology.emory.edu/downloads/O%5c’Connor_Lewis_Mueller_2007_Researching_black_educational_experiences_and_outcomes_theoretical_and_methodological_considerations.pdf
Paeplow, C. (2009). E & R Report No. 09.30: Intervention months grades 6-8: Elective results 2008-09. Raleigh, NC: Wake County Public School System. Retrieved from http://www.wcpss.net/evaluation-research/reports/2009/0930imonths6-8.pdf
Paeplow, C. (2010). E & R Report No. 10.28: Foundations of Algebra: 2009-10. Raleigh, NC: Wake County Public School System. Retrieved from http://assignment.wcpss.net/results/reports/2011/1028foa2010.pdf
Paeplow, C., & Baenen, N. (2006). E & R Report No. 06.09: Evaluation of Supplemental Educational Services at Hodge Road Elementary School 2005-06. Raleigh. Retrieved from http://www.wcpss.net/evaluation-research/reports/2006/0609ses_hodge.pdf
Sanders, W. L., Rivers, J. C., Enck, S., Leandro, J. G., & White, J. (2009). Educational Policy Brief: SAS® Response to the “WCPSS E & R Comparison of SAS © EVAAS © Results and WCPSS Effectiveness Index Results,” Research Watch, E&R Report No. 09.11, March 2009. Cary, NC: SAS. Retrieved from http://content.news14.com/pdf/sas_report.pdf
Stone, C. B., & Turba, R. (1999). School counselors using technology for advocacy. Journal of Technology in Counseling. Retrieved from http://jtc.colstate.edu/vol1_1/advocacy.htm
U.S. Office of Management and Budget and Federal Agencies. (n.d.). The Program Assessment Rating Tool (PART). Retrieved from http://www.whitehouse.gov/omb/expectmore/part.html
U.S. Office of Management and Budget and Federal agencies. (n.d.). Department of Education programs. Retrieved from http://www.whitehouse.gov/omb/expectmore/agency/018.html
White House Office of the Press Secretary. (2009). Fact Sheet: The Race to the Top. Washington D.C. Retrieved from http://www.whitehouse.gov/the-press-office/fact-sheet-race-top
Wyner, J. S., Bridgeland, J. M., & DiIulio Jr., J. J. (2007). Achievement trap: How America is failing millions of high-achieving students from low-income families. Jack Kent Cooke Foundation, Civic Enterprises, LLC. Retrieved from www.jkcf.org/assets/files/0000/0084/Achievement_Trap.pdf
Yonezawa, S., Wells, A. S., & Serna, I. (2002). Choosing tracks: "Freedom of choice" in detracking schools. American Educational Research Journal, 39(1), 37–67.
In this article I invite LessWrong users to learn the very basic math of something that is useful both to our community's goal of making better thinkers and to many of the unrelated discussions we often have here. I also present resources for further study to those interested. I wrote it based on the karma feedback given to this post in the monthly open thread.
Recently there has been a series of contributions made in Main that serve more as introductory and logistical material than novel contributions. Because of this, and because I hope it will grab more attention from newer members, I posted this in Main rather than in the Discussion section.
What is "game theory"?
Game theory is a mathematical method for analyzing calculated circumstances, such as in games, where a person’s success is based upon the choices of others. More formally, it is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers." An alternative term suggested "as a more descriptive name for the discipline" is interactive decision theory.
Game theory attempts to mathematically capture behaviour in strategic situations, in which an individual's success in making choices depends on the choices of others.
From both definitions it should be clear how this relates to the art of refining human rationality. Besides the general admonition that rationalists should win, for us humans, social animals that we are, there are few things in our lives that do not depend at least partially on the choices of others. Game theory is extensively used in, and connected to, fields as disparate as economics, psychology, political science, logic, sports and evolutionary biology.
As many have argued before, it is an important part of the map of the real world:
Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium.
You may not know it yet, but it is impossible to read this site for an extended period of time without running into concepts that are intimately tied to this field of study: Nash equilibrium, Pareto optimality, the Prisoner's Dilemma, non-zero-sum and zero-sum games, the decision theory talk that breaks out every now and then, ...
You can take the concepts one at a time, reading a few lines from a dictionary-like definition and trying to assimilate them without doing any of the connected mathematics. I wouldn't want to discourage you from that; it's better than guessing! But this approach has its limitations: one risks misunderstanding something, or, even more subtly, failing to appreciate nuance and running into practical difficulties when trying to apply this knowledge in the real world. At the very least, guessing the teacher's password is a problem. Those of you who looked up these phrases and concepts online probably realized that they fit into a wider framework, a framework I hope you can now begin to explore with simple math, even if only with a few tentative steps.
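To see how concrete these concepts become once you write them down, here is a small sketch that enumerates the pure-strategy Nash equilibria of a 2x2 game, using the Prisoner's Dilemma as the example. The payoff numbers below are the common textbook convention (years of freedom lost, so less negative is better), not anything canonical:

```python
# Pure-strategy Nash equilibria of a 2x2 game, illustrated on the
# Prisoner's Dilemma. Payoffs are (row player, column player).
COOPERATE, DEFECT = 0, 1
payoffs = {
    (COOPERATE, COOPERATE): (-1, -1),
    (COOPERATE, DEFECT):    (-3,  0),
    (DEFECT,    COOPERATE): ( 0, -3),
    (DEFECT,    DEFECT):    (-2, -2),
}

def pure_nash_equilibria(payoffs):
    """A profile is a Nash equilibrium if neither player can gain
    by unilaterally switching their own action."""
    actions = [COOPERATE, DEFECT]
    equilibria = []
    for a in actions:
        for b in actions:
            row_ok = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0]
                         for a2 in actions)
            col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1]
                         for b2 in actions)
            if row_ok and col_ok:
                equilibria.append((a, b))
    return equilibria

# Mutual defection is the only pure equilibrium, even though mutual
# cooperation would leave both players better off.
print(pure_nash_equilibria(payoffs))
```

The same ten-line search works for any finite two-player game; only the payoff dictionary changes.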
So what are the videos I should watch?
This fall (2011) there has been an ongoing class offered by two Stanford professors, Sebastian Thrun and Peter Norvig, called "Introduction to Artificial Intelligence". It has been talked about extensively on LW in several threads here, here and here. Many LWers have shown interest, quite a few signed up, and several of us are now preparing for its final exam. Among the material covered is an introduction to game theory. I've attended live lectures on the subject and watched some recorded ones, and in comparison this is one of the better short introductions I've seen so far. I especially like how each of the videos is a self-contained unit just a few minutes in length. Instead of having to commit to watching a 40- or 60-minute lecture, you just need to commit 2-5 minutes at a time.
The relevant units of the material are 13. Games and 14. Game Theory, both presented by Peter Norvig. They are not recordings of a professor presenting to a class in front of a blackboard; rather, they aim for the feeling of having a private tutor sitting down with you and explaining a few things with the help of a pen and a few pieces of paper (reminiscent of the style seen on Khan Academy). Currently you can still go directly to the site and view these videos logged in as a visitor (recommended). But to avoid a trivial inconvenience, and in case the YouTube videos outlast the current state of the website, I'm going to link directly to the YouTube videos and write down any relevant comments and missing information as well. Unit 13 especially assumes some previous knowledge you probably don't have; it deals primarily with the complexity of games and how computationally demanding it is to find solutions. It can be useful for picking up some terminology, but is otherwise skippable.
Don't worry: if you know, or look up, what an agent or player is and what utility is, the missing exotic stuff (à la POMDPs) that isn't explained as you go along doesn't matter much for our purposes.
13. Games (optional)
- Technologies Question ? (Solution) [One choice per row]
- Games Question ? (Solution) [Multiple choice per row]
- Single Player Game
- Two Player Game
- Two Player Function
- Time Complexity Question ? (Solution)
- Space Complexity Question ? (Solution)
- Chess Question ? (Solution)
- Complexity Reduction Question ? (Solution)
- Review Question ? (Solution)
- Reduce B
- Reduce B Question ? (Solution)
- Reduce M
- Computing State Values
- Complexity Reduction Benefits
- Pacman Question ? (Solution)
- Chance Question ? (Solution)
- Terminal State Question ? (Solution)
- Game Tree Question 1 ? (Solution)
- Game Tree Question 2 ? (Solution)
14. Game Theory
- Dominant Strategy Question ? (Solution) [This is where you learn about the famous Prisoner's Dilemma!]
- Pareto Optimal Question ? (Solution) [rot13 after solving: Gur dhvm vapbeerpgyl vqragvsvrf bayl gur obggbz evtug bhgpbzr nf Cnergb bcgvzny, ohg obgu gur hccre evtug naq obggbz yrsg ner nyfb Cnergb bcgvzny. Va gur hccre evtug ab bgure bhgpbzr vf zber cersreerq ol O. Yvxrjvfr sbe gur ybjre yrsg ab bgure bhgpbzr vf zber cersreerq ol N.]
- Equilibrium Question ? (Solution)
- Game Console Question 1 ? (Solution)
- Game Console Question 2 ? (Solution)
- 2 Finger Morra
- Tree Question ? (Solution)
- Mixed Strategy
- Solving the Game
- Mixed Strategy Issues
- 2x2 Game Question 1 ? (Solution) [Please enter probabilities and not percentages.]
- 2x2 Game Question 2 ? (Solution)
- Geometric Interpretation
- Game Theory Strategies
- Fed vs Politicians Question ? (Solution)
- Mechanism Design
- Auction Question ? (Solution)
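The 2 Finger Morra and Mixed Strategy videos walk through solving a small zero-sum game by hand; you can reproduce that arithmetic in a few lines as a sanity check. The payoff matrix below follows the standard even/odd convention (each player shows 1 or 2 fingers; if the total is even, the Even player wins that many points, otherwise the Odd player does), which I believe matches the lectures, but verify against the video:

```python
from fractions import Fraction

# Two-finger Morra, payoffs from Even's perspective.
# Rows = Even's move (1 or 2 fingers), columns = Odd's move.
payoff = [[Fraction(2), Fraction(-3)],
          [Fraction(-3), Fraction(4)]]

# In a mixed equilibrium, Even shows 1 finger with probability p chosen
# so that Odd is indifferent between her two pure strategies:
#   a*p + c*(1-p) = b*p + d*(1-p)
a, b = payoff[0]
c, d = payoff[1]
p = (d - c) / ((a - c) + (d - b))
value = a * p + c * (1 - p)

print(p)      # 7/12
print(value)  # -1/12: the game slightly favors the odd player
```

Using exact fractions instead of floats makes the classic result (a game value of -1/12) come out exactly rather than as 0.0833...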
At any point feel free to ask questions here in the comment section; I'm sure someone will gladly help you. The AI class reddit may also be a good resource. Once you are done with the short series of lectures, test your knowledge with these assignments:
- Max Min Question ? (Solution)
- Game Tree Question ? (Solution) [Unit 13 material. You should check children of pruned nodes as being pruned as well.]
- Strategy Question ? (Solution)
Note: I present this material in the form of a link to the video, followed by a question mark "?" if there is an answerable question with a solution video posted; the link to the solution is posted as "(Solution)". Any additional comments made as corrections to the videos, or information that may otherwise be missing in this format, are added in square brackets "[...]". I encourage people who are following this via the links rather than the site not to watch the solutions straight away, but first to work out what they think the answer should be. Don't worry if you get it wrong; sometimes the questions are unlikely to be answered correctly with the knowledge you have at that point. Their role is to make you remember and engage with the material, not to gauge your performance. The exceptions are the videos that come after Unit 14.
"I don't get it." or "It's not working." or "I didn't bother to watch more than a few."
First off, for those who, for whatever reason, didn't like the lectures given here, or found them dull or over your head: don't despair! If you feel you don't understand something, ask questions; I can guarantee that either I or someone else will answer. To those of you who feel you understand the material but just don't like the videos or the lecturer: don't worry, there are several other ways to approach the field. To point you on your way, here is a wide variety of quality alternatives, some of which may have approaches you prefer:
- The Academic Earth site has several related classes, including an introductory one. They include additional non-video material.
- 2012 Game Theory online Stanford class (one of the many interesting classes inspired by "Introduction to AI")
I will keep this list updated and add any quality recommendations proposed by fellow LWers.
Unfortunately for those wanting just an introduction and the most basic approach, many of these are longer and more in-depth (which is fortunate for those wanting a bit more). So if you just watch, comprehend and learn to use the information presented in the first lecture or two of one of these recommendations, you will have done as much as, or more than, someone who completed Units 13 and 14. If you don't like the video format in general and learn better from written material or live interaction... well, this is mostly the wrong article for you. But I do present some additional non-video material in the next section that you may find useful.
I watched the lectures and I think I understood them, where do I go from here?
Cool! Well, check out some of the alternative videos and classes listed above; most of them are quite extensive. Try to complete one! If you'd like to take one, ask around in the comment section; maybe enough people would be interested to start a study group. Also, MIT OpenCourseWare has some material you may be interested in even if you don't feel like doing the full classes.
A good AI textbook might be something you would like to explore. LessWrong has a great article with recommendations for a variety of textbooks on several interesting subjects (all recommendations must be made by people who've read at least two other titles on the subject)... but none for game theory. :/
In the thread Bgesop requested a recommendation:
Unfortunately the plea went unanswered. I'd love to simply recommend the textbook I first learned the subject from, but most readers here are English speakers, so that's a no-go, and I'm not familiar with many of the English-language alternatives. I did skim Game Theory, 2nd edition, by Guillermo Owen, and it seemed OK. Hopefully pointing this out will prompt someone to come up with a good recommendation; when they do, I'll update this post accordingly, and lukeprog's great list can get another good textbook.
Say you're taking your car to an auto mechanic for repairs. You've been told he's the best mechanic in town. The mechanic rolls up the steel garage door before driving the car into the garage, and you look inside and notice something funny. There are no tools. The garage is bare - just an empty concrete space with four bay doors and three other cars.
You point this out to the mechanic. He shrugs it off, saying, "This is how I've always worked. I'm just that good. You were lucky I had an opening; I'm usually booked." And you believe him, having seen the parking lot full of cars waiting to be repaired.
You take your car to another mechanic in the same town. He, too, has no tools in his garage. You visit all the mechanics in town, and find a few that have some wrenches, and others with a jack or an air compressor, but no one with a full set of tools.
You notice the streets are nearly empty besides your car. Most of the cars in town seem to be in for repairs. You talk to the townsfolk, and they tell you how they take their cars from one shop to another, hoping to someday find the mechanic who is brilliant and gifted enough to fix their car.
I sometimes tell people how I believe that governments should not be documents, but semi-autonomous computer programs. I have a story that I'm not going to tell now, about incorporating inequalities into laws, then incorporating functions into them, then feedback loops, then statistical measures, then learning mechanisms, on up to the point where voters and/or legislatures set only the values that control the system, and the system produces the low-level laws and policy decisions (in a way that balances exploration and exploitation). (Robin's futarchy in which you "vote on values, bet on beliefs" describes a similar, though less-automated system of government.)
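As a toy illustration of the "feedback loops" rung on that ladder, here is a hypothetical sketch: voters set only a target value, and the "statute" is a controller that nudges a policy lever toward it each period. Every name and number here is invented for illustration; this is not a real policy mechanism or a claim about economics:

```python
# Toy "law as feedback loop": voters choose a target unemployment rate;
# the statute is a proportional controller that adjusts a stimulus
# parameter each period, instead of waiting for a new bill to pass.
# All quantities and dynamics below are made up for illustration.
def update_stimulus(stimulus, observed, target, gain=0.5):
    """One legislative 'tick': move the lever in proportion to error."""
    error = observed - target
    return stimulus + gain * error

stimulus = 0.0
observed = 9.0   # percent, a made-up starting point
target = 5.0     # the value voters set
for _ in range(10):
    stimulus = update_stimulus(stimulus, observed, target)
    # a made-up, linear response of the economy to the stimulus:
    observed = 9.0 - 0.4 * stimulus
```

With these made-up coefficients the loop converges geometrically toward the voters' target; a real system would of course need the exploration/exploitation balancing the post mentions, since the response function is not known in advance.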
And one reaction - actually, one of the most intelligent reactions - is, "But then... legislators would have to understand something about math." As if that were a bug, and not a feature.
Followup to: Fake Explanations
When I was young, I read popular physics books such as Richard Feynman's QED: The Strange Theory of Light and Matter. I knew that light was waves, sound was waves, matter was waves. I took pride in my scientific literacy, when I was nine years old.
When I was older, and I began to read the Feynman Lectures on Physics, I ran across a gem called "the wave equation". I could follow the equation's derivation, but, looking back, I couldn't see its truth at a glance. So I thought about the wave equation for three days, on and off, until I saw that it was embarrassingly obvious. And when I finally understood, I realized that the whole time I had accepted the honest assurance of physicists that light was waves, sound was waves, matter was waves, I had not had the vaguest idea of what the word "wave" meant to a physicist.
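For reference, the equation in question is the one-dimensional wave equation, which says that the acceleration of the displacement at each point is proportional to the curvature of the displacement there:

```latex
\frac{\partial^2 u}{\partial t^2} = v^2 \, \frac{\partial^2 u}{\partial x^2},
\qquad \text{satisfied by any } u(x,t) = f(x - vt) + g(x + vt)
```

That any shape sliding left or right at speed v solves it is the "embarrassingly obvious" fact hiding behind the word "wave".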
There is an instinctive tendency to think that if a physicist says "light is made of waves", and the teacher says "What is light made of?", and the student says "Waves!", the student has made a true statement. That's only fair, right? We accept "waves" as a correct answer from the physicist; wouldn't it be unfair to reject it from the student? Surely, the answer "Waves!" is either true or false, right?