Musings on the LSAT: "Reasoning Training" and Neuroplasticity
The purpose of this post is to provide basic information about the LSAT, including the format of the test and a few sample questions. I also want to highlight some research finding that LSAT preparation alters brain structure in ways that strengthen hypothesized "reasoning pathways". These studies haven't been discussed here before; I found them interesting and really just wanted to call your collective attention to them.
I really like taking tests; I get energized by intense race-against-the-clock problem solving and, for better or worse, I relish getting to see my standing relative to others when the dust settles. I like the purity of the testing situation -- conditions are standardized in theory and more or less the same for all comers. This guilty pleasure has played no small part in the course my life has taken: I worked as a test prep tutor for 3 years and loved every minute of it, I met my wife through academic competitions in high school, and I am currently a graduate student doing lots of coursework in psychometrics.
Well, my brother-in-law is a lawyer, and when we chat the LSAT has served as some conversational common ground. Since I like taking tests for fun, he suggested I give it a whirl; he thought it was interesting and felt it was a fair assessment of one's logical reasoning ability. So I did: I took a practice test cold a couple Saturdays ago and I was very impressed. Here's the one I took. (This is a full practice exam provided by the test-makers; it's also about the top Google result for "LSAT practice test".) I wanted to post here about it because the LSAT hasn't been discussed very much on this site and I thought that some of you might find it useful to know about.
A brief run-down of the LSAT:
The test has four parts: two Logical Reasoning sections, a Critical Reading section (akin to the SAT et al.), and an Analytical Reasoning, or "logic games", section. When people talk about the LSAT, the logic games usually get emphasized because they are unusual and can be pretty challenging (the only questions I missed were of this type; I missed a few and ran out of time). Essentially, you get a premise and a bunch of conditions from which you are required to draw conclusions. Here's an example:
A cruise line is scheduling seven week-long voyages for the ship Freedom.
Each voyage will occur in exactly one of the first seven weeks of the season: weeks 1 through 7.
Each voyage will be to exactly one of four destinations: Guadeloupe, Jamaica, Martinique, or Trinidad.
Each destination will be scheduled for at least one of the weeks.
The following conditions apply:
Jamaica will not be its destination in week 4.
Trinidad will be its destination in week 7.
Freedom will make exactly two voyages to Martinique, and at least one voyage to Guadeloupe will occur in some week between those two voyages.
Guadeloupe will be its destination in the week preceding any voyage it makes to Jamaica.
No destination will be scheduled for consecutive weeks.
11. Which of the following is an acceptable schedule of destinations in order from week 1 through week 7?
(A) Guadeloupe, Jamaica, Martinique, Trinidad, Guadeloupe, Martinique, Trinidad
(B) Guadeloupe, Martinique, Trinidad, Martinique, Guadeloupe, Jamaica, Trinidad
(C) Jamaica, Martinique, Guadeloupe, Martinique, Guadeloupe, Jamaica, Trinidad
(D) Martinique, Trinidad, Guadeloupe, Jamaica, Martinique, Guadeloupe, Trinidad
(E) Martinique, Trinidad, Guadeloupe, Trinidad, Guadeloupe, Jamaica, Martinique
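Games like this one are mechanical enough that you can brute-force them. Here's a sketch of a checker for the question above, just to make the conditions concrete (the encoding and function names are my own, not from any official solution):

```python
# Hypothetical checker for the cruise-schedule game above; encodes each of the
# stated conditions and tests the five answer choices against them.

def is_valid(schedule):
    """schedule: 7-letter string, weeks 1-7, e.g. 'GJMTGMT'."""
    G, J, M, T = "GJMT"
    if set(schedule) != {G, J, M, T}:               # every destination scheduled at least once
        return False
    if schedule[3] == J:                            # no Jamaica in week 4
        return False
    if schedule[6] != T:                            # Trinidad in week 7
        return False
    if schedule.count(M) != 2:                      # exactly two Martinique voyages
        return False
    first_m, last_m = schedule.index(M), schedule.rindex(M)
    if G not in schedule[first_m + 1:last_m]:       # a Guadeloupe between the two Martiniques
        return False
    for week, dest in enumerate(schedule):          # Guadeloupe precedes every Jamaica voyage
        if dest == J and (week == 0 or schedule[week - 1] != G):
            return False
    if any(a == b for a, b in zip(schedule, schedule[1:])):  # no destination in consecutive weeks
        return False
    return True

choices = {
    "A": "GJMTGMT",
    "B": "GMTMGJT",
    "C": "JMGMGJT",
    "D": "MTGJMGT",
    "E": "MTGTGJM",
}
print([label for label, s in choices.items() if is_valid(s)])  # → ['A']
```

Of course, the test rewards doing this in your head (or on scratch paper) in about a minute, which is exactly where the working-memory burden comes in.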
Clearly, this section places a huge burden on working memory and is probably the most g-loaded of the four. I'd guess that most LSAT test prep teaches strategies for dumping this burden into some kind of written scheme that makes it all more manageable. But I just wanted to show you the logic games for completeness; what really excited me were the Logical Reasoning questions (sections II and III). You are presented with a scenario containing a claim, an argument, or a set of facts, and then asked to analyze it, critique it, or draw correct conclusions from it. Here are most of the question stems used in these sections:
Which one of the following most accurately expresses the main conclusion of the economist’s argument?
Which one of the following uses flawed reasoning that most closely resembles the flawed reasoning in the argument?
Which one of the following most logically completes the argument?
The reasoning in the consumer’s argument is most vulnerable to criticism on the grounds that the argument...
The argument’s conclusion follows logically if which one of the following is assumed?
Which one of the following is an assumption required by the argument?
Heyo! This is exactly the kind of stuff I would like to become better at! Most of the questions were pretty straightforward, but the LSAT is known to be a tough test (score range: 120-180, 95th %ile: ~167, 99th %ile: ~172) and these practice questions probably aren't representative. What a cool test though! Here's a whole question from this section, superficially about utilitarianism:
3. Philosopher: An action is morally right if it would be reasonably expected to increase the aggregate well-being of the people affected by it. An action is morally wrong if and only if it would be reasonably expected to reduce the aggregate well-being of the people affected by it. Thus, actions that would be reasonably expected to leave unchanged the aggregate well-being of the people affected by them are also right.
The philosopher’s conclusion follows logically if which one of the following is assumed?
(A) Only wrong actions would be reasonably expected to reduce the aggregate well-being of the people affected by them.
(B) No action is both right and wrong.
(C) Any action that is not morally wrong is morally right.
(D) There are actions that would be reasonably expected to leave unchanged the aggregate well-being of the people affected by them.
(E) Only right actions have good consequences.
Also, the LSAT is a good test in that it measures well one's ability to succeed in law school. Validity studies boast that “LSAT score alone continues to be a better predictor of law school performance than UGPA [undergraduate GPA] alone.” Of course, the outcome variable can be regressed on both predictors to account for more of the variance than either one taken singly, but it is uncommon for a standardized test to beat prior GPA in predicting a student's future GPA.
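To make the "both predictors beat either alone" point concrete, here's a toy illustration on synthetic data (the coefficients and noise levels are made up, not taken from any validity study); in-sample, the R² of the combined regression can never fall below that of either predictor alone:

```python
# Illustrative only: synthetic data showing that regressing an outcome on two
# correlated predictors explains at least as much variance (R^2) as either alone.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
lsat = rng.normal(size=n)
ugpa = 0.3 * lsat + rng.normal(size=n)                  # predictors mildly correlated
fygpa = 0.4 * lsat + 0.25 * ugpa + rng.normal(size=n)   # stand-in for first-year law GPA

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])            # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_lsat = r_squared(lsat, fygpa)
r2_ugpa = r_squared(ugpa, fygpa)
r2_both = r_squared(np.column_stack([lsat, ugpa]), fygpa)
print(f"LSAT alone: {r2_lsat:.3f}, UGPA alone: {r2_ugpa:.3f}, both: {r2_both:.3f}")
```

The interesting empirical question is how much *incremental* validity the second predictor adds, which is what the validity studies quantify.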
Intensive LSAT preparation and neuroplasticity:
In two recent studies (same research team), learning to reason in the logically formal way required by the LSAT was found to alter brain structure in ways consistent with literature reviews of the neural correlates of logical reasoning. Note: my reading of these articles was pretty surface-level; I do not intend to provide a thorough review, only to bring them to your attention.
These researchers recruited pre-law students enrolled in an LSAT course and imaged their brains at rest using fMRI both before and after 3 months of this "reasoning training". As controls, they included age- and IQ-matched pre-law students intending to take the LSAT in the future but not actively preparing for it.
The LSAT-prep group was found to have significantly increased connectivity between parietal and prefrontal cortices and the striatum, both within the left hemisphere and across hemispheres. In the first study, the authors note that
These experience-dependent changes fall into tracts that would be predicted by prior work showing that reasoning relies on an interhemispheric frontoparietal network (for review, see Prado et al., 2011). Our findings are also consistent with the view that reasoning is largely left-hemisphere dominant (e.g., Krawczyk, 2012), but that homologous cortex in the right hemisphere can be recruited as needed to support complex reasoning. Perhaps learning to reason more efficiently involves recruiting compensatory neural circuitry more consistently.
And in the second study, they conclude
An analysis of pairwise correlations between brain regions implicated in reasoning showed that fronto-parietal connections were strengthened, along with parietal-striatal connections. These findings provide strong evidence for neural plasticity at the level of large-scale networks supporting high-level cognition.
I think this hypothesized fronto-parietal reasoning network is supposed to go something like this:
The LSAT requires a lot of relational reasoning, the ability to compare and combine mental representations. The parietal cortex holds individual relationships between these mental representations (A->B, B->C), and the prefrontal cortex integrates this information to draw conclusions (A->B->C, therefore A->C). The striatum's role in this network would be to monitor the success/failure of reward predictions and encourage flexible problem solving. Unfortunately, my understanding here is very limited. Here are several reviews of this reasoning network stuff (I have not read any; just wanted to share them): Hampshire et al. (2011), Prado et al. (2011), Krawczyk (2012).
I hope this was useful information! According to the 2013 survey, only 2.2% of you are in law-related professions, but I was wondering (1) whether anyone has personal experience studying for this exam, (2) whether you felt it improved your logical reasoning skills, and (3) whether you felt those effects were long-lasting. Studying for this test seems to have the potential to inculcate rationalist habits of mind; I know it's just self-report, but for those who went on to law school, did you feel you benefited from the experience of studying for the LSAT? I only ask because the Law School Admission Council, a non-profit organization made up of 200+ law schools, seems to actively encourage preparation for the exam; member schools say it is a major factor in admissions; preparation tends to increase performance; and LSAT performance correlates moderately-to-strongly with first-year law school GPA (r ≈ 0.4).
Others' predictions of your performance are usually more accurate
Sorry if the positive illusions are old hat, but I searched and couldn't find any mention of this peer prediction stuff! If nothing else, I think the findings provide a quick heuristic for getting more reliable predictions of your future behavior - just poll a nearby friend!
Peer predictions are often superior to self-predictions. When predicting their own future outcomes, people tend to give far too much weight to their intentions, goals, plans, desires, etc., and far too little consideration to the way things have turned out for them in the past. As Henry Wadsworth Longfellow observed,
"We judge ourselves by what we feel capable of doing, while others judge us by what we have already done"
...and we are way less accurate for it! A recent study by Helzer and Dunning (2012) had Cornell undergraduates each predict their next exam grade, and also had an anonymous peer predict it based solely on their score on the previous exam. Despite the peer having such limited information (while the subjects presumably have perfect information about themselves), the peer predictions were much more accurate predictors of subjects' actual exam scores.
In another part of the study, participants were paired up (remotely, anonymously) and rewarded for accurately predicting each other's scores. Each was allowed to give just one piece of information to help their partner predict their score, and to request just one piece of information from their partner to aid in predicting the partner's score. Across the board, participants would give information about their "aspiration level" (their own ideal "target" score) to the peer predicting them, but were far less likely to ask for that information when predicting a peer; overwhelmingly, they asked instead for information about the partner's past behavior (i.e., their score on the previous exam), finding this more indicative of future performance. The authors note,
There are many reasons to use past behavior as an indicator of future action and achievement. The overarching reason is that past behavior is a product of a number of causal variables that sum up to produce it—and that suite of causal variables in the same proportion is likely to be in play for any future behavior in a similar context.
They go on to say, rather poetically I think, that they have observed "the triumph of hope over experience." People situate their representations of self more in what they strive to be rather than in who they have already been (or indeed, who they are), whereas they represent others more in terms of typical or average behavior (Williams, Gilovich, & Dunning, 2012).
I found a figure from another interesting article (Kruger & Dunning, 1999) that illustrates this "better than average effect" rather well. Depicted below is a graph summarizing the results of study #3 (perceived grammar ability and test performance as a function of actual test performance):

Along the abscissa, you've got reality: the quartiles represent scores on a test of grammatical ability. The vertical axis, with decile ticks, corresponds to the same people's self-predicted ability and test scores. Curiously, while no one is ready to admit mediocrity, neither is anyone readily forecasting perfection; the clear sweet spot is 65-70%. Those in the third quartile seem most accurate in their estimations, while those in the highest quartile often sold themselves short, underpredicting their actual achievement on average. Notice too that the widest reality/prediction gap is for those in the lowest quartile.
Minerva Project: the future of higher education?
Right now, the inaugural class of Minerva Schools at KGI (part of the Claremont Colleges) is finishing up its first semester of college. I use the word "college" here loosely: there are no lecture halls, no libraries, no fraternities, no old stone buildings, no sports fields, no tenure... Furthermore, Minerva operates for profit (which may raise eyebrows) but appeals to a decidedly different demographic than DeVry et al.; billed as the first "online Ivy", it relies on a proprietary online platform to apply pedagogical best practices. Has anyone heard of this before?
The Minerva Project's instructional innovations are what's really exciting. There are no lectures. There are no introductory classes. (There are MOOCs for that! "Do your freshman year at home.") Students meet for seminar-based online classes designed to inculcate "habits of mind"; professors teach over a live, interactive video platform that tracks students' progress and can individualize instruction. The seminars are active and intense; to quote from a recent (Sept. 2014) Atlantic article,
"The subject of the class ...was inductive reasoning. [The professor] began by polling us on our understanding of the reading, a Nature article about the sudden depletion of North Atlantic cod in the early 1990s. He asked us which of four possible interpretations of the article was the most accurate. In an ordinary undergraduate seminar, this might have been an occasion for timid silence... But the Minerva class extended no refuge for the timid, nor privilege for the garrulous. Within seconds, every student had to provide an answer, and [the professor] displayed our choices so that we could be called upon to defend them. [The professor] led the class like a benevolent dictator, subjecting us to pop quizzes, cold calls, and pedagogical tactics that during an in-the-flesh seminar would have taken precious minutes of class time to arrange."
It sounds to me like Minerva is actually making a solid effort to apply evidence-based instructional techniques that are rarely given a chance. There are boatloads of sound, reproducible experiments telling us how people learn and what teachers can do to improve learning, but in practice they are almost wholly ignored. To take just one example, spaced repetition and the testing effect are built into the seminar platform: students get a pop quiz at the beginning of each class and another at a random moment later in the class. Terrific! And since it's all computer-based, the software can keep track of student responses and re-present the material at optimal intervals.
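For readers unfamiliar with the mechanics: the core of any spaced-repetition scheme is an expanding review interval that resets on a miss. Here's a minimal Leitner-style sketch of that idea (this is a generic illustration, *not* Minerva's proprietary algorithm; the interval values are arbitrary):

```python
# Minimal Leitner-style spaced-repetition sketch: items answered correctly are
# re-asked at expanding intervals; misses drop the item back to frequent review.

INTERVALS = [1, 2, 4, 8, 16]  # sessions until next review, indexed by "box"

def review(box, correct):
    """Return (new_box, sessions_until_next_review) after one quiz attempt."""
    if correct:
        box = min(box + 1, len(INTERVALS) - 1)   # promote: longer gap before next review
    else:
        box = 0                                  # miss: back to the most frequent box
    return box, INTERVALS[box]

box = 0
for answer in [True, True, False, True]:
    box, gap = review(box, answer)
    print(f"next review in {gap} session(s)")    # gaps: 2, 4, 1, 2
```

The pop-quiz-at-start plus pop-quiz-later pattern the article describes is the same principle applied within a single class session.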
Also, much more emphasis is put on articulating positions and defending arguments, which is known to result in deeper processing of material. In general, though, I really like how you are called out and held to account for your answers (again, from the Atlantic article):
...it was exhausting: a continuous period of forced engagement, with no relief in the form of time when my attention could flag or I could doodle in a notebook undetected. Instead, my focus was directed relentlessly by the platform, and because it looked like my professor and fellow edu-nauts were staring at me, I was reluctant to ever let my gaze stray from the screen... I felt my attention snapped back to the narrow issue at hand, because I had to answer a quiz question or articulate a position. I was forced, in effect, to learn.
Their approach to admissions is also interesting. The Founding Class had a 2.8% acceptance rate (a ton were enticed to apply on the promise of a full scholarship) and features students from ~14 countries. In the application process, no consideration is given to diversity, balance of gender, or national origin, and SAT/ACT scores are not accepted: applicants must complete a battery of proprietary computer-based quizzes, essentially an in-house IQ test. If they perform well enough, they are invited for an interview, during which they must compose a short essay to ensure an authentic writing sample (i.e., no ghostwriters). After all is said and done, the top 30 applicants get in.
Anyway, I am a student and researcher in the field of educational psychology so this may not be as exciting to others. I'm surprised that I hadn't heard of it before though, and I'm really curious to see what comes of it!