Recently, Portland Lesswrong played a game that was a perfect trifecta of: difficult mental exercise; fun; and an opportunity to learn about biases and recognize them in yourself and others. We're still perfecting it, and we'd welcome feedback, especially from people who try it.
The Short Version
The game combines Pandemic, a cognitively demanding cooperative board game, with the idea of roleplaying cognitive biases. In our favorite way of playing it (so far), everyone selects a bias at random and then attempts to exaggerate that bias in their arguments and decisions during the game. Everyone attempts to identify the biases of the other players, and when a bias is guessed, the guessed player selects a new bias and begins again.
This article argues that Ace Attorney is possibly the first rationalist game in the lesswrongian sense, or at least a remarkable proto-example; that it subliminally works to raise the sanity waterline in the general population; and that it might provide a template for future works that aim to achieve a similar effect.
The Ace Attorney series of games for the Nintendo DS console puts you in the shoes of Phoenix Wright, an attorney who, in the vein of Perry Mason, takes on difficult cases to defend his clients. The games' judicial system is heavily inspired by that of Japan: the odds are so stacked against the defense that it is practically a kangaroo court, where your clients are guilty until proven innocent.
For those unfamiliar with the game, and those who want to explore its "social criticism" aspect, I wholeheartedly recommend this most excellent article from The Escapist. Now that that's out of the way, we can move on to what makes this relevant for Less Wrong. What makes this game uniquely interesting from a rationalist POV is that its entire game mechanics are based on:
- gathering material evidence
- finding the factual contradictions in the witnesses' testimonies
- using the evidence to bust the lies open and force the truth out
You may have heard about IARPA's Sirius Program, a proposal to develop serious games that would teach intelligence analysts to recognize and correct their cognitive biases. The intelligence community has a long history of interest in debiasing, and even produced a rationality handbook based on internal CIA publications from the '70s and '80s. Creating games that systematically improve our thinking skills has enormous potential, and I would highly encourage the LW community to consider this as a way to encourage rationality more broadly.
While developing these particular games will require thought and programming, the proposal did inspire the NYC LW community to play a game of our own. Using a list of cognitive biases, we broke up into groups of no more than four, and spent five minutes discussing each bias with regard to three questions:
- How do we recognize it?
- How do we correct it?
- How do we use its existence to help us win?
The Sirius Program specifically targets Confirmation Bias, Fundamental Attribution Error, Bias Blind Spot, Anchoring Bias, Representativeness Bias, and Projection Bias. To this list, I also decided to add the Planning Fallacy, the Availability Heuristic, Hindsight Bias, the Halo Effect, Confabulation, and the Overconfidence Effect. We did this Pomodoro style: six rounds of five minutes, a quick break, another six rounds, then a longer break followed by a group discussion of the exercise.
Results of this exercise are posted below the fold. I encourage you to try the exercise for yourself before looking at our answers.
Game theory. You've studied the posts, you've laughed at the comics, you've heard the music. But the best way to make it Truly Part Of You is to play a genuine game, and I have yet to find one more effective than Diplomacy.
Diplomacy is a board game for seven people played on a map of WWI Europe. The goal is to capture as many strategic provinces ("supply centers") as possible; eighteen are needed to win. But each player's country starts off with the same-sized army, and there is no luck or opportunity for especially clever tactics. The most common way to defeat an enemy is to form coalitions with other players. But your enemies will also be trying to form coalitions, and the most profitable move is often to be a "double agent", stringing both countries along as long as you can. All game moves are written in secret and revealed at the same time, and there are no enforcement mechanisms, so alliances, despite their central importance, aren't always worth the paper they're printed on.
The conditions of Diplomacy - competition for scarce resources, rational self-interested actors, importance of coalitions, lack of external enforcement mechanisms - mirror the conditions of game theoretic situations like the Prisoner's Dilemma (and the conditions of most of human evolution!) and so make a surprisingly powerful laboratory for analyzing concepts like trust, friendship, government, and even religion.
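The analogy to the Prisoner's Dilemma can be made concrete. Here is a minimal sketch (the payoff values are the standard illustrative ones, not anything from Diplomacy itself) of two "allies" each choosing to keep or break an alliance:

```python
# One-shot Prisoner's Dilemma with the standard payoff ordering
# T > R > P > S: mutual cooperation beats mutual defection, but each
# player is individually tempted to defect.

PAYOFFS = {
    ("C", "C"): (3, 3),  # Reward: the alliance holds
    ("C", "D"): (0, 5),  # Sucker vs. Temptation: you get stabbed
    ("D", "C"): (5, 0),  # Temptation vs. Sucker: you stab first
    ("D", "D"): (1, 1),  # Punishment: the alliance collapses
}

def play(move_a, move_b):
    """Return the (player A, player B) payoffs for one round."""
    return PAYOFFS[(move_a, move_b)]

# Defection dominates in a single round...
assert play("D", "C")[0] > play("C", "C")[0]
# ...yet mutual cooperation beats mutual defection:
assert play("C", "C")[0] > play("D", "D")[0]
```

The lack of external enforcement is exactly what makes the game-theoretic structure bite: with binding contracts, the dilemma disappears.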
Over the past few months, I've played two online games of Diplomacy. One I won through a particularly interesting method; the other I lost quite badly, but with an unusual consolation. This post is based on notes I took during the games about relevant game theoretic situations. You don't need to know the rules of Diplomacy to understand the post, but if you want a look you can find them here.
Earlier today I had an idea for a meta-game a group of people could play. It’d be ideal if you lived in an intentional community, or were at university with a games society, or somewhere with regular Less Wrong Meetups.
Each time, you would find a new game. Each of you would then study the rules for half an hour and strategise, and then you'd play it, once. Afterwards, compare thoughts on strategies and meta-strategies. If you haven't played Imperialism, try that. If you've never tried out Martin Gardner's games, try them. If you've never played Phutball, give it a go.
It should help teach us to understand new situations quickly, look for workable exploits, accurately model other people, and compute Nash equilibria. Obviously, be careful not to end up just spending your life playing games; the aim isn't to become good at playing games, it's to become good at learning to play games - hopefully including the great game of life.
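For small games, computing a Nash equilibrium needn't be abstract. Here is a minimal sketch (illustrative only; it ignores mixed-strategy equilibria) that brute-forces pure-strategy equilibria in a two-player game given as payoff matrices:

```python
# Brute-force search for pure-strategy Nash equilibria: cells where
# neither player can improve their payoff by unilaterally deviating.

def pure_nash(payoff_a, payoff_b):
    """Return (row, col) cells that are pure-strategy Nash equilibria."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for r in range(rows):
        for c in range(cols):
            # Row player can't do better by switching rows...
            best_row = all(payoff_a[r][c] >= payoff_a[r2][c] for r2 in range(rows))
            # ...and column player can't do better by switching columns.
            best_col = all(payoff_b[r][c] >= payoff_b[r][c2] for c2 in range(cols))
            if best_row and best_col:
                equilibria.append((r, c))
    return equilibria

# Prisoner's Dilemma payoffs: strategies are (Cooperate, Defect).
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs
# The only pure equilibrium is mutual defection, cell (1, 1).
```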
However, it's important that no-one in the group knows the rules beforehand, which makes finding the new games a little harder. On the plus side, it doesn't matter whether the games are well-balanced: if the world is mad, we should be looking for exploits in real life.
It would be really helpful if people who know of good games gave suggestions: a name, possibly some formal specifications (number of players, average length of a game), and some way of accessing the rules. If you only have the rules in a text file, please rot13 them, and likewise for any discussion of strategy.
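For anyone unfamiliar with it, rot13 is a trivial letter-substitution cipher that is its own inverse, which makes it handy for hiding spoilers. A minimal sketch using Python's standard library:

```python
import codecs

def rot13(text: str) -> str:
    """ROT13-encode (and decode -- the cipher is its own inverse) text."""
    return codecs.encode(text, "rot_13")

spoiler = rot13("The winning opening is to take the center.")
# Applying rot13 a second time recovers the original:
assert rot13(spoiler) == "The winning opening is to take the center."
```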
Fun is, on the whole, a pretty amorphous concept, and reasoning about it is tricky; however, there are some routes of enquiry.
My personal understanding of fun comes from the experience of programming gameplay mechanics (such as character control, AI and minigames) and through designing and pitching games professionally. This has led me to create a number of theories about why games (and other forms of entertainment) are fun.
These ideas are built on my experiences of adjusting games and game pitches to make them more enjoyable. On the whole this sense of enjoyment is based on the opinions of those making (or paying for the development of) the games (where groupthink is a problem). However, many of the games and their pitches have been evaluated by focus groups and gameplay recordings, performed in relatively controlled settings. Additional information comes from finding patterns in sales figures and other representations of what people enjoy (e.g. the types of magazine available for sale in newsagents).
From these experiences I've attempted to find simple theoretical justifications for the behaviour I've observed. In some cases, these theories have some validation from external research, but on the whole they are not experimentally validated. I take the view that it is better to have some sort of theory than nothing, and indeed these theories have been very useful in guiding my work.
These ideas will focus on the fun (or, more generally, the source of motivation) provided by computer games; however, I believe many generalizations can be drawn from them.
Here’s a game that you can play for real against a human opponent. If the administrators don’t mind, you can play it right here in the comments.
The game is called Pract.
I've written a program which tests positive bias using Wason's procedure from "On the failure to eliminate hypotheses in a conceptual task" (Quarterly Journal of Experimental Psychology, 12: 129-140, 1960). If the user does not discover the correct rule, the program attempts to guess, based on the user's input, what rule the user did find, and explains the existence of the more general rule. The program then directs the user here.
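The original program is written in C++, but its core logic can be sketched in a few lines. In Wason's (1960) procedure the hidden rule is deliberately broad - simply "three numbers in ascending order" - while subjects tend to settle on a narrower rule by testing only confirming triplets. (The specific rules used by the actual program may differ; these are illustrative.)

```python
# Sketch of the 2-4-6 task's rule structure: a broad hidden rule versus
# a narrow hypothesis that subjects confirm but rarely try to falsify.

def fits_rule(a, b, c):
    """The experimenter's (deliberately broad) hidden rule."""
    return a < b < c

def fits_common_wrong_guess(a, b, c):
    """A narrower rule many subjects converge on: 'increases by 2'."""
    return b - a == 2 and c - b == 2

# Every triplet matching the narrow guess also matches the true rule...
assert fits_rule(2, 4, 6) and fits_common_wrong_guess(2, 4, 6)
# ...so only a triplet that *breaks* the narrow guess is informative:
assert fits_rule(1, 5, 20) and not fits_common_wrong_guess(1, 5, 20)
```

Positive bias is precisely the failure to generate triplets like (1, 5, 20): tests designed to falsify, rather than confirm, one's current hypothesis.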
I'd like to use a better set of triplets, and perhaps include more wrong rules. The program should be fairly flexible in this way.
I'd also like to set up a web-based front-end to the program, but I do not currently know any CGI.
I'm not completely happy with the program's textual output. It still feels a bit like the program is scolding the user at the end. Not quite sure how to fix this.
ETA: Here is a Macintosh executable version of the program. I do not have any means to make an .exe file, but if anyone does, I can host it.
If you're on Linux, I'm just going to assume you know what to do with a .cpp file =P
Here is a sample run of the program (if you're unfamiliar with positive bias or the Wason test, I'd really encourage you to try it yourself before reading):