The Scylla of Error and the Charybdis of Paralysis

14 Johnicholas 26 September 2009 04:01PM

We're interested in improving human rationality. Many of our techniques for improving human rationality take time. In real-time situations, you can lose by making the wrong decision, or by making the "right" decision too slowly. Most of us do not have inflexible-schedule, high-stakes decisions to make, though. How often does real-time decision making really come up?

Suppose you are making a fairly long-ranged decision. Call this decision 1. While analyzing decision 1, you come to a natural pause. At this pause you need to decide whether to analyze further, or to act on your best-so-far analysis. Call this decision 2. Note that decision 2 is made under tighter time pressure than decision 1. This scenario argues that decision-making is recursive, and so if there are any time bounds, then many decisions will need to be made at very tight time bounds.

A second, "covert" goal of this post is to provide a definitely-not-paradoxical problem for people to practice their Bayesian reasoning on. Here is a concrete model of real-time decision-making, motivated by medical-drama television shows, in which the team diagnoses and treats a patient over the course of each episode. Diagnosing and treating a patient who is dying of an unknown disease is a colorful example of real-time decision-making.

To play this game, you need a coin, two six-sided dice, a deck of cards, and a helper to manipulate these objects. The manipulator sets up the game by flipping a coin. If heads (tails), the patient is suffering from an exotic fungus (allergy). Then the manipulator prepares a deck by removing all of the clubs (diamonds), so that the deck is a red-biased (black-biased) random-color generator. Finally, the manipulator determines the patient's starting health by rolling the dice and summing them. All of this is done secretly.

Play proceeds in turns. At the beginning of each turn, the manipulator flips a coin to determine whether test results are available. If test results are available, the manipulator draws a card from the deck and reports its color. A red (black) card gives you suggestive evidence that the patient is suffering from a fungus (allergy). You choose whether to treat for a fungus, treat for an allergy, or wait. If you treat correctly, the manipulator leaves the patient's health where it is (they're improving, but on a longer timescale). If you wait, the manipulator reduces the patient's health by one. If you treat incorrectly, the manipulator reduces the patient's health by two.
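Because cards are drawn without replacement, the exact strength of the evidence depends on how many cards have already been seen. As a sketch (the function name is my own), the posterior probability of fungus given the observed colors can be computed with falling factorials:

```python
from math import perm

def posterior_fungus(reds, blacks, prior=0.5):
    """P(fungus | observed colors), drawing without replacement.

    Fungus deck: 26 red / 13 black (clubs removed).
    Allergy deck: 13 red / 26 black (diamonds removed).
    """
    def seq_prob(n_red, n_black):
        # Probability of one particular sequence containing `reds` red
        # and `blacks` black cards, drawn from the given deck.
        return (perm(n_red, reds) * perm(n_black, blacks)
                / perm(n_red + n_black, reds + blacks))

    p_f = seq_prob(26, 13) * prior
    p_a = seq_prob(13, 26) * (1 - prior)
    return p_f / (p_f + p_a)
```

A single red card, for instance, moves the posterior for fungus from 1/2 to 2/3.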

Play ends when you treat the patient for the same disease for six consecutive turns or when the patient reaches zero health.

Here is some Python code simulating a simplistic strategy. What Bayesian strategy yields the best results? Is there a concise description of this strategy? 
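The original code isn't reproduced here, but a minimal sketch of the game, with a naive majority-color strategy, might look like the following (all names are my own):

```python
import random

def play_game(strategy, rng):
    """Play one game; return True if the patient survives.

    `strategy` maps the list of card colors seen so far
    to 'fungus', 'allergy', or 'wait'.
    """
    disease = rng.choice(['fungus', 'allergy'])
    # Fungus: clubs removed (26 red / 13 black); allergy: diamonds removed.
    deck = (['red'] * 26 + ['black'] * 13 if disease == 'fungus'
            else ['red'] * 13 + ['black'] * 26)
    rng.shuffle(deck)
    health = rng.randint(1, 6) + rng.randint(1, 6)  # sum of two dice
    history, streak, last = [], 0, None
    while health > 0 and streak < 6:
        if rng.random() < 0.5 and deck:   # coin flip: test results available?
            history.append(deck.pop())
        action = strategy(history)
        if action == 'wait':
            health -= 1
            streak, last = 0, None        # waiting breaks the treatment streak
        else:
            if action != disease:
                health -= 2               # wrong treatment
            streak = streak + 1 if action == last else 1
            last = action
    return health > 0

def majority_color(history):
    """Simplistic strategy: treat for whichever disease the
    majority of observed cards suggests; otherwise wait."""
    reds = history.count('red')
    blacks = len(history) - reds
    if reds > blacks:
        return 'fungus'
    if blacks > reds:
        return 'allergy'
    return 'wait'
```

Running many games with `play_game(majority_color, rng)` gives a baseline survival rate that a genuinely Bayesian strategy should beat.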

The model can be made more complicated. The space of possible actions is small. There is no choice of what to investigate next. In the real world, there are likely to be diminishing returns to further tests or further analysis. There could be uncertainty about how much time pressure there is. There could be uncertainty about how much information future tests will reveal. Every complication will make the task of computing the best strategy more difficult.

We need fast approximations to rationality (even quite bad approximations, if they're fast enough), as well as procedures that spend time in order to purchase a better result.

How to use SMILE to solve Bayes Nets

11 Johnicholas 20 September 2009 12:08PM

This is an account of downloading and using SMILE, a free-as-in-beer-but-not-open-source Bayes net library. SMILE powers GENIE, a graphical Bayes net tool. SMILE can do a lot of things, but I only used the simplest features - building a network and, given evidence, inferring probability distributions over the unobserved variables.

continue reading »

Formalizing reflective inconsistency

3 Johnicholas 13 September 2009 04:23AM

In the post Outlawing Anthropics, there was a brief and intriguing scrap of reasoning that used the principle of reflective inconsistency, which so far as I know is unique to this community:

If your current system cares about yourself and your future, but doesn't care about very similar xerox-siblings, then you will tend to self-modify to have future copies of yourself care about each other, as this maximizes your expectation of pleasant experience over future selves.

This post expands upon and attempts to formalize that reasoning, in hopes of developing a logical framework for reasoning about reflective inconsistency.

continue reading »

Formalizing informal logic

12 Johnicholas 10 September 2009 08:16PM

As an exercise, I take a scrap of argumentation, expand it into a tree diagram (using FreeMind), and then formalize the argument (in Automath). This is a step toward the goal of creating "rationality augmentation" software. In the short term, my suspicion is that such software would look like a group of existing tools glued together with human practices.

About my choice of tools: I investigated Araucaria, Rationale, Argumentative, and Carneades. With the exception of Rationale, they're not as polished graphically as FreeMind, and the rigid argumentation-theory structure was annoying in the early stages of analysis. Using a general-purpose mapping/outlining tool may not be ideal, but it's easy to obtain. The primary reason I used Automath to formalize the argument was because I'm somewhat familiar with it. Another reason is that it's easy to obtain and build (at least, on GNU/Linux).

Automath is an ancient and awesomely flexible proof checker. (Of course, other more modern proof checkers are often just as flexible, maybe more flexible, and may be more usable.) The amount of "proof checking" done in this example is trivial - roughly, what the checker is checking is: "after assuming all of these bits and pieces of opaque human reasoning, do they form some sort of tree?" - but cutting down a powerful tool leaves a nice upgrade path, in case people start using exotic forms of logic. However, the argument checkers built into the various argumentation-theory tools do not have such upgrade paths, and so are not really credible as candidates to formalize the arguments on this site.

continue reading »

Argument Maps Improve Critical Thinking

24 Johnicholas 30 August 2009 05:34PM

Charles R. Twardy provides evidence that a course in argument mapping, using a particular software tool, improves critical thinking. The improvement in critical thinking is measured by performance on a specific multiple-choice test (the California Critical Thinking Skills Test). This may not be the best way to measure rationality, but my point is that, unlike almost everybody else, there was measurement, and a statistically significant improvement!

Also, his paper is the best, methodologically, that I've seen in the field of "individual rationality augmentation research".

To summarize my (clumsy) understanding of the activity of argument mapping:

One takes a real argument in natural language (op-eds are a good source of short arguments; philosophy is a source of long ones), then elaborates it into a tree structure, with the main conclusion at the root of the tree. The tree has two kinds of nodes (it is a bipartite graph). The root conclusion is a "claim" node. Every claim node has approximately one sentence of English text associated with it. The children of a claim are "reasons", which do NOT have English text associated with them. The children of a reason are claims. Unless I am mistaken, the intended meaning of the connection from a claim's child (a reason) to the parent is implication, and the meaning of a reason is the conjunction of its children.

In elaborating the argument, it is often necessary to insert implicit claims. This should be done in accordance with the "Principle of Charity": interpret the argument in the way that makes it the strongest argument possible.
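The claim/reason structure described above can be sketched as a pair of node types (class names and the example argument are my own illustration, not part of any of the tools mentioned):

```python
class Claim:
    """One sentence of English text, supported by zero or more reasons."""
    def __init__(self, text, reasons=()):
        self.text = text
        self.reasons = list(reasons)

class Reason:
    """No text of its own; the conjunction of its child claims
    is taken to imply the parent claim."""
    def __init__(self, claims):
        self.claims = list(claims)

def leaves(claim):
    """The unsupported claims an argument map ultimately rests on."""
    if not claim.reasons:
        return [claim.text]
    return [leaf for r in claim.reasons
                 for c in r.claims
                 for leaf in leaves(c)]

# "Socrates is mortal" because ("Socrates is a man" AND "All men are mortal")
root = Claim("Socrates is mortal",
             [Reason([Claim("Socrates is a man"),
                      Claim("All men are mortal")])])
```

Walking the tree with `leaves(root)` surfaces the premises that carry the whole argument, which is where implicit claims tend to hide.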

There are two syntactic rules which can easily find flaws in argument maps:

continue reading »

Wits and Wagers

3 Johnicholas 04 August 2009 04:39PM

Wits and Wagers is apparently a board game in which players compete to be well-calibrated with respect to their trivia knowledge. I haven't played it.

Has someone else here played it? If so, what was your experience? Would it be good rationalist/bayesian training?

 

Dialectical Bootstrapping

19 Johnicholas 13 March 2009 05:10PM

"Dialectical Bootstrapping" is a simple procedure that may improve your estimates. This is how it works:

  1. Estimate the quantity in whatever manner you usually would. Write that down.
  2. Assume your first estimate is off the mark.
  3. Think about a few reasons why that could be. Which assumptions and considerations could have been wrong?
  4. What do these new considerations imply? Was the first estimate rather too high or too low?
  5. Based on this new perspective, make a second, alternative estimate.

Herzog and Hertwig find that the average of the two estimates (in a historical-date-estimation task) is more accurate than the first estimate (Edit: or than the average of two estimates made without the "assume you're wrong" manipulation). To put the finding in an OB/LW-centric manner, this procedure (sometimes, partially) avoids Cached Thoughts.
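A toy model (entirely my own, not Herzog and Hertwig's) of why averaging helps: if the second estimate's error is even partly independent of the first's, the average has a lower expected error than either estimate alone.

```python
import random

def toy_bootstrap(n=10000, truth=100.0, seed=0):
    """Compare the mean absolute error of a first estimate with that of
    the average of two estimates whose errors are partly independent."""
    rng = random.Random(seed)
    err_first = err_avg = 0.0
    for _ in range(n):
        shared = rng.gauss(0, 10)      # error component common to both estimates
        first = truth + shared + rng.gauss(0, 10)   # plus independent noise
        second = truth + shared + rng.gauss(0, 10)  # plus different noise
        err_first += abs(first - truth)
        err_avg += abs((first + second) / 2 - truth)
    return err_first / n, err_avg / n
```

The shared error component models the cached assumptions that survive the "assume you're wrong" step; only the independent component is averaged away.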

Adversarial System Hats

8 Johnicholas 11 March 2009 04:56PM

In Reply to: Rationalization, Epistemic Handwashing, Selective Processes

Eliezer Yudkowsky wrote about scientists defending pet hypotheses, and prosecutors and defenders as examples of clever rationalization. His primary focus was advice to the well-intentioned individual rationalist, which is excellent as far as it goes. But Anna Salamon and Steve Rayhawk ask how a social system should be structured for group rationality.

The adversarial system is widely used in criminal justice. In the legal world, roles such as Prosecution, Defense, and Judge are all guaranteed to be filled, with roughly the same amount of human effort applied to each side. Suppose instead that individuals chose their own roles. One role might turn out to be more popular than the others. Because different amounts of effort would be applied to different sides, selecting for the positions with the strongest arguments would no longer select much for positions that are true.

continue reading »

Software tools for community truth-seeking

1 Johnicholas 10 March 2009 01:20PM

In reply to: Community Epistemic Practice

There are software tools that may be helpful for community truth-seeking. For example, truthmapping.com is described very well here. There is also debategraph.org, and I'm sure there are others.

 

Checklists

12 Johnicholas 07 March 2009 03:47PM

Checklists are a rationality technique, mentioned previously on OB. Everyone knows about them, but we don't hear about them as often as we should, possibly because they seem prosaic and boring.

In the context of doing something over and over, there is a checklist improvement cycle.

  • You try to make the thing (e.g. the blog post, the rational decision, the mathematical proof)
  • For each kind of error in your checklist, you search the thing for that kind of error, and fix it if it occurs.
  • When something that passed your checklist turns out to have had an error, you add that kind of error to your checklist.

There are many caveats to this description: Some checklists are not primarily lists of errors, but primarily ordered procedures. You may want to complicate the cycle to track the cost and benefit of the items on the checklist. We're assuming that errors are eventually discovered. I want to pass over those caveats and claim that this kind of checklist-of-errors is very successful. If you agree, my question is: What feature or features of our minds are checklists compensating for? If we understood that, then we would be able to use checklists even more effectively.
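The cycle above can be sketched in a few lines (the function names and the string-cleanup example are my own, purely for illustration):

```python
def apply_checklist(thing, checklist):
    """Second step of the cycle: for each known kind of error,
    search the thing for it, and fix it if it occurs."""
    for detect, fix in checklist.values():
        if detect(thing):
            thing = fix(thing)
    return thing

# The checklist maps an error name to a (detect, fix) pair.
checklist = {
    'double spaces': (lambda s: '  ' in s,
                      lambda s: ' '.join(s.split())),
}

# Third step: an error slipped through past review, so add it.
checklist['trailing whitespace'] = (lambda s: s != s.rstrip(),
                                    lambda s: s.rstrip())

draft = "Checklists are  prosaic but effective. "
clean = apply_checklist(draft, checklist)
```

The interesting part is not the loop but the growth of `checklist` over time: every escaped error becomes a permanent, mechanical check.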

continue reading »
