Follow-up to: Argument Maps Improve Critical Thinking, Software Tools for Community Truth-Seeking
We are here, among other things, in an attempt to collaboratively refine the art of human rationality.
Rationality is hard because the wetware we run it on is built from scavenged parts originally intended for other purposes; and collaboration is hard, I believe, because it involves huge numbers of tiny decisions about what information others need. Yet we get by, largely thanks to advances in technology.
One of the most important technologies for advancing both rationality and collaboration is the written word. It affords looking at large, complex issues with limited cognitive resources, by the wonderful trick of "external cached thoughts". Instead of trying to hold every piece of the argument at once, you can store parts of it in external form, refer back to them, and communicate them to other people.
For some reason, it seems very hard to improve on this six-thousand-year-old technology. Witness LessWrong itself, which, in spite of using some of the latest and greatest communication technologies, still has people arguing by exchanging sentences back and forth.
Previous posts have suggested that recent software tools might hold promise for improving on "traditional" forms of argument. This kind of suggestion is often more valuable when applied to a real and relevant case study. I found the promise compelling enough to give a few tools a try, in the context of the recent (and recurrent) cryonics debate. I report back here with my findings.
I. Argunet
The first tool I tried was Argunet, an open-source offering from the Institute of Philosophy in Berlin. I was seduced by the promise of reconstructing the logical structure of an argument, and by the possibility of collaborating online with others on an argument.
Like other products in that category, Argunet's basic principle of operation is a visual canvas, on which you create and arrange boxes that represent statements, the portions of an argument. Relationships between parts of an argument are then drawn as links or arrows.
Argunet offers two basic types of relationship between statements, Supports and Attacks, as well as several types of "inference patterns".
Unfortunately, when I tried using the Editor I soon found it difficult to the point of being unusable. It violates the default expectation that boxes can be moved around by clicking and dragging; worse, I could not find any way at all to move my boxes after initially creating them.
I ended up frustrated and gave up on Argunet.
II. bCisive Online
I had somewhat better luck with the next tool I tried, bCisive Online. This is a public beta of a commercial offering from Austhink, the company already referenced in the previous posts on argument mapping. (It is a spin-off of their range of products marketed for decision support rather than argument support, but it is also their only online, collaborative tool so far.)
The canvas metaphor proved to be implemented more effectively, and I was able in a relatively short time to sketch out a map of my thinking about cryonics (which I invite you to browse and comment on).
bCisive supports different types of statements, distinguished by the icons on their boxes: questions; arguments pro or con; evidence; options; "fixes", and so on. At present it doesn't appear to *do* anything valuable with these distinctions, but they proved to be an effective scheme for organizing my thoughts.
III. Preliminary conclusions
I was loath to invest much more time in updating my cryonics decision map, for two reasons. One is that what I really want from such a tool is to incorporate others' objections and counter-objections; in fact, it seems to me that the more valuable approach would be a fully collaborative effort. So while it was worthwhile to structure my own thinking with the tool, and (killing two birds with one stone) that doubled as a test drive, it seems pointless to continue without outside input.
The other, more important reason is that bCisive seems to provide little more than a fancy mindmapping tool at the moment, and the glimpse I had of tool support for structuring a debate has already raised my expectations beyond that.
I have my doubts that the "visual" aspect is as important as the creators of such software tools would like everyone to think. It seems to me that what helped focus my thinking when using bCisive was the scheme of statement types: conclusion, arguments pro and con, evidence and "fixes". This might work just as well if the tool used a textual, tabular or other representation.
The argument about cryonics is important to me, and to others who are considering cryonics. It is a life decision of some consequence, not to be taken lightly and without due deliberation. For this reason, I found myself wishing that the tool could process quantitative, not just qualitative, aspects of my reasoning.
IV. A wish list for debate support
Based on my experiences, what I would look for is a tool that distinguishes between, and supports the use of:
- a conclusion or a decision, which is to be "tested" by the use of the tool
- various hypotheses, which are offered in support or in opposition to the conclusion, with degrees of plausibility
- logical structure, such as "X follows from Y"
- challenges to logical structure, such as "X may not necessarily follow from Y, if you grant Z"
- elements of evidence, which make hypotheses more or less probable
- recursive relations between these elements
The tool should be able to "crunch numbers", so that it gives an overall indication of how much the total weight of evidence and argumentation contributes to the conclusion.
It should have a "public" part, representing what a group of people can agree on regarding the structure of the debate; and a "private" part, wherein you can adduce evidence you have collected yourself, or assign private degrees of belief in various statements.
In this way, the tool would allow "settling" debates even while allowing disagreement to persist, temporarily or durably: you could agree with the logical structure but allow that your personal convictions rationally lead you to different conclusions. Highlighting the points of agreement and contention in this way would be a valuable way to focus further debate, limiting the risk of "logical rudeness".
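To make the "public versus private" split and the number crunching concrete, here is a minimal sketch in PHP (since that is what the scenarios map already uses). Everything in it is invented for illustration: the statements, the weights, and the linear scoring rule are placeholders, and a real tool would presumably want something more principled, such as proper probabilistic updating.

```php
<?php
// Toy model of the wish list above: a shared ("public") argument structure,
// plus each participant's private credences. All names and numbers invented.

// Public part: statements, and signed links everyone agrees describe the debate.
// The sign says whether a hypothesis supports (+) or attacks (-) the conclusion;
// the magnitude is the agreed strength of that inference.
$statements = [
    'C'  => 'Signing up for cryonics is the right decision',
    'H1' => 'Revival from vitrification will eventually be possible',
    'H2' => 'Current cryonics organizations will not survive long enough',
];
$links = [
    ['from' => 'H1', 'to' => 'C', 'weight' => +0.8],
    ['from' => 'H2', 'to' => 'C', 'weight' => -0.5],
];

// Private part: each participant's degree of belief in the hypotheses.
$credences = [
    'alice' => ['H1' => 0.6, 'H2' => 0.2],
    'bob'   => ['H1' => 0.1, 'H2' => 0.7],
];

// Naive "number crunching": sum credence x signed weight over links into a claim.
function scoreClaim(string $claim, array $links, array $beliefs): float
{
    $score = 0.0;
    foreach ($links as $link) {
        if ($link['to'] === $claim && isset($beliefs[$link['from']])) {
            $score += $beliefs[$link['from']] * $link['weight'];
        }
    }
    return $score;
}

foreach ($credences as $person => $beliefs) {
    printf("%s: score for \"%s\" = %+.2f\n",
           $person, $statements['C'], scoreClaim('C', $links, $beliefs));
}
```

Note that Alice and Bob share the same public structure yet end up with scores of opposite sign, which is exactly the "agree on the logic, disagree on the conclusion" outcome described above.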
I would recommend that we try to create our own debate-mapping tool. It might end up being surprisingly easy.
I've already used PHP, GraphViz, and MediaWiki to implement a vaguely similar project, the Transhumanist Wiki Scenarios Map.
Unfortunately, that project ended up being less useful than I had hoped, and has been abandoned for now.
Today, I made a rough sketch of what a debate-mapping tool based on these tools might look like.
A VERY rough sketch.
Pretty much every detail is probably going to need to be changed in order for it to be usable.
Anyway, here's a link to that experiment.
Once again, it didn't turn out as well as I had hoped.
The basic idea is that you take a chat log of a debate and add some annotations, marking the main claims and indicating which statements support or oppose which others.
Then you run a script on the annotated chat log, and it outputs a graph of the arguments in the debate.
One advantage of this method is that the text and the annotations can be updated as the debate continues, and the graph will be updated to match this new data.
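Here is a guess, in PHP, at what the script for that step might look like. The annotation syntax ("[claim A]", "[B opposes A]") and the file name are made up for the sake of illustration; the experiment linked above uses its own format.

```php
<?php
// Read an annotated chat log and print a GraphViz digraph of the debate.
// Assumed (invented) annotation syntax:
//   Alice: We should all sign up for cryonics. [claim A]
//   Bob: Cryonics organizations tend to go under. [claim B] [B opposes A]

$log = file_get_contents('debate.txt');

// Collect the claims: a speaker, the statement text, and the claim's label.
preg_match_all('/^(\w+):\s*(.+?)\s*\[claim (\w+)\]/m', $log, $matches, PREG_SET_ORDER);
$claims = [];
foreach ($matches as [, $speaker, $text, $id]) {
    $claims[$id] = "$speaker: $text";
}

// Collect the relations between claims.
preg_match_all('/\[(\w+) (supports|opposes) (\w+)\]/', $log, $relations, PREG_SET_ORDER);

// Emit DOT: one box per claim, green edges for support, red for opposition.
echo "digraph debate {\n";
foreach ($claims as $id => $label) {
    echo "  $id [shape=box, label=\"" . addslashes($label) . "\"];\n";
}
foreach ($relations as [, $from, $rel, $to]) {
    $color = ($rel === 'supports') ? 'green' : 'red';
    echo "  $from -> $to [color=$color, label=\"$rel\"];\n";
}
echo "}\n";
```

Piping the output through GraphViz (something like `php debate2dot.php | dot -Tpng -o debate.png`, with placeholder file names) would then produce the picture, and re-running it after the chat log grows keeps the graph in sync, as described above.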
Some ideas for things to change:
- change the formatting of the annotations; the word "claim" is unnecessary
- set up the actual PHP script (these example graphs were generated by manually writing the annotations in the GraphViz format)
- set up different formats for the output. A graph is not the most useful format; a better idea would be a table summarizing the info for each of the claims (a sketch of such an output follows this list)
  - perhaps each claim could have its own wiki page, similar to how the scenarios map works
- add more keywords, besides just "supports" and "opposes"
- add a way to indicate which speaker agrees with which claims, and deduce from that which conclusions are supported by the implications of their assumptions
- set up the script to automatically generate the graphs as the wiki page is updated
- more?
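As for the table idea above, here is one possible shape for it, again only a sketch: the same claim and relation data the parser would produce, hard-coded here for brevity, emitted as MediaWiki table markup instead of DOT.

```php
<?php
// Emit a MediaWiki table summarizing each claim, instead of a graph.
// The data below is hard-coded in the shape the parser sketch would produce.
$claims = [
    'A' => 'Alice: We should all sign up for cryonics.',
    'B' => 'Bob: Cryonics organizations tend to go under.',
];
$relations = [
    ['from' => 'B', 'rel' => 'opposes', 'to' => 'A'],
];

echo "{| class=\"wikitable\"\n";
echo "! Claim !! Supported by !! Opposed by\n";
foreach ($claims as $id => $text) {
    $pro = [];
    $con = [];
    foreach ($relations as $r) {
        if ($r['to'] !== $id) {
            continue;
        }
        if ($r['rel'] === 'supports') {
            $pro[] = $r['from'];
        } else {
            $con[] = $r['from'];
        }
    }
    echo "|-\n";
    echo "| $id: $text || " . implode(', ', $pro) . " || " . implode(', ', $con) . "\n";
}
echo "|}\n";
```

That markup could be pasted into, or generated on, a wiki page per claim, in the spirit of the scenarios map.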
I'm surprised more tools to do this kind of thing don't already exist. It reminds me of the Truth Maintenance Systems I learned about in AI classes in the mid-90s.