First, my own observation agrees with GreenRoot's. My view is less systematic but covers a much longer span: I've been watching this area since the 70s. (Perhaps longer; I was fascinated in my teens by Leibniz's injunction "Let us calculate".)
Empirically, I think several decades of experiment have established that no obvious or simple approach will work. Unless someone has a major new idea, we should not pursue straightforward graphical representations.
On the other hand we do have a domain where machine usable representation of thought has been successful, and where in fact that representation has evolved fairly rapidly. That domain is "programming" in a broad sense.
Graphical representations of programs have been tried too, and all such attempts have been failures. (I was a project manager for such an attempt in the 80s.) The basic problem is that a program is naturally a high-dimensional object, and when mapped down into a two-dimensional picture it is about as comprehensible as a bowl of spaghetti.
The really interesting aspect of programming for representing arguments isn't the mainstream "get it done" perspective, but the background work that has been done on tools for analyzing, transforming, optimizing, etc. code. These tools all depend on extracting and maintaining the semantics of the code through a lot of non-trivial changes. Furthermore over time the representations they use have evolved from imperative, time-bound ones toward declarative ones that describe relationships in a timeless way.
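The shift from imperative, time-bound representations toward declarative, timeless ones can be seen in miniature even within a single language. As a hedged illustration (Python is my choice here, not anything the comment specifies), the same relationship can be stated either way:

```python
# Imperative, time-bound: build the result by changing state step by step.
squares = []
for n in range(5):
    squares.append(n * n)

# Declarative, timeless: state the relationship between input and output.
squares_decl = [n * n for n in range(5)]
```

The declarative form describes *what* the result is rather than *how* it comes to exist over time, which is what makes it more amenable to automated analysis and transformation.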
At the same time programming languages have evolved to move more of the "mechanical" semantics into runtimes or implicit operations during compile time, such as type inference. This turns out to be essential to keep down the clutter in the code, and to maintain global consistency.
The effect is that programming languages are moving closer to formal symbolic calculi, and program transformations are moving closer to automated proof checking (while automated proof checking is evolving to take advantage of some of these same ideas).
In my opinion, all of that is necessary for any kind of machine support of the semantics of rational discussion. But it is not sufficient. The problem is that our discussion allows, and realistically has to allow, a wide range of vagueness, while existing programming semantics are nowhere near vague enough. In our arguments we have to refer to only partially specified, or in some cases nearly unspecified "things", and then refine our specification of those things over time as necessary. (An extremely limited but useful form of this is already supported in advanced programming languages as "lazy", potentially infinite data structures. These are vague only about how many terms of a sequence will be calculated -- as many as you ask for, plus possibly more.)
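The lazy data structures mentioned in the parenthetical can be sketched in a few lines; in Python (my choice of language for illustration) a generator gives exactly this limited form of vagueness:

```python
from itertools import islice

def naturals(start=0):
    """A lazily evaluated, potentially infinite sequence of numbers."""
    n = start
    while True:
        yield n
        n += 1

# The sequence is "vague" about its length: terms come into existence
# only when asked for -- as many as you request, no more computed.
first_five = list(islice(naturals(), 5))
```

The definition commits to nothing about how many terms will ever be produced; that decision is deferred to the consumer.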
For example, look at the first sentence of my paragraph above. What does "all of that" refer to? You know enough from context to understand my point. But if we actually ended up pursuing this as a project, by the time we could build anything that works we'd have an extremely complex understanding of the previous relevant work, and how to tie back to it. In the process we would have looked at a lot of stuff that initially seemed relevant (i.e. currently included in "all of that") but that after due consideration we found we needed to exclude. If we had to specify "all of that" in advance (even in terms of sharp criteria for inclusion) we'd never get anywhere.
So any representation of arguments has to allow vagueness in all respects, and also allow the vagueness to be challenged and elaborated as necessary. The representation has to allow multiple versions of the argument, so different approaches can be explored. It has to allow different (partial) successes to be merged, resolving any inconsistencies by some combination of manual and machine labor. (We have pretty good tools for versioning and merging in programming, to the extent the material being manipulated has machine-checkable semantics.)
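The versioning-and-merging point can be made concrete with a minimal sketch. Assuming (hypothetically) that statements in an argument map are keyed by identifiers, a standard three-way merge resolves agreement by machine and flags genuine conflicts for manual labor:

```python
def merge_versions(base, ours, theirs):
    """Three-way merge of statement-id -> text maps.

    Where the two branches agree, or only one changed the base,
    the machine resolves; where both changed it differently,
    the conflict is left for manual resolution.
    """
    merged, conflicts = {}, []
    for key in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(key), ours.get(key), theirs.get(key)
        if o == t:            # both sides agree (including both deleting)
            chosen = o
        elif o == b:          # only "theirs" changed it
            chosen = t
        elif t == b:          # only "ours" changed it
            chosen = o
        else:                 # both changed it differently: manual labor
            conflicts.append(key)
            continue
        if chosen is not None:
            merged[key] = chosen
    return merged, conflicts
```

This is exactly the logic behind tools like `git merge`; the interesting open question the comment raises is how much further a merge can go when the material has machine-checkable semantics rather than plain text.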
The tools for handling vagueness are coming along (in linguistic theory and statistical modeling) but they are not yet at the engineering cookbook level. However if an effort to build semantic argumentation tools on a programming technology base got started now, the two trajectories would probably intersect in a fairly useful way a few years out.
The implications of all of this for AI would be interesting to discuss, but perhaps belong in another context.
> Graphical representations of programs have been tried too, and all such attempts have been failures. (I was a project manager for such an attempt in the 80s.) The basic problem is that a program is naturally a high-dimensional object, and when mapped down into a two-dimensional picture it is about as comprehensible as a bowl of spaghetti.
The problem has consistently appeared to me to be related to the use of incorrect abstractions. Most of the visual attempts I've seen have been roughly equivalent to printing binary code to the screen as an attempt at a textual representation...
Follow-up to: Argument Maps Improve Critical Thinking, Software Tools for Community Truth-Seeking
We are here, among other things, in an attempt to collaboratively refine the art of human rationality.
Rationality is hard, because the wetware we run rationality on is scavenged parts originally intended for other purposes; and collaboration is hard, I believe, because it involves huge numbers of tiny decisions about what information others need. Yet we get by, largely thanks to advances in technology.
One of the most important technologies for advancing both rationality and collaboration is the written word. It affords looking at large, complex issues with limited cognitive resources, by the wonderful trick of "external cached thoughts". Instead of trying to hold every piece of the argument at once, you can store parts of it in external form, refer back to them, and communicate them to other people.
For some reason, it seems very hard to improve on this six-thousand-year-old technology. Witness LessWrong itself, which in spite of using some of the latest and greatest communication technologies, still has people arguing by exchanging sentences back and forth.
Previous posts have suggested that recent software tools might hold promise for improving on "traditional" forms of argument. This kind of suggestion is often more valuable when applied to a real and relevant case study. I found the promise compelling enough to give a few tools a try, in the context of the recent (and recurrent) cryonics debate. I report back here with my findings.
I. Argunet
The first tool I tried was Argunet, an Open Source offering from the Institute of Philosophy in Berlin. I was seduced by the promise of reconstructing the logical structure of an argument, and by the possibility of collaborating online with others on an argument.
Like other products in that category, the basic principle of operation of Argunet is that of a visual canvas, on which you can create and arrange boxes that represent statements, the portions of an argument. Relationships between parts of an argument are then materialized using links or arrows.
Argunet supports two types of basic relationship between statements, Supports and Attacks. It also supports several types of "inference patterns".
Unfortunately, when I tried using the Editor I soon found it difficult to the point of being unusable. It violates the default expectation that boxes can be moved around by clicking and dragging; further, I was unable to find any way to move my boxes after initially creating them.
I ended up frustrated and gave up on Argunet.
II. bCisive Online
I had somewhat better luck with the next tool I tried, bCisive online. This is a public beta of a commercial offering by Austhink, the company already referenced in the previous posts on argument mapping. (It is a spin-off of their range of products marketed for decision support rather than argument support, but is also their only online, collaborative tool so far.)
The canvas metaphor proved to be implemented more effectively, and I was able in a relatively short time to sketch out a map of my thinking about cryonics (which I invite you to browse and comment on).
bCisive supports different types of statements, distinguished by the icons on their boxes: questions; arguments pro or con; evidence; options; "fixes"; and so on. At present it doesn't appear to *do* anything valuable with these distinctions, but they proved to be an effective scheme for organizing my thoughts.
III. Preliminary conclusions
I was loath to invest much more time in updating my cryonics decision map, for two reasons. One is that what I would like to get from such a tool is to incorporate others' objections and counter-objections; in fact, it seems to me that the more valuable approach would be a fully collaborative effort. So, while it was worthwhile to structure my own thinking using the tool, and (killing two birds with one stone) that served as a test drive for the tool, it seems pointless to continue without outside input.
The other, more important reason is that bCisive seems to provide little more than a fancy mindmapping tool at the moment, and the glimpse I had of tool support for structuring a debate has already raised my expectations beyond that.
I have my doubts that the "visual" aspect is as important as the creators of such software tools would like everyone to think. It seems to me that what helped focus my thinking when using bCisive was the scheme of statement types: conclusion, arguments pro and con, evidence and "fixes". This might work just as well if the tool used a textual, tabular or other representation.
The argument about cryonics is important to me, and to others who are considering cryonics. It is a life decision of some consequence, not to be taken lightly and without due deliberation. For this reason, I found myself wishing that the tool could process quantitative, not just qualitative, aspects of my reasoning.
IV. A wish list for debate support
Based on my experiences, what I would look for is a tool with the following characteristics:
The tool should be able to "crunch numbers", so that it gives an overall indication of how much the total weight of evidence and argumentation contributes to the conclusion.
It should have a "public" part, representing what a group of people can agree on regarding the structure of the debate; and a "private" part, wherein you can adduce evidence you have collected yourself, or assign private degrees of belief in various statements.
In this way, the tool would allow "settling" debates even while allowing disagreement to persist, temporarily or durably: you could agree with the logical structure but allow that your personal convictions rationally lead you to different conclusions. Highlighting the points of agreement and contention in this way would be a valuable way to focus further debate, limiting the risk of "logical rudeness".
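A rough sketch of how these last points might fit together (Python is my own choice of language, and every statement name and weight below is hypothetical): the tool could crunch the numbers of a shared, "public" structure against each participant's private degrees of belief:

```python
# Public part: the structure the group agrees on
# (statement -> (pro/con, weight of its bearing on the conclusion)).
PUBLIC_MAP = {
    "revival-is-possible": ("pro", 1.0),
    "providers-may-fail":  ("con", 1.0),
}

def personal_verdict(private_credence, default=0.5):
    """Combine the agreed structure with one person's private
    degrees of belief in each statement."""
    total = 0.0
    for statement, (kind, weight) in PUBLIC_MAP.items():
        signed = weight if kind == "pro" else -weight
        total += signed * private_credence.get(statement, default)
    return total

# Two people accept the same logical structure yet rationally
# reach different overall verdicts.
alice = personal_verdict({"revival-is-possible": 0.8, "providers-may-fail": 0.3})
bob   = personal_verdict({"revival-is-possible": 0.2, "providers-may-fail": 0.9})
```

Highlighting where `alice` and `bob` diverge (here, only in their credences, never in the structure) is exactly the kind of focused disagreement described above.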