As an exercise, I take a scrap of argumentation, expand it into a tree diagram (using FreeMind), and then formalize the argument (in Automath). This is a step toward the goal of creating "rationality augmentation" software. In the short term, my suspicion is that such software would look like a group of existing tools glued together with human practices.
About my choice of tools: I investigated Araucaria, Rationale, Argumentative, and Carneades. With the exception of Rationale, they're not as polished graphically as FreeMind, and their rigid argumentation-theory structure was annoying in the early stages of analysis. Using a general-purpose mapping/outlining tool may not be ideal, but it's easy to obtain. The primary reason I used Automath to formalize the argument is that I'm somewhat familiar with it. Another reason is that it's easy to obtain and build (at least on GNU/Linux).
Automath is an ancient and awesomely flexible proof checker. (Of course, more modern proof checkers are often just as flexible, maybe more so, and may be more usable.) The amount of "proof checking" done in this example is trivial - roughly, what the checker is checking is: "after assuming all of these bits and pieces of opaque human reasoning, do they form some sort of tree?" - but cutting down a powerful tool leaves a nice upgrade path, in case people start using exotic forms of logic. However, the argument checkers built into the various argumentation-theory tools do not have such an upgrade path, and so are not really credible candidates for formalizing the arguments on this site.
Here's a piece of argumentation, abstracted from something that I was really thinking about at work:
There aren't any memory leaks in this method, but how would I argue it? If I had tested it with a tool like Valgrind or mtrace, I would have some justification - but I didn't. By eye, it doesn't look like it does any allocations from the heap. Of course, if a programmer violated coding standards, they could conceal an allocation from casual inspection. However, the author of the code wouldn't violate the coding standard that badly. Why do I believe that they wouldn't violate the coding standard? Well, I can't think of a reason for them to violate the coding standard deliberately, and they're competent enough to avoid making such an egregious mistake by accident.
To apply the argument diagramming technique, try to form a tree structure in which claims are supported by independent reasons. A reason is a small combination of claims which do not support the conclusion individually, but do support it jointly. This is what I came up with (png, svg, FreeMind's native mm format); a sketch of what the underlying mm structure looks like follows the list of flaws below.
Some flaws in this analysis:
1. The cursory inspection might be an independent reason for believing there are no allocations.
2. There isn't any mention in the analysis of the egregiousness of the mistake.
3. The treatment of benevolence and competence is asymmetrical.
(My understanding of argument diagramming so far is that bouncing between "try to diagram the argument as best you can" and "try to explain in what ways the diagram is still missing aspects of the original text" is helpful.)
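For readers who haven't looked inside an mm file: FreeMind's native format is plain XML, with one node element per entry in the tree. A hand-written, stripped-down sketch of a tree like the one above might look roughly like this (the node text is paraphrased, the grouping is a simplification, and FreeMind's bookkeeping attributes are omitted):
<map version="1.0.1">
  <node TEXT="There are no memory leaks in this method">
    <node TEXT="The method does not allocate from the heap">
      <node TEXT="Cursory inspection shows no allocations"/>
      <node TEXT="The author would not conceal an allocation by violating the coding standard">
        <node TEXT="No apparent motive to violate the standard deliberately"/>
        <node TEXT="Competent enough not to violate it by accident"/>
      </node>
    </node>
  </node>
</map>
Real FreeMind files add CREATED, MODIFIED, and ID attributes, and long or styled node text may be stored in richcontent elements rather than the TEXT attribute, so anything that consumes these files should expect some variation.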
In order to translate this informal argument into Automath, I used a very simple logical framework. Propositions are a kind of thing, and each justification is a justification of some particular proposition. There are no general-purpose inference rules like modus ponens or "from the truth of a proposition, conclude the necessity of that proposition". This means that every reason in the argument needs to assume its own special-case inference rule, warranting that step.
# A proposition is a kind of thing, by assumption.
* proposition
: TYPE
= PRIM
# A justification of a proposition is a kind of thing, by assumption.
* [p : proposition]
justification
: TYPE
= PRIM
To translate a claim in this framework, we create (by assuming) the proposition, with a chunk like this:
* claim_foo
: proposition
= PRIM
To translate a reason for a claim "baz" from predecessor claims "foo" and "bar", first we create (by assuming) the inference rule, with a chunk like this:
* [p:justification(claim_foo)]
[q:justification(claim_bar)]
reason_baz
: justification(claim_baz)
= PRIM
Second, we actually use the inference rule (exactly once), with a chunk like this:
* justification_baz
: justification(claim_baz)
= reason_baz(justification_foo, justification_bar)
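To make the pattern concrete: after introducing claim_no_heap_allocations, claim_cursory_inspection_shows_no_allocations, and claim_author_would_not_conceal_an_allocation (with PRIM justifications for the two leaf claims) exactly as above, one step of the example argument comes out roughly like this. This is a paraphrase of the shape, with made-up claim names, not a verbatim excerpt of the linked file:
* [p:justification(claim_cursory_inspection_shows_no_allocations)]
[q:justification(claim_author_would_not_conceal_an_allocation)]
reason_no_heap_allocations
: justification(claim_no_heap_allocations)
= PRIM
* justification_no_heap_allocations
: justification(claim_no_heap_allocations)
= reason_no_heap_allocations(justification_cursory_inspection_shows_no_allocations, justification_author_would_not_conceal_an_allocation)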
My formalization of the above scrap of reasoning is here. Running "aut" is really easy. Do something like:
aut there_are_no_leaks.aut
Where to go next:
- Formalize more arguments, and bigger arguments.
- Investigate more tools (a more modern proof checker?).
- The translation from a FreeMind diagram to Automath is probably automatable via FreeMind's XSLT export; a rough sketch of the idea follows this list.
- How do we incorporate other tools into the workflow? For example Google, Genie&Smile, or some sort of checklist tool?
- What argument schemes (reflective inconsistency? don't be continually surprised?) do we actually use?
- Try to get some experimental evidence that formalization actually helps rationality somehow.
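On that automation point: here is a sketch of the idea using Python's standard xml.etree.ElementTree rather than XSLT. The assumptions are mine, not FreeMind's or Automath's: node text lives in plain TEXT attributes, every node is a claim, leaf claims are justified by assumption, and all of a node's children are premises of a single reason (the real diagram groups premises into reasons more carefully). It emits chunks following the naming conventions above.
# freemind_to_aut.py - untested sketch of a FreeMind mm -> Automath translator
import re
import sys
import xml.etree.ElementTree as ET

def ident(text):
    # Mangle node text into something identifier-shaped.
    return re.sub(r'[^A-Za-z0-9]+', '_', text.strip()).strip('_').lower() or 'unnamed'

def emit(node, out):
    name = ident(node.get('TEXT', ''))
    children = node.findall('node')
    # Every node becomes an assumed proposition.
    out.append('* claim_%s\n: proposition\n= PRIM' % name)
    if not children:
        # Leaf: its justification is simply assumed.
        out.append('* justification_%s\n: justification(claim_%s)\n= PRIM' % (name, name))
        return
    for child in children:
        emit(child, out)
    kids = [ident(c.get('TEXT', '')) for c in children]
    # One assumed inference rule with the children as premises...
    context = ' '.join('[p%d:justification(claim_%s)]' % (i, k) for i, k in enumerate(kids))
    out.append('* %s\nreason_%s\n: justification(claim_%s)\n= PRIM' % (context, name, name))
    # ...and exactly one use of it.
    args = ', '.join('justification_%s' % k for k in kids)
    out.append('* justification_%s\n: justification(claim_%s)\n= reason_%s(%s)'
               % (name, name, name, args))

if __name__ == '__main__':
    root_claim = ET.parse(sys.argv[1]).getroot().find('node')  # the map's single root node
    chunks = ['* proposition\n: TYPE\n= PRIM',
              '* [p : proposition]\njustification\n: TYPE\n= PRIM']
    emit(root_claim, chunks)
    print('\n\n'.join(chunks))
Duplicate or empty node text would need smarter name mangling, and nodes that FreeMind stores as richcontent rather than TEXT come out as "unnamed", so treat this as a starting point rather than a tool.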
Reading this comment makes me think that the problem of building formal tools to aid inference is, um, not exactly a million miles away from the object-level problem of building AGI. Hierarchical models, Bayes nets, meta-arguments about the reliability of approximations. Perhaps the next thing we'll be asking for is some way to do interventions in the real world to support causal reasoning?
Do you mean formal arguments that someone should do an experiment because knowledge of the results would improve future decision-making? These would be a special case of formal arguments that someone should do something.
Do you mean that if someone automated the experiments, there might be an AI danger? I know of a number of arguments that these kinds of tools could pose dangers similar to AI dangers, but they are mostly about automating the reasoning.