It seems to me that the concept of an argument is closely related to the idea of bounded rationality. Given any way in which rationality can be bounded, there is probably a form of argument which (if applied properly) can assist an agent bounded in that particular way to achieve feats of rationality that would otherwise be beyond its bounds.
By this reasoning, there are as many forms of argument as there are forms of bounded rationality.
You may find it useful to look into "proof assistants" like Agda. In a sense, they allow you to feed in an 'argument' in the form of proof sketches, proof strategies, and hints, and then to automatically generate a checkable proof.
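To make the pattern concrete, here is a toy proof in Lean (shown here as a stand-in for an Agda-style proof assistant; the Agda version is analogous). The user supplies a short argument ("induct on n, rewrite with the induction hypothesis") and the kernel mechanically re-checks every step:

```lean
-- The human's "argument" is the tactic script; the kernel verifies it.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

The division of labor mirrors the witness idea discussed below: producing the proof sketch may take insight or search, but checking it is cheap and mechanical.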
Background on Agorics:
The idea of software agents cooperating in an open market or "agora". Described by Mark Miller and Eric Drexler here: http://e-drexler.com/d/09/00/AgoricsPapers/agoricpapers.html Depicted by Greg Egan in his novel "Diaspora", excerpt here: http://gregegan.customer.netspace.net.au/DIASPORA/01/Orphanogenesis.html
Background on Argument: http://en.wikipedia.org/wiki/Argument
Let's start by supposing that an argument is a variety of persuasive message. If Bob trusts Alice, though, Bob could be persuaded simply by receiving a bare claim from Alice. That is a kind of persuasive message, but it's not an argument. If Bob's mind is insecure, it could be hacked and thereby changed. However, that's not an argument either. (The "Buffer Overflow Fallacy"?)
Possibly arguments are witnesses (or "certificates"), as used in computational complexity. Alice could spend exponential time to solve an instance of an NP-complete problem, then send a small witness to Bob, who can then spend polynomial time to verify it. The witness would be an argument.
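A minimal sketch of this asymmetry, using Subset Sum (NP-complete) as the example problem. The function names and instance are illustrative, not from any particular source: Alice brute-forces over all subsets (exponential time), while Bob only re-checks the certificate (polynomial time).

```python
from itertools import combinations

def alice_find_witness(numbers, target):
    """Exponential-time search over all subsets for one summing to target."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset  # the witness
    return None

def bob_verify(numbers, target, witness):
    """Polynomial-time check that the witness is a legitimate certificate:
    every element must come from the instance (without reuse) and sum to target."""
    pool = list(numbers)
    for x in witness:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(witness) == target

numbers = [3, 34, 4, 12, 5, 2]
w = alice_find_witness(numbers, 9)   # e.g. (4, 5)
print(bob_verify(numbers, 9, w))     # True
```

Note that Bob never needs to trust Alice or repeat her search; the witness carries all the persuasive force by itself.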
I'm not sure that's a definition, but we have an overgeneral category (persuasive messages), that is, a superset of arguments; two subcategories of persuasive messages that are specifically excluded; and one subcategory that is specifically included. That seems like enough to go on with.
We know what witnesses to SAT problems look like - they look like satisfying assignments. That is, if Bob were considering a SAT problem, and Alice sent Bob a putative satisfying assignment, and Bob verified it, then Bob ought (rationally) to be convinced that the problem is satisfiable.
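Bob's verification step is straightforward to sketch. Assuming a DIMACS-style encoding (clauses as lists of nonzero integers, where 3 means x3 and -3 means NOT x3; the encoding choice is mine, not from the original), the check is linear in the size of the formula:

```python
def verify_sat(clauses, assignment):
    """Return True iff `assignment` (variable index -> bool) satisfies
    every clause, i.e. makes at least one literal in each clause true."""
    def lit_true(lit):
        val = assignment[abs(lit)]
        return val if lit > 0 else not val
    return all(any(lit_true(lit) for lit in clause) for clause in clauses)

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
assignment = {1: True, 2: True, 3: False}
print(verify_sat(clauses, assignment))  # True
```

If the check passes, Bob is rationally compelled: a verified satisfying assignment leaves no room for doubt about satisfiability, whatever Bob's prior opinion of Alice.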
What do other kinds of witnesses look like? What about probabilistic computation? What if Alice and Bob may have different priors?
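The differing-priors case already breaks the clean SAT picture: the same evidence can rationally move two agents to very different posteriors. A small sketch with Bayes' rule for a binary hypothesis (the numbers are hypothetical, chosen only for illustration):

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) for a binary hypothesis H given evidence E."""
    num = p_e_given_h * prior_h
    denom = num + p_e_given_not_h * (1 - prior_h)
    return num / denom

evidence = (0.8, 0.1)  # P(E|H), P(E|not H): hypothetical likelihoods
alice = posterior(0.5, *evidence)   # Alice starts at even odds
bob = posterior(0.01, *evidence)    # Bob starts deeply skeptical
print(round(alice, 3), round(bob, 3))  # 0.889 0.075
```

Both agents update correctly on the same "witness", yet Alice ends up convinced and Bob does not; a persuasive message aimed at Bob would apparently need to address his prior, not just supply evidence.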