Background on Agorics:

The idea of software agents cooperating in an open market or "agora". Described by Mark Miller and Eric Drexler here: http://e-drexler.com/d/09/00/AgoricsPapers/agoricpapers.html and depicted by Greg Egan in his novel "Diaspora"; excerpt here: http://gregegan.customer.netspace.net.au/DIASPORA/01/Orphanogenesis.html

Background on Argument: http://en.wikipedia.org/wiki/Argument

Let's start by supposing that an argument is a variety of persuasive message. If Bob trusts Alice, though, Bob could be persuaded simply by receiving a claim from Alice. That is a kind of persuasive message, but it's not an argument. If Bob is insecure, then Bob's mind could be hacked and thereby changed. However, that's not an argument either. (The "Buffer Overflow Fallacy"?)

Possibly arguments are witnesses (or "certificates"), as used in computational complexity. Alice could spend exponential time solving an instance of an NP-complete problem, then send a small witness to Bob, who can verify it in polynomial time. The witness would be an argument.

I'm not sure that's a definition, but we have an overgeneral category (persuasive messages) that is a superset of arguments, two subcategories of persuasive messages that are specifically excluded, and one subcategory that is specifically included, which seems like enough to go on with.

We know what witnesses to SAT problems look like - they look like satisfying assignments. That is, if Bob were considering a SAT problem, and Alice sent Bob a putative satisfying assignment, and Bob verified it, then Bob ought (rationally) to be convinced that the problem is satisfiable. 
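To make that concrete, here is a minimal sketch in Python of Bob's verification step (the clause encoding is just one arbitrary choice, not anything canonical): checking a putative satisfying assignment takes time linear in the size of the formula, even though finding it may have required exponential search.

```python
# Minimal sketch: Bob checks a putative satisfying assignment (the "witness")
# against a CNF formula given as a list of clauses, each a list of nonzero
# ints (positive n means variable n, negative n means its negation).

def verify_witness(clauses, assignment):
    """assignment: dict mapping variable number -> bool."""
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is unsatisfied
    return True

# (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
witness = {1: True, 2: True, 3: False}
print(verify_witness(clauses, witness))  # True: Bob should accept
```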

What do other kinds of witnesses look like? What about probabilistic computation? What if Alice and Bob may have different priors?


Bob has a collection of propositions he believes and a set of inference rules. He'd like to derive more propositions from his existing ones, but this is a computationally intensive process and he's generally forced to just run a naive breadth-first search as a low-priority background task.

Alice has been lucky enough to derive a new proposition recently, using existing propositions that she knows Bob already has. She transmits the complete derivation to Bob, who verifies it and thereby arrives at the new proposition sooner than his own background search would have gotten there.
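A toy Python sketch of this (the string encoding of implications and the restriction to modus ponens are invented for illustration): Bob checks each step of Alice's derivation in linear time instead of searching for it himself.

```python
# Propositions are strings; the only inference rule here is modus ponens.
# Alice's "argument" is an explicit list of derivation steps.

def check_derivation(known, steps):
    """known: set of propositions Bob already accepts.
    steps: list of (premise, implication, conclusion) triples, where the
    implication must literally be 'premise -> conclusion'."""
    known = set(known)
    for premise, implication, conclusion in steps:
        well_formed = implication == f"{premise} -> {conclusion}"
        if not (well_formed and premise in known and implication in known):
            return None  # reject the whole derivation on any bad step
        known.add(conclusion)
    return known  # Bob's enlarged set of accepted propositions

bobs_beliefs = {"A", "A -> B", "B -> C"}
alices_steps = [("A", "A -> B", "B"), ("B", "B -> C", "C")]
print(check_derivation(bobs_beliefs, alices_steps))  # now includes "B" and "C"
```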

It seems to me that the concept of an argument is closely related to the idea of bounded rationality. Given any way in which rationality can be bounded, there is probably a form of argument which (if applied properly) can help an agent bounded in that way achieve feats of rationality that would otherwise be beyond its bounds.

By this reasoning, there are as many forms of argument as there are forms of bounded rationality.

You may find it useful to look into "proof assistants" like Agda. In a sense, they allow you to feed in an 'argument' in the form of proof sketches, proof strategies, and hints, and then to automatically generate a checkable proof.
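For a flavour of what the resulting checkable proof looks like (a Lean 4 example rather than Agda, but the idea carries over): the proof is just a small term, and the checker verifies it against the stated theorem without trusting whoever wrote it.

```lean
-- A machine-checkable proof term: the checker verifies it against the
-- theorem statement, so no trust in the author is required.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```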

My supervisor's most cited paper is called "A logic of argumentation for reasoning under uncertainty", which was meant to be used in a multi-agent scenario. I haven't actually gone through it, but I suspect it may be relevant, and if anyone has questions I can probably get him to answer them.

That is a nice paper, thanks. I particularly liked the idea of "epistemic entrenchment ordering" - if and when we encounter the enemy (contradiction), what will we sacrifice first?

I didn't entirely understand the section on adding disjunction; it was quite brief, and it seemed like someone had an insight, or at least encountered a stumbling block and then found a workaround.

Ideally, I think you'd like to allow "arguing in the alternative": if you can derive the same conclusion from several individually consistent (though mutually inconsistent) scenarios, the support for the conclusion should be stronger than you could get by committing to any one scenario.

But that doesn't seem to be possible?

if and when we encounter the enemy (contradiction), what will we sacrifice first?

See also the notion of http://en.wikipedia.org/wiki/Paraconsistent_logic .

In this case, contradiction isn't worrisome due to explosion, because the paper uses an intuitionistic logic that doesn't explode. It's a different question - if we have evidence for A and also evidence against A, what should we believe regarding A?

Paraconsistent logics might help with that, of course.

When I was working on the model of argumentation referred to above, Tony Hunter and Philippe Besnard started to look at paraconsistent logics. But these typically end up supporting conclusions that are somewhat counterintuitive, so they moved towards the preferred solution in the argumentation community of working with consistent subsets as the basis for an argument.

In the case where we have one un-attacked argument for A and another against A, it is hard (not possible?) to find a rational way of preferring one or other outcome. Most models of argumentation allow a mechanism of undercutting, where a further argument can contradict a proposition in the support of an argument. That in turn can be attacked, and so on. So even without any notion of weighting of propositions, one is able to give a notion of preference over conclusions by preferring arguments all of whose defeaters can themselves be attacked. In cases where ordinal or cardinal weights are allowed, finer-grained preferences can be supported.

Going back to an earlier part of the discussion: it is possible to allow reinforcement between arguments if weights are supported, but you do need to account for any dependencies between the arguments (so there is no double counting). Our "probabilistic valuation" did just this (see section 5.3 of the paper Alexandros cited). In cases where you are unsure of the relationship between sources of evidence, the possibilistic approach of weighting support for a proposition by its strongest argument (using "max" to aggregate argument strengths) is appropriately cautious.
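A toy numeric illustration of that last point (the strengths are made-up numbers, not from the paper):

```python
# Toy strengths for three arguments supporting the same claim.
argument_strengths = [0.6, 0.5, 0.4]

# Possibilistic aggregation: take the strongest single argument. Cautious,
# and safe when the arguments may share underlying evidence.
possibilistic = max(argument_strengths)  # 0.6

# Naive reinforcement by addition: overstates support if the arguments are
# correlated (double counting), so it needs an explicit dependency model.
naive_sum = min(1.0, sum(argument_strengths))  # 1.0

print(possibilistic, naive_sum)
```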

Is there a story behind "Bob" and "Alice" being the default characters we use? I feel like we see them a lot here. (Nothing wrong with it, just curious if there was an interesting reason)

They're not just used on LessWrong, if that's what you're thinking. They're very commonly used in cryptography, where Alice tries to talk to Bob without Eve (the eavesdropper) intercepting the message. Essentially they're just the canonical names beginning with A and B.

Don't forget Trent, the trusted authenticator, or Mallory, the malicious attacker!

I feel like I've seen them on Less Wrong in the Sequences. Maybe it was just once in the past day and I'm paying more attention to it than is warranted.

They are on Less Wrong, in the Sequences. They're on a lot of websites, wherever anyone needs generic names for people. Using the same names every time makes it easy to remember who's "person 1" in a scenario and who's "person 2", and makes it clear that the people are hypothetical.

Makes sense. Thanks.

In terms of certifying things with communication, the formal CS studies are done under the name http://en.wikipedia.org/wiki/Interactive_proof_system . In particular, you might be interested in the notion of a http://en.wikipedia.org/wiki/Probabilistically_checkable_proof .

So you're rephrasing the concept of argumentation in terms of computational complexity and crypto. Interesting! Definitely something worth thinking about. I would only suggest that you link your use of "SAT" to the Boolean satisfiability problem so people don't confuse it with the aptitude test.

All I can add right now is that you maybe shouldn't exclude the "trusted arguer" case you mention here:

If Bob trusts Alice, though, Bob could be persuaded simply by receiving a claim from Alice.

Intuitively, being informed that the relevant expert has reached a conclusion should suffice to move your probability distribution, and instances of doing this are typically classified as arguments, e.g. "Mainstream physicists hold that ... so you should too.". (Though perhaps for your purposes here there's a reason to exclude it?)

Formally, Alice may persuade Bob of an assertion by presenting a signature for the claim, generated from the expert's private key, where the signature is verifiable against the expert's public key in polynomial time. (In everyday use, of course, Alice doesn't need to involve a public-key infrastructure; she would just show Bob a trusted, hard-to-modify book, or a hyperlink to a hard-to-hack, well-known website that speaks for the expert.)
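A minimal sketch of the formal version, assuming Python's third-party cryptography package, with an Ed25519 key pair standing in for the expert's keys and an invented claim string:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The expert signs the claim once, ahead of time.
expert_key = Ed25519PrivateKey.generate()
claim = b"Claim: the theorem holds."
signature = expert_key.sign(claim)

# Alice forwards (claim, signature); Bob verifies against the expert's public key.
expert_public_key = expert_key.public_key()
try:
    expert_public_key.verify(signature, claim)
    print("Bob accepts: the expert really did assert this claim.")
except InvalidSignature:
    print("Bob rejects: the signature does not check out.")
```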