12 08 May 2012 08:38PM


Comment author: 08 May 2012 10:12:14PM 1 point

The thing being proved is not the action, it's an undefined propositional constant symbol. The action responds to the fact that the propositional symbol gets proved. See the example at the end of the second section, and keep track of the distinction between A and B.

Comment author: 08 May 2012 10:45:19PM 0 points

Thanks, I understand now. The implications:

Prf(B) => A=1
Prf(~B) => A=2

misled me. I thought they were another set of on-the-fly axioms, but if they are decision rules, it means something like:

```
A():
    search for proofs
    if a proof of B is found, return 1
    if a proof of ~B is found, return 2
```

And then there are no LĂśbian issues. Cool! This agent can prove A=Ai without any problems. This should work great for ASP.
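The decision rule above can be sketched in runnable form. This is only an illustration of the structure, not the actual agent: `proves` stands in for a real bounded proof search over the theory, which is stubbed here so the example executes.

```python
# Hypothetical sketch of the decision rule: the agent acts on whichever
# of B, ~B it finds a proof of first. `proves(phi)` is a stand-in for a
# bounded proof search; here it is supplied as a plain predicate.

def make_agent(proves):
    """Build an agent A() from a (stubbed) provability predicate."""
    def A():
        if proves("B"):    # found a proof of B
            return 1
        if proves("~B"):   # found a proof of ~B
            return 2
        return None        # no proof found within the search bound
    return A

# Stub prover: pretend the theory proves B but not ~B.
agent = make_agent(lambda phi: phi == "B")
print(agent())  # -> 1
```

Because the thing proved is the propositional symbol B rather than the statement "A()=1" itself, the rule never asks the agent to prove facts about its own output, which is why the usual diagonalization pressure is absent.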

Comment author: 08 May 2012 10:49:44PM *  0 points

I thought they were another set of on-the-fly axioms, but if they are decision rules, this means something like:

They are both, since triggering the moral axiom for B requires having those implications in the consequentialist theory (they are part of the definition of A, and A is part of the definition of U, so the theory knows them).

It does seem that a consequentialist theory could prove what its agent's actions are, if we somehow modify the axiom schema so that it doesn't explode as a result of proving the maximality of U following from the statements (like B) that trigger those actions. At least the old reasons why this couldn't be done seem to be gone, even if there are now new reasons why it currently can't be done.