Vladimir_Nesov comments on Consequentialist Formal Systems - Less Wrong

12 Post author: Vladimir_Nesov 08 May 2012 08:38PM


Comment author: Vladimir_Nesov 08 May 2012 09:52:37PM * 0 points

How do you get to Prf(S)=>S? It works with something more like [Prf(S)=>Q]=>S for non-obvious Q. This setup does have a problem with exploding after figuring out how to maximize utility, but it doesn't seem to explode prior to that point (and I hope there might be a less perfectionist variant that gradually improves the situation without exploding at any point, instead of making a single leap to maximal utility value).

Comment author: gRR 08 May 2012 10:06:23PM 0 points

No, I mean, the agent actually performs the action once it has proved it, right? So Prf(A=A1) implies that A will indeed be A1. Assuming the agent has its own source code, it will know that.

Comment author: Vladimir_Nesov 08 May 2012 10:12:14PM 1 point

The thing being proved is not the action, it's an undefined propositional constant symbol. The action responds to the fact that the propositional symbol gets proved. See the example at the end of the second section, and keep track of the distinction between A and B.

Comment author: gRR 08 May 2012 10:45:19PM 0 points

Thanks, I understand now. The implications:

Prf(B) => A=1
Prf(~B) => A=2

misled me. I thought they were another set of on-the-fly axioms, but if they are decision rules, this means something like:

A():
    generate various proofs
    if a proof of B is found, return 1
    if a proof of ~B is found, return 2

And then there are no Löbean issues. Cool! This agent can prove A=Ai without any problems. This should work great for ASP.
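The decision rule above can be sketched concretely. This is a hypothetical illustration, not code from the post: `proves` stands in for a bounded proof search over the consequentialist theory (here stubbed out to return a result at a fixed step), and the return values encode the implications Prf(B) => A=1 and Prf(~B) => A=2.

```python
def proves(statement, n):
    """Stand-in for bounded proof search: does the theory prove
    `statement` within n steps? Stubbed here to 'prove' B at step 3."""
    return statement == "B" and n >= 3

def A(max_steps=10):
    """The agent: enumerate proofs up to a bound and act on whichever
    of B, ~B is proved first."""
    for n in range(max_steps):
        if proves("B", n):
            return 1   # Prf(B) => A = 1
        if proves("~B", n):
            return 2   # Prf(~B) => A = 2
    return None        # no proof found within the bound

print(A())  # with this stub, the agent proves B and returns 1
```

The point of the structure is that the agent never needs to prove anything about its own output A; it only searches for proofs of the separate propositional symbol B, which is what sidesteps the Löbean obstacle.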

Comment author: Vladimir_Nesov 08 May 2012 10:49:44PM * 0 points

I thought they were another set of on-the-fly axioms, but if they are decision rules, this means something like:

They are both, since triggering the moral axiom for B requires having those implications in the consequentialist theory (they are part of the definition of A, and A is part of the definition of U, so the theory knows them).

It does seem that a consequentialist theory could prove what its agents' actions are, if we somehow modify the axiom schema so that it doesn't explode as a result of proving that maximality of U follows from the statements (like B) that trigger those actions. At least the old reasons why this couldn't be done seem to be gone, even if there are now new reasons why it currently can't be done.