
Perplexed comments on Lokhorst, 'Computational Meta-ethics' (2011) - Less Wrong Discussion

Post author: lukeprog 09 March 2011 07:13AM




Comment author: Perplexed 09 March 2011 06:03:56PM

I admit to not really understanding how different deontic logics work. I would hope it's like propositional logic in that you can't generate false statements from true premises, and that all you have to do is set up a theorem prover and let it do its thing.

Well, it is like propositional logic in that the pure logical system is not particularly useful. What you really want is an applied logical system in which you supply additional axioms as a kind of "domain knowledge". But, whenever you add axioms, you run the risk of making the system inconsistent.
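In the propositional case, consistency of a finite axiom set coincides with satisfiability, so the risk of breaking consistency by adding "domain knowledge" can be shown by brute force. A minimal sketch, where the axioms (P → Q, P, and ¬Q) are made-up illustrations rather than anything from Lokhorst's system:

```python
from itertools import product

def satisfiable(axioms, variables):
    """Brute-force check: does some truth assignment make every axiom true?"""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(axiom(env) for axiom in axioms):
            return True  # found a model, so the axiom set is consistent
    return False         # no model exists: the axioms are inconsistent

def implies(a, b):
    return (not a) or b

# A small applied system: P -> Q together with P. It has a model (P, Q both true).
base = [lambda e: implies(e["P"], e["Q"]), lambda e: e["P"]]
print(satisfiable(base, ["P", "Q"]))       # True

# Adding one more "domain knowledge" axiom, not-Q, destroys consistency.
extended = base + [lambda e: not e["Q"]]
print(satisfiable(extended, ["P", "Q"]))   # False
```

The same idea does not scale directly to modal or first-order systems, where consistency is no longer a finite truth-table check.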

Generally speaking, proofs of consistency are difficult, but proofs of relative consistency are somewhat straightforward. One way of describing what Lokhorst did is that he automated proofs and disproofs of relative consistency. It is an easy thing to do in simple propositional logic, moderately difficult in most modal logics, and more difficult in full first-order logic.
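A toy version of automating such proofs and disproofs, in the easy propositional case: enumerate the models of the base system together with each candidate axiom, and report which candidates can be added without losing consistency. The base axiom and the candidates below are hypothetical placeholders, not Lokhorst's actual formulas:

```python
from itertools import product

def models(axioms, variables):
    """Return every truth assignment that satisfies all the axioms."""
    found = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(axiom(env) for axiom in axioms):
            found.append(env)
    return found

def implies(a, b):
    return (not a) or b

VARS = ["P", "Q", "R"]
base = [lambda e: implies(e["P"], e["Q"])]   # hypothetical base system: P -> Q

candidates = {
    "P":         lambda e: e["P"],
    "P and ~Q":  lambda e: e["P"] and not e["Q"],
    "Q -> R":    lambda e: implies(e["Q"], e["R"]),
}

# Mechanically test each candidate axiom against the base system.
for name, axiom in candidates.items():
    consistent = bool(models(base + [axiom], VARS))
    print(name, "-> consistent" if consistent else "-> inconsistent")
```

Here "P" and "Q -> R" survive the check, while "P and ~Q" contradicts the base axiom and is rejected; the loop at the end is the (very) simple mechanized part.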

I would describe Lokhorst's accomplishment as simple mechanized meta-logic rather than simple mechanized meta-ethics, though it does bear a close resemblance to the traditional ethical reasoning technique of evaluating an ethical system against a set of ethical 'facts'.

ETA: As for "just setting up a theorem prover and letting it do its thing": I'm pretty sure that when (admissible) new axioms are added to a proof system, the theorem prover needs to be "re-tuned" for efficiency, particularly so in modal logics. So a certain amount of meta-thinking will need to be done by any self-improving system.