gRR comments on A model of UDT without proof limits - Less Wrong

Post author: cousin_it 20 March 2012 07:41PM




Comment author: gRR 20 March 2012 09:07:56PM 2 points

I'm not sure why you need a fast-growing function at all. It seems L+1 would work just as well. Proofs longer than L are not considered in Step 1, and if Step 2 ensures that no spurious proof is found for lengths <= L, then the agent should be fine. Shouldn't it?
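To make the question concrete, here is a toy sketch of the two-step scheme under discussion. It is not the post's actual construction: real "moral arguments" are formal proofs in PA about agent() and universe(), whereas here they are just hypothetical (action, utility, proof_length) tuples, and the enumerator is a fixed list.

```python
def agent(proofs, L, f):
    """Toy model of the two-step agent.

    `proofs` is a stand-in for a proof enumerator: a list of
    (action, utility, proof_length) tuples, each representing a
    provable moral argument "agent()=action implies utility=u".
    """
    # Step 1: consider only moral arguments with proof length <= L.
    short = [(a, u, n) for (a, u, n) in proofs if n <= L]
    if not short:
        return None
    # Pick the argument promising the highest utility.
    best = max(short, key=lambda t: t[1])
    # Step 2: demand that any conflicting argument about the same
    # action needs a proof of length at least f(L); otherwise the
    # situation is ambiguous and the agent refuses to act.
    for (a, u, n) in proofs:
        if a == best[0] and u != best[1] and n < f(L):
            return None
    return best[0]
```

With f(L)=L+1, a conflicting argument only slightly longer than L (say L+10) passes the Step 2 check unnoticed; a fast-growing f would flag it:

```python
proofs = [("A", 10, 50), ("A", -100, 60)]
agent(proofs, 55, lambda L: L + 1)    # acts on the length-50 argument
agent(proofs, 55, lambda L: 10 ** L)  # refuses: the gap is too small
```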

Comment author: Vladimir_Nesov 20 March 2012 09:30:36PM 1 point

The problem with L+1 is that it permits there to be two different moral arguments, say evaluating the hypothetical action A==1 as having different utilities, that have approximately the same proof length (L and L+10, say). If that is the case, it's not clear in what sense the shorter one is any better (more "natural", less "spurious") than the longer one, since the difference in proof length is so insignificant.

On the other hand, if we have the shortest moral argument of length less than L, and the next moral argument is of length at least 10^L, then the first moral argument looks clearly much better. An important point is that Step 2 is not just checking that the proofs of the longer moral arguments are much longer; it actually makes them so. That would not be the case if Step 2 were absent.

Comment author: Wei_Dai 20 March 2012 09:59:54PM 4 points

Having f(L)=L+1 would be sufficient to protect against any "moral arguments" that involve first proving agent()!=a for some a. Do we have any examples of "spurious" proofs that are not of this form?
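For concreteness, the family of spurious proofs being referred to rests on nothing more than vacuous implication (a standard propositional step, sketched here, not quoted from the post): once the theory proves agent() != a, every conditional about the action a follows with only a constant overhead in proof length.

```latex
% If the theory proves agent() != a, then for any utility value u
% it also proves the "moral argument" about a, vacuously:
\neg\big(\mathrm{agent}() = a\big)
  \;\vdash\;
  \big(\mathrm{agent}() = a\big) \rightarrow \big(\mathrm{universe}() = u\big)
```

Since such a proof is barely longer than the proof of agent() != a itself, requiring a gap of even one step (f(L)=L+1) rules out exactly this family.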

Comment author: cousin_it 20 March 2012 09:18:10PM 1 point

I agree that f(L)=L+1 should work okay in practice. Having f(L) grow fast is more for psychological reassurance and making it easier to prove things about the agent.