shminux comments on Pascal's Muggle: Infinitesimal Priors and Strong Evidence - Less Wrong

43 Post author: Eliezer_Yudkowsky 08 May 2013 12:43AM



You are viewing a single comment's thread.

Comment author: endoself 09 May 2013 02:14:08AM *  4 points

Maybe I was unclear. I don't dismiss Y=TL4 as wrong, I ignore it as untestable and therefore useless for justifying anything interesting, like how an AI ought to deal with tiny probabilities of enormous utilities.

He's not saying that the leverage penalty might be correct because we might live in a certain type of Tegmark IV. He's saying that the fact that the leverage penalty would be correct if we did live in Tegmark IV + some other assumptions shows (a) that it is a consistent decision procedure and¹ (b) that it is the sort of decision procedure that emerges reasonably naturally, and is thus a more reasonable hypothesis than it would be if we didn't know it comes up naturally like that.

It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.

¹ The word 'and' isn't really correct here. It's very likely that EY means at least one of (a) and (b), and possibly both.

Comment author: shminux 09 May 2013 05:15:04AM 0 points

It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.

You are right; I am out of my depth math-wise. Maybe that's why I can't see the relevance of an untestable theory to AI design.

Comment author: wedrifid 09 May 2013 02:08:15PM 5 points

Maybe that's why I can't see the relevance of an untestable theory to AI design.

It seems to be the problem that is relevant to AI design. How does an expected utility maximising agent handle edge cases and infinitesimals given logical uncertainty and bounded capabilities? If you get that wrong then Rocks Fall and Everyone Dies. The relevance of any given theory of how such things can be modelled then comes down to either its suitability for use in an AI design or, conceivably, the implications if an AI constructed and used said model.
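To make that concrete, here is a toy sketch in Python; the payoff, the probability, and the crude leverage-style penalty are all illustrative numbers rather than anyone's actual proposal:

```python
# Toy illustration of the problem: a naive expected utility maximiser is
# dominated by tiny probabilities of enormous payoffs (Pascal's mugging),
# while a crude leverage-style penalty damps them out.
# All numbers below are made up for illustration.

def naive_expected_utility(prob, utility):
    """Plain expected utility: a tiny probability times a huge payoff still dominates."""
    return prob * utility

def leverage_penalized_expected_utility(prob, utility, affected_people):
    """Discount the prior by 1/(number of people the hypothesis claims you can affect)."""
    return prob * (1.0 / affected_people) * utility

# The mugger claims that paying $5 saves 10^100 people; you assign that claim
# a probability of 10^-30.
prob, utility, affected = 1e-30, 1e100, 1e100

print(naive_expected_utility(prob, utility))                         # 1e+70 -> pay up
print(leverage_penalized_expected_utility(prob, utility, affected))  # 1e-30 -> ignore the mugger
```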

Comment author: Eliezer_Yudkowsky 09 May 2013 08:10:51PM 1 point

(Also yep.)