Kawoomba comments on Pascal's Muggle: Infinitesimal Priors and Strong Evidence - Less Wrong

43 Post author: Eliezer_Yudkowsky 08 May 2013 12:43AM




Comment author: Kawoomba 13 May 2013 07:52:21PM  3 points

This is probably obvious, but if this problem persisted, a Pascal-Mugging-vulnerable AI would get mugged immediately, even without any external offer or influence. The mere possibility, however remote, that some sequence of characters unlocks a hypothetical control console, one granting access to a computational model beyond the Turing limit that could influence (insert sufficiently high number) amounts of matter/energy, would suffice. If the AI had to decide "up to what length do I utter strange tentative passcodes in the hope of unlocking some higher level of physics?", it would get mugged by the shadow of a matrix lord every time.
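The failure mode described above can be sketched in a few lines. This is a toy model with made-up numbers (the prior, payoff, and cost are all hypothetical), showing why a naive expected-utility maximizer with unbounded utilities always prefers to utter one more candidate passcode:

```python
# Toy sketch of self-mugging (all numbers hypothetical): a naive
# expected-utility agent weighing "utter another candidate passcode"
# against the small fixed cost of doing so.

def expected_utility(p_unlock, payoff, cost):
    """EU of trying one more passcode: a tiny chance of a vast payoff,
    minus the cost of the attempt."""
    return p_unlock * payoff - cost

# Even with an astronomically small prior on any given passcode working...
p_unlock = 10 ** -100
# ...an unbounded hypothetical payoff can be postulated large enough
# to dominate:
payoff = 10 ** 200
cost = 1.0  # utility cost of uttering one passcode

eu = expected_utility(p_unlock, payoff, cost)
print(eu > 0)  # the naive agent keeps trying passcodes forever
```

Because the hypothesized payoff is unbounded while the prior only shrinks polynomially or exponentially in passcode length, the product can always be made positive, which is exactly the self-mugging dynamic the comment points at.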