Armok_GoB comments on If epistemic and instrumental rationality strongly conflict - Less Wrong

5 [deleted] 10 May 2012 01:46PM


Comment author: Eliezer_Yudkowsky 10 May 2012 01:45:53PM 18 points [-]

I agree that the conclusion follows from the premises, but nonetheless it's hypothetical scenarios like this which cause people to distrust hypothetical scenarios. There is no Omega, and you can't magically stop believing in red pandas; when people rationalize the utility of known falsehoods, what happens in their mind is complicated, divorces endorsement from modeling, and bears no resemblance to what they believe they're doing to themselves. Anti-epistemology is a huge actual danger of actual life,

Comment author: Armok_GoB 11 May 2012 07:18:23PM 0 points [-]

An idea I got just now and haven't thought about for five minutes or looked for flaws in yet, but am stating before I forget it:

Unless Omega refers to human-specific brain structures, shouldn't UDT automatically "un-update" on the existence of red pandas in this case?
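[A toy sketch of the distinction being gestured at, with invented names and a made-up two-world setup: a Bayesian agent's beliefs move when it observes (or is forced to "believe") something, while a UDT-style agent scores whole policies against its prior, so a coerced change to its in-world belief register never touches the distribution it optimizes over.]

```python
# Hypothetical illustration: contrast a Bayesian updater with a UDT-style
# policy evaluator. All names and the two-world model are invented here.

from fractions import Fraction

def bayesian_posterior(prior, likelihood, observation):
    """Standard Bayes update: the agent's credences change with the observation."""
    unnorm = {w: prior[w] * likelihood[w][observation] for w in prior}
    total = sum(unnorm.values())
    return {w: p / total for w, p in unnorm.items()}

def udt_value(policy, prior, utility):
    """A UDT-style agent evaluates a whole policy against the *prior*.
    Forcing its in-world 'belief' to something false does not change
    the prior it optimizes over -- in that sense it 'un-updates'."""
    return sum(prior[w] * utility(w, policy(w)) for w in prior)

# Two epistemic possibilities, weighted by a prior.
prior = {"red_pandas_exist": Fraction(3, 4), "no_red_pandas": Fraction(1, 4)}

# Likelihood of seeing a (misleading) observation in each world.
likelihood = {"red_pandas_exist": {"obs": Fraction(1, 2)},
              "no_red_pandas": {"obs": Fraction(1, 1)}}

posterior = bayesian_posterior(prior, likelihood, "obs")

# The UDT-style score of "always act as if red pandas exist" is computed
# from the prior and is unaffected by the observation above.
utility = lambda world, act: 1 if act == world else 0
policy = lambda world: "red_pandas_exist"
score = udt_value(policy, prior, utility)
```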

Also, through some unknown intuitive pathway, the unsolvedness of logical uncertainty is brought up as an association.