
TimS comments on Rationality, Transhumanism, and Mental Health - Less Wrong Discussion

Post author: ialdabaoth 14 October 2012 09:11AM


Comment author: Kawoomba 14 October 2012 10:35:42AM 3 points

My name is Brent, and I'm probably insane.

Chorus: Hi, Brent.

what is someone who wishes to be rational supposed to do, when the underlying hardware simply won't cooperate?

Being aware of the biases, yet unable to adapt your reasoning to compensate, seems to be contradictory. When you say "I know I only think X because of bias Y, so my actual belief should be Z", you seem to already have solved the problem in that instance, by just switching them out (in lambda calculus: E[X:=Z]).
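For readers unfamiliar with the E[X:=Z] notation, here is a minimal, hypothetical Python sketch of naive substitution on lambda terms. The tuple representation and function name are my own illustration, not anything from the comment, and it deliberately skips capture-avoidance to keep the idea visible:

```python
# A toy sketch of the substitution E[X := Z] mentioned above.
# Terms are nested tuples: ("var", name), ("app", f, x), ("lam", name, body).
# This is naive substitution: it does NOT rename bound variables to
# avoid capture, which a real implementation would need to do.

def substitute(term, name, replacement):
    """Replace every free occurrence of variable `name` in `term`."""
    kind = term[0]
    if kind == "var":
        return replacement if term[1] == name else term
    if kind == "app":
        return ("app",
                substitute(term[1], name, replacement),
                substitute(term[2], name, replacement))
    if kind == "lam":
        bound = term[1]
        if bound == name:  # `name` is shadowed inside this lambda; stop
            return term
        return ("lam", bound, substitute(term[2], name, replacement))
    raise ValueError(f"unknown term kind: {kind!r}")

# E = (lambda y. x) x, then substitute x := z
E = ("app", ("lam", "y", ("var", "x")), ("var", "x"))
print(substitute(E, "x", ("var", "z")))
# -> ('app', ('lam', 'y', ('var', 'z')), ('var', 'z'))
```

The point of the analogy: substitution is mechanical and complete once you know which variable to replace, which is exactly why "I know my belief X comes from bias Y" feels like it should already be the fix.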

The unknown unknowns are, in my opinion, the crux of the problem: the biases you have not (yet) recognized in specific situations, regardless of how well you have trained yourself to reflect on your own reasoning. By the nature of the problem, we would not even know how much progress we have made in recognizing biases, or how much remains to be done. (Comparing beliefs across reasoning agents would help: by Aumann's agreement theorem, we can in principle eliminate - or at least notice the existence of - biases that we do not share. But two agents with the same kind of bias would still converge on the same belief and thus remain oblivious to it.*)
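The parenthetical can be made concrete with a toy Bayesian sketch. All numbers and names here are illustrative assumptions of mine, not from the comment: two agents who share the same skewed prior reach identical posteriors, so comparing their beliefs reveals nothing, while an agent with a different prior visibly disagrees:

```python
# Toy illustration: a shared bias (here, a shared skewed prior) is
# invisible to belief comparison, while an unshared one shows up
# as disagreement. Numbers are made up for the example.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule for a binary hypothesis H after one shared observation."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

evidence = (0.2, 0.8)            # the observation favors not-H four to one

a = posterior(0.9, *evidence)    # agent A: skewed prior toward H
b = posterior(0.9, *evidence)    # agent B: the same skewed prior
c = posterior(0.5, *evidence)    # agent C: a neutral prior

print(a == b)      # True - identical beliefs, shared bias undetectable
print(abs(a - c))  # a large gap - the unshared prior surfaces as disagreement
```

Agents A and B agree perfectly despite both being miscalibrated, which is the sense in which shared biases survive Aumann-style comparison.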

What do? Do the best with the hand dealt to you. E.g., if it turned out (as a cosmic joke) that Occam's Razor didn't hold for vetting ToEs (theories of everything) after all, too bad. At least we did our very best.

* I'm not certain this is a formal result, but it should hold in the majority of cases. Comments welcome.

Comment author: faul_sname 14 October 2012 07:55:08PM 1 point

"I know I only think X because of bias Y, so my actual belief should be Z", you seem to already have solved the problem in that instance, by just switching them out (in lambda calculus: E[X:=Z]).

You'd think that would be the case, but beliefs don't actually work like that. You may believe that you believe Z, but you'll still behave as if you believe X. It's possible to override belief X, but it's not as easy as simply recognizing that you should (or at least, that's been my experience; yours may vary).