army1987 comments on The Useful Idea of Truth - Less Wrong

77 Post author: Eliezer_Yudkowsky 02 October 2012 06:16PM


Comment author: [deleted] 12 December 2012 12:08:25PM 0 points [-]

in a few (really annoying!) cases

I think that if you're human, these cases are way more common than ISTM certain people realize. So in such discussions I'd always make clear if I'm talking about actual humans, about future AIs, or about idealized Cartesian agents whose cognitive algorithms cannot affect the world in any way, shape or form until they act on them.

Comment author: Normal_Anomaly 14 December 2012 12:49:15AM 0 points [-]

Can I have a couple of examples other than the placebo effect? Preferably only one of which is in the class "confidence that something will work makes you better at it"? Partly because it's useful to ask for examples, partly because it sounds useful to know about situations like this.

Comment author: [deleted] 15 December 2012 12:17:17AM *  0 points [-]

Actually, pretty much all I had in mind was in the class "confidence that something will work makes you better at it" -- but looking up "Self-fulfilling prophecy" on Wikipedia reminded me of the observer-expectancy effect (incl. the Clever Hans effect and similar). Some of Bostrom's information hazards are also relevant.