Eliezer_Yudkowsky comments on Less Wrong Rationality and Mainstream Philosophy - Less Wrong

106 points · Post author: lukeprog · 20 March 2011 08:28PM

Comment author: [deleted] 20 March 2011 11:32:23PM 6 points

Thanks so much. I didn't know about Quine, and from what you've quoted it seems quite clearly in the same vein as LessWrong.

Also, out of curiosity, do you know if anything's been written about whether an agent (natural or artificial) needs goals in order to learn? Obviously humans and animals have values, at least in the sense of reward and punishment or positive and negative outcomes -- does anyone think that this is of practical importance for building processes that can form accurate beliefs about the world?

Comment author: Eliezer_Yudkowsky 21 March 2011 12:01:46AM 8 points

What you care about determines what your explorations learn about. An AI that didn't care about anything you thought was important, even instrumentally (it had no use for energy, say) probably wouldn't learn anything you thought was important. A probability-updater without goals and without other forces choosing among possible explorations would just study dust specks.
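A toy version of that last point, as a rough sketch (the world model, payoffs, and names below are invented for the example): an agent that ranks possible observations by expected usefulness toward its goal assigns zero value to observing a variable its utility function ignores.

```python
import itertools

# Toy world: two binary hidden variables with independent priors.
# The agent's utility depends only on `energy`; `dust_specks` is irrelevant to it.
priors = {"energy": 0.5, "dust_specks": 0.5}   # P(variable = 1)

actions = ["gather_energy", "do_nothing"]

def utility(action, energy, dust_specks):
    # Only energy matters to this agent's goals; dust specks never enter the payoff.
    if action == "gather_energy":
        return 10 if energy else -2    # gathering pays off only if energy is there
    return 0

def expected_utility(action, beliefs):
    """Expected utility of an action under the agent's current beliefs."""
    total = 0.0
    for energy, dust in itertools.product([0, 1], repeat=2):
        p = ((beliefs["energy"] if energy else 1 - beliefs["energy"])
             * (beliefs["dust_specks"] if dust else 1 - beliefs["dust_specks"]))
        total += p * utility(action, energy, dust)
    return total

def best_eu(beliefs):
    return max(expected_utility(a, beliefs) for a in actions)

def value_of_observing(var, beliefs):
    """Expected gain in achievable utility from learning `var` before acting."""
    eu_after = 0.0
    for outcome, p in ((1, beliefs[var]), (0, 1 - beliefs[var])):
        updated = dict(beliefs)
        updated[var] = float(outcome)
        eu_after += p * best_eu(updated)
    return eu_after - best_eu(beliefs)

for var in priors:
    print(var, value_of_observing(var, priors))
# energy      -> 1.0  (worth investigating before acting)
# dust_specks -> 0.0  (the goal gives no reason to study dust specks)
```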

Comment author: [deleted] 21 March 2011 12:23:20AM 2 points

That was my intuition. Just wanted to know if there's more out there.

Comment author: Eliezer_Yudkowsky 21 March 2011 12:36:36AM 5 points

What, you mean in mainstream philosophy? I don't think mainstream philosophers think that way, even Quineans. The best ones would say gravely, "Yes, goals are important" and then have a big debate with the rest of the field about whether goals are important or not. Luke is welcome to prove me wrong about that.

Comment author: utilitymonster 21 March 2011 11:09:01AM 4 points

I actually don't think this is right. The last time I asked a philosopher about this, they pointed me to an article (by I. J. Good, I think) on how to choose the most valuable experiment, given your goals, using decision theory.
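For what it's worth, the standard decision-theoretic way to rank experiments (which may or may not match the article being recalled here) is by expected value of information: choose the experiment e, with possible outcomes o, that maximizes

$$ V(e) \;=\; \sum_{o} P(o \mid e)\,\max_{a}\,\mathbb{E}[U \mid a,\, o] \;-\; \max_{a}\,\mathbb{E}[U \mid a] \;-\; \mathrm{cost}(e), $$

i.e. the expected gain in achievable utility from seeing the outcome before acting, net of the experiment's cost.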

Comment author: lukeprog 21 March 2011 01:18:21AM 4 points

Yes, that's about right.

AI research is where to look with regard to your question, SarahC. Start with chapter 2 of AI: A Modern Approach, and then the chapters with 'decisions' in their titles.

Comment author: [deleted] 21 March 2011 01:19:25AM 1 point

Thank you!