Eliezer_Yudkowsky comments on Less Wrong Rationality and Mainstream Philosophy - Less Wrong
Thanks so much. I didn't know about Quine, and from what you've quoted it seems quite clearly in the same vein as LessWrong.
Also, out of curiosity, do you know if anything's been written about whether an agent (natural or artificial) needs goals in order to learn? Obviously humans and animals have values, at least in the sense of reward and punishment or positive and negative outcomes -- does anyone think that this is of practical importance for building processes that can form accurate beliefs about the world?
What you care about determines what your explorations learn about. An AI that didn't care about anything you thought was important, even instrumentally (it had no use for energy, say) probably wouldn't learn anything you thought was important. A probability-updater without goals and without other forces choosing among possible explorations would just study dust specks.
That was my intuition. Just wanted to know if there's more out there.
What, you mean in mainstream philosophy? I don't think mainstream philosophers think that way, even Quineans. The best ones would say gravely, "Yes, goals are important" and then have a big debate with the rest of the field about whether goals are important or not. Luke is welcome to prove me wrong about that.
Actually, I don't think this is right. Last time I asked a philosopher about this, they pointed to an article by someone (I.J. Good, I think) about how to choose the most valuable experiment (given your goals), using decision theory.
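For concreteness, the decision-theoretic idea there is usually cast as value of information: an experiment is worth running to the extent that its possible outcomes would change which action looks best. Here is a toy sketch with made-up numbers (purely illustrative, not anything from Good's paper), comparing acting on the prior against acting after each possible experimental outcome:

def expected_utility(belief, utilities):
    """Best achievable expected utility when P(hypothesis A) = belief."""
    return max(belief * u_a + (1 - belief) * u_b for u_a, u_b in utilities)

def value_of_experiment(prior, utilities, likelihoods):
    """Expected gain from an experiment with outcome likelihoods
    (P(outcome | A), P(outcome | B)), relative to acting on the prior."""
    baseline = expected_utility(prior, utilities)
    gain = 0.0
    for p_out_a, p_out_b in likelihoods:
        p_outcome = prior * p_out_a + (1 - prior) * p_out_b
        if p_outcome == 0:
            continue
        posterior = prior * p_out_a / p_outcome  # Bayes' rule
        gain += p_outcome * expected_utility(posterior, utilities)
    return gain - baseline

# Utilities of each action under hypotheses A and B -- these encode the goals.
utilities = [(10, -5), (0, 0)]
prior = 0.3

informative = [(0.9, 0.1), (0.1, 0.9)]    # outcomes that discriminate A from B
uninformative = [(0.5, 0.5), (0.5, 0.5)]  # outcomes that tell you nothing

print(value_of_experiment(prior, utilities, informative))    # positive: worth running
print(value_of_experiment(prior, utilities, uninformative))  # zero: no value

The point relevant to SarahC's question is that the ranking of experiments depends entirely on the utilities; change the goals and a different experiment becomes the most valuable one.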
Yes, that's about right.
AI research is where to look with regard to your question, SarahC. Start with chapter 2 of AI: A Modern Approach, along with the chapters that have 'decisions' in the title.
Thank you!