You are not a Bayesian homunculus whose reasoning is 'corrupted' by cognitive biases.
You just are cognitive biases.
You just are attribute substitution heuristics, evolved intuitions, and unconscious learning. These make up the 'elephant' of your mind, and atop them rides a tiny 'deliberative thinking' module that only rarely exerts itself, and almost never according to normatively correct reasoning.
You do not have the robust character you think you have, but instead are blown about by the winds of circumstance.
You do not have much cognitive access to your motivations. You are not Aristotle's 'rational animal.' You are Gazzaniga's rationalizing animal. Most of the time, your unconscious makes a decision, and then you become consciously aware of an intention to act, and then your brain invents a rationalization for the motivations behind your actions.
If an 'agent' is something that makes choices so as to maximize the fulfillment of explicit desires, given explicit beliefs, then few humans are very 'agenty' at all. You may be agenty when you guide a piece of chocolate into your mouth, but you are not very agenty when you navigate the world on a broader scale. On the scale of days or weeks, your actions result from a kludge of evolved mechanisms that are often function-specific and maladapted to your current environment. You are an adaptation-executor, not a fitness-maximizer.
Agency is rare but powerful. Homo economicus is a myth, but imagine what one of them could do if such a thing existed: a real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness. It could learn and practice body language, fashion, salesmanship, seduction, the laws of money, and domain-specific skills and win in every sphere of life without constant defeat by human hangups. It could learn networking and influence and persuasion and have large-scale effects on societies, cultures, and nations.
Even a little bit of agenty-ness will have some lasting historical impact. Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferriss. Imagine what you could do if you were just a bit more agenty. That's what training in instrumental rationality is all about: transcending your kludginess to attain a bit more agenty-ness.
And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.
(This post was inspired by some conversations with Michael Vassar.)
Hang on, you are going to claim that my comments are obviously false, then argue over definitions, and once the definitions are agreed upon, walk away without ever stating what is obviously false?
I seriously feel that I have gotten the run-around from you rather than, at any point, a straight answer. My only possible conclusions are that you are being evasive or that you have inconsistent beliefs about the subject (or both).
Luke isn't arguing over definitions, as far as I can see; he was checking whether there was a possibility of communication.
A heuristic is a quick and dirty way of getting an approximation to what you want, when getting a more accurate estimate would not be worth the extra effort/energy/whatever it would cost. As I see it, the confusion here arises from the fact that you believe this has something to do with goals and utility functions. It doesn't. These can be arbitrary for all we care. But any intelligence, no matter its goals or utility function, will ...
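The trade-off described above, accepting a rough answer to avoid the cost of an exact one, can be sketched with a toy example. This is only an illustration (the city coordinates and function names are invented for the sketch): a greedy nearest-neighbor tour runs in O(n^2), while the exact answer requires an O(n!) search over all orderings.

```python
import itertools
import math

# Hypothetical city coordinates, purely for illustration.
cities = [(0, 0), (3, 1), (1, 4), (5, 3), (2, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Total length of the closed tour visiting cities in this order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def greedy_tour():
    """Heuristic: always visit the nearest unvisited city.
    Cheap (O(n^2)) but carries no optimality guarantee."""
    unvisited = set(range(1, len(cities)))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[order[-1]], cities[c]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def exact_tour():
    """Exhaustive search over all orderings: optimal, but O(n!).
    This is the 'extra effort' the heuristic declines to pay."""
    return min(itertools.permutations(range(len(cities))), key=tour_length)

greedy_len = tour_length(greedy_tour())
best_len = tour_length(exact_tour())
# The heuristic can never beat the exact search, but is often close,
# and the gap is the price paid for the cheaper computation.
assert best_len <= greedy_len
```

The point is independent of what the "goal" is: swap in any cost function and the same structure holds, which is why the heuristic/exact distinction does not depend on the agent's particular goals or utility function.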