
lukeprog comments on The Power of Agency - Less Wrong

57 Post author: lukeprog 07 May 2011 01:38AM




Comment author: lukeprog 07 May 2011 06:05:52AM *  2 points

A lot of body language, fashion, salesmanship, seduction, networking, influence, and persuasion are dependent entirely on heuristics and intuition.

Sure. But are you denying these skills can be vastly improved by applying agency?

You mention severe autistics. I'm not sure how much an extra dose of agency could help a severe autistic. Surely, there are people for whom an extra dose of agency won't help much. I wasn't trying to claim that agency would radically improve the capabilities of every single human ever born.

Perhaps you are reacting to the idea that heuristics are universally bad things? But of course I don't believe that. In fact, the next post in my Intuitions and Philosophy sequence is entitled 'When Intuitions are Useful.'

Comment author: JohnH 07 May 2011 06:15:38AM 1 point

And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.

This is what I am reacting to, especially when combined with what I previously quoted.

Comment author: lukeprog 07 May 2011 06:30:40AM 2 points

Oh. So... are you suggesting that a software agent can't learn body language, fashion, seduction, networking, etc.? I'm not sure what you're saying.

Comment author: JohnH 07 May 2011 06:48:30AM *  1 point

I am saying that without heuristics or intuitions, what is the basis for any desires? If an agent is a software agent without built-in heuristics and intuitions, then what are its desires, what are its needs, and why would it desire to survive, to find out more about the world, to do anything at all? Where do the axioms it uses to conclude that it can modify the world, or conclude anything, come from?

Our built-in heuristics and intuitions are what allow us to start building models of the world on which to reason in the first place, and removing any of them demonstrably makes it harder to function in normal society or to act normally. Things that appear reasonable to almost everyone seem utter nonsense, and pointless, to those who are missing some of the basic heuristics and intuitions.

If all such heuristics are taken away (e.g., if the limits of human hardware or software are removed), then what is left to build on?

Comment author: byrnema 09 May 2011 12:57:50PM 2 points

I'll jump into this conversation here, because I was going to respond with something very similar. (I thought about my response, and then was reading through the comments to see if it had already been said.)

And, imagine what an agent could do without the limits of human hardware or software.

I sometimes imagine this, and what I imagine is that without the limits (constraints) of our hardware and software, we wouldn't have any goals or desires.

Here on Less Wrong, when I assimilated the idea that there is no objective value, I expected I would spiral into a depression in which I realized nothing mattered, since all my goals and desires were finally arbitrary with no currency behind them. But that's not what happened -- I continued to care about my immediate physical comfort, interacting with people, and the well-being of the people I loved. I consider that my built-in biological hardware and software came to the rescue. There is no reason to value the things I do, but they are built into my organism. Since I believe that it was being an organism that saved me (and by this I mean the product of evolution), I do not believe the organism (and her messy goals) can be separated from me.

I feel like this experiment helped me identify which goals are built in and which are abstract and more fully 'chosen'. For example, I believe I did lose some of my values, presumably the ones that are most cerebral. (I doubt this only because, with a spiteful streak and some lingering anger about the nonexistence of objective values, I could be expressing that anger by rejecting the values that seem least immediate.) I imagine that with a heightened ability to edit my own values, I would attenuate them all, especially wherever there were inconsistencies.

These thoughts apply only to humans (that is, to me), but I also imagine (entirely baselessly) that any creature without hardware and software constraints would have a tough time valuing anything. Here I am mainly drawing on an intuition I developed that if a species were truly immortal, it would be hard pressed to think of anything to do, or any reason to do it. Maybe some values of artistry or curiosity would be left over from an evolutionary past.

Comment author: lukeprog 07 May 2011 06:53:25AM 2 points

Depends on what kind of agent you have in mind. An advanced type of artificial agent has its goals encoded in a utility function; it desires to survive because surviving helps it achieve utility. Read chapter 2 of AIMA for an intro to artificial agents.
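As a rough illustration of what "goals encoded in a utility function" means, here is a minimal sketch of a utility-based agent in roughly the AIMA chapter 2 sense. Everything in it (the toy utility function, the transition model, the action set) is a made-up assumption for illustration, not anything from the post: the agent's "desires" are nothing more than whichever actions its utility function ranks highest.

```python
# Minimal sketch of a utility-based agent (AIMA ch. 2 style).
# All specifics here are illustrative assumptions, not from the post.

def utility(state):
    """Toy utility function: the agent 'desires' states near 10."""
    return -abs(state - 10)

def result(state, action):
    """Deterministic transition model: an action shifts the state."""
    return state + action

def choose_action(state, actions):
    """Pick the action whose predicted outcome maximizes utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

state = 3
actions = [-1, 0, 1]
print(choose_action(state, actions))  # prints 1: the agent moves toward 10
```

Note that no built-in intuitions appear anywhere: the agent's "preference" to move toward 10, and by extension anything resembling a drive (including self-preservation, if surviving is what keeps utility obtainable), falls out of the utility function alone.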