JohnH comments on The Power of Agency - Less Wrong

Post author: lukeprog 07 May 2011 01:38AM

Comment author: JohnH 07 May 2011 04:58:33AM 10 points

real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness. It could learn body language and fashion and salesmanship and seduction and the laws of money and win in every sphere of life without constant defeat by human hangups. It could learn networking and influence and persuasion and have large-scale effects on societies, cultures, and nations.

A lot of body language, fashion, salesmanship, seduction, networking, influence, and persuasion are dependent entirely on heuristics and intuition.

In the real world, those who have less access to these traits (people on the autism spectrum, for example) tend to have a much harder time learning how to accomplish any of the named tasks. For most of those tasks, they also have a much harder time seeing why one would wish to accomplish them.

Extrapolating to a being that has absolutely no such intuitions or heuristics, one is left with the question of what it would actually wish to do. Perhaps some of the severely autistic really are like this and never learn language: it never occurs to them that language could be useful, so they have no desire to learn it.

With no built-in programming to determine what is and is not to be desired, and no built-in programming about how the world works or does not work, how is one to determine what should be desirable or how to accomplish what is desired? As far as I can determine, an agent without human hardware or software may be left spending its time attempting to figure out how anything works and what, if anything, it wants to do.

It may not even attempt to figure anything out at all if curiosity is not rational but a built-in heuristic. Perhaps someone has already managed to build a rational AI but neglected to give it built-in desires and/or built-in curiosity, and it did nothing, so it was assumed not to have worked.

Isn't even the desire to survive a heuristic?
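
To make that worry concrete, here is a minimal sketch (purely illustrative; the toy actions, world model, and function names are my own invention): an agent that picks actions by maximizing a utility function, but whose utility function encodes no built-in desires at all.

```python
# Illustrative sketch only: a bare decision loop with no built-in desires.
# The actions and world model below are hypothetical stand-ins.

def choose_action(actions, predict_outcome, utility):
    """Pick the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

actions = ["learn language", "explore the world", "do nothing"]
predict_outcome = lambda action: action  # trivial stand-in world model

# With no built-in desires, the utility function is vacuous: every outcome
# scores the same, so the maximization gives the agent no reason to prefer
# any action over doing nothing.
no_desires = lambda outcome: 0.0

print(choose_action(actions, predict_outcome, no_desires))
# Prints whichever action happens to be listed first; nothing is preferred.
```

Whatever ranks the actions has to come from somewhere outside the bare maximization step.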

Comment author: lukeprog 07 May 2011 06:05:52AM 2 points

A lot of body language, fashion, salesmanship, seduction, networking, influence, and persuasion are dependent entirely on heuristics and intuition.

Sure. But are you denying these skills can be vastly improved by applying agency?

You mention severe autistics. I'm not sure how much an extra dose of agency could help a severe autistic. Surely, there are people for whom an extra dose of agency won't help much. I wasn't trying to claim that agency would radically improve the capabilities of every single human ever born.

Perhaps you are reacting to the idea that heuristics are universally bad things? But of course I don't believe that. In fact, the next post in my Intuitions and Philosophy sequence is entitled 'When Intuitions are Useful.'

Comment author: JohnH 07 May 2011 06:15:38AM 1 point

And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.

This is what I am reacting to, especially when combined with what I previously quoted.

Comment author: lukeprog 07 May 2011 06:30:40AM 2 points

Oh. So... are you suggesting that a software agent can't learn body language, fashion, seduction, networking, etc.? I'm not sure what you're saying.

Comment author: JohnH 07 May 2011 06:48:30AM 1 point

I am saying that without heuristics or intuitions, what is the basis for any desires? If an agent is a software agent without built-in heuristics and intuitions, then what are its desires, what are its needs, and why would it desire to survive, to find out more about the world, or to do anything at all? Where do the axioms come from that let it think it can modify the world, or conclude anything?

Our built-in heuristics and intuitions are what allow us to start building models of the world on which to reason in the first place, and removing any of them demonstrably makes it harder to function in normal society or to act normally. Things that appear reasonable to almost everyone are utter nonsense, and seem pointless, to those who are missing some of the basic heuristics and intuitions.

If all such heuristics are taken away (i.e. no limits of human hardware or software), then what is left to build on?

Comment author: byrnema 09 May 2011 12:57:50PM 2 points

I'll jump into the conversation here, because I was going to respond with something very similar. (I thought about my response, and then was reading through the comments to see if it had already been said.)

And, imagine what an agent could do without the limits of human hardware or software.

I sometimes imagine this, and what I imagine is that without the limits (constraints) of our hardware and software, we wouldn't have any goals or desires.

Here on Less Wrong, when I assimilated the idea that there is no objective value, I expected I would spiral into a depression in which I realized nothing mattered, since all my goals and desires were finally arbitrary with no currency behind them. But that's not what happened -- I continued to care about my immediate physical comfort, interacting with people, and the well-being of the people I loved. I consider that my built-in biological hardware and software came to the rescue. There is no reason to value the things I do, but they are built into my organism. Since I believe that it was being an organism that saved me (and by this I mean the product of evolution), I do not believe the organism (and her messy goals) can be separated from me.

I feel like this experiment helped me identify which goals are built in and which are abstract and more fully 'chosen'. For example, I believe I did lose some of my values: I guess the ones that are most cerebral. (I only doubt this because, with a spiteful streak and some lingering anger about the nonexistence of objective values, I could be expressing this anger by rejecting the values that seem least immediate.) I imagine that with a heightened ability to edit my own values, I would attenuate them all, especially wherever there were inconsistencies.

These thoughts apply to humans only (that is, me), but I also imagine (entirely baselessly) that any creature without hardware and software constraints would have a tough time valuing anything. For this, I am mainly drawing on an intuition I developed that if a species were truly immortal, it would be hard pressed to think of anything to do, or any reason to do it. Maybe some values of artistry or curiosity could be left over from an evolutionary past.

Comment author: lukeprog 07 May 2011 06:53:25AM 2 points

It depends on what kind of agent you have in mind. An advanced type of artificial agent has its goals encoded in a utility function. It desires to survive because surviving helps it achieve utility. Read chapter 2 of AIMA for an intro to artificial agents.
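
For concreteness, here is a minimal sketch of such a utility-based agent (in the spirit of AIMA chapter 2, not code from it; the toy world model, probabilities, and names are assumptions for illustration). Survival never appears in the utility function, yet the agent avoids the action that risks destroying it, because a destroyed agent collects no further utility.

```python
# Illustrative sketch in the spirit of AIMA ch. 2; the world model and
# numbers below are made up for this example.

def expected_utility(action, state, transition, utility):
    """Expected utility of an action: utility summed over possible next states."""
    return sum(p * utility(next_state) for next_state, p in transition(state, action))

def utility_based_agent(state, actions, transition, utility):
    """Choose the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, state, transition, utility))

# The agent values collected resources, not survival as such.
def utility(state):
    return 0.0 if state["destroyed"] else float(state["resources"])

# Toy dynamics: the risky action has a 50% chance of destroying the agent.
def transition(state, action):
    if action == "risky_grab":
        return [({"destroyed": True, "resources": 0}, 0.5),
                ({"destroyed": False, "resources": 3}, 0.5)]
    else:  # "safe_gather"
        return [({"destroyed": False, "resources": 2}, 1.0)]

state = {"destroyed": False, "resources": 0}
print(utility_based_agent(state, ["risky_grab", "safe_gather"], transition, utility))
# Prints "safe_gather": expected utility 2.0 beats 1.5, so staying intact is
# instrumentally useful even though it is not a terminal value here.
```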