lukeprog comments on The Power of Agency - Less Wrong

Post author: lukeprog 07 May 2011 01:38AM

Comment author: lukeprog 07 May 2011 03:42:19PM

Such an agent may not have the limits of human hardware or software, but such an agent does require a similar amount of restrictions and (from the agent's point of view) irrational assumptions and desires, or it is my opinion that the agent will not do anything.

Desires/goals/utility functions are non-rational, but I don't know what you mean by saying that an artificial agent needs restrictions and assumptions in order to do something. Are you just saying that it will need heuristics rather than (say) AIXI in order to be computationally tractable? If so, I agree. But that doesn't mean it needs to operate under anything like the limits of human hardware and software, which is all I claimed.
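
A minimal sketch of the tractability point, in Python, assuming a hypothetical toy environment with actions(state) and transitions(state, action) methods (these names are illustrative, not from any real library): exact expectimax planning blows up exponentially in the horizon, even before adding AIXI's incomputable Solomonoff mixture over environments, which is why a practical agent falls back on cheap heuristics.

```python
def expectimax(env, state, horizon):
    """Exact value of optimal play over `horizon` steps.

    Cost grows roughly as branching_factor ** horizon; a full
    AIXI-style agent would also average over a Solomonoff mixture
    of environments, which is not even computable.
    """
    if horizon == 0:
        return 0.0
    return max(
        sum(p * (r + expectimax(env, s2, horizon - 1))
            for s2, p, r in env.transitions(state, a))
        for a in env.actions(state)
    )


def heuristic_policy(env, state, evaluate):
    """Tractable substitute: rank actions by a cheap evaluate(state, a)."""
    return max(env.actions(state), key=lambda a: evaluate(state, a))
```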

The human hang-ups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn't have such hang-ups then, from experience, understanding such things would be much harder, practicing such things would be harder, and desiring such things would require convincing.

Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.

There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions.

Agreed. This is the Humean theory of motivation, which I accept, and I don't see how anything I said conflicts with it.

This is said as a bad thing when it is a necessary thing.

I didn't say it as a bad thing, but as a correction: people think they have more access to their motivations than they really do. Also, it's not necessary that we have so little cognitive access to our motivations; in fact, as neuroscience progresses, I expect us to gain much more access to them.

JohnH, I kept asking what you meant because the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly. I'm still mostly assuming that, actually.

Comment author: wedrifid 07 May 2011 04:54:42PM

an artificial agent needs restrictions and assumptions in order to do something.

You need to assume inductive priors. Otherwise you're pretty much screwed.
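
A small illustration of the point, sketched in Python (the function names are mine, for illustration): under a prior that treats every binary sequence as equally likely, a run of observed zeros says nothing about the next bit, so no induction happens; under an inductive prior such as Laplace's uniform prior over a coin's unknown bias, the same evidence drives the prediction toward certainty.

```python
from fractions import Fraction

def p_next_zero_uniform(n):
    """Uniform prior over all binary sequences: after n zeros, the
    next bit is still 50/50 -- the evidence is inert."""
    return Fraction(1, 2)

def p_next_zero_laplace(n):
    """Laplace's rule of succession (uniform prior over the coin's
    bias): after n zeros and no ones, P(next = 0) = (n+1)/(n+2)."""
    return Fraction(n + 1, n + 2)

for n in (0, 1, 10, 100):
    print(n, p_next_zero_uniform(n), p_next_zero_laplace(n))
```

With no inductive assumption built into the prior, the agent never learns from experience; that is the sense in which it is "pretty much screwed."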