lukeprog comments on The Power of Agency - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (73)
Desires/goals/utility functions are non-rational, but I don't know what you mean by saying that an artificial agent needs restrictions and assumptions in order to do something. Are you just saying that it will need heuristics rather than (say) AIXI in order to be computationally tractable? If so, I agree. But that doesn't mean it needs to operate under anything like the limits of human hardware and software, which is all I claimed.
Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.
Agreed. This is the Humean theory of motivation, which I agree with. I don't see how anything I said disagrees with the Humean theory of motivation.
I didn't say it as a bad thing, but as a corrective. People think they have more access to their motivations than they really do. Also, it's not a necessary truth that we lack much cognitive access to our motivations. In fact, as neuroscience progresses, I expect us to gain much more access to our motivations.
JohnH, I kept asking what you meant because the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly. I'm still mostly assuming that, actually.
You need to assume inductive priors. Otherwise you're pretty much screwed.