Kawoomba comments on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda - Less Wrong Discussion
I appreciate your point.
Mostly, I'm concerned that "strictly speaking, humans don't have VNM-utility functions, so that's that, full stop" can be interpreted as a stop sign, when in fact humans clearly do have preferences, and do tend to choose actions that try to satisfice those preferences at least part of the time. To the extent that we'd deny that, we'd deny the existence of any kind of "agent" instantiated in the physical universe. Their behavior is for the most part predictable, and so can be modelled. And anything that can be computationally modelled can be described by a function. It may not have some of the nice VNM properties, but we take what we can get.
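To make that concrete, here's a minimal sketch (names hypothetical, not from any real library): an agent whose pairwise preferences are cyclic, so no VNM utility function can represent them, yet whose choice behavior is still an ordinary, deterministic, predictable function.

```python
# Hypothetical agent with cyclic pairwise preferences: A > B > C > A.
# Intransitivity means no utility function u can exist with
# u(A) > u(B) > u(C) > u(A) -- but the behavior is still a function.

# The preference relation: (x, y) present means x is preferred to y.
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}

def choose(x, y):
    """Return the option the agent picks when offered the pair {x, y}."""
    return x if (x, y) in PREFERS else y

# Perfectly predictable, modellable behavior...
assert choose("A", "B") == "A"
assert choose("B", "C") == "B"
# ...yet intransitive: C beats A, closing the preference cycle.
assert choose("C", "A") == "C"
```

The point of the sketch is only that "describable by a function" is a far weaker condition than "has a VNM utility function" — the former survives even preference cycles.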
If there's a more applicable term for the kind of model we need (rather than simply "utility function in a non-VNM sense"), then by all means suggest it — but then again, "what's in a name" ...