TheAncientGeek comments on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (39)
But if an AI can compromise on some fuzzy or simplified set of values, what happened to the full complexity and fragility of human value?
Why does the compromise have to be a function of simplified values? I don't think I implied that it does.