From what it looks like so far, recommender systems are amazingly good at figuring out what we want in the short term and giving it to us. But that is often misaligned with what we want in the longer term. E.g. I have a YouTube Shorts addiction that's ruining my productivity (yay!). So my answer for now is NOPE, unless we do something special.
I'm assuming that when you say "human values" you mean what we want for ourselves in the long term. But I would love it if you would elaborate on what exactly you meant by that.
Agree.
Human values are very complex, and most recommender systems don't even try to model them. Instead, most of them optimise for things like 'engagement', which they claim is aligned with a user's 'revealed preference'. This notion of 'revealed preference' is a far cry from true preferences (which are very complex), let alone human values (which are also very complex). I recommend this article for an introduction to some of the issues here: https://medium.com/understanding-recommenders/what-does-it-mean-to-give-someone-what-they-want-the-nature-of-prefere...
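To make the engagement-vs-preferences distinction concrete, here is a minimal toy sketch in Python. The item data, the `predicted_engagement` scores, and the `stated_value` signal are all made up for illustration; no real recommender exposes anything this clean, and "stated long-term value" is exactly the thing that's hard to get.

```python
# Illustrative only: a toy ranking comparison, not any real recommender's code.
# 'predicted_engagement' stands in for a learned click/watch-time model;
# 'stated_value' stands in for a hypothetical signal of the user's own
# long-term preference (e.g. from an explicit survey). All numbers are made up.

items = [
    {"id": "shorts_compilation", "predicted_engagement": 0.92, "stated_value": 0.10},
    {"id": "lecture_on_topic_x", "predicted_engagement": 0.35, "stated_value": 0.85},
    {"id": "news_explainer",     "predicted_engagement": 0.55, "stated_value": 0.60},
]

def rank_by_engagement(items):
    """'Revealed preference' proxy: sort purely by predicted engagement."""
    return sorted(items, key=lambda it: it["predicted_engagement"], reverse=True)

def rank_by_blended_objective(items, alpha=0.5):
    """Blend engagement with the (hypothetical) stated long-term value signal."""
    def score(it):
        return alpha * it["predicted_engagement"] + (1 - alpha) * it["stated_value"]
    return sorted(items, key=score, reverse=True)

if __name__ == "__main__":
    print([it["id"] for it in rank_by_engagement(items)])
    # -> shorts first: engagement alone rewards the most addictive item
    print([it["id"] for it in rank_by_blended_objective(items)])
    # -> the lecture outranks shorts once stated values get some weight
```

Even this blended version is nowhere near modelling 'human values': where the stated-value numbers would come from, and whether people can articulate them at all, is where the hard problems live.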
Would it make sense to try to figure out human values with recommender systems? Why or why not? How could this be done?