Social media feeds 'misaligned' when viewed through AI safety framework, researchers show
In a study published September 17, researchers from the University of Michigan, Stanford University, and the Massachusetts Institute of Technology (MIT) showed that one of the most widely used social media feeds, Twitter/X, owned by the company xAI, is recognizably misaligned with the values of its users, preferentially showing them posts that rank highly on the values of 'stimulation' and 'hedonism' over collective values like 'caring' and 'universal concern.' Continue reading at foommagazine.org ...
I have found AI Village (and the updates from it) a pretty helpful source of insight.
Although, to be clear, my feeling about the whole situation with agents is that it is fairly disturbing, and that it is playing with fire. But if the reality is that these things are going to be rolled out like this (and obviously, they are), then we do need open testbeds like this to see what's happening.
This was the high-quality ****book before ****book hit the AI news echo chamber this past week. Although, to be fair, I suppose that experiment demonstrated a larger-population, message-board-focused variant of a similar setup.