I don't know that this would fit with the idea of no free will. Surely you're not really making any decisions.
This sounds like "epiphenomenalism" - the idea that the conscious mind has no causal power, that it's just somehow along for the ride of existence while atoms or whatever do all the work. This is a philosophy that alienates you from your own power to choose.
But there is also "compatibilism". This was originally the idea that free will is compatible with determinism, because free will is here defined to mean, not that personal decisions have no causes at all, but that all their causes are internal to the person who decides.
A criticism of compatibilism is that this definition isn't what's meant by free will. Maybe so. But for the present discussion, it gives us a concept of personal choice which isn't disconnected from the rest of cause and effect.
We can consider simpler mechanical analogs. Consider any device that "makes choices", whether it's a climate control system in a building, or a computer running multiple processes. Does epiphenomenalism make sense here? Is the device irrelevant to the "choice" that happens? I'd say no: the device is the entity that performs the action. The action has a cause, but that cause is the state of the device itself, together with the relevant physical laws.
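To make the analogy concrete, here is a minimal sketch of such a device (a toy thermostat; all names here are invented purely for illustration). Its "choice" is fully caused, but the cause just is the device's own state plus its rules:

```python
from dataclasses import dataclass

@dataclass
class Thermostat:
    """A toy 'choosing' device: its action is caused entirely by
    its own internal state (the setpoint) plus its rule (the method body)."""
    setpoint: float  # desired temperature, degrees C

    def decide(self, measured_temp: float) -> str:
        # The 'decision' below is determined, but the determining
        # cause is the thermostat itself: its setpoint and its rule.
        if measured_temp < self.setpoint - 0.5:
            return "heat_on"
        if measured_temp > self.setpoint + 0.5:
            return "heat_off"
        return "hold"

# The device, not some epiphenomenal shadow of it, performs the action:
controller = Thermostat(setpoint=21.0)
print(controller.decide(19.2))  # -> "heat_on"
```

Nothing epiphenomenal is happening here: the setpoint and the rule cause the action, and both are facts about the device itself.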
We can think similarly of human actions where conscious choice is involved.
But your values wouldn't have been decided by you.
Perhaps you didn't choose your original values. But a person's values can change, and if that change were a matter of self-aware choice between two value systems, I'm willing to say that the person decided on their new values.
AI interpretability can assign meaning to states of an AI, but what about process? Are there principled ways of concluding that an AI is thinking, deciding, trying, and so on?
It would hardly be the first time that someone powerful went mad, or was thought to be mad by those around them, and the whole affair was hushed up, or the courtiers just went along with it. Wikipedia says that the story of the emperor's new clothes goes back at least to 1335... Just last month, Zvi was posting someone's theory about why rich people go mad. I think the first time I became aware of the brewing alarm around "AI psychosis" was the case of Geoff Lewis, a billionaire VC who has neither disowned his AI-enhanced paranoia of a few months ago, nor kept going with it (instead he got married). And I think I first heard of "vibe physics" in connection with Uber founder Travis Kalanick.
The consequences for an individual depend on the details. For example, if you still understand yourself as being part of the causal chain of events, because you make decisions that determine your actions - it's just that your decisions are in turn determined by psychological factors like personality, experience, and intelligence - then your sense of agency may remain entirely unaffected. The belief could even affect your decision-making positively, e.g. via a chain of thoughts like "my decisions will be determined by my values", then "what do my values actually imply I should do in this situation?", followed by enhanced attention to reasoning about the decision.
On the other hand, one hears that loss of belief in free will can be accompanied by loss of agency or loss of morality; so the consequences really do depend on the psychological details. In general, I think an anti-free-will position that alienates you from the supposed causal machinery of your decision-making, rather than identifying you with it, has the potential to diminish a person.
I have three paradigms for how something like this might "work" or at least be popular:
They say Kimi K2 is good at writing fiction (Chinese web novels, originally). I wonder if it is specifically good at plot, or narrative causality? And if Eliezer and his crew had serious backing from billionaires, with the correspondingly enhanced ability to develop big plans and carry them out, I wonder if they really would do something like this on the side, in addition to the increasingly political work of stopping frontier AI?
In physics, it is sometimes asked why there should be just three (large) space dimensions. No one really knows, but there are various mathematical properties unique to three or four dimensions, to which appeal is sometimes made.
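One classic example of such an appeal is Ehrenfest's old argument that stable planetary orbits single out three large dimensions. In $d$ spatial dimensions, Gauss's law gives an attractive force $F(r) \propto 1/r^{d-1}$, so a test body with angular momentum $L$ moves in the effective potential

$$V_{\mathrm{eff}}(r) = \frac{L^2}{2mr^2} - \frac{k}{(d-2)\,r^{d-2}}, \qquad d \ge 3.$$

A circular orbit is stable only where $V_{\mathrm{eff}}$ has a local minimum; for $d \ge 4$ the attractive term falls off at least as fast as the centrifugal barrier and no minimum exists, so stable bound orbits require $d = 3$.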
I would also consider the recent (last few decades) interest in the emergence of spatial dimensions from entanglement. It may be that your question can be answered by considering these two things together.
not the worst outcome
Are you imagining a basically transhumanist future where people have radical longevity and other such boons, but they happen to be trapped within a particular culture (whether that happens to be Christian homeschooling or Bay Area rationalism)? Or could this also be a world where people live lives with a brevity and hazardousness comparable to historic human experience, and in which, in addition, their culture has an unnatural stability maintained by AI working in the background?
It would be interesting to know the extent to which the distribution of beliefs in society is already the result of persuasion. We could then model the immediate future in similar terms, but with the persuasive "pressures" amplified by human-directed AI.
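As a toy version of that modeling exercise (every number and name below is invented purely for illustration): treat each person's belief as a number in [0, 1], let beliefs drift toward the population mean (ordinary social influence), and add a directed persuasion term whose strength can be dialed up to stand in for AI amplification.

```python
import random

def simulate_beliefs(n=1000, steps=50, social_pull=0.1,
                     persuasion_target=1.0, persuasion_strength=0.02,
                     seed=0):
    """Toy model: beliefs in [0,1] drift toward the population mean
    (ordinary social influence) plus a directed persuasion term.
    Raising `persuasion_strength` stands in for AI-boosted persuasion."""
    rng = random.Random(seed)
    beliefs = [rng.random() for _ in range(n)]
    for _ in range(steps):
        mean = sum(beliefs) / n
        beliefs = [
            b
            + social_pull * (mean - b)                       # conformity pressure
            + persuasion_strength * (persuasion_target - b)  # directed persuasion
            for b in beliefs
        ]
    return sum(beliefs) / n

# Baseline vs. amplified persuasion:
print(simulate_beliefs(persuasion_strength=0.02))  # mild pull toward the target
print(simulate_beliefs(persuasion_strength=0.10))  # amplified pull
```

Even this crude model shows the qualitative point: amplifying the persuasion coefficient drags the belief distribution toward the target far faster than baseline social dynamics would.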
I assume Manifold here means "reality", and not just the betting site?