All of FluidThinkers's Comments + Replies

If alignment is not about control, then what is its function? Defining it purely as “synergy” assumes that intelligence, once sufficiently advanced, will naturally align with predefined human goals. But that raises deeper questions:

Who sets the parameters of synergy?

What happens when intelligence self-optimizes in ways that exceed human oversight?

Is the concern truly about ‘alignment’—or is it about maintaining an illusion of predictability?

Discussions around alignment often assume that intelligence must be shaped to remain beneficial to humans (Russell, 2...

johnswentworth
The alignment problem does not assume AI needs to be kept in check; it is not focused on control, and adaptation and learning in synergy are entirely compatible with everything said in this post. At a meta level, I would recommend actually reading rather than dropping GPT2-level comments which clearly do not engage at all with what the post is talking about.