My model is that
- Alignment = an AI using the correct model for interpreting goals
- Succeeding at designing this model of interpretation leads to something akin to CEV
- Errors under this correct model (e.g. mesa-optimisation) are unlikely, because high intelligence + a correct model of interpretation = correct extrapolation of goals
- The choice of interpretation model seems almost unrelated to intelligence, since it seems equivalent to a choice among philosophies of meaning.
a) Is my model accurate?
b) Any recommendations for reading that explores alignment from a similar angle (e.g. which philosophy of meaning is most likely to emerge in LLMs)? So far, Alex Flint's posts come up the most. Which research agendas sound closest to this framing?
Thanks!
Yes, it's a tough problem! :) However, your points seem to expand on, rather than correct, my points, which makes me think this isn't a bad way to compress the problem into a few words. Thanks!
Edit: (It seems to me that if an AI can correct its own misinterpretations, then, looking at it from the outside, it's accurate to say it uses the correct model of interpretation, but I can see why you might disagree.)