Embracing complexity when developing and evaluating AI responsibly
Complex Intervention Development and Evaluation Framework: A Blueprint for Ethical and Responsible AI Development and Evaluation

The rapid evolution of artificial intelligence (AI) presents significant opportunities, but it also raises serious concerns about its societal impact. Ensuring that AI systems are fair, responsible, and safe is more critical than ever, ...
My main point is that AGI risks cannot be deduced or theorised in purely abstract terms; they must be understood through rigorous empirical research on complex systems. If you view AI as an agent acting in the world, then it functions as a complex intervention: it may or may not act as its designer intended, it may or may not deliver the preferred outcomes, and it may or may not be acceptable to the user. More precisely, each of these is a matter of degree: there are degrees to which it acts as intended, degrees to which it is acceptable, and so on. The uncertainty in each of these parameters can be estimated through empirical research, and doing so demands careful study and system-level understanding.
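As a minimal illustration of what estimating uncertainty in such parameters might look like, the sketch below treats "acts as intended" and "acceptable to the user" as binomial proportions observed in an evaluation study and computes Wilson score confidence intervals for each. All counts here are invented for illustration; a real complex-intervention evaluation would involve richer outcome measures than simple success counts.

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% coverage by default)."""
    if trials == 0:
        return (0.0, 1.0)  # no data: the parameter could be anywhere
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Hypothetical evaluation data (invented numbers): in 200 trials the
# system acted as intended 176 times, and 150 of 200 users rated its
# behaviour acceptable.
acts_as_intended = wilson_interval(176, 200)
acceptable_to_user = wilson_interval(150, 200)
print(f"acts as intended:    {acts_as_intended[0]:.2f}-{acts_as_intended[1]:.2f}")
print(f"acceptable to users: {acceptable_to_user[0]:.2f}-{acceptable_to_user[1]:.2f}")
```

The point of the interval, rather than a single score, is that it makes the residual uncertainty explicit: with more trials the interval narrows, turning "may or may not act as intended" into a quantified degree.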