[Simulators seminar sequence] #1 Background & shared assumptions
Meta: Over the past few months, we have held a seminar series on the Simulators theory by janus. As the theory is actively under development, the purpose of the series is to identify central structures and open problems. Our aim with this sequence is to share some of our discussions with a broader audience and to encourage new research on the questions we uncover. Below, we outline the broader rationale and the shared assumptions of the seminar participants.

Shared assumptions

Going into the seminar series, we identified a number of assumptions that we share. The degree to which each participant subscribes to each assumption varies, but we agreed to postpone discussions of these topics in order to have a maximally productive seminar. This restriction does not apply to the reader of this post, so please feel free to question our assumptions.

1. Aligning AI is a crucial task that needs to be addressed as AI systems rapidly become more capable.
    1. (Probably a rather uncontroversial assumption for readers of this Forum, but worth stating explicitly.)
2. A core part of the alignment problem involves "deconfusion research."
    1. We do not work on deconfusion for its own sake, but in order to engineer concepts, identify unknown unknowns, and move from philosophy to mathematics to algorithms to implementation.
3. The problem is hard because we have to reason about something that does not yet exist.
    1. AGI will be fundamentally different from anything we have ever known and will thus present us with challenges that are very hard to predict. We might have only a very narrow window of opportunity to perform critical actions and might not get the chance to iterate on a solution.
4. However, this does not mean that we should ignore evidence as it emerges.
    1. It is essential to carefully consider the GPT paradigm as it is being developed and implemented. At this point, it appears to us more plausible than not that GPT will be a core component of transformative AI.
