AGI in sight: our look at the game board
From our point of view, we are now in the end-game for AGI, and we (humans) are losing. When we share this with other people, they are reliably surprised. That is why we believe it is worth writing down our beliefs on this.

1. AGI is happening soon. Significant probability of it happening in less than 5 years.

Five years ago, there were many obstacles on what we considered to be the path to AGI. But in the last few years, we've gotten:

* Powerful agents (Agent57, GATO, Dreamer V3)
* Reliably good multimodal models (Stable Diffusion, Whisper, CLIP)
* Just about every language task (GPT-3, ChatGPT, Bing Chat)
* Human and social manipulation
* Robots (Boston Dynamics, DayDreamer, VideoDex, RT-1: Robotics Transformer [1])
* AIs that are superhuman at just about any task for which we can (or simply bother to) define a benchmark

We can't think of any remaining obstacle that we expect would take more than 6 months to overcome once serious effort is invested in taking it down.

Forget about what the social consensus is. If you have a technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies, with all their resources, could reliably not tear down? If you do, say so in the comments, but please do not state what those obstacles are.

2. We haven't solved AI Safety, and we don't have much time left.

We are very close to AGI. But how good are we at safety right now? Well.

No one knows how to get LLMs to be truthful. LLMs make things up, constantly. It is really hard to get them not to do this, and we don't know how to do it at scale.

Optimizers quite often break their setup in unexpected ways. There have been quite a few examples of this. But in brief, the lessons we have learned are:

* Optimizers can yield unexpected results
* Those results can be very weird (like breaking the simulation environment)
* Yet very few people extrapolate from this and see these as worrying signs

No one understands how large