I strongly believe the alignment problem is fundamentally unsolvable, another instance of an undecidable problem. I would, however, prefer to die with dignity. I study methods of minimizing the chances of being wiped out after the advent of ASI.
My current line of research is computational neuroscience for human cognitive augmentation. I work under the heavily flawed theory that the higher humanity's intelligence waterline, the better the chances that an ASI employs us as part of its goals instead of 'recycling' us as biomass.
It is a nice thought experiment, but I've noticed that many AI researchers are devoted to their labour to a degree comparable to religious fanaticism (in either camp, really). I don't think a fat pay cheque would make them stop their research so readily.
Very insightful, thanks for the clarification, grim as it is.
A nuclear reactor doesn't try to intellectually persuade you, via speech or text, into behaving in ways you would not have behaved before interacting with it. And that is assuming your claim that 'current LLMs are not agentic' holds true, which seems doubtful.
As much as I agree that things are about to get really weird, that first diagram is a bit too optimistic. There is a limit to how much data humanity has available to train AI (here), and it seems doubtful we can learn to use data a thousand times more effectively in such a short span of time. For all we know, there could be yet another AI winter coming - I don't think we will get that lucky, though.
Thank you for posting this. I've been 'levelling up' my maths for machine learning lately, and this is just perfect.
Pessimistic as it sounds, it is invaluable to know that an idea is unlikely to succeed before you invest your only shot in it.
Thanks for the pointers, I will research them and reformulate my plan.
Our attention is one of the most valuable resources we have, and recent AI developments in NLP and machine vision are making us realize that it may well be a fundamental component of intelligence itself.
This post brings that point to attention (pun intended) using video games as examples, and encourages us to optimize how we spend this limited resource: maximizing information gain, and improving our cooperation skills by avoiding becoming 'sound absorbers'.
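Since the parallel to ML attention is doing real work here, a minimal NumPy sketch of scaled dot-product attention (the mechanism behind the NLP and vision advances mentioned above) may make it concrete. The toy shapes and the self-attention usage at the end are my own illustrative assumptions, not anything from the post:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, as in 'Attention Is All You Need'.

    Q, K: (seq_len, d_k) query/key matrices; V: (seq_len, d_v) values.
    """
    d_k = Q.shape[-1]
    # Similarity of each query to each key, scaled so the softmax
    # doesn't saturate as d_k grows.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys: each query spreads a fixed unit "budget"
    # of attention across the sequence - a genuinely limited resource.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: attention-weighted mixture of the values.
    return weights @ V

# Toy self-attention: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

The softmax is the interesting part for the analogy: attending more to one input necessarily means attending less to the others, which is exactly the trade-off the post asks us to manage deliberately.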