Here are some specific ways I think AGI could emerge, starting with the most likely, again trying to take into account economic, engineering, and organizational realities.
It may be that autoregressive training is enough, as prediction has been theorized to be the main driver of learning in humans. GPT likely needs grounding in addition to further scaling, and perhaps internal recurrence. Still, if GPT progress continues at its current pace, this would be at the top of my list of ways AGI could emerge.
In this case, grounding the model with embeddings generated from fMRI data could help with both alignment and performance. There are open questions about how the low (super-neuron-level) resolution of the encodings would affect accuracy; these could be studied in artificial neural nets to get some grasp on the effect of only knowing groups of neural activations. Choosing which humans to base AGI on would also be difficult (as many humans as possible would be ideal, I think). Assuming more than one human, some network distillation would be needed to combine the embeddings from many people, and the effect of such distillation on physically grounded language tasks could be studied without expensive new fMRI scans (a rough sketch of what that might look like follows below). Arguments could be made against grounding in humans at all - a chance to correct the mistakes of evolution - but I think the risk of this greatly outweighs the benefits, considering that a humanlike AGI could gradually become ethically superhuman more safely. A fear is that human vices become amplified with AGI, which makes choosing the humans to ground GPT with all the more important, but also means that introspection into GPT's internal correlates and early detection of unsafe behavior is important work to do now.
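As a purely hypothetical sketch of the kind of distillation mentioned above: the idea would be to pull a single shared embedding per word toward fMRI-derived encodings from several people. The shapes, the random stand-in encodings, the fixed teacher projection, and the plain MSE objective are all my own illustrative assumptions, not a claim about how such grounding would actually be done.

```python
# Hypothetical sketch: distilling per-person, fMRI-derived word encodings
# into one shared embedding table. Everything here is a toy assumption.
import torch
import torch.nn as nn

vocab_size, fmri_dim, embed_dim, num_people = 5_000, 1_024, 256, 4

# Stand-in fMRI encodings: one vector per word per person (imagine averaged
# voxel activity while that person reads the word). Random noise here.
person_encodings = torch.randn(num_people, vocab_size, fmri_dim)

# Fixed "teacher" projection from fMRI space down to embedding size.
with torch.no_grad():
    teacher_proj = nn.Linear(fmri_dim, embed_dim)
    teacher_embeddings = teacher_proj(person_encodings)  # (people, vocab, embed_dim)

# Shared student embedding table trained to agree with every person at once.
student = nn.Embedding(vocab_size, embed_dim)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-2)

for step in range(200):
    words = torch.randint(0, vocab_size, (128,))             # random batch of word ids
    targets = teacher_embeddings[:, words]                   # (people, batch, embed_dim)
    preds = student(words).unsqueeze(0).expand_as(targets)   # same student vector for all people
    loss = nn.functional.mse_loss(preds, targets)            # pull student toward everyone's encoding
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a toy setup like this you could vary num_people, add per-person noise, or coarsen fmri_dim to get some sense, without new scans, of how encoding resolution and cross-person distillation affect downstream language tasks.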
I have long held that sensorimotor intelligence is necessary before a machine can understand the internet well enough to lead to AGI, and have therefore spent the last few years working in that area (deepdrive.io, smooth.deepdrive.io). GPT may be changing that, but self-driving to me still represents the most advanced embodied intelligence we have and the area of robotics that will receive the most resources over the next few years. A lot has to change in the self-driving stack, however, for it to provide sensorimotor embeddings to language models, as neural nets are currently responsible only for perception, not control, in self-driving. Alternatively, training methods, model architectures, hyperparameters, etc. from self-driving could be repurposed for learning from childlike experiences, avoiding the transfer of parameters/knowledge from self-driving altogether. The challenges of dealing with robotics and the physical world versus purely symbolic approaches like GPT (i.e. atoms vs. bits) remain the main impediment to self-driving.
Grounding AGI with models that exhibit some sense of intuitive physics and that can predict and interact with humans on the road might allow for AGI, but such a chauffeur-only 'upbringing' would be a big departure from the way current human intelligence comes to understand the world. Perhaps reading all of human knowledge corrects for this, but it seems too risky to me to ground language in self-driving alone. Rather, models from self-driving could be fine-tuned with virtual childlike embodiment or combined with the aforementioned fMRI embeddings to better align and relate with humans.
Using Neuralink to read activations generated from words, and thereby creating embeddings to ground language models, could lead to AGI. This is not in line with Elon's goal of creating an exocortex, as it transfers control back to the AI, but it seems likely and would also be safer than ungrounded models. Allowing higher-bandwidth communication with computers is, I think, the mechanism Neuralink is going for, and it would definitely help us be more productive in a variety of ways, including creating AGI and facilitating AGI that we are more closely connected with. Working in wetware / medical technology is the biggest roadblock to this happening. I've heard estimates of 25 years or so. My own back-of-the-envelope calculations lead to anywhere from 6 to 20 years, assuming they can double the number of electrodes every year. To do this in six years, a ~300x improvement would need to be found: using a high sampling rate to record many neurons with a single electrode, running longer threads through something like magnetically triangulated guided surgery, inferring connected activations through 'top-level' neural or maybe even synaptic activity, or similar ideas that read/write many more neurons per electrode.
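To make the doubling arithmetic above concrete, here is a minimal sketch. The starting electrode count and the target number of recorded neurons are my own illustrative assumptions, not Neuralink figures; the point is only how a multi-year range and a hundreds-of-x per-electrode multiplier fall out of yearly doubling.

```python
# Minimal sketch of the electrode-doubling estimate. Starting count and
# target coverage are assumed for illustration only.
import math

electrodes_today = 3_000   # assumed starting point, not an official figure
target_neurons = 10 ** 8   # assumed coverage needed for useful word-level grounding

# Years of straight 2x-per-year electrode growth to reach the target one-to-one.
years_at_doubling = math.log2(target_neurons / electrodes_today)
print(f"years at 2x/year: {years_at_doubling:.0f}")  # ~15 under these assumptions

# If we only allow 6 years of doubling, each electrode must effectively
# read/write this many neurons to cover the same target.
electrodes_after_6_years = electrodes_today * 2 ** 6
neurons_per_electrode = target_neurons / electrodes_after_6_years
print(f"neurons per electrode needed: ~{neurons_per_electrode:.0f}x")  # hundreds of x here
```

Shifting the assumed target up or down moves the timeline across roughly the 6-to-20-year range; the hundreds-of-x per-electrode multiplier is what any of the tricks listed above would have to deliver.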
Extending humans seems like the safest way to create AGI if we can recreate the type of relationship that exists between the evolutionarily older parts of the brain and the neocortex. It will be important, however, that humans stay in the loop and that we resist the economic and time-saving temptations of relinquishing more control to AI. Using human neural activations to ground more powerful systems somewhat alleviates the need for biological humans to be in the loop. Eventually, uploading entire human connectomes would similarly help avoid biological tendencies to think as little as possible. As with fMRI-based grounding, it will be important to have as many Neuralink-connected people as possible in order to provide something like C.E.V. (coherent extrapolated volition), in the sense of aligning AGI with the collective rather than the individual.
General methods could be used to automate a variety of manufacturing and industrial tasks. I haven't thought about this as much as self-driving, but it feels like the lack of interaction with humans makes factory automation less of a forcing function for becoming advanced enough to lead to AGI. Automating jobs that involve interaction with humans would be a counterexample, but that seems to require advanced humanoid robotics, which in my opinion is not likely to arrive before self-driving.
This would be similar to self-driving, unless these systems somehow led to AGI while interacting only with inanimate objects and each other. In that case, I think it would be important for these systems to learn the way we do, by interacting with humans, before grounding language models with them.
Automating the generation of AI algorithms and environments could allow ever cheaper computation to automatically create AGI.
Ensuring that automatically generated environments resemble human experience, encompassing love, compassion, and other aspects important to alignment, would be key here. Considering the speed at which these environments and agents would evolve, Neuralink might be an important way to increase our ability to guide these kinds of systems.
Embodied robots that interact with us in the real world would have the ideal environment for recreating what leads to human intelligence. Current reinforcement learning algorithms are too sample-inefficient to learn without faster-than-realtime simulation, but researchers like Sergey Levine are working hard to improve this.
Boston Dynamics has historically focused on industrial and military applications, so we'd definitely want to make sure Spot was treated more like a dog than a war/work-machine when extending its sensorimotor models with larger language models to create a more general intelligence. AGI grounded in dog-like intelligence might not be so bad though!
e.g. The Sims, Giga Pets, Steve Grand's Creatures, Second Life
Gaming represents a massive industry that, given the right game, could allow software-only development and crowdsourcing of parental duties, accelerating the virtual upbringing of AGI.
We'd want to screen the experiences of any publicly crowdsourced parenting against certain ethical guidelines; e.g. you don't want people abusing their virtual children and having that affect the models we derive AGI from. Alternatively, we could create a private server where parents are screened more closely for the immense responsibility of raising AGI.
I'm sure I'm missing many possibilities, so please don't hesitate to add more in the comments!
-
Thanks, I hope you're right about IA (vs. pure AI). I think it's very possible that won't be the case, however, as the more autonomous a system is and the more significant its decisions, the more valuable it will be. So there will be a large financial incentive for an increasing number of important decisions to be made in silico. Also, the more autonomous a system is, the less of a part we play in it by definition, and therefore the less it will be an extension of us. This is especially so as the size of the in-silico portion is not physically limited to human...
This may have predicted something akin to the social networks and provided some impetus for preventing societal manipulation through the targeted advertising that we saw in the 2016 U.S. elections.
This framing doesn't seem helpful for understanding how technology affects society.
It's basically an explanation of why Hillary lost that tries to avoid acknowledging that Trump's message resonated with a lot of swing-state voters. As far as tech goes, the candidate whose targeted-advertising team was backed by the CEO of one of the largest tech companies, and who had the most money, lost.
From a year-2000 perspective, you could say that there was a trend of rising polarization between the parties, measured by metrics like parents being less okay with their children marrying someone from the opposing political party.
This was partly driven by the rise of niche TV channels. Just as more TV channels allowed for more niches of content consumption, you could have said that the internet allows for even more channels and is thus likely to continue the trend of polarization.
It's also worth noting that the most targeted political advertising is face-to-face, and Obama dismantled much of the capability the Democratic party had built under Howard Dean's 50-state strategy. Maybe you can explain that by rising polarization making it less beneficial for party elites to have strong grassroots that could challenge them.
Yeah, it's tough to come up with the right analogy for this. Perhaps there's a better one? Nuclear weapons? Or maybe analogies are more of a distraction...
The key issue that comes to my mind is that if you have trouble thinking clearly about an event that happened a few years in the past, how do you think you will be able to think clearly about the future decades from now?
To me it seems that polarization between Republicans and Democrats was one of the key political features of 2016, and that's something you could foresee with easy extrapolation of trends.
You can look at trends of AI development and who creates powerful AI and extrapolate them.
I'd rather not bias answers with my specific guesses, but what I'm trying to get at are bottom-up answers based on economic, engineering, and organizational realities. For example, if this question had been asked in the year 2000 about the most likely way the next massive communities of people would form, answers might have taken into account the broadening adoption of the internet, the relative ease with which new web communities could be created with the LAMP stack, and that advertising would be a likely form of revenue. This may have predicted something akin to the social networks and provided some impetus for preventing societal manipulation through the targeted advertising that we saw in the 2016 U.S. elections. I realize predicting such a thing in 2000 would have been extremely difficult, but it feels like the type of question that, if asked now, could help provide some guidance on how to create AGI safely.
I'll provide my own guesses in a few days after I've given some time for folks to weigh in.