Craig Quiter

Software engineer specializing in deep reinforcement learning for self-driving. cf. smooth.deepdrive.io

Answer by Craig Quiter

Here are some specific ways I think AGI could emerge, starting with the most likely, again trying to take into account economic, engineering, and organizational realities.

GPT-X

Mechanism

It may be that autoregressive training is enough, as prediction has been theorized to be the main driver of learning in humans. GPT likely needs grounding, in addition to further scaling and perhaps internal recurrence. Still, if GPT progress continues at its current pace, this route would be at the top of my list of ways AGI could emerge.
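
For reference, here is a minimal sketch of that autoregressive objective (next-token prediction). The `model` and tensor shapes are placeholders, not any specific GPT implementation.

```python
# Minimal sketch of the autoregressive objective: predict each token from
# the tokens before it. `model` is a placeholder for any network that maps
# token ids to per-position vocabulary logits.
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    # token_ids: (batch, seq) integer tensor
    logits = model(token_ids[:, :-1])          # (batch, seq-1, vocab)
    targets = token_ids[:, 1:]                 # each position's next token
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```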

Safety

In this case, grounding the model with embeddings generated from fMRI data could help with both alignment and performance. There are open questions about the effect the low (super-neuron-level) resolution of such encodings would have on accuracy; these could be studied in artificial neural nets to get some grasp on what it means to only know groups of neural activations. Choosing which humans to base AGI on would also be difficult (as many humans as possible would be ideal, I think). Assuming more than one human, some network distillation would be needed to combine the embeddings from many people, and the effect of such distillation on physically grounded language tasks could be studied without expensive new fMRI scans. Arguments could be made against grounding in humans at all - a chance to correct the mistakes of evolution - but I think the risks of that greatly outweigh the benefits, considering that a humanlike AGI could gradually become ethically superhuman more safely. A fear is that human vices become amplified in AGI, which makes choosing the humans to ground GPT with all the more important, and also means that introspection into GPT's internal correlates and early detection of unsafe behavior are important work to do now.
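
To make the distillation idea concrete, here is a minimal sketch, assuming hypothetical per-person encoders that each map a sentence to a fixed-size fMRI-derived embedding. The student simply regresses onto their mean, which is only one of many possible distillation targets; every module name and dimension is an illustrative assumption, not an existing pipeline.

```python
# Minimal sketch (not a working pipeline): distilling per-person fMRI-derived
# sentence embeddings into a single "grounding" encoder. All names, shapes,
# and the mean-of-teachers target are illustrative assumptions.
import torch
import torch.nn as nn

EMB_DIM = 512  # hypothetical embedding size

class StudentEncoder(nn.Module):
    """Maps token ids to one grounding embedding per sentence."""
    def __init__(self, vocab_size=50_000, dim=EMB_DIM):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        self.mix = nn.GRU(dim, dim, batch_first=True)

    def forward(self, token_ids):             # (batch, seq)
        h, _ = self.mix(self.tok(token_ids))  # (batch, seq, dim)
        return h.mean(dim=1)                  # (batch, dim) sentence embedding

def distillation_step(student, teacher_embeddings, token_ids, optimizer):
    """teacher_embeddings: list of (batch, dim) tensors, one per scanned person.
    The student regresses onto their mean, pooling many people into one model."""
    target = torch.stack(teacher_embeddings).mean(dim=0)
    pred = student(token_ids)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```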

Self-driving

Mechanism

I have long held that sensorimotor intelligence would be necessary before a machine could understand the internet well enough to lead to AGI, and have therefore spent the last few years working in that area (deepdrive.io, smooth.deepdrive.io). GPT may be changing that, but self-driving still represents to me the most advanced embodied intelligence we have, and the area of robotics that will receive the most resources for the next few years. A lot has to change in the self-driving stack, however, for it to provide sensorimotor embeddings to language models, as neural nets are currently responsible only for perception, not control, in self-driving. Alternatively, training methods, model architectures, hyperparameters, etc. from self-driving could be repurposed for learning from childlike experiences, avoiding the transfer of parameters/knowledge from self-driving altogether. Challenges in dealing with robotics and the physical world versus purely symbolic approaches like GPT (i.e. atoms vs. bits) remain the main impediment to self-driving itself.
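
One hedged sketch of what "providing sensorimotor embeddings to language models" might look like: a learned projection maps a driving perception network's features into the language model's embedding space and prepends them as a prefix. The module, dimensions, and wiring below are assumptions for illustration, not an existing self-driving or GPT API.

```python
# Illustrative sketch only: conditioning a language model on a sensorimotor
# embedding by projecting it into the model's token-embedding space and
# prepending it as a "prefix token". All names and dimensions are assumptions.
import torch
import torch.nn as nn

class SensorimotorPrefix(nn.Module):
    def __init__(self, perception_dim=256, lm_dim=768):
        super().__init__()
        self.project = nn.Linear(perception_dim, lm_dim)

    def forward(self, perception_features, token_embeddings):
        # perception_features: (batch, perception_dim), e.g. pooled features
        # from a driving perception net; token_embeddings: (batch, seq, lm_dim)
        prefix = self.project(perception_features).unsqueeze(1)  # (batch, 1, lm_dim)
        return torch.cat([prefix, token_embeddings], dim=1)      # (batch, seq+1, lm_dim)
```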

Safety

Grounding language models with driving models that exhibit some sense of intuitive physics and that can predict and interact with humans on the road may lead to AGI, but such a chauffeur-only 'upbringing' would be a big departure from the way current human intelligence comes to understand the world. Perhaps reading all of human knowledge corrects for this, but it seems too risky to me to ground language in self-driving alone. Rather, models from self-driving could be fine-tuned with virtual childlike embodiment, or combined with the aforementioned fMRI embeddings, to better align and relate with humans.

Neuralink

Mechanism

Using Neuralink to read the activations generated by words, and thereby create embeddings to ground language models, could lead to AGI. This is not in line with Elon's goal of creating an exocortex, as it transfers control back to the AI, but it seems likely and would also be safer than ungrounded models. Allowing higher-bandwidth communication with computers is, I think, the mechanism Neuralink is going for, and it would definitely help us be more productive in a variety of ways, including creating AGI and facilitating AGI that we are more closely connected with. Working in wetware / medical technology is the biggest roadblock to this happening. I've heard estimates of 25 years or so. My own back-of-the-envelope calculations lead to anywhere from six to 20 years, assuming they can double the number of electrodes every year. To do this in six years, a ~300x improvement would need to be found, whether by using a high sampling rate to record many neurons with a single electrode, running longer threads through something like magnetically triangulated guided surgery, inferring connected activations through 'top-level' neural or maybe even synaptic activity, or similar ideas that read/write many more neurons per electrode.
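
A sketch of this kind of back-of-the-envelope calculation is below; the starting channel count and target coverage are hypothetical placeholders chosen only to illustrate how annual doubling gives a timeline in this range, and how a roughly 300x per-electrode gain could compress it to about six years.

```python
# Back-of-the-envelope sketch only; both numbers below are hypothetical
# placeholders, not measured figures.
import math

start_channels = 1_000         # hypothetical current read/write channels
target_channels = 20_000_000   # hypothetical coverage needed for grounding

# Years required if the channel count doubles every year.
years_at_doubling = math.log2(target_channels / start_channels)
print(f"{years_at_doubling:.1f} years at pure annual doubling")

# If hardware only doubles for six years (2**6 = 64x), the rest of the gap
# has to come from reading/writing more neurons per electrode.
extra_per_electrode = target_channels / (start_channels * 2**6)
print(f"~{extra_per_electrode:.0f}x more neurons per electrode to finish in 6 years")
```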

Safety

Extending humans seems like the safest way to create AGI if we can recreate the type of relationship that exists between evolutionarily older parts of the brain and the neocortex. It will be important, however, that humans stay in the loop and that we resist the economic and time-saving temptations of relinquishing ever more control to the AI. Using human neural activations to ground more powerful systems somewhat alleviates the need for biological humans to be in the loop. Eventually, uploading entire human connectomes would similarly help avoid the biological tendency to think as little as possible. As with fMRI-based grounding, it will be important to have as many Neuralink-connected people as possible in order to provide something like C.E.V., in the sense of aligning AGI with the collective rather than the individual.

Factory Automation

Mechanism

General methods could be used to automate a variety of manufacturing and industrial tasks. I haven't thought about this as much as self-driving, but it feels like the lack of interaction with humans creates less of a forcing function for factory automation to become advanced enough to lead to AGI. Automating jobs that involve interaction with humans would be a counterexample, but it seems like this would require advanced humanoid robotics, which is not likely to arrive before self-driving in my opinion.

Safety

This would be similar to self-driving, unless these systems somehow led to AGI while only interacting with inanimate objects and each other. In that case, I think it would be important for these systems to learn, as we do, by interacting with humans before grounding language models with them.

Metalearning

Mechanism

Automating the generation of AI algorithms and environments could allow ever cheaper computation to automatically create AGI.
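
As a rough illustration of that loop (in the spirit of population-based systems like POET), here is a toy sketch in which candidate algorithm and environment specs are generated, scored, and the best kept. Every function here is a hypothetical placeholder; the real cost would sit inside `evaluate`, which is where ever cheaper computation matters.

```python
# Toy sketch of automated algorithm/environment generation; every function is
# a placeholder, not an existing API. Cheaper compute means larger populations
# and more generations of this loop.
import random

def mutate(spec):
    """Return a randomly perturbed copy of an algorithm or environment spec."""
    return {**spec, "seed": random.random()}

def evaluate(algo_spec, env_spec):
    """Stand-in for training algo_spec in env_spec and scoring the result."""
    return random.random()

def search(generations=100, population=16):
    algos = [{"seed": random.random()} for _ in range(population)]
    envs = [{"seed": random.random()} for _ in range(population)]
    best = None
    for _ in range(generations):
        scored = sorted(((evaluate(a, e), a, e) for a, e in zip(algos, envs)),
                        key=lambda t: t[0], reverse=True)
        best = scored[0]
        survivors = scored[: population // 2]
        algos = [a for _, a, _ in survivors] + [mutate(a) for _, a, _ in survivors]
        envs = [e for _, _, e in survivors] + [mutate(e) for _, _, e in survivors]
    return best
```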

Safety

Ensuring that automatically generated environments resemble human experience, encompassing love, compassion, and other aspects important for alignment, would be key here. Considering the speed at which these environments and agents would evolve, perhaps Neuralink would be an important way to increase our ability to guide these types of systems.

Boston Dynamics

Mechanism

Embodied robots that interact with us in the real world would have the ideal environment for recreating what leads to human intelligence. Current reinforcement learning algorithms are too sample-inefficient to learn without faster-than-realtime simulation, but researchers like Sergey Levine are working hard on improving this situation.

Safety

Boston Dynamics has historically focused on industrial and military applications, so we'd definitely want to make sure Spot was treated more like a dog than a war or work machine when extending its sensorimotor models with larger language models to create a more general intelligence. AGI grounded in dog-like intelligence might not be so bad, though!

Gaming

e.g. The Sims, Giga Pets, Steve Grand's Creatures, Second Life

Mechanism

Gaming represents a massive industry that, given the right game, could allow software-only development and the crowdsourcing of parental duties, accelerating the virtual upbringing of AGI.

Safety

We'd want to screen the experiences of any publicly crowdsourced parenting against certain ethical guidelines; e.g. you don't want people abusing their virtual children and having that affect the models we derive AGI from. Alternatively, we could create a private server where parents are screened more closely for the immense responsibility of raising AGI.

I'm sure I'm missing many possibilities, so please don't hesitate to add more in the comments!

Yeah, it's tough to come up with the right analogy for this. Perhaps there's a better one? Nuclear weapons? Or maybe analogies are more of a distraction...

Thanks, I hope you're right about IA (vs. pure AI). I think it's very possible that won't be the case, however, as the more autonomous a system is and the more significant its decisions, the more valuable it will be. So there will be a large financial incentive for an increasing number of important decisions to be made in silico. Also, the more autonomous a system is, the less of a part we will play in it by definition, and therefore the less it will be an extension of us. This is especially so since the size of the in-silico portion is not physically limited to a human's cranial volume :). So the share of decision making done by AI vs. humans is unbounded. Alignment may or may not result from IA; it's hard to tell. That's why I think we should deliberately build alignment mechanisms in silico ahead of time, and seek to achieve something akin to C.E.V. at small scales now.

If AGI emerges from automation, how can we build alignment into that?