Zvi recently asked on Twitter:
If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now?
To which Eliezer replied:
Human intelligence augmentation.
And then elaborated:
No time for GM kids to grow up, so:
- collect a database of genius genomes and try to interpret for variations that could have effect on adult state rather than child development
- try to disable a human brain's built-in disablers like rationalization circuitry, though unfortunately a lot of this seems liable to have mixed functional and dysfunctional use, but maybe you can snip an upstream trigger circuit
- upload and mod the upload
- neuralink shit but aim for 64-node clustered humans
This post contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and provides the following taxonomy for applications of neurotechnology to alignment:
- BCIs to extract human knowledge
- neurotech to enhance humans
- understanding human value formation
- cyborgism
- whole brain emulation
- BCIs creating a reward signal
It also includes the opinions of attendees (stated to be 16 technical researchers and domain specialists) who provide the following analysis of these options:
Outside of cyborgism, I have seen very little recent discussion of HIA aside from the above post. This could be because I am simply looking in the wrong places, or because the topic is not discussed much as a legitimate AI safety agenda. The following is a list of questions I have about the topic:
- Does anyone have a comprehensive list of organizations working on HIA or related technologies?
- Producing something like this map for HIA might be valuable.
- Does independent HIA research exist outside of cyborgism?
- My intuition is that HIA research probably has a much higher barrier to entry than, say, mechanistic interpretability (both in cost and in background education). Does this make it unfit for independent research?
- (If you think HIA is a good agenda:) What are some concrete steps that we (members of the EA and LW communities) can take to push forward HIA for the sake of AI safety?
EDIT: "We have to Upgrade" is another recent piece on HIA with useful discussion in the comments, where some people give their individual thoughts; see Carl Shulman's response and Nathan Helm-Burger's response.
I think somatic gene therapy, while technically possible in principle, is extremely unpromising for intelligence augmentation. Creating a super-genius is almost trivial with germ-line engineering. Provided we know enough causal variants, one would only need to make a low-hundreds number of edits to a single cell to make someone smarter than any human who has ever lived. With somatic gene therapy you would almost certainly have to alter billions of cells to get anywhere.
Networking humans is interesting, but we are nowhere close to the bandwidth needed. As a rough guess, suppose we need bandwidth similar to that of the corpus callosum; Neuralink is ~5 OOMs off.
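Treating channel count as a crude proxy for bandwidth, the gap above can be sketched as a quick back-of-the-envelope calculation. Both figures below are rough assumptions for illustration (a commonly cited ~2×10^8 axons for the corpus callosum, and ~10^3 recording channels for a current Neuralink-class implant), not measurements:

```python
import math

# Assumed order-of-magnitude figures (not measurements):
corpus_callosum_axons = 2e8   # ~200 million axons, commonly cited estimate
implant_channels = 1024       # ~1e3 channels for a current-generation implant

# Orders of magnitude separating the two, using channel count as a
# crude stand-in for usable bandwidth.
ooms_short = math.log10(corpus_callosum_axons / implant_channels)
print(f"~{ooms_short:.1f} orders of magnitude short")  # ~5.3
```

Under these assumptions the gap comes out to roughly five orders of magnitude, consistent with the estimate above; the real shortfall could differ if per-channel information rates differ substantially between electrodes and axons.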
I suspect human intelligence enhancement will not progress much in the next 5 years, not counting human/ML hybrid systems.
Not really true: known SNP variants associated with high intelligence have a relatively small effect in total. The best way to make a really smart baby with current techniques is with donor egg and sperm, or cloning.
It is also possible that variance in intelligence among humans is due to something analogous to initialization values in neural networks: lucky or well-crafted starting values can result in higher final performance, but injecting those values into an already-trained network just adds noise. You can't really change macrostructures in the brain with gene therapy in adults, after all.