Zvi recently asked on Twitter:
If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now?
To which Eliezer replied:
Human intelligence augmentation.
And then elaborated:
No time for GM kids to grow up, so:
- collect a database of genius genomes and try to interpret for variations that could have effect on adult state rather than child development
- try to disable a human brain's built-in disablers like rationalization circuitry, though unfortunately a lot of this seems liable to have mixed functional and dysfunctional use, but maybe you can snip an upstream trigger circuit
- upload and mod the upload
- neuralink shit but aim for 64-node clustered humans
This post contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and provides the following taxonomy for applications of neurotechnology to alignment:
- BCIs to extract human knowledge
- neurotech to enhance humans
- understanding human value formation
- cyborgism
- whole brain emulation
- BCIs creating a reward signal
It also includes the opinions of attendees (described as 16 technical researchers and domain specialists), who provide the following analysis of these options:
Aside from the above post, I have seen very little recent discussion of HIA outside of cyborgism. This could be because I am simply looking in the wrong places, or because the topic is not discussed much as a legitimate AI safety agenda. The following is a list of questions I have about the topic:
- Does anyone have a comprehensive list of organizations working on HIA or related technologies?
- Perhaps producing something like this map for HIA might be valuable.
- Does independent HIA research exist outside of cyborgism?
- My intuition is that HIA research probably has a much higher barrier to entry than, say, mechanistic interpretability (in both cost and background education). Does this make it unfit for independent research?
- (If you think HIA is a good agenda:) What are some concrete steps that we (members of the EA and LW communities) can take to push HIA forward for the sake of AI safety?
EDIT: "We have to Upgrade" is another recent piece on HIA which has some useful discussion in the comments and in which some people give their individual thoughts, see: Carl Shulman's response and Nathan Helm-Burger's response.
Maybe not as long as you're thinking; people can be very intelligent and creative at young ages (and this may be amplified in someone gene-edited for high intelligence). 'Adolescence' is mostly a recent social construct, and a lot of norms/common beliefs about children exist more to keep them disempowered.