Zvi recently asked on Twitter:
If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now?
To which Eliezer replied:
Human intelligence augmentation.
And then elaborated:
No time for GM kids to grow up, so:
- collect a database of genius genomes and try to interpret for variations that could have effect on adult state rather than child development
- try to disable a human brain's built-in disablers like rationalization circuitry, though unfortunately a lot of this seems liable to have mixed functional and dysfunctional use, but maybe you can snip an upstream trigger circuit
- upload and mod the upload
- neuralink shit but aim for 64-node clustered humans
This post contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and provides the following taxonomy for applications of neurotechnology to alignment:
- BCIs to extract human knowledge
- neurotech to enhance humans
- understanding human value formation
- cyborgism
- whole brain emulation
- BCIs creating a reward signal
It also includes the opinions of attendees (stated to be 16 technical researchers and domain specialists), who provide the following analysis of these options:
Aside from the above post, I have seen very little recent discussion of HIA outside of cyborgism. This could be because I am simply looking in the wrong places, or because the topic is rarely discussed as a legitimate AI safety agenda. The following is a list of questions I have about the topic:
- Does anyone have a comprehensive list of organizations working on HIA or related technologies?
- Perhaps producing something like this map for HIA might be valuable.
- Does independent HIA research exist outside of cyborgism?
- My intuition is that HIA research probably has a much higher barrier to entry than, say, mechanistic interpretability (in both cost and background education). Does this make it unfit for independent research?
- (If you think HIA is a good agenda:) What are some concrete steps that we (members of the EA and LW communities) can take to push HIA forward for the sake of AI safety?
EDIT: "We have to Upgrade" is another recent piece on HIA which has some useful discussion in the comments and in which some people give their individual thoughts, see: Carl Shulman's response and Nathan Helm-Burger's response.
Pretty positive. I suspect that playing a lot of ordinary video games as a child contributed at least somewhat positively to my current level of fluid intelligence.
Playing games or doing training exercises specifically designed to train fluid intelligence and reasoning ability, using a BCI or other neurotech, seems like it could plausibly move the needle at least a bit, in both children and adults.
And I think even small enhancements could lead to large, compounding benefits when applied at scale, due to better coordination ability and general improvements to baseline sanity.
The research on brain training seems to disagree with you about how much it could have helped non-task-specific intelligence.