Zvi recently asked on Twitter:
If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now?
To which Eliezer replied:
Human intelligence augmentation.
And then elaborated:
No time for GM kids to grow up, so:
- collect a database of genius genomes and try to interpret for variations that could have effect on adult state rather than child development
- try to disable a human brain's built-in disablers like rationalization circuitry, though unfortunately a lot of this seems liable to have mixed functional and dysfunctional use, but maybe you can snip an upstream trigger circuit
- upload and mod the upload
- neuralink shit but aim for 64-node clustered humans
This post contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and provides the following taxonomy for applications of neurotechnology to alignment:
- BCIs to extract human knowledge
- neurotech to enhance humans
- understanding human value formation
- cyborgism
- whole brain emulation
- BCIs creating a reward signal
It also includes the opinions of attendees (stated to be 16 technical researchers and domain specialists), who provide their own analysis of these options.
Outside of cyborgism and the post above, I have seen very little recent discussion of HIA. This could be because I am simply looking in the wrong places, or because the topic is not discussed much as a legitimate AI safety agenda. Here are some questions I have about the topic:
- Does anyone have a comprehensive list of organizations working on HIA or related technologies?
- Producing something like this map for HIA might be valuable.
- Does independent HIA research exist outside of cyborgism?
- My intuition is that HIA research probably has a much higher barrier to entry than, say, mechanistic interpretability (both in cost and in background education). Does this make it unfit for independent research?
- (If you think HIA is a good agenda:) What are some concrete steps that we (members of the EA and LW communities) can take to push forward HIA for the sake of AI safety?
EDIT: "We have to Upgrade" is another recent piece on HIA with useful discussion in the comments, where some people give their individual thoughts; see Carl Shulman's response and Nathan Helm-Burger's response.
If so, one might imagine getting there via high-end non-invasive BCI, provided one uses closed loops: the electronic side specifically aims at changing the very signal it reads from the biological side, and that feedback is how it knows its signals are having an effect.
Of course, the risks of doing that are quite formidable even with non-invasive BCI, and various precautions should be taken. (But at least no surgery is involved, iterations would be much quicker and less expensive, and the regulatory environment would be much lighter, since nothing formally considered a medical procedure seems to be involved.)
One might want to try something like this in parallel with Neuralink-style efforts...
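The closed-loop idea above can be illustrated with a toy simulation. This is a minimal sketch under invented assumptions, not a model of any real BCI: the "electronic side" reads a scalar stand-in for a biological signal, compares it to a target, and applies a small proportional stimulus; convergence of the measured signal toward the target is the loop's evidence that its output is actually affecting what it reads. All dynamics and parameters (`gain`, the 0.5 response factor) are placeholders.

```python
def run_closed_loop(target: float, steps: int = 200, gain: float = 0.2) -> list[float]:
    """Toy closed-loop controller: read signal, compare to target, stimulate."""
    signal = 0.0   # stand-in for the measured biological signal
    history = []
    for _ in range(steps):
        error = target - signal      # read the signal and compare to the goal state
        stimulus = gain * error      # proportional corrective output
        signal += 0.5 * stimulus     # invented "biological" response to stimulation
        history.append(signal)
    return history

history = run_closed_loop(target=1.0)
# If the loop is effective, the measured signal approaches the target;
# if the stimulus had no effect on the signal, the error would never shrink,
# which is exactly the distinction a closed loop can detect and an open loop cannot.
print(abs(history[-1] - 1.0) < 1e-6)
```

The point of the sketch is only the structural one made in the comment: because the controller's output feeds back into the signal it reads, effectiveness is observable from within the loop itself.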