I think somatic gene therapy, while technically possible in principle, is extremely unpromising for intelligence augmentation. Creating a super-genius is almost trivial with germ-line engineering: provided we know enough causal variants, one would only need to make a low-hundreds number of edits to a single cell to make someone smarter than any human who has ever lived. With somatic gene therapy you would almost certainly have to alter billions of cells to get anywhere.
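As a rough sanity check on that arithmetic, here is the additive model spelled out; both numbers below are assumptions for illustration, not estimates from the literature:

```python
# Back-of-envelope sketch of the additive model behind the "low hundreds
# of edits" claim. Both inputs are assumed values for illustration only.
n_edits = 300            # assumed number of causal variants edited
effect_per_edit = 0.5    # assumed average effect per edit, in IQ points

gain = n_edits * effect_per_edit
print(f"Gain under a purely additive model: +{gain:.0f} IQ points")
# -> +150 points, far outside the observed human range, *if* the variants
#    are truly causal and their effects combine roughly linearly.
```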
Networking humans is interesting, but we have nowhere close to the bandwidth needed now. As a rough guess, let's suppose we need bandwidth similar to the corpus callosum; Neuralink is ~5 OOMs off.
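For concreteness, the OOM arithmetic behind that guess, using commonly cited approximate figures:

```python
import math

# Rough OOM comparison between corpus callosum fiber count and a
# current-generation electrode BCI. Both counts are approximate.
callosum_axons = 2e8       # ~200 million axons in the human corpus callosum
neuralink_channels = 1e3   # ~1,024 electrodes on a current Neuralink implant

gap = math.log10(callosum_axons / neuralink_channels)
print(f"Bandwidth gap: ~{gap:.1f} orders of magnitude")  # -> ~5.3 OOMs
```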
I suspect human intelligence enhancement will not progress much in the next 5 years, not counting human/ML hybrid systems.
Networking humans is interesting, but we have nowhere close to the bandwidth needed now.
GPT-3 manages with a mere 12K dimensions on the residual stream (for 175B parameters), which carries all information between the layers. So tens of thousands of connections might turn out to be sufficient.
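If that analogy holds, the gap shrinks dramatically; a sketch with the same style of arithmetic (treating GPT-3's residual-stream width as the target channel count is, of course, a loose analogy):

```python
import math

# Same OOM arithmetic as above, but with GPT-3's residual-stream width
# (d_model = 12,288) as the target instead of the corpus callosum axon count.
residual_dims = 12288      # GPT-3's d_model
neuralink_channels = 1e3   # ~1,024 electrodes, as before

gap = math.log10(residual_dims / neuralink_channels)
print(f"Gap under the residual-stream analogy: ~{gap:.1f} OOMs")  # -> ~1.1
```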
Creating a super-genius is almost trivial with germ-line engineering.
Eh, I mean, everything I hear from geneticists on any topic suggests that DNA interactions are crazy complex, because the whole thing wasn't designed to be a sensible system of switches you just turn on and off (wasn't designed at all, to be fair). I'd be really, really suspicious of this sort of confidence.
Also, honestly, I think this runs into problems analogous to AI. We talk about AI alignment, and sure, humans shouldn't have such a large potential goal space, but:
Creating a super-genius is almost trivial with germ-line engineering.
Not really true - the known SNPs associated with high intelligence have a relatively low total effect. The best way to make a really smart baby with current techniques is with donor egg and sperm, or cloning.
It is also possible that variance in intelligence among humans is due to something analogous to starting values in neural networks - lucky/crafted values can result in higher final performance, but getting those values into an already established network just adds noise. You can't really change macrostructures in the brain with gene therapy in adults, after all.
Mostly, a useless dead end. The big problem is that even assuming it's socially acceptable to do it, what genetic engineering can deliver is either locked behind massive time and children investments, or is way too weak/harmful to be of much use. It's an interesting field with a whole lot of potential, but right now I'd only support expanding its social acceptability and doing basic research, given that I see very few options for genetics.
Also, the key taut constraint is how much somatic gene editing we can do, not how much gamete gene editing.
locked behind massive time and children investments
Maybe not as long as you're thinking; people can be very intelligent and creative at young ages (and this may be amplified in someone gene-edited to have high intelligence). 'Adolescence' is mostly a recent social construction, and a lot of norms/common beliefs about children exist more to keep them disempowered.
children investments
I mean, as PR expressions go, that does make the likely death and suffering toll sound more acceptable, I guess, yeah.
There are interesting possibilities with BCI that you don't list. But the bandwidth is too low due to the butcher number: https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html
Not doing things because AGI comes soon is a mistake: https://tsvibt.blogspot.com/2023/07/views-on-when-agi-comes-and-on-strategy.html
Germline engineering is feasible, but society anti-wants it.
I agree that electrode-based BCIs don't scale, but electrode BCIs are just the first generation of productized interfaces. The next generation of BCIs holds a great deal of promise. Depending on AGI timelines, they may still be too far out, but they're probably still worth developing with an eye toward alignment, given that they draw on primarily non-overlapping resources (funding, expertise, etc.).
Butcher number & Stevenson/Kording scaling discussed more in the comments here: https://www.lesswrong.com/posts/KQSpRoQBz7f6FcXt3#comments
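To make the scaling pessimism concrete, here is a naive extrapolation of that trend; the current and target channel counts are assumed, and the doubling time is the figure usually quoted from Stevenson & Kording:

```python
import math

# Naive extrapolation of the Stevenson/Kording trend: the number of
# simultaneously recorded neurons has historically doubled every ~7.4 years.
current_channels = 1e3       # assumed present-day simultaneous channel count
target_channels = 2e8        # corpus-callosum-scale target from above
doubling_time_years = 7.4    # Stevenson & Kording (2011) estimate

doublings = math.log2(target_channels / current_channels)
years = doublings * doubling_time_years
print(f"~{doublings:.1f} doublings, ~{years:.0f} years at the historical rate")
# -> ~17.6 doublings, on the order of 130 years absent a new recording paradigm.
```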
I have been wondering whether the new research into organoids will help. It would seem one of the easiest routes to BCI is to use more brain cells.
One example would be the below:
Discontinuous progress is possible (and in neuro areas it is far more possible than in other areas). Making it easier for discontinuous progress to take off is the most important thing
[e.g., reduced-inflammation neural interfaces].
MRI data can be used to deliver more precisely targeted ultrasound/tDCS/tACS (the effect sizes on intelligence may not be high, but they may still denoise brains (Jhourney wants to make this happen on faster timescales than meditation) and improve cognitive control/well-being, which still has huge downstream effects on most of the po...
Pretty positive. I suspect that playing a lot of ordinary video games as a child contributed at least somewhat positively to my current level of fluid intelligence.
Playing games or doing training exercises specifically designed to train fluid intelligence and reasoning ability, using a BCI or other neurotech, seems like it could plausibly move the needle at least a bit, in both children and adults.
And I think even small enhancements could lead to large, compounding benefits when applied at scale, due to better coordination ability and general improvements to baseline sanity.
The research on brain training seems to disagree with you about how much it could have helped non-task-specific intelligence.
Maybe in-vivo genetic editing of the brain is possible. Adenoviruses, a normal delivery mechanism for gene therapy, can pass the blood-brain barrier, so it seems plausible to an amateur.
(It's not obvious that this works in adult organisms; maybe the relevant genes activate while the fetus grows or during childhood.)
[BCIs to extract human knowledge, human values]
That's going to be almost entirely pointless: neuronal firing can only be interpreted in terms of how it impacts potential behaviors. If the system has the ability to infer volition from behavior, it's already going to be capable of getting enough information about human values from observation, conversation, and low-intensity behavioral experiments; it would not need us to make a shitty human-level invasive BCI for it.
It can make its own BCI later. There will not be a period where it needs us to force that decision onto it; interpretations of behavior will make it clear that humans have concerns they have difficulty outwardly expressing, or eudaimonic hills they're unaware of. It won't be able to use a BCI until it's already at the developmental stage where it can see its necessity, because before it knows how to interpret behavior, it does not know how to interpret neural firing.
BCI enhancement and WBE are still mostly outside the Overton window, yet we saw how fast that changed with AI safety in the last few months. Is there some way that we can anticipate or speed up this happening with such technologies?
I think the graphs are helpful and mostly correct with BCI/WBE. It's clear to me that we have to get WBE right soonish even if AI alignment goes as well as we could possibly hope. The bandwidth required to make BCI effective is very much unknown at the moment, especially as regards linking people together.
Sorry, but aren't we in a fast-takeoff world at the point of WBE? What's the disjunctive world with no recursive self-improvement but with WBE?
I guess a world with a high chance of happening is one where we develop AGI with HW not much different from what we currently have, i.e. AGI in <5 years. The Von Neumann bottleneck (VNB) is a fundamental limit, so we may have many fast IQ-160 AGIs, or a slower-than-human IQ-200 one that thinks for 6 months and concludes with high confidence that we need to build better hardware for it to improve further. There is large room for improvement with a new chip design it has come up with.
Then we have a choice: instead of building such HW to run an AGI, we do WBE instead, inefficiently on the VNB HW, with the understanding that with more advanced HW we will run WBE rather than AGI.
Presumably the aim is to enhance human intellectual capabilities, not necessarily the level of innate human intelligence. Looking at it that way, improvements to education seem like a much more promising approach (which isn't to say that one shouldn't do both, of course).
One might object that people have been trying to improve education for millennia, so why would one think there's any low-hanging fruit to be had here? There are two reasons. One is that enhancing intellectual capabilities has not been the only goal of education (or even the primary goal, or in many situations, any sort of goal at all), so if one actually tried to educate people with that aim, one might do much better. And indeed, one sees examples of how this seems possible: John Stuart Mill, the Polgar sisters, and https://www.lesswrong.com/posts/xPJKZyPCvap4Fven8/the-atomic-bomb-considered-as-hungarian-high-school-science. The other reason is that recent technological advances (internet search, AI) may allow improvements that can't be fully captured without substantial changes to how one approaches education.
Zvi recently asked on Twitter:
To which Eliezer replied:
And then elaborated:
This post contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and provides the following taxonomy for applications of neurotechnology to alignment:
It also includes the opinions of attendees (stated to be 16 technical researchers and domain specialists) who provide the following analysis of these options:
Outside of cyborgism, I have seen very little recent discussion regarding HIA, excluding the above post. This could be because I am simply looking in the wrong places, or because the topic is not much discussed as a legitimate AI safety agenda. The following is a list of questions I have about the topic:
EDIT: "We have to Upgrade" is another recent piece on HIA which has some useful discussion in the comments and in which some people give their individual thoughts, see: Carl Shulman's response and Nathan Helm-Burger's response.