I want to bring up a point that I almost never hear discussed in AGI conversations, but that to me feels like the only route to a good future for humans. I'm writing this for people who already largely share my view of the AGI trajectory. If you don't agree with the main premises but are interested, there are plenty of other posts that go into why they might be true.
A) AGI seems inevitable.
B) It seems nearly certain that humans (as they are now) lose control soon after AGI. The arguments for us retaining control don't seem to grasp that AI isn't just another tool; I haven't seen any that grapple with what it really means for a machine to be intelligent.
C) It seems very unlikely that AGI will stay aligned with what humans care about. These systems are just so alien. Maybe we can align one for a little while, but that alignment will be unstable; it is very hard to see how it could be maintained in something far smarter than us that is evolving on its own.
D) Even if I'm wrong about B or C, humans are not intelligent/wise enough to handle our current technology level, much less super-powerful AI.
Suppose we manage the incredibly difficult task of aligning AI with, or keeping it controlled by, human will. There are many amazing humans, but also many, many awful ones, and the awful ones will keep doing awful things with far more leverage. That scenario seems pretty disastrous to me. We don't want super-powerful humans without a corresponding increase in wisdom.
To me the conclusion from A+B+C+D is: There is no good outcome (for us) without humans themselves also becoming super intelligent.
So I believe our goal should be to ensure humans stay in control long enough to augment our minds with extra capability (or to upload, but that seems further off). I'm not sure how this will work, but the things Neuralink and science.xyz are doing - developing brain-computer interfaces - feel like steps in that direction. We also need scalable technological ways to work on trauma/psychology, fulfilling needs, and reducing fears. Humans will somehow have to connect with machines to become much wiser, much more intelligent, and much more enlightened. Maybe we can become something like the amygdala of the neo-neocortex.
There are two important timelines in competition here: the time until we can upgrade, and the time for which we can maintain control. We need to upgrade before we lose control. Unfortunately, on the current trajectory, I expect us to lose control before we are able to upgrade. We must work to make sure that isn't the case.
Time Till Upgrade:
- My current estimate is ~15 years. (very big error bars here)
- Ways to shorten
- AI that helps people do this science
- AGI that is good at science and is aligned long enough to help us on this
- More people doing this kind of research
- More funding
- More status to this kind of research
- Maybe better interfaces to the current models will help in the short run, making people more productive and thus speeding this development
Time Left With Control:
- My current estimate is ~6 years
- AGI ~3-4 years (less big error bars)
- Loss of control 2-3 years after AGI (pretty big error bars)
- Ways it could be longer
- AI research slows down
- Hope for safety
- Hope we aren’t as close as it seems
- Hope for a slowness to implement agentic behavior
- Competing Agents
- Alignment is pretty good and defense is easier than offense
- ?
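The race between these two timelines can be sketched numerically. Below is a toy Monte Carlo using the point estimates above; the lognormal shapes and the spread parameters standing in for the "error bars" are purely illustrative assumptions, not measurements.

```python
import random

random.seed(0)

def sample_years(median, sigma):
    """Sample a lognormally distributed duration (in years) with the
    given median and log-space spread (a stand-in for "error bars")."""
    return median * random.lognormvariate(0, sigma)

trials = 100_000
upgrade_first = 0
for _ in range(trials):
    # Time until we can meaningfully upgrade: ~15y median, very wide bars.
    t_upgrade = sample_years(15, 0.8)
    # Time until AGI (~3.5y, tighter bars) plus ~2.5y from AGI to loss of control.
    t_control = sample_years(3.5, 0.4) + sample_years(2.5, 0.6)
    if t_upgrade < t_control:
        upgrade_first += 1

print(f"P(upgrade before loss of control) ~ {upgrade_first / trials:.2f}")
```

Under these made-up spreads the upgrade wins only a minority of the time, which is the point: on current medians, we lose the race unless one of the "ways to shorten" or "ways it could be longer" above actually moves a median.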
In short, one of the most underrepresented ways to work on AI safety is to work on BCI.
The only way forward is through!
There is a way to do ultrasound-mediated delivery of genes across the blood-brain barrier. See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9137703/ and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6546162/
Gene-editing short-sleeper genes into humans seems more tractable than editing intelligence/neuroplasticity/postsynaptic-density genes (short sleep has a very distinctly identifiable gene - https://lukepiette.posthaven.com/reducing-sleep-1 ). But given the stakes, gene-editing/gene therapy of intelligence genes is totally worth trying too.
https://diyhpl.us/wiki/genetic-modifications/ and https://diyhpl.us/wiki/hplusroadmap/ have a lot of material. (Bryan Bishop has more of a certain something [it correlates with knowing what all the right pointers are] than anyone else in the area.)
I have friends (Walter Patterson and Mac Davis) who run Minicircle - a novel way of doing gene delivery - see https://www.rapamycin.news/t/minicircle-this-biohacking-company-is-using-a-crypto-city-to-test-controversial-gene-therapies-mit-tech-rev/5647 . Walter also has experience with ultrasound-mediated delivery techniques! They're among the most open-minded and approachable people I know. The pool of people willing to try Minicircle overlaps heavily with the pool of people (like Liz Parrish) willing to try radical interventions that others see as "too risky" - but we need these people.
(https://rle4.life/longevitygenedeliverysystem may be more promising for gene therapy)
See more here:
As for intelligence-enhancing genes - ask people at the ISIR conference (Stephen Hsu, James J. Lee, etc.). Even Emil O. W. Kirkegaard has some pointers. See https://emilkirkegaard.dk/en/2019/02/a-partial-test-of-duf1220-for-population-differences-in-intelligence/
For developing new tools to interrogate biological systems (including brain-based diagnostics to read out differences in the brain after a gene-therapy intervention [you can start in mice first]), Sam Rodriques and Adam Marblestone (and Ed Boyden lab members) should be broadly useful. Maybe brain organoids can move quickly enough to be worth a shot even if their translational relevance is far from guaranteed - Herophilus is broadly doing tech development for this (though I don't know if it covers gene therapy of intelligence/short-sleeper genes).
Also related - https://forum.effectivealtruism.org/posts/hGY3eErGzEef7Ck64/mind-enhancement-cause-exploration
rewind.ai is one way to bring in cyborgism. Many groups in the MIT Media Lab (Social Physics, Affective Computing, Pattie Maes) have many of the right parts (along with Neurable/Neurosity/etc.), but it's unclear whether they are nimble enough to make the necessary thing happen.
Possibly important/relevant names: Mina Fahmi, https://www.linkedin.com/in/shagun-maheshwari-75b8b7150/, Stephen Frey
NEAR-TERM, AI will produce superabundance and give us the chance to find more unique ways to increase intelligence without increasing cognitive overload (expanding the space of "microimprovements" that are Pareto-efficient). This includes reducing microplastic load, reducing pollution load, better optimizing sleep, and better optimizing the nutrition of AI/alignment researchers (88% of Americans are metabolically unhealthy, and there are many Pareto-efficient improvements - like rapamycin, acarbose, canagliflozin, and plasmalogens - that may not incur any tradeoffs). It also includes more support structures for the hundreds of students who now want to drop out of school because school is not "modern" enough to help them adapt to the age of AI (GPT-4 was the wake-up call for many Gen Z'ers that they don't want to be taking APs anymore, or that "all of HS was useless"). People complain of "sucking at programming because they didn't learn it at age 11/12" - we can train young people to be BCI programmers at younger ages so that they won't have the same complaint when they're older. Eliezer Yudkowsky constantly wishing he had the energy levels of a 25-year-old is proof enough that many brain-longevity improvements are Pareto-efficient (he is also proof that more unschooling is pro-"trustworthy AI") [as are professors in their 30s saying "don't count on your memory being as sharp as it was 10 years ago"].
Leopold Aschenbrenner says that we need WAY more AI alignment researchers, but the fraction of people smart enough for AI alignment research at any level [*] is not high (pretty much EVERYONE I know doing alignment research is extremely smart - at minimum within the top few percentiles of human intelligence, if not the top 0.5%). That rules out most people unless we pursue human enhancement.
[*] I stress at any level because the fraction drops drastically at the highest levels (e.g. at the level required to understand Vanessa Kosoy's or MIRI-level work, it's probably the top 0.1% - and even those levels may not be high enough to make a meaningful dent in AI risk).
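To make the percentile talk concrete, here is a quick normal-distribution calculation. The IQ thresholds and the hypothetical 10-point enhancement shift are illustrative assumptions; the point is just that a modest shift in the population mean multiplies the size of the tail.

```python
from statistics import NormalDist

# Standard IQ scale: mean 100, SD 15. "Top few percent" ~ IQ 130+,
# top ~0.1% ~ IQ 146+. The 10-point shift is a made-up assumption.
iq = NormalDist(mu=100, sigma=15)
enhanced_iq = NormalDist(mu=110, sigma=15)

for threshold in (130, 146):
    baseline = 1 - iq.cdf(threshold)          # fraction above threshold now
    enhanced = 1 - enhanced_iq.cdf(threshold) # fraction after a +10 shift
    print(f"IQ >= {threshold}: {baseline:.4%} of people -> {enhanced:.4%} "
          f"(~{enhanced / baseline:.1f}x larger pool)")
```

The multiplier grows with the threshold: the higher the bar for the research, the more a small across-the-board enhancement expands the eligible pool.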
Reducing further global decline of fluid intelligence with age (e.g. by reducing pollution/microplastic levels - we already see that StarCraft ability declines after age 24) is also necessary, especially because human brains vary widely in how fast they decline, and the net effect of slowing brain aging on total integrated human compute may be larger now than ever before (because of the size of the human population). Reducing intelligence decline with age is also more tractable than increasing intelligence, especially because American brains shrink much faster than the brains of an indigenous tribe. The strength of brain waves recordable by EEG also decreases with aging (making it much harder for BCIs to discriminate intent) - further evidence that reducing the rate of brain aging is the most important/tractable thing for "upgrading".
We also need more frictionless nootropics pipelines (which, thanks to their low cognitive overhead, integrate well with better BCIs). The book "How to Change Your Mind" was written about psychedelics (which have strangely become more popular than nootropics), but it could have been written about nootropics instead. I'm friends with a nootropics startup founder who is trying decentralized ways of testing his nootropic combinations (the combos may have more potential than individual nootropics) and making it frictionless for people to integrate nootropics into their workflow. In an age of near-term AGI where old habits may guarantee extinction, we must increase our openness/neuroplasticity toward trying new things, and nootropics, injected peptides (possibly p21 or cerebrolysin), and psychoplastogens could do a better job than psychedelics at making people sustainably adopt new habits into their daily pipelines (psychedelics massively disrupt one's day and cannot be taken too often - psychoplastogens or nootropics, however, can be taken daily). Cf. David Olson's lab.
There is a way to make ALL of this integrate frictionlessly into people's pipelines (and to see how people retroactively modify rewind.ai data => presumably one could even calculate differences in processing speed/working memory just from rewind.ai-ish data). I don't know what the scaling curves for drug synthesis are, but there are paths through which it becomes cheaper much more rapidly (even if done in Roatan or Zuzalu), making mass A/B testing of psychoplastogens much easier than before.
[With all the data we collect from Twitch streams and rewind.ai (on top of IRL Neurosity/Neurable data on brainwaves), it may already be possible to measure and sum up the tiny effects of small brain-health practices.]
[Brainwave data is often used by cognitive-control researchers - e.g. Randall O'Reilly or David Badre - and enhanced cognitive control can do a lot even in the absence of intelligence markers. It's too bad the labs only collect proprietary data that is never integrated into a global database; that could change if we coordinate with the psychologists who study it.] Almost no one has done a proper study on the effects of nootropics/stimulants on cognitive control or brainwaves; this should at minimum be done by any institution aiming to enhance human cognition, and it could presumably attract loads of funding. Even Eliezer Yudkowsky has now suggested intelligence enhancement in humans as a strategy, especially if paired with an actual slowdown.
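As a sketch of what the core analysis of such a study could look like: a within-subject (paired) comparison of the same people on and off a compound. The subject scores below are invented purely to illustrate the statistics; nothing here is real data.

```python
import math
import statistics

# Hypothetical within-subject data: a cognitive-control score (e.g. derived
# from EEG or a task battery) for the same 8 subjects off and on a nootropic.
# These numbers are made up purely to illustrate the analysis.
off_drug = [52.1, 48.3, 55.0, 50.2, 47.8, 53.4, 49.9, 51.5]
on_drug  = [54.0, 49.1, 56.2, 52.8, 48.0, 55.1, 50.3, 53.9]

diffs = [on - off for on, off in zip(on_drug, off_drug)]
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)
n = len(diffs)

# Paired t statistic: mean difference divided by its standard error.
t_stat = mean_diff / (sd_diff / math.sqrt(n))
print(f"mean improvement = {mean_diff:.2f}, paired t = {t_stat:.2f} (df = {n - 1})")
```

The paired design is what makes small-n biohacker-style data usable at all: each subject serves as their own control, so stable between-person differences cancel out of the comparison.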
(I know biohacker circles who have experience with injected peptides - that's how I injected SS-31 into myself for the first time. I don't know the effect size of this on neuroplasticity, but if it can be done with minimal overhead [esp as AI drives down the cost of labor], it's worth trying)
Comprehensive metabolomic/proteomic profiling is also becoming way cheaper and can be done with minimal cognitive overhead. See SomaLogic and the Snyder lab for more (some labs have found signatures of sleep deprivation in SomaLogic proteomics - one can and should extend this to general patterns of "enhanced/deprived cognition" and use it to predict which people lose out less from fewer hours of sleep) => this could be paired with brainwave data. Some quantified-self'ers have many ideas in the right direction, but tbh they still aren't the most curious people, and I'd probably be the best one amongst them if not for my various hangups (oh wait, this is how I could apply for funding; obvs I also need to start taking Focalin after a long hiatus). Even Mike Lustgarten has hangups over things I don't have hangups over.
[also biometrics X video games [or tutoring] => may even enable a "freemium" model for games]
Jhourney is a brain-inspired way to "shortcut" "revelations" or "Romeo-Stevens-like" states in people and is developed by very legit neuroscientists (it's possible that nootropics could be integrated into this already extensive brainwave data)
I know one person applying BCI technology to study gamers - his name is Alex Milenkovic and he is SUPER approachable (see my YouTube channel). Nootropics can easily be integrated into his pipeline to see how they affect EEG brainwaves.
Alignment means minimal loss in capturing the intent of human preferences (including memory and context loss, and the loss in translation when someone mentors/tutors a single person but not the other people who could benefit from the same training), AND minimal loss of taste (taste being the better allocation of attention/transformer layers).
https://foresight.org/whole-brain-emulation-workshop-2023/
[FYI there is nothing to prevent us from cutting open the skull and enlarging the size of the brain (there are neural replacement/repair startups though it is unknown if the technology is mature yet)]
Milan Cvitkovic has also just written another article on the same lines: https://milan.cvitkovic.net/writing/neurotechnology_is_critical_for_ai_alignment/
https://cell.substack.com/p/darpa-neurotech
[perhaps some solutions to the biohackers/neurotech/law coordination problem will be discussed at https://zuzalu.super.site/about !]
The goal of transhumanism is to transcend our genetic limitations - to enable a larger pool of people than just the genetically privileged to contribute to science/innovation. Maybe only 1-4% of the population is capable of doing cutting-edge scientific (or alignment) work, but we can massively increase that number via brain enhancement (finding ways to bring 50th-percentile brains to the 99th percentile [though better AI-driven tutoring may also help] - and this may be easier than enhancing 99th-percentile brains, though the latter may matter more for the most global kind of risk). The pool of innovations adjacent to GPT-4 will cause major disruptions to how we learn and prove ourselves within 1-2 years - originality is the only thing that matters, so break free of old patterns and move toward what we know the high-agency "ideal protagonist" (with zero scarcity mindset) would value.
[Neurofeedback is expensive, but I think there is a viable case study where I could ask for funding related to this and stream enough of myself to make others want to adopt it on an accelerated timetable.] Some people roughly have intuition about this, but I think this is where much of my unique value lies.
[Maybe no one here will appreciate me yet, but I hope GPT-5 will. There are many mixed-order interactions (depending on 3 or more variables) with extremely large third-or-higher-order coefficients that have not been discovered yet, simply because software/AI has not been powerful enough to implement higher-order interactions depending on 3 or more variables, or certain time-lagged regressions/dependencies that would previously have been forgotten... Gamma becomes more important in densely connected systems.]
A workshop on organoid intelligence was held just a few weeks ago - https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1017235
https://hub.jhu.edu/2023/02/28/organoid-intelligence-biocomputers/
(from https://www.nature.com/articles/s41467-021-22741-9 ) / https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6363383/
https://research.vu.nl/en/persons/natalia-goriounova/publications/ - her research combines neurobiology X intelligence X dendritic complexity more than anyone else's (way more biological than ISIR research)
https://alleninstitute.org/news/living-brain-donors-are-helping-us-better-understand-our-own-neurons-including-those-potentially-linked-to-alzheimers-disease/
https://research.vu.nl/en/persons/djai-heyer
For genetic modifications like short sleep or increased intelligence: how many of the upgrades target somatic cells, and how many target germline cells?
If a genetic modification or upgrade applies to somatic cells, how fast does it take effect - that is, when should you start expecting it to work?
How strong are the genetic modifications or upgrades people can get for various traits?