Especially targeting working memory, long-term memory, and conceptual understanding.

This thread is exclusively for things like highly-risky self-gene-therapy, or getting a brain-computer interface surgically implanted. No "get more sleep" or "try melatonin" here.

(If the idea is really good/anti-inductive, you might DM or email it to me instead.)


Human brains are most likely undertrained on text data. 

Looking at the scaling laws from DeepMind's Chinchilla work, a system with as many parameters and as much compute as the brain should be trained on vastly more text than any human could read in their lifetime. Thus, it seems plausible that "lack of training data" is a significant bottleneck on human cognitive capabilities. 

Hence the question: how do we best increase the efficiency of our own text training process? There are two dimensions to this question:

  • What text should we include? 
    • High-quality training text related to the downstream domain of interest is best. Finding such text is often the bottleneck in consuming it, so ideally we'd compile a large corpus of excellent-quality human pretraining text.
    • One option might be to train a "relevant text classifier", which identifies well-written text about domains that are useful for alignment, such as alignment research, ML, math, neuroscience, biology, game theory, etc. Then, use that classifier to scour the internet and all journals / books / etc. for useful text and compile the results.
  • How should we "train" on as much text as possible?
    • One simple option is to just read more, but reading is slow. There's only so much time in the day, and reading takes time away from other activities. 
    • Another option is to convert the text into audio form using text to speech. This makes it vastly more convenient to listen to large quantities of text, but has other issues such as:
      • Images are unavailable.
      • Pausing or replaying past text is often inconvenient.
      • Math and LaTeX are almost never captured well.
    • The third option, and the one I believe would be most powerful / scalable, is to use a multi-modal pretrained model to convert the text + images + math into latent representations, then to feed those latents (or dimension-reduced versions of those latents) to the human via one or more of their sensory channels. 
      • What I mean by that is to have some way of translating the latent representation of the text into sensory input for the human, e.g., into an audio signal played into the human's ears, or into vibrations which a system such as the Eyeronman vest delivers as tactile sensations.
        • Doing so would require training the human to decode these sensory representations, but that should be manageable. We'd show the human side-by-side instances of the original text / images / math along with the compressed sensory representations.
      • This approach allows for a greater throughput of information into the human, while avoiding the expense, risk and technical difficulties associated with brain computer interfaces. 
      • It also allows the human to take advantage of the preprocessing provided by the pretrained transformer, meaning that the effective compute available to the human increases as well. 
      • It may also lead to a form of "knowledge distillation" from the pretrained transformer to the human. 
        • Knowledge distillation is an approach in machine learning where the latent knowledge contained in a larger, more powerful ML model is "distilled" into a smaller model. The typical approach is for the smaller model to be trained to imitate the latent representations of the larger model on some reduced corpus of training data.
        • In this case, the human learns to process the latents generated by the model. If those latents contain representations of super-humanly performant abstractions, the human may pick up on such abstractions.
        • This could potentially aid in interpretability as well, if the human in question develops a sense for the model's internal representations.
      • We could also reduce the dimensionality of the model's latent representations or strip away irrelevant information so as to further increase the richness / density of the human's input data.
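To make the knowledge-distillation point above concrete: the usual objective is a temperature-softened cross-entropy between the teacher's and student's output distributions. A minimal NumPy sketch with toy logits (my illustration of the standard technique, not anything from the post):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax along the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution; minimizing it pulls the student's outputs toward
    the teacher's. The T**2 factor is the conventional gradient rescaling."""
    p = softmax(teacher_logits, T)              # teacher's soft targets
    log_q = np.log(softmax(student_logits, T))  # student's log-probabilities
    return float(-(p * log_q).sum(axis=-1).mean() * T**2)
```

Since cross-entropy is minimized exactly when the two distributions match, the loss bottoms out when the student reproduces the teacher's soft targets.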

Overall, the approach that I think would be most effective is:

  • Collect a corpus of high quality text, images and math from books and articles that might be alignment relevant (maybe ~20 GB of text).
  • Take a multi-modal transformer pretrained on much more data than the corpus (basically, you'd use the best multi-modal model available).
  • Find some scalable method of translating the model's latents into information-dense, human-learnable sensory input. 
    • These would initially appear like random noise / images / vibrations to a human, but with exposure, start to make sense as the brain adapts to the new encoding. 
    • Analogously, the Eyeronman vests I mentioned translate 3-D scene representations into vibrations. After enough time with one, people can pick up a sense of what the environment around them is like through the vibrations from the vest.
  • Translate the corpus into those sensory representations.
  • Feed them to the human.
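The latent-to-sensory translation step above could be prototyped very crudely. A sketch using random arrays as stand-ins for a real model's latents (shapes are my assumption), with PCA as one of many possible dimension reductions, mapped onto color channels:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a multi-modal model's latents: 64 text chunks, 512 dims each.
latents = rng.normal(size=(64, 512))

# PCA via SVD down to 3 components, one per color channel.
centered = latents - latents.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ Vt[:3].T  # shape (64, 3)

# Rescale each channel to [0, 255] and tile into an 8x8 RGB "glyph" image,
# one colored tile per chunk -- the random-seeming mishmash a human would
# gradually learn to decode alongside the original text.
lo, hi = reduced.min(axis=0), reduced.max(axis=0)
pixels = ((reduced - lo) / (hi - lo) * 255).astype(np.uint8)
image = pixels.reshape(8, 8, 3)
```

A real system would swap the random latents for a pretrained model's hidden states and tune the reduction for learnability, but the shape of the pipeline is the same.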

This is a pretty basic setup. Information only flows in one direction, from the model to the human. Most likely, there are ways of improving things by having the model learn to produce latent representations that are more useful for the cognitive tasks the human intends to perform. E.g., the methodology in "Training Language Models with Language Feedback" can be adapted so that the human can provide feedback on what sorts of things the model should focus more / less on. 

Note that the approach of translating external information into sensory inputs handles the "getting lots of information into the human" problem. The "getting lots of information out of the human" problem isn't quite so easy to handle. Humans receive more information from their senses than they transmit via their actions, so just watching human actions probably doesn't offer as high a throughput. Potentially, we can use non-invasive brain imaging tech, which seems to be progressing faster than "read + write" brain-computer interfaces. Having high-throughput input and output channels for the brain would let us properly do the whole "merging with technology" thing and keep up with mildly superhuman AIs[1], for a while at least.

  1. ^

    I expect horse-versus-automobile comparisons in response to this point. I think this analogy isn't actually illuminating here, because learning systems can be combined much more easily than physical systems. E.g., DeepMind's multi-modal Flamingo model took frozen layers from the text-only Chinchilla model and integrated them with a smaller number of trainable parameters for handling the image and image-to-text side of things. Provided you let two learning systems adapt to each other (or even just let one adapt to the other), it's relatively straightforward to combine them.

The idea of generating and directly transferring a pre-digested latent representation is super interesting, but my prior is that this couldn't work. The way a neural network trained from initially randomized weights represents concepts is likely to be highly idiosyncratic to that particular network. Perhaps this could be accomplished between AIs if we can somehow make that process and initial state less random, but how could that ever work for humans?

The highest-bandwidth sensory input for humans is their eyes. Doesn't this idea just amount to diagrams of high-dimensional data?

Quintin Pope
It works for AIs very easily. Just feed the patents from AI 1 into AI 2. No need for special engineering of the two AIs. It also works for humans, at least somewhat. E.g., the Eyeronman vests I mentioned translate 3-D scene representations into vibrations. After enough time with one, people can pick up a sense of what the environment around them is like through the vibrations from the vest. Translating LLM patents into visual input wouldn’t look like normal diagrams. It would look like a random-seeming mishmash of colors and shapes which encode the LLM’s latents. A person would then be shown many pairs of text and the encoded latents the model generated for the text. In time, I expect the person would gain a “text sense” where they can infer the meaning of the text from just the visual encoding of the model’s latents.
gilch
I think I'm lacking some jargon here. What's a latent/patent in the context of a large language model? "patent" is ungoogleable if you're not talking about intellectual property law. The Eyeronman link didn't seem very informative. No explanation of how it works. I already knew sensory substitution was a thing, but is this different somehow? Is there some neural net pre-digesting its outputs? Is it similarly a random-seeming mishmash? Are there any other examples of this kind of thing working for humans? Visually? Would the mishmash from a smaller text model be any easier/faster for the human to learn?
ESRogs
My money's on: typo.

Oooh, this is very promising! I had a semi-similar idea for images instead of text, basically like this but in reverse.

More of a joke, but this post has some ideas: Mad-science IQ Enhancement:

Slatestarcodex has a great post on the intelligence of birds.1

The big takeaway: due to the weight restrictions of flight, birds have been under huge evolutionary pressure to miniaturize their neurons. The result?

Driven by the need to stay light enough to fly, birds have scaled down their neurons to a level unmatched by any other group. Elephants have about 7,000 neurons per mg of brain tissue. Humans have about 25,000. Birds have up to 200,000. That means a small crow can have the same number of neurons as a pretty big monkey.

Bird brains are 10x more computationally dense than our own? This is a big deal. To put this in perspective, if you replaced just 10% of your brain volume with crow neurons you could almost double your computational capacity.
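Checking that claim against the quoted densities is a one-liner:

```python
# Neurons per mg of brain tissue, from the figures quoted above.
human_density = 25_000
crow_density = 200_000

# Replace 10% of brain tissue (by mass) with crow-density neurons:
mixed_density = 0.9 * human_density + 0.1 * crow_density
gain = mixed_density / human_density
print(gain)  # 1.7 -- so "almost double" works out to about a 1.7x neuron count
```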

I know this sounds like complete insanity, but given how every intervention to raise IQ has been ineffective, this bird-brain scheme is probably much more promising than any pharmaceutical based approach.

This raises all sorts of interesting questions. How does miniaturization affect heat dissipation? Oxygenation? Energy consumption? Could one build a human-sized brain with bird-dense neurons?

Though the brain is immunologically privileged, there is still the neuroimmune system. Can we genetically modify crow cortical neuron progenitor cells to not trigger the neuroimmune system? Is there any chance of human neurons and bird-neurons integrating usefully?

Brain grafts have been unpromising for intelligence enhancement because you would have to replace much of the brain to make a difference. With bird tissue, this problem is 10x less relevant.

Or more ominously, what if we breed crows for brain size? Hyper-Intelligent flightless crows pecking keyboards in hedgefund basements? To paraphrase Douglas Adams, there is another theory which states that this is already happening.

Could one build a whale-sized brain with fairy wasp–sized neurons?

Neanderthals had a bigger braincase than Homo sapiens. It's not likely that they were any smarter than us, but we can't totally rule that out. They're so closely related to us that their genes for bigger heads ought to be pretty compatible if we splice them in where they belong. Would a literally bigger brain volume with otherwise-sapiens neurons make us any smarter?

Fairy wasp neurons are even smaller [citation needed].

I actually spent several years studying this question intensely while in grad school studying neuroscience. I have a lot of thoughts about promising avenues, but my ultimate conclusion was that there just wasn't time to get a working product before the world developed superintelligent AGI, and thus, it wasn't worth pursuing.

For the record, this seems bonkers to me. (Hope this isn't rude, I just want to be frank; not saying you're bonkers or anything, just that I super super don't see the logic here.) Like, are you saying that you're 90% sure we'll get AGI in the next 5 years? Or are you saying that you're 90% sure we'll get AGI in the next 15 years, and that we wouldn't get a 30 IQ point boosting drug within the next 10 years if we tried? IMO it would be valuable to the world for you to write up, even in very rough form, your ideas for intelligence enhancement, and your back of envelope wild guesses as to their costs and benefits and what it would take to investigate/test them.

Nathan Helm-Burger
Yeah, I'm around 95% on AGI in the next fifteen years and less than 1% on a 30-IQ-point-boosting drug in that time, even with lots of funding and smart people on the problem. What seems bonkers to me is that anyone smart enough to do novel neuroscience work of that caliber isn't already working full time on AGI alignment. I am of the opinion that you just need to be smart, competent/agentic, and somewhat scientifically/mathematically educated to have a chance of meaningfully contributing to alignment research. I want more such people focusing on that ASAP.
TekhneMakre
  [Maybe hard to explore this question in this context, but this seems likely mistaken (in particular, poorly calibrated). Curious why you think this if you're willing to share.]   Could you say more? What about 15 IQ points? What are the obstacles? What are methods you considered and why did they seem infeasible or ineffective?
Nathan Helm-Burger
I'm working on a post about my neuroscience-informed timelines, and where my understanding agrees and disagrees with Ajeya's Bio Anchors report. I'll separately keep in mind your request for summary of my neuroscience research and why I think it's not tractable in the timeframe I think we have.
TekhneMakre
Thanks!

I recently saw What's up with psychonetics?. It seems like a kind of meditation practice, but one focused on gaining access to and control of mental/perceptual resources. Not sure how risky this is, but the linked text had some warnings about misuse. It might be applicable to working or long-term memory, and specifically talks about conceptual understanding ("pure meanings") as a major component of the practice.

There's a tuplamancy-adjacent construct called a "servitor", which sounds like a kind of persistent hallucination that might be able to perform various automatic functions. I can't imagine such a thing being any more useful than a smartphone (probably much less), but perhaps it would have a faster direct-mental interface.

For example, I wonder if some kind of "notepad" servitor could expand one's working memory, which seems like a major bottleneck in humans. I.e. quickly offload writing/images to the persistent hallucination, and then just look at it to reload it. This would be easy enough to test with something like digit-span or dual-n-back.
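A digit-span harness for such an experiment is trivial to script. In this sketch, `recall_fn` (a hypothetical callback I'm introducing for illustration) stands in for the human reading digits back off the servitor:

```python
import random

def span_limit(recall_fn, max_len=20, seed=0):
    """Longest digit sequence recall_fn reproduces exactly,
    stepping up from length 3 until the first failure."""
    rng = random.Random(seed)
    best = 0
    for n in range(3, max_len + 1):
        digits = [rng.randrange(10) for _ in range(n)]
        if recall_fn(digits) == digits:
            best = n
        else:
            break
    return best
```

Comparing `span_limit` with and without the servitor (against a baseline of normal unaided recall) would give a crude but quantitative answer.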

Given the way reading fails to work in dreams (it gets completely confabulated based on expectations, and then erased/regenerated as soon as you look away/back), I think there's a significant chance a servitor couldn't persist writing with any reliability, but it might be worth a try.

Trepanning. I've heard rumors that this reduces CSF pressure on the brain allowing it to expand a bit and increase intelligence. It was once common practice among many ancient cultures, but that doesn't prove they had good reasons. I find the supposed effect and proposed mechanism of action highly dubious, and, of course, brain damage/infection may well kill you if done improperly, but you asked for "risky" and "wacky".

[This comment is no longer endorsed by its author]

 if I were a billionaire with some means of getting around the FDA

Do you have to get around the FDA if you're doing non-commercialized, non-academically-published research? Like, basically, friends making legal (because novel) chemicals for friends to take voluntarily?

IIRC Gwern's Nootropics and "Algernon Argument" pages basically came to the same idea: bearish on most nootropics, except stimulants and moda, due to evolutionary/biological tradeoffs.

TekhneMakre
Algernon's Law doesn't apply if you can eliminate the tradeoff. E.g. if the tradeoff was intelligence vs. energy costs, and it's no longer difficult to feed your intelligence-enhanced human 5000 calories a day reliably.
Nicholas / Heather Kross
Exactly, hence the support for stimulants and/or modafinil, which trade off calories.
TekhneMakre
It would be interesting to see analyses of other tradeoffs, and use that to hypothesize other classes of nootropics. E.g. if there's some neural operation that's bottlenecked on some chemical that takes a long time to synthesize or requires rare materials or causes damage or something, we could supply that chemical, or figure out how to prevent that damage, or something. 

Perhaps not wacky enough, so I'll just comment, but language is something of a tool of thought. Proper math notation can be the difference between something being unthinkable and obvious. Some notations, like APL, are extremely terse, allowing one to fit entire algorithms in one's head at once that would otherwise only be understandable in smaller chunks. Perhaps learning and expanding upon such notations could be valuable.
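NumPy's array notation descends directly from the APL tradition and gives a flavor of this terseness: a Sieve-of-Eratosthenes-style prime listing fits in a single array expression (a toy illustration of notation density, not a claim about the best way to compute primes):

```python
import numpy as np

n = 50
# Every composite below n has a factor <= sqrt(n), so the outer product of
# those small factors with all candidates enumerates the composites;
# set-subtracting them from the candidates leaves the primes.
primes = np.setdiff1d(
    np.arange(2, n),
    np.outer(np.arange(2, int(n**0.5) + 1), np.arange(2, n)).ravel(),
)
```

The whole algorithm is visible at once, which is exactly the "fit it in your head" property the comment is pointing at.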

Similarly, there are conlangs with extreme terseness, such as Ithkuil. This language is extremely difficult, and even its creator cannot speak it with any fluency (and even if someone could, they probably couldn't express themselves any more quickly than they could in a natural language like English). However, once an utterance is generated, one could probably fit more information in one's auditory loop at once, effectively expanding one's (linguistic) working memory a bit.

fit entire algorithms in one's head at once that would otherwise only be understandable in smaller chunks. Perhaps learning and expanding upon such notations could be valuable.

My first reaction was to wonder how this is any different from what already happens in pure math, theoretical physics, TCS, etc. Reflecting on this led to my second reaction: jargon brevity correlates with (utility x frequency), which is domain-specific (cf. Terry Tao's remarks on useful notation). Cross-domain work requires a lot of overhead (to manage stuff like avoiding namespace collisions, but the more general version of this), and this overhead plausibly increases superlinearly with the number of domains. That would be reflected in the language as the sort of thing the late Fields medalist Bill Thurston mentioned re: formalizing math:

Mathematics as we practice it is much more formally complete and precise than other sciences, but it is much less formally complete and precise for its content than computer programs. The difference has to do not just with the amount of effort: the kind of effort is qualitatively different. In large computer programs, a tremendous proportion of effort must be spent on myriad compatibility issues: making sure that all definitions are consistent, developing “good” data structures that have useful but not cumbersome generality, deciding on the “right” generality for functions, etc. The proportion of energy spent on the working part of a large program, as distinguished from the bookkeeping part, is surprisingly small. Because of compatibility issues that almost inevitably escalate out of hand because the “right” definitions change as generality and functionality are added, computer programs usually need to be rewritten frequently, often from scratch.

In practice the folks who I'd trust most to have good opinions on how useful such notations-for-thought would be are breadth + detail folks (e.g. Gwern), people who've thought a lot about adjacent topics (e.g. Michael Nielsen and Bret Victor), and generalists who frequently correspond with experts (e.g. Drexler). I'd be curious to know what they think.