How can we make many humans who are very good at solving difficult problems?

Summary (table of made-up numbers)

I made up the made-up numbers in this table of made-up numbers; therefore, the numbers in this table of made-up numbers are made-up numbers.

Call to action

If you have a shitload of money, there are some projects you can give money to that would make supergenius humans on demand happen faster. If you have a fuckton of money, there are projects whose creation you could fund that would greatly accelerate this technology.

If you're young and smart, or are already an expert in stem cell / reproductive biology, biotech, or anything related to brain-computer interfaces, there are some projects you could work on.

If neither, think hard, maybe I missed something.

You can DM me or gmail me at tsvibtcontact.

Context

The goal

What empowers humanity is the ability of humans to notice, recognize, remember, correlate, ideate, tinker, explain, test, judge, communicate, interrogate, and design. To increase human empowerment, improve those abilities by improving their source: human brains.

AGI is going to destroy the future's promise of massive humane value. To prevent that, create humans who can navigate the creation of AGI. Humans alive now can't figure out how to make AGI that leads to a humane universe.

These are desirable virtues: philosophical problem-solving ability, creativity, wisdom, taste, memory, speed, cleverness, understanding, judgement. These virtues depend on mental and social software, but can also be enhanced by enhancing human brains.

How much? To navigate the creation of AGI will likely require solving philosophical problems that are beyond the capabilities of the current population of humans, given the available time (some decades). Six standard deviations above the mean is about 1 in 10^9; seven standard deviations is about 1 in 10^12. So the goal is to create many people who are 7 SDs above the mean in cognitive capabilities. That's "strong human intelligence amplification". (Why not more SDs? There are many downside risks to changing the process that creates humans, so going further is an unnecessary risk.)
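
As a quick check of those rarities (a minimal sketch in standard-library Python; it assumes the trait is Gaussian, per the SD framing above):

    import math

    def upper_tail(sd):
        # P(Z > sd) for a standard normal Z, via the complementary error function
        return 0.5 * math.erfc(sd / math.sqrt(2))

    for sd in (6, 7):
        p = upper_tail(sd)
        print(f"+{sd} SD: P = {p:.2e} (about 1 in {1 / p:.1e})")

    # prints roughly:
    # +6 SD: P = 9.87e-10 (about 1 in 1.0e+09)
    # +7 SD: P = 1.28e-12 (about 1 in 7.8e+11)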

It is my conviction that this is the only way forward for humanity.

Constraint: Algernon's law

Algernon's law: If there's a change to human brains that human-evolution could have made, but didn't, then it is net-neutral or net-negative for inclusive relative genetic fitness (IGF). If intelligence is ceteris paribus a fitness advantage, then a change to human brains that increases intelligence must either come with other disadvantages or else be inaccessible to evolution.

Ways around Algernon's law, increasing intelligence anyway:

  • We could apply a stronger selection pressure than human-evolution applied. The selection pressure that human-evolution applied to humans is capped (somehow) by the variation of IGF among all germline cells. So it can only push down mutational load to some point.
  • Maybe human-evolution (perhaps recently) selected against intelligence beyond some point.
  • We could come up with better design ideas for mind-hardware.
  • We could use resources that evolution didn't have. We have metal, wires, radios, basically unlimited electric and metabolic power, reliable high-quality nutrition, mechanical cooling devices, etc.
  • Given our resources, some properties that would have been disadvantages are no longer major disadvantages. E.g. a higher metabolic cost is barely a meaningful cost.
  • We have different values from evolution; we might want to trade away IGF to gain intelligence.

How to know what makes a smart brain

Figure it out ourselves

  • We can test interventions and see what works.
  • We can think about what, mechanically, the brain needs in order to function well.
  • We can think about thinking and then think of ways to think better.

Copy nature's work

  • There are seven billion natural experiments, juz runnin aroun doin stuff. We can observe the behaviors of the humans and learn what circumstances of their creation lead to fewer or more cognitive capabilities.
  • We can see what human-evolution invested in, aimed at cognitive capabilities, and add more of that.

Brain emulation

The approach

Method: figure out how neurons work, scan human brains, make a simulation of a scanned brain, and then use software improvements to make the brain think better.

The idea is to have a human brain, but with the advantages of being in a computer: faster processing, more scalable hardware, more introspectable (e.g. read access to all internals, even if they are obscured; computation traces), reproducible computations, A/B testing components or other tweaks, low-level optimizable, process forking. This is a "figure it out ourselves" method——we'd have to figure out what makes the emulated brain smarter.

Problems

  • While we have some handle on the fast (<1 second) processes that happen in a neuron, no one knows much about the slow (>5 second) processes. The slow processes are necessary for what we care about in thinking. People working on brain emulation mostly aren't working on this problem because they have enough problems as it is.

  • Experiments here, the sort that would give 0-to-1 end-to-end feedback about whether the whole thing is working, would be extremely expensive; and unit tests are much harder to calibrate (what reference to use?).

  • Partial success could constitute a major AGI advance, which would be extremely dangerous. Unlike most of the other approaches listed here, brain emulations wouldn't be hardware-bound (skull-size bound).

  • The potential for value drift——making a human-like mind with altered / distorted / alien values——is much higher here than with the other approaches. This might be especially selected for: subcortical brain structures, which are especially value-laden, are more physiologically heterogeneous than cortical structures, and therefore would require substantially more scientific work to model accurately. Further: because the emulation approach is based on copying as much as possible and then filling in details by seeing what works, many details will be filled in by non-humane processes (such as the shaping processes in normal human childhood).

Fundamentally, brain emulations are a 0-to-1 move, whereas the other approaches take a normal human brain as the basic engine and then modify it in some way. The 0-to-1 approach is more difficult, more speculative, and riskier.

Genomic approaches

These approaches look at the 7 billion natural experiments and see which genetic variants correlate with intelligence. IQ is a very imperfect but measurable and sufficient proxy for problem-solving ability. Since >70% of the variance in IQ is explained by genetic variation, we can extract a lot of what nature knows about what makes brains have many capabilities. We can't get that knowledge about capable brains in a form usable as engineering (to build a brain from scratch), but we can at least get it in a form usable as scores (which genomes make brains with fewer or more capabilities). These are "copy nature's work" approaches.

Adult brain gene editing

The approach

Method: edit IQ-positive variants into the brain cells of adult humans.

See "Significantly Enhancing ...".

Problems

  • Delivery is difficult.

  • Editors damage DNA.

  • The effect is greatly attenuated, compared to germline genetics. In adulthood, many learning windows have already closed; many genes are no longer active; damage that accumulates has already been accumulated; many cells don't receive the edits. This adds up to an optimistic ceiling somewhere around +2 or +3 SDs.

Germline engineering

This is the way that will work. (Note that there are many downside risks to germline engineering, though AFAICT they can be alleviated to such an extent that the tradeoff is worth it by far.)

The approach

Method: make a baby from a cell whose genome contains many IQ-positive genetic variants.

Subtasks:

  • Know what genome would produce geniuses. This is already solved well enough. Because there are already polygenic scores for IQ that explain >12% of the observed variance in IQ (pgscatalog.org/score/PGS003724/), 10 SDs of raw selection power would translate into trait selection power at a rate greater than √(1/9) = 1/3, giving >3.3 SDs of IQ trait selection power, i.e. +50 IQ points. (A worked version of this arithmetic appears after this list.)

  • Make a cell with such a genome. This is probably not that hard——via CRISPR editing stem cells, via iterated meiotic selection, or via chromosome selection. My math and simulations show that several methods would achieve strong intelligence amplification. If induced meiosis into culturable cells is developed, IMS can provide >10 SDs of raw selection power given very roughly $10^5 and a few months.

  • Know what epigenomic state (in sperm / egg / zygote) leads to healthy development. This is not fully understood——it's an open problem that can be worked on.

  • Given a cell, make a derived cell (diploid mitotic or haploid meiotic offspring cell) with that epigenomic state. This is not fully understood——it's an open problem that can be worked on. This is the main bottleneck.

These tasks don't necessarily completely factor out. For example, some approaches might try to "piggyback" off the natural epigenomic reset by using chromosomes from natural gametes or zygotes, which will have the correct epigenomic state already.
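
The worked version of the selection arithmetic from the first subtask (a minimal sketch; the 12% variance figure is from the post, and the only other ingredient is the standard rule that a predictor explaining a fraction v of variance has accuracy √v, so k SDs of selection on the score yield about √v·k SDs on the trait):

    import math

    variance_explained = 0.12          # PGS003724, per the post
    r = math.sqrt(variance_explained)  # predictor accuracy, ~0.346 > 1/3
    raw_selection_sds = 10             # raw selection power on the score
    trait_sds = r * raw_selection_sds  # ~3.46 SDs of IQ
    iq_points = trait_sds * 15         # IQ is normed to SD = 15
    print(f"trait gain ~ {trait_sds:.2f} SD ~ +{iq_points:.0f} IQ points")

    # prints roughly: trait gain ~ 3.46 SD ~ +52 IQ points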

See also Branwen, "Embryo Selection ...".

More information on request. Some of the important research is happening, but there's always room for more funding and talent.

Problems

  • It takes a long time; the baby has to grow up. (But we probably have time, and delaying AGI only helps if you have an out.)

  • Correcting the epigenomic state of a cell to be developmentally competent is unsolved.

  • The baby can't consent, unlike with other approaches, which work with adults. (But the baby can also be made genomically disposed to be exceptionally healthy and sane.)

  • It's the most politically contentious approach.

Signaling molecules for creative brains

The approach

Method: identify master signaling molecules that control brain areas or brain developmental stages that are associated with problem-solving ability; treat adult brains with those signaling molecules.

Due to evolved modularity, organic systems are governed by genomic regulatory networks (GRNs). Maybe we can isolate and artificially activate GRNs that generate physiological states that produce cognitive capabilities not otherwise available in a default adult's brain. The hope is that there's a very small set of master regulators that can turn on larger circuits with strong orchestrated effects, as is the case with hormones, so that treatments are relatively simple, high-leverage, and discoverable. For example, maybe we could replicate the signaling context that activates childish learning capabilities, or maybe we could replicate the signaling context that activates parietal problem-solving in more brain tissue.

I haven't looked into this enough to know whether or not it makes sense. This is a "copy nature's work" approach: nature knows more about how to make brains that are good at thinking, than what is expressed in a normal adult human.

Problems

  • Who knows what negative effects might result.

  • Learning windows might be irreversibly lost after childhood, e.g. by long-range connections being irrecoverably pruned.

Brain-brain electrical interface approaches

Brain-computer interfaces don't obviously give an opportunity for large increases in creative philosophical problem-solving ability. See the discussion in "Prosthetic connectivity". The fundamental problem is that we, programming the computer part, don't know how to write code that does transformations that will be useful for neural minds.

But brain-brain interfaces——adding connections between brain tissues that normally aren't connected——might increase those abilities. These approaches use electrodes to read electrical signals from neurons, then transmit those signals (perhaps compressed/filtered/transformed) through wires / fiber optic cables / EM waves, then write them to other neurons through other electrodes. These are "copy nature's work" approaches, in the sense that we think nature made neurons that know how to arrange themselves usefully when connected with other neurons.

Problems with all electrical brain interface approaches

  • The butcher number. Current electrodes kill more neurons than they record. That doesn't scale safely to millions of connections.
  • Bad feedback. Neural synapses are not strictly feedforward; there is often reciprocal signaling and regulation. Electrodes wouldn't communicate that sort of feedback, which might be important for learning.

Massive cerebral prosthetic connectivity

[Figure: white matter in the human brain. Source: https://www.neuromedia.ca/white-matter/]

Half of the human brain is white matter, i.e. neuronal axons with fatty sheaths around them to make them transmit signals faster. White matter is ~1/10 the volume of rodent brains, but ~1/2 the volume of human brains. Wiring is expensive and gets minimized; see "Principles of Neural Design" by Sterling and Laughlin. All these long-range axons are a huge metabolic expense. That means fast, long-range, high-bandwidth (so to speak: there are many different points involved) communication is important to cognitive capabilities.

A better-researched comparison would be helpful. But vaguely, my guess is that if we compare long-range neuronal axons to metal wires, fiber optic cables, or EM transmissions, we'd see (amortized over millions of connections): axons are in the same ballpark in terms of energy efficiency, but slower, lower bandwidth, and more voluminous. This leads to:

Method: add many millions of read-write electrodes to several brain areas, and then connect them to each other.

See "Prosthetic connectivity" for discussion of variants and problems. The main problem is that current brain implants furnish <10^4 connections, but >10^6 would probably be needed to have a major effect on problem-solving ability, and electrodes tend to kill neurons at the insertion site. I don't know how to accelerate this, assuming that Neuralink is already on the ball well enough.

Human / human interface

Method: add many thousands of read-write electrodes to several brain areas in two different brains, and then connect them to each other.

If one person could think with two brains, they'd be much smarter. Two people connected is not the same thing, but could get some of the benefits. The advantages of an electric interface over spoken language are higher bandwidth, lower latency, lower cost (producing and decoding spoken words), and potentially more extrospective access (direct neural access to inexplicit neural events). But it's not clear that there should be much qualitative increase in philosophical problem-solving ability.

A key advantage over prosthetic connectivity is that the benefits might require a couple ooms fewer connections. That alone makes this method worth trying, as it will probably be feasible soon.

Interface with brain tissue in a vat

Method: grow neurons in vitro, and then connect them to a human brain.

The advantage of this approach is that it would in principle be scalable. The main additional obstacle, beyond any neural-neural interface approaches, is growing cognitively useful tissue in vitro. This is not completely out of the question——see "DishBrain"——but who knows if it would be feasible.

Massive neural transplantation

The approach

Method: grow >10^8 neurons (or appropriate stem cells) in vitro, and then put them into a human brain.

There have been some experiments along these lines, at a smaller scale, aimed at treating brain damage.

The idea is simply to scale up the brain's computing wetware.

Problems

  • It would be a complex and risky surgery.
  • We don't know how to make high-quality neurons in vitro.
  • The arrangement of the neurons might be important, and would be harder to replicate. Using donor tissue might fix this, but becomes more gruesome and potentially risky.
  • It might be difficult to get transplanted tissue to integrate. There's at least some evidence that human cerebral organoids can integrate into mouse brains.
  • Problem-solving might be bottlenecked on long-range communication rather than neuron count.

Support for thinking

Generally, these approaches try to improve human thinking by modifying the algorithm-like elements involved in thinking. They are "figure it out ourselves" approaches.

The approaches

There is external support:

Method: create artifacts that offload some elements of thinking to a computer or other external device.

E.g. the printing press, the text editor, the search engine, the typechecker.

There is mental software:

Method: create methods of thinking that improve thinking.

E.g. the practice of mathematical proof, the practice of noticing rationalization, the practice of investigating boundaries.

There is social software:

Method: create methods of social organization that support and motivate thinking.

E.g. a shared narrative in which such-and-such cognitive tasks are worth doing, the culture of a productive research group.

Method: create methods of social organization that constitute multi-person thinking systems.

E.g. git.

Problems

  • The basic problem is that the core activity, human thinking, is not visible or understood. As a consequence, problems and solutions can't be shared / reproduced / analysed / refactored / debugged. Philosophers couldn't even keep paying attention to the question. There are major persistent blind spots around important cognitive tasks that have bad feedback.
  • Solutions are highly context dependent——they depend on variables that aren't controlled by the technology being developed. This adds to the unscalability of these solutions.
  • The context contains strong adversarial memes, which limits these properties of solutions: speed (onboarding time), scope (how many people), mental energy budget (fraction of each person's energy), and robustness (stability over time and context).

FAQ

What about weak amplification

Getting rid of lead poisoning should absolutely be a priority. It won't greatly increase humanity's maximum intelligence level though.

What about ...

  • BCIs? weaksauce
  • Nootropics? weaksauce
  • Brain training? weaksauce
  • Transplanting bird neurons? Seems risky and unlikely to work.
  • Something something bloodflow? weaksauce
  • Transcranial magnetic stimulation? IDK, probably weaksauce. This is a "counting up from negative to zero" thing; might remove inhibitions or trauma responses, or add useful noise that breaks anti-helpful states, or something. But it won't raise the cap on insight, probably——people get to have their peak problem-solving sometimes anyway.
  • Ultrasound? ditto
  • Neurofeedback? Possibly... seems like a better bet than other stuff like this, but probably weaksauce.
  • Getting good sleep? weaksauce——good but doesn't make supergeniuses
  • Gut microbiome? weaksauce
  • Mnemonic systems? weaksauce
  • Software exobrain? weaksauce
  • LLMs? no
  • Psychedelics? stop
  • Buddhism? Aahhh, I don't think you get what this is about
  • Embracing evil? go away
  • Rotating armodafinil, dextromethorphan, caffeine, nicotine, and lisdexamfetamine? AAHHH NOOO
  • [redacted]? Absolutely not. Go sit in the corner and think about what you were even thinking of doing.

The real intelligence enhancement is ...

Look, I'm all for healing society, healing trauma, increasing collective consciousness, creating a shared vision of the future, ridding ourselves of malign egregores, blah blah. I'm all for it. But it's a difficult, thinky problem. ...So difficult that you might need some good thinking help with that thinky problem...

Is this good to do?

Yeah, probably. There are many downside risks, but the upside is large and the downsides can be greatly alleviated.

Comments (87)

Curated. Augmenting human intelligence seems like one of the most important things-to-think-about this century. I appreciated this post's taxonomy.

I appreciate the made-up graph of made-up numbers that Tsvi made up being clearly labeled as such.

I have a feeling that this post could be somewhat more thorough, maybe with more links to the places where someone could followup on the technical bits of each thread.

adastra22
Do we really want to curate a post featuring a made up graph of entirely made up numbers? Maybe I entirely missed the point of leading off with that figure.
Raemon
The point of made up numbers is that they are a helpful tool for teasing out some implicit information from your intuitions, which is often better than not doing that at all, but, it's important that they are useful in a pretty different way from numbers-you-empirically-got-from-somewhere, and thus it's important that they be clearly labeled as made up numbers that Tsvi made up. See: If it's worth doing, it's worth doing with Made Up Statistics
adastra22
Nothing in that text actually justifies using made up numbers. Note that your example is about explaining an idea—for which there is utility in making up an example to work through—but this discussion is about an article that is discussing very real possible futures. And perhaps the most important takeaway from this entire article is given in the table in question: the likelihood of success and a measure of the outcomes of various human intelligence augmentation approaches. And this table, it appears, is entirely made up. So why should we trust anything at all about the content of this article? What differentiates it from fiction?
TsviBT
Basically what Raemon said. I wanted to summarize my opinions, give people something to disagree with (both the numbers and the rubric), highlight what considerations seem important to me (colored fields); but the numbers are made up (because they are predictions, which are difficult; and they are far from fully operationalized; and they are about a huge variety of complex things, so would be difficult to evaluate; and I've thought hard about some of the numbers, but not about most of them). It's better than giving no numbers, no?
adastra22
No, it is not better than giving no numbers. Usually in quantitative fields (science, engineering, forecasting) it is far better to give no numbers at all than to establish an entirely artificial anchor which may be entirely off-base. If you want a rough number, then make a rough estimate and justify it. Just making up a number without context by which to evaluate it can very well be worse than making no prediction at all.

Brain emulation looks closer than your summary table indicates.

Manifold estimates a 48% chance by 2039.

Eon Systems is hiring for work on brain emulation.

papetoast
Manifold is pretty weak evidence for anything >=1 year away because there are strong incentives to bet on short term markets.
TsviBT
I'm not sure how to integrate such long-term markets from Manifold. But anyway, that market seems to have a very vague notion of emulation. For example, it doesn't mention anything about the emulation doing any useful cognitive work!
Max Lee
Once we get superintelligence, we might get every other technology that the laws of physics allow, even if we aren't that "close" to these other technologies. Maybe they believe in a ≈38% chance of superintelligence by 2039. PS: Your comment may have caused it to drop to 38%. :)

This is great! Everybody loves human intelligence augmentation, but I've never seen a taxonomy of it before, offering handholds for getting started. 

I'd say "software exobrain" is less "weaksauce," and more "80% of the peak benefits are already tapped out, for conscientious people who have heard of OneNote or Obsidian." I also am still holding out for bird neurons with Portia spider architectural efficiency and human cranial volume; but I recognize that may not be as practical as it is cool.

ditto

we have really not fully explored ultrasound and afaik there is no reason to believe it's inherently weaker than administering signaling molecules. 

TsviBT
Signaling molecules can potentially take advantage of nature's GRNs. Are you saying that ultrasound might too?
sarahconstantin
Neuronal activity could certainly affect gene regulation! so yeah, I think it's possible (which is not a strong claim...lots of things "regulate" other things, that doesn't necessarily make them effective intervention points)
TsviBT
Yeah, of course it affects gene regulation. I'm saying that -- maayybe -- nature has specific broad patterns of gene expression associated with powerful cognition (mainly, creativity and learning in childhood); and since these are implemented as GRNs, they'll have small, discoverable on-off switches. You're copying nature's work about how to tune a brain to think/learn/create. With ultrasound, my impression is that you're kind of like "ok, I want to activate GABA neurons in this vague area of the temporal cortex" or "just turn off the amygdala for a day lol". You're trying to figure out yourself what blobs being on and off is good for thinking; and more importantly you have a smaller action space compared to signaling molecules -- you can only activate / deactivate whatever patterns of gene expression happen to be bundled together in "whatever is downstream of nuking the amygdala for a day".

If there's a change to human brains that human-evolution could have made, but didn't, then it is net-neutral or net-negative for inclusive relative genetic fitness. If intelligence is ceteris paribus a fitness advantage, then a change to human brains that increases intelligence must either come with other disadvantages or else be inaccessible to evolution.

You're assuming a steady state. Firstly, evolution takes time. Secondly, if humans were, for example, in an intelligence arms-race with other humans (for example, if smarter people can reliably con dumber...

I think you're underestimating meditation.

Since I've started meditating I've realised that I've been much more sensitive to vibes.

There's a lot of folk who would be scarily capable if they were strong in system 1, in addition to being strong in system 2.

Then there's all the other benefits that meditation can provide if done properly: additional motivation, better able to break out of narratives/notice patterns.

Then again, this is dependent on there being viable social interventions, rather than just aiming for 6 or 7 standard deviations of increase in intelligence.

Meditation has been practiced for many centuries and millions practice it currently.

Please list 3 people who got deeply into meditation, then went on to change the world in some way, not counting people like Alan Watts who changed the world by promoting or teaching meditation.

I think there are many cases of reasonably successful people who often cite either some variety of meditation, or other self-improvement regimes / habits, as having a big impact on their success. This random article I googled cites the billionaires Ray Dalio, Marc Benioff, and Bill Gates, among others. (https://trytwello.com/ceos-that-meditate/)

Similarly you could find people (like Arnold Schwarzenegger, if I recall?) citing that adopting a more mature, stoic mindset about life was helpful to them -- Ray Dalio has this whole series of videos on "life principles" that he likes. And you could find others endorsing the importance of exercise and good sleep, or of using note-taking apps to stay organized.

I think the problem is not that meditation is ineffective, but that it's not usually a multiple-standard-deviations gamechanger (and when it is, it's probably usually a case of "counting up to zero from negative", as TsviBT calls it), and it's already a known technique. If nobody else in the world meditated or took notes or got enough sleep, you could probably stack those techniques and have a big advantage. But alas, a lot of CEOs and other top performers already know to do this ...

Viliam
To compare to the obvious alternative, is the evidence for meditation stronger than the evidence for prayer? I assume there are also some religious billionaires and other successful people who would attribute their success to praying every day or something like that.
Jackson Wagner
Maybe other people have a very different image of meditation than I do, such that they imagine it as something much more delusional and hyperreligious? Eg, some religious people do stuff like chanting mantras, or visualizing specific images of Buddhist deities, which indeed seems pretty crazy to me. But the kind of meditation taught by popular secular sources like Sam Harris's Waking Up app, (or that I talk about in my "Examining The Witness" youtube series about the videogame The Witness), seems to me obviously much closer to basic psychology or rationality techniques than to religious practices. Compare Sam Harris's instructions about paying attention to the contents of one's experiences, to Gendlin's idea of "Circling", or Yudkowsky's concept of "sit down and actually try to think of solutions for five minutes", or the art of "noticing confusion", or the original Feynman essay where he describes holding off on proposing solutions. So it's weird to me when people seem really skeptical of meditation and set a very high burden of proof that they wouldn't apply for other mental habits like, say, CFAR techniques. I'm not like a meditation fanatic -- personally I don't even meditate these days, although I feel bad about not doing it since it does make my life better. (Just like how I don't exercise much anymore despite exercise making my day go better, and I feel bad about that too...) But once upon a time I just tried it for a few weeks, learned a lot of interesting stuff, etc. I would say I got some mundane life benefits out of it -- some, like exercise or good sleep, that only lasted as long as I kept up the habit. and other benefits were more like mental skills that I've retained to today. I also got some very worthwhile philosophical insights, which I talk about, albeit in a rambly way mixed in with lots of other stuff, in my aforementioned video series. I certainly wouldn't say the philosophical insights were the most important thing in my whole life, or anythi
Viliam
Thanks for answering my question directly in the second half. I find the testimonies of rationalists who experimented with meditation less convincing than perhaps I should, simply because of selection bias. People who have pre-existing affinity towards "woo" will presumably be more likely to try meditation. And they will be more likely to report that it works, whether it does or not. I am not sure how much should I discount for this, perhaps I overdo it. I don't know. A proper experiment would require a control group -- some people who were originally skeptical about meditation and Buddhism in general, and only agreed to do some exactly defined exercises, and preferably the reported differences should be measurable somehow. Otherwise, we have another selection bias, that if there are people for whom meditation does nothing, or is even harmful, they will stop trying. So at the end, 100% of people who tried will report success (whether real or imaginary), because those who didn't see any success have selected themselves out. I approve of making the "secular version of Buddhism", but in a similar way, we could make a "secular version of Christianity". (For example, how is gratitude journaling significantly different from thanking God for all his blessing before you go sleep?) And yet, I assume that the objection against "secular Christianity" on Less Wrong would be much greater than against "secular Buddhism". Maybe I am wrong, but the fact that no one is currently promoting "secular Christianity" on LW sounds like weak evidence. I suspect, the relevant difference is that for an American atheist, Christianity is outgroup, and Buddhism is fargroup. Meditation is culturally acceptable among contrarians, because our neighbors don't do it. But that is unrelated to whether it works or not. Also, I am not sure how secular the "secular Buddhism" actually is, given that people still go to retreats organized by religious people, etc. It feels too much for me to trust that s
MondSemmel
Re: successful people who meditate, IIRC in Tim Ferriss' book Tools of Titans, meditation was one of the most commonly mentioned habits of the interviewees.
TsviBT
Are these generally CEO-ish-types? Obviously "sustainably coping with very high pressure contexts" is an important and useful skill, and plausibly meditation can help a lot with that. But it seems pretty different from and not that related to increasing philosophical problem solving ability.
MondSemmel
This random article I found repeats the Tim Ferriss claim re: successful people who meditate, but I haven't checked where it appears in the book Tools of Titans: Other than that, I don't see why you'd relate meditation just to high-pressure contexts, rather than also conscientiousness, goal-directedness, etc. To me, it does also seem directly related to increasing philosophical problem-solving ability. Particularly when it comes to reasoning about consciousness and other stuff where an improved introspection helps most. Sam Harris would be kind of a posterchild for this, right? What I can't see meditation doing is to provide the kind of multiple SD intelligence amplification you're interested in, plus it has other issues like taking a lot of time (though a "meditation pill" would resolve that) and potential value drift.
TsviBT
Got any evidence?

Not really.

Is "give the human a calculator and a scratchpad" not allowed in this list?  i.e. if you give a human brain the ability to instantly recall any fact and solve any math problem (by connecting the human brain to a computer via neuralink) seems like this would make us smarter.

We already see this effect in part. For example, having access to chatGPT allows me to program more complicated projects because I can offload sub-problems to the AI (thereby freeing up working-memory to focus on the remaining complexity).  Even just having a piece of paper I c...

TsviBT
You mean, recall any fact that's been put into text-searchable form in the past and by you, and solve any calculation problem that's in a reasonably common form. I'm saying that the effect on philosophical problem-solving is just not very large. Yeah, if you've been spending 80% of your time on manually calculating things and 20% on "leaps of logic", and you could just as well spend 90% on the leaps, then calculators help a lot. But it's not by making you be able to do significantly better leaps. Maybe you can become better by getting more practice or something? But generally skills tend to plateau pretty sharply--there's always new bottlenecks, like a clicker game. If an improvement only addresses some smallish subset of the difficulty involved in some overall challenge, the overall challenge isn't addressed that much. Like, if you could do calculations with 10x less effort, what calculations would you do to solve alignment, or get AGI banned, or make sure everyone gets food, or fix the housing crisis, or ....? To put it a different way, I don't think Gödel's lack of a big fast calculator mattered too much?
Logan Zoellner
No, I do not mean that at all. An ideal system would store every piece of information its user has ever seen or heard in addition to every book/article/program ever written or recorded and be able to translate problems given in "common english" into objective mathematical proofs then giving an explanation of the answer in English again. This is an empirical question, but based on my own experience I would speculate the gain is quite significant. Again, merely giving me access to a calculator and a piece of paper makes me better at math than 99.99% of people who do not have access to such tools. Would I "solve alignment"? Yes. "get AGI banned" No, because I solved alignment. "make sure everyone gets food, or fix the housing crisis" Both of these are political problems that have nothing to do with "intelligence". If everyone was 10x smarter, maybe they would stop voting for retarded self-destructive policies. Idk, though.
TsviBT
That's what I said. It excludes, for example, a fact that the human thinks of, unless ze speaks or writes it. It makes you better at calculation, which is relevant for some kinds of math. It doesn't make you better at math in general though, no. If you're not familiar with higher math (the sort of things that grad students and professors do), you might not be aware: Most of the stuff that most of them do involves not very much that one would plug in to a calculator. What calculations would you plug into your fast-easy-calculator that result in you solving alignment?
Logan Zoellner
  Already wrote an essay about this.
TsviBT
I don't think most of those proposals make sense, but anyway, the ones that do make sense only make sense with a pretty extreme math oracle--not something that leaves the human to fill in the "leaps of logic". It's just talking about AGI, basically. Which defeats the purpose.
Logan Zoellner
  A "math proof only" AGI avoids most alignment problems.  There's no need to worry about paperclip maximizing or instrumental convergence.
TsviBT
Not true. This isn't the place for this debate, but if you want to know: 1. To get an AGI that can solve problems that require lots of genuinely novel thinking, you're probably pulling an agent out of a hat, and then you have an agent with unknown values and general optimization channels. 2. Even if you only want to solve problems, you still need compute, and therefore wish to conquer the universe (for science!).
Logan Zoellner
  An agent that only thinks about math problems isn't going to take over the real world (it doesn't even have to know the real world exists, as this isn't a thing you can deduce from first principles). We're going to get compute anyway.  Mundane uses of deep learning already use a lot of compute.

I read some years ago that average IQ of kids is approximately 0.25*(Mom IQ + Dad IQ + 2x population mean IQ).  So simplest and cheapest means to lift population average IQ by 1 standard deviation is just use +4 sd sperm (around 1 in 30000), and high IQ ova if you can convince enough genius women to donate (or clone, given recent demonstration of male and female gamete production from stem cells).  +4sd mom+dad = +2sd kids on average.  This is the reality that allows ultra-wealthy dynasties to maintain ~1.3sd IQ average advantage over genera...
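
A quick check of the regression arithmetic in the comment above (a minimal sketch; the 0.25 coefficient amounts to regressing the mid-parent average halfway back to the population mean):

    def expected_kid_iq(mom, dad, mean=100.0):
        # kid = mean + 0.5 * (midparent - mean), rewritten as in the comment
        return 0.25 * (mom + dad + 2 * mean)

    sd = 15.0
    mom = dad = 100.0 + 4 * sd        # +4 SD parents, as in the comment
    print(expected_kid_iq(mom, dad))  # 130.0, i.e. +2 SD kids on average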

I think I'm more optimistic about starting with relatively weak intelligence augmentation. For now, I test my fluid intelligence at various times throughout the day (I'm working on better tests justified by algorithmic information theory in the style of Prof Hernandez-Orallo, like this one but it sucks to take https://github.com/mathemajician/AIQ but for now I use my own here: https://github.com/ColeWyeth/Brain-Training-Game), and I correlate the results with everything else I track about my lifestyle using reflect: https://apps.apple.com/ca/app/reflect-tr...

TsviBT

I think it makes sense to pick the low-hanging fruit first (then attempt incrementally harder stuff with the benefit of being slightly smarter)

No, this doesn't make sense.

I think the stuff you're doing is probably fun / cool / interesting / helpful / something you like. That's great! You don't need to make an excuse for doing it, in terms of something about something else.

But no, that's not the right way to make really smart humans. The right way is to directly create the science and tech. You're saying something like "it stands to reason that if we can get a 5% boost on general intelligence, we should do that first, and then apply that to the tech". But

  • It's not a 5% boost to the cognitive capabilities that are the actual bottlenecks to creating the more powerful tech. It's less than that.
  • What you're actually doing is doing the 5% boost, and never doing the other stuff. Doing the other stuff is better for the purposes of making a bunch of supergeniuses. (Which, again, doesn't have to be your goal!)
Cole Wyeth
I think there's a reasonable chance everything you said is true, except: I intend to do the other stuff after finishing my PhD - though it's not guaranteed I'll follow through.  The next paragraph is low confidence because it is outside of my area of expertise (I work on agent foundations, not neuroscience): The problem with neuralink etc. is that they're trying to solve the bandwidth problem which is not currently the bottleneck and will take too long to yield any benefits. A full neural lace is maybe similar to a technical solution to alignment in the sense that we won't get either within 20 years at our current intelligence levels. Also, I am not in a position where I have enough confidence in my sanity and intelligence metrics to tamper with my brain by injecting neurons into it and stuff. On the other hand, even minor non-invasive general fluid intelligence increase at the top of the intelligence distribution would be incredibly valuable and profits could be reinvested in more hardcore augmentation down the line. I'd be interested to hear where you disagree with this.  It almost goes without saying that if you can make substantial progress on the hardcore approaches that would be much, much more valuable than what I am suggesting, and I encourage you to try.
TsviBT
My guess is that it would be very hard to get to millions of connections, so maybe we agree, but I'm curious if you have more specific info. Why is it not the bottleneck though? That's fair. Germline engineering is the best approach and mostly doesn't have this problem--you're piggybacking off of human-evolution's knowledge about how to grow a healthy human. You're talking about a handful of people, so the benefit can't be that large. A repeatable method to make new supergeniuses is vastly more valuable.
Cole Wyeth
I'm not a neuroscientist / cognitive scientist, but my impression is that rapid eye movements are already much faster than my conscious deliberation. Intuitively, this means there's already a lot of potential communication / control / measurement bandwidth left on the table. There is definitely a point beyond which you can't increase human intelligence without effectively adding more densely connected neurons or uploading and increasing clock speed. Honestly I don't think I'm equipped to go deeper into the details here.  I'm not sure I agree with either part of this sentence. If we had some really excellent intelligence augmentation software built into AR glasses we might boost on the order of thousands of people. Also I think the top 0.1% of people contribute a large chunk of economic productivity - say on the order of >5%.  
TsviBT
I'm talking about neuron-neuron bandwith. https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html I agree that neuron-computer bandwidth has easier ways to improve it--but I don't think that bandwidth matters very much.
Cole Wyeth
Personally I'm unlikely to increase my neuron-neuron bandwidth anytime soon, sounds like a very risky intervention even if possible.

I feel like it would be beneficial to add another sentence or two to the “goal” section, because I’m not at all convinced that we want this. As someone new to this topic, my emotional reaction to reading this list is terror.

Any of these techniques would surely be available to only a small fraction of the world's population. And I feel like that would almost certainly result in a much worse world than today, for many of the same reasons as AGI. It will greatly increase the distance between the haves and the have-nots. (I get the same feeling reading this as...

TsviBT
Ok, I added some links to "Downside risks of genomic selection". Not true! This consideration is the main reason I included a "unit price" column. Germline engineering should be roughly comparable to IVF, i.e. available to middle class and up; and maybe cheaper given more scale; and certainly ought be subsidized, given the decreased lifetime healthcare costs alone. Eh, unless you can explain this more, I think you've been brainwashed by Gattaca or something. Gattaca conflates class with genetic endowment, which is fine because it's a movie about class via a genetics metaphor, but don't be confused that it's about genetics. Did the invention of smart phones increase or decrease the distance? In general, some technologies scale with money, and other technologies scale by bodycount. Each person only gets one brain to receive implants and stuff. Elon Musk, famously extremely rich and baby-obsessed, has what... 12 kids? A peasant could have 12 kids if they wanted to! Germline engineering would therefore be extremely democratic, at least for middle class and up. The solution, of course, is to make the tech even cheaper and more widely available, not to inflict preventable disease and disempowerment on everyone's kids. Stats or GTFO. First, the two specific things you listed are quite genetically heritable. Second, 7 SDs -- which is the most extreme form that I advocate for -- is only a little bit outside the Gaussian human distribution. It's just not that extreme of a change. It seems quite strange to postulate that a highly polygenic trait, if pushed to 5350 out of 10000 trait-positive variants, would suddenly cause major psychological problems, whereas natural-born people with 5250 or 5300 out of 10000 trait-positive variants are fine.
Raemon
I think the terror reaction is honestly pretty reasonable. ([edit: Not, like, necessarily meaning one shouldn't pursue this sort of direction on balance. I think the risks of doing this badly are real and I think the risks of not doing anything are also quite real and probably great for a variety of reasons]) One reason I nonetheless think this is very important to pursue is that we're probably going to end up with superintelligent AI this century, and it's going to be dramatically more alien and scary than the tail-risk outcomes here. I do think the piece would be improved if it acknowledged and grappled with that more.
TsviBT
The essay is just about the methods. But I added a line or two linking to https://tsvibt.blogspot.com/2022/08/downside-risks-of-genomic-selection.html

The genetic portions of this seem like a manifesto for creating highly intelligent, highly depressed, and thus highly unproductive people.

TsviBT
What do you mean? Why would they be depressed? Do you mean because they'd be pressured into working on AGI alignment, or something? Yeah, don't do that. Same as with any other kids, you teach them to be curious and good and kind and strong and free and responsible and so on.

I don't understand. The hard problem of alignment/CEV/etc. is that it's not obvious how to scale intelligence while "maintaining" utility function/preferences, and this still applies for human intelligence amplification.

I suppose this is fine if the only improvement you can expect beyond human-level intelligence is "processing speed", but I would expect superhuman AI to be more intelligent in a variety of ways.

TsviBT
Yeah, there's a value-drift column in the table of made-up numbers. Values matter and are always at stake, and are relatively more at stake here; and we should think about how to do these things in a way that avoids core value drift. You have major advantages when creating humans but tweaked somehow, compared to creating de novo AGI.

  • The main thing is that you're starting with a human. You start with all the stuff that determines human values--a childhood, basal ganglia giving their opinions about stuff, a stomach, a human body with human sensations, hardware empathy, etc. Then you're tweaking things--but not that much. (Except for in brain emulation, which is why it gets the highest value drift rating.)
  • Another thing is that there's a strong built-in limit on the strength of one human: skullsize. (Also other hardware limits: one pair of eyes and hands, one voicebox, probably 1 or 1.5 threads of attention, etc.) One human just can't do that much--at least not without interfacing with many other humans. (This doesn't apply for brain emulation, and potentially applies less for some brain-brain connectivity enhancements.)
  • Another key hardware limit is that there's a limit on how much you can reprogram your thinking, just by introspection and thinking. You can definitely reprogram the high-level protocols you follow, e.g. heuristics like "investigate border cases"; you can maybe influence lower-level processes such as concept-formation by, e.g., getting really good at making new words, but you maybe can't, IDK, tell your brain to allocate microcolumns to analyzing commonalities between the top 1000 best current candidate microcolumns for doing some task; and you definitely can't reprogram neuronal behavior (except through the extremely blunt-force method of drugs).
  • A third thing is that there's a more plausible way to actually throttle the rate of intelligence increase, compared to AI. With AI, there's a huge compute overhang, and you have no idea what dia

Questions I have:

  1. Why do you think the potential capability improvement of human-human interface is that high? Can you say more on how you imagine that working?
  2. For WBE my current not amazingly informed model thinks the bottleneck is finding a way to run it that wouldn't result in huge value drift. Are the 2% your guess that we could run it successfully without value drift, or that we can run it at all in a way that fooms even if it breaks alignment and potentially causes s-risk? For the latter case I'd have higher probability on that we could get that withi
...
TsviBT
These are both addressed in the post.
Towards_Keeperhood
Well for (1) I don't see how what's written in the post matches your 2-20std estimation. You said yourself "But it's not clear that there should be much qualitative increase in philosophical problem-solving ability.". Like higher communication bandwidth would be nice, but it's not like more than 30 people can do significantly useful alignment research and even within those who can there's a huge heavytail IMO. It would help if you could just write more, e.g. if you imagine a smart person effectively getting sth like a bigger brain by recruiting areas from some other person (though then it'd presumably require a decent amount of artificial connections again?). Or do you imagine many people turning into sth like a hivemind (and how more precisely might the hivemind operate and why would they be able to be much smarter together than individually)? Such details would be helpful.

For (2) I just want to ask for clarification whether your 2% estimate in the table includes mitigating the value drift problems you mentioned. (Which then would seem reasonable to me. But one might also read the table as "2% that it works at all and even then there would probably be significant value drift".) Like with a few billion dollars we could manufacture enough electron microscopes to get a human connectome and I'd unfortunately expect that it's not too hard to guess some of the important learning rules and simulate a bunch until the connectome seems like a plausible equilibrium given the firing and learning rules and then it can sorta run and bootstrap even if there's significant divergence from the original human.
TsviBT
Ok some examples:

  • Multiple attention heads.
  • One person solves a problem that induces genuine creative thinking; the other person watches this, and learns how genuine creative thinking works. Not very feasible with current setup, maybe feasible with low-cost hardware access.
  • One person works on a difficult, high-context question; the other person remembers the stack trace, notices and remembers paths [noticed, but not taken, and then forgotten], debugs including subtle shifts, etc. Not very feasible currently without a bunch of distracting exposition. See TAP.
  • More direct (hence faster, deeper) implicit knowledge/skill sharing.

But a lot of the point is that there are thoughtforms I'm not aware of, which would be created by networked people. The general idea is as I stated: you've genuinely moved somewhat away from several siloed human minds, toward something more integrated.
TsviBT
(1): Do you think that one person with 2 or more brains would be 2-20 SDs? I have no idea, that's why the range is so high. (2): The .02 is, as the table says, "as described"; so it should be plausibly a realistic emulation of the human brain. That would include getting slower dynamics right-ish, but wouldn't exclude getting value drift anyway. Maybe. Why do you think this?
Towards_Keeperhood
If I had another copy of my brain I'd guess that might give me like +1std or possibly +2std but very hard to predict. If a +6std person would get another brain from a +5std person the effect would be much lower I'd guess, maybe yielding overall +6.4std or possibly +6.8std. But idk the counterfactual seems hard to predict because I cannot imagine it that concretely. Could be totally wrong. This was maybe not that well expressed. I mostly don't know but it doesn't seem all that unlikely it could work. (I might read your timelines post within a week or so and maybe then I have a better model of your model to better locate cruxes, idk.)
TsviBT
My main evidence is:

  1. It's much easier to see the coarse electrical activity, compared to 5-second / 5-minute / 5-hour processes. The former, you just measure voltage or whatever. The latter you have to do some complicated bio stuff (transcriptomics or other *omics).
  2. I've asked something like 8ish people associated with brain emulation stuff about slow processes, and they never have an answer (either they hadn't thought about it, or they're confused and think it won't matter which I just think they're wrong about, or they're like "yeah totally but we've already got plenty of problems just understanding the fast electrical stuff").
  3. We have very little understanding of how the algorithms actually do their magic, so we're relying on just copying all the details well enough that we get the whole thing to work.
Towards_Keeperhood
I mean you can look at neurons in vitro and see how they adapt to different stimuli. Idk I'd weakly guess that the neuron level learning rules are relatively simple, and that they construct more complex learning rules for e.g. cortical minicolumns and eventually cortical columns or sth, and that we might be able to infer from the connectome what kind of function cortical columns perhaps implement, and that this can give us a strong hint for what kind of cortical-column-level learning rules might select for the kind of algorithms implemented there abstractly, and that we can trace rules back to lower levels given the connectome. Tbc I don't think it might look exactly like that, just saying sth roughly like that, where maybe it's actually some common circuit loops instead of cortical columns which are interesting or whatever.

Thanks for writing this amazing overview!

Some comments:

  • I think different people might imagine quite different intelligence levels when under +7std thinkoompf.
    • E.g. I think that from like +6.3std the heavytail becomes even a lot stronger because those people can bootstrap themselves extremely good mental software. (My rough guess is that Tsvi::+7std = me::+6.5std, though I'd guess many readers would need to correct in the other direction (aka they might imagine +7std as less impressive than Tsvi).)
  • I think one me::Tsvi::+7std person would probably be enough t
...
TsviBT
I agree something like this happens, I just don't think it's that strong of an effect.

  • A single human still has pretty strong limitations. E.g. fixed skull size (without further intervention); other non-scalable hardware (~one thread of attention, one pair of eyes and hands); self-reprogramming is just hard; benefits of self-reprogramming don't scale (hard to share with other people).
  • Coercion is bad; without coercion, a supergenius might just not want to work on whatever is strategically important for humanity.
  • It doesn't look to me like we're even close to being able to figure out AGI alignment, or other gnarly problems for that matter (such as decoding egregores). So we need a lot more brainpower, lots of lottery tickets.
  • There's a kind of power that comes from having many geniuses--think Manhattan project.

Not sure what you're referring to here. Different methods have different curves. Adult brain editing would have diminishing returns, but nowhere near that diminishing.

Plausibly, though I don't know of strong evidence for this. For example, my impression is that modern proof assistants still aren't in a state where a genius youngster with a proof assistant can unlock what feels like the possibility of learning a seemingly superhuman amount of math via direct dialogue with the truth--but I could imagine this being created soon. Do you have other evidence in mind?
1Towards_Keeperhood
Basically agree, but I think alignment is the kind of problem where one supergenius might matter more. E.g. for general relativity, Einstein basically managed to find it like 3 times faster than the rest of physics would've, or sth. I don't think a Manhattan project would've helped there, because even after Einstein published GR only relatively few people understood it (if I am informed correctly), and I don't think they could've made progress in the same way Einstein did but would've needed more experimental evidence. Plausible to me that there are other potentially pivotal problems that have something of this character, but idk.

Well, not very legible evidence, and I could be wrong, but some of my thoughts on mental software: It seems plausible to me that someone with +6.3std would be able to do some bootstrapping loop very roughly like:

  • Find a better ontology for modelling what is happening in my mind.
  • Train to relatively-effortlessly model my thoughts in the new better ontology, which compresses observations more and thus lets me notice a bit more of what's happening in my mind (and notice pieces where the ontology doesn't seem to fit well).
  • Repeat.

The "relatively-effortlessly model well what is happening in my mind" part might help significantly with getting much faster and richer feedback loops for learning thinking skills. When you have a good model of what happened in your mind to produce some output, you can better see the parts that were useless and the parts that were important, see how you want your cognitive algorithms to look, and plan how to train yourself to shape them that way. When you master this kind of review-and-improving really well, you might be able to apply the skill to itself and bootstrap your review process. It's generally hard to predict what someone smarter might figure out, so I wouldn't be confident it's not possible.
2TsviBT
I agree that peak problem-solving ability is very important, which is why I think strong amplification is such a priority. I just... so far I'm either not understanding, or else you're completely making up some big transition between 6 and 6.5?
2Towards_Keeperhood
Yeah I sorta am. I feel like that's what I see from eyeballing the largest supergeniuses (in particular Einstein and Eliezer), but idk, it's very little data and maybe I'm wrong.
2TsviBT
My guess would be that you're seeing a genuine difference, but that flavor/magnitude of difference is not very special to the 6 -> 6.5 transition. See my other comment.
-2Sweetgum
I think you're massively overestimating Eliezer Yudkowsky's intelligence. I would guess it's somewhere between +2 and +3 SD.
6Mo Putera
Seems way underestimated. While I don't think he's at "the largest supergeniuses" level either, even +3 SD implies just top 1 in ~700, i.e. millions of Eliezer-level people worldwide. I've been part of more quantitatively-selected groups talent-wise (e.g. for national scholarships awarded on academic merit), and I've never met anyone like him.
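For concreteness, the rarity arithmetic behind these SD figures can be checked directly. Here's a minimal sketch (it assumes a normal distribution, a world population of ~8 billion, and ~130 million births per year; real IQ tails are fatter than Gaussian, so the extreme rows are best read as illustrative):

```python
from scipy.stats import norm

world_pop = 8e9            # rough current world population (assumption)
births_per_decade = 1.3e9  # ~130M births/year (assumption)

for sd in (2, 3, 6, 7):
    p = norm.sf(sd)  # upper-tail probability P(Z > sd)
    print(f"+{sd} SD: 1 in {1/p:.3g}; ~{world_pop * p:.3g} alive; "
          f"~{births_per_decade * p:.3g} born per decade")

# Approximate output:
# +2 SD: 1 in 44;       ~1.82e+08 alive; ~2.96e+07 born per decade
# +3 SD: 1 in 741;      ~1.08e+07 alive; ~1.75e+06 born per decade
# +6 SD: 1 in 1.01e+09; ~7.89 alive;    ~1.28 born per decade
# +7 SD: 1 in 7.81e+11; ~0.0102 alive;  ~0.00166 born per decade
```

Note that the +6 SD row roughly matches the "1 person per decade" heuristic that comes up downthread, and the +7 SD row is why +7std people essentially don't occur without intervention.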
2Sweetgum
But are you sure the way in which he is unique among people you've met is mostly about intelligence rather than intelligence along with other traits?
2TsviBT
Wait are you saying it's illegible, or just bad? I mean are you saying that you've done something impressive and attribute that to doing this--or that you believe someone else has done so--but you can't share why you think so?
1Towards_Keeperhood
Maybe bad would be a better word. Idk, I feel like I have a different way of thinking about such intelligence-explosion-dynamics stuff that most people don't have (though Eliezer does), and I cannot really describe it all that well, and I think it makes sensible predictions, but yeah idk, I'd stay sceptical given that I'm not that great at saying why I believe what I believe there.

No, I don't know of anyone who did that. It's sorta what I've been aiming for since very recently, and I don't particularly expect a high chance of success, but I'm also not quite +6.3std I think (though I'm only 21, and the worlds where it might succeed are the ones where I continue getting smarter for some time). Maybe I'm wrong, but I'd be pretty surprised if sth like that wouldn't work for someone with +7std.
2TsviBT
I mean, I agree that intelligence explosion is a thing, and the thing you described is part of it, and humans can kinda do it, and it helps quite a lot to have more raw cognitive horsepower... I guess I'm not sure we're disagreeing about much here, except that:

1. I don't know why you're putting some important transition around 6 SDs. I expect that many capabilities will have shitty precursors in people with less native horsepower; I also expect some capabilities will basically not have such precursors, and so will be "transitions"; I just expect there to be enough such things that you wouldn't see some major transition at one point. I do think there's an important difference between 5.5 SD and 7.5 SD, which is that now you've created a human who's probably smarter than any human who's ever lived, so you've gone from 0 to 1 on some difficult thoughts (a rough check of this arithmetic is sketched below); but I don't think that's special about this range, it would happen at any range.
2. I think that adding more 6 SD or 7 SD people is really important, but you maybe don't as much? Not sure what you think.
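As a rough check on the "smarter than any human who's ever lived" arithmetic, here's a minimal sketch (it assumes trait values are iid normal across the ~100 billion humans ever born; fat tails and the Flynn effect could easily shift this):

```python
import math

N = 1e11  # ~100 billion humans ever born (rough demographic estimate)

# Leading-order extreme-value approximation: the expected maximum of N
# iid standard normal draws is about sqrt(2 * ln(N)).
expected_record = math.sqrt(2 * math.log(N))
print(f"Expected all-time record: ~{expected_record:.1f} SD")  # ~7.1 SD
```

On these assumptions the all-time record sits around +7.1 SD, so a +5.5 SD person has likely been matched many times in history, while a +7.5 SD person would plausibly be a first.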
1Towards_Keeperhood
First tbc, I'm always talking about thinkoompf, not just what's measured by IQ tests but also sanity and even drive.

Idk, I'm not at all sure about that, but it seems to me like Nate and Eliezer might be a decent chunk more competent than all the other people I'm aware of. So maybe for the current era (by which I mostly mean "after the sequences were published") it's like 1 person (Nate) per decade-or-a-bit-more who becomes really competent, which is very roughly +6std. (EDIT: Retracted because the evidence is too shaky. It still seems to me like the heavy tail of intelligence gets very far very quickly though.)

Like, I'd guess before the sequences, and without having the strong motivator of needing to save humanity, the transition might rather have been at +6.4std -- +6.8std. Idk. Though tbc, I don't really expect it to be like "yeah maybe from 6.3std it enters a faster improvement curve which is then not changing that much" but more like the curve just getting steeper and steeper very fast without there being a visible kink.

I feel like if we now created someone with +6.3std, the person would already become smarter than any person who ever lived, because there are certain advantages of being born now which would help a lot for getting up to speed (e.g. the sequences, the Internet).
1Morpheus
Such high diminishing returns in g based on genes seem quite implausible to me, but I would be happy if you can point to evidence to the contrary. If it works well for people with average intelligence, I'd expect it to work at most half as well at +6sd.
1Towards_Keeperhood
Idk, I'd be intuitively surprised if adult augmentation would get someone from +6 to +7. I'm like, from +0 to +3 is a big difference, and from +6 to +6.3 is almost as big a difference too. But idk, maybe not. Maybe partially it's also that I think that intelligence augmentation interventions get harder once you get into higher intelligence levels. Where there were previously easy improvement possibilities, there might later need to be more entangled groups of genes that are good, and it's harder to tune those. And it's hard to get very good data on which genes working together actually result in very high intelligence, because we don't have that many very smart people.

I made up the made-up numbers in this table of made-up numbers; therefore, the numbers in this table of made-up numbers are made-up numbers.

These hallucinated outputs are really getting out of hand

Thanks for the detailed writeup. I would personally be against basically all of the suggested methods that could create a significant improvement, because the hard problem of consciousness remains hard, and it seems very possible that an unconscious human race could result. I was a bit surprised to see no mention of this in the essay.

4TsviBT
I guess that falls under "value drift" in the table. But yeah, I think that's extremely unlikely to happen without warning, except in the case of brain emulations. I do think any of these methods would be world-changing, and therefore extremely dangerous, and would demand lots of care and caution.
1notfnofn
Could you explain what sort of warnings we'd get with, for instance, the interfaces approach? I don't see how that's possible. Also, this is semantics I guess, but I wouldn't classify this under "value drift". If there is such a thing as the hard problem of consciousness and these post-modified humans don't have whatever that is, I wouldn't care whether or not their behaviors and value functions resemble those of today's humans.
4TsviBT
Someone gets some kind of interface, and then they stop being conscious. So they act weird, and people are like "hey they're acting super weird, they seem not conscious anymore, this seems bad". https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies
2notfnofn
Yudkowsky's essay is explaining why he believes there is no hard problem of consciousness.

Somewhat surprised that this list doesn't include something along the lines of "punt this problem to a sufficiently advanced AI of the near future." This could dramatically decrease the amount of time required to implement some of these proposals, or otherwise yield (and proceed to implement) promising new proposals.

It seems to me in general that human intelligence augmentation is often framed in a vaguely-zero-sum way with getting AGI ("we have to all get a lot smarter before AGI, or else..."), but it seems quite possible that AGI or near-AGI could itself help with the problem of human intelligence augmentation.

3TsviBT
So your suggestion for accelerating strong human intelligence amplification is ...checks notes... "don't do anything"? Or are you suggesting accelerating AI research in order to use the improved AI faster? I guess technically that would accelerate amplification, but it seems bad to do.

Maybe AI could help with some parts of the research. But 1. we probably don't need AI to do it, so we should do it now, and 2. if we're not all dead, there will still be a bunch of research that has to be done by humans.

On a psychologizing note, your comment seems like part of a pattern of trying to wriggle out of doing things the hard way that will actually work. Looking for such cheat codes is good, but not if you don't aggressively prune the ones that don't actually work -- hard+works is better than easy+not-works.
-5Cameron Berg

I think you're underestimating meditation.

Since I started meditating, I've realised that I've become much more sensitive to vibes.

There are a lot of folks who would be scarily capable if they were strong in System 1, in addition to being strong in System 2.

Then there are all the other benefits that meditation can provide if done properly: additional motivation, and being better able to break out of narratives / notice patterns.

I don't think you mentioned "nootropic drugs" (unless "signaling molecules" is meant to cover that, though it seems more specific).  I don't think there's anything known to give a significant enhancement beyond alertness, but in a list of speculative technologies I think it belongs.

[This comment is no longer endorsed by its author]
5Mateusz Bagiński
mentioned in the FAQ