This seems unduly pessimistic to me. The whole interesting thing about g is that it's easy to measure and correlates with tons of stuff. I'm not convinced there's any magic about FSIQ compared to shoddier tests. There might be important stuff that FSIQ doesn't measure very well that we'd ideally like to select/edit for, but using FSIQ is much better than nothing. Likewise, using a poor man's IQ proxy seems much better than nothing.
This may have missed your point; you seem more concerned about selecting for unwanted covariates than about 'missing things', which is reasonable. I might remake the same argument by suspecting that FSIQ probably has some weird covariates too -- but that seems weaker. E.g. if a proxy measure correlates with FSIQ at .7, then the 'other stuff' (insofar as it is heritable variation and not just noise) will also correlate with the proxy at ~.7 (sqrt(1 - .7^2) ≈ .71), and so by selecting on this measure you'd be selecting quite strongly for the 'other stuff', which, yeah, isn't great. FSIQ, insofar as it had any weird unwanted covariates, would probably be much less correlated with them than .7.
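To make that arithmetic explicit, here's a minimal sketch (assuming a standardized proxy that decomposes into an FSIQ component plus an independent 'other stuff' component):

```python
import math

# Model the proxy as: proxy = r*FSIQ + sqrt(1 - r^2)*other, with everything
# standardized and 'other' independent of FSIQ. Then the proxy's correlation
# with the 'other stuff' composite is sqrt(1 - r^2).
r_proxy_fsiq = 0.7
r_proxy_other = math.sqrt(1 - r_proxy_fsiq**2)
print(f"proxy-FSIQ correlation:    {r_proxy_fsiq:.2f}")
print(f"proxy-'other' correlation: {r_proxy_other:.2f}")  # ~0.71
```

So selecting on the proxy pulls in the 'other stuff' almost as strongly as it pulls in FSIQ.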
Non-coding means any sequence that doesn't directly code for proteins. So regulatory stuff would count as non-coding. There tend to be errors (e.g. indels) at the edit site with some low frequency, so the reason we're more optimistic about editing non-coding stuff than coding stuff is that we don't need to worry about frameshift mutations or nonsense mutations which knock-out the gene where they occur. The hope is that an error at the edit site would have a much smaller effect, since the variant we're editing had a very small effect in the first place (and even if the variant is embedded in e.g. a sensitive binding site sequence, maybe the gene's functionality can survive losing a binding site, so at least it isn't catastrophic for the cell). I'm feeling more pessimistic about this than I was previously.
Another thing: if you have a test for which g explains the lion's share of the heritable variance, but there are also other traits which contribute heritable variance, and the other traits are about as polygenic as g (similar number of causal variants), then the per-variant effects on the test will be larger on average for g, so by picking the top-N expected effect size edits you'll probably mostly/entirely end up editing variants which affect g. (That said, if the other traits are significantly less polygenic than g, then the opposite would happen.)
I should mention, when I wrote this I was assuming a simple model where the causal variants for g and the 'other stuff' are disjoint, which is probably unrealistic -- there'd be some pleiotropy.
Even out of this 10%, slightly less than 10% responded to a 98-question survey, so a generous estimate of how many of their customers they got to take this survey is 1%. And this was just a consumer experience survey, which does not have nearly as much emotional and cognitive friction dissuading participants as something like an IQ test.
What if 23&me offered a $20 discount for uploading old SAT scores? I guess someone would set up a site that generates realistically distributed fake SAT scores that everyone would use. Is there a standardized format for results that would be easy to retrieve and upload but hard to fake? Eh, idk, maybe not. Could a company somehow arrange to buy the scores of consenting customers directly from the testing agency? Agree that this seems hard.
Statistical models like those involved in GWASes follow a simple rule: crap in, crap out. If you want to find a lot of statistically significant SNPs for intelligence and you use a shoddy proxy like a standardized test score or an incomplete IQ test score as your phenotype, your GWAS is going to end up producing a bunch of shoddy SNPs for "intelligence". Sample size (which is still an unsolved problem for the aforementioned reasons) has the potential to make up for a low number of SNPs reaching genome-wide significance, but it won't get rid of entangled irrelevant SNPs if you're measuring something other than straight-up full-scale IQ.
This seems unduly pessimistic to me. The whole interesting thing about g is that it's easy to measure and correlates with tons of stuff. I'm not convinced there's any magic about FSIQ compared to shoddier tests. There might be important stuff that FSIQ doesn't measure very well that we'd ideally like to select/edit for, but using FSIQ is much better than nothing. Likewise, using a poor man's IQ proxy seems much better than nothing.
Thanks for leaving such thorough and thoughtful feedback!
You could elect to use proxy measures like educational attainment, SAT/ACT/GRE score, most advanced math class completed, etc., but my intuition is that they are influenced by too many things other than pure g to be useful for the desired purpose. It's possible that I'm being too cynical about this obstacle and I would be delighted if someone could give me good reasons why I'm wrong.
The SAT is heavily g-loaded: r = .82 according to Wikipedia, so ~2/3 of the variance is coming from g and ~1/3 from other stuff (minus whatever variance is testing noise). So naively, assuming no noise and that the genetic correlations mirror the phenotypic correlations, if you did embryo selection on SAT you'd be getting .82*h_pred/sqrt(2) SDs of g and .57*h_pred/sqrt(2) SDs of 'other stuff' for every SD of selection power you exert on your embryo pool (h_pred^2 is the variance in SAT explained by the predictor; we're dividing by sqrt(2) because sibling genotypes have ~1/2 the variance of the wider population). Which is maybe not good; maybe you don't want that much of the 'other stuff', e.g. if it includes personality traits.
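For concreteness, a back-of-the-envelope version of that calculation (h_pred here is a made-up value; the real number depends on the predictor available):

```python
import math

# Naive selection-on-SAT arithmetic: assumes genetic correlations mirror the
# phenotypic ones, ignores test noise, and uses a hypothetical h_pred.
g_loading = 0.82                             # phenotypic g-loading of the SAT
other_loading = math.sqrt(1 - g_loading**2)  # ~0.57
h_pred = 0.45                                # hypothetical: predictor explains ~20% of SAT variance

# Gain per SD of selection on the embryo pool; the 1/sqrt(2) is because sibling
# genotypes have ~1/2 the variance of the wider population.
gain_g = g_loading * h_pred / math.sqrt(2)
gain_other = other_loading * h_pred / math.sqrt(2)
print(f"g gained per SD of selection:             {gain_g:.2f} SD")
print(f"'other stuff' gained per SD of selection: {gain_other:.2f} SD")
```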
It looks like the SAT isn't correlated much with personality at all. The biggest correlation is with openness, which is unsurprising due to the correlation between openness and IQ -- I figured conscientiousness might be a bit correlated, but it's actually slightly anticorrelated, despite being correlated with GPA. So maybe it's more that you're measuring specific abilities as well as g (e.g. non-g components of math and verbal ability).
Another thing: if you have a test for which g explains the lion's share of the heritable variance, but there are also other traits which contribute heritable variance, and the other traits are about as polygenic as g (similar number of causal variants), then the per-variant effects on the test will be larger on average for g, so by picking the top-N expected effect size edits you'll probably mostly/entirely end up editing variants which affect g. (That said, if the other traits are significantly less polygenic than g, then the opposite would happen.)
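A toy simulation of this, assuming the simple disjoint-variants model, equal polygenicity, normally distributed per-variant effects on the test, and g contributing 2/3 of the heritable variance (all of these are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each trait has the same number of causal variants (disjoint sets); per-variant
# effects on the test are i.i.d. normal with variance proportional to the trait's
# share of heritable variance (2/3 for g, 1/3 for 'other').
n_variants = 10_000
g_effects = rng.normal(0, np.sqrt((2 / 3) / n_variants), n_variants)
other_effects = rng.normal(0, np.sqrt((1 / 3) / n_variants), n_variants)

effects = np.concatenate([g_effects, other_effects])
is_g = np.concatenate([np.ones(n_variants, bool), np.zeros(n_variants, bool)])

top_n = 1000
top_idx = np.argsort(-np.abs(effects))[:top_n]
print(f"fraction of top-{top_n} edits hitting g variants: {is_g[top_idx].mean():.2f}")
```

Under these assumptions the top-1000 list comes out mostly (though not exclusively) g variants, roughly 85%.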
this would be extremely expensive, as even the cheapest professional IQ tests cost at least $100 to administer
Getting old SAT scores could be much cheaper, I imagine (though doing this would still be very difficult). Also, as GeneSmith pointed out, we aren't necessarily limited to Western countries. Assembling a large biobank including IQ scores or a good proxy might be much cheaper and more socially permissible elsewhere.
The barriers involved in engineering the delivery and editing mechanisms are different beasts.
I do basically expect the delivery problem will be gated by missing breakthroughs, since otherwise I'd expect the literature to be full of more impressive results than it actually is. (E.g. why has no one used angiopep-coated LNPs to deliver editors to mouse brains, as far as I can find? I guess it doesn't work very well? Has anyone actually tried, though?)
Ditto for editors, though I'm somewhat more optimistic there for a handful of reasons:
I mean, your basic argument was "you're trying to do 1000 edits, and the risks will mount with each edit you do", which yeah, maybe I'm being too optimistic here (e.g. even if not a problem at most target sites, errors will predictably be a big deal at some target sites, and it might be hard to predict which sites with high accuracy).
It's not clear to me how far out the necessary breakthroughs are "by default", or how much they could be accelerated if we actually tried -- in the sense that electric cars weren't going anywhere until Musk came along and actually tried. (Besides sounding crazy ambitious, maybe this analogy doesn't really work if breakthroughs are just hard to accelerate with money; AFAIK electric cars weren't really held up by any big breakthroughs, just lack of scale.) Getting delivery + editors down would have a ton of uses besides intelligence enhancement therapy; you could target any mono/oligo/polygenic diseases you wanted. The amount of effort currently being put in doesn't seem commensurate with how much it would be worth, even putting 'enhancement' use cases aside.
one could imagine that if every 3rd or 4th or nth neuron is receiving, processing, or releasing ligands in a different way than either the upstream or downstream neurons, the result is some discordance that is more likely to be destructive than beneficial
My impression is that neurons are really noisy, and so probably not very sensitive to small perturbations in timing / signalling characteristics. I guess things could be different if the differences are permanent rather than transient -- though I also wouldn't be surprised if there was a lot of 'spatial' noise/variation in neural characteristics which the brain is able to cope with. Maybe this isn't the sort of variation you mean. I completely agree that it's more likely to be detrimental than beneficial; it's a question of how badly detrimental.
Another thing to consider: do the causal variants additively influence an underlying lower-dimensional 'parameter space' which then influences g (e.g. degree of expression of various proteins, or characteristics downstream of that)? If this is the case, and you have a large number of causal variants per 'parameter', and your cells get each edit with about the same frequency on average, then even if there's a ton of mosaicism at the variant level there might not be much at the 'parameter' level. I suspect the way this would actually work out is that some cells will be easier to transfect than others (e.g. due to the geography of the extracellular space that the delivery vectors need to diffuse through), so you'll have some cells getting more total edits than others: a mix of cells with better and worse polygenic scores, which might lead to the discordance problems you suggested if the differences are big enough.
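A toy illustration of the first scenario (equal transfection probability for every cell, equal-weight additive effects within each 'parameter'): the cell-to-cell spread in a parameter shrinks roughly as 1/sqrt(edits per parameter), so variant-level mosaicism mostly washes out. It doesn't capture the second scenario, where transfection probability itself varies between cells.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each cell independently receives each edit with probability p (same p for all
# cells); edits affecting a given 'parameter' contribute additively, equal weights.
n_cells, p = 10_000, 0.3

for n_edits_per_param in (1, 10, 100, 1000):
    got_edit = rng.random((n_cells, n_edits_per_param)) < p
    param_shift = got_edit.mean(axis=1)          # fraction of intended shift each cell gets
    cv = param_shift.std() / param_shift.mean()  # cell-to-cell spread relative to the mean
    print(f"{n_edits_per_param:5d} edits/parameter: cell-to-cell CV = {cv:.3f}")
```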
For all of the reasons herein and more, it's my personal prediction that the only ways humanity is going to get vastly smarter by artificial means are through brain-machine interfaces or iterative embryo selection.
BMI seems harder than in-vivo editing to me. Wouldn't you need a massive number of connections (10M+?) to even begin having any hope of making people qualitatively smarter? Wouldn't you need to find an algorithm that the brain could 'learn to use' so well that it essentially becomes integrated as another cortical area or can serve as an 'expansion card' for existing cortical areas? Would you just end up bottlenecked by the characteristics of the human neurons (e.g. low information capacity due to noise)?
I don't think this therapy as OP describes it is possible, for reasons that have already been stated by HiddenPrior as well as other reasons
Can you elaborate on this? We'd really appreciate the feedback.
We'd edit the SNPs which have been found to causally influence the trait of interest in an additive manner. The genome would only become "extremely unlikely" if we made enough edits to push the predicted trait value to an extreme value -- which you probably wouldn't want to do for decreasing disease risk. E.g. if someone has +2 SD risk of developing Alzheimer's, you might want to make enough edits to shift them to -2 SD, which isn't particularly extreme.
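Just to show the arithmetic (the per-edit effect sizes below are made up purely for illustration; real candidate variants would span a range of effect sizes):

```python
import math

# How many edits to move an additive (liability-scale) polygenic score by 4 SD
# (from +2 SD to -2 SD), for a few hypothetical per-edit effect sizes.
target_shift_sd = 4.0
for per_edit_effect_sd in (0.01, 0.02, 0.05):
    n_edits = math.ceil(target_shift_sd / per_edit_effect_sd)
    print(f"{per_edit_effect_sd:.2f} SD per edit -> ~{n_edits} edits for a 4 SD shift")
```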
You're right that this is a risk with ambitious intelligence enhancement, where we're actually interested in pushing somewhat outside the current human range (especially since we'd probably need to push the predicted trait value even further in order to get a particular effect size in adults) -- the simple additive model will break down at some point.
Also, due to linkage disequilibrium, there are things that could go wrong with creating "unnatural genomes" even within the current human range. E.g. if you have an SNP with alleles A and B, and there are mutations at nearby loci which are neutral conditional on having allele A and deleterious conditional on having allele B, those mutations will tend to accumulate in genomes which have allele A (due to linkage disequilibrium), while being purged from genomes with allele B. If allele B is better for the trait in question, we might choose it as an edit site in a person with allele A, which could be highly deleterious due to the linked mutations. (That said, I don't think this situation of large-conditional-effect mutations is particularly likely a priori.)
Promoters (and any non-coding regulatory sequence for that matter) are extremely sensitive to point mutations.
A really important question here is whether the causal SNPs that affect polygenic traits tend to be located in these highly sensitive sequences. One hypothesis would be that regulatory sequences which are generally highly sensitive to mutations permit the occasional variant with a small effect, and these variants are a predominant influence on polygenic traits. This would be bad news for us, since even the best available editors have non-negligible indel rates at target sites.
Another question: there tend to be many enhancers per gene. Is losing one enhancer generally catastrophic for the expression of that gene?
what improvements would be top of mind for you?
For each target the likely off-targets can be predicted, allowing one to avoid particularly risky edits. There may still be issues with sequence-independent off-targets, though I believe these are a much larger problem with base editors than with prime editors (which have lower off-target rates in general). Agree that this might still end up being an issue.
This is exactly it -- the term "off-target" was used imprecisely in the post to keep things simple. The thing we're most worried about here is misedits (mostly indels) at noncoding target sites. We know a target site does something (if the variant there is in fact causal), so we might worry that an indel will cause a big issue (e.g. disabling a promoter binding site). Then again, the causal variant we're targeting has a very small effect, so maybe the sequence isn't very sensitive and an indel won't be a big deal? But it also seems perfectly possible that the sequence could be sensitive to most mutations while permitting a specific variant with a small effect. The effect of an indel will at least probably be less bad than in a coding sequence, where it has a high chance of causing a frameshift mutation and knocking out the coded-for protein.
The important figure of merit for editors with regard to this issue is the ratio of correct edits to misedits at the target site. In the case of prime editors, IIUC, all misedits at the target site are reported as "indels" in the literature (base editors have other possible outcomes such as bystander edits or conversion to the wrong base). Some optimized prime editors have edit:indel ratios of >100:1 (the best I've seen so far is 500:1, though IIUC this was just at two target sites, and the rates seem to vary a lot by target site). Is this good enough? I don't know, though I suspect not for the purposes of making a thousand edits. It depends on how large the negative effects of indels are at noncoding target sites: is there a significant risk the neuron gets borked as a result? It might be possible to predict this on a site-by-site basis with a better understanding of the functional genomics of the sequences housing the causal variants which affect polygenic traits (which would also be useful for finding the causal variants in the first place without needing as much data).
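As a rough illustration of why I suspect it's not good enough (assuming, unrealistically, that every target site gets an on-target outcome and that the edit:indel ratio is uniform across sites and cells):

```python
# Expected on-target misedits in a cell where all 1000 target sites get edited,
# for a couple of edit:indel ratios, treating sites as independent.
n_targets = 1000
for edit_to_indel in (100, 500):
    p_indel = 1 / (1 + edit_to_indel)   # fraction of on-target outcomes that are indels
    expected_indels = n_targets * p_indel
    p_no_indels = (1 - p_indel) ** n_targets
    print(f"{edit_to_indel}:1 ratio -> ~{expected_indels:.0f} expected indels, "
          f"P(zero indels) = {p_no_indels:.3g}")
```

Even at 500:1 you'd expect a couple of indels per fully edited cell, which is why the question of how bad an indel actually is at these sites matters so much.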