I'm looking for recommendations for frameworks/tools/setups that could facilitate machine-checked math manipulations.
More details:
I agree with this critique; I think washing machines belong on the "light bulbs and computers" side of the analogy. The analogy has the form:
"germline engineering for common diseases and important traits" : "gene therapy for a rare disease" :: "widespread, transformation uses of electricity" : x
So x should be some very expensive, niche use of electricity that provides a very large benefit to its tiny user base (and doesn't arguably lead indirectly to large future benefits, e.g. via a niche scientific instrument enabling scientific discovery).
I think you're mixing up max with argmax?
Something I didn't realize until now: P = NP would imply that finding the argmax of arbitrary polynomial-time (P-time) functions could be done in P-time.
Proof sketch
Suppose you have some polynomial-time function f: N -> Q. Since f is P-time, feeding it an n-bit input x yields an output y of at most max_output_bits(n) bits, where max_output_bits(n) is at most polynomial in n. Let y_max and y_min denote the largest and smallest rational numbers encodable in max_output_bits(n) bits.
Now define check(x, y) := f(x) >= y, and argsat(y) := x su
...
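A minimal sketch of where that argument goes, assuming (hypothetically) that P = NP hands us a P-time satisfiability oracle; `argmax_via_np_oracle`, `sat`, and `brute_force_sat` are illustrative names of mine, and outputs are treated as bounded integers rather than general rationals to keep the encoding bookkeeping out of the way:

```python
# Sketch only: IF P = NP, a P-time oracle sat(pred, n_bits) would exist that
# returns some n-bit x with pred(x) True, or None if no such x exists
# (deciding "does a satisfying x exist?" is in NP for P-time predicates).
def argmax_via_np_oracle(f, n_bits, max_output_bits, sat):
    """Find an x maximizing the P-time function f over n-bit inputs."""
    lo, hi = -(2 ** max_output_bits), 2 ** max_output_bits  # y_min, y_max
    # Binary search over candidate output values: polynomially many
    # iterations, each a single oracle call on check(x, y) := f(x) >= y.
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if sat(lambda x: f(x) >= mid, n_bits) is not None:
            lo = mid  # some x achieves f(x) >= mid
        else:
            hi = mid - 1
    # lo is now max_x f(x); one final oracle call recovers a witness.
    return sat(lambda x: f(x) >= lo, n_bits)

def brute_force_sat(pred, n_bits):
    """Exponential stand-in for the hypothetical P-time oracle (demo only)."""
    return next((x for x in range(2 ** n_bits) if pred(x)), None)

# Example: maximize f(x) = -(x - 5)^2 over 4-bit inputs; the argmax is 5.
print(argmax_via_np_oracle(lambda x: -(x - 5) ** 2, 4, 8, brute_force_sat))
```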
- The biggest discontinuity is applied at the threshold between spike and slab. Imagine we have mutations that before shrinkage have the values +4 IQ, +2 IQ, and +1.9 IQ, and 1.95 is our spike-vs-slab cutoff. Furthermore, let's assume the slab shrinks effects by 25%. Then we get 4→3, 2→1.5, 1.9→0, meaning we penalize our +2 IQ mutation much less than our +1.9 mutation despite their similar sizes, and we penalize our +4 IQ effect size more than the +2 IQ effect size despite it having the biggest effect. This creates an arbitrary cliff where similar-siz
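To make that cliff concrete, here's a toy version of the shrinkage rule being described (the cutoff and 25% shrinkage factor are the example's assumed numbers, not values from any real pipeline):

```python
def shrink(effect, cutoff=1.95, slab_shrink=0.25):
    """Thresholded spike-and-slab shrinkage as described above: effects
    below the cutoff fall into the spike (zeroed out); the rest land in
    the slab and are shrunk by 25%."""
    return 0.0 if abs(effect) < cutoff else effect * (1 - slab_shrink)

for e in (4.0, 2.0, 1.9):
    print(f"{e:+} IQ -> {shrink(e):+} IQ")  # +4 -> +3, +2 -> +1.5, +1.9 -> 0
```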
My guess is that peak intelligence is a lot more important than sheer numbers of geniuses for solving alignment. At the end of the day someone actually has to understand how to steer the outcome of ASI, which seems really hard, and no one knows how to verify solutions. I think that really hard (and hard-to-verify) problem solving scales poorly with having more people thinking about it.
Sheer numbers of geniuses would be one effect of raising the average, but I'm guessing the "massive benefits" you're referring to are things like coordination ability and qual...
Emotional social getting on with people vs logic puzzle solving IQ.
Not sure I buy this, since IQ is usually found to positively correlate with purported measures of "emotional intelligence", at least when some sort of ability (e.g. recognizing emotions) is tested; the correlation seems to go away when the test is purely self-report, as in a personality test. EDIT: the correlation even with ability-based measures seems to be smaller than I expected.
Also, smarter people seem (on average) better at managing interpersonal issues in my experience (anecdotal...
So on one hand, I sort of agree with this. For example, I think people giving IQ tests to LLMs and trying to draw strong conclusions from that (e.g. about how far off we are from ASI) is pretty silly. Human minds share an architecture that LLMs don't share with us, and IQ tests measure differences along some dimension within the space of variation of that architecture, within our current cultural context. I think an actual ASI will have a mind that works quite differently and will quickly blow right past the IQ scale, similar to your example of eagles and ...
I'm sort of confused by the image you posted? Von Neumann existed, and there are plenty of very smart people well beyond the "Nerdy programmer" range.
But I think I agree with your overall point about IQ being under stabilizing selection in the ancestral environment. If there was directional selection, it would need to have been weak or inconsistent; otherwise I'd expect the genetic low hanging fruit we see to have been exhausted already. Not in the sense of all current IQ-increasing alleles being selected to fixation, but in the sense of the tradeoffs beco...
Don't have much to say on it right now; I really need to do a deep dive into this at some point.
You should show your calculation or your code, including all the data and parameter choices. Otherwise I can't evaluate this.
The code is pretty complicated and not something I'd expect a non-expert (even a very smart one) to be able to quickly check over; it's not just a 100-line Python script. (Or even a very smart expert, for that matter; more like anyone who wasn't already familiar with our particular codebase.) We'll likely open source it at some point in the future, possibly soon, but that's not decided yet. Our finemapping (inferring causal effects) p...
I know the answers to those questions. But I’m not the audience that needs to be convinced.
The audience that needs to be convinced isn't the target audience of this post. But overall your point is taken.
I'll need to do a deep dive to understand the methods of the first paper, but isn't this contradicted by the recent Tan et al. paper you linked finding SNP heritability of 0.19 for both direct and population effects of intelligence (which matches Savage Jansen 2018)? They also found ~perfect LDSC correlation between direct and population effects, which would imply the direct and population SNP heritabilities are tagging the exact same genetic effects. (Also interesting that 0.19 is exactly in the middle of 0.24 and 0.14; not sure what to make of that, if anything.)
With a method similar to this. You can easily compute the exact likelihood function P(GWAS results | SNP effects), which when combined with a prior over SNP effects (informed by what we know about the genetic architecture of the trait) gives you a posterior probability of each SNP being causal (having nonzero effect), and its expected effect size conditional on being causal (you can't actually calculate the full posterior since there are 2^|SNPs| possible combinations of SNPs with nonzero effects, so you need to do some sort of MCMC or stochastic search). We may make a post going into more detail on our methods at some point.
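A toy illustration of that posterior computation (all parameters invented; this is not the actual codebase): with a handful of SNPs you can enumerate every one of the 2^m causal configurations exactly, which is precisely the step that stops scaling and forces MCMC or stochastic search on real panels.

```python
# Toy exact finemapping: spike-and-slab prior over which SNPs are causal,
# posterior computed by brute-force enumeration of all 2^m configurations.
import itertools
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, m = 200, 8                        # individuals, SNPs (tiny on purpose)
pi, tau2, sigma2 = 0.25, 0.5, 1.0    # prior P(causal), slab var, noise var

X = rng.binomial(2, 0.3, size=(n, m)).astype(float)
X -= X.mean(axis=0)                  # center genotypes
true_beta = np.zeros(m)
true_beta[[1, 5]] = [0.6, -0.4]      # two truly causal SNPs
y = X @ true_beta + rng.normal(0, np.sqrt(sigma2), n)

log_post = {}
for gamma in itertools.product([0, 1], repeat=m):  # all 2^m configurations
    idx = [j for j in range(m) if gamma[j]]
    # Marginal likelihood with slab effects integrated out analytically:
    # y | gamma ~ N(0, sigma2*I + tau2 * X_g X_g^T)
    cov = sigma2 * np.eye(n)
    if idx:
        Xg = X[:, idx]
        cov = cov + tau2 * Xg @ Xg.T
    log_prior = sum(np.log(pi if g else 1 - pi) for g in gamma)
    log_post[gamma] = log_prior + multivariate_normal.logpdf(y, cov=cov)

# Normalize and report per-SNP posterior inclusion probabilities (PIPs).
logs = np.array(list(log_post.values()))
probs = np.exp(logs - logs.max())
probs /= probs.sum()
pips = sum(p * np.array(g) for g, p in zip(log_post, probs))
print(np.round(pips, 3))             # SNPs 1 and 5 should stand out
```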
You should show your calculation or your code, including all the data and parameter choices. Otherwise I can't evaluate this.
I assume you're picking parameters to exaggerate the effects, because just from the exaggerations you've already conceded (0.9/0.6 shouldn't be squared, and the attenuation to get direct effects should be 0.824), you've already exaggerated the results for editing by a factor of sqrt(0.9/0.6)/0.824, which is around a 50% overestimate.
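Spelling that factor out, for reference:

$$\frac{\sqrt{0.9/0.6}}{0.824} \approx \frac{1.225}{0.824} \approx 1.49,$$

i.e. roughly a 50% overestimate.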
I don't think that was deliberate on your part, but I think wishful thinking and the desire to paint a comp...
This is based on inferring causal effects conditional on this GWAS. The assumed heritability affects the prior over SNP effect sizes.
If evolution has already taken all the easy wins, why do humans vary so much in intelligence in the first place? I don't think the answer is mutation-selection balance, since a good chunk of the variance is explained by additive effects from common SNPs. Further, if you look at the joint distribution over effect sizes and allele frequencies among SNPs, there isn't any clear skew towards rarer alleles being IQ-decreasing.
For example, see the plot below of minor allele frequency vs the effect size of the minor allele. (This is for Educational Attainment, a h...
But isn't the R^2 the relevant measure?
Not for this purpose! The simulation pipeline is as follows: the assumed h^2 and number of causal variants are used to generate the genetic effects -> generate simulated GWASes for a range of sample sizes -> infer causal effects from the observed GWASes -> select top expected-effect variants for up to N (expected) edits.
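A stripped-down sketch of that pipeline (parameters invented, no LD, and a crude Gaussian-shrinkage stand-in for the real finemapping step), mainly to show why realized gains grow with GWAS sample size:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_causal, h2 = 10_000, 1_000, 0.3     # SNPs, causal SNPs, assumed h^2

# 1. Sparse true effects (standardized genotypes) matching the assumed h^2.
beta = np.zeros(m)
causal = rng.choice(m, n_causal, replace=False)
beta[causal] = rng.normal(0, np.sqrt(h2 / n_causal), n_causal)

for n_gwas in (10_000, 100_000, 1_000_000):
    # 2. Simulated GWAS: observed effects = true effects + sampling noise.
    se = 1 / np.sqrt(n_gwas)
    beta_hat = beta + rng.normal(0, se, m)
    # 3. Crude stand-in for causal inference: shrink toward zero in
    #    proportion to how noisy the GWAS estimate is.
    prior_var = h2 / n_causal
    post_mean = beta_hat * prior_var / (prior_var + se**2)
    # 4. "Edit" the top-500 expected-effect SNPs in their estimated
    #    beneficial direction and tally the true effect actually gained.
    top = np.argsort(-np.abs(post_mean))[:500]
    gain = float(beta[top] @ np.sign(post_mean[top]))
    print(f"n = {n_gwas:>9,}: realized gain = {gain:.2f}")
```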
The paper you called the largest-ever GWAS gave a direct h^2 estimate of 0.05 for cognitive performance. How are these papers getting 0.2? I don't understand what they're doing. Some type of meta-analysis?
You're mixing up h^2 estimates with predictor R^2 performance. It's possible to get an estimate of h^2 with much less statistical power than it takes to build a predictor that good.
...The test-retest reliability you linked has different reliabilities for different subtests. The correct adjustment depends on which subtests are being used. If cognitive performance
I don't quite understand your numbers in the OP but it feels like you're inflating them substantially. Is the full calculation somewhere?
Not quite sure which numbers you're referring to, but if it's the assumed SNP heritability, see the below quote of mine from another comment talking about missing heritability for IQ:
...The SNP heritability estimates for IQ (h^2 ≈ 0.2) are primarily based on a low-quality test that has a test-retest reliability of 0.6, compared to ~0.9 for a gold-standard IQ test. So a simple calculation to adjust for this gets you a pre
For cognitive performance, the ratio was better, but it's not 0.824, it's .
That's variance explained. I was talking about effect size attenuation, which is what we care about for editing.
I checked supplementary table 10, and it says that the "direct-population ratio" is 0.656, not 0.824. So quite possibly the right value is even lower for cognitive performance.
Supplementary table 10 is looking at direct and indirect effects of the EA PGI on other phenotypes. The results for the Cog Perf PGI are in supplementary table 13.
Sadism and wills to power are baked into almost every human mind (with the exception of outliers, of course). Force-multiplying those instincts is much worse than an AI which simply decides to repurpose the atoms in a human for something else.
I don't think the result of intelligence enhancement would be "multiplying those instincts" for the vast majority of people; humans don't seem to end up more sadistic as they get smarter and have more options.
...I would argue that everyone dying is actually a pretty great ending compared to hyperexistential risks. It is e
The IQ GWAS we used was based only on individuals of European ancestry, and ancestry principal components were included as covariates, as is typical for GWAS. Non-causal associations from subtler stratification are still a potential concern, but I don't believe a terribly large one. The largest educational attainment GWAS did a comparison of population and direct effects for a "cognitive performance" PGI and found that predicted direct (between-sibling) effects were only attenuated by a factor of 0.824 compared to predicted population-level effects....
Plain GWAS, since there aren't any large sibling GWASes. What's the basis for the estimates being much lower and how would we properly adjust for them?
Your OP is completely misleading if you're using plain GWAS!
GWAS is an association -- that's what the A stands for. Association is not causation. Anything that correlates with IQ (e.g. melanin) can show up in a GWAS for IQ. You're gonna end up editing embryos to have lower melanin and claiming their IQ is 150.
I mostly think we need smarter people to have a shot at aligning ASI, and I'm not overwhelmingly confident ASI is coming within 20 years, so I think it makes sense for someone to have the ball on this.
I'm curious about the basis on which you are assigning a probability of causality without a method like Mendelian randomisation, or something that tries to assign a probability of effect by interpreting the biology, e.g. mapping the output of something like SnpEff to an approximate probability of effect.
Using finemapping. I.e. assuming a model where nonzero additive effects are sparsely distributed among SNPs, you can do Bayesian math to infer how probable each SNP is to have a nonzero effect and its expected effect size conditional on observed GWAS results. Things like SnpEff can further help by giving you a better prior.
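On the "better prior" point, a hypothetical sketch of how annotations could feed in (the enrichment multipliers are invented; HIGH/MODERATE/LOW/MODIFIER are SnpEff's putative-impact categories):

```python
# Hypothetical: map a SNP's SnpEff putative-impact annotation to a prior
# inclusion probability for finemapping. Multipliers are made up.
BASE_PRIOR = 0.01   # assumed genome-wide prior P(SNP is causal)
ENRICHMENT = {"HIGH": 20.0, "MODERATE": 5.0, "LOW": 2.0, "MODIFIER": 1.0}

def snp_prior(impact: str) -> float:
    """Annotation-informed prior probability that a SNP is causal."""
    return min(1.0, BASE_PRIOR * ENRICHMENT.get(impact, 1.0))

print(snp_prior("HIGH"), snp_prior("MODIFIER"))  # 0.2 0.01
```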
The SNP heritability estimates for IQ (h^2 ≈ 0.2) are primarily based on a low-quality test that has a test-retest reliability of 0.6, compared to ~0.9 for a gold-standard IQ test. So a simple calculation to adjust for this gets you a predicted SNP heritability of ~~0.2 * (0.9 / 0.6)^2 = 0.45~~ 0.2 * (0.9 / 0.6) = 0.30 for a gold-standard IQ test, which matches the SNP heritability of height. As for the rest of the missing heritability: variants with frequency less than 1% aren't accounted for by the SNP heritability estimate, and they might contribute a d...
Could you expand on what sense you have 'taken this into account' in your models? What are you expecting to achieve by editing non-causal SNPs?
If we have a SNP that we're 30% sure is causal, we expect to get 30% of its effect conditional on it being causal. Modulo any weird interaction stuff from rare haplotypes, which is a potential concern with this approach.
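In expectation that's just the product (numbers matching the 30% example above, with β standing in for the conditional effect size):

$$\mathbb{E}[\text{effect}] = P(\text{causal}) \cdot \mathbb{E}[\text{effect} \mid \text{causal}] = 0.3 \times \beta.$$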
The first paper I linked is about epistatic effects on the additivity of QTLs for a quantitative trait, specifically heading date in rice, so this is evidence for this sort of effect on such a trait.
I didn't read your first comment carefully enough; I'll take a look at this.
I definitely don't expect additivity holds out to like +20 SDs. We'd be aiming for more like +7 SDs.
I think I'm at <10% that non-enhanced humans will be able to align ASI in time, and if I condition on them succeeding somehow I don't think it's because they got AIs to do it for them. Like maybe you can automate some lower level things that might be useful (e.g. specific interpretability experiments), but at the end of the day someone has to understand in detail how the outcome is being steered or they're NGMI. Not sure exactly what you mean by "automating AI safety", but I think stronger forms of the idea are incoherent (e.g. "we'll just get AI X to figure it all out for us" has the problem of requiring X to be aligned in the first place).
Much less impactful than automating AI safety.
I don't think this will work.
So you think that, for >95% of currently living humans, the implementation of their CEV would constitute an S-risk in the sense of being worse than extinction in expectation? This is not at all obvious to me; in what way do you expect their CEVs to prefer net suffering?
Because they might consider other problems more worth their time, since changes in smartness change their values little.
I mean if they care about solving problems at all, and we are in fact correct about AGI ruin, then they should predictably come to view it as the most important problem and start to work on it?
Are you imagining they're super myopic or lazy and just want to think about math puzzles or something? If so, my reply is that even if some of them ended up like that, I'd be surprised if they all ended up like that, and if so that would be a ...
My interpretation is that you're 99% of the way there in terms of work required if you start out with humans rather than creating a de novo mind, even if many/most humans currently or historically are not "aligned". Like, you don't need very many bits of information to end up with a nice "aligned" human. E.g. maybe you lightly select their genome for prosociality + niceness/altruism + wisdom, and treat them nicely while they're growing up, and that suffices for the majority of them.
Sure, sounds hard though.
The SNP itself is (usually) not causal. Genotyping arrays select SNPs whose genotypes are correlated with a region around the SNP; the SNP is said to be in linkage with this region, as the two tend to be inherited together when recombination happens in meiosis. This is a matter of degree, and linkage scores allow thresholds to be set for how indicative a SNP is of the genotype of a given region.
This is taken into account by our models, and is why we see such large gains in editing power from increasing data set sizes: we're better able to find the cau...
This paper found that the heritability of most traits is ~entirely additive, supposedly including IQ according to whatever reference I followed to the paper, though I couldn't actually find where in the paper it said/implied that.
IIUC that R = 0.55 number was just the raw correlation between the beta values of the sibling and population GWASes
Actually I don't think this is correct; it accounted for sampling error somehow. I'll need to look into this deeper.
Subtle population stratification not accounted for by the original GWAS could still be an issue, though I don't expect this would inflate the effects very much. If we had access to raw data we could take into account small correlations between distant variants during finemapping, which would automatically handle assortative mating and stratification.
We accounted for inflation of effect sizes due to assortative mating, assuming a mate IQ correlation of 0.4 and total additive heritability of 0.7 for IQ.
IIUC that R = 0.55 number was just the raw correlation between the beta values of the sibling and population GWASes, which is going to be very noisy given the small sample sizes and given that effects are sparse. You can see that the LDSC based estimate is nearly 1, suggesting ~null indirect effects.
If they're that smart, why will they need to be persuaded?
What would it mean for them to have an "ASI slave"? Like having an AI that implements their personal CEV?
(And that's not even addressing how you could get super-smart people to work on the alignment problem).
I mean if we actually succeeded at making people who are +7 SD in a meaningful way, I'd expect that at least a good chunk of them would figure out for themselves that it makes sense to work on it.
In that case I'd repeat GeneSmith's point from another comment: "I think we have a huge advantage with humans simply because there isn't the same potential for runaway self-improvement." If we have a whole bunch of super smart humans of roughly the same level who are aware of the problem, I don't expect the ruthless ones to get a big advantage.
I mean I guess there is some sort of general concern here about how defense-offense imbalance changes as the population gets smarter. Like if there's some easy way to destroy the world that becomes accessible with IQ...
like the fact that any control technique on AI would be illegal because of it being essentially equivalent to brainwashing, such that I consider AIs much more alignable than humans
A lot of (most?) humans end up nice without needing to be controlled / "aligned", and I don't particularly expect this to break if they grow up smarter. Trying to control / "align" them wouldn't work anyway, which is also what I predict will happen with sufficiently smart AI.
I mean hell, figuring out personality editing would probably just make things backfire. People would choose to make their kids more ruthless, not less.
Not at all obvious to me this is true. Do you mean to say a lot of people would, or just some small fraction, and you think a small fraction is enough to worry?
I think I mostly agree with the critique of "pause and do what, exactly?", and appreciate that he acknowledged Yudkowsky as having a concrete plan here. I have many gripes, though.
...Whatever name they go by, the AI Doomers believe the day computers take over is not far off, perhaps as soon as three to five years from now, and probably not longer than a few decades. When it happens, the superintelligence will achieve whatever goals have been programmed into it. If those goals are aligned exactly to human values, then it can build a flourishing world beyond ou
You acknowledge this but I feel you downplay the risk of cancer - an accidental point mutation in a tumour suppressor gene or regulatory region in a single founder cell could cause a tumour.
For each target the likely off-targets can be predicted, allowing one to avoid particularly risky edits. There may still be issues with sequence-independent off-targets, though I believe these are a much larger problem with base editors than with prime editors (which have lower off-target rates in general). Agree that this might still end up being an issue.
...Unless you ar
This seems unduly pessimistic to me. The whole interesting thing about g is that it's easy to measure and correlates with tons of stuff. I'm not convinced there's any magic about FSIQ compared to shoddier tests. There might be important stuff that FSIQ doesn't measure very well that we'd ideally like to select/edit for, but using FSIQ is much better than nothing. Likewise, using a poor man's IQ proxy seems much better than nothing.
This may have missed your point, you seem more concerned about selecting for unwanted covariates than 'missing things', which i...
Non-coding means any sequence that doesn't directly code for proteins. So regulatory stuff would count as non-coding. There tend to be errors (e.g. indels) at the edit site with some low frequency, so the reason we're more optimistic about editing non-coding stuff than coding stuff is that we don't need to worry about frameshift mutations or nonsense mutations which knock-out the gene where they occur. The hope is that an error at the edit site would have a much smaller effect, since the variant we're editing had a very small effect in the first place (and...
LW feature request/idea: something like Quick Takes, but for questions (Quick Questions?). I often want to ask a quick question or for suggestions/recommendations on something, and it feels more likely I'd get a response if it showed up in a Quick Takes like feed rather than as an ordinary post like Questions currently do.
It doesn't feel very right to me to post such questions as Quick Takes, since they aren't "takes". (I also tried this once, and it got downvoted and no responses.)