All of kman's Comments + Replies

kman62

LW feature request/idea: something like Quick Takes, but for questions (Quick Questions?). I often want to ask a quick question or for suggestions/recommendations on something, and it feels more likely I'd get a response if it showed up in a Quick Takes like feed rather than as an ordinary post like Questions currently do.

It doesn't feel very right to me to post such questions as Quick Takes, since they aren't "takes". (I also tried this once, and it got downvoted and no responses.)

kman20

I'm looking for recommendations for frameworks/tools/setups that could facilitate machine checked math manipulations.

More details:

  • My current use case is to show that an optimized reshuffling of a big matrix computation is equivalent to the original unoptimized expression
    • Need to be able to index submatrices of arbitrarily sized matrices
  • I tried doing the manipulations with some CASes (SymPy and Mathematica), which didn't work at all
    • IIUC the main reason they didn't work is that they couldn't handle the indexing thing
  • I have very little experience with proof a
... (read more)
kman21

I agree with this critique; I think washing machines belong on the "light bulbs and computers" side of the analogy. The analogy has the form:

"germline engineering for common diseases and important traits" : "gene therapy for a rare disease" :: "widespread, transformation uses of electricity" : x

So x should be some very expensive, niche use of electricity that provides a very large benefit to its tiny user base (and doesn't arguably lead to large future benefits indirectly, the way e.g. a niche scientific instrument might via scientific discovery).

kman42

I think you're mixing up max with argmax?

2faul_sname
Oh, indeed I was getting confused between those. So as a concrete example of your proof we could consider the following degenerate example case:

```python
def f(N: int) -> int:
    if N == 0x855bdad365f9331421ab4b13737917cf97b5e8d26246a14c9af1adb060f9724a:
        return 1
    else:
        return 0

def check(x: int, y: float) -> bool:
    return f(x) >= y

def argsat(y: float, max_search: int = 2**64) -> int | None:
    # We postulate that we have this function because P=NP
    if y > 1:
        return None
    elif y <= 0:
        return 0
    else:
        return 0x855bdad365f9331421ab4b13737917cf97b5e8d26246a14c9af1adb060f9724a
```

but we could also replace our degenerate f with e.g. sha256. Is that the gist of your proof sketch?
kman228

Something I didn't realize until now: P = NP would imply that finding the argmax of arbitrary polynomial time (P-time) functions could be done in P-time.

Proof sketch

Suppose you have some polynomial time function f: N -> Q. Since f is P-time, if we feed it an n-bit input x it will output a y with at most max_output_bits(n) bits as output, where max_output_bits(n) is at most polynomial in n. Denote y_max and y_min as the largest and smallest rational numbers encodable in max_output_bits(n) bits.

Now define check(x, y) := f(x) >= y, and argsat(y) := x su

... (read more)
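To make the binary-search step concrete, here is a minimal Python sketch of how the (truncated) argument can be finished. It assumes a hypothetical oracle argsat(y), as defined above, that returns some x with f(x) >= y, or None if no such x exists, plus a resolution finer than the gap between any two distinct outputs of f; the function and parameter names are illustrative, not from the post.

```python
from fractions import Fraction

def argmax_via_argsat(argsat, y_min, y_max, resolution):
    """Binary-search for the largest achievable output value, then return the
    witness input. argsat(y) is the assumed P = NP oracle: it returns some x
    with f(x) >= y, or None if no such x exists. resolution must be smaller
    than the gap between any two distinct outputs of f (for outputs encodable
    in b bits, Fraction(1, 2**(2*b)) suffices)."""
    lo, hi = Fraction(y_min), Fraction(y_max)
    best = argsat(lo)          # every input satisfies f(x) >= y_min
    assert best is not None
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        x = argsat(mid)
        if x is not None:      # some input achieves at least mid
            lo, best = mid, x
        else:                  # no input reaches mid
            hi = mid
    return best                # f(best) is within resolution of the max, hence
                               # equals it, given the gap assumption

# With the degenerate f/argsat from the reply above, this recovers the special
# input: argmax_via_argsat(argsat, y_min=0, y_max=1, resolution=Fraction(1, 4))
```

The loop runs O(log((y_max - y_min) / resolution)) times, which stays polynomial in n as long as max_output_bits(n) does.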
7TsviBT
This is a cool fact I hadn't been aware of. An alternative sketch (it might be nonsense, not being careful): If P = NP then P = co-NP. The problem of "given x, say yes if x is the argmax of f" is co-NP, because you can polynomially verify "no" answers: a witness is y such that f(y) > f(x). So this is in P, i.e. we can polynomially answer "is this the argmax". Since we can polynomially verify this, the map from polytime TMs for some f, to argmax, is in FNP. (<- This step is the one I'm slightly unconfident about but seems right.) So this map is in P since FNP = P (presumably). I think this is spiritually quite similar to your proof, except that the binary search thing isn't necessary?
2faul_sname
Finding the input x such that f(x) == argmax(f(x)) is left as an exercise for the reader though.
kman191
  1. The biggest discontinuity is applied at the threshold between spike and slab. Imagine we have mutations that before shrinkage have the values +4 IQ, +2 IQ, +1.9 IQ, and 1.95 is our spike vs. slab cutoff. Furthermore, let's assume that the slab shrinks 25% of the effect. Then we get 4→3, 2→1.5, 1.9→0, meaning we penalize our +2 IQ mutation much less than our +1.9 mutation, despite their similar sizes, and we penalize our +4 IQ effect size more than the +2 IQ effect size, despite it having the biggest effect. This creates an arbitrary cliff where similar-siz
... (read more)
9Jan Christian Refsgaard
One of us is wrong or confused, and since you are the geneticist it is probably me, in which case I should not have guessed how it works from statistical intuition but read more; I did not because I wanted to write my post before people forgot yours. I assumed the spike and slab were across all SNPs; it sounds like it is per LD region, which is why you have multiple spikes? I also assumed the slab part would shrink the original effect size, which was what I was mainly interested in. You are welcome to pm me to get my discord name or phone number if a quick call could give me the information to not misrepresent what you are doing. My main critique is that I think there is insufficient shrinkage, so it's the shrinkage properties I am mostly interested in getting right :)
kman20

My guess is that peak intelligence is a lot more important than sheer numbers of geniuses for solving alignment. At the end of the day someone actually has to understand how to steer the outcome of ASI, which seems really hard and no one knows how to verify solutions. I think that really hard (and hard to verify) problem solving scales poorly with having more people thinking about it.

Sheer numbers of geniuses would be one effect of raising the average, but I'm guessing the "massive benefits" you're referring to are things like coordination ability and qual... (read more)

kman*20

Emotional social getting on with people vs logic puzzle solving IQ. 

Not sure I buy this, since IQ is usually found to positively correlate with purported measures of "emotional intelligence" (at least when any sort of ability (e.g. recognizing emotions) is tested; the correlation seems to go away when the test is pure self reporting, as in a personality test). EDIT: the correlation even with ability-based measures seems to be less than I expected.

Also, smarter people seem (on average) better at managing interpersonal issues in my experience (anecdotal... (read more)

kman*40

So on one hand, I sort of agree with this. For example, I think people giving IQ tests to LLMs and trying to draw strong conclusions from that (e.g. about how far off we are from ASI) is pretty silly. Human minds share an architecture that LLMs don't share with us, and IQ tests measure differences along some dimension within the space of variation of that architecture, within our current cultural context. I think an actual ASI will have a mind that works quite differently and will quickly blow right past the IQ scale, similar to your example of eagles and ... (read more)

kman20

I'm sort of confused by the image you posted? Von Neumann existed, and there are plenty of very smart people well beyond the "Nerdy programmer" range.

But I think I agree with your overall point about IQ being under stabilizing selection in the ancestral environment. If there was directional selection, it would need to have been weak or inconsistent; otherwise I'd expect the genetic low hanging fruit we see to have been exhausted already. Not in the sense of all current IQ-increasing alleles being selected to fixation, but in the sense of the tradeoffs beco... (read more)

2Donald Hobson
Yes. I expect extreme cases of human intelligence to come from a combination of fairly good genes, and a lot of environmental and developmental luck. I.e. if you took 1000 clones of Von Neumann, you still probably wouldn't get that lucky again. (Although it depends on the level of education too) Some ideas about what the tradeoffs might be.  Emotional social getting on with people vs logic puzzle solving IQ.  Engineer parents are apparently more likely to have autistic children. This looks like a tradeoff to me. Too many "high IQ" genes and you risk autism. How many angels can dance on the head of a pin. In the modern world, we have complicated elaborate theoretical structures that are actually correct and useful. In the pre-modern world, the sort of mind that now obsesses about quantum mechanics would be obsessing about angels dancing on pinheads or other equally useless stuff.
kman40

Don't have much to say on it right now, I really need to do a deep dive into this at some point.

kman50

You should show your calculation or your code, including all the data and parameter choices. Otherwise I can't evaluate this.

The code is pretty complicated and not something I'd expect a non-expert (even a very smart one) to be able to quickly check over; it's not just a 100 line python script. (Or even a very smart expert for that matter, more like anyone who wasn't already familiar with our particular codebase.) We'll likely open source it at some point in the future, possibly soon, but that's not decided yet. Our finemapping (inferring causal effects) p... (read more)

kman60

I know the answers to those questions. But I’m not the audience that needs to be convinced.

The audience that needs to be convinced isn't the target audience of this post. But overall your point is taken.

kman20

I'll need to do a deep dive to understand the methods of the first paper, but isn't this contradicted by the recent Tan et al. paper you linked finding SNP heritability of 0.19 for both direct and population effects of intelligence (which matches Savage Jansen 2018)? They also found ~perfect LDSC correlation between direct and population effects, which would imply the direct and population SNP heritabilities are tagging the exact same genetic effects. (Also interesting that 0.19 is exactly in the middle of 0.24 and 0.14, not sure what to make of that if anything).

kman30

With a method similar to this. You can easily compute the exact likelihood function P(GWAS results | SNP effects), which when combined with a prior over SNP effects (informed by what we know about the genetic architecture of the trait) gives you a posterior probability of each SNP being causal (having nonzero effect), and its expected effect size conditional on being causal (you can't actually calculate the full posterior since there are 2^|SNPs| possible combinations of SNPs with nonzero effects, so you need to do some sort of MCMC or stochastic search). We may make a post going into more detail on our methods at some point.
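A toy, self-contained sketch of that spike-and-slab logic (emphatically not the actual pipeline: it uses simulated individual-level data rather than GWAS summary statistics, enumerates every causal configuration exactly instead of using MCMC or stochastic search, and all parameter values are made-up illustrations):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Simulate a tiny dataset: n individuals, m SNPs, two of which are truly causal.
n, m = 2000, 8
maf = rng.uniform(0.1, 0.5, size=m)
X = rng.binomial(2, maf, size=(n, m)).astype(float)
X -= X.mean(axis=0)                      # center genotypes
true_beta = np.zeros(m)
true_beta[[1, 5]] = [0.3, -0.2]
sigma = 1.0                              # residual SD (treated as known here)
y = X @ true_beta + rng.normal(0, sigma, size=n)

# Spike-and-slab prior over per-SNP effects.
p_causal = 0.1                           # prior probability of a nonzero effect
tau = 0.3                                # prior SD of a nonzero effect

def config_stats(idx):
    """Log marginal likelihood P(y | configuration) and posterior mean of the
    effects for the configuration whose causal SNPs are listed in idx, with
    the slab prior N(0, tau^2) integrated out analytically."""
    k = len(idx)
    if k == 0:
        return -0.5 * (n * np.log(2 * np.pi * sigma**2) + y @ y / sigma**2), np.array([])
    Xg = X[:, idx]
    A = np.eye(k) / tau**2 + Xg.T @ Xg / sigma**2
    b = Xg.T @ y / sigma**2
    mean = np.linalg.solve(A, b)
    _, logdetA = np.linalg.slogdet(A)
    quad = y @ y / sigma**2 - b @ mean
    ll = -0.5 * (n * np.log(2 * np.pi) + 2 * n * np.log(sigma)
                 + 2 * k * np.log(tau) + logdetA + quad)
    return ll, mean

# Exact posterior over all 2^m causal configurations (infeasible at GWAS scale,
# hence the MCMC / stochastic search mentioned above).
results = []
for gamma in itertools.product([0, 1], repeat=m):
    idx = np.flatnonzero(gamma)
    ll, mean = config_stats(idx)
    log_prior = len(idx) * np.log(p_causal) + (m - len(idx)) * np.log(1 - p_causal)
    results.append((ll + log_prior, idx, mean))

log_post = np.array([r[0] for r in results])
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Per-SNP posterior inclusion probability and effect size conditional on causality.
pip = np.zeros(m)
cond_effect = np.zeros(m)
for weight, (_, idx, mean) in zip(post, results):
    pip[idx] += weight
    cond_effect[idx] += weight * mean
cond_effect = np.divide(cond_effect, pip, out=np.zeros(m), where=pip > 0)

for j in range(m):
    print(f"SNP {j}: true {true_beta[j]:+.2f}  PIP {pip[j]:.2f}  "
          f"E[beta | causal] {cond_effect[j]:+.3f}")
```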

LGS134

You should show your calculation or your code, including all the data and parameter choices. Otherwise I can't evaluate this.

I assume you're picking parameters to exaggerate the effects, because just from the exaggerations you've already conceded (0.9/0.6 shouldn't be squared and attenuation to get direct effects should be 0.824), you've already exaggerated the results by a factor of sqrt(0.9/0.6)/0.824 for editing, which is around a 50% overestimate.

I don't think that was deliberate on your part, but I think wishful thinking and the desire to paint a comp... (read more)

kman20

This is based on inferring causal effects conditional on this GWAS. The assumed heritability affects the prior over SNP effect sizes.

5LGS
I don't understand. Can you explain how you're inferring the SNP effect sizes?
kman*100

If evolution has already taken all the easy wins, why do humans vary so much in intelligence in the first place? I don't think the answer is mutation-selection balance, since a good chunk of the variance is explained by additive effects from common SNPs. Further, if you look at the joint distribution over effect sizes and allele frequencies among SNPs, there isn't any clear skew towards rarer alleles being IQ-decreasing.

For example, see the plot below of minor allele frequency vs the effect size of the minor allele. (This is for Educational Attainment, a h... (read more)

1tantrev
@kman - can you explain more about how you did the Bayesian adjustment you did?
2jbash
Who says humans vary all that much in intelligence? Almost all humans are vastly smarter, in any of the ways humans traditionally measure "intelligence", than basically all animals. Any human who's not is in seriously pathological territory, very probably because of some single, identifiable cause. The difference between IQ 100 and IQ 160 isn't like the difference between even a chimp and a human... and chimps are already unusual. Eagles vary in flying speed, but they can all outfly you. Furthermore, eagles all share an architecture adapted to the particular kind of flying they tend to do. There's easily measurable variance among eagles, but there are limits to how far it can go. The eagle architecture flat out can't be extended to hypersonic flight, no matter how much gene selection you do on it. Not even if you're willing to make the sorts of tradeoffs you have to make to get battery chickens.
2Walkabout
I've been over a big educational attainment GWAS, and one of the main problems with them seems to me to be that they make you think that the amount of schooling a human gets is somehow a function of their personal biochemistry. If you really want to look at this, you need to model social effects like availability, quality, and affordability of education, the different mind shapes needed to do well in school for people who are oppressed to different degrees or in different ways, whether people have access to education modalities or techniques shaped to fit their mind, whether the kid is super tall and gets distracted from grad school by a promising career in professional basketball, whether or not their mental illnesses are given proper care, and so on. If you measure how many years of education are afforded to a random human you mostly get social factors. If you're looking at the same big EA GWAS that threw out all non-Europeans that I'm thinking of, they didn't look at any of that. I don't believe a sufficient model is common practice, because as noted in the thread there is effectively no applied branch of the field that would expose the insufficiency of the common models.
2Donald Hobson
That is good evidence that we aren't in a mutation selection balance.  There are also game theoretic balances. Here is a hypothesis that fits my limited knowledge of genetics, and is consistent with the data as I understand it and implies no huge designer baby gains. It's a bit of a worst plausible case hypothesis. But suppose we were in a mutation selection balance, and then there was an environmental distribution shift. The surrounding nutrition and information environment has changed significantly between the environment of evolutionary adaptiveness and today.  A large fraction of what was important in the ancestral world was probably quite emotion based. E.g. calming down other tribe members. Winning friends and influencing people.  In the modern world, abstract logic and maths are somewhat more important than they were, although the emotional stuff still matters too.  IQ tests mostly test the more abstract logical stuff.  Now suppose that the optimum genes aren't that different compared to ambient genetic variation. Say 3 standard deviations.
kman30

But isn't the R^2 the relevant measure?

Not for this purpose! The simulation pipeline is as follows: the assumed h^2 and number of causal variants are used to generate the genetic effects -> generate simulated GWASes for a range of sample sizes -> infer causal effects from the observed GWASes -> select top expected effect variants for up to N (expected) edits.
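A rough, hypothetical sketch of that pipeline (not the post's code: the architecture parameters and sample size are invented, and a single normal-prior shrinkage stands in for the spike-and-slab finemapping step):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed genetic architecture and study design (illustrative numbers only).
m_snps, n_causal, h2 = 10_000, 1_000, 0.3
gwas_n, n_edits = 500_000, 100

# 1. Assumed h^2 and number of causal variants -> sparse true effects on
#    standardized genotypes, scaled so the effects sum to h^2 in variance.
beta = np.zeros(m_snps)
causal = rng.choice(m_snps, n_causal, replace=False)
beta[causal] = rng.normal(0.0, np.sqrt(h2 / n_causal), n_causal)

# 2. Simulated GWAS at a given sample size: beta_hat = beta + sampling noise,
#    with per-SNP standard error roughly 1/sqrt(N) for a standardized trait.
se = 1.0 / np.sqrt(gwas_n)
beta_hat = beta + rng.normal(0.0, se, m_snps)

# 3. Infer causal effects from the observed GWAS. A normal prior with the
#    average per-SNP variance is used here as a crude stand-in for finemapping.
prior_var = h2 / m_snps
beta_post = beta_hat * prior_var / (prior_var + se**2)

# 4. Select the top expected-effect variants as the N edit targets and compare
#    the expected gain with what the true effects would actually deliver.
targets = np.argsort(-np.abs(beta_post))[:n_edits]
expected_gain = np.abs(beta_post[targets]).sum()
realized_gain = (np.sign(beta_post[targets]) * beta[targets]).sum()
print(f"expected gain from {n_edits} edits: {expected_gain:.3f}")
print(f"realized gain from {n_edits} edits: {realized_gain:.3f}")
```

Sweeping gwas_n over a range of sample sizes, per the pipeline description above, gives gain as a function of GWAS size.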

6LGS
I'm talking about this graph: What are the calculations used for this graph? Text says to see the appendix but the appendix does not actually explain how you got this graph.
kman50

The paper you called largest ever GWAS gave a direct h^2 estimate of 0.05 for cognitive performance. How are these papers getting 0.2? I don't understand what they're doing. Some type of meta analysis?

You're mixing up h^2 estimates with predictor R^2 performance. It's possible to get an estimate of h^2 with much less statistical power than it takes to build a predictor that good.

The test-retest reliability you linked has different reliabilities for different subtests. The correct adjustment depends on which subtests are being used. If cognitive performance

... (read more)
6LGS
  Thanks. I understand now. But isn't the R^2 the relevant measure? You don't know which genes to edit to get the h^2 number (nor do you know what to select on). You're doing the calculation 0.2*(0.9/0.6)^2 when the relevant calculation is something like 0.05*(0.9/0.6). Off by a factor of 6 for the power of selection, or sqrt(6)=2.45 for the power of editing
kman40

I don't quite understand your numbers in the OP but it feels like you're inflating them substantially. Is the full calculation somewhere?

Not quite sure which numbers you're referring to, but if it's the assumed SNP heritability, see the below quote of mine from another comment talking about missing heritability for IQ:

The SNP heritability estimates for IQ of (h^2 = ~0.2) are primarily based on a low quality test that has a test-retest reliability of 0.6, compared to ~0.9 for a gold-standard IQ test. So a simple calculation to adjust for this gets you a pre

... (read more)
4LGS
  The paper you called largest ever GWAS gave a direct h^2 estimate of 0.05 for cognitive performance. How are these papers getting 0.2? I don't understand what they're doing. Some type of meta analysis? The test-retest reliability you linked has different reliabilities for different subtests. The correct adjustment depends on which subtests are being used. If cognitive performance is some kind of sumscore of the subtests, its reliability would be higher than for the individual subtests. Also, I don't think the calculation 0.2*(0.9/0.6)^2 is the correct adjustment. A test-retest correlation is already essentially the square of a correlation of the test with an underlying latent factor (both the test AND the retest have error). E.g. if a test T can be written as T=aX+sqrt(1-a^2)E where X is ability and E is error (all with standard deviation 1 and the error independent of the ability), then a correlation of T with a resample of T (with new independent error but same ability) would be a^2. But the adjustment to h^2 should be proportional to a^2, so it should be proportional to the test-retest correlation, not the square of the test-retest correlation. Am I getting this wrong?
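For reference, a quick sanity-check derivation of the reliability model in the reply above (the notation follows that comment; this is not a claim about which adjustment the post actually used). With X, E, E' independent and unit-variance:

```latex
\begin{align*}
T  &= aX + \sqrt{1-a^{2}}\,E, & T' &= aX + \sqrt{1-a^{2}}\,E', \\
\operatorname{Var}(T) &= a^{2} + (1-a^{2}) = 1, & \operatorname{Corr}(T, T') &= \operatorname{Cov}(T, T') = a^{2}.
\end{align*}
```

Since only the X component can carry genetic signal, the SNP heritability observed through T is attenuated by the same factor a^2, i.e. by the test-retest correlation itself; moving from a reliability-0.6 test to a reliability-0.9 one therefore rescales h^2 by the single factor 0.9/0.6 = 1.5, not by (0.9/0.6)^2 = 2.25.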
kman90

For cognitive performance, the ratio was better, but it's not 0.824, it's 0.824^2 = 0.68.

That's variance explained. I was talking about effect size attenuation, which is what we care about for editing.

I checked supplementary table 10, and it says that the "direct-population ratio" is 0.656, not 0.824. So quite possibly the right value is 0.656^2 = 0.43 even for cognitive performance.

Supplementary table 10 is looking at direct and indirect effects of the EA PGI on other phenotypes. The results for the Cog Perf PGI are in supplementary table 13.

5LGS
Thanks! I understand their numbers a bit better, then. Still, direct effects of cognitive performance explain 5% of variance. Can't multiply the variance explained of EA by the attenuation of cognitive performance!    Do you have evidence for direct effects of either one of them being higher than 5% of variance?   I don't quite understand your numbers in the OP but it feels like you're inflating them substantially. Is the full calculation somewhere?
kman10

sadism and wills to power are baked into almost every human mind (with the exception of outliers of course). force multiplying those instincts is much worse than an AI which simply decides to repurpose the atoms in a human for something else.

I don't think the result of intelligence enhancement would be "multiplying those instincts" for the vast majority of people; humans don't seem to end up more sadistic as they get smarter and have more options.

i would argue that everyone dying is actually a pretty great ending compared to hyperexistential risks. it is e

... (read more)
kman92

The IQ GWAS we used was based on only individuals of European ancestry, and ancestry principal components were included as covariates as is typical for GWAS. Non-causal associations from subtler stratification is still a potential concern, but I don't believe it's a terribly large concern. The largest educational attainment GWAS did a comparison of population and direct effects for a "cognitive performance" PGI and found that predicted direct (between sibling) effects were only attenuated by a factor of 0.824 compared to predicted population level effects.... (read more)

7LGS
You should decide whether you're using a GWAS on cognitive performance or on educational attainment (EA). This paper you linked is using a GWAS for EA, and finding that very little of the predictive power was direct effects. Exactly the opposite of your claim: Then they compare this to cognitive performance. For cognitive performance, the ratio was better, but it's not 0.824, it's 0.824^2 = 0.68. But actually, even this is possibly too high: the table in figure 4 has a ratio that looks much smaller than this, and refers to supplementary table 10 for numbers. I checked supplementary table 10, and it says that the "direct-population ratio" is 0.656, not 0.824. So quite possibly the right value is 0.656^2 = 0.43 even for cognitive performance. Why is the cognitive performance number bigger? Well, it's possibly because there's less data on cognitive performance, so the estimates are based on more obvious or easy-to-find effects. The final, predictive power of the direct effects for EA and for cognitive performance is similar, around 3% of the variance, if I'm reading it correctly (not sure about this). So the ratios are somewhat different, but the population GWAS predictive power is also somewhat different in the opposite direction, and these mostly cancel out.
kman70

Plain GWAS, since there aren't any large sibling GWASes. What's the basis for the estimates being much lower and how would we properly adjust for them?

LGS11-3

Your OP is completely misleading if you're using plain GWAS!

GWAS is an association -- that's what the A stands for. Association is not causation. Anything that correlates with IQ (eg melanin) can show up in a GWAS for IQ. You're gonna end up editing embryos to have lower melanin and claiming their IQ is 150

kman65

I mostly think we need smarter people to have a shot at aligning ASI, and I'm not overwhelmingly confident ASI is coming within 20 years, so I think it makes sense for someone to have the ball on this.

1momom2
In that case, per my other comment, I think it's much more likely that superbabies concern only a small fraction of the population and exacerbates inequality without bringing the massive benefits that a generally more capable population would. Do you think superbabies would be put to work on alignment in a way that makes a difference due to geniuses driving the field? I'm having trouble understanding how concretely you think superbabies can lead to significantly improved chance of helping alignment.
kman52

I'm curious about the basis on which you are assigning a probability of causality without a method like mendelian randomisation, or something that tries to assign a probability of an effect based on interpreting the biology like a coding of the output of something like SnpEff to an approximate probability of effect.

Using finemapping. I.e. assuming a model where nonzero additive effects are sparsely distributed among SNPs, you can do Bayesian math to infer how probable each SNP is to have a nonzero effect and its expected effect size conditional on observed GWAS results. Things like SnpEff can further help by giving you a better prior.

3RichardJActon
(For people reading this thread who want an intro to finemapping this lecture is a great place to start for a high level overview https://www.youtube.com/watch?v=pglYf7wocSI)
kman*71

The SNP heritability estimates for IQ of (h^2 = ~0.2) are primarily based on a low quality test that has a test-retest reliability of 0.6, compared to ~0.9 for a gold-standard IQ test. So a simple calculation to adjust for this gets you a predicted SNP heritability of ~~0.2 * (0.9 / 0.6)^2 = 0.45~~ 0.2 * (0.9 / 0.6) = 0.30 for a gold standard IQ test, which matches the SNP heritability of height. As for the rest of the missing heritability: variants with frequency less than 1% aren't accounted for by the SNP heritability estimate, and they might contribute a d... (read more)

kman40

Could you expand on what sense you have 'taken this into account' in your models? What are you expecting to achieve by editing non-causal SNPs?

If we have a SNP that we're 30% sure is causal, then in expectation we get 30% of the effect it would have conditional on being causal. Modulo any weird interaction stuff from rare haplotypes, which is a potential concern with this approach.

The first paper I linked is about epistatic effects on the additivity of QTLs for a quantitative trait, specifically heading date in rice, so this is evidence for this sort of effect on such a trait.

I didn't read your first comment carefully enough; I'll take a look at this.

3RichardJActon
I'm curious about the basis on which you are assigning a probability of causality without a method like mendelian randomisation, or something that tries to assign a probability of an effect based on interpreting the biology like a coding of the output of something like SnpEff to an approximate probability of effect. The logic of 30% of its effect based on 30% chance it's causal only seems like it will be pretty high variance and only work out over a pretty large number of edits. It is also assuming no unexpected effects of the edits to SNPs that are non-causal for whatever trait you are targeting but might do something else when edited.
4TsviBT
Can you comment your current thoughts on rare haplotypes?
kman87

I definitely don't expect additivity holds out to like +20 SDs. We'd be aiming for more like +7 SDs.

2kave
From population mean or from parent mean?
kman10

I think I'm at <10% that non-enhanced humans will be able to align ASI in time, and if I condition on them succeeding somehow I don't think it's because they got AIs to do it for them. Like maybe you can automate some lower level things that might be useful (e.g. specific interpretability experiments), but at the end of the day someone has to understand in detail how the outcome is being steered or they're NGMI. Not sure exactly what you mean by "automating AI safety", but I think stronger forms of the idea are incoherent (e.g. "we'll just get AI X to figure it all out for us" has the problem of requiring X to be aligned in the first place).

2Noosphere89
As for how a plan to automate AI safety would work out in practice, assuming a relatively strong version of the concept, see the post below; another post by the same author, talking more about the big risks discussed in the comments, is forthcoming: https://www.lesswrong.com/posts/TTFsKxQThrqgWeXYJ/how-might-we-safely-pass-the-buck-to-ai In general, I think the crux is that in most timelines (at a lower bound, 65-70%) that have AGI developed relatively soon (so timelines from 2030-2045, roughly), where the alignment problem isn't solvable by default/is at least non-trivially tricky to solve, conditioning on alignment success looks more like "we've successfully figured out how to prepare for AI automation of everything, and we managed to use alignment and control techniques well enough that we can safely pass most of the effort to AI", rather than other end states like "humans are deeply enhanced" or "lawmakers actually coordinated to pause AI, and are actually giving funding to alignment organizations such that we can make AI safe."
kman32

Much less impactful than automating AI safety.

I don't think this will work.

0Noosphere89
How much probability do you assign to automating AI safety not working in time? Because I believe that preparing to automate AI safety is probably the highest-value work in terms of pure ability to reduce X-risk probability, assuming it does work, so I assign much higher EV to automating AI safety relative to other approaches.
kman20

So you think that, for >95% of currently living humans, the implementation of their CEV would constitute an S-risk in the sense of being worse than extinction in expectation? This is not at all obvious to me; in what way do you expect their CEVs to prefer net suffering?

kman10

Because they might consider that other problems are more worth their time, since smartness changes change their values little.

I mean if they care about solving problems at all, and we are in fact correct about AGI ruin, then they should predictably come to view it as the most important problem and start to work on it?

Are you imagining they're super myopic or lazy and just want to think about math puzzles or something? If so, my reply is that even if some of them ended up like that, I'd be surprised if they all ended up like that, and if so that would be a ... (read more)

2Noosphere89
More so that I'm imagining they might not even have heard of the argument, and it's helpful to note that people like Terence Tao, Timothy Gowers and more are all excellent people at their chosen fields, but most people that have a big impact on the world don't go into AI alignment. Remember, superintelligence is not omniscience. So I don't expect them to be self motivated to work on this specific problem without at least a little persuasion. I'd expect a few superintelligent adults to join alignment efforts, but nowhere near thousands or tens of thousands, and I'd upper bound it at 300-500 new researchers at most in 15-25 years. Much less impactful than automating AI safety.
kman10

My interpretation is that you're 99% of the way there in terms of work required if you start out with humans rather than creating a de novo mind, even if many/most humans currently or historically are not "aligned". Like, you don't need very many bits of information to end up with a nice "aligned" human. E.g. maybe you lightly select their genome for prosociality + niceness/altruism + wisdom, and treat them nicely while they're growing up, and that suffices for the majority of them.

2Noosphere89
I'd actually maybe agree with this, though with the caveat that there's a real possibility you will need a lot more selection/firepower as a human gets smarter, because you lack the ability to technically control humans in the way you can control AIs.
2TsviBT
Also true, though maybe only for O(99%) of people.
kman20

Sure, sounds hard though.

kman30

The SNP itself is (usually) not causal. Genotyping arrays select SNPs the genotype of which is correlated with a region around the SNP; they are said to be in linkage with this region, as this region tends to be inherited together when recombination happens in meiosis. This is a matter of degree, and linkage scores allow thresholds to be set for how indicative a SNP is about the genotype of a given region.

This is taken into account by our models, and is why we see such large gains in editing power from increasing data set sizes: we're better able to find the cau... (read more)

7RichardJActon
Could you expand on what sense you have 'taken this into account' in your models? What are you expecting to achieve by editing non-causal SNPs? The first paper I linked is about epistatic effects on the additivity of QTLs for a quantitative trait, specifically heading date in rice, so this is evidence for this sort of effect on such a trait. The general problem is that without a robust causal understanding of what an edit does it is very hard to predict what sorts of problems might arise from novel combinations of variants in a haplotype. That's just the nature of complex systems: a single incorrect base in the wrong place may have no effect or cause a critical cascading failure. You don't know until you test it or have characterized the system so well you can graph out exactly what is going to happen. Just testing it in humans and seeing what happens is eventually going to hit something detrimental. When you are trying to do enhancement you tend to need a positive expectation that it will be safe, not just no reason to think it won't be. Many healthy people would be averse to risking good health for their kid, even at low probability of a bad outcome.
kman20

This paper found that the heritability of most traits is ~entirely additive, supposedly including IQ according to whatever reference I followed to the paper, though I couldn't actually find where in the paper it said/implied that.

5TsviBT
And then suddenly it's different for personality? Kinda weird.
kman20

IIUC that R = 0.55 number was just the raw correlation between the beta values of the sibling and population GWASes

Actually I don't think this is correct, it accounted for sampling error somehow. I'll need to look into this deeper.

kman40

Subtle population stratification not accounted for by the original GWAS could still be an issue, though I don't expect this would inflate the effects very much. If we had access to raw data we could take into account small correlations between distant variants during finemapping, which would automatically handle assortative mating and stratification.

kman*40

We accounted for inflation of effect sizes due to assortative mating, assuming a mate IQ correlation of 0.4 and total additive heritability of 0.7 for IQ.

IIUC that R = 0.55 number was just the raw correlation between the beta values of the sibling and population GWASes, which is going to be very noisy given the small sample sizes and given that effects are sparse. You can see that the LDSC based estimate is nearly 1, suggesting ~null indirect effects.

1Kris Moore
I think you should take seriously that in the first paper linked in my comment, the population-wide SNP heritability for cognitive ability is estimated at 0.24 and the within-sibship heritability at 0.14. This is very far from the 0.7 estimate from twin studies. While a perfect estimate of direct additive heritability would be higher than 0.14, I don't think that rare variants (and gene-gene interactions, but this would no longer be additive heritability) would get you anywhere close to 0.7. Note also that UK Biobank with its purportedly poor IQ test represents only ~30% of the sample size in that paper. Instead, I think it is becoming clear that traditional twin studies made overly strong assumptions about shared and non-shared environments, such that they over-estimated the contribution of genetics to all kinds of traits from height to blood creatinine concentration (compare gold-standard RDR estimates vs twin estimates here). As implied in my original comment, this is likely especially true for traits strongly mediated by society and behaviour. I find it somewhat counter-intuitive, but this kind of finding keeps cropping up again and again in papers that estimate direct heritability with the most current methods.
2kman
Actually I don't think this is correct, it accounted for sampling error somehow. I'll need to look into this deeper.
4kman
Subtle population stratification not accounted for by the original GWAS could still be an issue, though I don't expect this would inflate the effects very much. If we had access to raw data we could take into account small correlations between distant variants during finemapping, which would automatically handle assortative mating and stratification.
kman10

If they're that smart, why will they need to be persuaded?

2Noosphere89
Because they might consider that other problems are more worth their time, since smartness changes change their values little. And maybe they believe that AI alignment isn't impactful for technical/epistemic reasons. I'm confused/surprised I need to make this point, because I don't automatically think they will be persuaded that AI alignment is a big problem they will need to work on, and some effort will likely still need to be required.
kman10

What would it mean for them to have an "ASI slave"? Like having an AI that implements their personal CEV?

1LWLW
Yeah something like that, the ASI is an extension of their will.
kman42

(And that's not even addressing how you could get super-smart people to work on the alignment problem).

I mean if we actually succeeded at making people who are +7 SD in a meaningful way, I'd expect that at least a good chunk of them would figure out for themselves that it makes sense to work on it.

3Noosphere89
That requires either massive personality changes to make them more persuadable, or massive willingness of people to put genetic changes in their germline, and I don't expect either of these to happen before AI automates everything and either takes over, leaving us extinct or humans/other AI control/align AIs successfully. (A key reason for this is that Genesmith admitted that the breakthroughs in germline engineering can't transfer to the somatic side, and that means we'd have to wait 25-30 years in order for it to grow, minimum given that society won't maximally favor the genetically lucky, and that's way beyond most plausible AI timelines at this point)
kman22

In that case I'd repeat GeneSmith's point from another comment: "I think we have a huge advantage with humans simply because there isn't the same potential for runaway self-improvement." If we have a whole bunch of super smart humans of roughly the same level who are aware of the problem, I don't expect the ruthless ones to get a big advantage.

I mean I guess there is some sort of general concern here about how defense-offense imbalance changes as the population gets smarter. Like if there's some easy way to destroy the world that becomes accessible with IQ... (read more)

kman2-1

like the fact that any control technique on AI would be illegal because of it being essentially equivalent to brainwashing, such that I consider AIs much more alignable than humans

 

A lot of (most?) humans end up nice without needing to be controlled / "aligned", and I don't particularly expect this to break if they grow up smarter. Trying to control / "align" them wouldn't work anyway, which is also what I predict will happen with sufficiently smart AI.

6Noosphere89
I think this is my disagreement, in that I don't think that most humans are in fact nice/aligned to each other by default, and the reason why this doesn't lead to catastrophe broadly speaking is a combo of being able to rely on institutions/mechanism design that means even if people are misaligned, you can still get people well off under certain assumptions (capitalism and the rule of law being one such example), combined with the inequalities not being so great that individual humans can found their own societies, except in special cases. Even here, I'd argue that human autocracies are very often misaligned to their citizens' values very severely. To be clear about what I'm not claiming, I'm not saying that alignment is worthless, or alignment always or very often fails, because it's consistent with a world where >50-60% of alignment attempts are successful. This means I'm generally much more scared of very outlier smart humans, for example a +7-12 SD human who held power over a large group of citizens, assuming no other crippling disabilities unless they are very pro-social/aligned to their citizenry. I'm not claiming that alignment will not work, or even that it will very often not work, but rather that the chance of failure is real and the stakes are quite high long-term. (And that's not even addressing how you could get super-smart people to work on the alignment problem).
kman*21

I mean hell, figuring out personality editing would probably just make things backfire. People would choose to make their kids more ruthless, not less. 

Not at all obvious to me this is true. Do you mean to say a lot of people would, or just some small fraction, and you think a small fraction is enough to worry?

2LWLW
I should have clarified, I meant a small fraction and that that is enough to worry. 
kman*103

I think I mostly agree with the critique of "pause and do what, exactly?", and appreciate that he acknowledged Yudkowsky as having a concrete plan here. I have many gripes, though.

Whatever name they go by, the AI Doomers believe the day computers take over is not far off, perhaps as soon as three to five years from now, and probably not longer than a few decades. When it happens, the superintelligence will achieve whatever goals have been programmed into it. If those goals are aligned exactly to human values, then it can build a flourishing world beyond ou

... (read more)
kman41

You acknowledge this but I feel you downplay the risk of cancer - an accidental point mutation in a tumour suppressor gene or regulatory region in a single founder cell could cause a tumour.

For each target the likely off-targets can be predicted, allowing one to avoid particularly risky edits. There may still be issues with sequence-independent off-targets, though I believe these are a much larger problem with base editors than with prime editors (which have lower off-target rates in general). Agree that this might still end up being an issue.

Unless you ar

... (read more)
kman10

This seems unduly pessimistic to me. The whole interesting thing about g is that it's easy to measure and correlates with tons of stuff. I'm not convinced there's any magic about FSIQ compared to shoddier tests. There might be important stuff that FSIQ doesn't measure very well that we'd ideally like to select/edit for, but using FSIQ is much better than nothing. Likewise, using a poor man's IQ proxy seems much better than nothing.

This may have missed your point, you seem more concerned about selecting for unwanted covariates than 'missing things', which i... (read more)

kman30

Non-coding means any sequence that doesn't directly code for proteins. So regulatory stuff would count as non-coding. There tend to be errors (e.g. indels) at the edit site with some low frequency, so the reason we're more optimistic about editing non-coding stuff than coding stuff is that we don't need to worry about frameshift mutations or nonsense mutations which knock-out the gene where they occur. The hope is that an error at the edit site would have a much smaller effect, since the variant we're editing had a very small effect in the first place (and... (read more)

4George3d6
I don't particularly see why the same class of errors in regulatory regions couldn't cause a protein to stop being expressed entirely or accidentally up/down-regulate expression by quite a lot, having similar side effects. But it's getting into the practical details of gene editing implementation so no idea.