Posts

Sorted by New

Wiki Contributions

Comments

Sorted by

I am not so sure about that. I am thinking back to the Minnesota Twin Study here, and the related fact that heritability of IQ increases with age (up until age 20, at least). Now, it might be that we're just not great at measuring childhood IQ, or that childhood IQ and adult IQ are two subtly different things. 

But it certainly looks as if there's factors related to adult brain plasticity, motivation (curiosity, love of reading, something) that continue to affect IQ development at least until the age of 18. 

Some points:

  • the 23andme dataset is probably not as useful as you project. They are working from a fixed set of variants, not full genomes or even a complete set of SNPs known to vary. There are certainly many SNPs of interest that just aren't in their data.
  • in projecting the gains from discovering further variants that affect intelligence, it's not clear whether you've accounted for the low hanging fruit effect. With these statistical approaches, we obviously discover the variants of largest effect first. Adding millions of additional genomes or genotypes will allow us to resolve thousands of additional common variants, but they are going to be the ones that have really tiny effect sizes.
  • On the other hand (contradicting point 2 somewhat), quite a substantial fraction of variation in intelligence and other traits is likely due to the genetic load - rare mutations, some likely of substantial effect, all deleterious by definition. Identifying these and their effects is a thorny statistical problem due to their rarity, but if we can, they would actually be very promising edit targets. The advantage being the likely lack of negative side effects, and the fact that the top few for any person would likely be of large effect. Some of them are also probably wide-effect boosts, fixes to fundamental bits of cellular machinery! Downside is that this would be a custom targeting job per person.
  • The use of '800 IQ' is a little grating. The tests only go to 200 or 210 (and are not convincingly normed at that level). Still, fully superhuman, entirely outside the normal human trait range... I guess it's a fair way to gesture at that.
  • our predictive models for IQ work significantly better for European or white populations because they were trained on that population. This implies that obtaining a bunch of data for Asian and African populations would allow us to identify additional targets. It surprises me that we don't have some huge dataset from China, but at least we recently developed a 100K+ genotype of Han individuals, which should turn up some additional hits.

Overall, really promising direction. I appreciate the writeup on new and improved edit methods - I had not been following the field closely, and was unaware we had advanced this much on the previously state of the art CRISPR/Cas9.

Answer by bbartlog42

We tried to model a complex phenomenon using a single scalar, and this resulted in confusion and clouded intuition.
It's sort of useful for humans because of restriction of range, along with a lot of correlation that comes from looking only at human brain operations when talking about 'g' or IQ or whatever. 
Trying to think in terms of a scalar 'intelligence' measure when dealing with non-human intelligences is not going to be very productive.

It's conceivable that current level of belief in homeopathy is net positive in impact. The idea here would be that the vast majority of people who use it will follow up with actual medical treatment if homeopathy doesn't solve their problem.

Assume also that medical treatment has non-trivial risks compared to taking sugar pills and infinitely dilute solutions (stats on deaths due to medical error support this thesis). And further that some conditions just get better by themselves. Now you have a situation where, just maybe, doing an initial 'treatment' with homeopathy gives you better outcomes because it avoids the risks associated with going to the doctor.

Probably not true. But the lack of any striking death toll from this relatively widespread belief makes me wonder. The modal homeopathy fan (of those I've personally known) has definitely been more along the lines of 'mild hypochondriac who feels reassured by their bank of strangely labeled sugar pills' than 'fanatic who will die of appendicitis due to complete lack of faith in modern medicine'.

It's not clear to me how you get to deceptive alignment 'that completely supersedes the explicit alignment'. That an AI would develop epiphenomenal goals and alignments, not understood by its creators, that it perceived as useful or necessary to pursue whatever primary goal it had been set, seems very likely. But while they might be in conflict with what we want it to do, I don't see how this emergent behavior could be such that it would be contradict the pursuit of satisfying whatever evaluation function the AI had been trained for in the beginning. Unless of course the AI made what we might consider a stupid mistake.

One design option that I haven't seen discussed (though I have not read everything ... maybe this falls in to the category of 'stupid newbie ideas') is that of trying to failsafe an AI by separating its evaluation or feedback in such a way that it can, once sufficiently superhuman, break in to the 'reward center' and essentially wirehead itself. If your AI is trying to move some calculated value to be as close to 1000 as possible, then once it understands the world sufficiently well it should simply conclude 'aha, by rooting this other box here I can reach nirvana!', follow through, and more or less consider its work complete. To our relief, in this case.

Of course this does nothing to address the problem of AI controlled by malicious human actors, which will likely become a problem well before any takeoff threshold is reached.