It's the same statement, plus an additional set of implications that come from reification. I want to say "mech interp (and AGI alignment and other things) is pre-good-relevant-paradigm". (Which people have been expressing as "pre-paradigm".) I want to say
This is much more easily done with a word for an adjective-like concept. It plants a flag and asserts its Thinghood. People can talk and coordinate about it. Words are good.
FWIW I also feel a bit bad about it being both commercial and also not literally a LW thing. (Both or neither seems less bad.) However, in this particular case, I don't actually feel that bad about it--because this is a site founded by Yudkowsky! So it kind of is a LW thing.
OMG! GEOFF! STOP STATING YOUR DEFERENTIAL PROBABILITY without also stating your first-order probability! If your first-order probability is >50% then say so! Otherwise you're making other people (ELON MUSK!) double count evidence from "other people".
https://www.youtube.com/watch?v=PTF5Up1hMhw&t=2283s
https://tsvibt.blogspot.com/2022/09/dangers-of-deferrence.html
So-and-so gives the following idea: Synthesize a chunk of DNA at the limit of what we can currently do--on the order of 1Mb. Choose the sequence so as to replace some chunk of a human chromosome, and to have lots of target variants. Then, somehow integrate this chunk into the genome of a reproductive cell (gamete, zygote, etc.).
IDK how you'd do the integration--but it might be doable, e.g. with two double-strand breaks (DSBs). I.e. a really really big CRISPR edit. (Or something about transposons that I didn't understand?)
IDK how much this would disrupt the epigenomic state. You could probably avoid hitting many sex-linked imprinting regions, but the state of synthesized DNA might be otherwise weird--though it's not that much DNA so maybe it would be fine.
The fun question is: how much effect can you have? On a really naive model, if there are only 10 or 20 relevant regions on each chromosome, you're probably only getting 1 or 2 per 1Mb window. However, relevant regions will be concentrated away from centromeres and telomeres. Further, relevant regions will be somewhat randomly placed, rather than uniformly spaced (I assume). So there should be a lot of variance in how many relevant regions show up in any given 1Mb window. An interesting math problem.
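To get a rough feel for the math problem, here's a Monte Carlo sketch. All the numbers are made-up placeholders (a ~100Mb chromosome, 15 relevant regions scattered uniformly at random); it asks how many regions the best-placed 1Mb window captures, ignoring the centromere/telomere clustering mentioned above:

```python
import random

def best_window_counts(chrom_len_mb=100.0, n_regions=15, window_mb=1.0,
                       trials=10000, seed=0):
    """Monte Carlo: scatter n_regions uniformly on a chromosome, then find
    how many land in the best-placed window of width window_mb."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        pos = sorted(rng.uniform(0, chrom_len_mb) for _ in range(n_regions))
        best, j = 1, 0
        # Two pointers: for each region i, count regions within window_mb
        # behind it; a maximal window can always end at some region.
        for i in range(n_regions):
            while pos[i] - pos[j] > window_mb:
                j += 1
            best = max(best, i - j + 1)
        counts.append(best)
    return counts

counts = best_window_counts()
print(sum(counts) / len(counts))  # average yield of the best window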
Someone points out that, to demonstrate large effects from strong genomic vectoring, we can look at the successful results of historical animal and crop breeding programs. This is true, and somewhat convincing, but only somewhat, for three reasons:
If we have historical data on the genomes of agricultural lines, we could maybe test this. For example, you could in theory train a PGS on a wild-type population that you expect to be similar to the starting population of some livestock line, and then ask: is the genome-cloud of the current (bred) livestock population exactly what you'd get if you applied strong one-shot genomic vectoring to the wild-type population based on this PGS, or is it different?
Another possibility, which I heard from @kman, is to make a PGS1; then look at the subset of the population that's in some tail of PGS1; then train a new PGS2 on that subset; then compare PGS1 and PGS2. At a coarse level, if they're basically the same, that's some evidence that genomic vectoring should work sort-of far out; if not, that's some evidence against.
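The retraining idea above can be sketched as a toy simulation (hedged heavily: a purely additive model with invented parameters, and marginal per-variant regressions standing in for real PGS training):

```python
import random

def tail_retrain_correlation(n=4000, m=80, tail_frac=0.2, seed=1):
    """Toy check of the PGS1-vs-PGS2 idea: train marginal per-variant
    weights on everyone (PGS1), retrain on the top tail of PGS1 (PGS2),
    and correlate the two weight vectors."""
    rng = random.Random(seed)
    maf = [rng.uniform(0.1, 0.5) for _ in range(m)]
    # Genotypes: 0/1/2 copies of the effect allele per variant.
    G = [[sum(rng.random() < maf[j] for _ in range(2)) for j in range(m)]
         for _ in range(n)]
    beta = [rng.gauss(0, 1) for _ in range(m)]          # true additive effects
    y = [sum(g * b for g, b in zip(row, beta)) + rng.gauss(0, m ** 0.5)
         for row in G]                                   # phenotype + noise

    def marginal_weights(rows, ys):
        k, ybar = len(ys), sum(ys) / len(ys)
        w = []
        for j in range(m):
            col = [row[j] for row in rows]
            gbar = sum(col) / k
            num = sum((g - gbar) * (yv - ybar) for g, yv in zip(col, ys))
            den = sum((g - gbar) ** 2 for g in col)
            w.append(num / den)                          # per-variant OLS slope
        return w

    def corr(a, b):
        k = len(a)
        abar, bbar = sum(a) / k, sum(b) / k
        cov = sum((x - abar) * (z - bbar) for x, z in zip(a, b))
        va = sum((x - abar) ** 2 for x in a)
        vb = sum((z - bbar) ** 2 for z in b)
        return cov / (va * vb) ** 0.5

    w1 = marginal_weights(G, y)                          # "PGS1"
    score = [sum(g * w for g, w in zip(row, w1)) for row in G]
    cutoff = sorted(score)[int((1 - tail_frac) * n)]
    tail = [i for i, s in enumerate(score) if s >= cutoff]
    w2 = marginal_weights([G[i] for i in tail],
                          [y[i] for i in tail])          # "PGS2", tail only
    return corr(w1, w2)
```

In this toy world, with no epistasis or gene-environment interaction baked in, the two weight vectors come out strongly correlated; the interesting empirical question is whether real cohorts behave the same way.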
Right--the further issue being that for alignment, you have to understand minds, which are, to a significant extent, intrinsically holistic: studying some small fraction of a mind will always leave out the overarching mentality of the mind. Cf. https://tsvibt.github.io/theory/pages/bl_24_12_02_20_55_43_296908.html . E.g., several different mental elements have veto power over actions and self-modifications, and can take actions and make self-modifications. If you leave out several of these, your understanding of the overall dynamic is totally off the rails / totally divergent from what a mind actually does.
Cf. https://www.lesswrong.com/posts/Ht4JZtxngKwuQ7cDC/tsvibt-s-shortform?commentId=koeti9ygXB9wPLnnF
No, we need actual words for concepts. It's important to have words specifically.
So, as a field, we don't have to be happy with the dominant paradigm. But just because we're not happy with it doesn't mean it's not there.
Um, ok fine, so what alternative term do you propose to replace "pre-paradigmatic" as it is currently used, to indicate that there's no remotely satisfactory paradigm in which to get going on the parts of the field-to-be that really matter?
In this interview, at the linked time: https://www.youtube.com/watch?v=HUkBz-cdB-k&t=847s
Terence Tao describes the notion of an "obstruction" in math research. I think part of the reason that AGI alignment is in shambles is that we haven't integrated this idea enough. In other words, a lot of researchers work on stuff that is sort-of known to be unable to address the hard problems.
(I give some obstruction-ish things here: https://tsvibt.blogspot.com/2023/03/the-fraught-voyage-of-aligned-novelty.html)
E.g. "...and it seems like there could exist good paradigms for this area, and we probably want good paradigms for this area, and our current work in this area ought to be shaped by the fact that we're pre-paradigm, and...."