Why? A human body without a meaningful nervous system inside of it isn't a morally relevant entity, and it could be used to save people who are morally relevant.
Isn't that what I just said? Not sure whether or not we disagree. I'm saying that if you just stunt the growth of the prefrontal cortex, maybe you can argue that this makes the person much less conscious or something, but that's not remotely enough to keep this from being abhorrent with non-negligible probability; whereas if you prevent almost all of the CNS from growing in the first place, maybe this is preferable to xenochimeric organs.
If I imagine myself growing up blind, and then I learned that my parents had engineered my genome that way, I would absolutely see that as a boundary violation and a betrayal of bedrock civility.
Fair enough, I think I would too. As I argued in the article, this is one mechanism by which the long-term results of genomic liberty are supposed to be good: children whose parents made genomic choices that weren't prohibited but maybe should have been, can speak out, both to convince other people to not make those choices, and to get new laws made.
But if you mean something stronger, "would turn down sight if it were offered for free", it seems obvious to me that any blind person expressing that view has something seriously wrong in their head in addition to the blindness,
Ok. And does this opinion of yours cause you to believe that if given the chance, you ought to use state power to, say, involuntarily sterilize such a person?
We don't let adults abuse children in any other way, even if the adult was subject to the same sort of abuse as a child and says they approve of it.
We do let adults coerce their children in all sorts of ways. It's considered bad to not force your child to attend school, which causes very many children significant trauma, including myself. Corporal punishment of children is legal in the US. I think it's probably quite bad for parents to do that, but we don't prohibit it.
We may have uncertainty about whether a particular person we can conceptualize will actually come to exist in the future, but if they do come to exist in the future, then they aren't hypothetical even now.
I agree with this morally, but not as strongly in ethical terms, which is why I listed it under ethics (maybe politically/legally would have been more to the point though).
So it absolutely does make sense to have laws to protect future people just as much as current people.
Not just as much, no, I don't think so. Laws aren't about making things better in full generality; they're about just resolution of conflict, solving egregious collective action problems, protecting liberty from large groups--stuff like that.
Blind people don't strike me as a "type of person" in the relevant sense. A blind person is just a person who is damaged in a particular way, but otherwise they are the same person they would be with sight.
That's nice. I bet we could find lots of examples of people with some condition that you would argue should be prohibited from propagating in this way, and who you'd describe as "just a person who is damaged in a particular way", and who would object to the state imposing itself on their procreative liberty. Are you disagreeing with this statement? Or are you saying that the state should impose itself anyway?
Such people are monsters. They are the enemy. Depriving them of the power to effectuate their goals is a moral crusade worth making enormous sacrifices for.
Ok. So to check, you're saying that a world with far fewer total blind / deaf / dwarf people, and with far greater total health and capability for nearly literally everyone including the blind / deaf / dwarfs, is not worth there being a generation of a few blind kids whose parents chose for them to be blind? That could be your stance, but I want to check that I understand that that's what you're saying. If so, could you expand? Would you also endorse forcibly sterilizing currently living people with high-heritability blindness, who intend to have children anyway?
If you are concerned about the politics of advancing genetic engineering, suggesting that it might be ok seems like a blunder.
Not sure what you mean by "ok" here. I would strongly encourage parents to not make this decision, I'd advocate for clinics to discourage parents from making this decision, I wouldn't object to professional SROs telling clinicians to not offer this sort of service, and possibly I'd advocate for them to do so. I don't think it's a good decision to make. I also think it should not be prohibited by law.
I predict any reasonable cost-benefit analysis will find that intelligence and health and high happiness-set-point are good, and blindness and dwarfism are bad.
This is irrelevant to what I'm trying to communicate. I'm saying that you should doubt your valuations of other people's ways of being--NOT so much that you don't make choices for your own children based on your judgements about what would be good for them and for the world, or advocate for others to do similarly, but YES so much that you hesitate quite a lot (like years, or "I'd have to deeply investigate this from several angles") before deciding that we (the state) ought to use state force to impose our (some political coalition's) judgements about costs and benefits of traits on other people's reproduction.
I think good arguments for "protection of genomic liberty for all" exist, but I don't think "there are no unambiguous good directions for genomes to go" is one of them.
I think it is a good argument. Since it's ambiguous, and it's not an interpersonal conflict, and there are (at least potentially) people with a strong interest in both directions for their own children, the state should be involved as little as is reasonable. This is a policy about which I think it would be more truthful to say "a world following this policy ought to be desirable, or at least not terribly objectionable, to the great majority of citizens".
If you don't protect people's propagative liberty, some people will have good reason to strongly object to that world.
If you do protect people's propagative liberty, some other people might believe they have good reason to strongly object. I discuss at least one acknowledged exception to the proposed protection here: https://www.lesswrong.com/posts/rxcGvPrQsqoCHndwG/the-principle-of-genomic-liberty#Propagative_liberty
But I'm arguing to those people that their objection should not be so strong that they ought to fight to prohibit, by law, this sort of propagative liberty.
Excellent! Thank you for researching and writing up this article.
A few notes, from my discussion with Morpheus:
Thanks.
If one is going to create an organ donor, removing consciousness and self-awareness seems essential.
If you can't do it without removing almost all of the nervous system, I think it would be bad!
These are all worth doing if we can figure out how.
Possibly. I think all your examples are quite alarming in part because they remove a core aspect. Possibly we could rightly decide to do some of them, but that would require much more knowledge and deliberation. More to the point: I'm not making a strong statement like "prohibit these uses". I'm making a weaker statement: "Genomic liberty doesn't really protect these uses, in the way it does protect propagation, beneficence, etc.". In other words, I'm just saying that those uses aren't in the territory that the principle of genomic liberty is trying to secure as its purview, as I'm proposing it.
On the negative externalities question, I actually strongly disagree with the counterexample of a blind couple choosing to blind their child. That's child abuse! It's no different than gouging your child's eyes out! Don't allow that!
I agree that it's a tough case, like several others of that type. I think there are both moral and ethical/political differences from harming a living child, though. Some moral differences:
An ethical difference:
A political difference:
A moral philosopher might also argue that it's less "person harming" to make an alteration before the child has begun growing, though I'm not sure what that's intended to mean.
(BTW I think your asking about entanglement sequencing caused me, a few days later, to realize that for chromosome selection, you can do at least index sensing by taking 1 chromosome randomly from a cell, and then sequencing/staining the remaining 22 (or 45), and seeing which index is missing. So thanks :) )
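The index-sensing idea above is, at bottom, a set difference: if you know which indices survive, the taken chromosome's index is whatever is missing. A toy simulation, purely illustrative (the function name and setup are mine, not from any real protocol):

```python
# Toy simulation of index sensing for chromosome selection: take one
# chromosome at random from a cell, observe the indices of the remaining
# ones (via sequencing/staining), and infer the taken index by set difference.
import random

def sense_taken_index(all_indices):
    """Simulate taking one chromosome and inferring its index from what remains."""
    taken = random.choice(sorted(all_indices))
    remaining = set(all_indices) - {taken}      # what sequencing/staining reveals
    inferred = (set(all_indices) - remaining).pop()  # the one missing index
    return taken, inferred

# 23 chromosome indices (one haploid set); 46 for the full diploid set.
indices = set(range(1, 24))
taken, inferred = sense_taken_index(indices)
assert inferred == taken
```

Of course the real difficulty is entirely in the wet-lab step of reading out the remaining indices; the inference itself is trivial.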
IMO a not yet fully understood but important aspect of this situation is that what someone writes is in part testimony--they're asserting something that others may or may not be able to verify themselves easily, or even at all. This is how communication usually works, and it has goods (you get independent information) and bads (people can lie/distort/troll/mislead). If a person is posting AIgen stuff, it's much less so testimony from that person. It's more correlated with other stuff that's already in the water, and it's not revealing as much about the person's internal state--in particular, their models. I'm supposed to be able to read text under the presumption that a person with a life is testifying to the effect of what's written. Even if you go through and nod along with what the gippity wrote, it's not the same. I want you to generate it yourself from your models so I can see those models, I want to be able to ask you followup questions, and I want you to stake something of the value of your word on what you publish. To the extent that you might later say "ah, well, I guess I hadn't thought XYZ through really, so don't hold me to account for having apparently testified to such; I just got a gippity to write my notions up quickly", then I care less about the words (and they become spammier).
If there are some skilled/smart/motivated/curious ML people seeing this, who want to work on something really cool and/or that could massively help the world, I hope you'll consider reaching out to Tabula.
I chatted with Michael and Ammon. This made me somewhat more hopeful about this effort, because their plan wasn't on the less-sensible end of what I uncertainly imagined from the post (e.g. they're not going to just train a big very-nonlinear map from genomes to phenotypes, which by default would make the data problem worse not better).
I have lots of (somewhat layman) question marks about the plan, but it seems exciting/worth trying. I hope that if there are some skilled/smart/motivated/curious ML people seeing this, who want to work on something really cool and/or that could massively help the world, you'll consider reaching out to Tabula.
An example of the sort of thing they're planning on trying:
1: Train an autoregressive model on many genomes as base-pair sequences, both human and non-human. (Maybe upweight more-conserved regions, on the theory that they're conserved because under more pressure to be functional, hence more important for phenotypes.)
1.5: Hope that this training run learns latent representations that make interesting/important features more explicit.
2: Train a linear or linear-ish predictor from the latent activations to some phenotype (disease, personality, IQ, etc.).
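A toy sketch of step 2, under loud assumptions: the step-1 pretrained model doesn't exist here, so a fixed random projection of one-hot-encoded sequences stands in for its latent activations, and the phenotype is synthetic. The point is only the shape of the pipeline: latents in, linear (here ridge) predictor out.

```python
# Sketch of step 2 of the plan: fit a linear predictor from latent
# activations to a phenotype. A fixed random projection stands in for the
# (not-yet-existing) pretrained autoregressive genome model's latents.
import numpy as np

rng = np.random.default_rng(0)

n_people, seq_len, latent_dim = 200, 50, 16
genomes = rng.integers(0, 4, size=(n_people, seq_len))   # A/C/G/T coded 0..3

# Stand-in "encoder": one-hot the sequence, then a fixed random projection.
onehot = np.eye(4)[genomes].reshape(n_people, -1)        # (n, seq_len*4)
projection = rng.normal(size=(onehot.shape[1], latent_dim))
latents = onehot @ projection                            # (n, latent_dim)

# Synthetic phenotype: a linear function of the latents plus noise.
true_w = rng.normal(size=latent_dim)
phenotype = latents @ true_w + rng.normal(scale=0.1, size=n_people)

# The linear-ish predictor: closed-form ridge regression, latents -> phenotype.
lam = 1.0
w = np.linalg.solve(latents.T @ latents + lam * np.eye(latent_dim),
                    latents.T @ phenotype)
pred = latents @ w
r = np.corrcoef(pred, phenotype)[0, 1]
print(f"in-sample correlation: {r:.3f}")
```

The hope in step 1.5 is exactly that real latents make real phenotypes nearly linear in this way; the sketch just shows that when they do, a simple ridge probe recovers the signal.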
IDK if I expect this to work well, but it seems like it might. Some question marks:
More generally, I'm excited about someone making a concerted and sane effort to try putting biological priors to use for genomic predictions. As a random example (which may not make much sense, but to give some more flavor): Maybe one could look at AlphaFold's predictions of protein conformation with different rare genetic variants that we've marked as deleterious for some trait. If the predictions are fairly similar for the different variants, we don't conclude much--maybe this rare variant has some other benefit. But if the rare variant makes AlphaFold predict "no stable conformation", then we take this as some evidence that the rare variant is purely deleterious, and therefore especially safe to alter to the common variant.
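The AlphaFold example above amounts to a small decision rule. A sketch, with everything stubbed: `looks_purely_deleterious`, the threshold, and the scores are all hypothetical names and values of mine; a real pipeline would get pLDDT-like confidences from an actual folding model rather than fixed numbers.

```python
# Hypothetical decision rule from the AlphaFold example: compare a structure
# predictor's confidence for the common vs. rare variant of a protein.
# Scores here are stubbed stand-ins for a folding model's confidence output.

STABILITY_THRESHOLD = 50.0  # assumed pLDDT-like cutoff for "stable conformation"

def looks_purely_deleterious(common_score: float, rare_score: float) -> bool:
    """Flag the rare variant as especially safe to edit back to the common
    variant only when the common variant folds confidently and the rare
    variant does not ("no stable conformation")."""
    return common_score >= STABILITY_THRESHOLD and rare_score < STABILITY_THRESHOLD

# Stubbed scores for illustration:
assert looks_purely_deleterious(common_score=85.0, rare_score=30.0)       # flag: likely purely deleterious
assert not looks_purely_deleterious(common_score=85.0, rare_score=80.0)   # similar confidences: inconclusive
```

As the paragraph says, the "similar confidences" branch concludes nothing either way: the rare variant might carry some other benefit.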
Something I'd like WBE researchers to keep in mind: It seems like, by default, the cortex is the easiest part to get a functionally working quasi-emulation of, because it's relatively uniform (and because it's relatively easier to tell whether problem solving works compared to whether you're feeling angry at the right times). But if you get a quasi-cortex working and not all the other stuff, this actually does seem like an alignment issue. One of the main arguments for alignment of uploads would be "it has all the stuff that humans have that produces stuff like caring, love, wisdom, reflection". But if you delete a bunch of stuff including presumably much of the steering systems, this argument would seem to go right out the window.
Not sure why you're saying "causality" here, but I'll try to answer: I'm trying to construct an agreement between several parties. If the agreement is bad, then it doesn't and shouldn't go through, and we don't get germline engineering (or get a clumsy, rich-person-only version, or something).
Many parties have worries that route through game-theory-ish things like slippery slopes, e.g. around eugenics. If the agreement involves a bunch of groups having their reproduction managed by the state, this breaks down simple barriers against eugenics. I suppose you might dismiss such worries, but I think you're probably wrong to do so--there is actually significant overlap between your apparent stances and the stances of eugenicists, though arguably there's a relevant distinction in that you're thinking of harm to children rather than social burdens, not sure. The overlap is that you think the state should make a bunch of decisions about individuals' reproduction according to what the state thinks is good, even if the individuals would strongly object and the children would have been fine.
So, first of all, I'm just not sufficiently sure that it's wrong to make your future child blind. I think it's wrong, but that's not a good enough reason to impose my will on others. Maybe in the future we could learn more such that we decide it is wrong, but I don't think that's happened yet. But if we're talking about forcibly erasing a type of person, it's not remotely enough to be like "yeah I did the EV calculation, being my way is better". For reference, certainly the state should prevent a parent from blinding their 5 year old; but the 5 year old is now a person. I acknowledge that the distinction is murky, but I think it's silly to ignore the distinction. Being already alive does matter.
Second of all, it's not just blind people. It's all the categories I listed and more. Are you going to tell gay people that they can't make their future child gay? Yeah? No? What about high-functioning autists? ADHD? Highly creative, high-functioning mild bipolar? How are you deciding? What criterion? Do you trust the state with this criterion? Should other people?