The Doomsday Argument is premised on the idea that one should reason as if one were randomly selected from the set of all observers who actually exist (past, present, and future) in one's reference class. If we treat our birth rank as a random draw from that set, then we should expect a doomsday event relatively soon.
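To make the birth-rank reasoning concrete, here is a minimal Bayesian sketch (the population figures are illustrative assumptions of mine, not part of the argument as usually stated). Let $N$ be the total number of humans who will ever live and $n \approx 10^{11}$ our own birth rank; self-sampling says $P(n \mid N) = 1/N$ for $n \le N$. Comparing a "doom soon" hypothesis $N_s = 2 \times 10^{11}$ against a "doom late" hypothesis $N_l = 2 \times 10^{14}$ at equal prior odds:

$$\frac{P(N_s \mid n)}{P(N_l \mid n)} = \frac{P(n \mid N_s)}{P(n \mid N_l)} = \frac{1/N_s}{1/N_l} = \frac{N_l}{N_s} = 1000.$$

Our early birth rank is far more likely if only a few hundred billion humans ever live, which is where the "expect doomsday soon" conclusion comes from.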
However, it seems we should restrict the reference class to observers who can reason about anthropics, or perhaps to those who actually do, or even to observer-moments (OMs) spent reasoning about anthropics. On that framing, the predicted "doom" need not be extinction: perhaps it is not AI killing us but AI enlightening us about our future or about some central question of humanity. The number of OMs spent reasoning about anthropics could simply fall because, with superintelligence, nobody feels the need to do it themselves; all these sorts of questions are better answered through AI.
I use the term "genetic enhancement." I think it's good to be thinking about framing around this issue. I considered calling it "harmless eugenics" in response to the accusation. In my view, "epilogenetics" evokes thoughts of eugenics more than "genetic enhancement" does.
The reason I support a lot of epilogenetics is that it's eugenic. If a couple chose to pick the embryo they expected to suffer the most in life, is that epilogenetics? Do you support that? I don't. Maybe that term is a little misleading.
I think Widen et al. (2022) uses actual sibling pairs/trios (unless I'm misreading?), while a few other studies, such as Lencz et al. (2021) [1] and Turley et al. (2021) [2], use simulated embryos.
[1] "Utility of polygenic embryo screening for disease depends on the selection strategy"
[2] "Problems with Using Polygenic Scores to Select Embryos"
I think that morality is objective in the sense that you mentioned in paragraph one. I think it also has the feature you describe in paragraph two, but that isn't the definition of "objective" in my view; it is merely a consequence of the fact that we have moral intuitions.
Yes, you can get new information about morality that contradicts your current standpoint. But it could never tell me something truly objectionable, such as that baby torture is acceptable, because I am actually, factually correct about baby torture.
> I don't think morality is objective, but I still care greatly about what a future Holden - one who has reflected more, learned more, etc. - would think about the ethical choices I'm making today.
I think that an ethical theory that does not hold baby torture to be objectively wrong is flawed. If there is no objective morality, then reflection and learning cannot guide us toward any sort of "correct" evaluation of our past actions. And I don't think holding this preference turns an anti-realist position into a quasi-realist one. Is it any less realist to think murder is okay but avoid it because we are worried about judgement from others? It seems like anti-realism plus a preference. There is already a definition of "quasi-realism" which seems different from yours, unless I'm misunderstanding you [1].
> I mean specifically to point to the changes that you (whoever is reading this) consider to be progress, whether because they are honing in on objective truth or resulting from better knowledge and reasoning or for any other good reason. Future-proof ethics is about making ethical choices that will still look good after your and/or society's ethics have "improved" (not just "changed").
Reasoning helps us get closer to truth. But if there are no moral facts, then it is not really reasoning, and the knowledge is useless. Imagine I said that I do not believe ghosts exist, but I want to be sure that when I look back on my past opinions about ghosts, they turn out to be correct. I want this because I expect to learn much more about the qualities and nature of ghosts and to become a much more knowledgeable person. The problem is that ghosts have no qualities or nature, because they do not exist.
You wanted to use future-proof ethics as a meta-ethical justification for selecting an ethical system, if I recall your original post correctly. My point about circularity was that if I'm using the meta-ethical justification of future-proofing to pick my ethical system, I can't then justify future-proofing with that very same ethical system. The whole concept of progress, whether individual, societal, or objective, relies on a measure of morality to determine what counts as progress. I don't have that ethical system if I haven't used future-proofing to justify it, and I don't have future-proofing unless I already have some ethical truths.
Imagine I want to come up with a good way of determining if I am good at math. I could use my roommate to check my math. How do I know my roommate is good at math? Well, in the past, I checked his math and it was good.
I am a moral realist. I believe there are moral facts. I believe that, through examination of evidence and of our intuitions, we become a more moral society, generally speaking. I therefore think the future will be more ethical, and that future-proof choices and ethical choices will correlate somewhat. I believe this because I believe that I can, in the present, determine ethical truths via my intuition. This is better than future-proofing because it gets at the heart of what I want.
How would you justify your "quasi-realist" position? You want future Holden to look back on you. Why? Should others hold this preference? What if I wanted past Parrhesia to respect future Parrhesia? Should I weigh this more than future Parrhesia respecting past Parrhesia? I don't think this is meta-ethically justified. Can you really say there is nothing objectively wrong with torturing a baby for sadistic pleasure?
[1] see: https://en.wikipedia.org/wiki/Quasi-realism
I think aturchin had a similar idea where people simply lose interest in the DA.