What's your point?
First part (from computers to molecular biology): I was explaining why "AGI ... arbitrary atomic manipulation nanotech ... transhuman life extension" are now likely, in a way that wasn't in the 16th (or the 6th) century.
Second part: I'm trying to wake up your sense of change! You didn't answer Aris when he asked you where you think 21st-century progress will stop. Do you think the human race can understand the causality of the atom, the gene, and the brain, and then only apply that knowledge superficially? Chemists routinely apply their understanding of how atoms interact to create molecules that have never existed in nature, and that is the future of life and intelligence too: living things and thinking things designed from the molecular level up, having only broad structural properties in common with their natural prototypes.
You did say all this won't happen for "the foreseeable future". So maybe you just mean it's an affair of the year 3000, but not the year 2050. Let's try to pin this down. Consider a scenario for the future solar system where most of it is inhabited by artificial life and artificial intelligence. In some places it's still based on DNA, in some places it's all solid-state. But there are many inhabited worlds, with their own chemical ecosystems and nonhuman cultural histories. Do you consider such a future flatly impossible? Possible but unlikely? Likely but irrelevant to this discussion?
For a very long time I've cared a great deal about the preferences of my past selves.
Rules I established in childhood became sacred, much like laws (I can't find the Sequences post in which Yudkowsky is amazed that some things are considered good just because they are old), and that led to some interesting, unusual life choices, such as never wearing formal shoes or suits.
I was spending more and more time doing what my previous selves thought I should; in a sense, I was composed mostly of something akin to what Anna Salamon and Steve Rayhawk called Cached Selves.
That meant more dedication to long-term issues (longevity, cryonics, immortality), and more dedication to spatially vast issues (the Singularity, x-risk, transhumanism).
It meant less dedication to the parts of myself with a shorter lifespan, such as the instantaneous gratification of the philosophical traditions of the East (Buddhism, Hinduism) and some hedonistic traditions of the West (psychedelism, selfish instantaneous hedonism, sex-and-masturbation-ism, drugs-ism, thrill-ism).
It also meant less dedication to time spans such as three months: personal projects visible, completable, and doable at that scale.
This process of letting your past decisions trump your current decisions/feelings/emotions/intuitions was very fruitful for me, and for a long time I thought (and still think) it made my life better than the lives of most people around me (schoolmates, university peers, theater friends, etc.; not necessarily the people I chose to hang out with, since, after all, I selected those!).
More recently (and I'm afraid the same might happen to the Effective Altruist community and the immortalist community of Less Wrong), I started feeling overwhelmed, a slave of "past me", even though many of past-me's orders were along the lines of "maximize other people's utility; help everyone the most, regardless of what those around you are doing".
Then the whole edifice crumbled, and I took two days off from all of life to go to a hotel in the woods and think and write alone, to figure out what my current values are.
I wrote several pages and thought about a lot of things. Most importantly, I quantified the importance I give to different time-spans of my self (say, 30 points to life-goals, 16 points to instantaneous gratification, 23 points to three-month goals, etc.). I also quantified differently sized circles of altruism/empathy (X points for immediate family, Y points for extended family, Z points for near friends, T points for smart people around the globe, U points for the bottom billion, K points for aliens, A points for animals, etc.).
Knowing my past commitment to past selves, I expected these new quantified regulatory forces I had just created to take me over, and to cause me to spend my time in proportion to their now-known quantities. In other words, I allowed myself a major change, a rewriting that dug deeper into my source code than previous rewritings, and I expected the consequences to be of the same kind as those of the previous rewritings.
It seems I was wrong. I've become unstable. Trying to give an outside description of the algorithm as it feels from the inside: the natural order of attention allocation which I had, like a blacksmith, annealed over the years has crumbled. Instead, I find myself prone to an evolutionary fight between several distinct desires of internal selves; a mix of George Ainslie's picoeconomics and plain neural Darwinism/multiple drafts.
Such instability, if for no other reason than hormonal ones, is bound not to last long. But thus far it has carried me into existentialism audiobooks, into considering a vagabonding lifestyle as an alternative to a utilitarian one, and into considering a personality dissolution: letting one's personality dissolve, emotionally, into whatever is left, and reforge itself.
The instability doesn't cause anxiety, sadness, fear, or any other negative emotion (though I'm at the extreme tail of the happiness set-point distribution: the equivalent, in happiness, of an IQ of 145, three standard deviations above the mean). Contrariwise, it is refreshing and gives a sense of freedom and choice.
This post can be taken to be several distinct things for different readers.
1) A warning for utilitarian-lifestyle people that allowing deep changes causes an instability which you may not want to let your future self undergo.
2) A tale of a self freed from enslavement to its past (if only for a short period of time), who is feeling well and relieved and open to new experiences. That is, a kind of unusual suggestion for unusual people who are at an unusual time in their lives.
(Note: because of the unusual set-point thing, positive psychology advice should be discarded as a basis for arguments; I've already reached ~0 marginal returns after 2,000 pages of it.)
3) This was the original intention of writing: I want to know the arguments in favor of a selfish vagabonding lifestyle versus the arguments in favor of a utilitarian lifestyle, because this is a particularly open-minded moment in my life and I feel less biased than at most other times. For the next semester, assume money is not an issue (both the vagabond and the utilitarian lifestyles are cheap, as opposed to "you have a million dollars"). So, what arguments would you use to decide this yourself?