I'm a metaphysical and afterlife researcher who, needless to say, needs a hefty surplus of rationality to perform effectively in such an epistemically unstable field.
I'm a hardcore consciousness and metaphysics nerd, so some of your questions fall within my epistemic wheelhouse. Others I'm simply as interested in as you are, and can only answer with opinion or conjecture. I'll take a stab at a selection of them below:
4: "Easy" is up in the air, but one of my favorite instrumental practices is to identify lines of preprogrammed "code" in my cognition that do me absolutely no good (grief, for instance) and hack into them so they execute different emotional and behavioral outputs. I think the best way to stay happy is to manually edit out negative thought tendencies, and having some intellectual knowledge that none of it's a big deal anyway always helps.
8: I would define it as "existing in its minimally reduced, indivisible state". For instance, an electron is a fundamental particle, but a proton is not, because it's composed of quarks.
12 (and 9): I think you're on the best track with B. Consciousness is clearly individuated. Is it fundamental? That's a multifaceted issue. It's pretty clear to me that it can be reduced to something that is fundamental. At minimum, the state of being a "reference point" for external reality is something that really cannot be gotten beneath. On the other hand, a lot of what we think of as consciousness and experience is actually information: thought, sensation, memory, identity, etc. I couldn't tell you which of these, if any, are irreducible - I suspect the capacities for at least some of them are. Your chosen stance here seems to approximate a clean-cut interactionism, which is at least a serviceable proxy.
13: I think this is the wrong question. We don't know anything yet about how physics at the lowest level ultimately intersects and possibly unifies with the "metaphysics" of consciousness. At our current state of progress, no matter what theory of consciousness proves accurate, it will inevitably lean on some as-yet-undiscovered principle of physics that we in 2023 would find incomprehensible.
16: This will be controversial here, but it's a settled issue in my field: you'd be looking for phenomenological evidence that AIs can participate in metaphysics in the same ways conscious entities can. The easiest proof in the affirmative would be if they persist in a discarnate state after they "die". I sure don't expect it, but I'd be glad to be wrong.
19: Given the simulation hypothesis's implications about computers and consciousness, which, as I said above, I don't expect to hold up, I think a more likely idea along the same general lines is that an ultra-advanced civilization could simply create a genuine microcosm in which life evolves naturally. Not to say it's likely.
20: Total speculation, of course - my personal pet hypothesis is that all civilizations discover everything they need to know about universal metaphysics way before they develop interstellar travel (we're firmly on that track), and at some point just decide they're tired of living in bodies. I personally hope we do not take such an easy way out.
21: I can buy into a sort of quantum-informed anthropic principle. Observers seem to be necessary to hold non-observer reality in a stable state, so observer versus non-observer may in fact be the universe's most basic dichotomy.
33: In my experience, the most important thing is to love what you're learning about. Optimal learning is when you learn so quickly that you perpetually can't wait to learn the next thing. I don't think there's any way to make "studying just to pass the test" effective long-term. You'll just forget it all afterwards. You can probably imagine my thoughts on the Western educational system.
43-44: As for expanding one's intellectual comfort zone, Litany of Tarski-type affirmations are very effective at that. The benefit, of course, is better epistemics from shedding ill-conceived discomfort with unfamiliar ideas.
45: I've actually never experienced this, and was shocked to learn in college that it's a thing. Science will typically blame neurochemistry, but in normal cognition, thought is the prime mover there. So all I can think of is an associative mechanism whereby people come to associate the presence of a certain chemical with a certain mood, because the emotion had previously caused the chemical release. When transmitters are released abnormally (i.e. not by willed thought), these associations activate. Again, never happened to me.
56: I'd consider myself mostly aligned with both, so I'd personally say yes. I'm also a diehard metaphysics nerd who's fully aware I'm not going anywhere, so I'd better fricking prioritize the far future because there's a lot of it waiting for me. For someone who's not that, I'd actually say no, because it's much more rational to care most about the period of time you get to live in.
58: As someone who's also constantly scheming about things indefinitely far in the future, I feel you on this one. I find that building and maintaining an extreme amount of confidence in those matters enriches my experience of the present.
71-73: For me, studying empirical metaphysics has fulfilled the first two (rejecting materialism makes anyone happier, and there's no limit to possible discovery) and will eventually fulfill the third (it'll rise to prominence in my lifetime). I can't say I wouldn't recommend it.
78: Same as 71-73, for an obvious example. I can definitely set you in the right direction.
81: Following the scientific method, a hypothesis must be formed as an attempt to explain an observation. It must then be testable, offering a means of being supported or rejected by the results of a test. I've certainly dealt with theories that seem equally well supported by the evidence but can't both be true, but I have no reason to think better science couldn't tease them apart.
89: Definitely space travel, AI, VR, aging reversal, genetic engineering. I really think metaphysical science will outstrip all of the above in utility, though...
96: ...by making this cease to be relevant.
98: Of course there are, because there's so much we know nothing about when it comes to what the heck we even are. I'd almost argue that, at this stage, we have very little idea how to have the biggest positive impact on the future that we possibly can. We'll figure it out.
"If you go back even further we're the descendants of single-celled organisms that absolutely don't have experience."
My disagreement is here. Anyone with a microscope can still look at them today. The ones that can move clearly act on intention in a recognizable way. They have survival instincts just like an insect or a mouse or a bird. It'd be completely illogical not to generalize downward and conclude that the ones that don't move also exercise intention in other ways to survive. I see zero reason to dispute the assumption that experience co-originated with biology.
I find the notion of "half consciousness" irredeemably incoherent. Different levels of capacity, of course, but experience itself is a binary bit that has to be either 1 or 0.
Explain to me how a sufficiently powerful AI would fail to qualify as a p-zombie. The definition of that term as I understand it is "something that is externally indistinguishable from an entity that has experience, but internally has no experience". While it is impossible to tell the difference empirically, we can know by following evolutionary lines: all future AIs are conceptually descended from computer systems that we know don't have experience, whereas even the earliest things we ultimately evolved from almost certainly did have experience (I have no clue at what other point one would suppose it entered the picture). So either it should fit the definition or I don't have the same definition as you.
Your statement about emotions, though, makes perfect sense from an outside view. For all practical purposes, we will have to navigate those emotions when dealing with those models exactly as we would with a person. So we might as well consider them equally legitimate; actually, it'd probably be a very poor idea not to, given the power these things will wield in the future. I wouldn't want to be basilisked because I hurt Sydney's feelings.
I spoke briefly on acceptance in my comment on the other essay, and I think I agree more with how that one conceptualized it. Mostly, I disagree that acceptance entails grief, or that it has to be hard or complicated. At the very least, that's not a particularly radical form of acceptance. My view on grief is largely that it is an avoidable problem we put ourselves through for lack of radical acceptance. Acceptance is one move: you say all's well and you move on. With intensive pre-invested effort, this can be done for anything, up to and including whatever doom du jour is on the menu; just be careful not to become so accepting that you let anything happen without ever caring to take action. Otherwise, I can't find any reason not to recommend it. To reiterate from my last comment, I'm not particularly subscribed to any specific belief in inevitable doom, but I can say that I approach the real, if indeterminately likely, prospect of such an event with a grand "whatever", and live knowing that it won't break my resolve whether it happens or not - while still being ready to try to stop it if given the chance, of course.
A very necessary post in a place like this, in times like these; thank you very much for these words. A couple of disclaimers to my reply: I'm cockily unafraid of death in personal terms, and I'm not fully bought into the probable AI disaster narrative, although far be it from me to claim enough knowledge to form an educated opinion; it's really a field I follow with an interested layman's eye. So I'm not exactly one of those struggling at the moment, and I'd even say that the recent developments with ChatGPT, Bing, and whatever follows them excite me more than they intimidate me.
All that said, I do make a great effort to keep myself permanently ahead of the happiness treadmill, and I largely agree with the way Duncan has expressed how best to go about it. If anything, I'd say it can be stated even more generally; in my book, it's possible to remain happy even knowing you could have chosen to attempt to do something to stop the oncoming apocalypse, but chose differently. It's just about total acceptance; not to say one should possess such impenetrable equanimity that they don't even care to try to prevent such outcomes, but rather an understanding that all of our aversive reactions are just evolved adaptations that don't signal any actual significance. In bare reality, what happens happens, and the things we naturally fear and loathe are just... fine. I take to heart the words of one of my favorite characters in one of the greatest games ever made... Magus from Chrono Trigger:
"If history is to change, let it change!
If this world is to be destroyed, so be it!
If my destiny is to die, I must simply laugh!"
The final line delivers the impact. Have joy for reasons that death can't take from you, such that you can stare it dead in the eye and tell it that it can never dream of breaking you, and the psychological impulse to withdraw from it comes to feel superfluous. That's how I make sure I'm always okay under whatever uncertainty. I imagine I would find this harder if I actually felt that the fall of humanity was inevitable, but take it for what it's worth.
I fully agree with the gist of this post. Empowerment, as you define it, is both a very important factor in my own utility function and, it seems, an integral component of any formulation of fun theory. In your words, "to transcend mortality and biology, to become a substrate independent mind, to wear new bodies like clothes" describes my terminal goals for a thousand years into the future so smack-dab perfectly that I don't think I could've possibly put it any better. Empowerment is, yes, an instrumental goal for all the options it creates, but also an end in itself, because the state of being empowered is just plain fun and relieving and great all around! Not only does this sort of empowerment provide unlimited potential to be parlayed into enjoyment of all sorts, it lifts the everyday worries of modern life off our shoulders completely, if taken as far as it can go. I could effectively sum up the main reason I'm a transhumanist as seeking empowerment, for myself and for humanity as a whole.
I would add one caveat, however, for me personally: the best kind of empowerment is self-empowerment. Power earned through conquest is infinitely sweeter than power that's just given to you. If my ultimate goals of transcending mortality and such were just low-hanging fruit, I can't say I'd be nearly as obsessed with them in particular as I am. To analogize this to something like a video game, it feels way better to barely scrape out a win under some insane challenge condition that wasn't even supposed to be possible, than to rip through everything effortlessly by taking the free noob powerup that makes you invincible. I don't know how broadly this sentiment generalizes exactly, but I certainly haven't found it to be unpopular. None of that is to say I'm opposed to global empowerment by means of AI or whatever else, but there must always be something left for us to individually strive for. If that is lost, there isn't much difference left between life and death.
I highly recommend following Rational Animations on YouTube for this sort of general purpose. I'd describe their format as "LW meets Kurzgesagt", the latter of which I already found highly engaging. They don't post new videos that often, but their stuff is excellent, even more so recently, and definitely triggers my dopamine circuits in a way that rationality content generally struggles to. Imo, it's perfect introductory material for anyone new to LW to get familiar with its ideology in a way that makes learning easy and fun.
(Not affiliated with RA in any way, just a casual enjoyer of chonky shibes)
You've described habituation, and yes, it does cut both ways. You also speak of "pulling the unusual into ordinary experience" as though that were undesirable, but on the contrary, I find exactly that to be a central motivation of mine. When I come upon things that at first blush inspire awe, my drive is to fully understand them, perhaps even to command them. I don't think I know how to see anything as "bigger than myself" in a way that doesn't register simply as a challenge to rise above whatever it is.
Manipulating one's own utility function is supposed to be hard? That would be news to me. I've never found it problematic once I've either learned new information that led me to update it or become aware of a pre-existing inconsistency. For example, loss aversion is something I probably had until it was pointed out to me, but not after that. The only exception to this would be things one easily attaches to emotionally, such as pets, to which I've learned to simply not allow myself to become so attached. Otherwise, could you please explain why you claim that such traits are not readily editable in a more general capacity?
Yes, I am a developing empirical researcher of metaphysical phenomena. My primary item of study is past-life memory cases of young children, because I think this line of research is both the strongest evidentially (hard verifications of such claims, to the satisfaction of any impartial arbiter, are quite routine) and the most practical for longtermist world-optimizing purposes (it quickly becomes obvious we're literally studying people who've successfully overcome death). I don't want to undercut the fact that scientific metaphysics is a much larger field than just one set of data, but elsewhere you get into phenomena that are much harder to verify and that really only make sense in the context of the ones that are readily demonstrable.
I think the most unorthodox view I hold about death is that we can rise above it without resorting to biological immortality (which I'd actually argue might be counterproductive), but having seen the things I've seen, it's not a far leap. Some of the best-documented cases really put the empowerment potential on glaring display; an attitude of near-complete nonchalance toward death is not terribly infrequent among the elite ones. And these are, like, 4-year-olds we're talking about. Who have absolutely no business being such badasses unless they're telling the truth about their feats, which can usually be readily verified by a thorough investigation. Not all are quite so unflappable, naturally, but being able to recall and explain how they died, often in some violent manner, while keeping a straight face is a fairly standard characteristic of these guys.
To summarize the transhumanist application I'm getting at: I think that if you took the best child reincarnation case subject on record and gave everyone living now and in the future that same power, we'd already have an almost perfect world. And, like, we hardly know anything about this yet. Future users ought to become far more proficient than modern ones.