If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
I'm having trouble figuring out what to prioritize in my life. In principle, I have a pretty good idea of what I'd like to do: for a while I have considered doing a Ph.D. in a field that is not really high-impact, though not entirely useless either, combining work that is interesting to me personally with, hopefully, a modest salary that I could donate to worthwhile causes.
But it often feels like this is not enough. Similar to what another user posted here a while ago, reading LessWrong and about effective altruism has made me feel like nothing except AI and maybe a few other existential risks is worth focusing on (not even things that I still consider enormously important relative to some others). In principle I could focus on those as well. I'm not intelligent enough to do serious work on Friendly AI, but I probably could transition relatively quickly into machine learning and data science, with perhaps some opportunities to contribute and likely higher earnings.
The biggest problem, however, is that whenever I seem to be on track towards doing something useful and interesting, monumental existential confusion kicks in and my productivity plummets. This is mostly related to thinking about life and death.
EY recently suggested that we should care about solving AGI alignment because of quantum immortality (or its cousins). This is a subject that has greatly troubled me for a long time. Thinking logically, big world immortality seems like an inescapable conclusion from some fairly basic assumptions: if the universe (or multiverse) is big enough, there will always be some branch in which I survive, and subjectively I should expect to keep finding myself in one of those branches. On the other hand, the whole idea feels completely absurd.
Having to take that seriously, even if I don't believe in it 100 percent, has made it difficult for me to find joy in the things that I do. Combining big world immortality with the other ideas about existential risk that are prevalent in the LW memespace sort of suggests that the most likely outcome I (or anybody else) can expect in the long run is surviving indefinitely as the only remaining human, or almost certainly as the only remaining person among those I currently know, probably in increasingly bad health as well.
It doesn't help that I've never been that interested in living for a very long time, the way most transhumanists seem to be. Sure, I think aging and death are problems that we should eventually solve, and in principle I don't have anything against living significantly longer than the average human lifespan, but it's not something I've been very interested in actively seeking, and if there's a significant risk that those many extra years would not be very comfortable, I quickly lose interest. So theories that make this whole death business seem like an illusion are difficult for me. And overall, the idea does make the mundane things that I do now seem even more meaningless. Obviously, this is taking its toll on my relationships with other people as well.
This has also led me to approach related topics a lot less rationally than I probably should. Because of this, I think my estimates of both the severity of the UFAI problem and our ability to solve it have gone up, as has my estimate of the likelihood that we'll be able to beat aging in my lifetime, because those are the things that seem necessary to escape the depressing conclusions I've described.
I'm not good enough at fooling myself, though. As I said, my ability to concentrate on doing anything useful is very weak nowadays. It actually often feels easier to do something that I know is an outright waste of time but that gives me something to think about, like watching YouTube, playing video games or drinking beer.
I would appreciate any input. Given how seriously people here take things like the simulation argument, the singularity or MWI, existential confusion cannot be that uncommon. How do people usually deal with this kind of stuff?
AI•ON is an open community dedicated to advancing Artificial Intelligence. See http://ai-on.org/ for more details.