I'm really curious about the question of how to go about one's life now if you take AI risk seriously. I'm excited to read this article and really liked Zvi's AI: Practical Advice for the Worried. Anyone have other articles/resources/strategies on this topic that they recommend?
I am hearing something related to decoupling my self-worth from whether I choose to act in the face of x-risk (or take any other moral action). Does that sound right?
I feel like this pairs pretty well with the concept of the inner child in psychology, where you basically give your own "inner child", which represents your emotions and very basic needs, a voice and try to take care of it. But on a higher level you still make rational decisions. In this context it would basically be "be your own god" I suppose? Accept that your inner child is scared of x-risk, and then treat yourself like you would a child that is scared like that.
I think this is one of those weird things where social pressure can direct you towards the right thing but corrupt your internal prioritization process in ways that kind of ruin it.
It's kind of interesting how you focus on the difference between inner needs and societal needs. Personally I have never felt a big incentive to follow societal needs, and while I can't recommend that (it does not help mental health), I don't feel the x-risk as much as others. I know it's there, I know we should work against it, and I try to dedicate my work to fighting it, but I don't really think about it emotionally?
I personally think a bit along the lines of "whatever happens happens, I will do my best and not care much about the rest". And for that it's important to properly internalize the goals you have. Most humans' main goal is a happy life somehow. Lowering x-risk is important for that, but so is maintaining a healthy work-life balance, mental health, physical health... They all work towards the big goals. I think that's important to realize on a basic level.
And lastly, two more small questions: what are Wave and planecrash? And how do you define "normie"? I feel like that's kind of a tough term.
I suspect Wave refers to this company: https://www.wave.com/en/ (they are connected to EA)
Planecrash is a glowfic co-written by Yudkowsky: https://glowficwiki.noblejury.com/books/planecrash
I define "normie" as "not of the relevant subculture", which changes depending on the speaker and context
I suppose that could be defined as being further away from the self in their own world view than a certain radius permits? That makes sense. I have mostly seen this term in 4chan texts tbh, which is why I dislike it. I feel like "normie" normally refers to people who are seen as "more average" than oneself, which is a flawed concept in itself, as human properties are too sparse for there to be a meaningful "average" person.
I guess it can be seen as referring to some more specific trait, like world view in terms of x-risk or politics, in which case our two protagonists here care about it more than average and their distance to the mean is quite far. In general I would be careful with the word "normie" tho.
Well, since the category we want to describe here simply does not exist, or is more like the set of people outside your own bubble (a negated set rather than a clearly definable one), there are a few options.
Firstly, maybe just "non-science" person, or "non-AI" person. Defining people by what they are not is also not great tho.
Secondly, we could embrace the "wrongness" of the average person and just say... average person. Still wrong, but at least not negative. And probably the correct meaning gets conveyed, which is not assured with the first one.
The last, probably most correct but also impractical one is to simply name what aspect you refer to. In this case probably "people who do not follow x-risks" would be most accurate.
But I despise getting told what to call certain groups because someone could get a bit butthurt, so personally I stick with "average person" - just with the knowledge that the average person does not exist, and if I think the other person doesn't know that, I convey it.
This is a dialogue Elizabeth and I wrote about our journeys trying to live well while caring about the world not ending. We used a chess timer to govern who was talking at a given time and to keep the pace snappy and time-boxed.
We talk about separating whether we were worthy of existing from whether we were helping with x-risk, relating to non-x-risk culture, and what concrete things are part of our conception of the good life.