It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions floating around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change, and as long as the machines are pretty near the mark on rewiring your brain it will make everything amazing”. Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant, would want a lot of guarantees about the preservation of various very specific things they care about in life, and would not just say “oh sure, NYC has higher GDP per capita than my current city, sounds good”.
I read this as a lack of engaging with the situation as real. But possibly my sense that a non-negligible number of people have this flavor of position is wrong.
This isn't clear to me: does every option that involves someone being forcibly mandated to do something qualify as a catastrophe? Conceptually, there seems to be a lot of room between "forced change" and "catastrophe".
I understand the analogy in Katja's post as being: even if the post-AGI world is great, everyone is forced to move there. That world has higher GDP per capita, but it doesn't necessarily contain the specific things people value about their current lives.
Just listing all the positive aspects of living in NYC (even if they're very positive) might not remove all hesitation: I know my local community, my local parks, the beloved local festival that happens in August.
If all diseases have been cured in NYC and I'm hesitant because I'll miss out on the festival, I'm probably not adequately taking the benefits into account. But if you tell me not to worry at all about moving to NYC, you're also not taking all the costs into account, or at least aren't talking in a way that will connect with me.