It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change and as long as the machines are pretty near the mark on rewiring your brain it will make everything amazing”. Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant, and want a lot of guarantees about the preservation of various very specific things they care about in life, and not be just like “oh sure, NYC has higher GDP/capita than my current city, sounds good”.
I read this as a lack of engaging with the situation as real. But possibly my sense that a non-negligible number of people have this flavor of position is wrong.
I'm not as chill as all that, and I absolutely appreciate people worrying about those dimensions. But I do tend to act in day-to-day behavior (and believe, in the average sense (my probabilistic belief range includes a lot of scenarios, but the average and median are somewhat close together, which is probably a sign of improper heuristics)) as if it'll all be mostly-normal. I recently turned down a very good job offer in NYC (and happily, later found a better one in Seattle), so I see the analogy and kind of agree it's a good one, but from the other side: even people who think they'd hate NYC are probably wrong, since hedonic adaptation is amazingly strong. I'll try to represent those you're frustrated with.
There will absolutely be changes, many of which will be uncomfortable and will probably regress from my peak preference. As long as it's not extinction or effective-extinction (a few humans kept in zoos or the like, but economically unimportant to the actual intelligent agents shaping the future), it'll be ... OK. Not necessarily great compared to imaginary utopias, but far better than the worst outcomes. Almost certainly better than any ancient person could have expected.
"value" means "net positive to the beings making decisions that impact me". Humans claim to and behave as if they care about other humans, even when those other humans are distant statistical entities, not personally-known.
The replacement consciousnesses will almost certainly not feel the same way about "legacy beings", and to the extent they preserve some humans, it won't be because they care about them as people; it'll be for more pragmatic purposes. And this is a very fragile thing, unlikely to last more than a few thousand years.