
I guess there’s maybe a 10-20% chance of AI causing human extinction in the coming decades, but I feel more distressed about it than even that suggests—I think because in the case where it doesn’t cause human extinction, I find it hard to imagine life not going kind of off the rails. So many things I like about the world seem likely to be over or badly disrupted with superhuman AI (writing, explaining things to people, friendships where you can be of any use to one another, taking pride in skills, thinking, learning, figuring out how to achieve things, making things, easy tracking of what is and isn’t conscious), and I don’t trust that the replacements will be actually good, or good for us, or that anything will be reversible.

Even if we don’t die, it still feels like everything is coming to an end.


I'm middle-aged now, and a pattern I've noticed as I get older is that I keep having to adapt my sense of what is valuable, because desirable things that used to be scarce for me keep becoming abundant. Some of this is just growing up, e.g. when I was a kid my candy consumption was regulated by my parents, but then I had to learn to regulate it myself. I think humans are pretty well-adapted to that sort of value drift over the life course. But then there's the value drift due to rapid technological change, which I think is more disorienting. E.g. I invested a lot of my youth into learning to use software which is now obsolete. It feels like my youthful enthusiasm for learning new software skills, and comparative lack thereof as I get older, was an adaptation to a world where valuable skills learned in childhood could be expected to mostly remain valuable throughout life. It felt like a bit of a rug-pull how much that turned out not to be the case w.r.t. software.

But the rise of generative AI has really accelerated this trend, and I'm starting to feel adrift and rudderless. One of the biggest changes from scarcity to abundance in my life was that of interesting information, enabled ...

ErickBall
I think our world actually has a great track record of creating artificial scarcity for the sake of creating meaning (in terms of enjoyment, striving to achieve a goal, sense of accomplishment). Maybe "purpose" in the most profound sense is tough to do artificially, but I'm not sure that's something most people feel a whole lot of anyway? I'm pretty optimistic about our ability to adapt to a society of extreme abundance by creating "games" (either literal or social) that become very meaningful to those engaged in them.
lc

Thank you for saying this. I have tried several times to explain something like it in a post, but I don't think I have the writing skill to convey effectively how deeply distressed I am about these scenarios. It's essential to my ability to enjoy life that I be useful, have political capital, can effect meaningful change throughout the world, can compete in status games with others, can participate in an economy of other people like me, and can have natural and productive relationships with unartificial people. I don't understand at all how I'm supposed to be excited by the "good OpenAI ending" where every facet of human skill and interaction gets slowly commoditized, and that ending seems strictly worse to me in a lot of ways than just dying suddenly in an exploding ball of fire.

Viliam

be useful, have political capital, can effect meaningful change throughout the world, can compete in status games with others, can participate in an economy of other people like me

How much of this is zero-sum games, where the part that makes you happy is that you are winning? Would the person who is losing feel the same? What is the good ending for them?

lc

WRT status games: I enjoy playing such games more when everybody agrees to the terms of the game and has a relatively even footing at the beginning and there are resets throughout. "Having more prestige" is great, but it's more important that I get to interact with other people in a meaningful way like that at all. The respect and prestige components people usually associate with winning status games are also not inherently zero-sum. It's possible to respect people even when they lose.

WRT political capital: Maybe it would be clearer if I said that I want to live in a world where humans have agency, and there's a History that feels like it's being shaped by actual people and not by Brownian motion, and where the path to power is not always to subjugate your entire life and psychology to a moral maze. While most people won't outright endorse things like Prigozhin's coup, because they realize it might end up being a lot more bad than good, they are obviously viscerally excited by the possibility that outsiders can win through gutsy action, and get depressed when they realize that's untrue. Contrast this with the default scenario of "some coalition of politicians and AGI lab heads and lobbyists decide how everything is going to be forever".

WRT everything else: Those things aren't zero sum at all. My laptop is useful and so am I. A laborer in Egypt is still participating in the economy.

Viliam

Thank you! I agree. Things called "zero-sum" often become something else when we also consider their impact on third parties, i.e. when we model them as games of 3 players (Player 1, Player 2, World). It may be that the actions of Player 1 negate the actions of Player 2 from their relative perspectives (if we are in a running competition, and I start running faster, I get an advantage, but if you also start running faster, my advantage is lost), but both work in the same direction from the perspective of the World (if both of us run faster, the competition is more interesting to watch for the audience).

In some status games the effect on the third party is mostly "resources are wasted". (I try to buy a larger gold chain, you try to buy a larger gold chain, resources are wasted on mining gold and making chains.)

But if we compete at producing value for the third party, whether it is making jokes, or signaling wealth by sending money to charity, the effect on the third party is the value produced. Such games are good! If we could make producing value for the third party the only status game in town, the world would probably be a much nicer place.

That said, the concept of "useful" seems...

lc
I have long wanted a society where there is a "constitutional monarchy" position that is high status and a magnet for interesting political skirmishes but doesn't have much control over public policy, and alongside that a "head of government" who is a boring accountant type and by law doesn't get invited to any of the interesting parties or fly around in a fancy jet.
aphyer
If you died and went to a Heaven run by a genuinely benevolent and omnipotent god, would it be impossible for you to enjoy yourself in it?
lc
It would be possible. "Fun Theory" describes one such environment the benevolent god could create.
Tachikoma
How distressed would you be if the "good ending" were opt-in and existed somewhere far away from you? I've explored the future and have found one version that I think would satisfy your desire, but I'm asking to get your perspective. Does it matter whether there are super-intelligent AIs but they leave our existing civilization alone and create a new one out on the fringes (the Arctic, Antarctica, or just out in space) and invite any humans to come along to join them without coercion? If you need more details, they're available at the Opt-In Revolution, in narrative form.
Nikita Sokolsky
"It's essential to my ability to enjoy life"

This assumes that we'll never have the technology to change our brain's wiring to our liking? If we live in the post-scarcity utopia, why won't you be able to just go change who you are as a person so that you'll fully enjoy the new world?
lc
https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence
sunwillrise
But you have also written yourself a couple of years ago: [...] And indeed, when talking specifically about the Fun Theory sequence itself, you said: [...] Do you no longer endorse this?

Rah to bringing back the short LessWrong post!

Eli Tyre
Bringing back? When were there ever short LessWrong posts?

The first person that comes to mind for me with this is Wei Dai — here's a 5-paragraph post of theirs from 2010, and here's a 5-paragraph post of theirs from 2020. But also Hal Finney's historic Dying Outside is 6 paragraphs. Psychohistorian's short story about life extension is also 5-6 paragraphs. PhilGoetz's great post Have no heroes, and no villains is just 6 short paragraphs. On Saying The Obvious is under 500 words.

I don't currently share this sense of distress.

Insofar as we don't all die and we broadly continue to have agency over the world, I am kind of excited and up for the challenge of the new age. Given no-extinction and no-death and lots of other improvements to wealth and health and power, I'm up for the challenges and pains and difficulties that come with it.

I am further reminded of this quote.

Personally, I've been hearing all my life about the Serious Philosophical Issues posed by life extension, and my attitude has always been that I'm willing to grapple with those issues for as many centuries as it takes.

— Patrick Nielsen Hayden

kave
I'd guess maybe @Katja Grace doesn't expect improvements to power (in the sense of human agency) in the default non-extinction future.
Ben Pace
I would be interested in slightly more detail about what Katja imagines that world looks like.

Seems like there are a lot of possibilities, some of them good, and I have little time to think about them. It just feels like a red flag for everything in your life to be swapped for other things by very powerful processes beyond your control while you are focused on not dying. Like, if lesser changes were upcoming in people's lives such that they landed in near mode, I think they would be way less sanguine—e.g. being forced to move to New York City.

Ben Pace
I agree it's very jarring. Everything you know is going to stop and a ton of new things will be happening instead. I can see being upset over the things that ended (friendships and other joys) and it hurting to learn the new ways that life is just harder now. That said, I note I don't feel bothered by your example. In the current era of video calls and shared Slacks and LW dialogues I don't think I'd personally much mind being forced to move to New York, and I might actually be excited to explore that place and its culture (acknowledging there will be a lot of friction costs as I adjust to a new environment). Even without that, if I was basically going to live more than a trillion lifetimes, then being forced to move cities would just be a new adventure! I have a possibly-related not-bothered attitude in that I am not especially bothered about the possibility of my own death, as long as civilization lives on.[1] I am excited for many more stories to be lived out, whether I'm a character in them or not. This is part of why I am so against extinction.

1. ^ Not that I wouldn't leap on the ability to solve aging and diseases.
trevor
I can't speak for Katja, but the impression I get is that she thinks some of the challenges of slow takeoff might be impossible or unreasonably difficult for humans to overcome. I've written about clown attacks and social media addiction optimization, but I expect resisting clown attacks and quitting social media to be the fun kind of challenge. Mitigating your agency loss from things like sensor-exposure-based influence and human lie detection will not be so fun, or even possible at all.
xiann
I agree; I'm reminded of the quote about history being the search for better problems. The search for meaning in such a utopian world (from our perspective) thrills me, especially when I think about all the suffering that exists in the world today. The change may be chaotic & uncomfortable, but if I consider my personal emotions about the topic, it would be more frightening for the world to remain the same.

I feel pretty wary of the alignment community becoming temperamentally pessimistic, in a way that colors our expectations and judgments. I note this post as a fairly central example of that. (May say more about this later, but just wanted to register disagreement.)

trevor
I think that the tradeoffs of making posts and comments vs. staying quiet are rather intensely complicated, e.g. you might think that clarifying your feelings through a keyboard or a conversation is worse than staying quiet about your feelings because if you stay quiet then you aren't outputting your thoughts as tokens. But if you attempt to make that calculation with only that much info, you'll be blindsided by game-changing unknown unknowns outside your calculation (e.g. staying quiet means your feelings might still be revealed without your knowledge as you scroll through social media, except as linear algebra instead of tokens).

i've written before about how aligned-AI utopia can very much conserve much of what we value now, including doing effort to achieve things that are meaningful to ourselves or other real humans. on top of alleviating all (unconsented) suffering and (unconsented) scarcity and all the other "basics", of course.

and without aligned-AI utopia we pretty much die-for-sure. there aren't really attractors in-between those two.

Joe Collman
That's my guess too, but I'm not highly confident in the [no attractors between those two] part. It seems conceivable to have a not-quite-perfect alignment solution with a not-quite-perfect self-correction mechanism that ends up orbiting utopia, but neither getting there, nor being flung off into oblivion. It's not obvious that this is an unstable, knife-edge configuration. It seems possible to have correction/improvement be easier at a greater distance from utopia. (whether that correction/improvement is triggered by our own agency, or other systems) If stable orbits exist, it's not obvious that they'd be configurations we'd endorse (or that the things we'd become would endorse them).
Tamsin Leake
okay, thinking about it more, i think the reason i believe this is because of a slack-vs-moloch situation. if we get a lesser utopia, do we have enough slack to build up a better utopia, even slowly? if not, do we have enough slack to survive-at-all? i feel like "we have exactly enough slack to live but not improve our condition" is a pretty unlikely state of affairs; most likely, either we don't have enough slack to survive (and we die, though maybe slowly) or we have more than enough to survive (and we improve our condition, though maybe slowly, all the way to the greater-utopia-we-didn't-start-with).

No offense but I sense status quo bias in this post.

If you replace "AI" with "industrial revolution" I don't think the meaning of the text changes much and I expect most people would rather live today than in the Middle Ages.

One thing that might be concerning is that older generations (us in the future) might not have the ability to adapt to a drastically different world in the same way that some old people today struggle to use the internet.

I personally don't expect to be overly nostalgic in the future because I'm not that impressed by the current state of the world: factory farming, the hedonic treadmill, physical and mental illness, wage slavery, aging, and ignorance are all problems that I hope are solved in the future.

Viliam

With adapting, the important question is what happens if you don't. If it only means you will miss out on some fun, I don't mind. Kids these days use Instagram and TikTok; I... don't really understand the allure of that, so I stay away. I may change my mind in the future, so it feels like I am choosing between two good things: the convenience of ignoring the new stuff, and the possible advantages of learning it.

It is different when the failure to adapt will make your life actively worse. Like people today who are old but not retired yet, who made the choice to ignore all that computer stuff, and now they can't get a job. Or the peasants during the industrial revolution who made a bet that "people will always need some food, so my job is safe, regardless of all this new stuff", and then someone powerful just took their fields and built a factory there, and let them starve to death (because they couldn't get a job in that factory).

If the future will have all the problems solved, including the problem of "how can I get food and healthcare in a society where a robot can do literally anything much better and cheaper than me", then... I will find a hobby; I never had a problem with that.

(I really hope the solution will not be "create stressful bullshit jobs".)

A separate question is whether I can survive the point that is halfway between "here" and "there".

peterbarnett
I'm pretty worried about the future where we survive and build aligned AGI, but we don't manage to fully solve all the coordination or societal problems. Humans as a species still have control overall, but individuals don't really. The world is crazy, and very good on most axes, but also disorienting, and many people are somewhat unfulfilled. It doesn't seem crazy that people born before large societal changes (e.g. the industrial revolution, the development of computers, etc.) do feel somewhat alienated from what society becomes. I could imagine some pre-industrial-revolution farmer kind of missing the simplicity and control they had over their life (although this might be romanticizing the situation).
aysja

Yeah :/ I've struggled for a long time to see how the world could be good with strong AI, and I've felt pretty alienated in that. Most of the time when I talk to people about it they're like "well the world could just be however you like!" Almost as if, definitionally, I should be happy because in the really strong success cases we'll have the tech to satisfy basically any preference. But that's almost the entire problem, in some way? As you say, figuring things out for ourselves, thinking and learning and taking pride in skills that take effort to acquire... most of what I cherish about these things has to do with grappling with new territory. And if I know that it is not in fact new, if all of it could be easier were I to use the technology right there... it feels as though something is corrupted... The beauty of curiosity, wonder, and discovery feels deeply bound to the unknown, to me. 

I was talking to a friend about this a few months ago and he suggested that because many humans have these preferences, that we ought to be able to make a world where we satisfy them—e.g., something like "the AI does its thing over there and we sit over here having basically normal human live...

dirk
This is a very strange mindset. It's already not new! Almost everything you can learn is already known by other people; most thoughts you can think have been thought before; most skills, other people have mastered more thoroughly than you're likely to. (If you only mean new to you in particular, on the other hand, AI can't remove the novelty; you'd have to experience it for it to stop being novel). Why would you derive your value from a premise that's false?
whestler
I realise this is a few months old, but personally my vision for utopia looks something like the Culture in the Culture novels by Iain M. Banks. There's a high degree of individual autonomy and people create their own societies organically according to their needs and values. They still have interpersonal struggles and personal danger (if that's the life they want to lead) but in general if they are uncomfortable with their situation they have the option to change it. AI agents are common, but most are limited to approximately human level or below. Some superhuman AIs exist, but they are normally involved in larger civilisational manoeuvring rather than the nitty-gritty of individual human lives. I recommend reading it. Caveats: (1) yes, this is a fictional example, so I'm definitely in danger of generalising from fictional evidence. I mostly think about it as a broad template or cluster of attributes society might potentially be able to achieve. (2) I don't think this level of "good" AI is likely.
Roman Leventov
Discovering and mastering one's own psychology may still be a frontier where the AI could help only marginally. So, more people will become monks or meditators?

Ah, if things go well, it will be an amazing opportunity to find out how much of our minds was ultimately motivated by fear. Suppose that you are effectively immortal and other people can't hurt you: what would you do? Would you still want to learn? Would you bother keeping friends? Or would you maybe just simulate a million kinds of experience, and then get bored and decide to die or wirehead yourself?

I think I want to know the answer. If it kills me, so be it... the universe didn't have a better plan for me anyway.

It would probably be nicer to take things slowly. Stop death and pain, and then let people slowly figure out everything. That would keep a lot of life normal. The question is whether we could coordinate on that, because it would be tempting to cheat. If we all voluntarily slow down and try to do "life as normal, but without pain", a little bit of cheating ("hey AI, give me 20 extra IQ points and all university-level knowledge as of 2023, but don't tell anyone; otherwise give me the life as normal but without pain") would keep a lot of the benefits of life as normal, but also give one a relative advantage and higher status. It's not even necessary to give me all the knowledg...

M. Y. Zuo
Why would it be desirable to maintain this kind of 'division of labor' in an ideal future?
Viliam
Maybe it won't, but it seems to me that people today build a lot of their interactions around that. (For example, I read Astral Codex Ten, because Scott is much better at writing than me.)

I think this might be mapping a regular getting-older thing onto catastrophe? Part of aging is the signaling landscape that one trained on changing enough that many of the things one once valued don't seem broadly valued any more.

I think you're confusing aging with the mere passage of time. You haven't forgotten, I trust, that aging is to be destroyed?

This is more or less why I chose to go into genetics instead of AI: I simply couldn't think of any realistic positive future with AGI in it. All positive scenarios rely on a benevolent dictator, or some kind of stable equilibrium with multiple superintelligent agents whose desires are so alien to mine, and whose actions are so unpredictable that I can't evaluate such an outcome with my current level of intelligence.

lc
That probably doesn't lead to nice outcomes without additional constraints, either.

I've actually been moving in the opposite direction, thinking that the gameboard might not be flipped over, and actually life will stay mostly the same. Political movements to block superintelligence seem to be gaining steam, and people are taking it seriously.

(Even for more mundane AI, I think it's fairly likely that we'll be soon moving "backwards" on that as well, for various reasons which I'll be writing posts about in the coming week or two if all goes well.)

Also, some social groups will inevitably internally "ban" certain technologies if things get weird. There's too much that people like about the current world, to allow that to be tossed away in favor of such uncertainty.

these social movements only delay AI. unless you ban all computers in all countries, after a while someone, somewhere will figure out how to build {AI that takes over the world} in their basement, and the fate of the lightcone depends on whether that AI is aligned or not.

I am not a fan of the current state of the universe. Mostly the part where people keep dying and hurting all the time. Humans I know, humans I don't know, other animals that might or might not have qualia, possibly aliens in distant places and Everett branches. It's all quite the mood killer for me, to put it mildly. 

So if we pull off not dying and not turning Earth into the nucleus of an expanding zero-utility stable state, superhuman AI seems great to me.

Did you miss transhumanism? If being useful is truly important to you, alignment would mean that superintelligence will find a way to lift you up and give you a role.

I suppose there might be a period during which we've figured out existential security but the FASI hasn't figured out human augmentation beyond the high priority stuff like curing aging. I wouldn't expect that period to be long.

O O

I can only say there was probably someone in every rolling 100-year period who thought the same about the next 100 years.

I think this time is different. The implications are simply so much broader, so much more fundamental.

Also, from a zoomed-out-all-of-history-view, the pace of technological progress and social change has been accelerating. The difference between 500 AD and 1500 AD is not 10x the difference between 1900 and 2000, it's arguably less than 1x. So even without knowing anything about this time, we should be very open to the idea that this time is more significant than all previous times.

Shankar Sivarajan
That's what people said last time too. And the time before that.

That's so correct. But still so wrong - I'd like to argue.

Why?

Because replacing the brain is simply not the same as replacing just our muscles. In all of the past we've only augmented our brain, with stronger muscle or calculation or writing power, etc., using all sorts of dumb tools. But the brain remained the crucial, all-central point for all action.

We will now have tools that are smarter, faster, and more reliable than our brains. Probably even more empathic. Maybe more loving.

Statistics cannot be extrapolated when there's a visible structural break. Yes, 25 years ago it may have been difficult to anticipate that computers that calculate so fast would not already change society all that fundamentally (although still quite fundamentally), so the 'this time is different' crowd of 25 years ago was wrong. But in hindsight it is not so surprising: as long as machines were not truly smart, we could not change the world as fundamentally as we now foresee. But this time, we seem to be about to get the truly smart ones.

The future is a miracle; we cannot truly fathom exactly how it will look. So nothing is absolutely sure, indeed. But merely looking back to the period where mainly muscles we...

Not sure about this, but to the extent it was so, often they were right that a lot of things they liked would be gone soon, and that that was sad. (Not necessarily on net, though maybe even on net for them and people like them.)

There was, but their arguments for it were terrible. If there are flaws in the superintelligence argument, please point them out. It's hard to gauge when, but with GPT-4 being smarter than a human for most things, it's tough to imagine we won't have closed its gaps (memory, using its intelligence to direct continuous learning) within a couple of decades.

trevor
Modern civilization is pretty OOD compared to the conditions that formed it over the last 100 years. Just look at the current US-China-Russia conflict, for example. Unlike the original Cold War, this current conflict was not started with intent to carpet bomb each other with nukes (carpet bombing was the standard with non-nuclear bombs during WW2, so when the Cold War started they assumed that they would do the carpet bombing with nukes instead).

Aligned AGI means people getting more of what they want. We'll figure out where to get challenge and fulfillment.

People will have more friends, not fewer, once we have more time for them, and more wisdom about how to get and keep them. And we will still want human support and advice, even if machines can do it better.

People who want to learn skills and knowledge will still do that, and create spaces to compete and show them off. I'm thinking of the many competitions that already happen with rules about not getting outside help.

Most humans have always lived ...

What do you make of the prospect of neurotech, e.g. Neuralink, Kernel, Openwater, Meta/CTRL-Labs, facilitating some kind of merge of biological human intelligence and artificial intelligence? If AI alignment is solved and AGI is safely deployed, then "friendly" or well-aligned AGI could radically accelerate neurotech. This sounds like it might obviate the sort of obsolescence of human intelligence you seem to be worried about, allowing humans alive in a post-AGI world to become transhuman or post-human cyborg entities that can possibly "compete" with AGI in domains like writing, explanation, friendship, etc. 

StartAtTheEnd
I don't think rational human beings are human beings at all. Why wouldn't we make ourselves psychopaths in order to reduce suffering? Why would we not reduce emotions in order to become more rational? Friendships are a human thing, an emotional thing, and something which is ruined by excess logic and calculation. I argue that everything pretty in life, and everything optimal in life, are at odds with each other. That good things are born from surplus, and even from wasting this surplus (e.g. parties, festivals, relaxation). And that everything ugly in life stems from optimization (exploitation, manipulation, bad faith). I don't even wish to become more rational than I am now; I can feel how it's making me more nihilistic, how I must keep myself from thinking about things in order to enjoy them, how mental models get in the way of experiencing and feeling life, and living in the moment. I'd even argue that seeking one's own advantage in every situation is a symptom of bad health, a feeling of desperation and neediness, a fear of inadequacy.
Viliam
Yet I would think that there is some positive relation between "optimization" and "surplus". You can enjoy wine at the party, because someone spent a lot of time thinking how to produce and distribute it cheaply.
StartAtTheEnd
That's true. But somebody who optimizes excessively would consider it irrational to purchase any wine. This was just an example, and it's not very valuable on its own, but if you generalize the idea or isolate the mechanism which leads to it, I think you will find that it's rather pervasive. To illustrate my point differently: mass-producing cubes is much more efficient than building pretty housing with some soul and aesthetic value. So optimization is already, in some sense, in conflict with human values. Extrapolating the current development of society, I predict that the already lacking sense of humanity and realness is going to disappear completely. You may think that the dead internet theory and such are unintended side-effects that we will deal with in time, but I believe that they're mathematically unavoidable consequences. "Human choice" and "optimal choice" go in different directions, and our human choices are being corrected in the name of optimization and safety. Being unoptimal is almost treated as a form of self-harm nowadays, but the life which is not genuine is not life at all, in my eyes. So I'm not deriving any benefits from being steered in such a mechanical direction (I'm not accusing you of doing this).
[comment deleted]

I do not see why any of these things will be devalued in a world with superhuman AI.

At most of the things I do, there are many other humans who are vastly better at doing the same thing than me. For some intellectual activities, there are machines who are vastly better than any human. Neither of these stops humans from enjoying improving their own skills and showing them off to other humans.

For instance, I like to play chess. I consider myself a good player, and yet a grandmaster would beat me 90-95 percent of the time. They, in turn, would lose on average...

jmh

I kind of understand where that sentiment comes from but I do think it is "wrong". Wrong in the sense that it is neither a necessary position to hold nor a healthy one. There are plenty of things I do today in which I get a lot of satisfaction even though existing machines, or just other people, can do them much better than I can. The satisfaction comes from the challenge to my own ability level rather than some comparison to something outside me -- be it machine, environment or another person.

To me it sounds like you're dividing possible futures into extinction, dystopia, and utopia, and noticing that you can't really envision the latter. In which case, I agree, and I think if any of us could, we'd be a lot closer to solving alignment than we actually are. 

Where my intuition cuts differently is that I think most seemingly-dystopian futures, where humans exist but are disempowered and dissatisfied with our lives and the world, are unstable or at best metastable, and will eventually give way to one of the other two categories. I'm sure stabl...

I predict most humans choose to reside in virtual worlds and possibly have their brain altered to forget that it's not real. 

Short, as near as I can tell, true, and important. This expresses much of my feeling about the world.

You can just create environments personalized to your preferences, assuming that you have power/money in the post-singularity world.

KatjaGrace
Assuming your preferences don't involve other people or the world.
Roko
Most people, ultimately, do not care about something that abstract and will be happy living in their own little Truman Show realities that are customized to their preferences. Personally I find The World to be dull and constraining, full of things you can't do because someone might get offended or some lost-purposes system might zap you. Did you fill in your taxes yet!? Did you offend someone with that thoughtcrime?! Plus, there are the practical downsides like ill health and so on. I'd be quite happy to never see 99.9999999% of humanity ever again, to simply part ways and disappear off into our respective optimized Truman Shows. And honestly I think anyone who doesn't take this point of view is being insane. Whatever it is you like, you can take with you. Including select other people who mutually consent.

It really is. My conception of the future is so weighed by the very likely reality of an AI transformed world that I have basically abandoned any plans with a time scale over 5 years. Even my short term plans will likely be shifted significantly by any AI advances over the next few months/years. It really is crazy to think about, but I've gone over every single aspect of AI advances and scaling thousands of times in my head and can think of no reality in the near future not as alien to our current reality as ours is to pre-eukaryotic life.

My taxonomy of possible outcomes is x-risk (risk of extinction), s-risk (risk of suffering), w-risk (risk of a "weirdtopia"), and success. It seems like what you are worried about is a mix of s-risk and w-risk, maybe along lines that no-one has clearly conceptualized yet?

Raemon
I mean there’s also like ‘regular ol’ (possibly subtle) dystopia?’ Like, it might also be a weirdtopia but it doesn’t seem necessary in the above description. (I interpret weirdtopia to mean ‘actually good, overall, but in a way that feels horrifying or strange’. If the replacements for friendship etc aren’t actually good, it might just be bad)
Mitchell_Porter
This could be a reason for me not to call it a "w-risk". But this also highlights the slippery nature of some of the boundaries here.  My central idea of a w-risk and a weirdtopia, is that it's a world where the beings in it are happy, because it's being optimized/governed according to their values - but those values are not ours, and yet those beings are us, and/or our descendants, after being changed by some process to which we would not have consented beforehand, if we understood its nature.  On the other hand, your definition of weirdtopia could also include futures in which our present values are being satisfied, "but in a way that feels horrifying or strange" if it's described to us in the present. So it might belong to my fourth category - all risks successfully avoided - and yet we-in-the-present would reject it, at least at first. 

Superhuman chess AI did not remove people's pleasure from learning/playing chess. I think people are adaptable and can find meaning. Surely, the world will not feel the same, but I think there is significant potential for something much better. I wrote about this a little on my blog:

https://martinkunev.wordpress.com/2024/05/04/living-with-ai/

dirk

If superhuman AI would prevent you from thinking, learning, or being proud of yourself; that seems to me like the result of some sort of severe psychological issue. I'm sorry that you have that going on, but... maybe get help?


I keep wondering if there is an afterlife, and if there is, will they be able to break a twenty?

I sort of agree. But there is clearly a potential flip-side: we quite likely get to be post-scarcity on a lot of things (modulo fairness of distribution and absence of tyranny), including customized art and stories, scientific and mathematical discoveries, medical care, plus any sort of economic goods that depend more on knowledge and technological inputs than material resources. So the video games will be awesome. We might even be somewhat less constrained on material resources, if there are rapid technological improvements in renewable energy, environmental remediation after mining, asteroid mining, or things like that.

This makes an interesting point about scarcity. On one hand, it sucks to be limited in the amount of stuff you have. On the other hand, struggling through adversity or having scarce skills can give people a sense of meaning. We know that people whose skills are automated can suffer a lot from it. 

 

I think that even once all of humans' cognitive skills can be replaced by AI, we will still be useful to one another. We will still relate to each other on account of our shared biological nature. I think that people will not for the most part have the ...

Thanks for writing this. I fully agree, by the way.

Anything like a utopia requires a form of stability. But we're going to speed everything up, by a lot. Nothing can change and yet remain the same. And I think it's silly to assume that optimization, or simply just improving things, can co-exist with things remaining as they are. We're necessarily something intermediate, so speeding things up doesn't seem like a good idea.

Furthermore, it seems that slowing down technological advancement is almost impossible, and that keeping people from making optimal choi...

In my head, I've sort of just been simplifying to two ways the future could go: human extinction within a relatively short time period after powerful AI is developed or a pretty good utopian world. The non-extinction outcomes are not ones I worry about at the moment, though I'm very curious about how things will play out. I'm very excited about the future conditional on us figuring out how to align AI. 

I'm curious, for people who think similarly to Katja: what kind of story are you imagining that leads to that? Does the story involve authoritari...

nim

writing, explaining things to people, friendships where you can be of any use to one another, taking pride in skills, thinking, learning, figuring out how to achieve things, making things, easy tracking of what is and isn’t conscious

Used to be, we enjoyed doing those things ourselves through a special uniquely-human flavor of being clever.

Seems like post-AI, we'll get to those things through a special uniquely-human flavor of being un-clever.

It doesn't make sense to judge human accomplishment and effort differently from that of AI -- but humans are grea...

The Sea of Faith
Was once, too, at the full, and round earth's shore
Lay like the folds of a bright girdle furled.
But now I only hear
Its melancholy, long, withdrawing roar,
Retreating, to the breath
Of the night-wind, down the vast edges drear
And naked shingles of the world.

Ah, love, let us be true
To one another! for the world, which seems
To lie before us like a land of dreams,
So various, so beautiful, so new,
Hath really neither joy, nor love, nor light,
Nor certitude, nor peace, nor help for pain;
And we are here as on a darkling plain
Swept with confused alarms of struggle and flight,
Where ignorant armies clash by night.

Even if we don’t die, it still feels like everything is coming to an end.

Everything? I imagine there will be communities/nations/social groups that completely ban AI and those that are highly dependent on AI. There must be something between those two extremes.

This is like saying "I imagine there will be countries that renounce firearms". There are no such countries. They got eaten by countries that use firearms. The social order of the whole world is now kept by firearms.

The same will happen with AI, if it's as much a game changer as firearms.

Gesild Muka
I think I understand; we're discussing with different scales in mind. I'm saying that individually (or if your community is a small local group) nothing has to end, but if your interests and identity are tied to sizeable institutions, technical communities, etc., many will be disrupted by AI to the point where they could fade away completely. Maybe I'm just an unrealistic optimist, but I don't believe collective or individual meaning has to fade away just because the most interesting and cutting-edge work is done exclusively by machines.