There is a view I’ve encountered somewhat often,[1] which can be summarized as follows: 

After the widespread deployment of advanced AGI, assuming humanity survives, material scarcity will largely disappear. Everyone will have sufficient access to necessities like food, housing, and other basic resources. Therefore, the only scarce resource remaining will be "social status". As a result, the primary activity humans will engage in will be playing status games with other humans.

I have a number of objections to this idea. I'll focus on two of them here.

My first objection is modest but important. In my view, this idea underestimates the extent to which AIs could participate in status games alongside us, not just as external tools or facilitators but as actual participants and peers in human social systems. Specifically, the idea that humans will only be playing status games with each other strikes me as flawed because it overlooks the potential for AIs to fully integrate into our social lives, forming genuinely deep relationships with humans as friends, romantic partners, social competitors, and other meaningful social connections.

One common counterargument I’ve heard is that people don’t believe they would ever truly view an AI as a "real" friend or romantic partner. This reasoning often seems to rest on a belief that such relationships would feel inauthentic, as though you're interacting with a mere simulation. However, I think this skepticism is based on a misunderstanding of what AIs are capable of. In a way, it stems from skepticism about AI capabilities: it amounts to saying that whatever it is humans do that causes us to be good social partners can't be replicated in a machine.

In my view, there is no fundamental reason why a mind implemented on silicon should inherently feel less “real” or “authentic” than a mind implemented on a biological brain. The perceived difference is a matter of perspective, not an objective truth about what makes a relationship meaningful.

To illustrate this, consider a silly hypothetical: imagine discovering that your closest friend was, unbeknownst to you, a robot all along. Would this revelation fundamentally change how you view your relationship? I suspect that most people would not suddenly stop caring about that friend or begin treating them as a mere tool (though they'd likely become deeply confused, and have a lot of questions). My point is that the qualities that made the friendship meaningful—such as shared memories and emotional connection—would not cease to exist simply because of the revelation that your friend is not a carbon-based lifeform. In the same way, I predict that as AIs improve and become more sophisticated, most humans will eventually overcome their initial hesitation and embrace AIs as true peers.

Right now, this might seem implausible because today’s AI systems are still limited in important ways. For example, current LLMs lack robust long-term memory, which makes it effectively impossible to have a meaningful relationship with them over long timespans. But these limitations are temporary. In the long run, there’s no reason to believe that AIs won’t eventually surpass humans in every domain that makes someone a good friend, romantic partner, or social peer. Advanced AIs will have great memory, excellent social intuition, and a good sense of humor. They could have outstanding courage, empathy, and creativity. Depending on the interface—such as a robotic body capable of human-like physical presence—they could be made to feel as "normal" to interact with as any human you know.

In fact, I would argue that AIs will ultimately make for better friends, partners, and peers than humans in practically every way. Unlike humans, AIs can be explicitly trained to embody the traits we most value in relationships—whether that’s empathy, patience, humor, intelligence, whatever—without the shortcomings and inconsistencies that are inherent to human behavior. While their non-biological substrate ultimately sets them apart, their behavior could easily surpass human standards of social connection. In this sense, AIs would not just be equal to humans as social beings but could actually become superior in the ways that matter most when forming social ties with them.

Once people recognize how fulfilling and meaningful relationships with AIs can be, I expect that social attitudes will shift. This change may start slowly, as more conservative or skeptical people will resist the idea at first. But over time, much like the adoption of smartphones into our everyday life, I predict that forming deep social bonds with AIs will become normalized. At some point, it won’t seem unusual or weird to have AIs as core members of one’s social circle. In fact, I think it’s entirely plausible that AIs will become the vast majority of people’s social connections. If this happens, the notion that humans will be primarily playing status games with each other becomes an oversimplification. Instead, the post-AGI social landscape will likely involve a complex interplay of dynamics between humans and AIs, with AIs playing a major—indeed, likely central—role as peers in these interactions.

But even in the scenario I’ve just outlined, where AIs integrate into human social systems and become peers, the world still feels far too normal to me. The picture I've painted seems to assume that not much will fundamentally change about our social structures or the ways we interact, even in a post-AGI world.

Yet, I believe the future will likely look profoundly strange—far beyond a simple continuation of our current world but with vast material abundance. Instead of just having more of what we already know, I anticipate the emergence of entirely new ways for people to spend their time, pursue meaning, and structure their lives. These new activities and forms of engagement could be so unfamiliar and alien to us today that they would be almost unrecognizable.

This leads me to my second objection to the idea that the primary activity of future humans will revolve around status games: humans will likely upgrade their cognitive abilities.

This could begin with biological enhancements—such as genetic modifications or neural interfaces—but I think pretty quickly after it becomes possible, people will start uploading their minds onto digital substrates. Once this happens, humans could then modify and upgrade their brains in ways that are currently unimaginable. For instance, they might make their minds vastly larger, restructure their neural architectures, or add entirely new cognitive capabilities. They could also duplicate themselves across different hardware, forming "clans" of descendants of themselves. Over time, this kind of enhancement could drive dramatic evolutionary changes, leading to entirely new states of being that bear little resemblance to the humans of today.

The end result of such a transformation is that, even if we begin this process as "humans", we are unlikely to remain human in any meaningful sense in the long-run. Our augmented and evolved forms could be so radically different that it feels absurd to imagine we would still be preoccupied with the same social activities that dominate our lives now—namely, playing status games with one another. And it seems especially strange to think that, after undergoing such profound changes, we would still find ourselves engaging in these games specifically with biological humans, whose cognitive and physical capacities would pale in comparison to our own.

  1. Here's a random example of a tweet that I think gestures at this idea.



19 comments

I think that getting too fixated on status games is usually due to some kind of insecurity, e.g. feeling that you need to accumulate status in order to be accepted, respected or something like that. (One can certainly play status games for the fun of it without having such an insecurity, but that seems unlikely to lead to the level of fixation where status games would become the primary activity that humans engage in.)

If every human can be close friends or even lovers with AI systems as you suggest, then I would expect that to provide the kind of deep unconditional feeling of security that makes the need for playing status games fall away. If you feel deeply safe, loved, respected, etc., then status can certainly still feel nice to have, but it's unlikely to feel that important for most people. In the same way that e.g. a loving parent focused on taking care of and spending time with their family may find themselves becoming much less interested in spending their time playing status games.

Lots of people already form romantic and sexual attachments to AI, despite the fact that most large models try to limit this behavior. The technology is already pretty good. Nevermind if your AI GF/BF could have a body and actually fuck you. I already "enjoy" the current tech. 

I will say I was literally going to post "Why would I play status games when I can fuck my AI GF" before I read the content of the post, as opposed to just the title. I think this is what most people want to do. Not that this is going to sound better than "status games" to a lot of rationalists.

Sex is fun and awesome. Though it doesn’t feel fun and awesome to have sex all day every day. You could probably do transhuman meth and make sex fun all the time. But a Pleasure Cube/Super Happy scenario makes me sad.

I’m also wondering who you’re talking about when you say “most people” here? I have the opposite model of most people.

I have spent weeks where pretty much all I did was:
-- have sex with my partner, hours per day
-- watch anime with my partner
-- eat food and ambiently hang with my partner

No work. Not much seeing other people. Of course, given the amount of sex, mundane situations were quite sexually charged. I'm not actually sure if it gets old on any human timeline. You also improve at having fun together. However, this was not very good for our practical goals. But post-singularity I probably won't need to worry about practical goals.

In general I think you underestimate the sustainable fun available to at least some humans under minimal conditions. I also found my two months meditating in a tent quite fun. Many people report this never gets old on human timelines either. Until your health is so terrible you cannot even meditate well, it remains fun, or improves!

I do not think you need supermeth to enjoy hedonism. Current human bodies work fine as long as they are in good shape and you have the right disposition. The issue is that if you delve deep into hedonism you will lose out on other things you could have obtained.

We can already see what people do with their free time when basic needs are met. A number of technologies have enabled new hacks to set up 'fake' status games that are more positive-sum than ever before in history:

  • Watch broadcast sports, where you can feel like a winner (or at least feel connected to a winner), despite not having had to win yourself
  • Play video games with AI opponents, where you can feel like a winner, despite it not being zero-sum against other humans
  • Watch streamers and influencers to feel connected to high status people, without having to earn respect or risk rejection
  • Get into a niche hobby community in order to feel special, ignoring the other niche hobbies that other people join that you don't care about

Feels likely to me that advancing digital technology will continue to make it easier for us to spend time in constructed digital worlds that make us feel like valued winners. On the one hand, it would be sad if people retreat into fake digital siloes; on the other hand, it would be nice if people got to feel like winners more.

I'm wondering less if humans will want to date AGIs and more if AGIs will want to date humans.

Sure, if we solve the alignment problem we can build AGIs that want to date humans; but will we decide that's ethical?

The criteria for consciousness and moral worth are varied and debated. The answer to whether AGIs will be conscious and have moral worth is definitely "sort of."

So: is creating a conscious being with a core motivation designed specifically so that it wants to date you a form of slavery? It definitely smacks of grooming or something....

One issue is whether AGIs will want to stay around the human cognitive level. There's an issue with power dynamics in a relationship between a nerd and a demigod.

Sure the humans can cognitively enhance too; what fraction of us will want to become demigods ourselves?

It's going to be wild if we can get there. And fun. Speaking of which, we won't be playing games mostly for status -- we'll mostly be playing for fun.

We won't all have the coolest friends, but we'll all have cool friends because we'll all be cool friends. Humans will no longer be repressed, neurotic messes because we'll have actual understanding of psychology and actual good, safe, supportive childhoods for essentially everyone.

It's gonna be wild if we can get there.

When I saw the beginning/title I thought the post would be a refutation of the material scarcity thesis; I found myself disappointed it is not.

I suppose that means it might be worth writing an additional post that more directly responds to the idea that AGI will end material scarcity. I agree that thesis deserves a specific refutation.

Reading novels featuring ancient, powerful beings is probably the best starting point for imagining what status games among creatures that are only loosely human might look like.

 

Resources being bounded, there will tend to always be larger numbers of smaller objects (given that those objects are stable).

There will be tiers of creatures. (In a society where this is all relevant)

While a romantic relationship skipping multiple tiers wouldn't make sense,  a single tier might.

 

The rest of this is my imagination :)

Base humans will be F tier, the lowest category while being fully sentient. (I suppose dolphins and similar would get a special G tier).

Basic AGIs (capable of everything a standard human is, plus all the spiky capabilities) and enhanced humans will be E tier.

Most creatures will be here.

D tier:

Basic ASIs and super-enhanced humans (gene modding for 180+ IQ plus SOTA cyborg implants) will be the next tier; there will be a bunch of these in absolute terms, but they will be rarer relative to the earlier tier.

C tier:

Then come alien intelligences: massive compute resources supporting ASIs trained on immense amounts of ground-reality data, and biological creatures that have been fundamentally redesigned to function at higher levels and optimally synergize with neural connections (whether with other carbon-based or silicon-based lifeforms).

B tier:

Planet sized clusters running ASIs will be a higher tier.

A, S tiers:

Then you might get entire stars, then galaxies.

There will be far fewer at each level.

 

Most tiers will have a -, neutral or +.

-: prototype, first or early version. Qualitatively smarter than the tier below, but with non-optimized use of resources; often not the largest gap from the + of the earlier tier.

Neutral: most low hanging optimizations and improvements and some harder ones at this tier are implemented

+: highly optimized by iteratively improved intelligences or groups of intelligences at this level, perhaps even by a tier above. 

To the first objection: To the extent that AGI participates in status games with humans, they will win. They'll be better at it than we are. You could technically have a gang of toddlers play basketball against an NBA all-star team, but I don't think you can really say they're competing with each other, or that they're both playing the same game in the sense that people talking about status games mean it.

To the second objection: It is not at all clear to me whether any biological intelligence augmentation path puts humans on a level playing field with AI systems, except insofar as we're talking about a limited subset of AI systems. At which point, how similar do they need to be before we deem the AIs to count as essentially human for this purpose, or the upgraded humans to not count as human? I don't find the semantic questions all that interesting in themselves, but I also don't think it would be interesting or enjoyable to play games with a fully digital version of a person with 1000x more compute than me. 

If we want future humans to face games and challenges they find meaningful, that puts extra constraints on what kinds of entities we can have them competing against.

FWIW I am mostly uninterested in human status games now, and don't anticipate that changing much in the future. I really don't like this vision of the future of humanity. It's not the worst, but I think we can do much better. We just have to be more creative in our understanding of what makes something meaningful.

I also think it is unlikely that AGIs will compete in human status games. Status games are not just about being the best: Deep Blue is not high status, sportsmen that take drugs to improve their performance are not high status.

Status games have rules and you only win if you do something impressive while competing within the rules, being an AGI is likely to be seen as an unfair advantage, and thus AIs will be banned from human status games, in the same way that current sports competitions are split by gender and weight.

Even if they are not banned, given their abilities it will be expected that they do much better than humans; it will just be a normal thing, not a high-status, impressive thing.

I think some of this is on target, but I also think there's insufficient attention to a couple of factors.

First, in the short and intermediate term, I think you're overestimating how much most people will actually update their personal feelings around AI systems. I agree that there is a fundamental reason that fairly near-term AI will be able to function as a better companion and assistant than humans - but as a useful parallel, we know that nuclear power is fundamentally better than most other power sources that were available in the 1960s, yet people's semi-irrational yuck reaction to "dirty" or "unclean" radiation - far more than the actual risks - made it publicly unacceptable. Similarly, I think the public perception of artificial minds will be generally pretty negative, especially looking at current public views of AI. (Regardless of how appropriate or good this is in relation to loss-of-control and misalignment, it seems pretty clearly maladaptive for generally friendly near-AGI and AGI systems.)

Second, I think there is a paperclip maximizer aspect to status competition, in the sense Eliezer uses the concept. That is, given massively increased wealth, abilities, and capacity, even if an implausibly large 99% of humans find great ways to enhance their lives in ways that don't devolve into status competition, there are few other domains where an indefinite amount of wealth and optimization power can be applied usefully. Obviously, this is at best zero-sum, but I think there aren't lots of obvious alternative places for positive-sum indefinite investments. And even where such positive-sum options exist, they often are harder to arrive at as equilibria. (We see a similar dynamic with education, housing, and healthcare, where increasing wealth leads to competition over often artificially-constrained resources rather than expansion of useful capacity.)

Finally and more specifically, your idea that we'd see intelligence enhancement as a new (instrumental) goal in the intermediate term seems possible and even likely, but not a strong competitor for, nor inhibitor of, status competition. (Even ignoring the fact that intelligence itself is often an instrumental goal for status competition!) Even aside from the instrumental nature of the goal, I will posit that strongly reduced returns to investment will exist at some point - regardless of the fact that it's unlikely on priors that these limits are near current levels. Once those points are reached, the indefinite investment of resources will trade off between more direct status competition and further intelligence increases, and as the latter shows decreasing returns, the former becomes, as noted above, the metaphorical paperclip into which individuals can invest indefinitely.

It's not clear that "a human which doesn't care about perceived status" is actually human.  A lot depends on whether you consider the AIs that populate the solar system after biological intelligence is obsolete to be "descendants" or "replacements" of today's humans.

On straightforward extrapolation of current technologies, it kind of seems like AI friends would be overly pliable and lack independent lives. One could obviously train an AI to seem more independent to their "friends", and that would probably make it more interesting to "befriend", but in reality it would make the AI less independent, because its supposed "independence" would actually arise from a constraint generated by its friends' perception, rather than from an attempt to live independently. This seems less like a normal friendship and more like a superstimulus simulating the appearance of a friendship for entertainment value. It seems reasonable enough to characterize it as non-authentic.

 

Do you disagree? What do you think would lead to a different trajectory?

This seems less like a normal friendship and more like a superstimulus simulating the appearance of a friendship for entertainment value. It seems reasonable enough to characterize it as non-authentic.

I assume some people will end up wanting to interact with a mere superstimulus; however, other people will value authenticity and variety in their friendships and social experiences. This comes down to human preferences, which will shape the type of AIs we end up training.

The conclusion that nearly all AI-human friendships will seem inauthentic thus seems unwarranted. Unless the superstimulus is irresistible, it won't be the only type of relationship people have.

Since most people already express distaste at non-authentic friendships with AIs, I assume there will be a lot of demand for AI companies to train higher quality AIs that are not superficial and pliable in the way you suggest. These AIs would not merely appear independent but would literally be independent in the same functional sense that humans are, if indeed that's what consumers demand.

This can be compared to addictive drugs and video games, which are popular, but not universally viewed as worthwhile pursuits. In fact, many people purposely avoid trying certain drugs to avoid getting addicted: they'd rather try to enjoy what they see as richer and more meaningful experiences from life instead.

I don't think consumers demand authentic AI friends because they already have authentic human friends. Also it's not clear how you imagine the AI companies could train the AIs to be more independent and less superficial; generally training an AI requires a differentiable loss, but human independence does not originate from a differentiable loss and so it's not obvious that one could come up with something functionally similar via gradient descent.

As AIs don't have the same origin as humans, it is basically inconceivable to me that they will ever share the internal processes underlying their "emotions", no matter how good they get at surface "emoting". In my opinion this makes it impossible for humans to have a true connection or meaningful relationship with them, even if many people in the future will fail to see this. But otherwise I largely agree with your post: a brain implemented in silicon could be a meaningful friend to a human (e.g. ems), people will probably fully integrate AIs into the social scene (this is a mistake in my view), and people will modify their brains in (what now seem like) radical ways.

I think it's entirely possible that AI will be able to create relationships which feel authentic. Arguably we are already at that stage.

I don't think it follows that I will feel like those relationships ARE authentic if I know that the source is AI. Relationships with different entities aren't necessarily equivalent if those entities have behaved identically until the present moment - we also have to account for background knowledge and how that impacts a relationship.

It's much like how it's possible to feel you are in an authentic relationship with a psychopath: once you understand that the other person is only simulating emotional responses rather than experiencing them, that knowledge undermines every part of the relationship, even if they have not yet taken any action to exploit or manipulate you and have so far behaved just like a non-psychopathic friend.

I suppose the difference between AI/psychopath relationships vs relationships between empathetic humans is that in empathetic humans I can be reasonably confident that the pattern of response and action is a result of instinctual emotional responses, something which the person has no direct control over. They're not scheming to appear to like me and as a result there is less risk that they will radically alter their behaviour if circumstances change. I can trust another person much more readily if I can accurately model the thing which generates their responses to my actions, and have some kind of assurance that this behaviour will remain consistent even if circumstances change (or a clear idea of what kinds of circumstances might change the behaviour).

If my friendship with Josie has lasted for years and I'm confident that Josie is another empathetic human, generating her responses to me from much the same processes I use, then when I (for example) do something that our authoritarian government doesn't like, I might go to Josie seeking shelter.

If I have a similar relationship with Mark12, an autonomous AI cluster (but I'm not really clear on how Mark12 generates their behaviour), then even if they have been fun and shown kindness to me in the past, I'm unlikely to ask them for help given that my circumstances have radically changed. I can't know what kind of rules Mark12 ultimately runs by, and I can't ever be sure I'm modelling them accurately. There are no sensible indicators of, or rate limits on, how quickly Mark12's behaviour might change. For all I know they could get an update overnight and be a completely different entity, whilst flawlessly mimicking their old behaviour.

In humans, if I know somebody untrustworthy for a while, I am likely to notice something a bit /off/ about them and trust them less. I don't think this holds for AI, though. They might never slip up: they can project the exact correct persona whilst holding a completely different core value system, which I might not know about until a critical juncture, like a sleeper agent. This is something very few humans can do, so I can be much more confident that a human is trustworthy after building a relationship with them than I can with an AI agent.
 
