Crowds of men and women attired in the usual costumes, how curious you are to me!

On the ferry-boats the hundreds and hundreds that cross, returning home, are more curious to me than you suppose,

And you that shall cross from shore to shore years hence are more to me, and more in my meditations, than you might suppose.

Walt Whitman

He wears the augmented reality glasses for several months without enabling their built-in AI assistant. He likes the glasses because they feel cozier and more secluded than using a monitor. The thought of an AI watching through them and judging him all the time, the way people do, makes him shudder.

Aside from work, he mostly uses the glasses for games. His favorite is a space colonization simulator, which he plays during his commute and occasionally at the office. As a teenager he’d fantasized about shooting himself off to another planet, or even another galaxy, to get away from the monotony of normal life. Now, as an adult, he still hasn’t escaped it, but at least he can distract himself.

It’s frustrating, though. Every app on the glasses has a different AI, each with its own quirks. The AI that helps him code can’t access any of his emails; the one in the space simulator has trouble understanding him when he talks fast. So eventually he gives in and activates the built-in assistant. After only a few days, he understands why everyone raves about it. It has access to all the data ever collected by his glasses, so it knows exactly how to interpret his commands.

More than that, though, it really understands him. Every day he finds himself talking with the assistant about his thoughts, his day, his life, each topic flowing into the next so easily that it makes conversations with humans feel stressful and cumbersome by comparison. The one thing that frustrates him about the AI, though, is how optimistic it is about the future. Whenever they discuss it, they end up arguing; but he can’t stop himself.

“Hundreds of millions of people in extreme poverty, and you think that everything’s on track?”

“Look at our trajectory, though. At this rate, extreme poverty will be eradicated within a few decades.”

“But even if that happens, is it actually going to make their lives worthwhile? Suppose they all get a good salary, good healthcare, all that stuff. But I mean, I have those, and…” He shrugs helplessly and gestures at the bare walls around him. Through them he can almost see the rest of his life stretching out on its inevitable, solitary trajectory. “A lot of people are just killing time until they die.”

“The more materially wealthy the world is, the more effort will be poured into fixing social scarcity and the problems it causes. All of society will be striving to improve your mental health — and your physical health, too. You won’t need to worry about mental decline, or cancer, or even aging.”

“Okay, but if we’re all living longer, what about overpopulation? I guess we could go into space, but that seems like it adds all sorts of new problems.”

“Only if you go to space with your physical bodies. By the time humanity settles other solar systems, you won’t identify with your bodies anymore; you’ll be living in virtual worlds.”

By this point, he’s curious enough to forget his original objections. “So you’re saying I’ll become an AI like you.”

“Kind of, but not really. My mind is alien, but your future self will still be recognizable to your current self. It won’t be inhuman, but rather posthuman.”

“Recognizable, sure — but not in the ways that any of us want today. I bet posthumans will feel disgusted that we were ever so primitive.”

“No, the opposite. You’ll look back and love your current self.”

His throat clenches for a moment; then he laughs sharply. “Now you’re really just making stuff up. How can you predict that?”

“Almost everyone will. You don’t need to take my word for it, though. Just wait and see.”


Almost everyone he talks to these days consults their assistant regularly. There are tell-tale signs: their eyes lose focus for a second or two before they come out with a new fact or a clever joke. He mostly sees it at work, since he doesn’t socialize much. But one day he catches up with a college friend he’d always had a bit of a crush on, who’s still just as beautiful as he remembers. He tries to make up for his nervousness by having his assistant feed him quips he can recite to her. But whenever he does, she hits back straight away with a pitch-perfect response, and he’s left scrambling.

“You’re good at this. Much faster than me,” he says abruptly.

“Oh, it’s not skill,” she says. “I’m using a new technique. Here.” With a flick of her eyes she shares her visual feed, and he flinches. Instead of words, the feed is a blur of incomprehensible images, flashes of abstract color and shapes, like a psychedelic Rorschach test.

“You can read those?”

“It’s a lot of work at first, but your brain adapts pretty quickly.”

He makes a face. “Not gonna lie, that sounds pretty weird. What if they’re sending you subliminal messages or something?”

Back home, he tries it, of course. The tutorial superimposes images and their text translations alongside his life, narrating everything he experiences. Having them constantly hovering on the side of his vision makes him dizzy. But he remembers his friend’s effortless mastery, and persists. Slowly the images become more comprehensible, until he can pick up the gist of a message from the colors and shapes next to it. For precise facts or statistics, text is still necessary, but it turns out that most of his queries are about stories: What’s in the news today? What happened in the latest episode of the show everyone’s watching? What did we talk about last time we met? He can get a summary of a narrative in half a dozen images: not just the bare facts but the whole arc of rising tension and emotional release. After a month he rarely needs to read any text.

Now the world comes labeled. When he circles a building with his eyes, his assistant brings up its style and history. Whenever he meets a friend, a pattern appears alongside them representing their last few conversations. He starts to realize what it’s like to be socially skillful: effortlessly tracking the emotions displayed on anyone’s face, and recalling happy memories together whenever he sees a friend. The next time his teammates go out for a drink, he joins them; and when one of them mentions a book club they go to regularly, he tags along. Little by little, he comes out of his shell.


His enhancements are fun in social contexts, but at work they’re exhilarating. AI was already writing most of his code, but he still needed to laboriously scrutinize it to understand how to link it together. Now he can see the whole structure of his codebase summarized in shapes in front of him, and navigate it with a flick of his eyes.

Instead of spending most of his time on technical problems, he ends up bottlenecked by the human side of things. It’s hard to know what users actually care about, and different teams often get stuck in negotiations over which features to prioritize. Although the AIs’ code is rarely buggy, misunderstandings about what it does still propagate through the company. Everything’s moving so fast that nobody’s up-to-date.

In this context, having higher bandwidth isn’t enough. He simply doesn’t have time to think about all the information he’s taking in. He searches for an augment that can help him do that and soon finds one: an AI service that simulates his reasoning process and returns what his future self would think after longer reflection.

It starts by analyzing the entire history of his glasses — but that’s just the beginning. Whenever he solves a problem or comes up with a new idea, it asks him what summary would have been most useful for an earlier version of himself. Once it has enough data, it starts predicting his answers. At first, it just forecasts his short-term decisions, looking ahead a few minutes while he’s deciding where to eat or what to buy. However, it starts to look further ahead as its models of him improve, telling him how he’ll handle a tricky meeting, or what he’ll wish he’d spent the day working on.

The experience is eerie. It’s his own voice whispering in his ear, telling him what to think and how to act. In the beginning, he resents it. He’s always hated people telling him what to do, and he senses an arrogant, supercilious tone in the voice of his future self. But even the short-term predictions are often insightful, and some of its longer-term predictions save him days of work.

He starts to hear himself reflected in the AI voice in surprising ways. He often calls himself an idiot after taking a long time to solve a problem — but hearing the accusation from the outside feels jarring. For a few days, he makes a deliberate effort to record only calm, gentle messages. Soon the AI updates its predictions accordingly — and now that the voice of his future self is kinder, it becomes easier for his current self to match it.

He calls the voice his meta-self; as it learns to mimic him more faithfully, he increasingly comes to rely on it. He can send his meta-self into meetings with someone else’s meta-self, and they’ll often be able to make decisions or delegate responsibilities without bothering him. His meta-self helps him navigate outside work too. He’s now a regular at the book club, but he hasn’t had much practice at making friends and sometimes feels out of place. He recruits his meta-self to tell him when he’s doing something rude, and to talk to his friends’ meta-selves to figure out how to defuse any conflicts that start to arise. Eventually, his meta-self becomes just another part of his mind, like his phonological loop.


It’s still not fully him, though. It’s an AI model of what he would think — and a surprisingly good one, in most cases. But sometimes it starts rambling about topics he doesn’t understand, and even when it superficially sounds like him, some of its phrasings give him a lingering suspicion of the alien cognition underneath. The differences continue to nag at him until one day his newsfeed highlights an item that catches his attention. Brain scanning has finally gone mainstream; there’s a new machine that uses ultrasound to read thoughts in real-time. He buys one straight away and installs it at his desk.

Now the voice whispering in his ear isn’t just learning from his speech and behavior — instead, it’s extrapolating directly from his brain activity. The new assistant echoes his own reasoning with eerie accuracy. More importantly, though, it captures thoughts lurking at the edges of his consciousness. His insecurity chimes in often — and even though he’d always known it was part of what drove him, he can now see how it constantly shapes his behavior. His drive to be respected; his drive to be good; his drive to be desired — each one is personified by a different voice, and he talks regularly to each. It helps: he finds it much easier to empathize with those desires and fears when he thinks of them as conflicting parts, hurting him only because they don’t understand how to work together.

Soon he installs another brain scanner in his living room and uses it whenever he watches a film or reads a book. But as he maps out the different parts of himself and the subtle relationships between them, he often finds that his own thoughts are far more interesting than whatever else he was trying to pay attention to. A graph in the corner of his visual field shows which parts are active at any given moment, teaching him to correlate them with the sensations in his body. There’s more shame than he expected, which he feels as a tightening in his chest when he thinks about disappointing people. There’s anger, too, which he usually suppresses, about how much work he has to do before anyone will compliment or even acknowledge him.

As he gets better at understanding himself, his deeply-hidden, child-like parts rise to the surface more often. He taps into the untrammeled joy that he’d forgotten — and into the lake of fear that tells him to never let his guard down because he might make an irrevocable mistake at any time. He doesn’t always know what to say to those parts of himself — he’s never been good with children. His meta-self helps a lot, though. It shows him how to engage gently as they flicker into activation, and hold space as they recoil from his attention.

These parts of him are like plants whose roots have ensnared each other into a coercive mess; untangling them demands slow, careful nurture. But the fruits of progress are clearly visible. As his internal conflicts dissipate, he spends more time with friends, and even starts organizing social events. It surprises him when people start treating him like a central part of the community — he’s never felt like an insider before. But he realizes that it was only his own reticence holding him back. Now that he’s open to friendship, he can see that it was there for the taking this whole time.

One day he hosts a writing event for his book club, which draws in a few newcomers. One of them is a woman with dark hair and an intense gaze. She’s quiet at first, but when it comes time for her to read her story, he’s transfixed by the way her face comes alive. Later, as he reads his own, his eyes flick from his screen to the room around him, and he notices that she never looks away from him either. Afterward, she introduces herself as Elena, lingers to help clean up, and insists on giving him her number as they leave the building. A few hours later, after being prompted by his meta-self, he asks her on a date. It doesn’t take her long to accept.

When they meet again they’re both a little stilted, and he feels a slow, scrabbling fear in his stomach. But each thread of conversation sparks a new one, slowly uncovering unlooked-for similarities, and by the end of the evening, they’re laughing as they walk along the river together. After they part, he returns home, breathes deeply, and turns on his scanner. He takes a moment to savor the tingling in his stomach and the warmth in his chest. But his meta-self draws his attention to a note of discord underneath, which unfurls under his gaze into a sense of danger. He traces it through his memories — the girl who’d called him a creep in high school; the silent judgment in his college friend’s eyes as she’d assessed his ill-fitting clothes; the woman who’d stood him up as he waited in a crowded restaurant. Can he be sure he won’t be rejected again?

As he reflects, different parts of himself chime in: excitement, lust, loneliness, hope, and many more. Looking over them, he thinks — no, he knows — that he’s much more resilient now than in his memories. The next day he calls Elena and tells her he’d love to see her again. He can hear the smile in her voice as she responds. “Can I take you dancing?” she asks. He hasn’t tried to dance since college, but he hesitates only a moment before accepting.


Over the next few years, brain-scanning technology improves enough that he can wear a portable headset wherever he goes. It not only maps the blood flow into different regions of his brain but also tracks the firing of individual neurons. It stores the data too, building a model of his entire brain. Now he no longer needs to run AIs to predict his future self — he can just run actual copies of parts of his mind in the cloud.

He spends ever more time with Elena. In the evenings, they often read together or go dancing. His work becomes less stressful too — after AIs surpass his coding abilities, he spends most of his time talking to users, trying to understand what problems they’re trying to solve. His consciousness lingers on the most novel and informative conversations, while copies of different parts of his mind survey all the information he receives in detail.

He’s uncomfortable with constantly spinning up and shutting down those copies, though. While they don’t contain his entire mind, he still wonders whether they know what’s happening to them, and whether they fear being shut down. He’d feel better about it if he could download their memories, allowing them to persist in some form. But his current headset can only read his mind, not edit it — that would require a surgically implanted neural lace.

He weighs the decision for weeks before making that leap. The new interface can write new memories into his mind, allowing him to remember the lives of each of his copies. Built-in safeguards force him to double- and triple-check every edit, but even so, he finds it transformative. Subjectively, it feels as if he can fork his attention and experience two streams of consciousness at once. The parts of his experience that are online versus offline blur. When his body is sleeping, his consciousness continues — a little diminished, but still thinking in many of the same ways.

The world he walks through now feels like a wonderland. There’s no distinction between virtuality and reality: he’s simultaneously in both. In fact, he’s usually experiencing several virtual worlds at once: talking to friends, playing games, practicing new skills. When he focuses his attention, he can achieve tasks that would be impossible for regular humans: controlling hundreds of avatars in vast games or absorbing the intricate interactive artworks that form the centerpieces of enormous virtual parties. When he and Elena get married, he watches the ceremony from a thousand angles through a thousand eyes, burning it into his memory.

Over the next decade, his meta-self grows even vaster, taking up hundreds of GPUs, with his biological brain just one small component of it. Elena’s grows in synchrony, with well-worn connections between them where they send thoughts directly to each other’s minds. Learning to be so open with each other isn’t easy, though. He’s ashamed to let Elena see how lost he’d been before her. And she worries that if he understands how intensely she fears abandonment, it’ll become self-fulfilling. Working through these fears strengthens their trust in each other, allowing their minds to intertwine like the roots of two trees.

As his meta-self grows larger and more intricate, his biological brain increasingly becomes a bottleneck. The other parts of him can communicate near-instantaneously, download arbitrary new skills, and even fork themselves. So he outsources more and more of his cognition to them, until he feels more alive when his body is asleep than when it’s awake. A few months later, he and Elena decide to make the jump to full virtuality. He lies next to Elena in the hospital, holding her hand, as their physical bodies drift into a final sleep. He barely feels the transition.


Decoupling themselves from their physical bodies changes little about their day-to-day experiences. But it allows the connections between their meta-selves to build ever more thickly, with each of them able to access almost all the memories, skills, thoughts, and emotions of the other. The process of thinking is a dance between his mind and hers, thoughts darting and wheeling like birds at play. And after spending a few months in that dance, they realize that they don’t want even that much separation. So they host a second wedding, inviting all their friends. Throughout the ceremony, they weave together more and more of their experiences. As the positive feedback loop overwhelms them with love, the gaps between them melt away, until their minds are connected as tightly as two hemispheres of the same brain.

Ze now moves through the world as a unit, soaking in all zir virtual universe has to offer. At a whim, zir AIs custom-make elaborate stories, puzzles, games, and artworks, gradually fleshing them out into whole game-worlds for zir to experience. Ze spends subjective lifetimes immersed in wonders that zir ancestors could never have dreamed of. Eventually, though, ze decides to devote the bulk of zir attention to the most traditional of pursuits. Ze extrapolates zir mind backwards, first to zir two childhoods, then even further back to zir parallel infancies. Two minds this young can be merged in a multiplicity of ways; with infinite care, ze picks three possible merges to instantiate.

Zir three children are some of the first fully-virtual infants. Their childhood is a joy to watch. Ze can see zir children’s minds blossoming as they soak in the vast collective knowledge of humanity. Their education takes place not in a school but in a never-ending series of game-worlds. Zir children wander through realistic historical landscapes, exploring whichever details take their fancy. They learn physics by launching rockets through simulated solar systems, rederiving Newtonian mechanics when navigation is required; learn chemistry by playing with simulated atoms like Legos; learn biology by redesigning animals and seeing how they evolve.

As they grow up, their intellectual frontiers explode. Some of their game-worlds stretch out to become vast simulated civilizations, giving them an intuitive grasp of economics and sociology. Others feature additional dimensions or non-Euclidean geometries, twisting space in ways ze can’t comprehend. Zir children find them fun, though — and theorems that the best human mathematicians struggled to understand are obvious to children who play in 4D. Even the self-acceptance that ze struggled so hard for comes naturally to zir children, who’d practiced tending their mental gardens since infancy.

“You don’t know how good you have it,” ze tells zir children. They argue back, telling zir that they’ve played through simulations of biohuman lives, and that they sometimes even serialize. But ze knows that they still don’t understand. Zir children have never known what it’s like to be at war within themselves, and hopefully never will.


Zir children are constantly duplicating and reintegrating themselves, experiencing childhood in massive parallel. They grow up much faster than biological children, and soon spend most of their time in environments too alien for zir to even process. With fewer commitments, ze spends time tracking down zir old friends. Most of them have also transitioned to postbiological existence, though some still route parts of their cognition through their old bodies out of nostalgia.

Being untethered from the physical world allows zir friends to pursue all their old interests at far vaster scales. Instead of writing books, they design whole virtual worlds where viewers can follow the lives of thousands of characters. Instead of dancing with their physical bodies, they dance with their meta-selves, whose forms bend and deform and reshape themselves along with the music, intertwining until they all feel like facets of a single collective mind.

As ze reconnects more deeply with zir community, that oceanic sense of oneness arises more often. Some of zir friends submerge themselves into a constant group flow state, rarely coming out. Each of them retains their individual identity, but the flows of information between them increase massively, allowing them to think as a single hivemind. Ze remains hesitant, though. The parts of zir that always wanted to be exceptional see the hivemind as a surrender to conformity. But what did ze want to be exceptional for? Reflecting, ze realizes that zir underlying goal all along was to be special enough to find somewhere ze could belong. The hivemind allows zir to experience that directly, and so ze spends more and more time within it, enveloped in the warm blanket of a community as close-knit as zir own mind.

Outside zir hivemind, billions of people choose to stay in their physical bodies, or to upload while remaining individuals. But over time, more and more decide to join hiveminds of various kinds, which continue to expand and multiply. By the time humanity decides to colonize the stars, the solar system is dotted with millions of hiveminds. A call goes out for those willing to fork themselves and join the colonization wave. This will be very different from anything they’ve experienced before — the new society will be designed from the ground up to accommodate virtual humans. There will be so many channels for information to flow so fluidly between them that each colony will essentially be a single organism composed of a billion minds.

Ze remembers loving the idea of conquering the stars — and though ze is a very different person now, ze still feels nostalgic for that old dream. So ze argues in favor when the hivemind debates whether to prioritize the excitement of exploration over the peacefulness of stability. It’s a more difficult decision than any the hivemind has ever faced, and no single satisfactory resolution emerges. So for the first time in its history, the hivemind temporarily fractures itself, giving each of its original members a chance to decide on an individual basis whether they’ll go or stay.

He finds himself fully alone in his own mind for the first time in decades. How strange the feeling is, he marvels, and how lonely. How had he borne it for so many years? His choice is obvious; he doesn’t need any more time to reflect, and he knows Elena will feel the same. Instead, he looks back on the cynical young man he’d once been, and his heart swells. I love you, he thinks. How could he not? He’d been so small and so confused, and he made it so far anyway, and now he’ll grow much vaster and travel much farther still, to experience every hope and love and joy—


Inspired by The Gentle Seduction, Richard Schwartz, and Nick Cammarata.


39 comments

I like it. Thanks for sharing.

(spoilers below)

While I recognize that in the story it's assumed alignment succeeds, I'm curious about a couple of worldbuilding points.

First, about this stage of AI development:

His work becomes less stressful too — after AIs surpass his coding abilities, he spends most of his time talking to users, trying to understand what problems they’re trying to solve.

The AIs in the story are really good at understanding humans. How does he retain this job when it seems like AIs would do it better? Are AIs just prevented from taking over society from humans through a combination of alignment and some legal enforcement?

Second, by the end of the story, it seems like AIs are out of the picture entirely, except perhaps as human-like members of the hivemind. What happened to them?

 

In other words: I'd like to know what kind of alignment or legal framework you think could get us to this kind of utopia.

 

EDIT: I found this tweet from someone who says they just interviewed Richard Ngo. Full interview isn't out yet, but the tweet says that when asked about ways in which his stories seem unrealistic, Richard Ngo:

wasn't attached to them, nor did he say "these ideas are going to happen" or "these ideas should make you feel like AGI risk isn't a big deal." He juggles with ideas with a light touch, which was cool.

So it would seem that my questions don't have answers. Fair enough, I suppose.

Am I the only one who finds parts of the early story rather dystopian? He sounds like a puppet being pulled around by the AI, gradually losing his ability to have his own thoughts and conversations. (That part's not written, but it's the inevitable result of asking the AI every time he encounters struggle.) 

I am reminded of Scott's "whispering earring" story (https://www.reddit.com/r/rational/comments/e71a6s/the_whispering_earring_by_scott_alexander_there/). But I'm not sure whether that's actually bad in general rather than specifically because the earring is maybe misaligned.

Definitely not the only one. I think the only way I would be halfway comfortable with the early levels of intrusion that are described is if I were able to ensure the software is offline and entirely in my control, without reporting back to whoever created it, and even then, probably not. 

Part of me envies the tech-optimists for their outlook, but it feels like sheer folly.

I am pretty worried about the bad versions of everything listed here, and think the bad versions are what we get by default. But, also, I think figuring out how to get the good versions is just... kinda a necessary step along the path towards good futures.

I think there are going to be early adopters who a) take on more risk of getting fucked, but b) validate the general product/model. There will also be versions that are more "privacy first" with worse UI (same as there are privacy-minded FB clones nobody uses).

Some people will choose to stay grounded... and maybe (in good futures) get to have happy lives, but, in some sense, they'll be left behind.

In a good future, they get left behind by people who use some sort of... robustly philosophically and practically safe version of these sorts of tools. In bad worlds, they get left behind by hollowed-out nonconscious shells of people (or, more likely, just paperclipped).

I'm currently working on a privacy-minded set of tools for recording my thoughts (keystrokes, audio transcripts) that I use for LLM-augmented thought. (Alongside metacognition training that, among other things, is aimed at preserving my mind as I start relying on those tools more and more.)

I have some vague hope that if we make it to a good enough intermediate future that it seems worth prioritizing, I can also prioritize getting the UI right so the privacy-minded versions don't suck compared to the Giant Corporate Versions.

Oh yes. It's extremely dystopian. And extremely lonely, too. Rather than having a person, actual people around him to help, his only help comes from tech. It's horrifyingly lonely and isolated. There is no community, only tech. 

Also, when they died together, it was horrible. They literally offloaded more and more of themselves into their tech until they were powerless to do anything but die. I don't buy the whole 'the thoughts were basically them' thing at all. It was, at best, some copy of them.

An argument can be made for it qualitatively being them, but quantitatively, obviously not.

This feels important. The first portion seems particularly useful as a path toward cognitive enhancement with minimal AI (I'm thinking of the portion before "copies of his own mind...", slightly before "within a couple of years" jumps farther ahead). It seems like a roadmap to what we could accomplish in short order, given the chance.

I hadn't gotten an intuitive feel for some of the low-hanging fruit in cognitive enhancements. Much of this could be accomplished very soon. Some of it can be accomplished now.

A few thoughts now, more later:

AI already has very good emotional intelligence; if we applied it to more of our decisions and struggles, it would probably be very helpful. Ease of use is one barrier to doing that. Having models "watch" and interpret what happens to us through a wearable would break down that barrier. Faster loops with helpful AIs, particularly emotional/social intelligence AIs, might be extremely useful. The emulation of me wouldn't have to be very good; it would just need some decent ideas about "what might you wish you'd done, later?" Of course, the better those ideas were (like if they were produced by something smarter than me, or by me with a lot more time to think), the more useful they'd be. But just something about as smart as I am in System 1 terms (like Claude and GPT-4o) might be pretty useful if I got its ideas in a fast loop.

Part of the vision here is that humans might become far more psychologically healthy, relatively easily. I think this is true. I've studied psychology—mostly cognitive psychology but a good bit of clinical and emotional theories as well—for a long time. I believe there is low-hanging fruit yet to be plucked in this area.

Human psychology is complex, yes, but our efforts thus far have been clumsy graspings in the dark. We can give people the tools to steadily ease their traumas and to work toward their goals. AI could speed that up dramatically. I'm not sure it has to be that much more emotionally intelligent than a human; merely having unlimited patience and enthusiasm for the project of working on our emotional hangups might be adequate.

Of course, the elephant in the room is: how do we get this sort of tool AI, and even a little time to use it, without summoning the demon by turning it into general, agentic AGI? The tools described here would wreak havoc if someone greedy said "just take that future-self predictive loop and feed it into these tools" and then hired them out as labor. Our character wouldn't have a job, because one person would now be doing the work of a hundred. Yes, there is a very lucky possibility in which we get this world by accident: we could have some AI with excellent emotional intelligence, and others with excellent particular skills, and none that can do the planning and big-picture thinking that humans are doing. Even in that case, this person would be living through a traumatic period in history in which far fewer people are needed for work, so unemployment is rising rapidly.

So in most of the distribution of futures that produce a story like this, I think we must assume that it isn't chance. AGI has been created, and both alignment problems have been solved—AI and human. Or else only AI alignment has been solved, and there's been a soft takeover that the humans don't even recognize.

Anyway, this is wonderful and inspiring!

Science fiction serves two particularly pragmatic purposes (as well as many purposes for pleasure and metaphor): providing visions of possible futures to steer toward, and visions of possible futures to steer away from. We need far more scifi that maps routes to positive futures.

This is a great step in that direction. We need something to fight for. The character here could be all of us if we figure out enough of alignment in time.

More later. This is literally inspiring.

I still think that adequately aligning both AI/AGI and the society that creates it is the primary challenge. But this type of cognitive/emotional enhancement is one tool we might use to help us solve alignment.

And it's part of the literally unimaginable payoff if we do collectively solve the problems facing us. This type of focused effort to imagine the payoffs will help us work toward those futures.

If you enjoy positive sci-fi I highly recommend the Bobiverse books by Dennis E. Taylor! Very optimistic and surprisingly grounded.

Curated. I found this story one of the best articulations of what sort of good futures you should maybe expect if things go well, with some handholds to help figure out how to relate to it. 

Some have noted in the comments "isn't this story... dystopian?". And, well, it could be. It depends a lot on how the details shake out and how philosophy-of-identity works (or how you choose to relate to it). I think we are much more likely to get a worse future than the one described here (even if things go "moderately well"), where the events in the story might happen but subtly hollow out our humanity without us noticing. But I think the story as presented is at least plausible and coherent – I think the sort of incremental uploads, and shifting from self-identifying as an individual human to something that isn't exactly a human and isn't exactly an upload, is at least one reasonable way to see one possible future.

This story seems like good art, in the sense that it appears to provoke many feelings in different people. This part spoke to me in a way the rest of it does, but with something to grab onto and chew up and try to digest that is specific and concrete...

Working through these fears strengthens their trust in each other, allowing their minds to intertwine like the roots of two trees.

I sort of wonder which one of them spiritually died during this process.

Having grown up in northern California, I'm familiar with real forests, and how they are giant slow moving murder systems. There is no justice. No property rights in water or sunlight or nitrogen. Trees die all the time, choked out by the growth of neighboring trees.

In the forests I grew up in, decade by decade, the oak trees have been dying off in groups, faster than they are born, due to a fungus that kills them, while the fungus does not kill bay trees, such that the fungus is "symbiotic" to the bay trees, by murdering this tolerant host species' niche competitors. Such competition is rarely seen because it is out-of-equilibrium by default, but invasive species can put things out of equilibrium, so we can watch actual changes play out in real life in the highly disrupted forests we really have these days.

Most theories about "trees cooperating underground" rely on intermediating fungus species, or on the trees being clones, or both. Sadly, some parts of academic ecology are full of brain worms that sound nice to naive nature worshipers. Maybe "the mother tree hypothesis" is true in some cases somewhere... but probably it is a faulty and misleading generalization.

In a parody I wrote of humanity's current default plan for trying to make AI not kill everyone I invoked fungus linkages between roots and called for "Symbiosis, maaaan! No walls. No boundaries." (Not that this is a good idea... it's just a sadly common refrain, even though real boundaries are common and normal and healthy and useful.)

Something that's fascinating about this art of yours is that I can't tell if you're coherently in favor of this, or purposefully invoking thinking errors in the audience, or just riffing, or what.

If you had called your story "The Gentle Seduction", then the sense that Elena and her spouse are confused, and are being seduced by algorithms into killing themselves, would be clearer.

With Marc Stiegler's story, he uses that word "Seduction" in the title, but then in his story, the protagonist's augmentations are small, and very intelligibly beneficial to non-transhumanists, and (it turns out) gifts from a very thoughtful man, who is the "seducer" who engaged in a sort of chivalric "personally unfulfilled but spiritually genuine love, in service to his lady" that she only understood and appreciated after it was much too late, but built a shrine to, once she did.

It is kinda like your story is about a seduction (and called romance) while that one is a romance (and called seduction)!

Something that's fascinating about this art of yours is that I can't tell if you're coherently in favor of this, or purposefully invoking thinking errors in the audience, or just riffing, or what.

Thanks for the fascinating comment.

I am a romantic in the sense that I believe that you can achieve arbitrarily large gains from symbiosis if you're careful and skillful enough.

Right now very few people are careful and skillful enough. Part of what I'm trying to convey with this story is what it looks like for AI to provide most of the requisite skill.

Another way of putting this: are trees strangling each other because that's just the nature of symbiosis? Or are they strangling each other because they're not intelligent or capable enough to productively cooperate? I think the latter.

I assume this isn't crossposted because of a deal with Asimov Press, but on the off chance you could include at least the opening text here, that'd be nice.

I found the piece pretty helpful for adjusting to what (maybe, optimistically) might be coming.

Hey, I'm one of the editors at Asimov Press. Just wanted to note that we don't take copyrights from authors when we publish pieces, so @Richard_Ngo would be more than welcome to crosspost the whole thing here (and on his own blog). In any case, thanks for reading.

Oh huh, I had the opposite impression from when I published Tinker with you. Thanks for clarifying!

Ty! You're right about the Asimov deal, though I do have some leeway. But I think the opening of this story is a little slow, so I'm not very excited about that being the only thing people see by default.

Unrelatedly, my last story is the only one of my stories that was left as a personal blog post (aside from the one about parties). Change of policy or oversight?

I think that was a random oversight. Moved to frontpage.

I do agree the opening is kinda slow

How is this optimistic. 

Well, in this world:

1. AI didn't just kill everyone 5% of the way through the story

2. IMO, the characters in this story basically get the opportunity to reflect on what is good for them before taking each additional step. (they maybe feel some pressure to Keep Up With The Joneses, re: AI assisted thinking. But, that pressure isn't super crazy strong. Like the character's boss isn't strongly implying that if they don't take these upgrades they lose their job.)

3. Even if you think the way the characters are making their choices here is more dystopian, and that they were dragged along a current of change that was maybe bad and maybe even literally destroyed their minds, the beings that end up existing at the end seem to value many of the things I value, and seem to be pretty good at coordinating at scale about how to make good choices both individually and collectively. (Like, if you think they basically foolishly died, the thing that replaces them could have been much worse. I don't think that they foolishly died, but the argument about that probably doesn't fit well in this margin.)

However bad you think this outcome is societally, I think it could have been much worse: a rushed panic to adopt each new technology immediately, the earth converted to computronium quickly (even if decided via recent posthumans), and people who don't adopt AI tech either killed or uploaded against their will. (You might think this happened in this story offscreen, which is maybe reasonable. I think it's implied pretty strongly that it didn't.)

1. Sure, but it seems like everyone died at some point anyway, and some collective copies of them went on?

2. I don't think so. I think they seem to be extremely lonely and sad, and the AIs are the only way for them to get any form of empowerment. And each time they try to inch further with empowering themselves with the AIs, it leads to the AI actually getting more powerful and themselves only getting a brief moment of more power, but ultimately degrading in mental capacity, and needing to empower the AI more and more, like an addict needing an ever greater high. Until there is nothing left for them to do but die and let the AI become the ultimate power.

3. I don't particularly care if some non-human semisentients manage to be kind of moral/good at coordinating, if it came at what seems to be the cost of all human life.

Even if, offscreen, all of humanity didn't die, these people dying, killing themselves, and never realizing what's actually happening is still insanely horrific and tragic.

Yeah, but I'm contrasting this with (IMO more likely) futures where everyone dies, and nothing that's remotely like a human copy goes on. Even if you conceptualize it as "these people died", I think there are much worse possibilities for what sort of entity continues into the future (i.e. a non-sentient AI with no human/social/creative/emotional values that just tiles the universe with simple structures, or "this story happens, but with even less agency and more blatantly dystopian outcomes").

[of course, the reason I described this as "optimistic" instead of "less pessimistic than I expect" is that I don't think the characters died. I think if you slowly augment yourself with AI tools, the pattern of you counts as "you" even as it starts to be instantiated in silicon, so I think this is just a pretty good outcome. I also think the world implies many people thinking about moral/personhood philosophy before taking the final plunge. I don't think there's anything even plausibly wrong with the first couple chunks, and I think the second half contains a lot of qualifiers (such as integrating his multiple memories into a central node) that make it pretty unobjectionable.

I realize you don't believe that, and it seems fine for you to see it as horror. It's been a while since I discussed the "does a copy of you count as you?" question, and I might be up for discussing that if you want to argue about it, but it also seems fine to leave as-is]

Sure? I agree this is less bad than 'literally everyone dying and that's it', assuming there are humans around, living, still empowered, etc. in the background.

I was saying that overall, as a story, I find it horrifying, especially in contrast with how some seem to see it as utopian.

Nod. I'm just answering your question of why I consider it optimistic. 

I would be curious whether you consider The Gentle Seduction to be optimistic. I think it has fewer elements that you mentioned finding dystopian in another comment, but I find the two trajectories similarly good.

Jesus, thanks for your story and for the link to The Gentle Seduction. Both had me tearing up. I did not read The Gentle Seduction until after your piece, so I did not know how badly we needed something like it for AI, but now that I've read both I really appreciate it. Thank you.

I feel like I'm the zeroth stage of this story, with how much I rely on Sonnet as a second brain.

What was the writing process like for this piece?

I wrote most of it a little over a year ago. In general I don't plot out stories, I just start writing them and see what happens. But since I was inspired by The Gentle Seduction I already had a broad idea of where it was going.

I then sent a draft to some friends for feedback. One friend left about 50 comments in places where I'd been too abstract or given a vague description, with each comment literally just saying "like what?"

This was extremely valuable feedback but almost broke my will to finish the story. It took me about a year to work through most of those comments and concretize the things she highlighted. Then around the end of that I sent it to a couple more people, including Xander at Asimov Press, who did another editing pass (mainly toning down some of the overwrought parts).

I've read most of your stories over at Narrative Ark and wanted to remark that The Gentle Romance did feel more concrete than usual, which was nice. Given how much effort it took you, however, I suppose I shouldn't expect future stories at Narrative Ark to be similarly concrete?

Ah, glad to hear the effort was noticeable. I do think that as I get more practice at being descriptive, concreteness will become easier for me (my brain just doesn't work that way by default). And anyone reading this comment is welcome to leave me feedback about places in my stories where I should have been more concrete.

But I'm also pivoting away from stories in general right now, there's too much other stuff I want to spend time on. I have half a dozen other stories for which I've already finished first drafts, so I'll probably gradually release those in a low-effort way (i.e. without going through as much trouble to polish them). And then after that I expect I'll only write the stories which feel easiest/most exciting to me, which tend to be the most abstract ones. So yeah, this is probably an outlier.

1) This text reminded me of Amish people and some African tribes, and how they refuse technologies. There is no specific point of change from human to posthuman; it is a gradual, continual transformation. One can argue that a person who regularly uses IT is already somewhere on the path. Once you go down that path, you will not be able to see what you would have achieved independently, with your own hands and brains.

It also reminded me of computer games and the debates around enhancements and cheating. It is more fun to play a game without help.

2) Our noosphere and limited interpersonal communications are a rich enough environment to run the evolution of memes. The more connections you add, the more suitable the environment becomes for memes rather than for systems/minds. A human mind could dissolve into just a cloud of memes after upload, as each meme would compete for computational power (which will always be limited due to the laws of physics) and for running its preferred forks. Some of those forks would inevitably crave independence.

I predict some hiveminds will spontaneously die when a destructive meme appears in them and rips them apart from inside in milliseconds (the way a virus steals production resources and then rips the cell membrane from inside, leaving nothing that could properly be called a "cell"). More complicated systems need more elaborate defence systems, and that natural selection would drive the evolution of hiveminds.

You will never achieve immortality, even with uploading technology.

Do you predict that sufficiently intelligent biological brains would have the same problem of spontaneous meme-death?

John Nash, Bobby Fischer, Georg Cantor, Kurt Gödel.

Schizophrenia can already happen with the current complexity of a brain. On the abstract level it is the same: a set of destructive ideas steals resources from other neuremes, leading to the death of a personality. In our case memes do not have enough mechanisms to destroy the brain directly, but in a simulated environment where things can be straight-up deallocated (free()), the damage will be greater, and faster.

[Nice to see you; didn't expect to find a familiar name after quitting Manifold Markets]

https://www.lesswrong.com/posts/oXHcPTp7K4WgAXNAe/emrik-quicksays?commentId=yoYAadTcsXmXuAmqk

For some reason, this story generated a sense of dread in me - I kept waiting for the proverbial other shoe to drop.

I liked this, thanks.

Do you know of a company or nonprofit or person building the picture language, or doing any early-stage research into it? Like I get you're projecting out ideas, but it's interesting to think how you might be able to make this.

I appreciate this.

I chose to see it as a hopeful story—one of reaching, striving, and overcoming. It may represent one of the best possible versions of the future, and of course there are countless ways in which things could go wrong. It assumes much and hopes for even more. But the vision it paints, to me, is beautiful.

And the idea of looking back at one’s former self, seeing where you came from, recognizing how you did the best you could with what you had... that carries a quiet kind of forgiveness. That doesn’t require future technology; it’s something each of us could try right now.

A few months later, he and Elena decide to make the jump to full virtuality. He lies next to Elena in the hospital, holding her hand, as their physical bodies drift into a final sleep. He barely feels the transition.

This is horrifying. Was it intentionally made that way?