Followup to: Nonsentient Optimizers, Can't Unbirth a Child

From Consider Phlebas by Iain M. Banks:

    In practice as well as theory the Culture was beyond considerations of wealth or empire.  The very concept of money—regarded by the Culture as a crude, over-complicated and inefficient form of rationing—was irrelevant within the society itself, where the capacity of its means of production ubiquitously and comprehensively exceeded every reasonable (and in some cases, perhaps, unreasonable) demand its not unimaginative citizens could make.  These demands were satisfied, with one exception, from within the Culture itself.  Living space was provided in abundance, chiefly on matter-cheap Orbitals; raw material existed in virtually inexhaustible quantities both between the stars and within stellar systems; and energy was, if anything, even more generally available, through fusion, annihilation, the Grid itself, or from stars (taken either indirectly, as radiation absorbed in space, or directly, tapped at the stellar core).  Thus the Culture had no need to colonise, exploit, or enslave.
    The only desire the Culture could not satisfy from within itself was one common to both the descendants of its original human stock and the machines they had (at however great a remove) brought into being: the urge not to feel useless.  The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works; the secular evangelism of the Contact Section, not simply finding, cataloguing, investigating and analysing other, less advanced civilizations but—where the circumstances appeared to Contact to justify so doing—actually interfering (overtly or covertly) in the historical processes of those other cultures.

Raise the subject of science-fictional utopias in front of any halfway sophisticated audience, and someone will mention the Culture.  Which is to say: Iain Banks is the one to beat.

Iain Banks's Culture could be called the apogee of hedonistic low-grade transhumanism.  Its people are beautiful and fair, as pretty as they choose to be.  Their bodies have been reengineered for swift adaptation to different gravities; and also reengineered for greater sexual endurance.  Their brains contain glands that can emit various euphoric drugs on command.  They live, in perfect health, for generally around four hundred years before choosing to die (I don't quite understand why they would, but this is low-grade transhumanism we're talking about).  Their society is around eleven thousand years old, and held together by the Minds, artificial superintelligences decillions of bits big, that run their major ships and population centers.

Consider Phlebas, the first Culture novel, introduces all this from the perspective of an outside agent fighting the Culture—someone convinced that the Culture spells an end to life's meaning.  Banks uses his novels to criticize the Culture along many dimensions, while simultaneously keeping the Culture a well-intentioned society of mostly happy people—an ambivalence which saves the literary quality of his books, avoiding either utopianism or dystopianism.  Banks's books vary widely in quality; I would recommend starting with Player of Games, the quintessential Culture novel, which I would say achieves greatness.

From a fun-theoretic perspective, the Culture and its humaniform citizens have a number of problems, some already covered in this series, some not.

The Culture has deficiencies in High Challenge and Complex Novelty.  There are incredibly complicated games, of course, but these are games—not things with enduring consequences, woven into the story of your life.  Life itself, in the Culture, is neither especially challenging nor especially novel; your future is not an unpredictable thing about which to be curious.

Living By Your Own Strength is not a theme of the Culture.  If you want something, you ask a Mind how to get it; and they will helpfully provide it, rather than saying "No, you figure out how to do it yourself."  The people of the Culture have little use for personal formidability, nor for a wish to become stronger.  To me, the notion of growing in strength seems obvious, and it also seems obvious that the humaniform citizens of the Culture ought to grow into Minds themselves, over time.  But the people of the Culture do not seem to get any smarter as they age; and after four hundred years or so, they displace themselves into a sun.  These two literary points are probably related.

But the Culture's main problem, I would say, is...

...the same as Narnia's main problem, actually.  Bear with me here.

If you read The Lion, the Witch, and the Wardrobe or saw the first Chronicles of Narnia movie, you'll recall—

—I suppose that if you don't want any spoilers, you should stop reading here, but since it's a children's story and based on Christian theology, I don't think I'll be giving away too much by saying—

—that the four human children who are the main characters fight the White Witch and defeat her with the help of the great talking lion Aslan.

Well, to be precise, Aslan defeats the White Witch.

It's never explained why Aslan ever left Narnia a hundred years ago, allowing the White Witch to impose eternal winter and cruel tyranny on the inhabitants.  Kind of an awful thing to do, wouldn't you say?

But once Aslan comes back, he kicks the White Witch out and everything is okay again.  There's no obvious reason why Aslan actually needs the help of four snot-nosed human youngsters.  Aslan could have led the armies.  In fact, Aslan did muster the armies and lead them before the children showed up.  Let's face it, the kids are just along for the ride.

The problem with Narnia... is Aslan.

C. S. Lewis never needed to write Aslan into the story.  The plot makes far more sense without him.  The children could show up in Narnia on their own, and lead the armies on their own.

But is poor Lewis alone to blame?  Narnia was written as a Christian parable, and the Christian religion itself has exactly the same problem.  All Narnia does is project the flaw in a stark, simplified light: this story has an extra lion.

And the problem with the Culture is the Minds.

"Well..." says the transhumanist SF fan, "Iain Banks did portray the Culture's Minds as 'cynical, amoral, and downright sneaky' in their altruistic way; and they do, in his stories, mess around with humans and use them as pawns.  But that is mere fictional evidence.  A better-organized society would have laws against big Minds messing with small ones without consent.  Though if a Mind is truly wise and kind and utilitarian, it should know how to balance possible resentment against other gains, without needing a law.  Anyway, the problem with the Culture is the meddling, not the Minds."

But that's not what I mean.  What I mean is that if you could otherwise live in the same Culture—the same technology, the same lifespan and healthspan, the same wealth, freedom, and opportunity—

"I don't want to live in any version of the Culture.  I don't want to live four hundred years in a biological body with a constant IQ and then die.  Bleah!"

Fine, stipulate that problem solved.  My point is that if you could otherwise get the same quality of life, in the same world, but without any Minds around to usurp the role of main character, wouldn't you prefer—

"What?" cry my transhumanist readers, incensed at this betrayal by one of their own.  "Are you saying that we should never create any minds smarter than human, or keep them under lock and chain?  Just because your soul is so small and mean that you can't bear the thought of anyone else being better than you?"

No, I'm not saying—

"Because that business about our souls shriveling up due to 'loss of meaning' is typical bioconservative neo-Luddite propaganda—"

Invalid argument: the world's greatest fool may say the sun is shining but that doesn't make it dark out.  But in any case, that's not what I'm saying—

"It's a lost cause!  You'll never prevent intelligent life from achieving its destiny!"

Trust me, I—

"And anyway it's a silly question to begin with, because you can't just remove the Minds and keep the same technology, wealth, and society."

So you admit the Culture's Minds are a necessary evil, then.  A price to be paid.

"Wait, I didn't say that -"

And I didn't say all that stuff you're imputing to me!

Ahem.

My model already says we live in a Big World.  In which case there are vast armies of minds out there in the immensity of Existence (not just Possibility) which are far more awesome than myself.  Any shrivelable souls can already go ahead and shrivel.

And I just talked about people growing up into Minds over time, at some eudaimonic rate of intelligence increase.  So clearly I'm not trying to 'prevent intelligent life from achieving its destiny', nor am I trying to enslave all Minds to biological humans scurrying around forever, nor am I etcetera.  (I do wish people wouldn't be quite so fast to assume that I've suddenly turned to the Dark Side—though I suppose, in this day and era, it's never an implausible hypothesis.)

But I've already argued that we need a nonperson predicate—some way of knowing that some computations are definitely not people—to avert an AI from creating sentient simulations in its efforts to model people.

And trying to create a Very Powerful Optimization Process that lacks subjective experience and other aspects of personhood is probably—though I still confess myself somewhat confused on this subject—substantially easier than coming up with a nonperson predicate.

This being the case, there are very strong reasons why a superintelligence should initially be designed to be knowably nonsentient, if at all possible.  Creating a new kind of sentient mind is a huge and non-undoable act.

Now, this doesn't answer the question of whether a nonsentient Friendly superintelligence ought to make itself sentient, or whether an NFSI ought to immediately manufacture sentient Minds first thing in the morning, once it has adequate wisdom to make the decision.

But there is nothing except our own preferences, out of which to construct the Future.  So though this piece of information is not conclusive, nonetheless it is highly informative:

If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?

Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?

Should existing human beings grow up at some eudaimonic rate of intelligence increase, and then eventually decide what sort of galaxy to create, and how to people it?

Or is it better for a nonsentient superintelligence to exercise that decision on our behalf, and start creating new powerful Minds right away?

If we don't have to do it one way or the other—if we have both options—and if there's no particular need for heroic self-sacrifice—then which do you like?

"I don't understand the point to what you're suggesting.  Eventually, the galaxy is going to have Minds in it, right?  We have to find a stable state that allows big Minds and little Minds to coexist.  So what's the point in waiting?"

Well... you could have the humans grow up (at some eudaimonic rate of intelligence increase), and then when new people are created, they might be created as powerful Minds to start with.  Or when you create new minds, they might have a different emotional makeup, which doesn't lead them to feel overshadowed if there are more powerful Minds above them.  But we, as we exist, already created—we might prefer to stay on as the main characters, for now, if given a choice.

"You are showing far too much concern for six billion squishy things who happen to be alive today, out of all the unthinkable vastness of space and time."

The Past contains enough tragedy, and has seen enough sacrifice already, I think.  And I'm not sure that you can cleave off the Future so neatly from the Present.

So I will set out as I mean the future to continue: with concern for the living.

The sound of six billion faces being casually stepped on, does not seem to me like a good beginning.  Even the Future should not be assumed to prefer that another chunk of pain be paid into its price.

So yes, I am concerned for those currently alive, because it is that concern—and not a casual attitude toward the welfare of sentient beings—which I wish to continue into the Future.

And I will not, if at all possible, give any other human being the least cause to think that someone else might spark a better Singularity.  I can make no promises upon the future, but I will at least not close off desirable avenues through my own actions.  I will not, on my own authority, create a sentient superintelligence which may already determine humanity as having passed on the torch.  It is too much to do on my own, and too much harm to do on my own—to amputate someone else's destiny, and steal their main character status.  That is yet another reason not to create a sentient superintelligence to start with.  (And it's part of the logic behind the CEV proposal, which carefully avoids filling in any moral parameters not yet determined.)

But to return finally to the Culture and to Fun Theory:

The Minds in the Culture don't need the humans, and yet the humans need to be needed.

If you're going to have human-level minds with human emotional makeups, they shouldn't be competing on a level playing field with superintelligences.  Either keep the superintelligences off the local playing field, or design the human-level minds with a different emotional makeup.

"The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works," writes Iain Banks.  This indicates a rather unstable moral position.  Either the life the population enjoys is eudaimonic enough to be its own justification, an end rather than a means; or else that life needs to be changed.

When people are in need of rescue, this is a goal of the overriding-static-predicate sort, where you rescue them as fast as possible, and then you're done.  Preventing suffering cannot provide a lasting meaning to life.  What happens when you run out of victims?  If there's nothing more to life than eliminating suffering, you might as well eliminate life and be done.

If the Culture isn't valuable enough for itself, even without its good works—then the Culture might as well not be.  And when the Culture's Minds could do a better job and faster, "good works" can hardly justify the human existences within it.

The human-level people need a destiny to make for themselves, and they need the overshadowing Minds off their playing field while they make it.  Having an external evangelism project, and being given cute little roles that any Mind could do better in a flash, so as to "supply meaning", isn't going to cut it.

That's far from the only thing the Culture is doing wrong, but it's at the top of my list.

 

Part of The Fun Theory Sequence

Next post: "Dunbar's Function"

Previous post: "Can't Unbirth a Child"

67 comments

Point of clarification: if human ascension to Mind status is possible, and speeding that ascension is within the power of the NFSI, how are you avoiding having at least one human mind ascend to main character status well ahead of the rest of the species?

At least one of the current six billion squishy things is going to want to enter the godmode code and ascend immediately, and if not them then one of the other trillions of Earth organisms that could be uplifted. Even if the NFSI limits the rate of ascension to the eudaimonic rate, that will vary between people; given six billion rolls of the dice (and more rolls every day), someone will have the value "really really fast" for his/her personal eudaimonic rate. Anything worth waiting for is worth having right now.

The effect seems like passing the recursive buck a very short distance. Humans create a computer that can but will not make all human efforts superfluous; the computer can and does uplift a human to equal capacities; at least one human can and may make all human efforts superfluous. Perhaps CEV includes something like, "No one should be able to get (much) smarter (much) faster than the rest of us," but restricting your intelligence because I am not ready for anyone that smart is an odd moral stance.

What's interesting to me is how much I believe the original poster has missed about the Culture. Sure, there are features lacking from Culture society, such as its not having many significant challenges for either the humans or the Minds.

What you have gotten incorrect, however, is that the human part of the Culture DOES need the Minds. The main factor behind the collapse of all human civilizations up till now has been the "will to power" of humans who seek glory through military means, and who have ultimately caused the destruction of many cultures (small c) in the past. Human nature being what it is, there will always be those who seek to topple a status quo that benefits everyone in order to set things up to benefit only themselves, or to satisfy their own personal whims, as the case may be. It is my position that human culture NEEDS something like the Minds in order to protect itself from such usurpers and barbarians.

Additionally, it is quite clearly stated throughout the Culture novels that though many of the Minds see humans as being animals, there are, nevertheless, a handful of humans living in the Culture at any one time who are capable of making the right decision every time. The Culture has a great deal of respect for these humans.

Back in the real world, if you ask the question: "why would any sufficiently advanced AI tolerate the existence of humans if not just because it likes them?" The answer to that is Adam Smith. Even in the case of two countries where country A outcompetes country B in productivity at all possible classes of product, it is STILL more efficient to have country B produce a limited number of products and trade them to country A. That is likely to continue to be the case in the future.

The above is only one possible requirement for Minds living with humans. There are other arguments for keeping them around.

The only place where I can agree 100% is asking the question "why don't humans simply become Minds?"

That would seem to me to be a reasonable choice to make on your deathbed.

Even in the case of two countries where country A outcompetes country B in productivity at all possible classes of product, it is STILL more efficient to have country B produce a limited number of products and trade them to country A.

This isn't a matter of productivity. It's a matter of efficiency. If a Mind can construct 1000 widgets or 2000 gadgets in a year, and a human can only construct one widget or one gadget, it would be best for the Minds to produce gadgets and the humans to produce widgets, but if a Mind can construct 1000 widgets or 2000 gadgets with one unit of resources, and a human can only construct one widget or one gadget, you're better off just having the Mind make everything.

Even if they're producing thoughts, the humans are made of matter that can be used to make Minds. If nothing else, you can choose between having more humans or Minds. Two Minds can produce 1000 widgets and 2000 gadgets, but a human and a Mind can only produce 501 widgets and 1000 gadgets.
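The reply's arithmetic can be checked directly. A minimal sketch in Python (the widget/gadget output figures are the thread's own illustrative numbers, not anything from Banks; the fixed resource-budget framing is an assumption for illustration):

```python
# Two ways to spend two units of resources, using the thread's numbers.
MIND_WIDGETS_PER_UNIT = 1000   # a Mind's widget output per resource unit
MIND_GADGETS_PER_UNIT = 2000   # a Mind's gadget output per resource unit
HUMAN_WIDGETS_PER_UNIT = 1     # a human's widget output per resource unit

# Option A: both units become Minds; one makes widgets, one makes gadgets.
option_a = (MIND_WIDGETS_PER_UNIT, MIND_GADGETS_PER_UNIT)

# Option B: one unit becomes a human making widgets; the other becomes a
# Mind that splits its time evenly between widgets and gadgets.
option_b = (MIND_WIDGETS_PER_UNIT // 2 + HUMAN_WIDGETS_PER_UNIT,
            MIND_GADGETS_PER_UNIT // 2)

print(option_a, option_b)  # (1000, 2000) strictly dominates (501, 1000)
```

This is why the comparative-advantage argument stalls here: comparative advantage assumes each party's labor is a fixed endowment that can only be allocated, not converted. Once the humans' matter can itself be turned into more Minds, absolute advantage is all that matters.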

This post seems to be missing one important thing about the Culture universe (unless I missed it): in that universe "high-grade transhumanism", if I understand the term correctly, is possible, and, if anything, common. The Culture is an aberration, one of very few civilizations in that universe which is capable of Sublimation and yet remains in its human form. The only reason for that must be very strong cultural reasons, which are constantly reinforced, because all those who do not agree with them sublimate into incomprehensibility before they can significantly influence anything.

I think sublimation is a big literary dodge of the very problem of recursive self-improvement, and doesn't make much sense, neither as a plot device nor as an explanation.

It explains why Minds are so quirky (Look to Windward)

But it's oddly religious for a fully-paid-up atheist. Reminds me of something.....

But it's oddly religious for a fully-paid-up atheist. Reminds me of something.....

Zing.

EDIT:

It explains why Minds are so quirky (Look to Windward)

I can't get at my copy of LTW. What is the explanation? That they have to be quirky to avoid realizing that Sublimation is clearly an awesome idea and they should do it? (I assume if it was a plot point I would remember it, but rot13 might be a good idea for anyone answering, just to be on the safe side.)

The idea is that well-balanced Minds sublimate spontaneously as soon as they are booted up.

Ah, I was misremembering that as only perfect abstract reasoners, free of any cultural mores.

Why yes, yes it is. That would be because Banks can't actually write superintelligences, having only a human brain to run them on.

Oh, and it lets him add the sci-fi cliches of Precursors and Ascend to a Higher Plane of Existence while keeping it reasonably original and vaguely realistic, at least until The Hydrogen Sonata. And it clears away all the near-omnipotent Elder Civilizations that should be hanging around.

For me it just spoiled the whole thing. Banks should've kept to his original design, where he'd never thought of RSI, and hence it's neither mentioned nor handwaved away, and the Culture and Idirans were both doing their best. I can suspend my disbelief for that universe just like I can suspend my disbelief for FTL travel. I can't suspend my disbelief in the face of a bad handwave; it just throws me right out of the story.

What's the relevant difference between Banks' Sublimation and Vinge's Transcend? Vinge divides up his Zones of Thought by galactic geography, whereas Banks does it by physical dimensions, but surely that doesn't matter. Is it that Vinge talks about Transcendence being an inherently dangerous process, whereas in Banks' universe it seems virtually impossible for Sublimation to go wrong?

Vinge gives you a huge, blatant plot device up front and doesn't try to rationalize it or handwave it. I'm okay with that on a literary level, just like I'd be okay with Banks just not talking about RSI.

I still don't understand what it is that you think Banks is doing that Vinge isn't, that makes you unable to enjoy Banks. Although on second thought maybe it's better that I don't find out, in case I become similarly afflicted. :)

.

[This comment is no longer endorsed by its author]

... why did you try to post a link to that?

I always saw Subliming as the blatant plot device that thankfully he didn't try to explain.

I don't want to spoil anything, but you probably shouldn't read The Hydrogen Sonata if Sublimation is already killing your SOD. It threw me right out, that's for sure.

My guess is that Eliezer will be horrified at the results of CEV-- despite the fact that most people will be happy with it.

This is obvious given the degree to which Eliezer's personal morality diverges from the morality of the human race.

He is not the only one who'd be horrified. Median humanity scares me.

Would it be fair to ignore them and make a geek/LW-specific CEV?

No, but I'm not sure how much I care.


It's not as if average-utilitarianism is the only possible answer. Real life today already allows for subcultures who enjoy diverging from most of humanity. Any goal system halfway worth implementing would also allow for such.

According to us. How certain are you that the CEV of all of humanity agrees?

The fact that they exist today isn't an answer; it could be (and to some degree is) because eradicating them would be too costly, morally or economically.


How certain are you that the CEV of all of humanity agrees?

Since CEV is, AFAICT, defined by the mean of all humans' utility functions (derived from reflective equilibrium), it disagrees by definition. But CEV is not divine revelation: it's just the best Eliezer could do at the time. As we learn more about the evaluative and social cognition underlying our "moral" judgements, I expect to be able to dissolve a priori philosophizing and build a better Friendly goal-system than CEV.

Of course, this is because I don't believe in normative ethics in the normal sense (post-hoc consequentialist based on Peter Railton's style of moral realism, blah blah blah), so I'm sure this will wind up a massive debate at some point. I strongly doubt the armchair philosophers will go down without a fight.

Zubon, I thought of that possibility, and one possible singleton-imposed solution is "Who says that subjective time has to run at the same rate for everyone?" You could then do things fast or slow, as you preferred, without worrying about being left behind, or for that matter, worrying about pulling ahead. To look at it another way, if people can increase in power arbitrarily fast on your own playing field, you may have to increase in power faster than you prefer, to keep up with them; this is a coordination/competition problem, and two singleton solutions are to fence off people who grow too fast, or to slow down their subjective time rates so that the competence growth rate per tick of sidereal time is coordinated.

Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idirans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened.

Unknown, the question is how much of this divergence is due to (a) having moved further toward reflective equilibrium, (b) unusual mistakes in answering a common question, (c) being an evil mutant, (d) falling into an uncommon but self-consistent attractor.

Everyone would want subjective time to run as fast as possible for themselves. If they "skip" a thousand years and go straight to the Mind age, that's not becoming a Mind a thousand years earlier. It's losing a thousand years of being a human.

Of course, you could have them skip to the Mind age, and then let them have an extra thousand years later on.

Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idirans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened.

Eh? Subliming doesn't help you win wars. It may or may not grant superintelligence and various other god-tier powers, but for balance reasons it also makes you stop caring about this universe, and you drift off into ... well, nobody knows.

OTOH, not only have the Minds deliberately engineered their society so vast numbers of humans commit suicide when "their time is up", but "infinite fun space" is implied to contain entire civilizations, simulated down to the quantum level, so ... screw the Culture. Seriously.

Hello, I think that Zeno's paradox of Achilles and the tortoise fits perfectly here...

Are you suggesting running a simulation at an exponentially slower speed? :-\

-1!

OK, I take that back. The idea that a tortoise is faster than a runner is preposterous and counterintuitive.

Therefore humans can become Minds

Provided that knowledge is not infinite

But of course that is a preposterous idea too, so I take that one back too, I don’t want to provoke the anger of my anonymous friend and get another wedgie.

We encourage you to downvote any comment that you'd rather not see more of - please don't feel that this requires being able to give an elaborate justification. -LW Wiki Deletion Policy

Folks are encouraged to downvote liberally on LW, but the flip-side of that is that people will downvote where they might otherwise just move on for fear of offending someone or getting into an argument that doesn't interest them. You might want to be less sensitive if someone brings one of your posts to -1 - it's not really an act of aggression.

This is fun! To tell you the truth (my truth, not the absolute one) I don't care. I am having a blast trying to unravel what (and how) most people write here.

Cheers!

"(b) my being a mutant,"

It looks like (especially young) humans have quite a lot of ability to pick up a wide variety of basic moral concerns, in a structured fashion, e.g. assigning ingroups, objects of purity-concerns, etc. Being raised in an environment of science-fiction and Modern Orthodox Judaism may have given you quite unusual terminal values without mutation (although personality genetics probably play a role here too). I don't think you would characterize this as an instance of c), would you?

Presumably, because if they increased their intelligence they would realize that the war is stupid and go home, which leaves fighting only those that did not. This starts to look like a rationalization rather than a serious reason, but then I always thought that the Culture books are carefully constructed to retain as much as possible of "classic" science fiction (starships! lasers! aliens!) in the face of the singularity.

Carl, I would indeed call that an "uncommon but self-consistent attractor" if we assume that it is neither convergent, mistaken, nor mutated. As far as I can tell, those four possibilities seem to span the set - am I missing anything?

I'm just confused by your distinction between mutation and other reasons to fall into different self-consistent attractors. I could wind up in one reflective equilibrium rather than another because I happened to consider one rational argument before another, because of early exposure to values, genetic mutations, infectious diseases, nutrition, etc, etc. It seems peculiar to single out the distinction between genetic mutation and everything else. I thought 'mutation' might be a shorthand for things that change your starting values or reflective processes before extensive moral philosophy and reflection, and so would include early formation of terminal values by experience/imitation, but apparently not.

"If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?"

Yes, definitely. If nothing else, it means diversity.

"Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?"

I do not care, as long as the story continues.

And yes, I would like to hear the story - which is about the same thing I would get if Minds are prohibited. I will not be the main character of the story anyway, so why should I care?

"Should existing human beings grow up at some eudaimonic rate of intelligence increase, and then eventually decide what sort of galaxy to create, and how to people it?"

Grow up how? Does it involve uploading your mind to computronium?

"Or is it better for a nonsentient superintelligence to exercise that decision on our behalf, and start creating new powerful Minds right away?"

Well, this is the only thing I fear. I would prefer sentient superintelligence to create nonsentient utility maximizers. Much less chance of error, IMO.

"If we don't have to do it one way or the other - if we have both options - and if there's no particular need for heroic self-sacrifice - then which do you like?"

As you have said - this is a Big world. I do not think both options are mutually exclusive. The only mutually exclusive option I see is nonsentient maximizer singleton programmed to avoid sentient AI and Minds.

"Well... you could have the humans grow up (at some eudaimonic rate of intelligence increase), and then when new people are created, they might be created as powerful Minds to start with."

Please explain the difference between a Mind created outright and "grown-up humans". Do you insist on biological computronium?

As you have said, we are living in a Big world. It inevitably means that there quite likely is (or will be) some Culture-like civilisation that we will meet if things go well.

How do you think we will be able to compete with your "no sentient AIs, only grown up humans" bias?

Or: Say your CEV AI creates singleton.

Will we be allowed to create the Culture?

What textbooks will be banned?

Will CEV burn any new textbooks we are going to create, so that nobody is able to stand on other people's shoulders?

I saw it from the other side, "why on earth would humans not choose to uplift" - given the contextual quite reasonable expectation they could just ask and receive. The real problem with that universe is not a lack of things for humans to do, but a lack of things for anybody to do. Minds are hardly any better placed. I could waste my time as human dabbling uselessly in obsolete skills, or as a Mind acting as a celestial truck driver and bored tinkerer on the edges of other people's civilizations - what a worthless choice.

Julian Morrison:

Or you can turn the issue around once again. You can enjoy your time on obsolete skills (like sports, arts, or carving table legs...).

There is no shortage of things to do, there is only a problem with your definition of "worthless".

Eliezer (about Sublimation):

"Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idirans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened."

Just a technical (or fandom?) note:

A sublimed civilization is central to the plot of Consider Phlebas (Schar's World, where the Mind escapes, is "protected" by a sublimed civilization - that is why direct military action by either the Idirans or the Culture is impossible).

luzr, in Consider Phlebas, the term "Sublimed" is never used. It is implied that the Dra'Azon are simply much older than the Culture and hence more powerful - a very standard idiom in SF which makes no mention of deliberately refraining from progress at higher speeds. In Consider Phlebas, the Culture is implied to be advancing its technology as fast as possible in order to fight the war.

Julian, what in any possible reality would count as "something to do"?

Eliezer:

It is really off-topic, and I do not have a copy of Consider Phlebas at hand now, but

http://en.wikipedia.org/wiki/Dra%27Azon

Even if Banks did not mention 'sublimed' in the first novel, the concept fits the Dra'Azon exactly.

Besides, the Culture is not really advancing its 'base' technology, but rather rebuilding its infrastructure into a war machine.

And I will not, if at all possible, give any other human being the least cause to think that someone else might spark a better Singularity. I can make no promises upon the future, but I will at least not close off desirable avenues through my own actions.

A possible problem here is that your high entry requirement specifications may well, with a substantial probability, allow others with lower standards to create a superintelligence before you do.

So: since you seem to think that would be pretty bad, and since you say you are a consequentialist - and believe in the greater good - you should probably act to stop them, e.g. by stepping up your own efforts to get there first, bringing the target nearer to you.

I do have a copy of Consider Phlebas on hand, and reread it, along with Player of Games before writing this post. Wikipedia can say what it likes, but the term "Sublimed" is certainly never used, nor anything like the concept of "deliberately refused hard takeoff" implied. The Culture is advancing its base technology level as implied by the notion of an unusually advanced Mind-prototype, capable of feats thought to be impossible to the Culture's technology level, being lost on Schar's World. "Subliming" is an obvious later graft which simply doesn't fit the world depicted in the earlier novels.

It's questionable how relevant any of this is, since we are arguing over a ficton - but the original Culture does not have anything akin to Subliming and I am criticizing it on those grounds.

Eliezer, I'm confused what you're asking. Read literally, you're asking for a summary description of reachable fun space, which you can make better than I can. All the other parses I can see are more confusing than that. Plain text doesn't carry tone. Please could you elaborate?

Consider Phlebas is subpar Culture, and Player of Games is the perfect introductory book but still not full-power Banks. Use of Weapons, Look to Windward, Inversions... and Feersum Endjinn is my favourite non-Culture book.

More to the point, however, Look to Windward discusses some of the points you raise. I'm just going by memory here, but one of the characters, Cr. Ziller, a brilliant and famous non-human composer, asks a Mind whether it could create symphonies as beautiful as his and how hard it would be. The Mind answers that yes, it could (and we get the impression quite easily, in fact) and goes on to argue how that does not take anything away from Ziller's achievement. I don't remember the details exactly, but at one point there is an analogy with climbing a mountain when you could just use a helicopter.

From my readings I don't get the impression that there is "competing on a level playing field with superintelligences", and in fact when Banks does bring Minds too far into the limelight, things break down (Excession).

David:

"asks a Mind whether it could create symphonies as beautiful as his and how hard it would be"

On somewhat related note, there are still human chess players and competitions...

Brilliant observation! Damn, we really are living in the future already...

I agree with Unknown. It seems that Eliezer's intuitions about desirable futures differ greatly from many of the rest of us here at this blog, and most likely even more from the rest of humanity today. I see little evidence that we should explain this divergence as mainly due to his "having moved further toward reflective equilibrium." Without a reason to think he will have vastly disproportionate influence, I'm having trouble seeing much point in all these posts that simply state Eliezer's intuitions. It might be more interesting if he argued for those intuitions, engaging with existing relevant literatures, such as in moral philosophy. But what is the point of just hearing his wish lists?

"...and most likely even more from the rest of humanity today."

True, 90% of humanity, in this age, believe in omnipotent beings that watch over our welfare.

To me, what Eliezer says is that it would be boring to have a god around serving all our needs. But perhaps "it" exists, and it is benevolent by not ruining our existence - simply by not existing...

Off-topic, but amusing:

[Long + off-topic = deleted. Take it to an Open Thread.]

Robin, it's not clear to me what further kind of argument you think I should offer. I didn't just flatly state "the problem with the Culture is the Minds", I described what my problem was, and offered Narnia as a simplified case where the problem is especially stark.

It's not clear to me what constitutes an "argument" beyond sharing the mental images that invoke your preferences, in this matter of terminal values. What other sort of answer could I give to "Why don't you think that's fun?" Would you care to briefly state a contrary view you have, and what you would see as a different sort of argument in favor of it?

Again, my purpose in all this is twofold: To retain people who now turn away from transhumanism, cryonics, or life itself because they can't imagine any future in which they would be happy; and to deliver a further general argument against religions by showing that the present world isn't optimized for eudaimonia, including moral responsibility or self-reliance.

[anonymous]:

To me this whole article looks like a confusion of all possible goals being terminal goals. We want responsibility for our terminal goals, but when the faucet breaks, we call a plumber instead of contemplating our self-reliance. Ok, I contemplate my failure at self-reliance, but that's a psychological mess-up I don't endorse for a second.

I have a lot of sympathy for what Unknown said here:

"My guess is that Eliezer will be horrified at the results of CEV-- despite the fact that most people will be happy with it."

And Carl Shulman has a very good point here:

"It looks like (especially young) humans have quite a lot of ability to pick up a wide variety of basic moral concerns, in a structured fashion, e.g. assigning ingroups, objects of purity-concerns, etc. Being raised in an environment of science-fiction and Modern Orthodox Judaism may have given you quite unusual terminal values"

Sorry to keep harping on about this, but if you read Joshua Greene's PhD thesis, p. 194, you'll find this:

"By participating in these interlinked custom complexes regarding the use of space and the purification of the body, children learn that a central project of moral life is the regulation of one’s own bodily states as one navigates the complex topography of purity and pollution….

Social skills and judgmental processes that are learned gradually and implicitly then operate unconsciously, projecting their results into consciousness, where they are experienced as intuitions arising from nowhere"

If there's nothing more to life than eliminating suffering, you might as well eliminate life and be done.

I nominate this for the next "Rationality Quotes".

Doesn't this line of thinking make the case for Intelligence Augmentation (IA) over that of FAI? And let me qualify that when I say IA, I really mean friendly intelligence augmentation, relative to friendly artificial intelligence. If you could 'level up' all of humanity to the wisdom and moral ethos of 'friendliness', wouldn't that be the most important step to take first and foremost? If you could reorganize society and reeducate humans in such a way as to make a friendly system at our current level of scientific knowledge and technology, that would (not entirely, but as best we can) cut the probability of existential threats to a minimum and allow for a sustainable eudaimonic increase of intelligence towards a positive singularity outcome. Yes, that is a hard problem, but surely not harder than FAI (probably a lot less hard). It'll probably take generations, and we might have to take a few steps backwards before we take further steps forwards (and non-existential catastrophes might provide those backward steps regardless of our choosing), but it seems like the best path. The only reasons to choose an FAI plan are that you 1) think an existential threat is likely to occur very soon, 2) want to be alive for the singularity and don't want to risk cryonics, or 3) just fancy the FAI idea for personal non-rational reasons.

Eliezer,

I have to question your literary interpretation of the Culture. Is Banks' intention really to show an idealized society? I think the problem of the Minds that you describe is used by Banks to show the existential futility of the Culture's activities. The Culture sans Minds would be fairly run-of-the-mill sci-fi. With all of its needs met (even thinking), it throws into question every action the Culture takes, particularly the meddlesome ones. That's the difference between Narnia and the Culture; Aslan has a wonderful plan for the children's lives, whereas the Culture really has nothing to do but avoid boredom. The Romantic Ideals (High Challenge, Complex Novelty) you espouse are ultimately what is being attacked by what I see as Banks' Existential ones. I think you can take the transhumanism out of the argument and just debate the ideas, since we aren't yet at the point of being infinitely intelligent, immortal, etc.

Aaron

haig, one might also believe that Friendly Artificial Intelligence is easier than Friendly Biological Intelligence. We have relatively few examples of FBI and no consistent, reliable way to reproduce it. FAI, if it works, works on better hardware with software that is potentially provably correct, and you can copy that formula.

AI is often mocked because it has been "almost there" for about 50 years, and FAI is a new subset of that. Versions of FBI have been attempted for at least 4000 years, suggesting that the problem may be difficult.

I like how you chose the acronym FBI, which is obviously ambiguous.

We might want to go with "Friendly Natural Intelligence" instead, FNI.

Eliezer, what do you have against "Excession"? It's been a while since I last read them, but I thought it was the 2nd best of the Culture books after "Use of Weapons". I do agree that "Player of Games" is the best place to start though (I started with Consider Phlebas but found it a little dry).

Anyway, as for your actual point, I think it sounds reasonable, at least on the surface. But considering this stuff too deeply may be putting the cart before the horse somewhat, when we're not even very sure what causes consciousness in the first place, or what the details of its workings are, and therefore to what extent a non-conscious yet correctly working FAI is even possible or desirable.

Eliezer:

"Narnia as a simplified case where the problem is especially stark."

I believe there are at least two significant differences:

  • Aslan was not created by humans; it does not represent the "story of intelligence" (quite the contrary: lesser intelligences were created by Aslan, if you interpret it as God).

  • There is only a single Aslan with a single predetermined "goal", while there are millions of Culture Minds with no single "goal".

(Actually, the second point is what I dislike so much about the idea of a singleton - it can turn into something like a benevolent but oppressive God too easily. Aslan IS the Narnia Singleton.)

The concern expressed above over the consistency of the Culture universe seems unnecessary. The quality of construction of the Culture universe and its stories is non-trivial, and hence, as with all things, one absorbs what is useful and progresses forward.

I read Amputation of Destiny and your subsequent replies with interest Eliezer, here's my contribution.

The Problem With The Minds could also read The Entire Reason For The Culture/Idiran War. The Idirans consider sentient machines an abomination; or, to quote Consider Phlebas:

"The fools in the Culture couldn't see that one day the Minds would start thinking how wasteful and inefficient the humans in the Culture themselves were."

It's not a plot flaw, it's a plot device and it occurs throughout the series.

I don't agree with your Living By Your Own Strength point either, as you appear to negate important backstory about the Culture. People leave and join the Culture all the time. The Culture itself splits occasionally, as a yearning for personal fulfillment afflicts Minds as well.

The wish to become stronger is fully exhibited by Culture citizens; I think your analogy doesn't fit. We are told that Culture citizens all have physical and mental enhancements that put them several notches above their non-Culture counterparts on the strength scale. Strength is therefore valued, and so is intelligence, but they are not the most valued...

I think that, in common with ourselves, even within the loose confines of Culture society there is a wish to attain status and respect, which is why many apply to Contact and Special Circumstances. Special Circumstances in particular gives access to more offensive levels of technology.

Read Matter for an account of upgrades and assignments to a person becoming a special circumstances agent and how they made her feel.

This brings me to how it all ties together. You've mentioned that the Minds overshadow the humans; this is wrong. First of all, Culture citizens have an intimate relationship with Minds and can form friendships and dislikes with them the same as with any other being. In Culture society, sentience is the most valued thing of all. This is not to say that the humans are blind to the obvious differences in abilities, far from it. In the main they don't feel the need to change themselves into AIs, what with the good s**t, the ability to change sex, and whatnot. It's even considered rude for organics to take on the forms of conventional AIs like drones, and vice versa. To be sentient within the Culture is to claim equal status with all in the Culture.

Contrast this with the Idirans in Consider Phlebas, whose religion gives them the right to rule lesser beings within their influence.

That's why I think the Narnia/Culture comparison isn't right. The Narnia books, as you've said, have a Christian fable at their heart; however, Aslan is not merely a central character, Aslan is THE central character, both creator and destroyer. Although religion plays a prominent part in Consider Phlebas, religion is not the central pillar the Culture is built upon. The Minds may have godlike powers, but they're not gods themselves. Billions of humans live within the Culture; billions more, and all the rest, live quite happily without it.

The nonperson predicate point I find very interesting. It's good to get opinions on AI from specialists such as yourself, and certainly I'm not going to use a novel to outpoint proper research. I would like to mention Look to Windward, though. As part of the backstory, it mentions that the problem with creating AIs without the taint of their creators is that the results almost instantly sublime - so while the author may be paying lip service, he isn't ignoring it.

Could it be that the Minds themselves yearn for a purpose? This is only my question.

I like the later point you make about subliming, though I think this has been undergoing a process of refinement. In fairness, it's an author's right to embellish and improve a work in progress, provided there are no inconsistencies.

Thanks for the right of reply Eliezer.

"If there's nothing more to life than eliminating suffering, you might as well eliminate life and be done."

This only applies if non-existence is considered a preferable state to existence. Obviously the Culture AIs consider existence preferable, and thus strive to make human existence as suffering-free as possible.

If you need to live in a world where you are needed, then you go ahead and live there, but please send me to the Culture (I haven't read these books so I'm only going off your initial quote).

Or if the very existence of that option strips the meaning from your life, then you modify yourself. Not me.

So I wonder if when you're really good at something and you die and go to heaven, if there is some dude who was doing it 2000 years ago, who's been doing it in heaven the whole time, who's like.. 2000 years better at it than you

and like.. you try to catch up, but it's like.. he's always 2000 years better

so you get really depressed and try to kill yourself, but you're already dead

reincarnation solves a lot of these problems

-- #perl