by [anonymous]
3 min read · 25th Feb 2012 · 29 comments


Manna is the title of a science fiction story that describes a near-future transition to an automated society where humans are uneconomical. In its later chapters it describes in some detail a post-scarcity society. There are several problems with it, however; the greatest by far is that the author seems to have assumed that "want" and "envy" are primarily tied to material needs. This is simply not true.

I would love to live in a society with material equality on a sufficiently high standard; I'd however hate to live in a society with enforced social equality, simply because that would override my preferences and my freedom to interact or not interact with whomever I wish.

Also, since things like the willpower to work out (even to stay in top athletic condition!) or the lack of resources to fulfil even basic plans are made irrelevant, things like genetic inequality, how comfortable you are messing with your own hardware to upgrade your capabilities, and how much time you dedicate to self-improvement would matter more than ever.

I predict social inequality would be pretty high in this society, and mostly involuntary. Even for a decision like how much of your time you spend on self-improvement, which you could presumably change later, there wouldn't be a good way to catch up with anyone who started earlier (think opportunity cost and compound interest), unless technological progress hit diminishing returns and slowed down. Social inequality would, however, be more limited than pure financial inequality, I would guess, because of things like Dunbar's number. There would still be tragedy (that may be a feature rather than a bug of utopia). I guess people would be comfortable with gods above and beasts below them that don't really register in the "my social status compared to others" part of the brain, but even within the narrow band where you do care, inequality would grow rapidly. Eventually you might find yourself alone in your specific spot.
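Here is a minimal toy model of the compound-interest point. It is purely my own illustration, not anything from the story: the growth rate, the time allocations, and the switch at year 50 are all made-up assumptions.

```python
# Toy model (illustration only): capability compounds with the fraction of
# time spent on self-improvement. The laggard switches to the leader's
# allocation at year 50, yet the absolute gap keeps widening.

GROWTH_AT_FULL_DEDICATION = 0.05  # assumed 5% capability growth per year at 100% dedication

def capability(allocation_schedule):
    """Compound capability over time, given a per-year self-improvement fraction."""
    c = 1.0
    for fraction in allocation_schedule:
        c *= 1 + GROWTH_AT_FULL_DEDICATION * fraction
    return c

YEARS = 200
leader = [0.8] * YEARS                       # dedicates 80% of their time throughout
laggard = [0.2] * 50 + [0.8] * (YEARS - 50)  # starts at 20%, later matches the leader's effort

for year in (50, 100, 200):
    gap = capability(leader[:year]) - capability(laggard[:year])
    print(f"year {year:3d}: absolute capability gap = {gap:,.1f}")
```

Once the laggard matches the leader's effort, the ratio between them stops growing, but the absolute gap never closes and keeps widening, unless the growth rate itself falls off (diminishing returns).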

To get back to my previous point about probable (to me) unacceptable limitations on freedom: it may seem silly that a society with material equality would legislate intrusive and micromanaging rules that would force social equality to prevent this, but the hunter-gatherer instincts in us are strong. We demand equality. We enjoy bringing about "equality". We look good demanding equality. Once material needs are met, this powerful urge will still be there and bring about signalling races. And ever new ways to avoid the edicts produced by such races (because also strong in us is our desire to be personally unequal or superior to someone, to distinguish and discriminate in our personal lives). This would play out in interesting and potentially dystopian ways.

I'm pretty sure the vast majority of people in the Australia Project would end up wireheading. Why bother to go to the Moon when you can have a perfect virtual reality replica of it? Why bother with the status of building a real fusion reactor when you can just play a gamified, simplified version and simulate the same social reward? Why bother with a real relationship, etc.? Dedicating resources to something like a real-life space elevator simply wouldn't cross their minds. People, I think, systematically overestimate how much something being "real" matters to them. Better and better technology also means better and better virtual super-stimuli. Among the tiny remaining faction of "peas" (those choosing to spend most of their time in physical existence), there would be very few who would choose to have children, but they would dominate the future. Also, I see no reason why the US couldn't buy technology from the Australia Project to use for its own welfare-dependent citizens. Instead of the cheap mega-shelters, just hook them up to virtual reality, with no choice in the matter, which would make a tiny fraction of them deeply unhappy (if they knew about it).

I maintain that the human brain's default response to unlimited control over its own sensory input and reasonable security of continued existence is solipsism. And the default of a society of human brains with such technology is first social fragmentation, then value fragmentation, and eventually a return to living under the yoke of essentially Darwinian processes. Speaking of which, the society of the US as described in the story would probably outpace Australia, since it would have machines do its research and development.

It would take some time for the value this creates to run out, though. Much as Robin Hanson finds a future with a dream time of utopia followed by trillions of slaves glorious, I still find a few subjective millennia of a golden age followed by non-human and inhuman minds to be worth it.

It is not like we have to choose between infinity and something finite; the universe seems to have an expiration date as it is. A few thousand or million years doesn't seem like something fleas on an insignificant speck should sneer at.


29 comments

It would take some time for the value this creates to run out, though. Much as Robin Hanson finds a future with a dream time of utopia followed by trillions of slaves glorious, I still find a few subjective millennia of a golden age followed by non-human and inhuman minds to be worth it.

I'd add that total wireheading probably still feels nicer to us than Hanson's scenario. Our CEV might disagree, but damn, I'd say that they are in different classes of badness: "somewhat bitter yet predictable outcome of a horrible mess" versus "boot stamping on a human face forever".

[anonymous] 12y

"boot stamping on a human face forever"

What people consistently are not getting (going as far as to call the Malthusian ems scenario a "hell world") is that any minds that will exist in it will be well adapted to it.

So the question is really whether "boot stamping on a human face forever" is more or less worthy of being called a 'hell world' than "humanity is destroyed, replaced by trillions of alien slaves".

I'll have to think about that one.

[anonymous] 12y

I don't consider it any worse than humanity being destroyed and replaced by nothing. Indeed, I consider it marginally better, in the same way I would consider it marginally better for mankind to go extinct without destroying the Earth's ecosystem than for all life on Earth to be extinguished. Also, in the em scenario we get at least a few if not all humans living through a finite (perhaps a few centuries, perhaps a few millennia or even a million years, it is hard to tell) era of plenty.

I don't consider it any worse than humanity being destroyed and replaced by nothing. Indeed, I consider it marginally better

On this we agree. I don't see that it answers the question, though.

[anonymous] 12y

Actually, it does. A hell world implies creatures suffering greatly. No one would call a world where humans just went extinct a hell world.

Why call a world where humans go extinct and aliens live a hell world?

Yup, and medieval peasants were well adapted to their world. We wouldn't want to get adapted like that! I did say: "feels nicer to us".

[anonymous] 12y

Yup, and medieval peasants were well adapted to their world.

If you think medieval peasants would agree they were better off never existing at all, you are probably mistaken. I would argue that if medieval peasants' lives aren't worth living, the odds are very poor that our own current lives are worth living.

You aren't getting it. Well adapted as in alien, inhuman. Why would anything like a human mind be economical? And during the transition to inhuman minds we still get lots and lots and lots of utility from human-like minds living lives worth living.

I don't know, wireheading just doesn't feel nicer to me than a few millennia of fun followed by alien creatures inhabiting the universe. And if one is at all altruistic, the opinions of the alien creatures surely count for something.

Again, notice the "us". Of course I didn't mean that the hypothetical Malthusian ems would experience emotions such as misery or describe their existence as not worth it.

I was simply implying that, say, you or I might survive that long and... see things we think we should sympathize with but can't.

[anonymous] 12y

I was simply implying that, say, you or I might survive that long and... see things we think we should sympathize with but can't.

You are right on that. I guess I'm not too bothered by that because I don't expect to survive that long. Feeling glad or not bothered by someone getting to live a life he finds worthy but one I don't appreciate is a much easier gesture in far mode. Actually living to see the em world would mean near mode would get a say too... and it may not be very pleased by what it sees.

It is easy to put up a sign "altruism" and declare you are more than pleased by minds existing and having fulfilling existences. It is much harder to live next door to a necrophiliac.

I skimmed the story. I didn't see anything which enforced social equality, though I was quite curious about how not hurting people was interpreted, and how it's interpreted would make a big difference.

I got the impression the author simply didn't care about social structures, so he didn't bother to think about what they'd be like in a utopia of abundance.

I don't think your hierarchies of self-improvement would be most of what's going on, though they'd exist -- I think a lot of the social hierarchy would be about celebrity and emotional skills.

Not hurting people might play out as you fear if the computer system thinks that making sure no one is lonely is more important than not making people spend time with people no one likes.

One answer to the problem of needing to feel dominance and superiority while finding that instinct unethical is, of course, catgirls. No matter how fake advanced robots might feel, I predict that they'd still do a satisfactory job at that, greatly diminishing the urge to compete in status against real people. See the current research on how people at the top of even the smallest social heap - say, a gaming club - begin to feel really good and display "alpha" behavior. So, that issue might be pretty easily manageable.

No matter how fake advanced robots might feel, I predict that they'd still do a satisfactory job at that, greatly diminishing the urge to compete in status against real people.

Maybe average people would be satisfied with this, but are they the ones who will really decide?

I think that dominance over real people feels better than dominance over advanced robots, and in some sense "dominance" means the ability to decide what other people do, to make them frustrated. In some sense this is a zero-sum game; if average people were allowed to feel like the alphas of their robotic groups, then the people who now have power over people would feel their power weakening. How much power do you really have over someone who can ignore you completely? If Joe is president of the world or whatever, but all my material needs are fulfilled and I have my pack of robots, I can ignore Joe completely; and if everyone does, then Joe will no longer feel like a president. So I would expect that Joe and his friends would make some laws that prevent me from escaping their power.

Generally, if someone is winning on some social ladder, they want to make it the ladder for everyone. For example, if someone is great at chess, they will not only express superiority over other chess players, but also the superiority of chess players over non-chess players, thus virtually making everyone part of their ladder. It is a social instinct -- your leadership in a group is threatened not only by people winning over you, but also by people leaving your group. So the people who are high on the "people-dominating-over-people" ladder will seek ways to prevent others from leaving; and almost by definition they will succeed.

Yeah, sure, that's why we'll still need either a really strong tradition or a system that maintains strict social control ("the Leviathan") - whether said tradition or said system manifests itself as a government, a singleton such as an AGI, or something we can't yet imagine. Even if there are no external scarce resources left for people to struggle over, they will still inevitably struggle over dominance and status for their own sake.

And, as Konkvistador has recently pointed out, if you want to repress that struggle, you have to choose a proportion of "brainwashing" (tradition, or more intrusive influence) and "violence" (a formal, rigid system). With maximum brainwashing, you'd need no violence. With maximum violence, you'd need no brainwashing (but unlimited violence in human hands probably spells guaranteed disaster, especially if said violence can deliver economic control to its enforcer; IMO, that's what Moldbug is in denial about when proposing his "patchwork" of absolute sovereigns; like any other "free" market, they could stop competing in niceness for citizens and unite under a treaty to enslave them).

Now, maximum brainwashing sounds even worse to most of us - 1984 springs to mind - but I'd say that, with research and a way to make the brainwasher incorruptible, it could maybe turn out pretty nice, maybe even like Banks' Culture, where everyone is practically brainwashed from birth via a constructed language.

Of course, many people would say that the sanest option is a combination of the two. But, again, if we somehow find a way to make one of the two nice and ethical, but not the other, then we might have to choose to maximize the one to eliminate the other.

So that's the stick; if it works reliably, people might be made to stay content with the carrot (fake zero-sum social games).

UPD: if you downvoted the above primarily for me mentioning that argument against the Patchwork thing - whether you find it weak or just out-of-place and needlessly political/etc: please downvote this comment but remove your downvote from the parent.

(Everyone else, please don't vote on this comment.)

[anonymous] 12y

While this is true, this is only a limited take. I think Nancy had a much better grasp of the core argument:

I don't think your hierarchies of self-improvement would be most of what's going on, though they'd exist -- I think a lot of the social hierarchy would be about celebrity and emotional skills.

Not hurting people might play out as you fear if the computer system thinks that making sure no one is lonely is more important than not making people spend time with people no one likes.

It is not like we have to choose between infinity and something finite; the universe seems to have an expiration date as it is.

(Although it might make sense to have this as a default hypothesis, it could easily be wrong for multiple disjunctive reasons, and careful attention is warranted if one wants more out of their analysis than some fun far mode fantasy.)

[anonymous] 12y

Yes, that is why I put "seems to have" in the sentence rather than simply "has". I didn't feel it productive to discuss those unlikely scenarios in this thread.

Fair enough; in retrospect I likely wouldn't have commented if I agreed that timeless/eternal scenarios are unlikely rather than highly likely, so in a sense I'm like an atheist interrupting a reasonable soteriological debate, which isn't exactly classy.

[anonymous] 12y

if I agreed that timeless/eternal scenarios are unlikely rather than highly likely

Please feel free to shorten inferential distances with references to previous LW discussions or sources. My best estimate is that we will probably be limited by negentropy. Since you are likely familiar with the basic physics, I expect I will need to update in your direction, as I probably haven't heard the same arguments or scenarios that you have.

This story was previously discussed on LW here.

I would love to live in a society with material equality on a sufficiently high standard

This is probably more restrictive than you think or would like, much like the case of social equality (depending on your definition of material inequality).

[anonymous] 12y

Good point. On reconsidering, I'd probably want something like a basic income guarantee, where everyone at the very least gets enough resources and energy to maintain a healthy, immortal Homo Sapiens classic body indefinitely. Actually, probably a bit higher than that, now that I think about it.

On reconsidering, I'd probably want something like a basic income guarantee,

People are always saying that, yet what they mean by "basic income" keeps increasing as their living standards increase.

I just need sustenance and electricity for my holodeck.

I've now actually read the story, and what I have to say is that in Russian SF parlance such a society is simply called "Communism" - as in, the true Communist utopia, not any particular political spin on it. Indeed, if you're interested at all, try reading something by the Strugatsky brothers; they've written the Noon Universe stories (http://tvtropes.org/pmwiki/pmwiki.php/Main/NoonUniverse), set in a world with similar values. Not sure how likely their works are to have been translated into most non-Slavic languages, though.

I expected that you were referring to this.

[anonymous] 12y

Is the new title better?

Yes. Thanks!