knb comments on Just another day in utopia - Less Wrong

Post author: Stuart_Armstrong 25 December 2011 09:37AM

Comment author: knb 28 December 2011 09:23:40AM 2 points

Reading about this tragic and horrifyingly wasteful dystopia really solidifies my hope that the future goes more like what Robin Hanson has envisioned.

Comment author: wedrifid 17 January 2012 02:58:14PM 11 points

Reading about this tragic and horrifyingly wasteful dystopia really solidifies my hope that the future goes more like what Robin Hanson has envisioned.

Accordingly I must raise my estimation of the threat Robin Hanson poses to humanity. He has persuaded at least one person to advocate his Malthusian hell!

Comment author: knb 17 January 2012 08:06:58PM 1 point

Meh, I wasn't advocating it, just saying it would be way better than this scenario. Either n humans burning the cosmic commons for tacky IRL video games and sex with strangers, or 1 billion × n humans living worthwhile, productive lives.

It just seems obvious when you do the math.

Comment author: wedrifid 17 January 2012 11:26:06PM 9 points

Either n humans burning the cosmic commons for tacky IRL video games and sex with strangers, or 1 billion × n humans living worthwhile, productive lives.

I don't share your premises - including the one that seems to be that the agents that survive in Hansonian Hell are humans in any meaningful sense.

It just seems obvious when you do the math.

Your expression of preference here cannot be credibly described as 'doing math'.

Comment author: thomblake 17 January 2012 08:11:20PM 7 points

Either n humans burning the cosmic commons for tacky IRL video games and sex with strangers, or 1 billion × n humans living worthwhile, productive lives.

I guess one person's tacky IRL video game is another's worthwhile productive life.

Comment author: NancyLebovitz 18 January 2012 01:14:22PM 3 points

What do you think people should be doing? In a post-scarcity economy, it seems to me that a lot of what remains to be done is keeping each other entertained.

Comment author: knb 18 January 2012 08:48:35PM 2 points

My problem isn't particularly with Ishtar's pastimes, but with the overall system. I'm arguing that Hanson's upload society would be better than this alleged eutopia because it could support so many more lives worth living, and so much more total utility.

Comment author: Normal_Anomaly 06 January 2015 08:28:39PM 0 points

So you'd be happy with this world if it all existed inside a small piece of the galactic computronium-pile, and there was lots more of it? I actually hadn't considered that, because I just assume all post-Singularity futures are set inside the galactic computronium-pile unless explicitly stated otherwise.

Comment author: ArisKatsaris 18 January 2012 04:41:54PM 1 point

It just seems obvious when you do the math.

What math is that? Are you talking about the number of lives in any given century -- effectively judging the situation as if time-periods were sentient observers to be happy or unhappy at their current situation?

Do you have any reason to believe that maximum diversity in human minds (i.e. allowing lots of different humans to exist) would be best satisfied by cramming them all in the same century, as densely as possible?

A trillion lives all crammed in the same century aren't inherently more worthwhile than a trillion lives spread over a hundred centuries -- any more than 10 people forced to live in the same flat are inherently more precious than 10 people having separate apartments. Do you have any reason to prefer the former over the latter? Surely there's some ideal range where utility is satisfied in terms of people density spread over time and location.

Comment author: knb 18 January 2012 08:02:04PM 2 points

You are misunderstanding my argument.

When you use up negentropy, it is used up for good, and there is a finite amount in each section of the universe. The amount being used on Ishtar could theoretically support good lives for billions of upload minds (or a smaller but still huge number of other possible lives). This isn't a matter of a long-and-narrow future versus a short-and-wide one, but of how many worthwhile lives will exist in total.

As for quality, there seems to be no reason why simulations can't be as happy as, or even happier than, Ishtar.

Comment author: [deleted] 28 December 2011 09:35:03AM 6 points

You know you're looking at a dystopia when even Hanson's Malthusian hell world looks good in comparison.

(Agree with the sentiment, though.)

Comment author: Baughn 28 December 2011 09:17:29PM 5 points

It's one world, or one solar system, and for all we know they've found a way around entropy - or this could all be a highly realistic simulation.

But even if it isn't, I consider this option far better than Hanson's dystopia. Its main flaw is inefficiency, which can be fixed.

Comment author: knb 29 December 2011 03:32:43AM 3 points

Its main flaw is inefficiency

Its main characteristic is inefficiency.

Comment author: Eugene 03 January 2012 09:41:22AM 5 points

There's little indication of how the utopia actually operates at a higher level, only how the artificially and consensually non-uplifted humans experience it. So there's no way to be certain, from this small snapshot, whether it is inefficient or not.

I would instead say that its main flaw is that the machines allow too much of the "fun" decision to be customized by the humans. We already know, with the help of cognitive psychology, that humans (which I assume by their behavior to have intelligence comparable to ours) aren't very good at making assessments about what they really want. This could lead to a false dystopia if a significant proportion of humans choose their wants poorly, become miserable, then make even worse decisions in their misery.

Comment author: TheOtherDave 17 January 2012 03:18:04PM 2 points

OTOH, nothing in that story requires that the humans are making unaided assessments. The protagonist's environment may well have been suggested by the system in the first place as its best estimate of what will maximize her enjoyment/fulfilment/fun/Fun/utility/whatever, and she may have said "OK, sounds good."

Comment author: [deleted] 17 January 2012 08:29:18PM 1 point

I would instead say that its main flaw is that the machines allow too much of the "fun" decision to be customized by the humans. We already know, with the help of cognitive psychology, that humans (which I assume by their behavior to have intelligence comparable to ours) aren't very good at making assessments about what they really want. This could lead to a false dystopia if a significant proportion of humans choose their wants poorly, become miserable, then make even worse decisions in their misery.

I'm afraid I'd prefer it that way. Having the machines decide what's fun for us would likely lead to wireheading. Or am I missing something?

[off to read the Fun Theory sequence in case this helps me find the answer myself]

Comment author: Nornagest 17 January 2012 08:32:54PM 3 points

Depends on the criteria the machines are using to evaluate fun, of course -- it needn't be limited to immediate pleasure, and in fact a major point of the Fun Theory sequence is that immediate pleasure is a poor metric for capital-F Fun. Human values are complex and there are a lot of possible ways to get them wrong, but people are pretty bad at maximizing them too.

Comment author: Stuart_Armstrong 29 December 2011 08:26:50AM 3 points

Also known as fun.

Comment author: Baughn 29 December 2011 10:29:37PM 1 point

Efficiency in fun-creation.

Efficiency in doing something that doesn't match my utility function seems... fairly pointless, really. An abuse of the word, even.

Comment author: Multiheaded 17 January 2012 01:58:54PM 3 points

Yet the horror is that it's what you might catch yourself worshiping down the line, forgetting to enjoy any of it. Just take a look at the miserable and aimless workaholics out there: if they can still handle whatever it is they're doing, their boss will happily exploit them. Do you think your brain would care more about you if you set "efficiency" as its watchword?

Comment author: TheOtherDave 17 January 2012 03:23:23PM 0 points

Yup, if we set out to build a system that maximized our ability to enjoy life, and we ended up with a system in which we didn't enjoy life, that would be a failure.

If we set out to build a system with some other goal, or with no particular goal in mind at all, and we ended up with a system in which we didn't enjoy life, that's more complicated... but at the very least, it's not an ideal win condition. (It also describes the real world pretty accurately.)

I'm curious: do you have a vision of a win condition that you would endorse?

Comment author: Multiheaded 17 January 2012 01:53:15PM 0 points [-]

wasteful

You'd best be sarcastic. Waste is good! It's signaling, it's ease, it's a lack of tension, it's life's little luxuries that you'd wish back if they were all taken from you simultaneously, without caring much about the "efficiency" of it.

Comment author: knb 17 January 2012 08:00:19PM 3 points

I wasn't being sarcastic.

Waste is good!

No, waste is by definition not good. Resource usage can be good, but the world of this story makes me pessimistic about how it is being done. It seems like the AI gods of this world have engineered a "post-scarcity" society with population control to keep the amount of resources per person extremely high--which enables this video-game-like lifestyle for people who want it. Millions of lives could be supported with the resources centrally allocated to her. That is a horribly anti-egalitarian form of communism.

Admittedly, it is possible that this takes place within a simulation, but that is never stated, and we have reason to believe that it isn't true. For example, the author mentions that Ishtar knows the AI-gods won't let her die even if she crashes, implying that this is her physical body.

Comment author: Multiheaded 17 January 2012 08:14:55PM 2 points

Millions of lives could be supported with resources centrally allocated to her.

Are you sure you want them to pop into existence? Why? I just can't understand! Why must there be more people? So that you can have more smiley faces? That's the road to paper-clipping!

Comment author: Nornagest 17 January 2012 08:28:21PM 8 points

Well, yes. Several popular versions of utilitarianism lead by a fairly short path to what's probably the first paperclipping scenario I ever read about, although it's not usually described in those terms.

Coming up with a version of utilitarianism that doesn't have those problems or an equally unintuitive complement to them is harder than it looks, though.

Comment author: knb 17 January 2012 09:16:20PM 1 point

Why does anyone value anything? If we could painlessly pop all but 70 human beings out of existence but make the ones who remain much happier (say, 10x as happy), would you do it? Why not? Why must there be more people?
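The thought experiment above is a straightforward aggregation question, and it can be made concrete with a toy total-utilitarian sum. Every number below is a hypothetical illustration (a world population figure and arbitrary happiness units), not anything asserted in the thread:

```python
def total_utility(population: int, happiness_per_person: float) -> float:
    """Total-utilitarian aggregate: sum of per-person happiness."""
    return population * happiness_per_person

# Hypothetical numbers: ~7 billion people at baseline happiness 1.0,
# versus 70 survivors who are 10x as happy.
status_quo = total_utility(7_000_000_000, 1.0)   # 7,000,000,000.0
tiny_happy = total_utility(70, 10.0)             # 700.0

# Under a straight total sum, the large population wins by many orders
# of magnitude, which is the aggregation knb's "do the math" appeals to.
print(status_quo > tiny_happy)  # True
```

An average utilitarian would divide by population instead, under which the 70-person world comes out ahead (10.0 vs 1.0), which is roughly where the two sides of this exchange part ways.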

Comment author: Multiheaded 17 January 2012 09:38:55PM -1 points

That's easy; we have to look at both cases in some detail.

Forking over a part of our genes, mind, society, and culture to create new beings with new complexity, knowing that less-than-optimal conditions await them,

versus refraining from erasing all of the extant and potential value and complexity of current beings, here and now, for a very mixed blessing (increasing the smileyness of faces while decreasing the number of tiles). The second action has much greater utility, and is not very much like the first at all. So we could easily do the second while avoiding the first, and be consistent in our values and judgment.

Sorry, I'm a bit high.