knb comments on Just another day in utopia - Less Wrong
Reading about this tragic and horrifyingly wasteful dystopia really solidifies my hope that the future goes more like what Robin Hanson has envisioned.
Accordingly I must raise my estimation of the threat Robin Hanson poses to humanity. He has persuaded at least one person to advocate his Malthusian hell!
Meh, I wasn't advocating it, just saying it would be way better than this scenario: either n humans burning the cosmic commons for tacky IRL video games and sex with strangers, or a billion times n humans living worthwhile, productive lives.
It just seems obvious when you do the math.
I don't share your premises - including the one that seems to be that the agents that survive in Hansonian Hell are humans in any meaningful sense.
Your expression of preference here cannot be credibly described as 'doing math'.
I guess one person's tacky IRL video game is another's worthwhile productive life.
What do you think people should be doing? In a post scarcity economy, it seems to me that a lot of what remains to be done is keeping each other entertained.
My problem isn't particularly with Ishtar's pastimes, but with the overall system. I'm arguing that Hanson's upload society would be better than this because it could support many more total lives worth living, and more total utility, than this alleged eutopia.
So you'd be happy with this world if it all existed inside a small piece of the galactic computronium-pile, and there was lots more of it? I actually hadn't considered that, because I just assume all post-Singularity futures are set inside the galactic computronium-pile unless explicitly stated otherwise.
What math is that? Are you talking about the number of lives in any given century -- effectively judging the situation as if time-periods were sentient observers to be happy or unhappy with their current situation?
Do you have any reason to believe that maximum diversity in human minds (i.e. allowing lots of different humans to exist) would be best satisfied by cramming them all in the same century, as densely as possible?
A trillion lives all crammed in the same century aren't inherently more worthwhile than a trillion lives spread over a hundred centuries -- any more than 10 people forced to live in the same flat are inherently more precious than 10 people having separate apartments. Do you have any reason to prefer the former over the latter? Surely there's some ideal range where utility is satisfied in terms of people density spread over time and location.
You are misunderstanding my argument.
When you use up negentropy, it is used up for good, and there is a finite amount of it in each region of the universe. The amount being used on Ishtar could in principle support good lives for billions of upload minds (or a smaller but still huge number of other possible lives). This isn't a matter of a long-and-narrow future versus a short-and-wide one, but of how many total worthwhile lives will exist.
As for quality, there seems to be no reason why simulations can't be as happy, or even happier than Ishtar.
You know you're looking at a dystopia when even Hanson's malthusian hell world looks good in comparison.
(Agree with the sentiment, though.)
It's one world, or one solar system, and for all we know they've found a way around entropy - or this could all be a highly realistic simulation.
But even if it isn't, I consider this option far better than Hanson's dystopia. Its main flaw is inefficiency, which can be fixed.
Its main characteristic is inefficiency.
There's little indication of how the utopia actually operates at a higher level, only how the artificially and consensually non-uplifted humans experience it. So there's no way to be certain, from this small snapshot, whether it is inefficient or not.
I would instead say that its main flaw is that the machines allow too much of the "fun" decision to be customized by the humans. We already know, with the help of cognitive psychology, that humans (whom I assume, from their behavior, to have intelligence comparable to ours) aren't very good at assessing what they really want. This could lead to a false dystopia if a significant proportion of humans choose their wants poorly, become miserable, then make even worse decisions in their misery.
OTOH, nothing in that story requires that the humans are making unaided assessments. The protagonist's environment may well have been suggested by the system in the first place as its best estimate of what will maximize her enjoyment/fulfilment/fun/Fun/utility/whatever, and she may have said "OK, sounds good."
I'm afraid I'd prefer it that way. Having the machines decide what's fun for us would likely lead to wireheading. Or am I missing something?
[off to read the Fun Theory sequence in case this helps me find the answer myself]
Depends on the criteria the machines are using to evaluate fun, of course -- it needn't be limited to immediate pleasure, and in fact a major point of the Fun Theory sequence is that immediate pleasure is a poor metric for capital-F Fun. Human values are complex and there are a lot of ways to get them wrong, but people are pretty bad at maximizing them too.
Also known as fun.
Efficiency in fun-creation.
Efficiency in doing something that doesn't match my utility function seems fairly pointless, really. An abuse of the word, even.
Yet the horror is that it's what you might catch yourself worshiping down the line, forgetting to enjoy any of it. Just take a look at the miserable and aimless workaholics out there: as long as they can still handle whatever it is they're doing, their boss will happily exploit them. Do you think your brain would care more about you if you set "efficiency" as its watchword?
Yup, if we set out to build a system that maximized our ability to enjoy life, and we ended up with a system in which we didn't enjoy life, that would be a failure.
If we set out to build a system with some other goal, or with no particular goal in mind at all, and we ended up with a system in which we didn't enjoy life, that's more complicated... but at the very least, it's not an ideal win condition. (It also describes the real world pretty accurately.)
I'm curious: do you have a vision of a win condition that you would endorse?
See more in my latest post; I'll be adding to it.
http://lesswrong.com/r/discussion/lw/9g0/placeholder_against_dystopia_rally_before_kant/
You best be sarcastic. Waste is good! It's signaling, it's ease, it's a lack of tension, it's the life's little luxuries that you'd wish back if they were all taken from you simultaneously, without caring much about the "efficiency" of it.
I wasn't being sarcastic.
No, waste is by definition not good. Resource usage can be good, but the world of this story makes me pessimistic about how it is being done. It seems the AI gods of this world have engineered a "post-scarcity" society with population control that keeps resources per person extremely high -- which enables this video-game-like lifestyle for people who want it. Millions of lives could be supported with the resources centrally allocated to her. That is a horribly anti-egalitarian form of communism.
Admittedly, it is possible that this takes place within a simulation, but that is never stated, and we have reason to believe that it isn't true. For example, the author mentions that Ishtar knows the AI-gods won't let her die even if she crashes, implying that this is her physical body.
Are you sure you want them to pop into existence? Why? I just can't understand! Why must there be more people? So that you can have more smiley faces? That's the road to paper-clipping!
Well, yes. Several popular versions of utilitarianism lead by a fairly short path to what's probably the first paperclipping scenario I ever read about, although it's not usually described in those terms.
Coming up with a version of utilitarianism that doesn't have those problems or an equally unintuitive complement to them is harder than it looks, though.
Why does anyone value anything? If we could painlessly pop all but 70 human beings out of existence but make the ones who remain much happier (say, 10x as happy), would you do it? Why not? Why must there be more people?
That's easy; we have to look at both cases in some detail:
- Forking over a part of our genes, mind, society and culture to create new beings with new complexity, knowing that less-than-optimal conditions await them,
- versus refraining from erasing all of the extant and potential value and complexity of current beings, here and now, for a very mixed blessing (increasing the smileyness of faces while decreasing the amount of tiles).
The second action has much greater utility, and is not very much like the first at all. So we could easily do the second while avoiding the first, and be consistent in our values and judgment.
Sorry, I'm a bit high.