nyan_sandwich comments on Morality is Awesome - Less Wrong
That doesn't sound awesome.
If you thought it was, I'd have to conclude that you were wrong.
Also,
To you. But it might to someone else. Morality is the difficult problem of how groups of humans or other entities interact with each other so as to realise their own preferences without infringing on others' preferences. Just declaring or pretending that everyone has the same preference doesn't solve or even address the problem of morality. It is a non-starter. I know it is supposed to be naive, but it is too naive.
Then which one of us is right just comes down to an appeal to force, doesn't it? (and all those various kinds of meta-force).
I.e., if I can incarnate my Diabolus-1 Antifriendly AI and grow it up to conquer the universe before you can get your Friendly AI up to speed, I win.
This one is actually really subtle, and I forget the solution, and it's in the metaethics sequence somewhere (look for pebblesorters), but the punchline is that the outcome sucks.
So yes, you and your Diabolus-1 "win", but the outcome still sucks.
Sure, but... okay, I'm going to go concrete here.
I suffered a lot of abuse as a child; as a result, sometimes my mind enters a state where its adopted optimization process is "maximize suck". In this state, I tend to be MORE rational about my goals than I am when I'm in a more 'positive' state.
So I don't have to stretch very far to imagine a situation where the outcome sucking - MAXIMALLY sucking - is the damned POINT. Because fuck you (and fuck me too).
So the outcome still sucks, if you are not maximizing actual awesomeness.
Not necessarily. Plenty of people think the Saw films are awesome. Plenty of people on 4chan think that posting flashing images to epileptic support boards is awesome, and pushing developmentally disabled children to commit suicide and then harassing their parents forever with pictures of the suicide is awesome.
They will, in fact, use "awesome" explicitly to describe what they're doing.
I thought the first Saw film was awesome. It was a cool gory story about making the most of life. It's fiction, so nobody actually got hurt and there is no secondary consideration of awesomeness there.
Some people think that the prospect of making disabled kids commit suicide is awesome; fewer people think that actually doing so is awesome. I don't think that people who actually do so are awesome.
I think that's a relatively standard use of "awesome".
Much for the same reasons that people can be mistaken about their own desires, people can be mistaken about what they would actually consider awesome if they were to engage in an accurate modeling of all the facts. E.g. People who post flashing images to epileptic boards or suicide pictures to battered parents are either 1) failing to truly envision the potential results of their actions and consequently overvaluing the immediate minor awesomeness of the irony of the post or whatever vs. the distant, unseen, major anti-awesomeness of seizures/suicides, or 2) they’re actually socio- or psychopaths. Given the infrequency of real sociopathy, it’s safe to assume a lot of the former happens, especially over the impersonal, empathy-sapping environment of the Internet.
The answer you are referring to is probably the utilitarian one that you morally-should maximise everyone's preferences, not just your own. But that's already going well beyond the naive "awesomeness" theory presented above.
This is assuming, of course, that "awesome" is subjective. Maybe if we had a universal scale of awesomeness...
You guys are thinking too hard about this.
Either don't think about it, and maximize awesome (of which there is only one).
Or read the metaethics sequence, where you will realize that you need to maximize awesome (of which there is only one).
Look, here's the problem with that whole line of reasoning, right in the joy of the merely good:
First, our esteemed host says:
And then, he says:
But then suddenly, at the end, he flips around and says:
Speaking as a living child who has been dragged ONTO train tracks, I don't buy it. Domination and infliction of misery are just as "awesome" as altruism and sharing joy.
I guess the crux of my point is, if you don't think that the weak are contemptible and deserve to suffer, what are you gonna do about it? Because just trying to convince people that it's less "awesome" than giving them all free spaceships is going to get you ignored or shot, depending on how much the big boys think you'll interfere with them inflicting suffering for the lulz.
This doesn't seem to be true over the long haul-- somehow, the average behavior of the big boys has become less cruel. Part of this is punishment, but even getting punishment into place takes convincing people, some of them high status, that letting the people in charge do what they please isn't the most awesome alternative.
Alternatively, maybe the cruelest never get convinced, it's just people have been gradually solving the coordination problem for those who don't want cruelty.
At least, at the levels most people operate at. Things tend to get better from the top down; for the bottom-dwellers, things are still pretty desperate and terrifying.
I agree, but I think your initial general claim of no hope for improvement was too strong.
Let me put it this way: I would be willing to bet my entire net worth, if it weren't negative, that if some kind of uplifting Singularity happens, my broke ass gets left behind because I won't have anything to invest into it and I won't have any way to signal that I'm worth more than my raw materials.
You're wrong.
By the point of the singularity no human has any instrumental value. Everything any human can do, a nanotech robot AI can do better. No one will be able to signal usefulness or have anything to invest; we will all be instrumentally worthless.
If the singularity goes well at all, though, humanity will get its shit together and save everyone anyways, because people are intrinsically valuable. There will be no concern for the cost of maintaining or uplifting people, because it will be trivially small next to the sheer power we would have, and the value of saving a friend.
Don't assume that everyone else will stay uncaring, once they have the capacity to care. We would save you, along with everyone else.
Downvotes for being unreasonably dramatic.
But there is no evidence that the real world works on the Awesomeness theory.
Even if it did, something non-awesome would STILL be able to "win" if it had enough resources and was well-optimized. At which point, why isn't its "non-awesome" idea more important than your idea of "awesome"?
(Yes, this is the old Is-Ought thing; I'm still not convinced that it's a fallacy. I think I might be a nihilist at heart.)
If you taboo "important" you might discover you don't know what you're talking about.