nyan_sandwich comments on Morality is Awesome - Less Wrong

86 [deleted] 06 January 2013 03:21PM


Comment author: [deleted] 05 January 2013 08:18:04AM 0 points [-]

That doesn't sound awesome.

If you thought it was, I'd have to conclude that you were wrong.

Also,

If we still insist on being confused, ...

Comment author: Peterdjones 05 January 2013 03:47:48PM *  2 points [-]

That doesn't sound awesome.

To you. But it might to someone else. Morality is the difficult problem of how groups of humans or other entities interact with each other so as to realise their own preferences without infringing on others' preferences. Just declaring or pretending that everyone has the same preference doesn't solve or even address the problem of morality. It is a non-starter. I know it is supposed to be naive, but it is too naive.

Comment author: ialdabaoth 05 January 2013 08:21:10AM 0 points [-]

That doesn't sound awesome.

If you thought it was, I'd have to conclude that you were wrong.

Then which one of us is right just comes down to an appeal to force, doesn't it? (and all those various kinds of meta-force).

I.e., if I can incarnate my Diabolus-1 Antifriendly AI and grow it up to conquer the universe before you can get your Friendly AI up to speed, I win.

Comment author: [deleted] 05 January 2013 08:28:38AM 2 points [-]

This one is actually really subtle, and I forget the solution, and it's in the metaethics sequence somewhere (look for pebblesorters), but the punchline is that the outcome sucks.

So yes, you and your Diabolus-1 "win", but the outcome still sucks.

Comment author: ialdabaoth 05 January 2013 08:30:11AM *  3 points [-]

Sure, but... okay, I'm going to go concrete here.

I suffered a lot of abuse as a child; as a result, sometimes my mind enters a state where its adopted optimization process is "maximize suck". In this state, I tend to be MORE rational about my goals than I am when I'm in a more 'positive' state.

So I don't have to stretch very far to imagine a situation where the outcome sucking - MAXIMALLY sucking - is the damned POINT. Because fuck you (and fuck me too).

Comment author: [deleted] 05 January 2013 08:34:49AM 0 points [-]

So the outcome still sucks, if you are not maximizing actual awesomeness.

Comment author: ialdabaoth 05 January 2013 08:42:27AM *  9 points [-]

Not necessarily. Plenty of people think the Saw films are awesome. Plenty of people on 4chan think that posting flashing images to epileptic support boards is awesome, and pushing developmentally disabled children to commit suicide and then harassing their parents forever with pictures of the suicide is awesome.

They will, in fact, use "awesome" explicitly to describe what they're doing.

Comment author: Jabberslythe 05 January 2013 07:34:59PM 2 points [-]

I thought the first Saw film was awesome. It was a cool gory story about making the most of life. It's fiction, so nobody actually got hurt and there is no secondary consideration of awesomeness there.

Some people think that the prospect of making disabled kids commit suicide is awesome; fewer people think that actually doing so is awesome. I don't think that people who actually do so are awesome.

I think that's a relatively standard use of "awesome".

Comment author: NoisyEmpire 05 January 2013 08:00:21PM 0 points [-]

For much the same reasons that people can be mistaken about their own desires, people can be mistaken about what they would actually consider awesome if they were to engage in an accurate modeling of all the facts. E.g. people who post flashing images to epileptic boards or suicide pictures to bereaved parents are either 1) failing to truly envision the potential results of their actions, and consequently overvaluing the immediate minor awesomeness of the irony of the post or whatever against the distant, unseen, major anti-awesomeness of seizures/suicides, or 2) actually socio- or psychopaths. Given the infrequency of real sociopathy, it's safe to assume a lot of the former happens, especially in the impersonal, empathy-sapping environment of the Internet.

Comment author: Peterdjones 05 January 2013 03:51:55PM 1 point [-]

The answer you are referring to is probably the utilitarian one: that you morally-should maximise everyone's preferences, not just your own. But that already goes well beyond the naive "awesomeness" theory presented above.

Comment author: [deleted] 05 January 2013 08:25:14AM 0 points [-]

This is assuming, of course, that "awesome" is subjective. Maybe if we had a universal scale of awesomeness...

Comment author: [deleted] 05 January 2013 08:39:44AM 5 points [-]

You guys are thinking too hard about this.

Either don't think about it, and maximize awesome (of which there is only one).

Or read the metaethics sequence, where you will realize that you need to maximize awesome (of which there is only one).

Comment author: ialdabaoth 05 January 2013 11:57:05AM 5 points [-]

Look, here's the problem with that whole line of reasoning, right in the joy of the merely good:

First, our esteemed host says:

Every time you say should, it includes an implicit criterion of choice; there is no should-ness that can be abstracted away from any criterion.

And then, he says:

But there are possible minds that implement any utility function, so you don't get any advice there about what you should do.

But then suddenly, at the end, he flips around and says:

Look to the living child, successfully dragged off the train tracks. There you will find your justification. What ever should be more important than that?

Speaking as a living child who has been dragged ONTO train tracks, I don't buy it. Domination and infliction of misery are just as "awesome" as altruism and sharing joy.

I guess the crux of my point is, if you don't think that the weak are contemptible and deserve to suffer, what are you gonna do about it? Because just trying to convince people that it's less "awesome" than giving them all free spaceships is going to get you ignored or shot, depending on how much the big boys think you'll interfere with them inflicting suffering for the lulz.

Comment author: NancyLebovitz 05 January 2013 02:16:05PM 4 points [-]

Because just trying to convince people that it's less "awesome" than giving them all free spaceships is going to get you ignored or shot, depending on how much the big boys think you'll interfere with them inflicting suffering for the lulz.

This doesn't seem to be true over the long haul-- somehow, the average behavior of the big boys has become less cruel. Part of this is punishment, but even getting punishment into place takes convincing people, some of them high status, that letting the people in charge do what they please isn't the most awesome alternative.

Alternatively, maybe the cruelest never get convinced; it's just that people have been gradually solving the coordination problem for those who don't want cruelty.

Comment author: ialdabaoth 05 January 2013 02:23:53PM 2 points [-]

At least, at the levels most people operate at. Things tend to get better from the top down; for the bottom-dwellers, things are still pretty desperate and terrifying.

Comment author: NancyLebovitz 05 January 2013 10:36:12PM 0 points [-]

I agree, but I think your initial general claim of no hope for improvement was too strong.

Comment author: ialdabaoth 06 January 2013 02:05:56AM 0 points [-]

Let me put it this way: I would be willing to bet my entire net worth, if it weren't negative, that if some kind of uplifting Singularity happens, my broke ass gets left behind because I won't have anything to invest into it and I won't have any way to signal that I'm worth more than my raw materials.

Comment author: [deleted] 06 January 2013 02:27:49AM 3 points [-]

You're wrong.

By the point of the singularity no human has any instrumental value. Everything any human can do, a nanotech robot AI can do better. No one will be able to signal usefulness or have anything to invest; we will all be instrumentally worthless.

If the singularity goes well at all, though, humanity will get its shit together and save everyone anyways, because people are intrinsically valuable. There will be no concern for the cost of maintaining or uplifting people, because it will be trivially small next to the sheer power we would have, and the value of saving a friend.

Don't assume that everyone else will stay uncaring, once they have the capacity to care. We would save you, along with everyone else.

Downvotes for being unreasonably dramatic.

Comment author: Peterdjones 05 January 2013 04:01:45PM 0 points [-]

This doesn't seem to be true over the long haul-- somehow, the average behavior of the big boys has become less cruel.

But there is no evidence that the real world works on the Awesomeness theory.

Comment author: ialdabaoth 05 January 2013 08:27:39AM -1 points [-]

Even if we did, something non-awesome would STILL be able to "win" if it had enough resources and was well-optimized. At which point, why isn't its "non-awesome" idea more important than your idea of "awesome"?

(Yes, this is the old Is-Ought thing; I'm still not convinced that it's a fallacy. I think I might be a nihilist at heart.)

Comment author: nshepperd 05 January 2013 04:48:28PM 0 points [-]

If you taboo "important" you might discover you don't know what you're talking about.