Comment author: cunning_moralist 11 October 2016 07:18:30AM 1 point [-]

The author is far from alone in his view that both a complete rightness criterion and a consistent decision method must be required of all serious moral theories.

Among hedonistic utilitarians it's quite normal to demand both completeness, to include all (human) situations, and consistency, to avoid contradictions. The author simply describes what's normal among consequentialists, who, after all, are more or less the rational ones. ;-) There's one interesting exception though! The demand to include all situations, including the non-human ones, is radical, and quite a hard challenge for hedonistic utilitarians, who do have problems with the bloodthirsty predators of the jungle.

Comment author: DanArmak 11 October 2016 11:32:50AM *  0 points [-]

I'm confused. Is it normal to regard all possible acts and decisions as morally significant, and to call a universal decision theory a moral theory?

What meaning does the word "moral" even have at that point?

Comment author: ChristianKl 10 October 2016 09:43:51PM 0 points [-]

I don't think I would need to define it that way for the above comment to be coherent.

Comment author: DanArmak 10 October 2016 09:56:24PM 0 points [-]

Of course not. Then you meant simply the success of the goals of the group's creators?

Comment author: DanArmak 10 October 2016 09:54:20PM 0 points [-]

The author says a moral theory should:

  • "Cover how one should act in all situations" (instead of dealing only with 'moral' ones)
  • Contain no contradictions
  • "Cover all situations in which somebody should perform an action, even if this “somebody” isn’t a human being"

In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by "moral theory".

Comment author: ChristianKl 10 October 2016 09:02:41PM 0 points [-]

The success of a Facebook group depends a lot on how it gets promoted and whether there are a few people who care about creating content for it.

Comment author: DanArmak 10 October 2016 09:32:39PM 0 points [-]

Is the 'success' of a group its number of members, regardless of actual activity?

Comment author: Lumifer 10 October 2016 04:43:09PM 0 points [-]

If it's a tool AGI, I don't see how it would help with friendliness, and if it's an active self-developing AGI, I thought the canonical position of LW was that there could be only one? and it's too late to do anything about friendliness at this point?

Comment author: DanArmak 10 October 2016 09:32:01PM 0 points [-]

I agree there would probably only be one successful AGI, so it's not the first step of many. I meant it would be a step in that direction. Poor phrasing on my part.

Comment author: Lumifer 10 October 2016 03:10:09PM 1 point [-]

Options (b) and (c) are basically wishes and those are complex X-D

"Not kill us" is an easy criterion, we already have an AI like that, it plays Go well.

Comment author: DanArmak 10 October 2016 04:18:24PM 3 points [-]

We don't have an AGI that doesn't kill us. Having one would be a significant step towards FAI. In fact, "a human-equivalent-or-better AGI that doesn't do anything greatly harmful to humanity" is a pretty good definition of FAI, or maybe "weak FAI".

Comment author: Lumifer 10 October 2016 02:48:06PM *  -2 points [-]

Nothing, because we still don't know what a friendly AI is.

Comment author: DanArmak 10 October 2016 02:55:47PM 2 points [-]

We do know it isn't an AI that kills us. Options b and c still qualify.

Comment author: ChristianKl 10 October 2016 12:53:15PM 5 points [-]

Nothing. I don't think facebook membership counts are a good measurement.

Comment author: DanArmak 10 October 2016 02:54:19PM 4 points [-]

Or possibly they are accurate measurements of the rates of Facebook use among these two groups. Maybe it's a good thing if people who are concerned about existential risk do serious things about it instead of participating in a Facebook group.

Comment author: WhySpace 08 October 2016 09:44:27PM 1 point [-]

I agree with you on the complexity of value. However, perhaps we are imagining the ideal way of aggregating all those complex values differently. I absolutely agree that the simple models I keep proposing for individual values are spherical cows, and ignore a lot of nuance. I just don't see things working radically differently when the nuance is added in, and the values aggregated.

That sounds like a really complex discussion though, and I don't think either of us is likely to convince the other without a novel's worth of text. However, perhaps I can convince you that you already are suppressing some impulses, and that this isn't always disastrous. (Though it certainly can be, if you choose the wrong ones.)

"there aren't large benefits to be gained by discarding some emotions and values."

Isn't that what akrasia is?

If I find that part of me values one marshmallow now at the expense of two later, and I don't endorse this upon reflection, wouldn't it make sense to try to decrease such impulses? Removing them may be unnecessarily extreme, but perhaps that's what some nootropics do.

Similarly, if I were to find that I gained a sadistic pleasure from something, I wouldn't endorse that outside of well-defined S&M. If I had an alcoholism problem, I'd similarly dislike my desire for alcohol. I suspect that strongly associating cigarettes with disgust is helpful in counteracting the impulse to smoke.

If I understand correctly, some Buddhists try to eliminate suffering by eliminating their desires. I find this existentially terrifying. However, I think that boosting and suppressing these sorts of impulses is precisely what psychologists call conditioning. A world where no one refines or updates their natural impulses is just as unsettling as the Buddhist suppression of all values.

So, even if you don't agree that there are cases where we should suppress certain pro-social emotions, do you agree with my characterization of antisocial emotions and grey area impulses like akrasia?

(I'm using values, impulses, emotions, etc. fairly interchangeably here. If what I'm saying isn't clear, let me know and I can try to dig into the distinctions.)

Comment author: DanArmak 08 October 2016 10:24:54PM *  0 points [-]

I think I understand your point better now, and I agree with it.

My conscious, deliberative, speaking self definitely wants to be rid of akrasia and to reduce time discounting. If I could self modify to remove akrasia, I definitely would. But I don't want to get rid of emotional empathy, or filial love, or the love of cats that makes me sometimes feed strays. I wouldn't do it if I could. This isn't something I derive from or defend by higher principles, it's just how I am.

I have other emotions I would reduce or even remove, given the chance. Like anger and jealousy. These can be moral emotions no less than empathy - righteous anger, justice and fairness. It stands to reason some people might feel this way about any other emotion or desire, including empathy. When these things already aren't part of the values their conscious self identifies with, they want to reduce or discard them.

And since I can be verbally, rationally convinced to want things, I can be convinced to want to discard emotions I previously didn't.

It's a good thing that we're very bad at actually changing our emotional makeup. The evolution of values over time can lead to some scary attractor states. And I wouldn't want to permanently discard one feeling during a brief period of obsession with something else! Because actual changes take a lot of time and effort, we usually only go through with the ones we're really resolved about, which is a good condition to have. (Also, how can you want to develop an emotion you've never had? Do you just end up with very few emotions?)

Comment author: DanArmak 08 October 2016 09:44:11PM *  4 points [-]

These six principles are true as far as they go, but I feel they're so weak as not to be very useful. I'd like to offer a more cynical view.

The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents. This has a name: Defense Against the Dark Arts. And I feel like these six principles are about as effective in real life as taking the canonical DADA first year class and then going up against HPMOR Voldemort.

With today's information technology and globalization, we're all exposed to world-class Dark Arts practitioners. Not being vulnerable to Cialdini's principles might help defend you in an argument with your coworker. But it won't serve you well when doubting something you read in the news or in an FDA-endorsed study.

And whatever your coworker or your favorite blog was arguing probably derives from such a curated source to begin with. All arguments rest on factual beliefs - outside of math, anyway - and most of us are very far from being able to verify the facts we believe. And your own prior beliefs need to be well supported, to avoid being rejected on the same basis.
