Nominull comments on SotW: Check Consequentialism - Less Wrong

38 Post author: Eliezer_Yudkowsky 29 March 2012 01:35AM


Comment author: Nominull 24 March 2012 03:05:19AM 2 points [-]

What, we're not even allowed to have identities now?

Comment author: Vladimir_Nesov 24 March 2012 12:12:43PM *  10 points [-]

Identity shouldn't act as a normative consideration. "He's going to do X because he belongs to reference class Y" may be a valid outside-view observation, a way of predicting behavior based on identity. On the other hand, "I'm going to do X because I belong to reference class Y" is an antipattern: a fatalist decision rule that mistakes a descriptive explanation for a reason, one that may be used to predict, but not to decide. An exception is where you might want to preserve your descriptive identity, but then the reason you do that is not itself identity-based.

So you can have an identity, the way you can have a pair of gloves or a Quirrell, just don't consider it part of morality.

Comment author: Will_Newsome 24 March 2012 12:19:05PM *  6 points [-]

Identity shouldn't act as a normative consideration for an angel, maybe. For a human, "identity" is a pragmatic reification of cached complexes of moral conclusions that aren't immediately accessible for individual analysis. "Normative" is a misleading word here.

Comment author: Vladimir_Nesov 24 March 2012 01:01:20PM 0 points [-]

Identity shouldn't act as a normative consideration for an angel, maybe.

Still shouldn't for a human, even if it does. It's a normative claim, not a descriptive one.

Comment author: Will_Newsome 24 March 2012 01:33:28PM 5 points [-]

...Is there a word for "normative given bounded rationality"?

Comment author: Vaniver 24 March 2012 05:12:26PM 3 points [-]

Prescriptive.

Comment author: Vladimir_Nesov 24 March 2012 01:39:12PM 0 points [-]

Bounded rationality is like the mass of the Sun, difficulty of the problem, not a kind of goal.

Comment author: Will_Newsome 24 March 2012 02:25:25PM *  7 points [-]

I don't understand.

If you're trying to dam a river, and you only have 100,000 bricks, then there is a normative solution, i.e., the solution that has the greatest chance of successfully damming the river. Talking about solutions that require one million bricks is talking about a different problem, one that is only relevant to people with millions of bricks. So when you say, "identity shouldn't act as a normative consideration", that sounds to me like, "you should already have one million bricks; there is no normative solution if you only have 100,000 bricks".

Using 100,000 bricks to dam a river isn't using an approximation of the solution you would use if you had a million bricks. That's why I say "normative" is a misleading word here. It implies that you should try to approximate the million-brick solution even when you know you don't have enough bricks to do that: a tenth of a great million-brick dam is one millionth as useful as a complete 100,000-brick dam. Why not just renormalize such that your constraints are part of your environment and thus part of the problem, and find a normative solution given your constraints? Otherwise the normative solution is always to have already solved the problem. "What would Jesus do? Jesus would have had the foresight not to get into this situation in the first place."

"Normative" is always relative to some set of constraints, so I don't see why normative-given-boundedness isn't a useful concept. I'm reminded of Nick Tarleton's intuition that decision theory needs to at some point start taking boundedness into account.

Comment author: Vladimir_Nesov 24 March 2012 02:36:28PM *  1 point [-]

It's useful to take the limitations of decision-making setup into account, but that is not fundamentally different from taking the number of bricks into account. The idealized criteria for comparing the desirability of alternatives don't normally depend on which alternatives are available. People shouldn't die even if it's impossible to keep them from dying.

Comment author: TheOtherDave 24 March 2012 02:28:06PM *  1 point [-]

I'm not sure this is responsive to Will's point... at least, it seems plausible that the moral considerations he considers identity to imperfectly encapsulate are also normative, which is why he refers to them as moral in the first place. That is, I think he means to challenge the idea that identity shouldn't be/isn't a normative consideration.

Comment author: taryneast 25 March 2012 08:49:33AM *  0 points [-]

I agree, but... purposely self-identifying with a reference class that supposedly has the skills you are trying to acquire does seem to make you more likely to actually acquire those skills. E.g. "I'm a hard-working person and hard-working people wouldn't just give up" is a way of convincing (/tricking) yourself into actually being a hard-working person.

EDIT: that being said - it certainly wouldn't be consequentialist. :)

Comment author: jschulter 04 April 2012 05:02:19AM 1 point [-]

But it is near-consequentialist: "I'm a hard-working person and hard-working people wouldn't just give up" --> "the act of giving up will make me feel less like a hard-working person and therefore make me less likely to work hard in the future"

Comment author: taryneast 04 April 2012 05:28:52AM 0 points [-]

Yes - it can definitely be re-phrased in consequentialist ways...

Comment author: Wei_Dai 25 March 2012 02:40:29AM 3 points [-]

I previously wrote a comment that seems relevant here:

How to translate identity-based decision making into values and/or beliefs seems non-trivial, and can perhaps be compared to the problem of translating anticipated-reward type decision making into preferences over states of the world or over math.

An agent that lets identity influence its decisions probably deviates from ideal rationality, but how to fix that? If we just excise the identity-based parts of its decision procedure without any compensation, that could easily make it worse off if, for example, its CEV depends on its identity.

Comment author: Manfred 24 March 2012 04:32:24AM 4 points [-]

We are the Borg. Lower your shields and surrender your ships.

Comment author: David_Gerard 24 March 2012 12:12:33PM 2 points [-]

Depends what the consequences of asserting one to yourself are.

Comment author: katydee 24 March 2012 06:33:51AM 5 points [-]
Comment author: Incorrect 24 March 2012 04:58:51AM *  4 points [-]

To become a true rationalist one must shed the trappings of personhood. The rationalist's mind has no goal except rationality itself, no thought except the Bayesian update. Only once you are free of worldly concerns and the concept of autonomy may you see the light of Bayes.

edit: Sorry, I was joking. I thought I was being ridiculous enough for it to be obvious.

Comment author: [deleted] 24 March 2012 01:20:26PM 3 points [-]

The rationalist's mind has no goal except rationality itself

I thought it had the goal of maximizing expected utility.

Comment author: orthonormal 24 March 2012 04:17:12PM 1 point [-]
Comment author: Incorrect 24 March 2012 07:01:33PM 3 points [-]

Sorry, I was joking. I thought I was being ridiculous enough for it to be obvious.

Comment author: orthonormal 24 March 2012 07:05:04PM 3 points [-]

I should have remembered that you've been around for a while, but bear in mind that the joke is just the sort of Straw Vulcan reasoning that some new people think Less Wrong obviously must subscribe to.

Comment author: Will_Newsome 24 March 2012 09:12:27PM *  3 points [-]

'Twas completely obvious to me. I mean seriously, "light of Bayes".

Comment author: [deleted] 11 April 2012 04:48:17PM 1 point [-]

Poe's law applies!

Comment author: fubarobfusco 24 March 2012 07:07:06AM -2 points [-]

Incorrect indeed.

Comment author: handoflixue 29 March 2012 09:27:17PM 0 points [-]

*laughs* The username was a pretty obvious give-away, IMO :)