Comment author: someonewrongonthenet 07 December 2014 07:36:49AM *  6 points

I see... cleverly, it also takes advantage of how many people are afraid to ask for high salaries out of modesty or something.

I kind of view this as defecting and it seems like I have to defect in turn, to counter it (conveniently, I get to move second)... I guess this means I must start quoting highball figures and generally concealing my previous salary if it is lower than I expect the opponent to estimate, and displaying it loudly when it is higher than the opponent would estimate. Is that an effective thing to do?

(When I say it's defecting, I'm not attaching moral value to it or anything. I do want organizations which I want to see succeed do whatever is most rational, even if it is defecting, if that's what all the other agents are doing. Still, I feel like mutual cooperation would be generally more pleasant. I wonder if there is a mechanism to determine a person's true-market-value (as in, taking into account the opportunity costs on both sides) so as to avoid this sort of thing.)

Comment author: Decius 09 December 2014 01:51:37AM 0 points

If they posted a salary range, and it was higher than you would have expected them to offer, would you "cooperate"?

In response to comment by Decius on On Caring
Comment author: AmagicalFishy 25 November 2014 01:12:50AM 0 points

... Oh.

Hm. In that case, I think I'm still missing something fundamental.

In response to comment by AmagicalFishy on On Caring
Comment author: Decius 28 November 2014 06:11:40AM 0 points

I care about self-consistency because an inconsistent self is very strong evidence that I'm doing something wrong.

It's not very likely that taking the minimum steps to make the evidence of the error go away will make the error itself go away.

The general case of "find a self-inconsistency, make the minimum change to remove it" is not error-correcting.

In response to comment by lalaithion on On Caring
Comment author: AmagicalFishy 24 November 2014 05:30:36AM *  2 points

For the most part, I follow—but there's something I'm missing. I think it lies somewhere in: "It would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn't."

Is the underlying "axiom" here that you wish to maximize the number of effects that come from the caring you give to people, because that's what an altruist does? Or that you wish to maximize your caring for people?

To contextualize the above question, here's a (nonsensical, but illustrative) parallel: I get cuts and scrapes when running through the woods. They make me feel alive; I like this momentary pain stimulus. It would be trivial for me to woods-run more and get more cuts and scrapes. Therefore I should just get cuts and scrapes.

I know it's silly, but let me explain: A person usually doesn't want to maximize their cuts and scrapes, even though cuts and scrapes might be appreciated at some point. Thus, the above scenario's conclusion seems silly. Similarly, I don't feel a necessity to maximize my caring—even though caring might be nice at some point. Caring about someone is a product of my knowing them, and I care about a person because I know them in a particular way (if I knew a person and thought they were scum, I would not care about them). The fact that I could know someone else, and thus hypothetically care about them, doesn't make me feel as if I should.

If, on the other hand, the axiom is true—then why bother considering your intuitive "care-o-meter" in the first place?

I think there's something fundamental I'm missing.

(Upon further thought, is there an agreed-upon intrinsic value to caring that my ignorance of some LW culture has led me to miss? This would also explain wanting to maximize caring.)

(Upon further-further thought, is it something like the following internal dialogue? "I care about people close to me. I also care about the fate of mankind. I know that the fate of mankind as a whole is far more important than the fate of the people close to me. Since I value internal consistency, in order for my caring-mechanism to be consistent, my care for the fate of mankind must be proportional to my care for the people close to me. Since my caring mechanism is incapable of actually computing such a proportionality, the next best thing is to be consciously aware of how much it should care if it were able, and act accordingly.")

In response to comment by AmagicalFishy on On Caring
Comment author: Decius 24 November 2014 11:59:59PM 0 points

(Upon further-further thought, is it something like the following internal dialogue? "I care about people close to me. I also care about the fate of mankind. I know that the fate of mankind as a whole is far more important than the fate of the people close to me. Since I value internal consistency, in order for my caring-mechanism to be consistent, my care for the fate of mankind must be proportional to my care for the people close to me. Since my caring mechanism is incapable of actually computing such a proportionality, the next best thing is to be consciously aware of how much it should care if it were able, and act accordingly.")

I care about self-consistency, but being self-consistent is something that must happen naturally; I can't self-consistently say "This feeling is self-inconsistent, therefore I will change this feeling to be self-consistent."

In response to comment by AmagicalFishy on On Caring
Comment author: lalaithion 23 November 2014 11:32:52PM 1 point

For me, personally, I know that you could choose a person at random in the world, write a paragraph about them, and give it to me, and by doing that, I would care about them a lot more than before I had read that piece of paper, even though reading that paper hadn't changed anything about them. Similarly, becoming friends with someone doesn't usually change the person that much, but increases how much I care about them an awful lot.

Therefore, I look at all 7 billion people in the world, and even though I barely care about them, I know that it would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn't.

Maybe a better way of putting this is that I know that all of the people in the world are potential carees of mine, so I should act as though I already care about these people in deference to possible future-me.

In response to comment by lalaithion on On Caring
Comment author: Decius 24 November 2014 11:58:13PM 0 points

I look at a box of 100 bullets, and I know that it would be trivial for me to be in mortal danger from any one of them, but the box is perfectly safe.

It is trivial-ish for me to meet a trivial number of people and start to care about them, but it is certainly nontrivial to encounter a nontrivial number of people.

Comment author: [deleted] 08 November 2014 04:09:28AM 7 points

You could say they died of insufficient friendly AGI, but not from an AGI that was insufficiently friendly.

In response to comment by [deleted] on Rationality Quotes November 2014
Comment author: Decius 16 November 2014 01:37:45AM *  5 points

By that logic, every death everywhere can be attributed to insufficient friendly AGI.

Among the causes, the easiest to prevent was pilot error due to inadequate training.

In response to comment by Wes_W on On Caring
Comment author: Jiro 15 October 2014 03:55:10PM 3 points

Your second category of response seems to say "my intuitions about considering a group of people, taken billions at a time, aren't reliable, but my intuitions about considering the same group of people, one at a time, are". You then conclude that you care because taking the billions of people one at a time implies that you care about them.

But it seems that I could apply the same argument a little differently--instead of applying it to how many people you consider at a time, apply it to the total size of the group. "my intuitions about how much I care about a group of billions are bad, even though my intuitions about how much I care about a small group are good." The second argument would, then, imply that it is wrong to use your intuitions about small groups to generalize to large groups--that is, the second argument refutes the first. Going from "I care about the people in my life" to "I would care about everyone if I met them" is as inappropriate as going from "I know what happens to clocks at slow speeds" to "I know what happens to clocks at near-light speeds".

In response to comment by Jiro on On Caring
Comment author: Decius 16 October 2014 04:44:34AM 0 points

I'll go a more direct route:

The next time you are in a queue with strangers, imagine the two people behind you (that you haven't met before and don't expect to meet again and didn't really interact with much at all, but they are /concrete/). Put them on one track in the trolley problem, and one of the people that you know and care about on the other track.

If you prefer to save two strangers to one tribesman, you are different enough from me that we will have trouble talking about the subject, and you will probably find me to be a morally horrible person in hypothetical situations.

In response to comment by Decius on On Caring
Comment author: Wes_W 15 October 2014 08:32:01AM 2 points

If you don't feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people has scope sensitivity, what observation causes you to believe that you do care about everyone equally?

I can think of two categories of responses.

One is something like "I care by induction". Over the course of your life, you have ostensibly had multiple experiences of meeting new people, and ending up caring about them. You can reasonably predict that, if you meet more people, you will end up caring about them too. From there, it's not much of a leap to "I should just start caring about people before I meet them". After all, rational agents should not be able to predict changes in their own beliefs; you might as well update now.

The other is something like "The caring is much better calibrated than the not-caring". Let me use an analogy to physics. My everyday intuition says that clocks tick at the same rate for everybody, no matter how fast they move; my knowledge of relativity says clocks slow down significantly near c. The problem is that my intuition on the matter is baseless; I've never traveled at relativistic speeds. When my baseless intuition collides with rigorously-verified physics, I have to throw out my intuition.

I've also never had direct interaction with or made meaningful decisions about billions of people at a time, but I have lots of experience with individual people. "I don't care much about billions of people" is an almost totally unfounded wild guess, but "I care lots about individual people" has lots of solid evidence, so when they collide, the latter wins.

(Neither of these are ironclad, at least not as I've presented them, but hopefully I've managed to gesture in a useful direction.)

In response to comment by Wes_W on On Caring
Comment author: Decius 16 October 2014 12:27:57AM *  0 points

To address your first category: When I meet new people and interact with them, I do more than gain information; I perform transitive actions that move them out of the group "people I've never met" that I don't care about, and into the group of people that I do care about.

Addressing your second: I found that a very effective way to estimate my intuition would be to imagine a group of X people that I have never met (or specific strangers) on one minecart track, and a specific person that I know on the other. I care so little about small groups of strangers, compared to people that I know, that I find my intuition about billions is roughly proportional; the dominant factor in my caring about strangers is that some number of people who are strangers to me are important to people who are important to me, and therefore indirectly important to me.

In response to comment by hyporational on On Caring
Comment author: Weedlayer 09 October 2014 09:53:24PM 0 points

There's no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn't it be possible to judge most morality on the basis of these common features, making an argument like "wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing"? I think this is basically the point EY makes about the "psychological unity of humankind".

Of course, this dream goes out the window with UFAI and aliens. Let's hope we don't have to deal with those.

In response to comment by Weedlayer on On Caring
Comment author: Decius 15 October 2014 07:43:27AM 0 points

Shouldn't it be possible to judge most morality on the basis of these common features, making an argument like "wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing"?

Yes, it should. However, in the hypothetical case involved, the reason is not true; the hypothetical brain does not have the quality "Has empathy and values survival and survival is impaired by murder".

We are left with the simple truth that evolution (including memetic evolution) selects for things which produce offspring that imitate them, and "Has a moral system that prohibits murder" is a quality that successfully creates offspring that typically have the quality "Has a moral system that prohibits murder".

The different quality "Commits wanton murder" is less successful at creating offspring in modern society, because convicted murderers don't get to teach children that committing wanton murder is something to do.

In response to comment by AnthonyC on On Caring
Comment author: NancyLebovitz 09 October 2014 02:14:29PM 0 points

That's something I've wondered about, and also what you could accomplish by having an organization of people with unusually high Dunbar's numbers.

In response to comment by NancyLebovitz on On Caring
Comment author: Decius 15 October 2014 07:32:26AM 0 points

Or a breeding population selecting for higher Dunbar's numbers.

Or does that qualify as bioengineering?

In response to On Caring
Comment author: Decius 15 October 2014 07:27:02AM 1 point

If you don't feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people has scope sensitivity, what observation causes you to believe that you do care about everyone equally?

Serious question; I traverse the reasoning the other way, and since I don't care much about the aggregate six billion people I don't know, I divide and say that I don't care more than one six-billionth as much about the typical person that I don't know.

People that I do know, I do care about, but I don't have to multiply to figure my total caring; I have to add.
