
Comment author: DanielLC 26 November 2014 04:20:42AM 0 points [-]

Why are you adding utility functions together? We're discussing what an effective altruist who cares about animals should do as an individual. We are not trying to work out CEV or something. If we were, I'd hope animals would get counted for more than just how much humans care about them on average. If Alice is an effective altruist and Bob and Carol are not (in which case it can be assumed that Bob and Carol's money would otherwise be wasted on themselves when they don't need it very much, or possibly on charity that doesn't do very much good), then Alice shouldn't care much how much Bob and Carol pay.

Perhaps it's more obvious if we suppose that they somehow get hold of a list of people who are keen on vegetarianism, and find that each one of those 10,000 people values a person-month of vegetarianism at $10. Is it now a good deal if all of them spend $10 to make Alice a vegetarian for a month? Has her abstinence from meat for that month suddenly done 3000x more good than when it was just her, Bob and Carol who knew about it?

I don't think a situation that extreme can really come up. If the whole thing will stop because of one person not donating, there's no way the other 10,000 people will all donate.

Comment author: gjm 26 November 2014 01:04:17PM 0 points [-]

Why are you adding utility functions together?

I'm not. I'm (well, actually Fluttershy is and I'm agreeing) adding amounts of money together, and I'm suggesting that to make the Alice/Bob/Carol outcome seem like a good one you'd have to add together utility functions that ought not to be added together (even if one were generally willing to add up utility functions).

If Alice is an effective altruist and Bob and Carol are not, [...]

Look again at the description of the situation: Alice, Bob and Carol are all making their ethical position on meat-eating a central part of their decision-making. For each of them, at least about half of their delta-utility-converted-to-dollars in this situation is coming from the reduction in animal suffering that they anticipate. They are choosing their actions to optimize the outcome including this highly-weighted concern about animal suffering. This is the very definition of effective altruism. (Or at least of attempted effective altruism. Any of them might be being incompetent. But we don't usually require competence before calling someone an EA.)

I don't think a situation that extreme can really come up.

If some line of reasoning gives absurd results in such an extreme situation, then either there's something wrong with the reasoning or there's something about the extremeness of the situation that invalidates the reasoning even though it wouldn't invalidate it in a less extreme situation. I don't see that there's any such thing in this case.

BUT

I do actually think there's something wrong with Fluttershy's example, or at least something that makes it more difficult to reason about than it needs to be, and that's the way that the participants' values and/or knowledge change. Specifically, at the start of the experiment Alice is eating meat even though (on reflection + persuasion) she actually values a month's animal suffering more than a month's meat-eating pleasure. Are we to assess the outcome on the basis of Alice's final values, or her initial values (whatever they may have been)? I think different answers to this question yield different conclusions about whether something paradoxical is going on.

Comment author: DanielLC 24 November 2014 05:41:54AM *  1 point [-]

How much do they disvalue each other losing money? If they don't care at all, then the given scenario would be better than nothing for all involved. If they do care, then that should count as a cost, and should be considered accordingly. I generally value someone having a few dollars as worth vastly less than the same amount of money donated to the best charity, so I would be willing to pay essentially that much to get that person to donate.

Edit:

Consider this similar scenario. Alice, Bob, and Carol are roommates. They find a nice picture at a store for $30. Alice values having the picture in their living room at $10, Bob at $15, and Carol at $20. They agree to split the cost, with Alice paying $5, Bob paying $10, and Carol paying $15. This adds up to a total cost of $30 across all parties, even though no party actually values the picture at more than $20. Is there any sort of paradox going on here?
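To make the arithmetic explicit, here is a minimal sketch in Python; the dollar figures are the ones from the scenario above, and everything else is just illustrative scaffolding:

```python
# Worked version of the roommates example; figures taken from the scenario above.
valuations = {"Alice": 10, "Bob": 15, "Carol": 20}  # value each places on having the picture ($)
payments = {"Alice": 5, "Bob": 10, "Carol": 15}     # agreed cost split ($)
price = 30

assert sum(payments.values()) == price  # the split covers the $30 price exactly

for name, value in valuations.items():
    surplus = value - payments[name]
    print(f"{name}: pays ${payments[name]}, values it at ${value}, net gain ${surplus}")

# Every net gain is positive ($5 each), so each roommate comes out ahead,
# even though no single valuation reaches the $30 price.
```

Each roommate pays less than the picture is worth to them, so the $30 price is covered collectively without anyone being made worse off.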

Comment author: gjm 25 November 2014 09:38:47PM 1 point [-]

In the case of the picture, presumably what Alice values is "Alice being able to look at the picture", what Bob values is "Bob being able to look at the picture", and likewise for Carol. (If not -- if each is making a serious attempt to include the others' benefit from the picture -- then indeed their decision is probably a mistake.)

But with Alice, Bob and Carol all interested in having someone become a vegetarian, what they're valuing is (something like) "a person-month less of animal-eating", and if you add up all their individual values for that you're double-counting (er, triple-counting).

Perhaps it's more obvious if we suppose that they somehow get hold of a list of people who are keen on vegetarianism, and find that each one of those 10,000 people values a person-month of vegetarianism at $10. Is it now a good deal if all of them spend $10 to make Alice a vegetarian for a month? Has her abstinence from meat for that month suddenly done 3000x more good than when it was just her, Bob and Carol who knew about it?
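A rough sketch of the arithmetic behind the "3000x" figure, with the dollar amount and group sizes taken from the example above (the Python is purely illustrative):

```python
# Sketch of the arithmetic behind the "3000x" figure above.
value_per_person_month = 10   # each supporter values a person-month of vegetarianism at $10

small_group = 3               # just Alice, Bob and Carol know about it
large_group = 10_000          # the hypothetical list of people keen on vegetarianism

summed_value_small = small_group * value_per_person_month  # $30
summed_value_large = large_group * value_per_person_month  # $100,000

print(summed_value_large / summed_value_small)  # ~3333, i.e. roughly the "3000x" in the comment

# The outcome being valued (one person-month without meat) is unchanged;
# only the number of valuations being summed has grown, which is the
# double-counting (here ten-thousand-fold counting) objected to above.
```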

Comment author: DanielLC 24 November 2014 05:44:17AM 1 point [-]

I consider oysters fine to eat. I am told that, even if insects are sentient, they most likely would enjoy the sort of conditions that would be used in factory farming, so they're good too. I have not been told if that extends to shrimp.

Comment author: gjm 25 November 2014 09:30:15PM 0 points [-]

I have a hazy memory of reading that the conditions under which (some) shrimp are farmed are very bad for the people involved. My memory is hazy and I didn't check very carefully when I heard that, but if you eat a lot of shrimp you might want to investigate.

Comment author: CAE_Jones 25 November 2014 12:48:25AM 1 point [-]

It seems that, in order to accomplish anything, one needs some combination of conscientiousness, charisma, and/or money*. It seems that each of the three can strengthen the others:

  • Conscientiousness correlates with earning potential.
  • A conscientious person can exert extraordinary effort to learn, practice, and internalize behaviors that increase charisma.
  • A charismatic person can make connections and get deals and convince people to give them money.
  • Money can buy charisma/conscientiousness training or devices, or can pay people to be charismatic/conscientious in pursuit of one's goals.

If someone lacks all of these resources severely enough, is there any way to correct that? It rather seems like the answer is "no, but most people can't imagine someone with that much of a deficit in all three at the same time".

* Yes, I could have gone for alliteration with "cash", "credit", or "capital". Money seems different enough that the dissonance seemed like a better idea at the time.

Comment author: gjm 25 November 2014 12:17:31PM 2 points [-]

All of those things can be mitigated by other traits. Connections can be useful even without very much charisma. Cleverness can lead to pretty good earning potential even with relatively little conscientiousness, and may help one think of ways to improve charisma and conscientiousness. At any given level of earning potential, being cheap ("frugal" would be a better word but begins with the wrong letter) eases the transition from gradually sliding into debt to gradually accumulating savings. Other aspects of character besides conscientiousness make a difference -- e.g., a reputation for honesty may be helpful.

Given a bad enough deficit in everything that matters, it's certainly possible to be so screwed that recovery is unlikely. It's also possible to overestimate those deficits and the resulting screwage, e.g. on account of depression. There's probably a nasty positive feedback loop where doing so makes getting unscrewed harder.

Comment author: [deleted] 22 November 2014 05:16:38AM 1 point [-]

downvotes on articles are not publicly visible

They are, too, iff the user has “Make my votes public” checked in their preferences. Same with upvotes.

Comment author: gjm 22 November 2014 10:17:17AM 1 point [-]

Oh, hey, another thing I didn't know about. Thanks. Not surprisingly given that it's a non-default preference, it seems not to be used much. (I checked the 15 people in the most-karma-in-30-days list and two had it enabled: NancyLebovitz and Capla.)

Comment author: HalMorris 21 November 2014 03:45:42PM 2 points [-]

The easiest way to filter out 99 percent of this is to ignore anything that has no impact on your life (ie doesn't pay rent).

Eh? If I were renting, I think that would have an impact on my life -- so maybe this is yet another metaphor I never heard of.

If everyone were processing reality to the best of their analytical (and other) abilities, and honestly passing on the conclusions they reach, then virtuosity at recognizing rational fallacies would go a lot further than I think it actually does; I'm afraid much of what we need is a social understanding of others.

Just FWIW, Asperger's types, which many people I encounter here proclaim themselves to be, have a chance to do this better than other people, because they have to do consciously what others have no idea they're doing. By the way, a book recommendation: The Journal of Best Practices by David Finch. Very funny and enlightening, about an Asperger's/non-Asperger's mixed marriage. My wife and I had a good time reading it.

Comment author: gjm 21 November 2014 06:32:29PM 4 points [-]

maybe this is yet another metaphor I never heard of.

Yup. See: Making beliefs pay rent.

Comment author: HalMorris 21 November 2014 03:11:56PM 2 points [-]

Ah, another irregular verb. I am a deep and original thinker, synthesising good ideas from multiple sources without regard to ideology.

I'm going over the verbs trying to locate what you're referring to as an irregular verb. Am I making a mistake? Does "irregular verb" have some metaphorical connotation I'm not aware of?

You seem to follow with three likely different interpretations of the same behavior. If I understand it correctly, that is kind of interesting, I'll warrant.

I am deeply suspicious when people try to explain away their opponents' beliefs, rather than defeat them intellectually

So you have a criterion for being skeptical of (I won't say "explaining away", which would be presumptuous) my arguments, one having to do with the style of my argument rather than its content. That is good - I think we all should have such criteria, unless we plan to intellectually take apart all of the thousands upon thousands of assertions that cross our paths.

I have been proposing one such. You just proposed another, one which is generally pretty good.

Once you criticize something as "explaining away", most of what else you say is apt to be redundant.

Comment author: gjm 21 November 2014 06:30:45PM 4 points [-]

Does "irregular verb" have some metaphorical connotation I'm not aware of?

Yes. (At least with a plausible guess at what you're aware of.) The point is precisely the observation you make that these are three descriptions of the same behaviour; the implied criticism here is that you (or some hypothetical person who somewhat resembles you) choose very differently-biased descriptions of the same behaviour depending on whether it's your own or someone else's. (The comparison is of course with irregular verbs in natural languages -- I am / you are / he is. The main point is the difference between the "I" and "he" versions, the "you" typically being something intermediate.)

So it's more or less an accusation of insincerity. Salemicus is suggesting that you are hostile to some varieties of eclecticism when other people do them, but not when you do the same yourself. (I have no idea what evidence, if any, he has.)

Comment author: IlyaShpitser 21 November 2014 11:44:00AM *  48 points [-]

I am no PR specialist, but I think relevant folks should agree on a simple, sensible message accessible to non-experts, and then just hammer that same message relentlessly. So, e.g., why mention "Newcomb-like problems"? Like 10 people in the world know what you really mean. For example:

(a) The original thing was an overreaction,

(b) It is a sensible social norm to remove triggering stimuli, and Roko's basilisk was an anxiety trigger for some people,

(c) In fact, there is an entire area of decision theory involving counterfactual copies, blackmail, etc. behind the thought experiment, just as there is quantum mechanics behind Schrödinger's cat. Once you are done sniggering about those weirdos with a half-alive, half-dead cat, you might want to look into serious work done there.


What you want to fight with the message is the perception that you are a weirdo cult/religion. I am very sympathetic to what is happening here, but this is, to use the local language, "a Slytherin problem," not "a Ravenclaw problem."

I expect in 10 years if/when MIRI gets a ton of real published work under its belt, this is going to go away, or at least morph into "eccentric academics being eccentric."


p.s. This should be obvious: don't lie on the internet.

Comment author: gjm 21 November 2014 12:28:58PM 9 points [-]

Yes.

Further: If you search for "lesswrong roko basilisk", the top result is the RationalWiki article (at least, for me on Google right now) and nowhere on the first page is there anything with any input from Eliezer or (so far as such a thing exists) the LW community.

There should be a clear, matter-of-fact article on (let's say) the LessWrong wiki, preferably authored by Eliezer (but also preferably taking something more like the tone Ilya proposes than most of Eliezer's comments on the issue) to which people curious about the affair can be pointed.

(Why haven't I made one, if I think this? Because I suspect opinions on this point are strongly divided and it would be sad for there to be such an article but for its history to be full of deletions and reversions and infighting. I think that would be less likely to happen if the page were made by someone of high LW-status who's generally been on Team Shut Up About The Basilisk Already.)

Comment author: Azathoth123 21 November 2014 06:37:01AM *  2 points [-]

And what caused these differences between these two countries? (Hint: it's not a magical corruption ray located in Mogadishu.) And how will these traits change as more people move from Somalia to France?

Comment author: gjm 21 November 2014 11:34:44AM -1 points [-]

It could be any number of things. Including the one I take it you're looking for, namely some genetic inferiority on the part of the people in country A. But even if that were the entire cause it could still easily be the case that when someone moves from A to B their productivity (especially if expressed in monetary terms) increases dramatically.

I'm actually not quite sure what point you're arguing now. A few comments back, though, your claim was that Nancy was (nearly) contradicting herself by expecting immigrants to (1) be productive in their new country even though (2) their old country is the kind of place where it's really hard to be productive, on the grounds that for #2 to be true the people in the old country must be unproductive people.

It seems to me that for this argument to work you'd need counters to the following points (which have been made and which you haven't, as it seems to me, given any good counterargument to so far):

  • There are lots of other ways in which the old country could make productivity harder than the new -- e.g., the ones I mention above.

    • Let me reiterate that these apply even if the old country's lack of productivity is entirely a matter of permanent, unfixable genetic deficiencies in its people. Suppose the people of country A are substantially stupider and lazier than those of country B; this will lead to all kinds of structural problems in country A; but in country B it may well be that even someone substantially stupider and lazier than the average can still be productive. (Indeed I'm pretty sure many such people are.)
    • If the differences between A and B do indeed all arise in this way (which, incidentally, I think there are good reasons to think is far from the truth) then yes, if the scale of migration from A to B is large enough then it could make things worse rather than better overall. Given that the empirical evidence I'm aware of strongly suggests that migration to successful countries tends to make them better off, I think the onus is on you if you want to make the case that this actually happens at any credible level of migration.
  • The people who move from country A to country B may be atypical of the people of country A, in ways that make them more likely overall to be productive in country B.

    • Your only response to this has been a handwavy dismissal, to the effect that that might have been true once but now immigration is too easy so it isn't any more. How about some evidence?

Comment author: ike 19 November 2014 02:09:22PM 5 points [-]

That's not actually true. Anyone can easily see any posts I've upvoted here or for that matter see Gleb's upvoted posts here. The Turk task asks for the usernames, which can then be checked to see which posts they've upvoted.

Comment author: gjm 19 November 2014 02:52:25PM 2 points [-]

Yow, you're right. So:

  • upvotes on articles are publicly visible (albeit clunkily)
  • downvotes on articles are not publicly visible
  • neither upvotes nor downvotes on comments are publicly visible (other than in the aggregate).
