Comment author: Eugine_Nier 19 April 2014 06:48:31AM 2 points

Reminds me of John C. Wright's comments on the subject here:

So I tried to puzzle out the safest way to store your body while you slept.

Option one: you can trust to the government to look after it, or some other long lived private institution. Menelaus Montrose does this in an early stage of history called the Cryonarchy, where the control of the suspended animation tombs is the core of the political power of the ruling caste (all of whom are Montrose’s remote in-laws).

You can try the longest-lived institution of all, which is the Catholic Church. Their famous reverence for relics and boneyards and preserving the lore of the past could be turned to preserving their sleeping ancestors as an act of charity.

(No one will believe this, but I had that idea long before I converted. It just seemed a natural extrapolation of human behavior based on non-PC, that is, non-revisionist hence non-lying-ass, history.)

Comment author: gjm 19 April 2014 08:26:13PM 0 points

non-revisionist hence non-lying-ass

Gosh.

Comment author: johnlawrenceaspden 04 April 2014 11:24:09PM *  -1 points

Eadem Mutata Resurgo

[the] Same, [but] Changed, I [shall] Rise

On the tombstone of Jacob Bernoulli.

Comment author: gjm 16 April 2014 10:26:00PM 3 points

Some context may be useful. (Sadly, the people who made the tombstone screwed up[1] and put the wrong sort of spiral on it: an Archimedean spiral rather than the logarithmic spiral Bernoulli asked for.)

[1] I suppose this is a rather clever pun, but only by coincidence.
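
To spell out why the logarithmic spiral is the right one for the motto: it is self-similar, so rescaling it is the same as rotating it. A quick derivation (notation mine):

    r(\theta) = a e^{b\theta}

    c \cdot r(\theta) = a e^{b\theta + \ln c} = r\!\left(\theta + \frac{\ln c}{b}\right)

Magnify the curve by any factor c > 0 and you get the original back, rotated by (ln c)/b: the same, though changed. An Archimedean spiral r = a\theta has no such property.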

Comment author: blacktrance 10 April 2014 08:45:49PM *  0 points

I suspect our inferential distance may be too high for agreement at this time. But, to clarify one point:

What meanings, and where do you think I'm using each?

You said "I frequently do things that, on the whole, I think I shouldn't do. Often while actually thinking, in so many words, 'I really shouldn't be doing this.'". This is a plausible rephrasing of "I frequently do things that I generally disapprove of and perhaps would prefer if people in general wouldn't do them, also I may sometimes feel guilty about doing things I disapprove of, especially if they're generally socially disapproved of in my culture, subculture, or social group. When I do these things, I think the words 'I shouldn't do this', by which I don't literally mean that I shouldn't do this, but that doing this is 'boo!'/'ugh'/low-status/seems to conflict with things I approve of/would not happen in a world I'd prefer to live in."

Comment author: gjm 10 April 2014 09:57:21PM 2 points

I suspect our inferential distance may be too high for agreement at this time.

Oh. Would you care to say more?

(meanings of "should")

So, your proposed expansion of my second "should": (1) on what grounds do you think it likely that I mean that, and (2) is it actually different from your proposed expansion of the first? ("Seems to conflict with things I approve of" and "would not happen in a world I'd prefer to live in" are not far from "things that I generally disapprove of" and "perhaps would prefer if people in general wouldn't do them", respectively.)

It seems a little curious to me that your proposed expansion of my second "should" offers, in fact, not one possible meaning but five (though I'm not sure there's a very clear distinction between "boo!" and "ugh" here). It seems to me that this weakens your point -- as if you're sure I must mean something other than what I say, but you have no real idea what.

In fact, despite your dismissive references to social status in what you say, I can't help suspecting that you're trying to pull a sort of status move here: when blacktrance says "should" s/he really means "should", but when gjm says "should" he means "hooray!" or "high-status" or something -- anything! -- with a little touch of intellectual dishonesty about it.

Well, you might be right. But let's see some evidence, if so.

Comment author: blacktrance 09 April 2014 09:23:10PM 0 points

I am not defining "X thinks they should do Y" in terms of 1, but in terms of 2. People can certainly feel inclined to do things they shouldn't do. But if you force them into a reflective mode and they still act as they did before, it tells you about what they really believe. If it's a failure of self-control due to habits/forgetfulness, that I can understand. But in the case of reluctant meat-eaters, it seems to be something more than that - they claim to not want to eat meat, but if you don't want to eat meat, it's easy not to - just don't buy it and then you won't have any meat to eat. Sometimes people buy things they wouldn't reflectively want, but that's when they're buying something they'd view as harmful to the self (or just suboptimal), and not in the general category of "evil". No one can simultaneously reflectively think "I shouldn't do this (because it's evil)" and "I should do this (evil) thing". The only possibility is that for reluctant meat-eaters, meat is an impulse buy, but that seems unlikely.

I frequently do things that, on the whole, I think I shouldn't do. Often while actually thinking, in so many words, "I really shouldn't be doing this."

I suspect you're using two different meanings of "should" here.

Comment author: gjm 10 April 2014 07:39:07PM 1 point

But if you force them into a reflective mode [...]

OK, so either now you're making a weaker claim than the one you started out with ("I don't believe that it's possible to believe that you're doing something unethical while you're doing it") or I misunderstood what you meant before. Because people frequently aren't in "a reflective mode". (And I don't think believing something's unethical requires being in a reflective mode.)

But you still haven't moved far enough for me to agree (not that there's any particular reason you should care about that). I think I have frequently had the experience of reflecting that I really don't want to be doing X, while doing X. It's not that I'm not in reflective mode, it's that the bit of me that's in reflective mode doesn't have overall control.

This is all a separate matter, by the way, from the question of how to use terms like "should", "ethical", etc., in the face of the fact that we (almost) all care much more about ourselves than about distant others, and that many of us hold that in some sense we shouldn't. I appreciate that you wish to use those terms to refer to a person's "overall" values as (maybe inexactly) shown by their actions, rather than to their theoretical beliefs about what morally perfect agents would do. I'm not sure I agree, but that isn't what I'm disagreeing with here.

I suspect you're using two different meanings of "should" here.

What meanings, and where do you think I'm using each?

Comment author: blacktrance 08 April 2014 08:52:56PM *  -3 points

Really. To unpack that statement, "unethical" = "what one shouldn't do". If you're choosing to do something, you think you should do it, so you obviously can't be thinking that you shouldn't do it.

On the other hand, if "unethical" means "what one shouldn't do, according to X", one can certainly do something they consider to be unethical. This second definition is also a common one.

Confusion between the two different meanings is at the root of much disagreement about ethics.

Comment author: gjm 09 April 2014 07:10:33PM 3 points

Here are a few related but different questions.

  1. "What do I feel most inclined to do right now?"

  2. "What do I, on reflection, think it would be best to do right now?"

  3. "What do I, on reflection, think it would be best to do right now *if I tried to suppress my natural tendencies to be more concerned for myself than others, more concerned for those close to me than those further away, etc.?"

If you define "X thinks s/he should do Y" in terms of X's answer to question 1 (or some slight variant worded to ensure that it always matches what X is actually doing) then, indeed, no one ever does anything they think they "shouldn't". But I see no reason at all to think that this sense of "should" has anything much to do with what's usually called ethics, or indeed with anything else of much interest to anyone other than maybe X's psychiatrist. Our actions are driven not only by our stable long-term values but also by any number of temporary whims, some of them frankly crazy.

If you define "X thinks s/he should do Y" in terms of X's answer to question 2 or 3, then you can make a case that "should" is now something to do with ethics (especially for question 3, but maybe also for question 2) -- but now it's not at all true that a person's actions always match what they "think they should do". I frequently do things that, on the whole, I think I shouldn't do. Often while actually thinking, in so many words, "I really shouldn't be doing this."

And all this is true whether X is thinking about what-it-would-be-best-to-do explicitly in terms of "best in such-and-such a system of values", or taking "best" as having an "absolute" meaning somehow.

Comment author: ericyu3 07 April 2014 04:04:13PM *  0 points
  1. I was unclear there - I'm finding the optimal wage at the optimal population level, not the maximum possible wage.
  2. Whoops, I meant 1-alpha. Fixed.
  3. Non-income factors are important, but I didn't consider them here because they're less obviously related to the population level.
  4. I was trying to say that even taking resource constraints into account, the critical income and the optimal income don't differ by that much compared to how much countries currently differ in income. Critical-level utilitarianism is supposed to be a "compromise" between total and average utilitarianism, but it would still yield strange conclusions in today's world.
Comment author: gjm 09 April 2014 01:42:30PM 0 points
  1. Oh, I see. You're taking wage to be determined by production, which in turn is determined by population according to the Cobb-Douglas formula, and then asking "what's the optimal population?". Got it.

  2. Yup, better now.
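
For concreteness, here is the calculation as I now understand it. This is my sketch, not necessarily yours: I'm guessing log utility, U(w) = ln w, and absorbing capital into the constant A.

    Y = A N^{\alpha}, \qquad w = \frac{\partial Y}{\partial N} = \alpha A N^{\alpha - 1}

    V(N) = N \left[ \ln w(N) - \ln w_0 \right]

    \frac{dV}{dN} = \ln\frac{w}{w_0} + N \frac{w'(N)}{w(N)} = \ln\frac{w}{w_0} + (\alpha - 1) = 0
    \quad\Longrightarrow\quad w^{*} = w_0 e^{1 - \alpha}

With a labour share of alpha = 0.7 that gives w* of only about 1.35 w0, which matches your point about the critical and optimal incomes not differing by much.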

So, anyway, now that I understand your argument better, there's something that looks both important and wrong, but maybe I'm misunderstanding. You're assuming that A -- the constant factor in the Cobb-Douglas formula -- is the same for all countries. But surely it isn't, and surely this accounts for a large amount of the variation in productivity and wealth between countries. It seems like this would lead to big differences in w between countries even if they're all close to optimal population.

Comment author: Xachariah 08 April 2014 09:34:18PM 0 points

Maybe I should back up a bit.

I agree that at 1000004:1000000, you're looking at the wrong hypothesis. But in the above example, 104:100, you're looking at the wrong hypothesis too. It's just that a factor of 10,000x makes it easier to spot. In fact, at 34:30, or even fewer iterations, you're probably also getting the wrong hypothesis.

A single percentage point of doubt gets blown up and multiplied, but that percentage point has to come from somewhere. It can't just spring forth from nothingness once you get past 50 iterations. That means you can't actually be 96.6264% certain at the start (Eliezer's pre-rounding certainty), only a little lower.

The real question in my mind is when that 1% of doubt actually becomes a significant 5%->10%->20% that something's wrong. 8:4 feels fine. 104:100 feels overwhelming. But how much doubt am I supposed to feel at 10:6 or at 18:14?

How do you even calculate that if there's no allowance in the original problem?

Comment author: gjm 09 April 2014 01:14:55AM 4 points

There should always, really, be "allowance in the original problem". Perhaps not explicitly factored in, but you should assign some nonzero probability to possibilities like "the experimenter lied to me", "I goofed in some crazy way", "I am being deceived by malevolent demons", etc. In practice, these wacky hypotheses may not occur to you until the evidence for them starts getting large, and you can decide at that point what prior probabilities you should have put on them. (Unfortunately it's easy to do that wrongly, e.g. because of hindsight bias.)

As Douglas_Knight says, frequentist statistics is full of tests that will tell you when some otherwise plausible hypothesis (e.g., "these two samples are drawn from things with the same probability distribution") is incompatible with the data in particular (or not-so-particular) ways.
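
For instance (an illustration of my own, assuming Python with scipy to hand), an exact binomial test quantifies how implausible the 104:100 split above is if each draw is red with probability 0.7:

    from scipy.stats import binomtest

    # 104 reds out of 204 draws, tested against the (700,300) bag's
    # red-probability of 0.7 (two-sided by default)
    result = binomtest(k=104, n=204, p=0.7)
    print(result.pvalue)  # of order 1e-9: wildly improbable under (700,300)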

Comment author: Troshen 06 October 2012 12:05:36AM 0 points

I completely agree with you that an accurate answer to a student is "I don't know".

But teachers in general, and PhDs in particular, are specifically trained never to say that. I mean, look at how much effort they have to put into proving that they DO know. Oral examinations are NOT a place to say "I don't know." Just in general, smart people don't like to say it, and authority figures don't like to say it. But I've heard it said that the one thing a PhD will never say is "I don't know".

A great story about that from the opposite direction is one about astronaut John Young. Apparently he would ask instructors question after question until he reached "I don't know", and if he never got there, you would never gain his trust.

Is it important? Yes.

Should teachers say it? Absolutely.

Is it one of the hardest things for people to say? Oh yes. I mean, even my kids' teachers never say it. I've met with my son's teachers a lot over the years, and I ask tons of detailed questions. It's really, really hard to get them, or any authority figure, to say "I don't know."

I tell my kids lots of things. They ask me all kinds of questions and I give them all the info I've got to give. They're like me and keep asking more and asking more. I did that so much growing up (and still do!) that I annoyed the heck out of people with my questions. So I'm generous when my kids do it and don't get frustrated and keep giving the next answer I've got. Eventually I get to "I don't know." I've started saying things like "That's one of the mysteries scientists are still trying to figure out" because I've said "I don't know" so much that it's gotten monotonous.

My point is that it's not surprising to me that a questioning student gets frustrating answers from frustrated college professors, even if the best answer in a perfect world would have been "I don't know."

Comment author: gjm 08 April 2014 03:23:18PM 2 points

I know a lot of PhDs and haven't noticed any tendency for such people to be more reluctant than others to say "I don't know". By whom have you heard it said that that's one thing a PhD will never say?

(Disclaimer: Some of those PhDs are friends of mine. One of them is me.)

Comment author: Xachariah 08 April 2014 02:09:38PM *  2 points

Surely that can't be correct.

Intuitively, I would be pretty ready to bet that I know the correct bookbag if I pulled out 5 red chips and 1 blue. 97% seems a fine level of confidence.

But if we get 1,000,004 reds and 1,000,000 blues, I doubt I'd be so sure. It seems pretty obvious to me that you should be somewhere close to 50/50 because you're clearly getting random data. To say that you could be 97% confident is insane.

I concede that you're getting screwed over by the multi-verse at that point, but there's got to be some accounting for the ratio. There is no way that you should be equally confident in your guess regardless of whether you receive ratios of 5:1, 10:6, 104:100, or 1000004:1000000.

Comment author: gjm 08 April 2014 03:08:37PM 10 points

What getting a ratio of 1000004:1000000 tells you is that you're looking at the wrong hypotheses.

If you know absolutely-for-sure (because God told you, and God never lies) that you have either a (700,300) bag or a (300,700) bag and are sampling whichever bag it is uniformly and independently, and the only question is which of those two situations you're in, then the evidence does indeed favour the (700,300) bag by the same amount as it would if your draws were (8,4) instead of (1000004,1000000).

But the probability of getting anything like those numbers in either case is incredibly tiny and long before getting to (1000004,1000000) you should have lost your faith in what God told you. Your bag contains some other numbers of chips, or you're drawing from it in some weirdly correlated way, or the devil is screwing with your actions or perceptions.

("Somewhere close to 50:50" is correct in the following sense: if you start with any sensible probability distribution over the number of chips in the bags that does allow something much nearer to equality, then Pr((700,300)) and Pr((300,700)) are far closer to one another than either is to Pr(somewhere nearer to equality) and the latter is what you should be focusing on because you clearly don't really have either (700,300) or (300,700).)

Comment author: gjm 05 April 2014 10:25:07PM 3 points

There are several things here I fail to understand.

  1. Why d/dN? If you're looking for optimal income per capita, you need d/dw=0 not d/dN=0.

  2. The result you've allegedly reached is that w = w0 exp(alpha-1) where alpha<1, which means w<w0, which means you're not actually in the regime where net utility equals N[U(w)-U(w0)], so you've been doing calculus on the wrong formulae. (A sympy check of the corrected version appears after this list.)

  3. Clearly utility is not only a function of income. (Even considering only money, you need to consider assets as well as income.) Of course considering only income is a handy simplification that may turn something impossibly complicated into something susceptible to analysis, but I think you should be explicit about making that simplification because the importance of things other than money is actually a pretty big deal.

  4. This all seems like a more complicated but still minor variation on simple and familiar observations like these: (a) simple versions of utilitarianism say well-off people should give almost all they have to poorer people; (b) simple versions of average utilitarianism say we should kill all the least happy people; (c) simple versions of total utilitarianism say we should prefer an enormous population of people with just-better-than-nothing lives to a normal-sized population of very happy people. I would expect solutions to (or bullet-biting on) these problems to deal with the more complicated but similarly counterintuitive conclusions presented here (assuming for the sake of argument that either my objections above are wrong or else the conclusions remain when the errors are repaired).
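
(For anyone who wants to check point 2 mechanically once the exponent is corrected to 1-alpha, as happens in the exchange further up the page, here is a sympy sketch under the same assumed setup as before: log utility and capital absorbed into A.)

    import sympy as sp

    N, A, alpha, w0 = sp.symbols('N A alpha w0', positive=True)
    w = alpha * A * N**(alpha - 1)         # wage = marginal product of labour
    V = N * (sp.log(w) - sp.log(w0))       # net utility, log utility assumed
    Nstar = sp.solve(sp.diff(V, N), N)[0]  # first-order condition in N
    print(sp.simplify(w.subs(N, Nstar)))   # expect w0*exp(1 - alpha)

Since exp(1-alpha) > 1 for alpha < 1, the corrected optimum does lie in the w > w0 regime, so point 2 above is specifically about the sign error.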
