It seems strange to me that a topic that generated so much discussion and speculation was voted down to approximately zero. Perhaps the number of comments on a post is a better indicator of its interest than upvotes minus downvotes.

I can't help thinking part of the difference is that they're *your* books, so you can do whatever you want to them, whereas he's your employee and is being paid to do this the "right way".

Good point.

He was learning how to cut the books. *You* were learning how to teach someone to cut the books, a task in which you had no prior experience. Yes, it took two people and it took longer than working out how to cut the books yourself; but given what you now know, assuming your new hire suddenly moves away and has to be unexpectedly replaced, you would be able to teach someone else how to cut the books more quickly than before.

Teaching someone a skill is a different skill from being able to perform it, and it requires a more thorough conscious knowledge of the skill than simply using it does.

You're right, yet I think it's still remarkable that it took longer to watch myself do it and figure out how I was doing it, than it took me to figure it out in the first place. For many types of skills, that wouldn't be the case. I think the ease of discovery, rather than the difficulty of observing myself, made the difference.

## Don't teach people how to reach the top of a hill

When is it faster to rediscover something on your own than to learn it from someone who already knows it?

Sometimes it's faster to re-derive a proof or algorithm than to look it up. Keith Lynch re-invented the fast Fourier transform because he was too lazy to walk all the way to the library to get a book on it, although that's an extreme example. But if you have a complicated proof already laid out before you, and you are not Marc Drexler, it's generally faster to read it than to derive a new one. Yet I found a knowledge-intensive task where it would have been much faster to tell someone nothing at all than to tell them how to do it.

And my final conclusion, then, is:

Either become an average utilitarian, or stop describing rationality as expectation maximization.

That's unwarranted. The axioms are being applied to describe very different processes, so you should look at their applications separately. In any case, reaching a "final conclusion" without an explicit write-up (or a preexisting write-up you've discovered) to check the sanity of the conclusion is in most cases a very shaky step, predictably irrational.

Okay: Suppose you have two friends, Betty and Veronica, and one balloon. They both like balloons, but Veronica likes them a little bit more. Therefore, you give the balloon to Veronica.

You get one balloon every day. Do you give it to Veronica every day?

Ignore whether Betty feels slighted by never getting a balloon. If we considered utility and disutility due to the perception of equity and inequity, then average utilitarianism would also produce somewhat equitable results. The claim that inequity is a problem in average utilitarianism does not depend on the subjects perceiving the inequity.

Just to be clear about it, Betty and Veronica live in a nursing home, and never remember who got the balloon previously.

You might be tempted to adopt a policy like this: p(v) = 0.8, p(b) = 0.2, meaning you give the balloon to Veronica eight days out of ten. But the axiom of independence implies that the policy p(v) = 1, p(b) = 0 is better.

This is a straightforward application of the theorem, without any mucking about with possible worlds. Are you comfortable with giving Veronica the balloon every day? Or does valuing equity mean that expectation maximization is wrong? I think those are the only choices.
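For concreteness, here is the comparison in toy form (the utility numbers are my assumptions, chosen only to make Veronica's preference slightly stronger):

```python
# Hypothetical daily utilities; the numbers are assumptions for illustration.
u_veronica = 10  # Veronica's utility when she gets the balloon
u_betty = 8      # Betty's utility when she gets it (a little lower)

def expected_utility(p_v):
    """Expected utility of giving Veronica the balloon with probability p_v."""
    return p_v * u_veronica + (1 - p_v) * u_betty

# Expected utility is linear in p_v, so every mixed policy is dominated
# by the endpoint p_v = 1: Veronica gets the balloon every single day.
assert expected_utility(1.0) > expected_utility(0.8) > expected_utility(0.2)
```

Any utilities with u_veronica > u_betty give the same result; linearity of expectation is doing all the work.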

I figured out what the problem is. Axiom 4 (Independence) implies average utilitarianism is correct.

Suppose you have two apple pies, and two friends, Betty and Veronica. Let B denote the number of pies you give to Betty, and V the number you give to Veronica. Let v(n) denote the outcome that Veronica gets *n* apple pies, and similarly define b(n). Let u_v(S) denote Veronica's utility in situation S, and u_b(S) denote Betty's utility.

Betty likes apple pies, but Veronica loves them, so much so that u_v(v(2), b(0)) > u_b(b(1), v(1)) + u_v(b(1), v(1)). We want to know whether average utilitarianism is correct to know whether to give Veronica both pies.

Independence, the fourth axiom of the von Neumann-Morgenstern theorem, implies that if the outcome L is preferable to outcome M, then one outcome of L and one outcome of N is preferable to one outcome of M and one outcome of N.

Let L represent giving one pie to Veronica and M represent giving one pie to Betty. Now let's be sneaky and let N also represent giving one pie to Veronica. The fourth axiom says that L + N (giving two pies to Veronica) is preferable to M + N (giving one to Betty and one to Veronica). We have to *assume* that to use the theorem.

But that's the question we wanted to ask: whether our utility function U should prefer the solution that gives two pies to Veronica, or one to Betty and one to Veronica! Assuming the fourth axiom builds average utilitarianism into the von Neumann-Morgenstern theorem.
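Read additively, the step above can be checked with toy numbers (the utilities here are my assumptions, purely for illustration):

```python
# Toy utilities for illustration; the numbers are assumptions, not from the post.
u_v = 5  # Veronica's utility from receiving one pie
u_b = 2  # Betty's utility from receiving one pie

L = u_v  # lottery L: give one pie to Veronica
M = u_b  # lottery M: give one pie to Betty
N = u_v  # the sneaky N: give one (more) pie to Veronica

assert L > M          # the premise: L is preferable to M
assert L + N > M + N  # what Independence, read additively, then guarantees
```

With any numbers satisfying L > M, the combined comparison follows automatically, which is the sense in which the conclusion is being assumed rather than derived.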

Argh; never mind. This is what Wei_Dai already said below.

[Average utilitarianism] implies that for any population consisting of very good lives there is a better population consisting of just one person leading a life at a slightly higher level of well-being (Parfit 1984 chapter 19). More dramatically, the principle also implies that for a population consisting of just one person leading a life at a very negative level of well-being, e.g., a life of constant torture, there is another population which is better even though it contains millions of lives at just a slightly less negative level of well-being (Parfit 1984). That total well-being should not matter when we are considering lives worth ending is hard to accept. Moreover, average utilitarianism has implications very similar to the Repugnant Conclusion (see Sikora 1975; Anglin 1977).

Average utilitarianism has even more implausible implications. Consider a world A in which people experience nothing but agonizing pain. Consider next a different world B *which contains all the people in A*, plus arbitrarily many people all experiencing pain only slightly less intense. Since the average pain in B is less than the average pain in A, average utilitarianism implies that B is better than A. This is clearly absurd, since B differs from A only in containing a surplus of arbitrarily many people experiencing nothing but intense pain. How could one possibly *improve* a world by merely adding lots of pain to it?
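The arithmetic can be made explicit with toy numbers (all values are assumptions for illustration):

```python
# Pain expressed as negative utility; the numbers are illustrative assumptions.
def avg(world):
    return sum(world) / len(world)

world_a = [-10.0] * 100                  # 100 people in agonizing pain
world_b = world_a + [-9.9] * 1_000_000   # the same people, plus a million
                                         # more in slightly less intense pain

# B's average is less negative than A's, so average utilitarianism ranks
# B above A, even though B merely adds an enormous amount of suffering.
assert avg(world_b) > avg(world_a)
```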

You realize you just repeated the scenario described in the quote?

First of all, I find the term "prescriptive" to be rather equivocal, used primarily to express disapproval rather than to communicate precise meaning; it quite often just means "more strict than I think is appropriate". To the extent that "prescriptive" has a clear meaning, I disagree with your application of the word to definitions. There are prescriptive and descriptive *approaches* to writing a dictionary, but the definitions themselves are descriptive. For instance, there was a flap about a dictionary that included in its definition of the word "gay" that one meaning is "stupid". A girl objected to that, and started advocating that the dictionary remove that meaning of the word. So, one person might say "The word is sometimes used to mean 'stupid', and dictionaries should describe how words are used, so we should include that meaning". That's a descriptive *approach* to definitions. The girl, on the other hand, was saying "This meaning is offensive, and dictionaries shouldn't offend people, so this meaning should be removed". That's a prescriptive *approach*. But both "gay means homosexual" and "gay means homosexual or stupid" are descriptive statements.

You need a prescriptive, subjective definition of a thing that will transport you over water.

If you want something that will transport you over water, that's not “prescriptive”, “subjective”, or even a “definition”. It's a specification. You aren't saying “things that can't transport me over water shouldn't be called boats”, you're saying “The genie shouldn't give me something that can't transport me over water”. You don't need a new definition of “boat” to communicate that, you just need to phrase your wish as being more specific than just a “boat”. If you really want to have a term that refers to a thing that will transport you over water, you can make up a new word, and give it that definition. If you define a word as meaning “a thing that can transport PhilGoetz over water”, then that will be an objective definition.

I'm talking about what people do, to warn people to watch out for it when they do that. Sometimes you'll be in a discussion, and some people will have defined a term descriptively, and some will have defined it in what I'm calling the prescriptive way, and you need to notice that.

If you'd explained that I misunderstand Gibbs sampling, that would have been a failure to update. You didn't.

I wrote a comment that was so discordant with your understanding of Gibbs sampling and EM that it should have been a red flag that one or the other of us was misunderstanding something. Instead you put forth a claim stating your understanding, and it fell to me to take note of the discrepancy and ask for clarification. This failure to update is the exact event which prompted me to attach "Dunning-Kruger" to my understanding of you.

I don't see how the distinction makes sense for Gibbs sampling or EM... That's why these algorithms exist: they spare you from having to choose a prior, if the data is strong enough that the choice makes no difference.
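To illustrate what I mean by the data swamping the prior, here is a toy Gibbs sampler (the model, priors, and numbers are all assumptions made up for this sketch, not the problem I actually worked on):

```python
import random

random.seed(0)
# Synthetic data: 500 draws from Normal(mean=3, sd=1).
data = [random.gauss(3.0, 1.0) for _ in range(500)]
n, xsum = len(data), sum(data)

def gibbs_posterior_mean(mu0, tau0=1.0, a0=1.0, b0=1.0, iters=2000, burn=500):
    """Gibbs sampler for x ~ Normal(mu, 1/tau) with conjugate priors
    mu ~ Normal(mu0, 1/tau0) and tau ~ Gamma(a0, rate=b0); returns the
    posterior mean of mu estimated from the post-burn-in samples."""
    mu, tau, samples = 0.0, 1.0, []
    for i in range(iters):
        # Draw mu | tau, data: Normal with precision tau0 + n*tau.
        prec = tau0 + n * tau
        mu = random.gauss((tau0 * mu0 + tau * xsum) / prec, prec ** -0.5)
        # Draw tau | mu, data: Gamma(a0 + n/2, rate b0 + 0.5*sum((x-mu)^2)).
        ss = sum((x - mu) ** 2 for x in data)
        tau = random.gammavariate(a0 + n / 2, 1.0 / (b0 + 0.5 * ss))
        if i >= burn:
            samples.append(mu)
    return sum(samples) / len(samples)

# Two wildly different priors on mu land on nearly the same estimate:
# with 500 observations, the likelihood dominates the prior.
skeptical = gibbs_posterior_mean(mu0=0.0)
credulous = gibbs_posterior_mean(mu0=10.0)
assert abs(skeptical - credulous) < 0.2
```

Of course the sampler still has priors in it; the point is only that with enough data their influence on the answer becomes negligible.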

The way in which the ideas you have about EM and Gibbs sampling are wrong isn't easily fixable in a comment thread. We could do a Google Hangout at some point; if you're interested, PM me.

I believe my ideas about Gibbs sampling are correct, as demonstrated by my correct choice and implementation of it to solve a difficult problem. My terminology may be non-standard.

Here is what I believe happened in that referenced exchange: You wrote a comment that was difficult to comprehend, and I didn't see how it related to my question. I explained why I asked the question, hoping for clarification. That's a failure to communicate, not a failure to update.
