It blew up to 14M.

The object streams for indirect objects have been unpacked and stripped away, leaving their contents uncompressed. Use `qpdf` to regenerate compressed object streams:

```
qpdf --object-streams=generate in.pdf out.pdf
```

(The `--stream-data=compress` option is already set by default.)

While you are at it, you might as well re-linearize the PDF for online readers with low bandwidth:

```
qpdf --object-streams=generate --linearize in.pdf out.pdf
```

*Applied Statistical Decision Theory*, Raiffa & Schlaifer 1961 (not to be confused with their 1995 or 1959 decision theory textbooks).

Not on Libgen, Google Books, Google Scholar, the Chinese library site, or in any of the Google hits I found, despite all the book-review PDFs. I found a table of contents for it and googled some chapter titles in quotes, but only turned up the same table of contents, so it really doesn't seem to be online in the clear. Betawolf discovered that an online copy *does* seem to exist at HathiTrust, which seems to think that the book is somehow in the public domain, unlikely as that may sound, and it can be downloaded by people at a variety of institutions such as UMich, UWash, etc.; but in this case, my UWash proxy doesn't work (it gets me IP-based access to material, but not account-login-based access, which is what HathiTrust seems to require). Can anyone download it? (EDIT: the 1-page-at-a-time PDF download does work, so I am scripting that right now as `for i in {1..394}; do sleep 60s; wget "http://babel.hathitrust.org/cgi/imgsrv/download/pdf?id=mdp.39015022416351;orient=0;size=100;seq=$i;attachment=0" -O $i.pdf; done`, but if someone can get the whole PDF, that'd be better, since then I know nothing was left out and all the metadata will be intact.)
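Assuming the page-at-a-time scraping works out, one possible way to stitch the resulting single-page PDFs back together is qpdf's page-merging mode; the output filename here is just a placeholder of mine:

```shell
# Merge the per-page downloads (1.pdf ... 394.pdf, as named by the loop above)
# into a single PDF, in page order. "merged.pdf" is a placeholder name.
qpdf --empty --pages $(seq -f '%g.pdf' 1 394) -- merged.pdf
```

`--empty` starts from a blank file and `--pages` appends all pages from each listed input.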

If not, I will buy a used copy ($16-25 on Amazon & AbeBooks) and try out 1DollarScan.

On a historical note, besides compiling many results and being one of the key texts of the 1960s Bayesian revolution, this is apparently the book which introduced the general concept of conjugate distributions into Bayesian statistics, which I had always assumed had been introduced by Laplace or someone similarly early, since they are so critical to pre-MCMC analyses.

Got the whole PDF from HathiTrust. I think Chart I is missing from the scan.

Downvoted. I'm sorry to be so critical, but *this is the prototypical LW mischaracterization of utility functions*. I'm not sure *where* this comes from, when the VNM theorem gets so many mentions on LW.

A utility function is, by definition, that which the corresponding rational agent maximizes the expectation of, by choosing among its possible actions. It is not "optimal as the number of bets you take approaches infinity": first, it is not 'optimal' in any reasonable sense of the word, as it is *simply an encoding of the actions which a rational agent would take in hypothetical scenarios*; and second, it has nothing to do with repeated actions or bets.

*Humans do not have utility functions*. We do not exhibit the level of counterfactual self-consistency that is required by a utility function.

The term "utility" used in discussions of utilitarianism is generally vaguely-defined and is *almost never* equivalent to the "utility" used in game theory and related fields. I suspect that is the source of this never-ending misconception about the nature of utility functions.

Yes, it is common, especially on LW and in discussions of utilitarianism, to use the term "utility" loosely, but don't conflate that with utility functions by creating a chimera with properties from each. If the "utility" that you want to talk about is vaguely-defined (e.g., if it depends on some account of subjective preferences, rather than on definite actions under counterfactual scenarios), then it probably lacks all of the useful mathematical properties of utility functions, and its expectation is no longer meaningful.

Hmmm, yes, I suppose I was making the same mistake they were... I thought that confidence intervals were actually what credible intervals are.

I see. Looking into this, it seems that the (mis)use of the phrase "confidence interval" to mean "credible interval" is endemic on LW. A Google search for "confidence interval" on LW yields more than 200 results, of which many—perhaps most—should say "credible interval" instead. The corresponding search for "credible interval" yields fewer than 20 results.

The Fallacy of Placing Confidence in Confidence Intervals

I just read through this, and it sounds like they're trying to squish a frequentist interpretation on a Bayesian tool. They keep saying how the confidence intervals don't correspond with reality, but confidence intervals are supposed to be measuring degrees of belief. Am I missing something here?

I briefly skimmed the paper and don't see how you are getting this impression. Confidence intervals are—if we force the dichotomy—considered a frequentist rather than Bayesian tool. They point out that *others* are trying to squish a *Bayesian* interpretation on a *frequentist* tool by treating confidence intervals as though they are credible intervals, and they state this quite explicitly (p.17–18, emphasis mine):

Finally, we believe that in science, the meaning of our inferences are important. Bayesian credible intervals support an interpretation of probability in terms of plausibility, thanks to the explicit use of a prior. Confidence intervals, on the other hand, are based on a philosophy that does not allow inferences about plausibility, and does not utilize prior information.

Using confidence intervals as if they were credible intervals is an attempt to smuggle Bayesian meaning into frequentist statistics, without proper consideration of a prior. As they say, there is no such thing as a free lunch; one must choose. We suspect that researchers, given the choice, would rather specify priors and get the benefits that come from Bayesian theory. We should not pretend, however, that the choice need not be made. Confidence interval theory and Bayesian theory are not interchangeable, and should not be treated as so.

*Software Engineering: A Historical Perspective*, J. Marciniak. DOI: 10.1002/0471028959.sof321

*Probability and Statistics for Business Decisions*, Robert Schlaifer 1959. Surprisingly expensive used, and unfortunately for such a foundational text in Bayesian decision theory, doesn't seem to be available online. If you can't get a digital copy, does anyone know of a good service or group which would produce a high-quality digital copy given a print edition?

Page-by-page .djvu scans are available here (found via this search; edit: it seems to appear sporadically in the search results). The full sequence of download links is `http://202.116.13.3/ebook%5C24/24000522/ptiff/00000{001..744}.djvu`.
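A sketch of one way to fetch the whole sequence, relying on bash brace expansion to generate the zero-padded page numbers (the output filenames and the polite delay are my own choices):

```shell
# Download all 744 page scans; {001..744} expands with leading zeros in bash.
for i in {001..744}; do
  wget "http://202.116.13.3/ebook%5C24/24000522/ptiff/00000${i}.djvu" -O "page-${i}.djvu"
  sleep 1  # be polite to the server
done
```

The .djvu pages would still need converting (e.g., with `ddjvu`) and merging afterward.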


I wrote the following just before finding the scan of the book. I'll post it anyway.

I've used 1DollarScan for about 50 books, including math/stat textbooks, and the quality is consistently good (unless you need accurate color reproduction) even with the cheapest option (i.e., $1 per 100 pages), but you'll need to do your own post-processing to:

- Lossily compress further and binarize B/W text; expect about 400 KB/page from 1DollarScan.
- Perform OCR; 1DollarScan's OCR option is expensive and performs okay at best.
- Straighten pages; pages are often offset slightly from the vertical.
- Add metadata (e.g., page numbering, section bookmarks).

I use Adobe Acrobat with ABBYY FineReader for these. FineReader's OCR is more accurate than Acrobat's, but Acrobat performs okay by itself. Acrobat's trial can be indefinitely reactivated every month in a Windows VM by reverting to a pre-activation snapshot, whereas FineReader has to be bought or torrented, as its trial is overly restrictive. I don't know of any good options on Linux.

BTW, there's a used copy on Half.com for $39. Not sure if you saw that.

You take the probability of A *not* happening and multiply by the probability of B *not* happening. That gives you P(not A and not B). Then subtract that from 1. The probability of at least one of two events happening is just one minus the probability of neither happening.

In your example of 23% and 48%, the probability of getting at least one is

1 - (1-0.23)*(1-0.48) = 0.5996 ≈ 0.60.
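As a quick sanity check of the arithmetic (a one-liner, and note it assumes the two events are independent):

```shell
# P(at least one) = 1 - P(neither) = 1 - P(not A) * P(not B)
awk 'BEGIN { printf "%.4f\n", 1 - (1 - 0.23) * (1 - 0.48) }'
# prints 0.5996
```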

You take the probability of A not happening and multiply by the probability of B not happening. That gives you P(not A and not B).

Only if A and B are independent.

Is the term 'expected value' interchangeable with the term 'expected utility?'

No. "Expected value" refers to the expectation of a variable under a probability distribution, whereas "expected utility" refers specifically to the expectation of a *utility function* under a probability distribution. That is, expected utility is a specific instantiation of an expected value; expected value is more general than expected utility and can refer to things other than utility.

The importance of this distinction often arises when considering the utility of large sums of money: a person may well decline a deal or gamble with positive expected value (of money) because the expected utility can be negative (for example, see the St. Petersburg paradox).
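A small numerical illustration of that gap, using the St. Petersburg payoffs (2^k with probability 2^-k) and, as one example of a concave utility, log utility; the 30-term cutoff is arbitrary:

```shell
# Partial sums of the St. Petersburg gamble: expected value vs. expected log-payoff.
awk 'BEGIN {
  ev = 0; eu = 0
  for (k = 1; k <= 30; k++) {
    p = 1 / 2^k
    ev += p * 2^k          # each term contributes exactly 1: EV grows without bound
    eu += p * k * log(2)   # expected log-payoff; converges toward 2*log(2)
  }
  printf "EV after 30 terms: %.0f\n", ev
  printf "Expected log-payoff: %.4f (limit: %.4f)\n", eu, 2 * log(2)
}'
```

The expected value keeps climbing by 1 per term, while the expected log-utility is already within rounding of its limit, which is why a log-utility agent would pay only a small finite amount for the gamble.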

Yes! I think this is it. The Wikipedia article links to these ray diagrams, which I found helpful (particularly the fourth picture).

I suspected it had to do with an overlap in the penumbra, or the "fuzzy edges", of the shadow, but I kept getting confused because the observation isn't what you would expect if you think of the penumbras as two separate pictures that you're simply "adding together" as they overlap.

See also this highly-upvoted question on the Physics Stack Exchange, which deals with your question.

