In response to How to Fix Science
Comment author: RichardWein 09 March 2012 12:15:21PM *  7 points [-]

The inferential method that solves the problems with frequentism — and, more importantly, follows deductively from the axioms of probability theory — is Bayesian inference.

You seem to be conflating Bayesian inference with Bayes Theorem. Bayesian inference is a method, not a proposition, so cannot be the conclusion of a deductive argument. Perhaps the conclusion you have in mind is something like "We should use Bayesian inference for..." or "Bayesian inference is the best method for...". But such propositions cannot follow from mathematical axioms alone.

Moreover, the fact that Bayes Theorem follows from certain axioms of probability doesn't automatically show that it's true. Axiomatic systems have no relevance to the real world unless we have established (whether explicitly or implicitly) some mapping of the language of that system onto the real world. Unless we've done that, the word "probability" as used in Bayes Theorem is just a symbol without relevance to the world, and to say that Bayes Theorem is "true" is merely to say that it is a valid statement in the language of that axiomatic system.

In practice, we are liable to take the word "probability" (as used in the mathematical axioms of probability) as having the same meaning as "probability" (as we previously used that word). That meaning has some relevance to the real world. But if we do that, we cannot simply take the axioms (and consequently Bayes Theorem) as automatically true. We must consider whether they are true given our meaning of the word "probability". But "probability" is a notoriously tricky word, with multiple "interpretations" (i.e. meanings). We may have good reason to think that the axioms of probability (and hence Bayes Theorem) are true for one meaning of "probability" (e.g. frequentist). But it doesn't automatically follow that they are also true for other meanings of "probability" (e.g. Bayesian).

I'm not denying that Bayesian inference is a valuable method, or that it has some sort of justification. But justifying it is not nearly so straightforward as your comment suggests, Luke.

Comment author: wnoise 09 March 2012 05:13:04PM 1 point [-]

It's actually somewhat tricky to establish that the rules of probability apply to the Frequentist meaning of probability. You have to mess around with long-run frequencies and infinite limits. Even once that's done, it's hard to make the case that the Frequentist meaning has anything to do with the real world -- there is no such thing as an infinitely repeatable experiment.

In contrast, a few simple desiderata for "logical reasoning under uncertainty" establish probability theory as the only consistent way to do so that satisfies those criteria. Sure, other criteria may suggest some other way of doing so, but no one has put forward a reasonable alternative.
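For concreteness, the mechanical core both sides of this thread agree on is Bayes' Theorem itself. A minimal sketch of a single Bayesian update, using purely illustrative numbers (the prior and likelihoods here are hypothetical, not from the thread):

```python
def bayes_update(prior, likelihood, likelihood_alt):
    """Return P(H|E) given P(H), P(E|H), and P(E|not H)."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: P(H) = 0.01, P(E|H) = 0.9, P(E|not H) = 0.05.
posterior = bayes_update(0.01, 0.9, 0.05)  # roughly 0.154
```

The theorem is uncontroversial as algebra; the thread's dispute is only over which interpretation of "probability" licenses plugging real-world beliefs into it.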

Comment author: wnoise 23 December 2011 08:40:34AM 5 points [-]

It's not clear that the effect is really there, and certainly isn't as strong as originally thought:

http://devoid.blogs.heraldtribune.com/11438/the-decline-effect-haunts-science/

U.Cal-Santa Barbara psychology professor Jonathan Schooler has a problem. The certitude of a phenomenon that made him a rock star in academic circles — he called it “verbal overshadowing,” and he published the results 20 years ago — is beginning to break down. And its fragility is calling the entire scientific method into question.

Comment author: wnoise 19 December 2011 11:22:29AM 0 points [-]

It seems much of our cognitive architecture was developed in the context of social situations. Indeed, the standard experiments on checking modus ponens and modus tollens understanding show sharp increases in ability when they are presented as social rules (e.g. http://en.wikipedia.org/wiki/Wason_selection_task -- checking whether someone is violating the rules about minors drinking alcohol, rather than checking abstract cards, gives much higher performance). Testing whether you understand a social rule by deliberately violating your current understanding can be a very, very expensive test. It seems plausible that this cost has led the human default ways of testing implicit rules to avoid seeking out these negatives, even when the cost would be low.

In response to comment by ME3 on The Moral Void
Comment author: thomblake 07 December 2011 10:50:22PM 3 points [-]

"Imagine if I proved to you that nothing is actually yellow. How would you proceed?"

A propos: Magenta isn't a color.

In response to comment by thomblake on The Moral Void
Comment author: wnoise 07 December 2011 11:54:08PM 3 points [-]

It's not a spectral color. That is, no single wavelength of light can reproduce it. But I've seen magenta things, and there is widespread intersubjective agreement about what is magenta and what isn't. It damn well is a color.

Comment author: [deleted] 03 November 2011 02:16:55AM 0 points [-]

Well, that's half the battle.

Comment author: wnoise 02 December 2011 07:41:31PM 3 points [-]

Wrong cartoon.

Comment author: [deleted] 28 November 2011 12:34:53AM 0 points [-]

a state in which an agent cannot increase its expected utility by changing its utility function

Surely if you could change your utility function you could always increase your expected utility that way, e.g. by defining the new utility function to be the old utility function plus a positive constant.
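A toy sketch of that objection (the outcomes, probabilities, and utilities below are hypothetical): shifting the utility function by a positive constant raises expected utility as scored by the new function, even though nothing about the agent's prospects has changed.

```python
# Hypothetical lottery: (probability, old utility of outcome).
lottery = [(0.3, 10.0), (0.7, -2.0)]

def expected_utility(utility):
    """Expected utility of the lottery under the given utility function."""
    return sum(p * utility(u) for p, u in lottery)

old_u = lambda u: u
new_u = lambda u: u + 100.0  # old utility plus a positive constant

eu_old = expected_utility(old_u)  # 0.3*10 + 0.7*(-2) = 1.6
eu_new = expected_utility(new_u)  # 1.6 + 100 = 101.6
```

Note the shift does nothing to the old function's expectation of any policy, which is why the quoted definition only makes sense if the gain is judged by the original utility function.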

In response to comment by [deleted] on Taboo Your Words
Comment author: wnoise 28 November 2011 12:37:55AM *  1 point [-]

I think Normal_Anomaly means "judged according to the old utility function".

EDIT: Incorrect gender imputation corrected.

Comment author: Prismattic 27 November 2011 05:09:54PM 19 points [-]

Q: What do you call it when a bayesian loses an argument?

A: Getting your posterior handed to you.

Comment author: wnoise 27 November 2011 08:45:36PM 9 points [-]

I think this works better as "lose an argument with a Bayesian". Because then the Bayesian really does hand you your new belief.

In response to comment by wnoise on Existential Risk
Comment author: dlthomas 15 November 2011 07:33:56PM 1 point [-]

Fair enough, I suppose. But then it's not really a ring world so much as a... what? Space station?

In response to comment by dlthomas on Existential Risk
Comment author: wnoise 15 November 2011 08:06:57PM 4 points [-]

Yeah, pretty much. If it were bigger, I might call it a Culture orbital.

In response to comment by gjm on Existential Risk
Comment author: dlthomas 15 November 2011 07:21:55PM 1 point [-]

Not much smaller than the earth at all!

With more physics and attention, one could produce better numbers, but as a crude ballpark (using data from wikipedia):

Surface area of the Earth: 510,072,000 km^2

Circumference of ring, if it's placed at 1 AU: 2 * pi AU = 939,951,956 km

So, if the ring is a little over half a kilometer in width, it has the same surface area as the Earth -- and could be smaller still, if we just compare habitable area.

In response to comment by dlthomas on Existential Risk
Comment author: wnoise 15 November 2011 07:24:20PM 7 points [-]

The scale of curvature there makes it clear it's not 1 AU in radius.

Comment author: brazil84 14 November 2011 12:44:26PM -2 points [-]

You act as if there is one unified "case against Knox and Sollecito". There is not. There are many, as different people who believe Knox and Sollecito did it find different aspects to be more convincing.

Perhaps reasonable people can differ on fine gradations of significance among the more important pieces of evidence, but, for example, anyone who seriously believes that Knox's "mannerisms" are among the best pieces of evidence against her has seriously missed the boat.

Anyway, if I have the time I will put together a blog post laying out what I think are the most important pieces of evidence against Knox (and Sollecito).

Comment author: wnoise 14 November 2011 07:53:03PM *  -2 points [-]

I await with bated breath.
