Comment author: IlyaShpitser 27 January 2015 02:45:59PM *  3 points [-]

The binary thing isn't important; what's important is that there are real situations where likelihood-based methods (including Bayes) don't work well, because by assumption the only strong information is about the part of the likelihood we aren't using in our functional, while the part of the likelihood we are using in our functional is very complicated.

I think my point wasn't so much the technical specifics of that example, but rather that these are the kinds of B vs. F arguments that actually have something to say, rather than going around and around in circles. I posted a rephrasing of this example in causal language somewhere on LW (if that would help; not sure it will).

Robins and Ritov have something of paper length, rather than blog-post length, if you are interested.

Comment author: AmagicalFishy 27 January 2015 05:20:35PM *  1 point [-]

Wait, IlyaShpitser—I think you overestimate my knowledge of the field of statistics. From what it sounds like, there's an actual, quantitative difference between Bayesian and Frequentist methods. That is, in a given situation, the two will come to totally different results. Is this true?

I should have made it clearer that I don't care about some abstract philosophical difference if said difference doesn't mean there are different results (because those differences usually come down to a nonsensical distinction [à la free will]). I was under the impression that there is a claim that some interpretation of the philosophy will yield different results—but I was missing it, because everything I've been introduced to seems to give the same answer.

Is it true that they're different methods that actually give different answers?

Comment author: Lumifer 26 January 2015 05:19:24PM 6 points [-]

Well, the key point here is whether the word "probability" can be applied to things which already happened but you don't know what exactly happened. You said

A quantitative thing that indicates how likely it is for an event to happen.

which implies that probabilities apply only to the future. The question is whether you can speak of probabilities as lack of knowledge about something which is already "fixed".

Another issue is that in your definition you just shifted the burden of work to the word "likely". What does it mean that an event is "likely" or "not likely" to happen?

Comment author: AmagicalFishy 26 January 2015 05:53:28PM *  0 points [-]

Sorry, I didn't mean to imply that probabilities only apply to the future. Probabilities apply only to uncertainty.

That is, given the same set of data, there should be no difference between guessing whether event A already happened and guessing whether event A will happen.

When you say "apply a probability to something," I think:

"If one were to have to make a decision based on whether or not event A will happen, how would one consider the available data in making this decision?"

The only time event A happening matters is if it happening generated new data. In the Bob-Alice situation, Alice rolling a die in a separate room gives zero information to Bob, so whether or not she has already rolled it doesn't matter. Here are a few different situations to illustrate:

A) Bob and Alice are in different rooms. Alice rolls the die and Bob has to guess the number she rolled.
B) Bob has to guess the number that Alice's die will roll. Alice then rolls the die.
C) Bob watches Alice roll the die, but did not see the outcome. Bob must guess the number rolled.
D) Bob is a supercomputer which can factor in every infinitesimal fact about how Alice rolls the die, and about the die itself, upon seeing the roll. Bob-the-supercomputer watches Alice roll the die, but did not see the outcome.

In situations A, B, and C, whether Alice rolls the die before or after Bob's guess is irrelevant. It doesn't change anything about Bob's decision. For all intents and purposes, the questions "What did Alice roll?" and "What will Alice roll?" are exactly the same question. That is: we assume the system is simple enough that rolling a fair die is always the same. In situation D, the questions are different because there's different information available depending on whether or not Alice has already rolled. That is, the simple-system assumption isn't there, because Bob is able to see the complexity of the situation and make the exact same kind of decision. Alice having actually rolled the die does matter.
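(A minimal simulation sketch of situations A through C, assuming a fair six-sided die; the names guess_then_roll and roll_then_guess are made up purely for illustration. Bob's hit rate comes out the same whether he guesses before or after the roll.)

```python
import random

TRIALS = 100_000

def guess_then_roll():
    # Situation B: Bob commits to a guess, then Alice rolls.
    guess = random.randint(1, 6)
    roll = random.randint(1, 6)
    return guess == roll

def roll_then_guess():
    # Situations A and C: Alice has already rolled; Bob guesses
    # with no information about the outcome.
    roll = random.randint(1, 6)
    guess = random.randint(1, 6)
    return guess == roll

before = sum(guess_then_roll() for _ in range(TRIALS)) / TRIALS
after = sum(roll_then_guess() for _ in range(TRIALS)) / TRIALS
print(before, after)  # both hover around 1/6; the order doesn't matter
```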

I don't quite understand your "likely or not likely" question. To try to answer: If an event is likely to happen, then your uncertainty that it will happen is low. If it is not likely, then your uncertainty that it will happen is high.

(Sorry, I totally did not expect this reply to be so long.)

Comment author: Lumifer 26 January 2015 04:48:42PM 4 points [-]

Let's say Alice and Bob are in two different rooms and can't see each other. Alice rolls a 6-sided die and looks at the outcome. Bob doesn't know the outcome, but knows that the die has been rolled. In your interpretation of the word "probability", can Bob talk about the probabilities of the different roll outcomes after Alice rolled?

Comment author: AmagicalFishy 26 January 2015 05:02:30PM *  0 points [-]

I'm having a hard time answering this question with "yes" or "no":

The event in question is "Alice rolling a particular number on a 6-sided die." Bob, not knowing what Alice rolled, can talk about the probabilities associated with rolling a fair die many times, and base whatever decision he has to make on those probabilities (assuming that she is, in fact, using a fair die). Depending on the assumed complexity of the system (does he know that this is a loaded die?), he could combine a bunch of other probabilities to increase the chance that his decision is accurate.

Yes... I guess?

(Or, are you referring to something like: If Alice rolled a 5, then there is a 100% chance she rolled a 5?)
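(A minimal sketch of the "combine other probabilities" idea above, assuming Bob puts some prior weight on the die being loaded toward 6; all the numbers here are made up for illustration.)

```python
# Bob's predictive distribution marginalizes over his uncertainty
# about which die Alice is actually using.
p_fair = 0.9                  # assumed prior that the die is fair
p_loaded = 1 - p_fair         # assumed prior that it is loaded toward 6

fair = {k: 1 / 6 for k in range(1, 7)}
loaded = {1: 0.1, 2: 0.1, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.5}

predictive = {k: p_fair * fair[k] + p_loaded * loaded[k] for k in range(1, 7)}
print(predictive)  # Bob's overall distribution shifts slightly toward 6
```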

Comment author: polymathwannabe 26 January 2015 04:03:50PM 0 points [-]

What "fundamental definition of probability" are you using?

Comment author: AmagicalFishy 26 January 2015 04:11:45PM *  0 points [-]

A quantitative thing that indicates how likely it is for an event to happen.

Comment author: AmagicalFishy 26 January 2015 03:39:41PM 5 points [-]

I still don't understand the apparently substantial difference between Frequentist and Bayesian reasoning. The subject was brought up again in a class I just attended—and I was still left with a distinct "... those... those aren't different things" feeling.

I am beginning to come to the conclusion that the whole "debate" is a case of Red vs. Blue nonsense. So far, whenever one tries to elaborate on a difference, it is done via some hypothetical anecdote, and said anecdote rarely amounts to anything outside of "Different people sometimes treat uncertainty differently in different situations, depending on the situation." (Usually by having one's preferred side make a very reasonable conclusion, and the other side make some absurd leap of pseudo-logic.)

Furthermore, these two things hardly ever seem to have anything to do with the fundamental definition of probability, and have everything to do with the assumed simplicity of a given system.

I AM ANGRY

Comment author: RichardKennaway 12 January 2015 08:24:37AM *  0 points [-]

Why does observing a finite amount of light from a finite distance contradict anything about the range of electromagnetic radiation?

I guess this is a reference to Olbers' paradox. If every ray projected from a given point must eventually hit the surface of a star, then the night sky should look uniformly as bright as the Sun.
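A rough sketch of why, assuming an infinite, static universe uniformly filled with identical stars of luminosity L at number density ρ: a thin shell at distance r contains a number of stars proportional to r², while each star's flux falls off as 1/r², so every shell contributes the same brightness and the total diverges:

$$dF \;\propto\; \rho \, 4\pi r^2 \, dr \cdot \frac{L}{4\pi r^2} \;=\; \rho L \, dr, \qquad F \;\propto\; \int_0^\infty \rho L \, dr \;\to\; \infty$$

(in practice nearer stars block farther ones, which caps the sky at roughly the surface brightness of a star rather than at infinity).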

Comment author: AmagicalFishy 12 January 2015 09:49:25PM 1 point [-]

This ends up being somewhat circular then, doesn't it?

Olbers' paradox is only a paradox in an infinite, static universe. A finite, expanding universe explains the night sky very well. One can't use Olbers' paradox to discredit the idea of an expanding universe when Olbers' paradox depends on the universe being static.

Furthermore, upon re-reading MazeHatter's "The way I see it is..." comment, Theory B does not put us at some objective center of reality. An intuitive way to think about it is: Imagine "space" being the surface of a balloon. Place dots on the surface of the balloon, and blow the balloon up. The distance between dots in all directions expands. One can arbitrarily consider one dot as the "center," but that doesn't change anything.

I'm beginning to think that MazeHatter's comments do not warrant as much discussion as has taken place in this thread. =\

Comment author: gjm 12 January 2015 11:25:18AM 1 point [-]

They aren't explicit, but my moral decisions all form a consistent web.

How do you know? (Or, if the answer is "I can just tell" or something: How do you know that your consistency is any better than anyone else's?)

Comment author: AmagicalFishy 12 January 2015 09:35:37PM 0 points [-]

Trial-and-error.

There are, of course, inconsistencies that I'm unaware of: These are known unknowns. The idea, though, is that when I'm presented with a situation, any such relevant inconsistencies come up and are eliminated (either by a change of the foundation or a change of the judgement).

That is, inconsistencies that exist but don't come up aren't relevant.

An example—extreme but illustrative: Say an element of this foundational set is "I want to 'treat everyone equally'". I interview a Blue man for a job and, upon reflecting, think very negatively of him, even though he's more qualified than others. When I review the interview as if I were a 3rd party [ignorant of any differences between Blue people and regular people], I come to the conclusion that the interview was actually pretty solid.

I now have a choice to make. Do I actually want to treat people equally? If so, then I must think differently of this Blue man and his Blue people, give him this job, and make a very conscious effort to incorporate Blue people into my "everybody" perception. This is a change in judgement. Or, maybe I don't want to treat everyone equally—maybe I want to treat everyone who's not Blue equally. This is a change in foundation (but this change in foundation would have to be consistent with the other elements in the foundation-set, or those, too, would change).

But, until now, my perception of Blue people was irrelevant.

Perhaps it would have been best to say: the process by which I make moral decisions is built to maximize consistency. A lot goes into this, everything from honing the ability to look at a situation as a third party to comparing a decision with decisions I've made in the past. As a result, there's a very practiced part of me that immediately responds to nearly all situations with "Is this inconsistent?"

(An unrelated note: Are there things in this post I could have eliminated to get the same point across, but be more succinct? I often feel as if my responses [in general] are too long.)

Comment author: gjm 10 January 2015 08:30:56PM 0 points [-]

Is this really much easier than shifting the decimal place and then adding half the number? (Rounding at the start if you want, which you probably do.)

Comment author: AmagicalFishy 12 January 2015 03:37:37AM 0 points [-]

Haha, that's what I do.

If my cost is $14.32, I know $1.43 is 10%, and half of that is about $0.71, so the tip's $2.14 (though I tip 20%, which is even easier).
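(A tiny sketch of the same shortcut, with quick_tip as a made-up name just for illustration.)

```python
def quick_tip(bill):
    # Mental-math shortcut: 10% is the bill with the decimal shifted,
    # and 15% adds half of that on top.
    ten_percent = round(bill * 0.10, 2)
    return ten_percent + ten_percent / 2

print(quick_tip(14.32))  # about 2.145, matching the ~$2.14 estimate above
```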

Comment author: Capla 08 January 2015 11:23:51PM 1 point [-]

Is sex significantly more pleasurable than masturbation? Why?

Comment author: AmagicalFishy 12 January 2015 03:25:22AM 5 points [-]

Yes and no. It's a different experience—like taking a bath and going swimming.

Comment author: AmagicalFishy 12 January 2015 03:21:53AM 2 points [-]

Why is the Newcomb problem... such a problem? I've read analysis of it and everything, and still don't understand why someone would two-box. To me, it comes down to:

1) Thinking you could fool an omniscient super-being
2) Preserving some strictly numerical ideal of "rationality"

Time-inconsistency and all these other things seem totally irrelevant.
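For what it's worth, with the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) and a predictor who is right with probability p, the expected values come out to roughly:

EV(one-box) = p × $1,000,000
EV(two-box) = $1,000 + (1 − p) × $1,000,000

so one-boxing wins whenever p > 1,001,000 / 2,000,000 ≈ 0.5005, i.e. for any predictor even slightly better than chance.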
