I might be a bit blind, but what are Priz1 and Priz2? Because here it looks like Priz1 = Priz2. And what do the priors do? What are your hypotheses?

I am sorry if I didn't get it (and maybe I'm looking like a fool right now).

In response to
Priors Are Useless


[;Pr_{i_{z1}};] and [;Pr_{i_{z2}};] are the posterior probabilities on [;Pr_{i_1};] and [;Pr_{i_2};] respectively.

This post contains LaTeX. Please install TeX the World for Chromium or a similar TeX typesetting extension to view this post properly.

Priors are irrelevant. Given two different prior probabilities [;Pr_{i_1};] and [;Pr_{i_2};] for some hypothesis [;H_i;]:

Let their respective posterior probabilities be [;Pr_{i_{z1}};] and [;Pr_{i_{z2}};].

After a sufficient number of experiments, the posterior probabilities converge: [;Pr_{i_{z1}} \approx Pr_{i_{z2}};].

Or more formally:

[;\lim_{n \to \infty} Pr_{i_{z1}} = \lim_{n \to \infty} Pr_{i_{z2}};].

Where [;n;] is the number of experiments.
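One sketch of why a limit like this holds, assuming the experiments are independent and both priors are strictly between 0 and 1 (the odds-form derivation below is my gloss, not part of the original post):

```latex
% Odds form of Bayes' theorem after n independent observations D_1, ..., D_n:
\[
  O_n \;=\; O_0 \prod_{k=1}^{n} \frac{P(D_k \mid H_i)}{P(D_k \mid \neg H_i)}
\]
% The product of likelihood ratios is the same for both agents; only the prior
% odds O_0 differ. If H_i is true, the product diverges almost surely (the
% expected log likelihood ratio is a positive KL divergence), so for any prior
% odds O_0 > 0 both posteriors are driven to the same limit:
\[
  \lim_{n \to \infty} Pr_{i_{z1}} \;=\; \lim_{n \to \infty} Pr_{i_{z2}} \;=\; 1
\]
```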

Therefore, priors are useless.

The above is true because, as we carry out subsequent experiments, the posterior probability [;Pr_{i_{z1}};] gets closer and closer to the true probability of the hypothesis, [;Pr_i;]. The same holds true for [;Pr_{i_{z2}};]. As such, if you have access to a sufficient number of experiments, the initial prior probability you assigned the hypothesis is irrelevant.
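A convergence of this kind can be sketched with conjugate Beta–Bernoulli updating. This is a hypothetical simulation: the two agents' priors, the coin bias, the trial count, and the random seed are all my own assumptions, not the post's original numbers.

```python
import random

random.seed(0)

# Two agents start with very different Beta priors on a coin's bias,
# observe the same flips, and their posterior means converge.
true_p = 0.5                      # true probability of heads
priors = {"agent1": (1, 9),      # Beta(1, 9): prior mean 0.1
          "agent2": (9, 1)}      # Beta(9, 1): prior mean 0.9

flips = [random.random() < true_p for _ in range(10_000)]
heads = sum(flips)
n = len(flips)

posterior_means = {}
for name, (a, b) in priors.items():
    # Beta(a, b) prior + Bernoulli data -> Beta(a + heads, b + tails) posterior
    posterior_means[name] = (a + heads) / (a + b + n)

gap = abs(posterior_means["agent1"] - posterior_means["agent2"])
print(posterior_means, gap)
```

After 10,000 shared observations the two posterior means differ by less than a thousandth, even though the prior means differed by 0.8.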

To demonstrate:

http://i.prntscr.com/hj56iDxlQSW2x9Jpt4Sxhg.png

This is the graph of the above table:

http://i.prntscr.com/pcXHKqDAS_C2aInqzqblnA.png

In the example above, the true probability of hypothesis [;H_i;] is [;Pr_i;], and as we see, after a sufficient number of trials, the different [;Pr_{i_z};]s get closer to [;Pr_i;].

To generalize from my above argument:

If you have enough information, your initial beliefs are irrelevant—you will arrive at the same final beliefs.

Because I can’t resist, a corollary to Aumann’s agreement theorem:

Given sufficient information, two rationalists will always arrive at the same final beliefs irrespective of their initial beliefs.

The above can be generalized to what I call the “Universal Agreement Theorem”:

Given sufficient evidence, all rationalists will arrive at the same set of beliefs regarding a phenomenon irrespective of their initial set of beliefs regarding said phenomenon.

Prove the above.

In response to
Welcome to Less Wrong!

I’m a 19-year-old Nigerian male. I am strictly heterosexual and an atheist. I am a strong narcissist, and I may have Narcissistic Personality Disorder (though I am cognizant of this vulnerability and work against it, which would lower the probability of my suffering from NPD). I am ambitious, and my goal in life is to plant my flag on the sands of time; engrave my emblem in history; immortalise myself in the memory of humanity. I desire to be the greatest man of the 21st century. I am a transhumanist and intend to live indefinitely, but failing that, being the greatest man of the 21st century would suffice. I fear death.

I'm an insatiably curious person. My interests are broad: rationality, science, mathematics, philosophy, economics, computing, and literature.

My hobbies include discourse and debate, writing, reading, anime and manga, strategy games, problem solving and learning new things.

I find intelligence the most attractive quality in a potential partner—ambition and drive form a close second.

I am working on an article titled "You Can Gain Information Through Psychoanalysing Others", with the central thesis being that, given the probability someone assigns a proposition and their calibration, you can calculate a Bayesian probability estimate for the truth of that proposition.

For the article, I would need a rigorously mathematically defined system for calculating calibration given someone's past prediction history. I thought of developing one myself, but realised it would be more prudent to inquire if one has already been invented to avoid reinventing the wheel.
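To illustrate the kind of system being asked about, here is one minimal sketch of recalibrating a stated probability against a prediction track record. Everything here is invented for illustration — the bucketing scheme, the numbers, and the function name are all assumptions, not an established method:

```python
# Hypothetical track record, bucketed by stated probability:
# stated probability -> (times the proposition turned out true, total predictions)
track_record = {
    0.9: (35, 50),   # when they say 90%, they have been right 70% of the time
    0.7: (30, 50),
    0.5: (25, 50),
}

def recalibrated_estimate(stated_p, record):
    """Return the empirical frequency of truth for this stated probability,
    i.e. an estimate of P(proposition true | they assert stated_p)."""
    correct, total = record[stated_p]
    return correct / total

print(recalibrated_estimate(0.9, track_record))  # 0.7
```

A real system would need smoothing across buckets and a prior for sparse histories; this only shows the basic idea of using calibration data as evidence.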

Thanks in advance for your cooperation. :)

# Disclaimer

I am chronically afflicted with a serious and invariably fatal epistemic disease known as narcissist bias (this is a misnomer, as it refers to a broad family of biases). No cure is known yet for narcissist bias, and I’m currently working on cataloguing and documenting the disease in full, using myself as a test case. This disease affects how I present and articulate my points—especially in written text—such that I assign a Pr of > 0.8 that somebody would find this post condescending, self-aggrandising, grandiose or otherwise deluded. This seems to be a problem with all my writing, and a cost of living with the condition, I guess. I apologise in advance for any offence received, and note that I do not intend to offend anyone or otherwise hurt their sensibilities.

In response to
Any Christians Here?

and be blessed you will also testify the good work.

Do you think the argument from infinity is in fact a valuable heuristic?

More scattering of information, presumably.

The second law of thermodynamics I see...


I definitely agree that after we become omniscient it won't matter where we started...but going from there to priors 'are useless' seems like a stretch. Like, shoes will be useless once my feet are replaced with hover engines, but I still own them now.

But this isn't all there is to it.

@Alex: also, take a set of rationalists with different priors. Let this set of priors be S.

Let the standard deviation of S after [;i;] trials be [;d_i;].

[;d_{i+1} \leq d_i;] for all [;i \in \mathbb{N};]. The more experiments are conducted, the greater the precision of the probabilities of the rationalists.
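The shrinking spread can be sketched with conjugate Beta priors. Assuming every rationalist's prior has the same pseudo-count s (an assumption I'm adding so the spread is exactly computable), the standard deviation of their posterior means falls like 1 / (s + n) regardless of what data comes in:

```python
import statistics

s = 10                                  # shared prior pseudo-count (assumption)
prior_heads = [1, 3, 5, 7, 9]           # Beta(a, s - a) priors: the set S

def posterior_spread(n_trials, n_heads):
    # Posterior mean of Beta(a, s - a) after n_trials Bernoulli observations
    # with n_heads successes is (a + n_heads) / (s + n_trials).
    means = [(a + n_heads) / (s + n_trials) for a in prior_heads]
    return statistics.stdev(means)

# Spread across the group after 0, 10, 100, and 1000 shared trials
# (here about half heads, though the data cancels out of the spread):
spreads = [posterior_spread(n, n // 2) for n in (0, 10, 100, 1000)]
print(spreads)
```

The spread is stdev(prior_heads) / (s + n), so it decreases monotonically in n: a deterministic version of the d_{i+1} <= d_i claim, under the shared-pseudo-count assumption.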