## Probabilistic Löb theorem

24 26 April 2013 06:45PM

In this post (based on results from MIRI's recent workshop), I'll be looking at whether reflective theories of logical uncertainty (such as Paul's design) still suffer from Löb's theorem.

Theories of logical uncertainty are theories which can assign probability to logical statements. Reflective theories are theories which know something about themselves within themselves. In Paul's theory, there is an external P, in the meta language, which assigns probabilities to statements, an internal P, inside the theory, that computes probabilities of coded versions of the statements inside the language, and a reflection principle that relates these two P's to each other.

And Löb's theorem is the result that if a (sufficiently complex, classical) system can prove that "a proof of Q implies Q" (often abbreviated as □Q → Q), then it can prove Q. What would be the probabilistic analogue? Let's use □aQ to mean P('Q') ≥ 1-a (so that □0Q is the same as the old □Q; see this post on why we can interchange probabilistic and provability notions). Then Löb's theorem in a probabilistic setting could read:

Probabilistic Löb's theorem: for all a<1, if the system can prove □aQ → Q, then the system can prove Q.

To understand this condition, we'll go through the proof of Löb's theorem in a probabilistic setting, and see if and when it breaks down. We'll conclude with an example to show that any decent reflective probability theory has to violate this theorem.
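For reference, here is a sketch of the classical proof that the probabilistic argument will parallel. This is standard material: it uses the diagonal lemma to obtain a fixed point Ψ, together with the usual derivability conditions for □.

```latex
% Assume the derivability conditions:
%   (D1) if $\vdash A$ then $\vdash \Box A$
%   (D2) $\vdash \Box(A \to B) \to (\Box A \to \Box B)$
%   (D3) $\vdash \Box A \to \Box\Box A$
% and the hypothesis $\vdash \Box Q \to Q$.
\begin{align*}
&\vdash \Psi \leftrightarrow (\Box\Psi \to Q)   && \text{diagonal lemma}\\
&\vdash \Box\Psi \to \Box(\Box\Psi \to Q)       && \text{D1 + D2 on the fixed point}\\
&\vdash \Box\Psi \to (\Box\Box\Psi \to \Box Q)  && \text{D2}\\
&\vdash \Box\Psi \to \Box Q                     && \text{D3, combining}\\
&\vdash \Box\Psi \to Q                          && \text{hypothesis } \Box Q \to Q\\
&\vdash \Psi                                    && \text{fixed point, right to left}\\
&\vdash \Box\Psi                                && \text{D1}\\
&\vdash Q                                       && \text{modus ponens}
\end{align*}
```

The probabilistic question is which of these steps survives when □ is replaced by □a.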

## Proof of fungibility theorem

3 12 January 2013 09:26AM

Appendix to: A fungibility theorem

Suppose that $P$ is a set and we have functions $v_1, \dots, v_n : P \to \mathbb{R}$. Recall that for $p, q \in P$, we say that $p$ is a Pareto improvement over $q$ if for all $i$, we have $v_i(p) \geq v_i(q)$. And we say that it is a strong Pareto improvement if in addition there is some $i$ for which $v_i(p) > v_i(q)$. We call $p$ a Pareto optimum if there is no strong Pareto improvement over it.

Theorem. Let $P$ be a set and suppose $v_i: P \to \mathbb{R}$ for $i = 1, \dots, n$ are functions satisfying the following property: For any $p, q \in P$ and any $\alpha \in [0, 1]$, there exists an $r \in P$ such that for all $i$, we have $v_i(r) = \alpha v_i(p) + (1 - \alpha) v_i(q)$.

If an element $p$ of $P$ is a Pareto optimum, then there exist nonnegative constants $c_1, \dots, c_n$ such that the function $\sum c_i v_i$ achieves a maximum at $p$.
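To make the statement concrete, here is a toy instance (our own illustrative example, not from the post): a discrete stand-in for the convex set $P = [0, 10]$, with two linear values pulling in opposite directions, so every point is a Pareto optimum, and each one maximizes some nonnegative weighted sum.

```python
# Toy illustration (our own example): a discrete stand-in for the convex
# set P = [0, 10], with two linear value functions in opposition.

def v1(p):
    """First value: prefers p large."""
    return p

def v2(p):
    """Second value: prefers p small."""
    return 10 - p

grid = list(range(11))  # stand-in for P

def strongly_dominated(p):
    """Is some q a strong Pareto improvement over p?"""
    return any(
        v1(q) >= v1(p) and v2(q) >= v2(p) and (v1(q) > v1(p) or v2(q) > v2(p))
        for q in grid if q != p
    )

# Raising v1 by one always lowers v2 by one, so nothing dominates
# anything: every point is a Pareto optimum.
optima = [p for p in grid if not strongly_dominated(p)]

# The theorem promises, for each optimum, nonnegative weights making it a
# maximizer of the weighted sum. Here c1 = c2 = 1 works for all of them,
# since v1 + v2 is constant (= 10) on the grid.
c1, c2 = 1, 1
weighted = lambda p: c1 * v1(p) + c2 * v2(p)
max_value = max(weighted(q) for q in grid)
print(optima == grid, weighted(3) == max_value)  # True True
```

The example also shows why the weights can differ between optima in general: a single weighted sum need not pick out every Pareto optimum at once.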

## Is Race Realism Racist?

-12 12 May 2012 04:05AM

Race Realism, AKA Human Biodiversity Theory, is an extremely contentious issue, which frequently seems to be owned by the extremists on both sides.  Some people say we should have a frank discussion on race, and personally I think we should have one.

The link that follows goes to a 20-minute YouTube video where I discuss the issue.  Is it racist to discuss race realism, by the colloquial definition of racist?  Well, sort of.  But that doesn't mean you should throw the baby out with the bath water.  Stormfront might happily embrace any study that shows disparate achievement, but that doesn't mean that the studies are false.

Are the Race Realists on the internet anti-black, or is sensible social policy based upon acceptance of differences?

...

BTW, I've acted like a jerk.  This will be deleted in 48 hours.

## [Link] A Bayes' Theorem Visualization

15 09 January 2012 04:44PM

A while ago when Bret Victor's amazing article Up and Down the Ladder of Abstraction was being discussed, someone mentioned that they'd like to see one made for Bayes' Theorem. I've just completed version 1.0 of my "Bayes' Theorem Ladder of Abstraction", and it can be found here: http://www.coarsegra.in/?p=111

(It uses the HTML5 Canvas element, so it won't work with older versions of IE.)

There are a few bugs in it, and it leaves out many things that I'd like to (eventually) include, but I'm reasonably satisfied with it as a first attempt. Any feedback for what works and what doesn't work, or what you think should be added, would be greatly appreciated.

## Log-odds (or logits)

18 28 November 2011 01:11AM

(I wrote this post for my own blog, and given the warm reception, I figured it would also be suitable for the LW audience. It contains some nicely formatted equations/tables in LaTeX, hence I've left it as a dropbox download.)

Logarithmic probabilities have appeared previously on LW here, here, and sporadically in the comments. The first is a link to an Eliezer post which covers essentially the same material. I believe this is a better introduction/description/guide to logarithmic probabilities than anything else that's appeared on LW thus far.

Introduction:

Our conventional way of expressing probabilities has always frustrated me. For example, it is very easy to say nonsensical statements like, “110% chance of working”. Or, it is not obvious that the difference between 50% and 50.01% is trivial compared to the difference between 99.98% and 99.99%. It also fails to accommodate the math correctly when we want to say things like, “five times more likely”, because 50% * 5 overflows 100%.
Jacob and I have (re)discovered a mapping from probabilities to log-odds which addresses all of these issues. To boot, it accommodates Bayes’ theorem beautifully. For something so simple and fundamental, it certainly took a great deal of google searching/wikipedia surfing to discover that they are actually called “log-odds”, and that they were “discovered” in 1944, instead of the 1600s. Also, nobody seems to use log-odds, even though they are conceptually powerful. Thus, this primer serves to explain why we need log-odds, what they are, how to use them, and when to use them.
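A minimal sketch of the mapping in code (Python; base 10, matching the updated article — the function names are ours):

```python
import math

def prob_to_logodds(p):
    """Map a probability in (0, 1) to base-10 log-odds."""
    return math.log10(p / (1.0 - p))

def logodds_to_prob(L):
    """Inverse map: base-10 log-odds back to a probability."""
    return 1.0 / (1.0 + 10.0 ** (-L))

# The scale makes the point above visible: 50% sits at 0, and the step
# from 99.98% to 99.99% is enormous compared to 50% -> 50.01%.
print(prob_to_logodds(0.5))                               # 0.0
print(prob_to_logodds(0.9999) - prob_to_logodds(0.9998))  # ~0.301
print(prob_to_logodds(0.5001) - prob_to_logodds(0.5))     # ~0.00017

# Bayes' theorem becomes addition: posterior log-odds = prior log-odds
# plus the log of the likelihood ratio P(E|H) / P(E|not H).
prior = prob_to_logodds(0.5)
posterior = prior + math.log10(0.8 / 0.2)  # evidence 4x likelier under H
print(logodds_to_prob(posterior))          # ~0.8
```

Note also that "five times more likely" never overflows here: multiplying odds by 5 is just adding log10(5) ≈ 0.7 to the log-odds.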

Article is here (Updated 11/30 to use base 10)

## AI reflection problem

4 [deleted] 27 November 2011 06:29AM

I tried to write down my idea a few times, but it was badly wrong each time. Now, instead of solving the problem, I'm just going to give a more conservative summary of what the problem is.

-

Eliezer's talk at the 2011 Singularity Summit focused largely on the AI reflection problem (how to build AI that can prove things about its own proofs, and execute self modifications on the basis of those proofs, without thereby reducing its self modification mojo). To that end, it would be nice to have a "reflection principle" by which an AI (or its theorem prover) can know in a self-referential way that its theorem proving activities are working as they should.

The naive way to do this is to use the standard provability predicate, ◻, which can be thought of as asking whether a proof of a given formula exists. Using this we can try to formalize our intuition that a fully reflective AI, one that can reason about itself in order to improve itself, should understand that its proof deriving behavior does in fact produce sentences derivable from its axioms:

AI ⊢ ◻P → P,

which is intended to be read as "The formal system "AI" understands in general that "If a sentence is provable then it is true" ", though literally it means something a bit more like "It is derivable from the formal system "AI" that "if there exists a proof of sentence P, then P", in general for P".

Surprisingly, attempting to add this to a formal system, like Peano Arithmetic, doesn't work so well. In particular, it was shown by Löb that adding this reflection principle in general lets us derive any statement, including contradictions.
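Concretely (these are standard facts, stated in the post's notation):

```latex
% Löb's theorem: for each sentence $P$,
%   if $\mathrm{AI} \vdash \Box P \to P$, then $\mathrm{AI} \vdash P$.
% Adding the reflection schema $\Box P \to P$ for all $P$ therefore
% makes every $P$ derivable; in particular, taking $P = \bot$:
%   $\mathrm{AI} \vdash \Box\bot \to \bot \implies \mathrm{AI} \vdash \bot$,
% i.e. the augmented system proves a contradiction.
```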

So our nice reflection principle is broken. We don't understand reflective reasoning as well as we'd like. At this point, we can brainstorm some new reflection principles: Maybe our reflection principles should be derivable within our formal system, instead of being tacked on. Also, we can try to figure out in a deeper way why "AI ⊢ ◻P → P" doesn't work: If we can derive all sentences, does that mean that proofs of contradictions actually do exist? If so, why weren't those proofs breaking our theorem provers before we added the reflection principle? Or do the proofs for all sentences only exist for the augmented theorem prover, not for the initial formal system? That would suggest our reflection principle is allowing the AI to trust itself too much, allowing it to derive things because deriving them allows them to be derived. Though it doesn't really look like that if you just stare at the old reflection principle. We are confused. Let's unconfuse ourselves.

## Considering all scenarios when using Bayes' theorem

9 20 June 2011 06:11PM

Disclaimer: this post is directed at people who, like me, are not Bayesian/probability gurus.

Recently I found an opportunity to use Bayes' theorem in real life to help myself update in the following situation (presented in gender-neutral way):

Let's say you are wondering if a person is interested in you romantically. And they bought you a drink.
A = they are interested in you.
B = they bought you a drink.
P(A) = 0.3 (Just an assumption.)
P(B) = 0.05 (Approximately 1 out of 20 people who might be at all interested in you will buy you a drink for some unknown reason.)
P(B|A) = 0.2 (Approximately 1 out of 5 people who are interested in you will buy you a drink for some unknown reason. Though it's more likely they will buy you a drink because they are interested in you.)

These numbers seem valid to me, and I can't see anything that's obviously wrong. But when I actually use Bayes' theorem:
P(A|B) = P(B|A) * P(A) / P(B) = 1.2
Uh-oh! Where did I go wrong? See if you can spot the error before continuing.

Turns out:
P(B|A) = P(A∩B) / P(A) ≤ P(B) / P(A) = 0.1667
BUT
P(B|A) = 0.2 > 0.1667

I've made a mistake in estimating my probabilities, even though it felt intuitive. Yet, I don't immediately see where I went wrong when I look at the original estimates! What's the best way to prevent this kind of mistake?
I feel pretty confident in my estimates of P(A) and P(B|A). However, estimating P(B) is rather difficult because I need to consider many scenarios.
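One mechanical safeguard (a sketch in Python; the helper name is ours) is to check the coherence constraint P(B|A)·P(A) ≤ P(B) — equivalently, that the posterior cannot exceed 1 — before dividing:

```python
def posterior(p_a, p_b, p_b_given_a):
    """Bayes' theorem with a coherence check on the input estimates."""
    joint = p_b_given_a * p_a  # P(A and B)
    if joint > p_b:
        # P(A and B) can never exceed P(B), so these estimates are
        # jointly impossible and would give a posterior above 1.
        raise ValueError(
            f"incoherent: P(B|A)*P(A) = {joint:.3f} > P(B) = {p_b}"
        )
    return joint / p_b  # P(A|B)

# The estimates above trip the check:
try:
    posterior(p_a=0.3, p_b=0.05, p_b_given_a=0.2)
except ValueError as err:
    print(err)
```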

I can compute P(B) more precisely by considering all the scenarios that would lead to B happening (see wiki article):

P(B) = ∑_i P(B|H_i) * P(H_i)

Let's do a quick breakdown of everyone who would want to buy you a drink (out of the pool of people who might be at all interested in you):
P(misc. reasons) = 0.05; P(B|misc) = 0.01
P(they are just friendly and buy drinks for everyone they meet) = 0.05; P(B|friendly) = 0.8
P(they want to be friends) = 0.3; P(B|friends) = 0.1
P(they are interested in you) = 0.6; P(B|interested) = P(B|A) = 0.2
So, P(B) = 0.1905
And, P(A|B) = P(B|A) * P(A) / P(B) = 0.2 * 0.6 / 0.1905 ≈ 0.63 (very different from 1.2!). Note that the breakdown also revises the prior upward: P(A) is now 0.6, not the 0.3 we started with.
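The breakdown can be checked mechanically (a Python sketch; the scenario labels are our shorthand, and the prior P(A) = 0.6 is the one the breakdown itself assigns, rather than the 0.3 assumed at the start):

```python
# Law of total probability over the four scenarios in the breakdown:
# each entry is (prior P(H_i), likelihood P(B|H_i)).
scenarios = {
    "misc":       (0.05, 0.01),
    "friendly":   (0.05, 0.80),
    "friends":    (0.30, 0.10),
    "interested": (0.60, 0.20),
}

# The priors must cover the whole pool of people under consideration.
assert abs(sum(prior for prior, _ in scenarios.values()) - 1.0) < 1e-9

p_b = sum(prior * lik for prior, lik in scenarios.values())
print(round(p_b, 4))  # 0.1905

# Posterior on "interested", with the breakdown's own prior of 0.6.
prior_a, lik_a = scenarios["interested"]
p_a_given_b = lik_a * prior_a / p_b
print(round(p_a_given_b, 2))  # 0.63
```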

Once I started thinking about all possible scenarios, I found one I hadn't considered explicitly -- some people buy drinks for everyone they meet -- which adds a good amount of probability (0.04) to B happening. (Those types of people are rare, but they WILL buy you a drink.) There are also other interesting assumptions that are made explicit:

• Out of all the people under consideration in this problem, there are twice as many people who would be romantically interested in you vs. people who would want to be your friend.
• People who are interested in you will buy you a drink twice as often as people who want to be your friend.

The moral of the story is to consider all possible scenarios (models/hypotheses) which can lead to the event you have observed. It's possible you are missing some scenarios which, once considered, will significantly alter your probability estimates.

Do you know any other ways to make the use of Bayes' theorem more accurate? (Please post in comments, links to previous posts of this sort are welcome.)

## An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem

16 18 December 2010 01:26PM

Common Sense Atheism has recently had a string of fantastic introductory LessWrong-related material: first easing its audience into the singularity, then summarising the sequences, yesterday affirming that Death is a Problem to be Solved, and finally today presenting An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem.

From the article:

Eliezer’s explanation of this hugely important law of probability is probably the best one on the internet, but I fear it may still be too fast-moving for those who haven’t needed to do even algebra since high school. Eliezer calls it “excruciatingly gentle,” but he must be measuring “gentle” on a scale for people who were reading Feynman at age 9 and doing calculus at age 13 like him.

So, I decided to write an even gentler introduction to Bayes’ Theorem. One that is gentle for normal people.

It may be interesting if you want to review Bayes' Theorem from a different perspective, or offer some introductory material to others. From a wider viewpoint, it's great to see a popular blog joining our cause of raising the sanity waterline.