Last Saturday, nine people met for the Southern California FAI Workshop. Unsurprisingly, we did not come up with any major results, but I know some people were curious about this experiment, so I am providing a summary anyway.
First, I would like to say that I consider this first meeting a success. The turnout was higher than I expected: we had 9 participants, and 2 other people could not attend due to scheduling conflicts. We stayed on topic for essentially the entire seven hours, from 10:00 to 5:00, and then had dinner, generously provided by MIRI. We will be hosting these workshops again; in fact, we have decided to hold them monthly, probably meeting on the first Saturday of each month starting in June. I will make another post announcing the second meetup once the date is finalized.
We talked about various ideas participants had about FAI, but most of our time was spent thinking about probability distributions on consistent theories. One thing we observed is that if you view the space of all probability assignments to logical sentences as living inside the vector space of all functions from sentences to the real numbers, then the collection of coherent probability assignments (those which correspond to probability distributions on consistent theories) is an affine subspace. This is exciting, because we can set up an inner product on this vector space and orthogonally project probability assignments onto the closest point in this subspace (i.e., find a coherent probability assignment near a given probability assignment). Further, while this projection is not computable, there exists a computable procedure which converges to it.

However, I am now convinced that this idea is a dead end, for the following reason: just because the point you start with has all coordinates between 0 and 1 does not mean that its projection onto the subspace of coherent assignments still has all coordinates between 0 and 1. (Imagine a 3d unit cube, and suppose an assignment is coherent iff x+y+z=1. If you project (1,1,0) onto this subspace, you get (2/3,2/3,-1/3), which is not a valid probability assignment.)
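The counterexample above is easy to check numerically. A minimal sketch (assuming numpy): orthogonal projection onto the affine subspace {x : x_1 + ... + x_n = 1} just subtracts the excess mass equally from every coordinate, since the normal direction is (1, ..., 1).

```python
import numpy as np

def project_to_hyperplane(p):
    # Orthogonal projection onto the affine subspace {x : sum(x) = 1}.
    # The subspace's normal direction is (1, ..., 1), so we subtract
    # the excess (sum(p) - 1) spread equally over the n coordinates.
    n = len(p)
    return p - (p.sum() - 1.0) / n

p = np.array([1.0, 1.0, 0.0])
print(project_to_hyperplane(p))  # → [ 2/3,  2/3, -1/3 ]
```

The result sums to 1 but has a negative coordinate, so it is not a valid probability assignment, even though the starting point was inside the unit cube.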
However, we did get several good things out of the meeting. First, we introduced several new mathy people to the problems associated with FAI. Second, we set up an email list, so that we can bounce ideas off of people we know personally who are interested in this stuff. Third, and most importantly, we have become excited about doing more. I personally spent most of the day after the workshop writing up lots of material related to the observation above (this was before I discovered that it did not work), and I know I am not the only one to have this reaction.
Thanks to all of the participants, and please let me know if you would be interested in joining us next time!
Why is this an issue? Just project onto the simplex (x_1 + ... + x_n = 1, x_1 >= 0, ..., x_n >= 0) instead of the affine subspace. This is perfectly possible in O(n) time (Duchi et al., 2008). You can add many more constraints and still make efficient projections.
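For concreteness, here is a sketch of the simpler sort-based variant of that projection (O(n log n) rather than the linear-time version; numpy assumed). It finds the threshold theta such that clipping v - theta at zero yields a point on the simplex.

```python
import numpy as np

def project_to_simplex(v):
    # Euclidean projection of v onto {x : sum(x) = 1, x >= 0},
    # via the sort-and-threshold scheme: find theta so that
    # max(v - theta, 0) sums to 1.
    n = len(v)
    u = np.sort(v)[::-1]          # coordinates in descending order
    css = np.cumsum(u)            # partial sums of the sorted values
    # Largest index j (0-based) with u_j > (css_j - 1) / (j + 1)
    rho = np.nonzero(u * np.arange(1, n + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

print(project_to_simplex(np.array([1.0, 1.0, 0.0])))  # → [0.5, 0.5, 0. ]
```

On the example from the post, (1,1,0) projects to (0.5, 0.5, 0), which is a genuine probability assignment, unlike the raw hyperplane projection.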
But I have to admit I'm confused about what use this is. What is the application supposed to be, and why is simply dividing by the sum of probabilities insufficient?
The basis vectors do not in general correspond to a set of propositions exactly one of which is true, but merely to a set of propositions with some set of logical constraints. (E.g., if x, y, and z are all logically equivalent, then an assignment is coherent iff x=y=z; the situation where an assignment is coherent iff x+y+z=1 was just an example.)