Results from MIRI's December workshop
Last week (Dec. 14-20), MIRI ran its 6th research workshop on logic, probability, and reflection. Writing up mathematical results takes time, and in the past it has taken quite a while for results from these workshops to become available even in draft form. Because of this, at the December workshop we tried something new: taking time during the workshop, and in the days immediately after it, to write up results in quick and somewhat dirty form, while they still feel fresh and exciting. In total, there are seven short writeups. Here's a list, with short descriptions of each.

Before you get started on these writeups, you may want to read John Baez's blog post about the workshop, which gives an introduction to its two main themes.

Theme 1: Scientific induction in mathematics

One of the main themes was using Bayesian probability to represent uncertainty about mathematical statements. Like human mathematicians, an AI will be able to outright prove or disprove many mathematical statements, but there will also be many that it is uncertain about, and the obvious way to get a handle on that uncertainty is to assign a probability to each such statement. This means choosing some sort of prior and then updating on evidence: for example, if you're not sure whether the twin prime conjecture is true, then each time you discover a new twin prime larger than any you have seen before, you should ever so slightly increase the probability you assign to the conjecture (a toy sketch of this kind of update appears below).

But what sort of prior should we choose? That's the problem that people at the December workshop tried to make some progress on.

Here is an interesting problem which the best previous proposal, due to Abram Demski, fails on: Suppose that Q(x) is a predicate symbol, and suppose that you find evidence that Q(x) is true of exactly 90% of all numbers between x=1 and x=10^100. Then we would expect that if you plug in some arbitrary number in this range, say n = floor(7^7^7^7), the probability you assign to Q(n) should be 0.9, since you have no other relevant information about n. Demski's proposal fails this test.
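To give a rough feel for the kind of object under discussion, here is a drastically simplified caricature of a Demski-style prior: the probability of a sentence is the chance that a randomly grown consistent theory comes to imply it. This is a sketch under strong assumptions, not the actual construction; the real proposal works over first-order sentences weighted by description length, while the sketch below uses three propositional atoms, a hand-picked pool of literals with uniform weights, and invented names (MODELS, POOL, demski_style_probability) throughout.

```python
import itertools
import random

ATOMS = 3
# All 2^3 truth assignments over the atoms; a "sentence" is represented
# by the set of assignments that satisfy it.
MODELS = list(itertools.product([0, 1], repeat=ATOMS))

def literal(i, sign):
    """The sentence asserting that atom i has truth value sign."""
    return frozenset(m for m in MODELS if m[i] == sign)

# Stand-in for an enumeration of all sentences; real Demski weights
# sentences by length, here every literal simply gets equal weight.
POOL = [literal(i, s) for i in range(ATOMS) for s in (0, 1)]

def demski_style_probability(phi, trials=10_000, seed=0):
    """Estimate P(phi): the chance a randomly grown consistent theory
    implies phi before it implies not-phi."""
    rng = random.Random(seed)
    not_phi = frozenset(MODELS) - phi
    hits = 0
    for _ in range(trials):
        models = frozenset(MODELS)   # models of the (empty) theory so far
        while True:
            if models <= phi:        # theory now implies phi
                hits += 1
                break
            if models <= not_phi:    # theory now implies not-phi
                break
            s = rng.choice(POOL)     # sample a candidate sentence
            if models & s:           # add it only if it stays consistent
                models &= s
    return hits / trials

# By symmetry, an atom about which we know nothing should come out near 0.5:
print(demski_style_probability(literal(0, 1)))
```

The estimate for a bare atom lands near 0.5 by symmetry, as one would hope; the failure described above is that Demski's actual prior does not end up assigning probability close to 0.9 to Q(n) in the situation described.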
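Separately, the updating step in the twin-prime example is ordinary Bayes' rule applied to a likelihood model. The toy sketch below is purely illustrative and not from any of the writeups: the two hypotheses, the "search window" framing, and the likelihoods p1 and p0 are all made-up assumptions.

```python
# Toy Bayesian updating on evidence for a conjecture. Two hypotheses:
#   H1: the twin prime conjecture is true (twin primes keep appearing)
#   H0: it is false (twin primes eventually run out)
# Crude likelihood model (invented for illustration): under H1 each new
# search window contains a fresh twin prime with probability p1; under
# H0, with slightly smaller probability p0. A more honest H0 model
# would make that probability decay across windows.

def update(prior_h1, found_twin_prime, p1=0.55, p0=0.5):
    """One step of Bayes' rule, returning P(H1 | evidence)."""
    like_h1 = p1 if found_twin_prime else 1 - p1
    like_h0 = p0 if found_twin_prime else 1 - p0
    joint_h1 = like_h1 * prior_h1
    return joint_h1 / (joint_h1 + like_h0 * (1 - prior_h1))

belief = 0.5  # start out agnostic about the conjecture
for window in range(10):
    belief = update(belief, found_twin_prime=True)
    print(f"after window {window + 1}: P(conjecture) = {belief:.4f}")
```

Each discovery nudges the posterior up a little, which is exactly the "ever so slightly increase" behavior described above; the open question is what prior to start from, not the update rule itself.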