Take a programming language with two characters. Assign each program a prior of 2^-length(program). If the program outputs some string, then P(string | program) = 1; otherwise it equals 0.
This is exactly Solomonoff induction, assuming that the programming language is prefix-free.
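The prefix-free requirement is what makes the 2^-length prior sum to at most 1 (the Kraft inequality), so it can be normalized into a probability distribution. A minimal sketch, using a toy set of codewords over {0, 1} (these stand in for programs; the set is my invention for illustration):

```python
# Checking that a 2^-length prior over a prefix-free set of
# two-character "programs" has total mass <= 1 (Kraft inequality).

from fractions import Fraction

def is_prefix_free(programs):
    """True if no program is a proper prefix of another."""
    return not any(p != q and q.startswith(p) for p in programs for q in programs)

# A toy prefix-free set over the alphabet {0, 1}.
programs = ["00", "01", "10", "110", "111"]
assert is_prefix_free(programs)

prior_mass = sum(Fraction(1, 2**len(p)) for p in programs)
print(prior_mass)  # <= 1 by Kraft; for this particular set, exactly 1
```

If the set were not prefix-free (say, both "0" and "00"), the sum could exceed 1 and the prior would not normalize.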
I figure there must be some reason people don't do this already, or else there's a bunch of people doing it. I'd be real happy to find out about either.
Because it's uncomputable: there are infinitely many programs to consider, and some programs never halt, and for many of those you can't even prove that they don't halt.
If you set a maximum program length and a maximum execution time, then inference becomes computable, albeit combinatorially hard: maximum a posteriori inference can be easily reduced to MAX-SAT or ILP and is therefore NP-complete (optimization version), while posterior evaluation is #P-complete.
It's known that many NP-hard problems are practically solvable for real-world instances. In this case, there are indeed people (primarily at Microsoft, I think) who managed to make it work, but only for short program lengths in limited application domains: Survey paper.
Here's how I predict your setup will work, and shame on you for chickening out:
http://sketchtoy.com/66313589
You start doing the experiment: you flip four coins, 15/16 of you memorize a sequence, and 15*32 of you memorize pairwise (probably) different sequences. In the end, you have a 15/16 probability of finding yourself having memorized a sequence. If QI works and half of those of you who find tails in the first four coins commit suicide, start-experiment-you only has a 15/17 chance of finding themselves having found tails and failed to kill themselves.
What do you mean by "commit suicide" here? Memorize the results of 5 more coins?