jabowery

To what degree can the paper "Approval-directed agency and the decision theory of Newcomb-like problems" be expressed in the CTMU's mathematical metaphysics?
One man's random bit string is another man's ciphertext.
Ha, ha! As if the half-silvered mirror did different things on different occasions!
Ha, ha! As if the photon source were known to emit photons that were in all respects identical on different occasions!
The machine learning world is doing a lot of damage to society by confusing "is" with "ought", which, within AIXI, is equivalent to confusing its two unified components: Algorithmic Information Theory (compression) and Sequential Decision Theory (conditional decompression). This is a primary reason the machine learning world has failed to fund The Hutter Prize at anything remotely approaching the level required to attract talent away from grabbing the low-hanging fruit in the matrix-multiply hardware-lottery branches while failing to water the roots of the AGI tree. So the failure is in the machine learning world -- not the Hutter Prize criteria. There is... (read more)
This seems to be a red-herring issue. There are clear differences in the description complexity of Turing machines, so the issue seems merely to require a closure argument of some sort in order to decide which is simplest:
Pick the Turing machine that minimizes the length of the shortest program which simulates that Turing machine while running on that same Turing machine.
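A minimal formalization of that criterion, in notation supplied here rather than taken from the comment: write the length of U's shortest self-interpreter as a function of U, and select the machine that minimizes it.

```latex
\ell_{\mathrm{self}}(U) = \min\{\, \ell(p) : \forall x,\; U(p\,x) = U(x) \,\}, \qquad
U^{\ast} = \operatorname*{arg\,min}_{U} \ell_{\mathrm{self}}(U)
```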
Marcus Hutter provides a full formal approximation of Solomonoff induction which he calls AIXI-tl.
This is incorrect. AIXI is a Sequential Decision Theoretic AGI whose predictions are provided by Solomonoff Induction. AIXI-tl is an approximation of AIXI in which both Solomonoff Induction's predictions and Sequential Decision Theory's decision procedure are approximated.
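For reference, Hutter's AIXI action-selection rule (written here from the standard presentation in Hutter 2005; treat the notation as a sketch) shows the two components fused: the inner sum over programs q is Solomonoff Induction, and the outer expectimax over actions, observations, and rewards is Sequential Decision Theory.

```latex
a_k := \operatorname*{arg\,max}_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\left( r_k + \cdots + r_m \right)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```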
Lossless compression is the correct unsupervised machine learning benchmark, and not just for language models. To understand this, it helps to read the Hutter Prize FAQ on why it doesn't use perplexity:
http://prize.hutter1.net/hfaq.htm
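To make the distinction concrete, here is a minimal sketch (my own illustration; the two-part split and the numbers are assumptions, not the FAQ's) of why compressed size, unlike perplexity, charges the model for its own description:

```python
import math

def two_part_codelength_bits(model_bits, predictive_probs):
    """Total lossless-compression cost in bits: the model's own
    description length plus the data's codelength under the model.
    Perplexity reports only the second term, so it can be gamed by
    arbitrarily large models; compressed size cannot."""
    data_bits = -sum(math.log2(p) for p in predictive_probs)
    return model_bits + data_bits

# Hypothetical example: a model that costs 1e6 bits to describe and
# assigns probability 0.9 to each of 10,000 observed symbols.
print(two_part_codelength_bits(1_000_000, [0.9] * 10_000))
```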
Although Solomonoff proved this in the 60s, people keep arguing about it because they keep thinking they can, somehow, escape from the primary assumption of Solomonoff's proof: computation. The natural sciences are about prediction. If you can't make a prediction, you can't test your model. To make a prediction in a way that can be replicated, you need to communicate a model that the receiver can then use to make an assertion about the future. The language used to communicate this... (read more)
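Concretely, the result in question is Solomonoff's universal prior M and its predictive distribution (standard definitions, added here for reference); the sum ranges over programs p whose output begins with x.

```latex
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}, \qquad
M(x_{t+1} \mid x_{1:t}) = \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}
```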
The Hutter Prize for Lossless Compression of Human Knowledge reduced the value of The Turing Test to concerns about human psychology and society raised by Computer Power and Human Reason: From Judgment to Calculation (1976) by Joseph Weizenbaum.
Sadly, people are confused about the difference between techniques for model generation and techniques for model selection. This is no more forgivable than confusing mutation with natural selection, and it gets to the heart of the philosophy of science prior to any notion of hypothesis testing.
Where Popper could have taken a cue from Solomonoff is understanding that when an observation is not predicted by a model, one can immediately... (read more)
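That mechanism can be sketched in a few lines (an illustration supplied here, not the truncated comment's own continuation): in a Solomonoff-style mixture, a model contradicted by an observation drops to zero posterior weight immediately, with no hypothesis test required.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    prediction: int  # what the model says the next bit will be

def eliminate_falsified(weights, observation):
    """Models whose prediction is contradicted by the observation are
    removed outright; the surviving prior weights are renormalized."""
    survivors = {m: w for m, w in weights.items() if m.prediction == observation}
    total = sum(survivors.values())
    return {m: w / total for m, w in survivors.items()}

# Two rival models with 2^-|program|-style prior weights:
# the shorter program gets more weight.
weights = {Model("short", 1): 0.75, Model("long", 0): 0.25}
print(eliminate_falsified(weights, observation=1))  # "long" is falsified
```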
IS resides in Algorithmic Information Theory
OUGHT resides in Sequential Decision Theory
SDT(AIT) = AIXI
And don't get confused about AIXI being "just" a theory of AGI. Any system that makes decisions (what OUGHT I to do?) depends on learning (what IS the case?).
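A toy sketch of that decomposition (my illustration; the names and the two-model mixture are assumptions, not AIXI itself): the learning component supplies a weighted mixture of world models, and the decision component maximizes expected utility over it.

```python
def decide(actions, weighted_models, utility):
    """OUGHT layer (Sequential Decision Theory): pick the action with
    the highest expected utility under the learned mixture of models."""
    def expected_utility(action):
        return sum(w * utility(model(action)) for model, w in weighted_models)
    return max(actions, key=expected_utility)

# IS layer (Algorithmic Information Theory, stand-in): two candidate
# world models whose weights play the role of 2^-|program| priors.
weighted_models = [
    (lambda a: a * 2, 0.75),   # "short program": doubling world
    (lambda a: -a, 0.25),      # "long program": adversarial world
]
print(decide(actions=[0, 1, 2], weighted_models=weighted_models,
             utility=lambda outcome: outcome))  # chooses action 2
```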
Moreover, Hume's Guillotine has been the foundation of ethics for centuries, whether that is recognized or not.
The fact that the field of AGI ethics introduces the concept of a "sharp left turn" without regard to either the founding theory of AGI or the founding theory of ethics is quite a sight to behold!
See Hume's Guillotine on GitHub for a further exposition.