If we were to take AIXI literally, we'd be concerned that induction (the generation of predictive models from observation) appears to provide about half of general intelligence (the rest is decision theory).
I don't think "taking AIXI literally" in this way makes sense; nor does saying that decision theory is about half of general intelligence.
Thanks for the link, though.
I mean, it's not exactly provable from first principles, but using the architecture of AIXI as a heuristic for what a general intelligence will look like seems to make sense to me. 'Do reinforcement learning on a learned world model' is, I think, also what many people expect a GAI may in fact end up looking like, and saying that that's half decision theory and half predictive model doesn't seem too far off.
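To make the decomposition concrete, here's a minimal toy sketch of that 'RL on a learned world model' split. Everything here (the two-state environment, the count-based model class, the planning horizon) is an illustrative assumption of mine, and it's obviously nothing like AIXI itself; it's just meant to show the shape of the two halves.

```python
import random
from collections import defaultdict

class CountModel:
    """Induction half: estimate P(next_state, reward | state, action) from observed counts."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, state, action, next_state, reward):
        self.counts[(state, action)][(next_state, reward)] += 1

    def predict(self, state, action):
        outcomes = self.counts[(state, action)]
        total = sum(outcomes.values())
        if total == 0:
            return [((state, 0.0), 1.0)]  # uninformed fallback: assume nothing changes
        return [(outcome, n / total) for outcome, n in outcomes.items()]

def plan(model, state, actions, depth):
    """Decision-theory half: expectimax over the learned model; returns (value, best action)."""
    if depth == 0:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for a in actions:
        value = 0.0
        for (next_state, reward), prob in model.predict(state, a):
            future_value, _ = plan(model, next_state, actions, depth - 1)
            value += prob * (reward + future_value)
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

# Toy two-state environment (my own invention): action 1 in state "A" pays off.
def env_step(state, action):
    if state == "A" and action == 1:
        return "B", 1.0
    return "A", 0.0

model, state = CountModel(), "A"
for step in range(200):
    # epsilon-greedy: mostly act on the planner, occasionally explore at random
    if random.random() < 0.2:
        action = random.choice([0, 1])
    else:
        _, action = plan(model, state, [0, 1], depth=2)
    next_state, reward = env_step(state, action)
    model.update(state, action, next_state, reward)  # induction: refine the world model
    state = next_state
```

The point isn't the toy itself; it's that everything above the environment splits cleanly into "build a predictive model from observations" and "choose actions by expected value under that model", which is the sense in which the two halves feel comparably weighty.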
Well, I'm not sure there's any reason to think that we can tell, by looking at the mathematical idealizations, that the inductive parts will take about the same amount of work to create as the agentic parts, just because the formalisms seem to weigh similar amounts (and what does that seeming even mean?). I'm not sure our intuitions about the weights of the components mean anything.
If a thing has two main distinct parts, it seems reasonable to say that the thing is half part-1 and half part-2. This does not necessarily imply that the parts are equally difficult to create, although that would be a reasonable prior if you didn't know much about how the parts worked.
Is there any evidence that this is actually a general inductor, i.e. that as a prior it dominates some large class of functions? From skimming the paper it sounds like this could be interesting progress in ILP, but not necessarily groundbreaking or close to being a fully general inductor. At the moment I'd be more concerned about the transformer architecture potentially being used as (part of) a general inductor.
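(To spell out what I mean by "dominates" here, since it's the standard Solomonoff-style criterion rather than anything from the paper: a prior M dominates a class of distributions if for every ν in the class there is a constant c_ν > 0 such that M(x) ≥ c_ν · ν(x) for every finite string x. Dominance is what buys the usual convergence guarantee: whichever ν in the class is actually generating the data, M's predictions converge to ν's with ν-probability 1.)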
My impression is that it's interesting because it's good at some functions that deep learning is bad at (although unfortunately the paper doesn't make any head-to-head comparisons), but there are certainly a lot of things at which transformers would beat it. In particular, I would be very surprised if it could reproduce GPT-3 or DALL-E. So, if this leads to a major breakthrough, it would probably be through merging it with deep learning somehow.
I'm not aware of a technical definition of "general inductor". I meant that it's an inductor that is quite general.
Our system [the Apperception Engine] is able to produce interpretable human-readable causal theories from very small amounts of data
This is novel.
Wondering whether Integrated Information Theory dictates that most anthropic moments have internet access
Hm, to clarify: by "consciously" I didn't mean experiential weight/anthropic measure; in this case I meant the behaviors generally associated with consciousness (metacognition, centralized narratization of thought, that sort of thing), which I seem to equate with deliberateness... though maybe those things are only roughly equivalent in humans.
If we were to take AIXI literally, we'd be concerned that induction (the generation of predictive models from observation) appears to provide about half of general intelligence (the rest is decision theory). It also seems noteworthy that the models the apperception engine produces are reductive enough to be readable to humans: analyzable, classifiable, and generally comprehensible enough to be intelligently worked as components in an intellectual medium. That is to say, they may be amenable to a process of self-improvement informed by consciously applied principles and meta-knowledge, which in turn might be improved in similar ways. So, we should probably pay attention to this sort of thing.