There's an interesting parallel with Modal Combat. Both approaches want to express the idea that "moral agents are those that cooperate with moral agents". Modal Combat resolves the circularity with diagonalization, and Eigenmorality resolves it by finding a stable distribution.

This thread was prompted by this comment in the Open Thread.

That comment is about utilitarianism and doesn't mention "utility functions" at all.

Shouldn't AIXI include itself (for all inputs) recursively? If so, I don't think your sequence is well defined.

No, AIXI isn't computable and so does not include itself as a hypothesis.

the maximally unexpected sequence will be random

In a random sequence, AIXI would guess on average *half* of the bits. My goal was to create a specific sequence, where it couldn't guess *any*. Not just a random sequence, but specifically... uhm... "anti-inductive"? The exact opposite of lawful, where random is merely halfway opposed. I don't care about other possible predictors, only about AIXI.

Imagine playing rock-paper-scissors against someone who beats you all the time, whatever you do. That's worse than random. This sequence would bring the mighty AIXI to tears... but I suspect to a human observer it would merely seem pseudo-random. And is probably not very useful for other goals than making fun of AIXI.

Ok. I still think the sequence is random in the algorithmic information theory sense; i.e., it's incompressible. But I understand you're interested in the adversarial aspect of the scenario.

You only need a halting oracle to compute your adversarial sequence (because that's what it takes to run AIXI). A super-Solomonoff inductor that inducts over all Turing machines with access to halting oracles would be able to learn the sequence, I think. The adversarial sequence for that inductor would require a higher oracle to compute, and so on up the ordinal hierarchy.

I have a random mathematical idea, not sure what it means, whether it is somehow useful, or whether anyone has explored this before. So I guess I'll just write it here.

Imagine the *most unexpected* sequence of bits. What would it look like? Well, probably not what you'd expect, by definition, right? But let's be more specific.

By "expecting" I mean this: You have a prediction machine, similar to AIXI. You show the first N bits of the sequence to the machine, and the machine tries to predict the following bit. And the most *unexpected* sequence is one where the machine makes the most guesses *wrong*; preferably all of them.

More precisely: The prediction machine starts by imagining *all* possible algorithms that could generate sequences of bits, and assigns each of them a probability according to the Solomonoff prior. (Which is impossible to do in real life, because of the infinities involved, etc.) Then it receives the first N bits of the sequence, and removes all algorithms which would *not* generate a sequence starting with these N bits. Now it normalizes the probabilities of the remaining algorithms, and lets them vote on whether the next bit will be 0 or 1.

However, our sequence is generated in defiance of the prediction machine. We actually don't have any sequence in advance. We just ask the prediction machine what the next bit is (starting with the empty initial sequence), and then do the exact opposite. (There is some analogy with Cantor's diagonal proof.) Then we send the sequence with this new bit to the machine, ask it to predict the next bit, and again do the opposite. Etc.

There is a technical detail: the prediction machine may answer "I don't know" if *exactly half* of the remaining algorithms (by probability mass) predict that the next bit will be 0, and the other half predict that it will be 1. Let's say that if we receive this specific answer, we will always add 0 to the end of the sequence. (But if the machine thinks it's 0 with probability 50.000001%, and 1 with probability 49.999999%, it will output "0", and we will add 1 to the end of the sequence.)

So... at the beginning, there is no way to predict the first bit, so the machine says "I don't know" and the first bit is 0. At that moment, the prediction of the following bit is 0 (because the "only 0's" hypothesis is very simple), so the first two bits are 01. I am not sure here, but my next prediction (though I am predicting this with naive human reasoning, no math) would be 0 (as in "010101..."), so the first three bits are 011. -- And I don't dare to speculate about the following bits.
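The first few steps can be checked with a toy stand-in for the prediction machine. This is only a sketch under strong simplifying assumptions of my own: instead of all algorithms, the hypothesis class is just periodic bit patterns up to length 8, with weight 2^-len(pattern) standing in for the Solomonoff prior; all function names here are mine:

```python
from itertools import product

# Toy stand-in for Solomonoff induction (NOT real AIXI, which is
# uncomputable): hypotheses are periodic bit patterns up to length 8,
# weighted 2^-len so that shorter patterns count as "simpler".
MAX_LEN = 8
HYPOTHESES = [p for n in range(1, MAX_LEN + 1)
              for p in product((0, 1), repeat=n)]

def generates(pattern, prefix):
    """Does the periodic extension of `pattern` start with `prefix`?"""
    return all(prefix[i] == pattern[i % len(pattern)]
               for i in range(len(prefix)))

def predict(prefix):
    """Weighted vote of surviving hypotheses; None means 'I don't know'."""
    w0 = w1 = 0.0
    for p in HYPOTHESES:
        if generates(p, prefix):
            w = 2.0 ** (-len(p))
            if p[len(prefix) % len(p)] == 0:
                w0 += w
            else:
                w1 += w
    if w0 == w1:
        return None
    return 0 if w0 > w1 else 1

def unexpected_sequence(n):
    """Diagonalize: always emit the bit the predictor did NOT guess."""
    seq = []
    for _ in range(n):
        guess = predict(seq)
        seq.append(0 if guess is None else 1 - guess)
    return seq

# unexpected_sequence(3) -> [0, 1, 1], matching the "011" reasoning above
```

Even this crude predictor reproduces the 0, 01, 011 start: the empty prefix is a perfect tie by symmetry (every pattern has an equal-weight complement voting the other way), and after that the shortest consistent pattern dominates the vote.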

The exact sequence depends on how exactly the prediction machine defines the "algorithms that generate the sequence of bits" (the technical details of the language these algorithms are written in), but can something still be said about these "most unexpected" sequences in general? My guess is that to a human observer they would seem like random noise. -- Which contradicts my initial words that the sequence would *not* be what you'd expect... but I guess the answer is that the generation process is trying to surprise the prediction machine, not me as a human.

In order to capture your intuition that a random sequence is "unsurprising", you want the predictor to output a distribution over {0,1} — or equivalently, a subjective probability p of the next bit being 1. The predictor tries to maximize the expectation of a proper scoring rule. In that case, the maximally unexpected sequence will be random, and the probability the predictor assigns to the sequence will approach 2^{-n}.
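For concreteness, here is a minimal sketch of that setup using the log score as the (assumed) proper scoring rule; the function names are my own:

```python
import math

def log_score(p, bit):
    """Log score for reporting P(next bit = 1) = p when `bit` occurs."""
    return math.log(p if bit == 1 else 1 - p)

def adversarial_bit(p):
    """The adversary emits whichever bit the predictor considered less likely."""
    return 1 if p <= 0.5 else 0

# Reporting p = 0.5 is the predictor's safe point: both bits score log(1/2),
# so the adversary can do no worse to it than a fair coin would.
assert log_score(0.5, 0) == log_score(0.5, 1)

# Any other report does strictly worse against the adversary:
assert log_score(0.3, adversarial_bit(0.3)) < log_score(0.5, 0)
```

Against such an adversary the predictor's worst-case score is maximized at p = 0.5 on every bit, which is exactly the sense in which the maximally unexpected sequence looks like a fair coin, and the probability assigned to its first n bits decays like 2^{-n}.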

Allowing the predictor to output {0, 1, ?} is kind of like restricting its outputs to {0%, 50%, 100%}.

I'm not familiar with Kim's touch game, but I did run across a different game that someone came up with to practice applying Bayes' theorem. It involved touching people on the shoulder with one of two objects (one was a coat-hanger, if memory serves), and then testing their ability to update their predictions based on more information about the object they were touched with. I wish I could find that page again, but I haven't been able to. It might even have been linked from a LessWrong meetup group.

I'm less concerned with getting people to apply Bayes' theorem (which would be GREAT, mind) than I am with getting people to be more comfortable with collecting information, sharing observations, and not getting fixated on personal theories. I'd especially like them to get comfortable making the jump to reasoned predictions about hidden properties of objects, given their theories about what an object is, but I'd like to find a way to make that process at least as fun as shaking a box to determine its contents.

The first game was just a deck of cards and a cardboard box large enough to allow the object to be flipped (though one type of flip was not possible in certain orientations). The players were all adults and I consider them to be quite astute; I expect that children could also play, and it could be a useful lesson about how to not get fixated on your own ideas, how to incorporate observations from others, and how to share observations constructively.

The idea came up in a discussion with a friend about how terrible our science classes had been as children, and how learning individual facts was not particularly useful.

Critch's Really Getting Bayes game.

On Monday, I have to decide whether to eat ice cream or not. After a little thought, I decide it's all right to eat ice cream because <reason>. The reason is a lame reason that my mind came up with because it really wants ice cream.

On Tuesday I only eat ice cream with 50% probability.

I started my first full-time job as a software engineer in the Bay Area after following Alexei's advice.

I got engaged to an awesome person as a result of this post.

Thank you so much for writing this. I will be in a similar situation in a couple months.

It worked! I now have a similar job to Alexei's.


I'm an EA and interested in signing up for cryonics. After cryocrastinating for a few years (ok I guess I don't get to say "after" until I actually sign up), I've realized that I should definitely sign up for life insurance, because of the ability to change the beneficiary. I place a low probability on cryonics working right now, but I can claim a charity or a Donor Advised Fund as the beneficiary until I place a sufficient probability on suspension technology working. In the future, I can change it back if I change my mind, etc.

Any issues that might come into play here? If no one sees any flaws, I'm committing to sign up for life insurance with this plan in mind by or during the next open thread, and to make a more prominent post about this strategy for any EA+Cryonics people.

It might be worth looking into which life insurance companies are friendly to cryonics.