To answer your first bullet: Solomonoff induction has many hypotheses. One class of hypotheses would continue predicting bits in accordance with what the first camera sees, and another class of hypotheses would continue predicting bits in accordance with what the second camera sees. (And there would be other hypotheses as well in neither class.) Both classes would get roughly equal probability, unless one of the cameras was somehow easier to specify than the other. For example, if there was a gigantic arrow of solid iron pointing at one camera, then maybe it would be easier to specify that one, and so it would get more probability. Bostrom discusses this a bit in Anthropic Bias, IIRC.
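To make the "roughly equal probability unless one is easier to specify" point concrete, here's a toy sketch. This is not real Solomonoff induction (which is uncomputable), and the program lengths below are made-up numbers; the only point is that a hypothesis class's prior mass is a sum of 2^(-length) terms, so shaving bits off the descriptions of one camera multiplies that class's mass.

```python
# Toy illustration (not real Solomonoff induction, which is uncomputable):
# each hypothesis gets prior weight 2^(-program length), and the probability
# of a class of hypotheses is the sum of the weights of its members.

def prior_weight(program_length: int) -> float:
    """Universal-prior-style weight for a program of the given bit length."""
    return 2.0 ** (-program_length)

def class_mass(program_lengths) -> float:
    """Total prior mass of a hypothesis class, given its members' lengths."""
    return sum(prior_weight(n) for n in program_lengths)

# If the shortest programs pointing at camera 1 and at camera 2 are equally
# long, the two classes get equal mass.
camera1 = class_mass([100, 103, 107])
camera2 = class_mass([100, 103, 107])
assert camera1 == camera2

# A giant iron arrow makes camera 1 cheaper to specify (say, 2 bits cheaper
# per hypothesis), so the camera-1 class gets 4x the mass of the camera-2
# class.
camera1_with_arrow = class_mass([98, 101, 105])
assert abs(camera1_with_arrow / camera2 - 4.0) < 1e-9
```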
To answer your second bullet: Yep. To reason about Solomonoff Induction properly we need to think about what the simplest "psychophysical laws" are, since they are what SI will be using to make predictions given the physics-simulation. And depending on what they are, various transformations of the camera may or may not be supported. Plausibly, when a camera is destroyed and rebuilt with functionally similar materials, the sorts of psychophysical laws which say "you survive the process" will be more complex than the sorts which say you don't. If so, SI would predict the end of its perceptual sequence. (Of course, after the transformation, you'd have a system which continued to use SI. So it would update away from those psychophysical laws that (in its view) just made an erroneous prediction.)
To answer your third question: For SI, there is only one rule: simpler is better. So, notice that you are not sure how to classify what counts as "drastic." Insofar as that distinction turns out to be hard to specify, it's a distinction SI would not make use of. So it may well be that a rock falling on a camera would be predicted to result in doom, but it may not. It depends on what the overall simplest psychophysical laws are. (Of course, they also have to be consistent with data so far -- so presumably lots of really simple psychophysical laws have already been ruled out by our data, and any real-world SI agent would have an "infancy period" where it is busy ruling out elegant, simple, and wrong hypotheses -- hypotheses which are so wrong that they basically make it flail around like a human baby.)
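Here's a toy sketch of that "infancy period," under heavy assumptions: instead of all programs, the hypothesis class is just repeating bit patterns of period up to 4, with description length taken to be the period (so a pattern of period p gets weight 2^(-p)). Early on the maximally simple hypotheses dominate the prediction, and incoming data then rules them out:

```python
from itertools import product

# Toy Solomonoff-style predictor over a drastically restricted hypothesis
# class: each hypothesis is a repeating bit pattern of period p, weighted
# 2^(-p). Real Solomonoff induction ranges over all programs and is
# uncomputable; this only illustrates simple-but-wrong hypotheses getting
# ruled out by data.

def hypotheses(max_period):
    for p in range(1, max_period + 1):
        for pattern in product([0, 1], repeat=p):
            yield pattern

def consistent(pattern, data):
    """Does this repeating pattern reproduce the observed bits?"""
    return all(bit == pattern[i % len(pattern)] for i, bit in enumerate(data))

def predict(data, max_period=4):
    """Posterior-weighted probability that the next bit is 1."""
    mass_1 = mass_total = 0.0
    for pattern in hypotheses(max_period):
        if consistent(pattern, data):
            w = 2.0 ** (-len(pattern))
            mass_total += w
            if pattern[len(data) % len(pattern)] == 1:
                mass_1 += w
    return mass_1 / mass_total

# Early on, the maximally simple "all zeros" hypothesis dominates:
print(predict([0, 0]))        # 0.2 -- well below 0.5
# More data from an alternating source rules it out entirely; the surviving
# hypotheses all agree the next bit is 0:
print(predict([0, 1, 0, 1]))  # 0.0
```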
Those are my answers at least, I'd be interested to hear if anyone disagrees.
FWIW, I am excited to hear Carl was thinking about this in 2012; I ended up having similar thoughts independently a few years ago. (My version: Solomonoff Induction is solipsistic phenomenal idealism.)
My version: Solomonoff Induction is solipsistic phenomenal idealism.
I don't understand what this means (even searching for "phenomenal idealism" yields very few results on Google, and none that look especially relevant). Have you written up your version anywhere, or do you have a link explaining what "solipsistic phenomenal idealism" or "phenomenal idealism" mean? (I understand solipsism and idealism already; I just don't know how they combine and what work the "phenomenal" part is doing.)
I wrote about a closely related issue (more directly about human developmental psychology / cognitive science than Solomonoff induction) here.
Thanks, that's definitely related. I had actually read that post when it was first published, but didn't quite understand it. Rereading the post, I feel like I understand it much better now, and I appreciate having the connection pointed out.
This is highly related to UDASSA. In the linked post, see especially Problem #2 (about splitting conscious computers) and bits of Problem #3 (e.g. "What happens if we apply UDASSA to a quantum universe? For one, the existence of an observer within the universe doesn't say anything about conscious experience. We need to specify an algorithm for extracting a description of that observer from a description of the universe"...)
Lanrian's mention of UDASSA made me search for discussions of UDASSA again, and in the process I found Hal Finney's 2005 post "Observer-Moment Measure from Universe Measure", which seems to be describing UDASSA (though it doesn't mention UDASSA by name); it's the clearest discussion I've seen so far, and goes into detail about how the part that "reads off" the camera inputs from the physical world works.
I also found this post by Wei Dai, which seems to be where UDASSA was first proposed.
So right before the camera is duplicated, Solomonoff induction "knows" that it will be in just one of the cameras soon, but doesn't know which one.
It sounds like it'd "know" that it will be both, separately.
I'm not sure I understand. The bit sequence that Solomonoff induction receives (after the point where the camera is duplicated) will either contain the camera inputs for just one camera, or it will contain camera inputs for both cameras. (There are also other possibilities, like maybe the inputs will just be blank.) I explained why I think it will just be the camera inputs for one camera rather than two (namely, tracking the locations of two cameras requires a longer program). Do you have an explanation of why "both, separately" is more likely? (I'm assuming that "both, separately" is the same thing as the bit sequence containing camera inputs for both cameras. If not, please clarify what you mean by "both, separately".)
My disagreement was terminological, not conceptual.
There is a teleporter. You step into part A, disappear, and step out of both part B and part C separately. There are now two of you. These two do not possess any special telepathy or connection, but both are you, and you may care about the outcomes for both before you step into the teleporter; this may affect whether you choose to do so.
Duplication is not a process where you will end up as 'one of the two, but unclear which'. Duplication is a process where you become two entities which are not changed by the process. You become not one, but "both, separately." The 'separation' means that the two do not share observations directly with each other (though an object entering the same room as both could be seen by both from different angles).
I consider this to be a flaw in AIXI-type designs. To actually make sense, these designs need hypercomputation, and so have to guess at what rules allow the hypercomputation to interact with the normal universe. I have a rough idea of some kind of FDT-ish agent that could solve this, but I can't formalize it.
I might have misunderstood your comment, but it sounds like you're saying that Solomonoff induction isn't naturalized/embedded, and that this is a problem (sort of like in this post). If so, I'm fine with that, and the point of my question was more like, "given this flawed-but-interesting model (Solomonoff induction), what does it say about this question that I'm interested in (consciousness)?"
We can make Solomonoff induction believe all sorts of screwy things about consciousness. Take a few trillion identical computers running similar computations. Put something really special and unique next to one of the cases, say a micro black hole. Run Solomonoff induction on all the computers, each with different input. Each inductor simulates the universe and has to know its own position in order to predict its input. The one next to the black hole can most easily locate itself as "the computer next to the black hole"; if the black hole is moved, it will believe its consciousness resides in "the computer next to the black hole" and predict accordingly.
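A rough back-of-the-envelope sketch of why the unique marker matters (the 30-bit cost of "the computer next to the black hole" is a made-up number, not a real Kolmogorov complexity):

```python
import math

# Toy description-length comparison. Naming one computer among N identical
# ones by raw index costs about log2(N) bits, whereas "the computer next to
# the unique black hole" costs only the fixed length of that description,
# independent of N. (The 30-bit figure is a hypothetical stand-in.)

N = 3_000_000_000_000            # a few trillion identical computers
index_cost = math.log2(N)        # ~41.4 bits to name an index outright
marker_cost = 30                 # assumed cost of "next to the black hole"

# The marker-based description is shorter, so the inductor next to the black
# hole puts most of its probability on hypotheses that track the marker --
# and keeps doing so even if the black hole is moved to another computer.
assert marker_cost < index_cost
```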
Back in 2012, in a thread on LW, Carl Shulman wrote a couple of comments connecting Solomonoff induction to brain duplication, epiphenomenalism, functionalism, David Chalmers's "psychophysical laws", and other ideas in consciousness.
The first comment says:
The second comment says:
Carl's comments pose the questions and highlight the connection, but they don't answer those questions. I would be interested in references to other places discussing this idea, or in answers to these questions.
Here are some of my own confused thoughts (I'm still trying to learn algorithmic information theory, so I would appreciate hearing any corrections):