The characters in the movie take a lot of precautions to isolate themselves from their time-clones, which means they don't actually know whether a future self got out of the box at the start of the loop. They just have faith in the plan and jump in the box at the end of the loop. So long as they don't create any obvious paradoxes ("break symmetry", as they call it), everything works out from their perspective, and they can assume it's consistent-timeline travel rather than branching, so they don't think they're creating a timeline in which they mysteriously vanish.
When they start creating paradoxes, of course, they should realize that something is off. That they don't think about it this way fits with the characters' general self-centeredness, though.
I agree that it makes sense to think of this probabilistically, but we can also think of it as just all timelines existing. I'm happy to excuse the events of the movie as showing one particularly interesting timeline out of the many. It makes sense that the lens of the film isn't super interested in the timelines which end up lacking one of the viewpoint characters.
If we do think of it probabilistically, though, are the events of the movie so improbable that we should reject them? By my thinking, the movie still fits well with the probabilistic view. Depending on how you think the probabilities should work out, that first timeline where the person just vanishes seems low-probability, particularly if they create a relatively consistent time-loop. In a simple consistent loop, only the original branch has them vanish; every other branch looks like an internally consistent timeline and spawns another just like itself. So the probability of a timeline like "one Abe, then two Abes, then back to one" seems high, if Abe is careful to avoid paradoxes. With paradoxes, the high-probability timelines get chaotic, which is what we see in the movie (and in the comic I linked).
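To make the counting concrete, here's a toy sketch in Python (my own model of a single trip repeated once per branch, not anything the film specifies):

```python
# Toy branching model (my own assumption, not anything stated in the film):
# every branch starts with one "native" Abe; each branch after the original
# also receives an arriving Abe from the previous branch, and its native Abe
# departs into the box at the end of the loop.

def abes_in_branch(branch: int) -> tuple[int, int]:
    arrivals = 0 if branch == 0 else 1   # only the original branch has no visitor
    during_loop = 1 + arrivals           # native Abe plus any arrival
    after_loop = during_loop - 1         # the native Abe gets in the box
    return during_loop, after_loop

for branch in range(5):
    print(branch, abes_in_branch(branch))
# branch 0 prints (1, 0): the lone timeline where Abe simply vanishes.
# branches 1+ print (2, 1): "one Abe, then two Abes, then back to one", repeated.
```

Under, say, a uniform measure over the first N branches, the vanishing timeline has probability 1/N; that's the sense in which it looks low-probability (the choice of measure is my own assumption, not something the movie pins down).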
I'm not sure why you say it's hard to explain with branching timelines. To me this is just branching timelines. The movie voiceover states at one point that the last version of events seems to be the one that holds true, meaning that you see the last branching timeline, usually the one with the most Bobs. I don't think you have to believe this part of the voiceover, though; it's just the opinion of someone trying to make sense of events. You could instead say that the movie has a convention of showing us later splits rather than earlier ones.
Canada is doing a big study to better understand the risks of AI, and they aren't shying away from the topic of catastrophic existential risk. This seems like good news for shifting the Overton window of political discussion about AI (in the direction of strict international regulation). I hope the media picks this up so that it isn't easy to ignore. Canada seems to be engaging with these issues competently.
This is an opportunity for those with technical knowledge of the risks of artificial intelligence to speak up. Making such knowledge legible to politicians and the general public is an important part of civilization being able to deal with AI in a sane manner. If you can state the case well, you can apply to speak to the committee:
Luc Theriault is responsible for this study taking place.
I don't think the 'victory condition' of something like this is a unilateral Canadian ban/regulation -- rather, Canada and other nations need to do something of the form "If [some list of other countries] pass [similar regulation], Canada will [some AI regulation to avoid the risks posed by superintelligence]".
Here's a relatively entertaining second hour of proceedings from 26 January:
https://youtu.be/W0qMb1qGwFw?si=EqgPSHRt_AYuGgu8&t=4123
Full videos:
https://www.youtube.com/watch?v=W0qMb1qGwFw&t=30s
https://www.youtube.com/watch?v=mow9UFdxiIw&t=30s
https://www.youtube.com/watch?v=ipMS1S5oOlg&t=19s
3. How does that handle ontology shifts? Suppose that this symbolic-to-us language would be suboptimal for compactly representing the universe. The compression process would want to use some other, more "natural" language. It would spend some bits of complexity defining it, then write the world-model in it. That language may turn out to be as alien to us as the encodings NNs use.
The cheapest way to define that natural language, however, would be via the definitions that are the simplest in terms of the symbolic-to-us language used by our complexity-estimator. This rules out definitions which would look to us like opaque black boxes, such as neural networks.
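As a gesture at the cost accounting, here's a minimal sketch (with stand-ins of my own choosing; a real complexity estimator would be more subtle). A "language" is just an encoder, and any new language is charged for the length of its definition as written out in the symbolic-to-us base language:

```python
import zlib

# Crude stand-in for encoding something in the symbolic-to-us base language.
def base_encode(data: bytes) -> bytes:
    return zlib.compress(data)

def direct_cost(world_model: bytes) -> int:
    """Write the world-model directly in the base language."""
    return len(base_encode(world_model))

def two_part_cost(world_model: bytes, language_definition: bytes, encode) -> int:
    """Define a new 'natural' language in the base language, then write the
    world-model in that new language."""
    return len(base_encode(language_definition)) + len(encode(world_model))

# The compressor only switches languages when the definition pays for itself,
# and opaque definitions (e.g. a large neural net's weights) are penalized
# because they are expensive to spell out in the base language.
```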
I note that this requires a fairly strong hypothesis: the symbolic-to-us language apparently has to be interpretable no matter what is being explained in that language. It is easy to imagine that there exist languages which are much more interpretable than neural nets (EG, English). However, it is much harder to imagine that there is a language in which all (compressible) things are interpretable.
Python might be more readable than C, but some Python programs are still going to be really hard to understand, and not only due to length. (Sometimes the terser program is the harder one to understand.)
Perhaps the claim is that such Python programs won't be encountered due to relevant properties of the universe (ie, because the universe is understandable).
Yes, I think what I've described here shares a lot with Bengio's program.
The closest we can get is a little benchmark where the models are supposed to retrieve "a needle out of a haystack". Stuff like a big story of 1 million tokens and they are supposed to retrieve a fact from it.
This isn't "the closest we can get". Needle-in-a-haystack tests seem like a sensible starting point, but testing long-context utilization in general involves synthesis of information, EG looking at a novel or series of novels and answering reading comprehension questions. There are several benchmarks of this sort, EG:
https://epoch.ai/benchmarks/fictionlivebench
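For concreteness, here's roughly the shape of a needle-in-a-haystack test case (a minimal sketch of my own, not the methodology of any particular benchmark; the passphrase and filler are made up). Synthesis-style evaluations differ in asking questions whose answers have to be pieced together from many parts of the context rather than copied from one spot:

```python
import random

FILLER = "The quick brown fox jumps over the lazy dog."
NEEDLE = "The secret passphrase is 'blue-tangerine-42'."   # hypothetical needle
QUESTION = "What is the secret passphrase mentioned in the text?"
ANSWER = "blue-tangerine-42"

def build_haystack(total_sentences: int = 10_000, seed: int = 0) -> str:
    """Bury the needle sentence at a random depth inside repetitive filler."""
    rng = random.Random(seed)
    sentences = [FILLER] * total_sentences
    sentences.insert(rng.randrange(total_sentences + 1), NEEDLE)
    return " ".join(sentences)

def score(model_answer: str) -> bool:
    """Simple exact-match retrieval check."""
    return ANSWER in model_answer

prompt = build_haystack() + "\n\nQuestion: " + QUESTION
# `prompt` would then be sent to the long-context model under evaluation,
# varying the context length and the needle's depth.
```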
This is my inclination, but a physicalist either predicts that the phenomenology would in fact change, or perhaps asserts that you're deluded about your phenomenal experience when you think the experience stays the same despite substrate shifts. My understanding of cube_flipper's position is that they anticipate that changes in the substrate would change the qualia.
From a physicalist's perspective, you're essentially making predictions based on your theory of phenomenal consciousness, and then arguing that we should already update on those predictions ahead of time, since they're so firm. I'm personally sympathetic to this line of argument, but it obviously depends on some assumptions which need to be articulated, and which the physicalist would probably not be happy to make.
Today's Inkhaven post is an edit to yesterday's, adding more examples of legitimacy-making characteristics, so I'm posting it as a shortform that I can link separately:
Here are some potential legitimacy-relevant characteristics:
Yeah, the logic still can't handle arbitrary truth-functions; it only works for continuous truth-functions. To accept this theory, one must accept that limitation. A zealous proponent of the theory might argue that it isn't a real loss, perhaps on the grounds that a perfectly precise zero isn't real anyway; it's just a model we use to understand the semantics of the logic. What I'll say is that this is a real compromise, just a lesser compromise than many other theories require. We can construct truth-functions arbitrarily close to a zero detector, and their corresponding Strengthened Liars will be arbitrarily close to false.
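As a concrete worked example (my own choice of truth-function, not one from the original discussion): take $f_\epsilon(x) = \max(0,\, 1 - x/\epsilon)$, which outputs $1$ at exactly $0$ and outputs $0$ for every $x \ge \epsilon$, so it approximates the zero detector as $\epsilon \to 0$. The Strengthened Liar built from $f_\epsilon$ is a sentence $L_\epsilon$ whose value must satisfy $v(L_\epsilon) = f_\epsilon(v(L_\epsilon))$. Since $f_\epsilon$ is continuous, a fixed point exists; solving $x = 1 - x/\epsilon$ gives $v(L_\epsilon) = \epsilon/(1+\epsilon)$, which tends to $0$ as $\epsilon \to 0$. So each approximate Liar gets a consistent truth value, and those values are arbitrarily close to false, as claimed above.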
They do deliberately try to set up an "I'll get in the box if I don't see myself get out" sort of situation in the movie, though they don't succeed, and they don't seem to realize that it would result in a 0-2-0-2-... pattern across metatime.
Good point about how permanent increases have to be as improbable as permanent decreases! I should've gotten that from what you were saying earlier. I suppose that leaves me with the "movies follow interesting timelines" theory, where it's just a convention of the film to look at the timelines where characters multiply.