3. How does that handle ontology shifts? Suppose that this symbolic-to-us language would be suboptimal for compactly representing the universe. The compression process would want to use some other, more "natural" language. It would spend some bits of complexity defining it, then write the world-model in it. That language may turn out to be as alien to us as the encodings NNs use.
The cheapest way to define that natural language, however, would be via the definitions that are the simplest in terms of the symbolic-to-us language used by our complexity-estimator. This rules out definitions which would look to us like opaque black boxes, such as neural networks.
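To make the cost accounting explicit (this is just my own shorthand for the argument above, not notation from anywhere official): writing $\ell_B$ for description length in the symbolic-to-us base language $B$, the compressor is effectively minimizing a two-part code,

$$\ell_B(\text{world}) \;\approx\; \min_{L'} \Big[\, \ell_B(\text{definition of } L') \;+\; \ell_{L'}(\text{world-model}) \,\Big],$$

so any alternative language $L'$ first has to pay for its own definition in the base language. A definition that would look to us like an opaque black box (e.g. a raw dump of neural-network weights) is expensive in that first term, which is the sense in which such definitions get ruled out.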
I note that this requires a fairly strong hypothesis: the symbolic-to-us language apparently has to be interpretable no matter what is being explained in that language. It is easy to imagine that there exist languages which are much more interpretable than neural nets (EG, English). However, it is much harder to imagine that there is a language in which all (compressible) things are interpretable.
Python might be more readable than C, but some Python programs are still going to be really hard to understand, and not only due to length. (Sometimes the terser program is the more difficult one to understand.)
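A toy illustration (my own example, not from the original comment): both of these are correct, but the terse version asks more of the reader.

```python
# Terse: power-of-two check via a bit trick. Correct, but the reader has to
# already know why n & (n - 1) clears the lowest set bit.
is_pow2 = lambda n: n > 0 and not (n & (n - 1))

# Longer and more pedestrian, but easier to follow step by step.
def is_power_of_two(n):
    if n <= 0:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1
```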
Perhaps the claim is that such Python programs won't be encountered due to relevant properties of the universe (IE, because the universe is understandable).
Yes, I think what I've described here shares a lot with Bengio's program.
The closest we can get is a little benchmark where the models are supposed to retrieve “a needle out of a haystack”. Stuff like a big story of 1 million tokens and they are supposed to retrieve a fact from it
This isn't "the closest we can get". Needle-in-a-haystack tests seem like a sensible starting point, but testing long-context utilization in general involves synthesis of information, EG looking at a novel or series of novels and answering reading comprehension questions. There are several benchmarks of this sort, EG:
https://epoch.ai/benchmarks/fictionlivebench
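For concreteness, the basic needle-in-a-haystack setup is roughly the following (a minimal sketch; the filler text, the needle, and the `ask_model` wrapper are all placeholders you'd supply):

```python
import random

def build_haystack(needle: str, filler_sentences: list[str],
                   n_sentences: int, seed: int = 0) -> str:
    """Bury a single 'needle' fact at a random position inside a lot of filler text."""
    rng = random.Random(seed)
    context = [rng.choice(filler_sentences) for _ in range(n_sentences)]
    context.insert(rng.randrange(len(context) + 1), needle)
    return " ".join(context)

def needle_score(ask_model, needle: str, question: str, expected: str,
                 filler_sentences: list[str], n_sentences: int,
                 n_trials: int = 10) -> float:
    """Fraction of trials where the model's reply contains the expected answer.

    ask_model(context, question) -> str is whatever wrapper you have around the model.
    """
    hits = 0
    for trial in range(n_trials):
        context = build_haystack(needle, filler_sentences, n_sentences, seed=trial)
        hits += int(expected.lower() in ask_model(context, question).lower())
    return hits / n_trials
```

Passing that kind of retrieval test only shows the model can find one planted fact; benchmarks like FictionLiveBench instead ask questions whose answers require synthesizing information spread across the whole context.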
This is my inclination, but a physicalist either predicts that the phenomenology would in fact change, or perhaps asserts that you're deluded about your phenomenal experience when you think that the experience is the same despite substrate shifts. My understanding of cube_flipper's position is that they anticipate that changes in the substrate would change the qualia.
From a physicalist's perspective, you're essentially making predictions based on your theory of phenomenal consciousness, and then arguing that we should already update on those predictions ahead of time, since they're so firm. I'm personally sympathetic to this line of argument, but it obviously depends on some assumptions which need to be articulated, and which the physicalist would probably not be happy to make.
Today's Inkhaven post is an edit to yesterday's, adding more examples of legitimacy-making characteristics; I'm posting it in shortform so that I can link it separately:
Here are some potential legitimacy-relevant characteristics:
Yeah, the logic still can't handle arbitrary truth-functions; it only works for continuous truth-functions. To accept this theory, one must accept this limitation. A zealous proponent of the theory might argue that it isn't a real loss, perhaps on the grounds that there isn't really a true precise zero; that's just a model we use to understand the semantics of the logic. What I'll say is that this is a real compromise, just a lesser compromise than many other theories require. We can construct truth-functions arbitrarily close to a zero detector, and their corresponding Strengthened Liar will be arbitrarily close to false.
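To make that last sentence concrete (my own worked instance; any continuous approximation behaves similarly): take the truth-function $z_\varepsilon(x) = \max(0,\, 1 - x/\varepsilon)$, which approaches the zero detector as $\varepsilon \to 0$ (it outputs $1$ at $x = 0$ and $0$ for every $x \ge \varepsilon$). The Strengthened Liar built from it asserts $z_\varepsilon$ of its own truth value, so that value $v$ has to satisfy

$$v = \max\!\big(0,\; 1 - v/\varepsilon\big).$$

Since $v = 0$ is not a solution (the right-hand side would be $1$), the fixed point sits on the linear branch: $v = 1 - v/\varepsilon$, giving $v = \varepsilon/(1+\varepsilon)$, which goes to $0$ as $\varepsilon \to 0$. The better the truth-function approximates a zero detector, the closer its Strengthened Liar gets to being simply false.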
Seems to me like both.
No disagreement with the broad statements, but I note that your words don't particularly register the point that good conversation itself might be a turn-on and the lack thereof a turn-off. IE, your post presents a puzzle: what's with the banter -> sex thing? I'm suggesting that many people might want to talk first as an inherent preference. Sure, there might be ways around that, but you weren't asking for something with no loopholes; you were asking about the banter -> sex thing.
Not really an experienced player of the relevant games, but I personally have turned down an obvious sex invitation from someone I was otherwise interested in because there had been too little conversation (and I don't regret this choice). I am not very interested in sex with someone I can't have a good conversation with. I feel like a lot of the intrigue of an intimate encounter is conversational intimacy. I've never experienced the chat-at-party -> sex pipeline, however; only [chat online for multiple months] -> sex.
Canada is doing a big study to better understand the risks of AI. They aren't shying away from the topic of catastrophic existential risk. This seems like good news for shifting the Overton window of political discussions about AI (in the direction of strict international regulations). I hope this is picked up by the media so that it isn't easy to ignore. It seems like Canada is displaying an ability to engage with these issues competently.
This is an opportunity for those with technical knowledge of the risks of artificial intelligence to speak up. Making such knowledge legible to politicians and the general public is an important part of civilization being able to deal with AI in a sane manner. If you can state the case well, you can apply to speak to the committee:
Luc Theriault is responsible for this study taking place.
I don't think the 'victory condition' of something like this is a unilateral Canadian ban/regulation -- rather, Canada and other nations need to do something of the form "If [some list of other countries] pass [similar regulation], Canada will [some AI regulation to avoid the risks posed by superintelligence]".
Here's a relatively entertaining second hour of proceedings from 26 January:
https://youtu.be/W0qMb1qGwFw?si=EqgPSHRt_AYuGgu8&t=4123
Full videos:
https://www.youtube.com/watch?v=W0qMb1qGwFw&t=30s
https://www.youtube.com/watch?v=mow9UFdxiIw&t=30s
https://www.youtube.com/watch?v=ipMS1S5oOlg&t=19s