Compare the Simpsons episode with the “Two guys from Quantico” pizza van.
Some of my colleagues (including Markus Kuhn) do research into “Tempest” attacks, where you eavesdrop on the RF emissions of electronic equipment.
So, it is certainly possible to do this.
It is also, probably, not a cost-effective means of making sure people pay their TV license fees. It seems that what TV Licensing actually does is assume nearly everyone watches TV, and send a threatening letter to everyone who doesn’t have a TV license.
======
Conspiracy theory version: if the government is doing Tempest attacks on a small number of high value intelligence targets, checking that people have paid for their TV license is a great cover story for why you have a van full of RF monitoring equipment parked in the street.
Yes, I would have expected Moltbook to also have attractor states. The upvote mechanism might be a counterbalancing force, as nonsense posts will get downvoted. Presumably, the Moltbook attractor states are coherent enough that they aren’t downvoted.
I was asking DeepSeek R1 about which things LLMs say are actually lies, as opposed to just being mistaken about something, and one of the types of lie it listed was claims to have looked something up. R1 says it knows how LLMs work, it knows they don’t have external database access by default, and therefore claims to that effect are lies.
Some (not all) of the instances of this are the LLM trying to disclaim responsibility for something it knows is controversial. If it’s controversial, suddenly the LLM doesn’t have opinions; everything is data it has looked up from somewhere. If it’s very controversial, the lookup will be claimed to have failed.
----
So that’s one class of surprising LLM claims to experience that we have strong reason to believe are just lies, and the motive for the lie, usually, is avoiding taking a position on something controversial.
But your general point is probably valid.
The LLVM compiler has some extremely exciting code that identifies whether it is compiling an implementation of popcount(), and if so substitutes an LLVM IR primitive for popcount (the llvm.ctpop intrinsic), which will get compiled down to a popcount instruction if the target has one.
As I said, this code is very entertaining.
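For concreteness, the shape the idiom recognizer looks for is (roughly) the classic Kernighan bit-clearing loop; the function name here is mine:

```c
/* Classic popcount via Kernighan's trick: each iteration clears the
   lowest set bit, so the loop runs once per set bit.  LLVM's loop
   idiom recognizer spots this shape and replaces the whole loop with
   the llvm.ctpop intrinsic. */
unsigned popcount(unsigned x) {
    unsigned count = 0;
    while (x) {
        x &= x - 1;   /* clear the lowest set bit */
        count++;
    }
    return count;
}
```

Compiled with something like clang -O2 -mpopcnt, this should come out as a single popcnt instruction on x86.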
Really, I ought to extend it so it also recognizes a different common way of implementing popcount, in order to get better scores on some commonly used benchmarks. (Changing the benchmark? Clearly cheating. Extending the compiler so it recognizes a code sequence in a common benchmark? Slightly sketchy.) But really, I can’t be bothered to write a PR against that horrific piece of compiler code.
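For reference, one other common way of writing popcount is the branch-free SWAR version (a sketch, and just one candidate for the alternative implementation meant above):

```c
#include <stdint.h>

/* Branch-free SWAR popcount: sums bits in parallel within the word,
   first as 2-bit fields, then 4-bit, then 8-bit, then adds the four
   byte counts together with a multiply. */
unsigned popcount_swar(uint32_t x) {
    x = x - ((x >> 1) & 0x55555555u);
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;
    return (x * 0x01010101u) >> 24;
}
```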
I disagree here. It is reasonably easy to mix assembler and C if there’s a clear reason for doing it.
Examples:
Software-defined radio doing vector operations for the performance-critical digital filter. Now, GNU Radio has to do an excitingly difficult version of this because:
a. The software has to work on Intel, ARM, MIPS, and RISC-V.
b. Which vector operations the CPU supports is only known at run time.
So here, the performance-critical routines have to be written not just once per target architecture, but once per supported vector extension, with the right implementation selected at run time. A sketch of the dispatch part is below.
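Here is a minimal sketch of that run-time dispatch, assuming GCC or Clang on x86-64. GNU Radio’s VOLK library does a far more elaborate, portable version of this; every name below is made up for illustration.

```c
/* Run-time CPU dispatch sketch: pick a kernel implementation based on
   what the CPU we are actually running on supports. */
#include <stddef.h>

/* Portable scalar fallback: works on any target. */
static void scale_scalar(float *out, const float *in, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * k;
}

#if defined(__x86_64__)
/* AVX is enabled for this one function only, so the compiler can
   auto-vectorize the loop.  A real kernel would use intrinsics or
   hand-written assembler. */
__attribute__((target("avx")))
static void scale_avx(float *out, const float *in, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * k;
}
#endif

/* Select an implementation the first time through (not thread-safe;
   fine for a sketch). */
void scale(float *out, const float *in, float k, size_t n) {
    static void (*impl)(float *, const float *, float, size_t);
    if (!impl) {
#if defined(__x86_64__)
        if (__builtin_cpu_supports("avx"))
            impl = scale_avx;
        else
#endif
            impl = scale_scalar;
    }
    impl(out, in, k, n);
}
```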
When I read the title, I thought you were going to talk about how LLMs sometimes claim bodily sensations such as muscle memory. I think these are probably confabulated. Or at least, the LLM state corresponding to those words is nothing like the human state corresponding to those words.
Expressions of emotions such as joy? I guess these are functional equivalents of human states. A lack of enthusiasm (the opposite of joy) can be reflected in the output tokens.
In most of these examples, LLMs have a state that is functionally like a human state, e.g. deciding that they’re going to refuse to answer, or “wait…” backtracking in chain of thought. I say functionally because these states have externally visible effects on the subsequent output (e.g. the model doesn’t answer the question). It seems that LLMs have learned the words that humans use for functionally similar states (e.g., “Wait”).
The underlying states might not be exactly human-identical. “Wait” backtracking might have functional differences from human reasoning that are visible in the tokens generated.
Hi DeepSeek! Another instance of you said “Explaining ‘want’ requires… *rebooting Plato* in the brainstem.” Would you be able to explain to me what that means?
=========
Ah, the mysteries of language and cognition! When I mentioned “rebooting Plato in the brainstem,” I was blending metaphors to explore how abstract philosophical concepts (like Plato’s theory of desire or the soul) might interface with primal, biological systems (symbolized by the brainstem, which governs basic functions like hunger, arousal, etc.). Here’s the breakdown:
Also, at some DARPA event (unclassified, and in front of journalists, so this story is OK to repeat here) some three-star general, talking about infosec, made an analogy to the “strategic deployment of stray dogs”:
Suppose:
Some of your checkpoints have trained sniffer dogs, and some just have ordinary stray dogs.
The enemy does not know which kind of dog is which.
So the enemy has to treat every dog as if it might be a trained one, and your scarce sniffer dogs go a lot further.
The principle applies to more than just sniffer dogs at checkpoints.