Computer scientist, applied mathematician. Based in the eastern part of England.
Fan of control theory in general and Perceptual Control Theory in particular. Everyone should know about these, whatever attitude to them they subsequently reach. These, plus consciousness of abstraction, dissolve a great many confusions.
I created the Insanity Wolf Sanity Test. There it is; work out for yourself what it means.
Change ringer since 2022. It teaches learning and grasping abstract patterns, memory, thinking with your body, thinking on your feet, fixing problems and moving on, always looking to the future and letting go of both the errors and successes of the past.
As of April 2025, I have yet to have a use for LLMs. (If this date is more than six months old, feel free to remind me to update it.)
I get off the train halfway through section I, if I ever got on it at all. I see the end in the beginning, and if I do not want the end, I turn away from the beginning. There is any amount of "interesting" stuff on Facebook, and Substack, and even LessWrong and ordinary news sources, that I routinely pass by.
Miro is a zombie from the start. Do not be a zombie.
Noether's theorem is an actual theorem. You will find it formulated and proved in textbooks on mathematical physics, and indeed it was formulated and proved in its original publication.
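For comparison, here is one common textbook form of the statement (restricted to point transformations, which is the simplest case; more general versions exist):

```latex
% One textbook form of Noether's theorem:
% if the Lagrangian $L(q,\dot q,t)$ is invariant under the
% one-parameter family of transformations
% $q_i \mapsto q_i + \epsilon K_i(q)$,
% then along every solution of the Euler--Lagrange equations
% the quantity
\[
  J \;=\; \sum_i \frac{\partial L}{\partial \dot q_i}\, K_i(q)
\]
% is a constant of the motion: $\frac{dJ}{dt} = 0$.
```

Hypotheses, conclusion, proof: that is the shape of text I mean.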
Is the Free Energy Principle a theorem? I have spent a good deal of time studying the FEP primary sources, trying to grasp the mathematics, but I have not yet found the sort of text that I described above. This paper, from its title, is where I would expect to find what I am looking for, but there is little mathematical argument there, not even a mention of the Langevin equations, Fokker-Planck equations, and Non-Equilibrium Steady States that some other sources go into. Instead, the FEP is formulated verbally as e.g. "all the quantities that can change; i.e., that are owned by the system, will change to minimise free energy".
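For reference, the standard objects that those other sources invoke are easy enough to write down; a theorem-shaped FEP would presumably be a precise statement about systems of this kind, with stated hypotheses and a proved conclusion:

```latex
% Langevin dynamics: a flow $f$ plus Gaussian white noise $\omega$
% with covariance $2\Gamma$:
\[
  \dot x \;=\; f(x) + \omega, \qquad
  \langle \omega(t)\,\omega(t')^{\top} \rangle \;=\; 2\Gamma\,\delta(t-t')
\]
% The corresponding Fokker--Planck equation for the density $p(x,t)$:
\[
  \partial_t p \;=\; \nabla \cdot \bigl( \Gamma \nabla p \;-\; f\,p \bigr)
\]
% A Non-Equilibrium Steady State is a density $p^{*}$ with
% $\partial_t p^{*} = 0$ while probability currents need not vanish.
```

What I have not found is the step from these definitions to the verbal claim, carried out with the rigour that the word "theorem" would demand.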
Has anyone written the sort of textbook exposition for the FEP that is routine for things like Noether's theorem? Or if the FEP is a different sort of thing, what sort of thing is it, that is "unfalsifiable" yet not a mathematical truth?
"Noise" suggests randomness, which is what it means when talking about transmission lines and radio reception and s/n ratios. The intention of "din" seems to be more "something that at first glance looks like evidence for a thing but on closer inspection is seen not to be causally entangled with it", as in Olli Järviniemi's example. Kodo is what is truly entangled with the thing.
These are not things to be put in a ratio like the radio engineer's signal and noise, but to be separated from each other and the din dismissed as irrelevant to the question at hand.
Sure, never give up, die with dignity if it comes to that. None of that translates into a budget. Concrete plans translate into a budget.
So if no one else knew how to counter drone swarms, and every defence they experimented with got obliterated by drone swarms,
…then by hypothesis, you’re screwed. But you’re making up this scenario, and this is where you’ve brought the imaginary protagonists to. You’re denying them a solution, while insisting they should spend money on a solution.
Suppose you had literally no ideas at all how to counter drone swarms, and you were really bad at judging other people's ideas for countering drone swarms.
In that case, I would be unqualified to do anything, and I would be wondering how I got into a position where people were asking me for advice. If I couldn’t pass the buck to someone competent, I’d look for competent people, get their recommendations, try as best I could to judge them, and turn on the money tap accordingly. But I can’t wave a magic wand so that where there was a pile of money there is now a pile of anti-drone technology.
Neither can anyone in AI alignment.
Money isn’t magic. It’s nothing more than the slack in the system of exchange. You have to start from some idea of what the work is that needs to happen. That seems to me to be lacking. Are there any other proposals on the table against doom but “shut it all down”?
If you spend 8000 times less on AI alignment (compared to the military), you must also believe that AI risk is 8000 times less (than military risk).
Why?
We know how to effectively spend money on the military: get more of what we have and do R&D to make better stuff. The only limit on effective military spending is all the other things that the money is needed for, i.e. having a country worth defending.
It is not clear to me how to buy AI safety. Money is useless without something to spend it on. What would you buy with your suggested level of funding?
Sounds more like Zuckerberg's vision.