Computer scientist, applied mathematician. Based in the eastern part of England.
Fan of control theory in general and Perceptual Control Theory in particular. Everyone should know about these, whatever attitude to them they eventually reach. These, plus consciousness of abstraction, dissolve a great many confusions.
I wrote the Insanity Wolf Sanity Test. There it is; work out for yourself what it means.
Change ringer since 2022. It teaches grasping abstract patterns, memory, thinking with your body, thinking on your feet, fixing problems and moving on, always looking to the future and letting go of both the errors and the successes of the past.
I first found an LLM useful (other than for answering the question "let's see how well the dog can walk on its hind legs") in September 2025. As yet they do not form a regular part of anything I do.
Do yourself a favor, go buy a pair of cheap (~40 euros or so) wide toebox shoes
But where? I always have a problem finding shoes with a toe-to-heel width ratio to match my feet, but I've never seen anything advertised for that feature. From experience I've learned a few brands that are more likely to be suitable, and a few that aren't even worth picking off the shelf (one of which is Adidas), but I can spend a day going to half a dozen shoe shops (online is out of the question) and find nothing.
there's obviously no way Sam would say something like "consciousness is causally disconnected from the rest of the universe, and can never influence future decision making processes in the brain."
But what does he say? I have not read him and I'm not about to (he does not by a long way make my own very short list of "people who are pretty much always right"), but the extracts in the OP sound epiphenomenalistic. Consciousness in his account only observes, never acts. It could be cut off without affecting the organism. This is as incoherent an idea as p-zombies.
I'm guessing that this fiction is based on the actual Clawdbot? Having downloaded and skimmed through (but not run) its installation script, and read some of the user feedback, I still can't tell if the site is a real thing or performance art.
There are some anomalies in the chapter numbering:
Part IV ends with Chapter 18; Part V begins with Chapter 21.
Part V ends with Chapter 23; Part VI begins with Chapter 27.
Part VI jumps from Chapter 28 to Chapter 31.
Part VII omits Chapter 33.
Are these just trifling errors, or indications of deliberate omissions by the writer, negative spaces for those with eyes to see?
The first does not work at all. “Personal experience” is not a level of confidence, and does not imply any particular level of confidence.
Words have meanings. Confidence level is about my conclusion; epistemic status is about how I got there. These are different things, and the terms should be used accordingly.
I find epistemic status far more informative than confidence level. What can I do with someone’s “70%”? Nothing.
How large is the largest quantum coherent object possible? Unknown. The limit seems set by decoherence: thermal radiation, environmental interactions, the difficulty of maintaining phase relationships across distance. But there’s no in-principle upper limit.
How about 22 micrograms, the Planck mass? Epistemic status: idle speculation. There's this absolute mass that falls out of the fundamental constants; it must mean something.
ETA: I recently saw a report of the latest record in putting a large blob of atoms into a superposition state: 7000 sodium atoms. That's a total mass of about 1.6×10⁵ amu ≈ 2.7×10⁻¹⁹ grams ≈ 1.2×10⁻¹⁴ Planck masses. Only about 14 orders of magnitude to go before discovering whether there's anything in that speculation.
A Planck mass is about the mass of a medium-sized grain of sand.
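For the record, here is the arithmetic behind the comparison, using standard values for the constants (the 7000-atom figure is from the report mentioned above):

```python
# Worked arithmetic: 7000 sodium atoms expressed in Planck masses.
AMU_G = 1.66054e-24          # atomic mass unit in grams (CODATA value)
NA_AMU = 22.990              # atomic mass of sodium-23 in amu
PLANCK_MASS_G = 2.17643e-5   # Planck mass in grams (~21.8 micrograms)

atoms = 7000
mass_amu = atoms * NA_AMU             # ~1.6e5 amu
mass_g = mass_amu * AMU_G             # ~2.7e-19 g
mass_planck = mass_g / PLANCK_MASS_G  # ~1.2e-14 Planck masses

print(f"{mass_amu:.3g} amu = {mass_g:.3g} g = {mass_planck:.3g} Planck masses")
```

So the gap to a Planck-mass superposition is about 14 orders of magnitude in mass.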
It does, but very wastefully. Almost all of the avoidance manoeuvres you make will be unnecessary, and some will even cause a collision, but you will not know which ones. Further modelling (that I think would be belabouring the point to do) would allow a plot of how a decision rule for manoeuvring reduces the probability of collisions.
Let F(p) be the frequency of collisions given tracking precision p and some rule for manoeuvres. Let F₀ be the frequency of collisions without manoeuvres. Define effectiveness to be 1 − F(p)/F₀.
I would expect effectiveness to approach 1 for perfect tracking (and a sensible decision rule) and decline towards 0 as the precision gets worse.
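A toy Monte-Carlo sketch of what such modelling might look like. Everything here is my own assumed setup, not anything from a real tracking system: miss distances uniform on an interval, Gaussian tracking noise, and a fixed dodge-if-it-looks-close decision rule. Effectiveness is computed as 1 minus the ratio of collision frequencies with and without manoeuvres.

```python
import random

def collision_effectiveness(sigma, trials=100_000, seed=0):
    """Toy 1-D model. An object's true miss distance is uniform on
    [-10, 10]; a collision occurs if |miss| < 1. The tracker sees the
    miss distance plus Gaussian noise of s.d. sigma. Decision rule
    (assumed): if the predicted miss is within 2 units, dodge 3 units
    away from the side the threat appears to be on. Returns
    effectiveness = 1 - f/f0, where f0 is the collision frequency
    with no manoeuvres and f the frequency with them."""
    rng = random.Random(seed)
    hits_with, hits_without = 0, 0
    for _ in range(trials):
        miss = rng.uniform(-10, 10)
        if abs(miss) < 1:
            hits_without += 1
        observed = miss + rng.gauss(0, sigma)
        if abs(observed) < 2:
            # Dodge away from the apparent side; with noisy tracking
            # this sometimes turns a safe pass into a collision.
            miss_after = miss + (3 if observed < 0 else -3)
        else:
            miss_after = miss
        if abs(miss_after) < 1:
            hits_with += 1
    f0 = hits_without / trials
    f = hits_with / trials
    return 1 - f / f0
```

With perfect tracking (sigma = 0) every genuine threat is dodged and effectiveness comes out at 1; as sigma grows, the rule fires mostly on false alarms, occasionally dodges into a collision, and effectiveness falls off towards 0, as expected.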
Do you have a positive vision for the future? For your own future?