Moltbook for misalignment research?
I don't think so – my CLAUDE.md is fairly short (23 lines of text) and consists mostly of code style comments. I also have one skill set up for using Julia via a REPL. But I don't think either of these would result in more disagreement/correction.
I've used Claude Code in mostly the same way since 4.0, usually either iteratively making detailed plans and then asking it to check off todos one at a time, or saying "here's a bug, here's how to reproduce it, figure out what's going on."
I also tend to write/speak with a lot of hedging, so that might make Claude more likely to assume my instructions are wrong.
I move data around and crunch numbers at a quant hedge fund. Some aspects of our work normally make it somewhat resistant to LLMs: we use a niche language (Julia) and a custom framework. Typically, when writing framework-related code, I've given Claude Code very specific instructions, and it's followed them to the letter, even when those happened to be wrong.
In 4.6, Claude seems to finally "get" the framework, searching the codebase to understand its internals (as opposed to just understanding similar examples) and has given me corrections or pushback – e.g. it warned me (correctly) about cases where I had an unacceptably high chance of hash collisions, and said something like "no, the bug isn't X, it's Y" (again correctly) when I was debugging.
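For the curious, the hash-collision check is just the birthday bound: hashing n keys into m possible values gives a collision probability of roughly 1 - exp(-n(n-1)/2m). A quick Julia sketch (the key counts here are made up for illustration, not from my actual codebase):

```julia
# Birthday bound: probability of at least one collision when
# hashing n keys uniformly into m possible hash values.
collision_prob(n, m) = 1 - exp(-n * (n - 1) / (2 * m))

collision_prob(1_000_000, 2.0^32)  # ≈ 1.0 (32-bit hashes: near-certain collision)
collision_prob(1_000_000, 2.0^64)  # ≈ 2.7e-8 (64-bit hashes: comfortably safe)
```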
Relatedly, get good at the things that you're hiring for. It's possible to tell if somebody is about twice as good as you are at something. It's very hard to tell the difference between twice as skilled and ten times as skilled. So if you need to hire people who are very good at something, you need to get at least decently good at it yourself.
This also has a strange corollary. It often makes sense to hire people for the things that you're good at and to keep doing the things that you're mediocre at.
After my daughter got covid (at 4 months old), she was only sleeping for about an hour at a time, which was really rough on us and her – we were all constantly exhausted. It took just two days of cry it out to get her back to sleeping much better, and then she was noticeably happier and more energetic (and so were we.)
Donated $2.5k. Thanks for everything!
I tried to search for surveys of mathematicians on the axiom of choice, but couldn't find any. I did find one survey of philosophers, but that's a very different population, and it asked whether they believed AC/the Continuum Hypothesis has an answer, rather than what the answer is: https://thephilosophyforum.com/discussion/13670/the-2020-philpapers-survey
My subjective impression is that my mathematician friends would mostly say that asking whether AC is true or not is not really an interesting question, while asking which statements depend on it (e.g. Zorn's lemma, or "every vector space has a basis") is.
Use random spot-checks
This is really, really hard to internalize. The default is to pay uniformly less attention to everything, e.g. switching to skimming every PR rather than randomly reviewing a few in detail. But that default means you lose a valuable feedback loop, while spot-checking even 10% sustains it.
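Mechanically, this can be as simple as a per-PR coin flip. A Julia sketch (the function name and the 10% rate are mine, purely illustrative):

```julia
using Random

# Review a fixed fraction of PRs in detail instead of skimming all of them.
# Seeding by PR id makes the choice reproducible, so you can't talk yourself
# out of a deep review after glancing at the diff.
needs_detailed_review(pr_id::Integer; rate = 0.10) = rand(Xoshiro(pr_id)) < rate

filter(needs_detailed_review, 1:50)  # the ~10% of PRs 1..50 to review in depth
```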
If you believe that the UHC CEO knowingly pushed a model with a 90% error rate, programmed to almost always (illegally, incorrectly) deny health care coverage to people who were less likely to sue, then "innocent" is a big overstatement. That's pretty close to murdering people for money.
Similarly, I don't think you could claim that the executives who launched the Ford Pinto knowing about its fuel-tank fire risk were innocent.
The UHC nhPredict lawsuit has not been resolved yet, and I haven't done enough research to be confident one way or the other. But my point is that the crux is more "are current billionaires actively getting people killed for money?" than "is it ok to kill innocent people because they're rich?"