Here is a model of mine that seems related.
Epistemic status: I have used this successfully in the past and found it helpful. It is relatively easy to do, and the benefit is large for me.
I think it is helpful to be able to emotionally detach yourself from your ideas. There is an implicit "concept of I" in our minds. When somebody criticizes this "concept of I", it is painful. If somebody says "You suck", that hurts.
There is an implicit assumption in the mind that this concept of "I" is eternal. This has the effect that when somebody says "You suck", it is actually more like they are saying "You sucked in the past, you suck now, and you will suck, always and forever".
In order to emotionally detach yourself from your ideas, you need to sever the links in your mind between your ideas and this "concept of I". You need to see an idea as an object that is not related to you. Don't see it as "your idea", but just as an idea.
It might help to imagine that there is an idea-generation machine in your brain. That machine makes ideas magically appear in your perception as thoughts. Normally when somebody says "Your idea is dumb", you feel hurt. But now we can translate "Your idea is dumb" to "There is idea-generating machinery in my brain. This machinery has produced some output. Somebody says this output is dumb".
Instead of feeling hurt, you can think "Hmm, the idea-generating machinery in my brain produced an idea that this person thinks is bad. Well, maybe they don't understand my idea yet, and they are criticizing their idea of my idea, not my actual idea. How can I make them understand?" This thought is a lot harder to have while you are busy feeling hurt.
Or "Hmm, this person that I think is very competent thinks this idea is bad, and after thinking about it I agree that this idea is bad. Now how can I change the idea-generating machinery in my brain, such that in the future I will have better ideas?" That thought is a lot harder to have when you think that you yourself are the problem. What is that even supposed to mean that you yourself are the problem? This might not be a meaningful statement, but it is the default interpretation when somebody criticizes you.
The basic idea here is to frame everything without any reference to yourself. It is not me producing a bad plan, but some mechanism whose output I just happened to observe. In my experience, this not only helps alleviate pain but also leads to more useful thoughts.
Here is what I would do in the hypothetical scenario where I have taken over the world.
Though this is what I would do in any situation really. It is what I am doing right now. This is what I breathe for, and I won't stop until I am dead.
[EDIT 2023-03-01_17-59: I have recently realized that this is just how one part of my mind feels, the part that feels like me. However, there are tons of other parts in my mind that pull me in different directions. For example, there is one part that wants me to make lots of random improvements to my computer setup, which are fun to do but probably not worth the effort. I have been ignoring these parts in the past, and I think their grip on me is stronger because I did not take them into account appropriately in my plans.]
For a long time I didn't use folders to organize my notes. I somehow bought the idea that your notes should be an associative knowledge base that is linked together. I also somehow bought that tag-based content addressing is good, even though I never really used it.
These beliefs of mine were quite strange. Using directories does not prevent me from using Roam-style links or org tags, and none of these prevent recursive grepping or semantic embedding and search.
All these compose together. And each solves a different problem.
I made a choice where there wasn't any to make. It's like trying to choose between eating only pasta or only kale.
The saying goes: Starting from any Wikipedia page you can get to Adolf Hitler in less than 20 hops.
I just tried this (using wikiroulette.co):
Imagine your notes were as densely connected as Wikipedia's.
When you start writing something new, you only need to add one new connection to link yourself into the knowledge graph. You can then traverse the graph from that point and think about how all these concepts relate to what you are currently doing.
Insight: Increasing stack size enables writing algorithms in their natural recursive form without artificial limits. Many algorithms are most clearly expressed as non-tail-recursive functions; large stacks (e.g., 32GB) make this practical for experimental and prototype code where algorithmic clarity matters more than micro-optimization.
Virtual memory reservation is free. Setting a 32GB stack costs nothing until pages are actually touched.
Stack size limits are OS policy, not hardware. The CPU has no concept of stack bounds—just a pointer register and convenience instructions.
Large stacks have zero performance overhead from the reservation. Real recursion costs: function call overhead, cache misses, TLB pressure.
Conventional wisdom ("don't increase stack size") protects against: infinite recursion bugs, wrong tool choice (recursion where iteration is better), thread overhead at scale (thousands of threads).
Ignore the wisdom when: single-threaded, interactive debugging available, experimental code where clarity > optimization, you understand the actual tradeoffs.
Note: Stack memory commits permanently. When deep recursion touches pages, the OS commits physical memory. Most runtimes never release it (though it seems this wouldn't be hard to do with madvise(MADV_DONTNEED)). One deep call likely commits that memory permanently, until process death. Large stacks are therefore practical only when you restart regularly, or when you accept permanent memory commitment up to the maximum recursion depth ever reached.
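As a concrete illustration, here is a minimal sketch in C, assuming 64-bit Linux with glibc; the 32 GB figure, the function, and the recursion depth are arbitrary illustrative choices of mine. It reserves a huge stack for a worker thread and runs a deliberately non-tail-recursive function far past the default ~8 MB limit. The reservation itself only consumes virtual address space.

```c
/* Build: cc -O0 -pthread big_stack.c
 * (-O0 so the compiler does not turn the simple recursion into a loop) */
#include <pthread.h>
#include <stdio.h>

#define STACK_SIZE (32UL * 1024 * 1024 * 1024) /* 32 GB of virtual address space */

/* Non-tail-recursive on purpose: the addition happens after the recursive
 * call returns, so each level needs its own stack frame. */
static long sum_to(long n) {
    if (n == 0) return 0;
    return n + sum_to(n - 1);
}

static void *worker(void *arg) {
    long depth = *(long *)arg;
    printf("sum_to(%ld) = %ld\n", depth, sum_to(depth));
    return NULL;
}

int main(void) {
    long depth = 50L * 1000 * 1000; /* tens of millions of frames: far beyond a default 8 MB stack */
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    if (pthread_attr_setstacksize(&attr, STACK_SIZE) != 0) {
        fprintf(stderr, "setting the stack size failed\n");
        return 1;
    }
    /* The 32 GB are only reserved here; physical pages get committed lazily
     * as the recursion touches them. The kernel's overcommit policy may
     * refuse a reservation this large. */
    if (pthread_create(&tid, &attr, worker, &depth) != 0) {
        fprintf(stderr, "creating the big-stack thread failed\n");
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}
```

For the main thread, the analogous knob is the stack rlimit inherited at startup (ulimit -s). And, as noted above, the pages that deep recursion has touched stay committed afterwards unless the runtime explicitly releases them with something like madvise(MADV_DONTNEED).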
"Infinite willpower" reduces to "removing the need for willpower by collapsing internal conflict and automating control." Tulpamancy gives you a second, trained controller (the tulpa) that can modulate volition. That controller can endorse enact a policy.
However, because the controller runs on a different part of the brain, some modulation circuits, e.g. those that make you feel tired or demotivated, are bypassed. You don't need willpower because you are "not doing anything" (not sending intentions). The tulpa is. And the neuronal circuits the tulpa runs on, which generate the steering intentions that ultimately turn into mental and/or muscle movements, are not modulated by the willpower circuits at all.
Gears-level model
First note that willpower is totally different from fatigue.
Principle: Reduce conflict and increase precision/reward for the target policy, and "willpower" isn't consumed; it's unnecessary. (This is the non-tulpa way.)
The central guiding principle is to engineer the control stack so that endorsed action is the default, richly rewarded, and continuously stabilized. Tulpamancy gives you a second controller with social authority and multi-modal access to your levers. This controller can just overwrite your mental state and has no willpower constraints.
The optimal policy probably includes both using the sledgehammer of overwriting your mental state and, at the same time, optimizing to adopt a target policy that you actually endorse wholeheartedly.
It's tempting to think of modern graphics APIs as requiring a bunch of tedious setup followed by "real computation" in shaders. But pipeline configuration is programming the hardware!
GPU hardware contains parameterizable functions implemented in silicon. When you specify a depth format or blend mode, you're telling the GPU how to compute.
Creating an image view with D24_UNORM_S8_UINT configures the depth-comparison circuits. Choosing a different depth format results in different hardware circuits activating, and therefore in a different computation.
So there isn't really a fixed "depth computation" stage in the pipeline. There is no single "I compute depth" circuit.
Another example: choosing an SRGB format activates the gamma-conversion hardware in silicon, whereas UNORM bypasses this circuit.
Why declare all this upfront? Because thousands of shader cores write simultaneously. The hardware must pre-configure memory controllers, depth testing units, and blending circuits before launching parallel execution. Runtime dispatch would destroy performance.
GPUs deliberately require upfront declaration. Because programmers pre-declare their computation patterns, the hardware can be configured once before the computation runs.
The API verbosity maps to silicon complexity. You're not "just setting up context". You're programming dozens of specialized hardware units through their configuration parameters.
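To make this concrete, here is a hedged sketch in C against the Vulkan API; device setup, pipeline assembly, and error handling are omitted, and the particular compare op and blend factors are illustrative choices of mine, not prescribed anywhere above. Each field selects the behavior of a fixed-function unit that runs alongside the shader cores:

```c
/* Minimal sketch (Vulkan assumed). Filling these structs is not boilerplate;
 * each field parameterizes fixed-function hardware. */
#include <vulkan/vulkan.h>

/* Depth/stencil configuration: depthCompareOp selects which comparison the
 * depth-test unit performs for every fragment. Swapping VK_COMPARE_OP_LESS
 * for VK_COMPARE_OP_GREATER is a different computation, not just a flag. */
static const VkPipelineDepthStencilStateCreateInfo depth_state = {
    .sType            = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO,
    .depthTestEnable  = VK_TRUE,
    .depthWriteEnable = VK_TRUE,
    .depthCompareOp   = VK_COMPARE_OP_LESS,            /* keep the closer fragment */
};

/* Blend configuration: these factors program the blending unit that combines
 * each shader output with the value already in the framebuffer. */
static const VkPipelineColorBlendAttachmentState blend_attachment = {
    .blendEnable         = VK_TRUE,
    .srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA,
    .dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,
    .colorBlendOp        = VK_BLEND_OP_ADD,            /* classic alpha blending */
    .srcAlphaBlendFactor = VK_BLEND_FACTOR_ONE,
    .dstAlphaBlendFactor = VK_BLEND_FACTOR_ZERO,
    .alphaBlendOp        = VK_BLEND_OP_ADD,
    .colorWriteMask      = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
                           VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT,
};

/* The depth format mentioned above is chosen the same way, when creating the
 * depth image and its view. */
static const VkFormat depth_format = VK_FORMAT_D24_UNORM_S8_UINT;
```

When these structs are later baked into a pipeline object, that is exactly the "configure once before launching parallel execution" step described above.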
If you haven't seen this video already, I highly recommend it. It's about representing the transition structure of a world in a way that allows you to visually reason about it. The video is timestamped to the most interesting section. https://www.youtube.com/watch?v=YGLNyHd2w10&t=320s
Disclaimer: my analysis is based on reading only very few of Said's comments (<15).
To me it seems the "sneering model" isn't quite right. I think what Said is often doing is something like this:
One of the main problems seems to be that in 1. any flaw is a valid target. It does not need to be important or load-bearing for the points made in the text.
It's like somebody building a rocket and shooting it to the moon, and Said complaining that the rocket looks pathetic. It should have been painted red! And he is right. It does look terrible and would look much better painted red. But that's sort of... not that important.
Said correctly finds flaws and nags about them. These flaws actually exist. But talking about them is often not that useful.
I expect that what Said does is simply nag about all the flaws he finds immediately. These will often be the unimportant flaws. But if there are actually important flaws that are easy to find, and are therefore the first thing he finds, then he will point those out, which can be very useful! How useful Said's comments are depends on how easy it is to find flaws that are useful to discuss vs. flaws that are not.
Also: derivations of new flaws (3.) might be much shakier and often not correct. Though I have literally only one example of this, so it might not be a general pattern.
Said seems to be a destroyer of the falsehoods that are easiest to identify as such.
Typst is better than LaTeX
I started to use Typst, and I feel a lot more productive in it. LaTeX feels like a slug in comparison. Typst doesn't feel like it slows me down when typing math or code. That, the online collaborative editor, and the very, very fast rendering are the most important features for me. Here are some more:
Here is a comparison of encoding the Game of Life in logic:
LaTeX
Typst
Typst in Emacs Org Mode
Here is some elisp to treat LaTeX blocks in Emacs Org mode as Typst math when exporting to HTML (it renders and embeds them as SVG images):
Simply eval this code and then call org-html-export-to-html-with-typst.