Here is a model of mine that seems related.
[Edit: Add Epistemic status]
Epistemic status: I have used this successfully in the past and found it helpful. It is relatively easy to do. The effect is large for me.
I think it is helpful to be able to emotionally detach yourself from your ideas. There is an implicit "concept of I" in our minds. When somebody criticizes this "concept of I", it is painful. If somebody says "You suck", that hurts.
There is an implicit assumption in the mind that this concept of “I” is eternal. This has the effect that when somebody says “You suck”, it is actually more like they say “You sucked in the past, you suck now, and you will suck, always and ever”.
In order to emotionally detach yourself from your ideas, you need to sever the links in your mind between your ideas and this “concept of I”. You need to see an idea as an object that is not related to you. Don't see it as “your idea”, but just as an idea.
It might help to imagine that there is an idea-generation machine in your brain. That machine makes ideas magically appear in your perception as thoughts. Normally when somebody says "Your idea is dumb", you feel hurt. But now we can translate "Your idea is dumb" to "There is idea-generating machinery in my brain. This machinery has produced some output. Somebody says this output is dumb".
Instead of feeling hurt, you can think "Hmm, the idea-generating machinery in my brain produced an idea that this person thinks is bad. Well maybe they don't understand my idea yet, and they criticize their idea of my idea, and not actually my idea. How can I make them understand?" This thought is a lot harder to have while being busy feeling hurt.
Or "Hmm, this person that I think is very competent thinks this idea is bad, and after thinking about it I agree that this idea is bad. Now how can I change the idea-generating machinery in my brain, such that in the future I will have better ideas?" That thought is a lot harder to have when you think that you yourself are the problem. What is that even supposed to mean that you yourself are the problem? This might not be a meaningful statement, but it is the default interpretation when somebody criticizes you.
The basic idea here is to frame everything without any reference to yourself. It is not me producing a bad plan, but some mechanism whose output I just happened to observe. In my experience, this not only helps alleviate pain but also makes you think more useful thoughts.
Here is what I would do in the hypothetical scenario where I have taken over the world.
Though this is what I would do in any situation really. It is what I am doing right now. This is what I breathe for, and I won't stop until I am dead.
[EDIT 2023-03-01_17-59: I have recently realized that this is just how one part of my mind feels. The part that feels like me. However, there are tons of other parts in my mind that pull me in different directions. For example, there is one part that wants me to do lots of random improvements to my computer setup, which are fun to do, but probably not worth the effort. I have been ignoring these parts in the past, and I think that their grip on me is stronger because I did not take them into account appropriately in my plans.]
"Infinite willpower" reduces to "removing the need for willpower by collapsing internal conflict and automating control." Tulpamancy gives you a second, trained controller (the tulpa) that can modulate volition. That controller can endorse enact a policy.
However, because the controller runs on a different part of the brain, some modulation circuits, e.g. the ones that make you feel tired or demotivated, are bypassed. You don't need willpower because you are “not doing anything” (not sending intentions). The tulpa is. And the neuronal circuits the tulpa runs on, which generate the steering intentions that ultimately turn into mental and/or muscle movements, are not modulated by the willpower circuits at all.
Gears-level model
First note that willpower is totally different from fatigue.
Principle: Reduce conflict and increase precision/reward for the target policy, and “willpower” isn't consumed; it's unnecessary. (This is the non-tulpa way.)
The central guiding principle is to engineer the control stack so that endorsed action is the default, richly rewarded, and continuously stabilized. Tulpamancy gives you a second controller with social authority and multi-modal access to your levers. This controller can just overwrite your mental state and has no willpower constraints.
The optimal policy probably includes both using the sledgehammer of overwriting your mental state and, at the same time, optimizing toward a target policy that you actually endorse wholeheartedly.
It's tempting to think of modern graphics APIs as requiring a bunch of tedious setup followed by "real computation" in shaders. But pipeline configuration is programming the hardware!
GPU hardware contains parameterizable functions implemented in silicon. When you specify a depth format or blend mode, you're telling the GPU how to compute.
Creating an image view with D24_UNORM_S8_UINT configures depth-comparison circuits. Choosing a different depth format results in different hardware circuits activating, which results in a different computation.
So there isn't really a fixed "depth computation" stage in the pipeline. There is no single "I compute depth" circuit.
Another example: choosing an SRGB format activates the gamma-conversion hardware in silicon, whereas a UNORM format bypasses this circuit.
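To make this concrete, here is a rough Vulkan sketch. This is my own illustration, not from the original post: the make_view helper and the device/image handles are assumed to exist, and error handling is omitted. The point is that the only thing that changes between "depth-comparison path" and "gamma-conversion path" is a single enum:

```c
// Rough sketch: the format enum passed at image-view creation decides which
// fixed-function circuits are used when the view is read or written.
#include <vulkan/vulkan.h>

VkImageView make_view(VkDevice device, VkImage image,
                      VkFormat format, VkImageAspectFlags aspect) {
    VkImageViewCreateInfo info = {
        .sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
        .image    = image,
        .viewType = VK_IMAGE_VIEW_TYPE_2D,
        .format   = format,  // this one field "programs" different silicon
        .subresourceRange = { .aspectMask = aspect,
                              .levelCount = 1,
                              .layerCount = 1 },
    };
    VkImageView view;
    vkCreateImageView(device, &info, NULL, &view);  // error handling omitted
    return view;
}

// Depth/stencil path: accesses go through the depth-comparison units.
//   make_view(device, depth_image, VK_FORMAT_D24_UNORM_S8_UINT,
//             VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT);
//
// Color path: an _SRGB format routes writes through the gamma-conversion
// circuit, while _UNORM bypasses it.
//   make_view(device, color_image, VK_FORMAT_B8G8R8A8_SRGB,  VK_IMAGE_ASPECT_COLOR_BIT);
//   make_view(device, color_image, VK_FORMAT_B8G8R8A8_UNORM, VK_IMAGE_ASPECT_COLOR_BIT);
```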
Why declare all this upfront? Because thousands of shader cores write simultaneously. The hardware must pre-configure memory controllers, depth testing units, and blending circuits before launching parallel execution. Runtime dispatch would destroy performance.
GPUs deliberately require upfront declaration. Because programmers must pre-declare their computation patterns, the hardware can be configured once before the computation runs.
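Here is roughly what that upfront declaration looks like at pipeline-creation time, again as my own hedged Vulkan sketch (only the depth and blend pieces of the pipeline state are shown):

```c
// These structs are consumed once, at pipeline creation, long before any draw
// call runs. The depth-test and blend units are configured from them; nothing
// is dispatched at runtime to decide how to compare depths or blend colors.
#include <vulkan/vulkan.h>

VkPipelineDepthStencilStateCreateInfo depth_state = {
    .sType            = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO,
    .depthTestEnable  = VK_TRUE,
    .depthWriteEnable = VK_TRUE,
    .depthCompareOp   = VK_COMPARE_OP_LESS,  // "the depth test" is just this parameter
};

VkPipelineColorBlendAttachmentState blend_attachment = {
    .blendEnable         = VK_TRUE,
    .srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA,
    .dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,
    .colorBlendOp        = VK_BLEND_OP_ADD,
    .colorWriteMask      = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
                           VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT,
};

// Both are plugged into VkGraphicsPipelineCreateInfo. Changing any field means
// building a different pipeline object, i.e. configuring the hardware differently.
```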
The API verbosity maps to silicon complexity. You're not "just setting up context". You're programming dozens of specialized hardware units through their configuration parameters.
If you haven't seen this video already, I highly recommend it. It's about representing the transition structure of a world in a way that allows you to visually reason about it. The video is timestamped to the most interesting section. https://www.youtube.com/watch?v=YGLNyHd2w10&t=320s
Disclaimer: Note that my analysis is based on reading only a small number of Said's comments (<15).
To me it seems the "sneering model" isn't quite right. I think often what Said is doing seems to be:
One of the main problems seems to be that in 1. any flaw is a valid target. It does not need to be important or load-bearing for the points made in the text.
It's like somebody building a rocket and shooting it to the moon, and Said complaining that the rocket looks pathetic. It should have been painted red! And he is right about it. It does look terrible and would look much better painted red. But that's sort of... not that important.
Said correctly finds flaws and nags about them. And these flaws actually exist. But talking about these flaws is often not that useful.
I expect that what Said is doing is to just nag about all the flaws he finds immediately. These will often be the unimportant flaws. But if there are actually important flaws that are easy to find, and are therefore the first thing he finds, then he will point those out. This can then be very useful! How useful Said's comments are depends on how easy it is to find flaws that are useful to discuss vs. flaws that are not useful to discuss.
Also: Derivations of new flaws (3.) might be much shakier and often not correct. Though I have literally only one example of this so this might not be a general pattern.
Said seems to be a destroyer of the falsehoods that are easiest to identify as such.
This is a useful video to me. I am somewhat surprised that physics crackpots exist to the extent that this is a known concept. I actually knew this before, but failed to relate it to this article and my previous comment.
I once thought I had solved P=NP. And that seemed very exciting. There was some desire to just tell some other people I trust. I had some clever way to transform SAT problems into a form that is tractable. Of course, later I realized that transforming solutions of the tractable form back into solutions of the original SAT problem was NP-hard. I had figured out how to take a SAT problem and turn it into an easy problem that was totally not equivalent to the SAT problem. And then I marveled at how easy it was to solve the easy problem.
My guess at what is going on in a crackpot's head is probably exactly this. They come up with a clever idea and can't tell how it fails. So it seems amazing. Now they want to tell everybody, and will do so. That seems to be what makes a crackpot a crackpot: being overwhelmed by excitement and sharing their thing without trying to figure out how it fails. And intuitively it really, really feels like it should work. You can't see any flaw.
So it feels like one of the best ways to avoid being a crackpot is to try to solve a bunch of hard problems, and fail in a clear way. Then when solving a hard problem your prior is "this is probably not gonna work at all" even when intuitively it feels like it totally should work.
It would be interesting to know how many crackpots are repeat offenders.
I am somewhat confused about how somebody could think they have made a major breakthrough in computer science without being able to run some algorithm that does something impressive.
Imagine being confused about whether you have an algorithm that solves some pathfinding problem. You run your algorithm on pathfinding problems, and either it doesn't work, or it is too slow, or it actually works.
Or imagine you think you have found a sorting algorithm that is somehow much faster than quicksort. You just run it and see whether that is actually the case.
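For example, the whole question can be settled with a few dozen lines of C. This is just a sketch: my_fancy_sort is a hypothetical stand-in (here it merely wraps qsort so the file compiles) that you would replace with the algorithm you think you have.

```c
// "Talk to reality": time a candidate sort against the standard library's
// qsort on the same random data, and check that the output is actually sorted.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static int cmp_int(const void *x, const void *y) {
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* Stand-in for the algorithm under test; replace with your own sort. */
static void my_fancy_sort(int *a, size_t n) {
    qsort(a, n, sizeof *a, cmp_int);
}

static void qsort_baseline(int *a, size_t n) {
    qsort(a, n, sizeof *a, cmp_int);
}

static double run_seconds(void (*sort_fn)(int *, size_t), int *a, size_t n) {
    clock_t t0 = clock();
    sort_fn(a, n);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    size_t n = 10u * 1000 * 1000;
    int *a = malloc(n * sizeof *a);
    int *b = malloc(n * sizeof *b);
    for (size_t i = 0; i < n; i++) a[i] = rand();
    memcpy(b, a, n * sizeof *a);

    printf("qsort baseline: %.3f s\n", run_seconds(qsort_baseline, a, n));
    printf("my_fancy_sort:  %.3f s\n", run_seconds(my_fancy_sort, b, n));

    /* Speed is meaningless if the result is wrong, so check it too. */
    for (size_t i = 1; i < n; i++) {
        if (b[i - 1] > b[i]) { printf("my_fancy_sort output is NOT sorted!\n"); break; }
    }

    free(a);
    free(b);
    return 0;
}
```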
It seems like "talking to reality" is really the most important step. Somehow it's missing from this article. Edit: Actually it is in step 2. I am just bad at skim reading.
Granted, the above does not work as well for theoretical computer science. It seems easier to be confused about whether your math is right than about whether your algorithm efficiently solves a task. But still, math is pretty good at showing you when something doesn't make sense, if you look carefully enough. It lets you look at “logical reality”.
The way to avoid being led to believe false things really doesn't seem different whether you use an LLM or not. Probably an LLM triggers some social circuits in your brain that make it more likely that you become falsely confident. But this seems more like a quantitative than a qualitative difference.
Why can't the daemon just continuously look at a tiny area around the gate and decide based only on that? A tiny area seems intuitively sufficient both for recognizing that a molecule would go from left to right when the gate is opened, and for recognizing that no molecule would go from right to left. This would mean that it doesn't need to know a distribution over molecules at all.
Basically: why can't the daemon just solve a localised control task?
Typst is better than LaTeX
I started to use Typst. I feel a lot more productive in it. LaTeX feels like a slug. Typst doesn't feel like it slows me down when typing math or code. That, the fact that it has an online collaborative editor, and the very fast rendering are the most important features for me. Here are some more:
Here is a comparison of encoding the Game of Life in logic:
LaTeX
Typst
Typst in Emacs Org Mode
Here is some Elisp to treat LaTeX blocks in Emacs Org mode as Typst math when exporting to HTML (it renders and embeds them as SVG images):
Simply eval this code and then call org-html-export-to-html-with-typst.