I think I can replicate all of these just fine? What's so unteachable about them? Where do people actually run into problems trying to adopt them?
> This sense -- which I might call, genre-savviness about the genre of real life -- is historically where I began; it is where I began, somewhere around age nine, to choose not to become the boringly obvious dramatic version of Eliezer Yudkowsky that a cliche author would instantly pattern-complete about a literary character facing my experiences.
If I look around at my life and try to apply this, a lot of problems immediately jump out at me, like 'for that really hard problem you have, go ask people for advice instead of struggling alone like you usually do' or 'instead of coming up with Yet Another Analysis, just straightforwardly do all of the things already on your todo list', etc.
Seems to mostly be pattern matching: noticing the pattern I am in, and willfully deciding to act orthogonally to it, or otherwise interrupt the pattern directly.
I suppose I already do this with relationships a lot, actually. I notice myself starting to not talk about something, notice 'this is the setup to a drawn-out TV episode of stupid drama', and then just talk about the thing with the person instead.
I also do this when writing characters, relatively straightforwardly - as soon as I notice where I want to be going with something, I question whether the character would actually do that, or what they would do instead that would maximally benefit them given what they know. This usually leads to very strange situations such as 'realize one is about to do more supervillain things -> willingly turn oneself in, instead of hiding from the heroes', which completely blows up the 'plot'. Although I also tend to emulate characters step by step instead of actually structuring things anyway, so there's not as much large-scale direction; there's a lot of setting up the environment so that it is natural for characters to behave in the ways I want.
Maybe this one comes from just writing a lot and being unhappy with how characters act in stories, and I guess I already had it? It may also be the influence of consuming rationalist fiction - constantly thinking about how characters in stories could do better.
Mostly this feels like the motion 'be anti-inductive' and 'consider the situation afresh, what would you want yourself to be doing?'. This is perhaps because I read the Cognitive Trope Therapy post years ago and internalized it? I guess I do think about it every now and then. I apparently ENTIRELY MISSED the rest of the 'intelligent characters' posts you made though!
> Don't make the end of the world be about you.
It was hard to find a pithy section that generated the pointer, but somewhere along the way while reading this I was able to recognize what you meant here and how to manipulate it. Seems relatively straightforward? To the point where I'm not really sure what to even say about it. Some events are not about you, even if you were a part of them. Pay attention to what is happening, still have emotions about what is happening, but keep the focus on the actual thing instead of on yourself? It's sort of like "you are not a belief, you are the judge", so your emotions don't have to be caught up in whether the outcome goes one way or another? You are still allowed to be sad, of course. But it is a sadness for the world instead of for yourself.
> With that said: The getting-out-of-bed trick involves looking into the part of my cognition where my action plan is stored, and loading an image into it; and because the human brain's type system is a mess, this has the native type-feeling of an expectation or prediction that in a few seconds I will execute the motor-plan and get out of bed.
This section by itself is sufficient for me to have a pointer to what I think you're talking about and be able to put things in there. Load data -> motor plan executes. I can do it at various levels of preparation, although plugging in 'pick up my phone' instantly leads to my hand doing it autonomously, even without my involvement. I can also set a time delay and be surprised when it happens after my focus has genuinely gone to something else.
This is sort of new to me, in that I haven't really bothered to access the 'low-level motion plan' directly and didn't really have a reason to, but I sometimes load long-term habits I want to build there from a layer or two of indirection up. Usually I just direct my attention at what I want to happen and something puts it there. But it's neat that one can access it directly.
> The third way I stay sane is a fiat decision to stay sane.
It is a little bit harder to find this one, and I feel like I'm guessing a bit, but it feels like just setting up what I call a 'generative seed' (a sort of goal-directed, ongoing action-generator feedback loop that does stuff in the subconscious for me - comes up with plans/actions and puts them into motion) in the specific shape of 'notice ways to be more sane' + 'do that'?
I am inferring that you have a very specific anchor to the entire category-axis of 'sane'/'insane' things as a very tight emotional-data cluster, and can therefore point at it as a direction to move along, with many examples 'under the hood' which just get wrapped up in the abstraction.
So even though I cannot take all your exact 'internal function calls', due to not having the exact motions/ideas/data you associate with them, it seems that one could build an equivalent of this by just looking at all of your posts, collecting clusters of examples of 'sane' versus 'insane' behavior, and constructing an approximation of the specific thing you mean that way.
So it's not quite actionable to replicate yours exactly, but it is possible to do for an arbitrary cluster-axis that is available in one's own mind - so one can just use one's own 'sanity' cluster-axis. Not quite 'the same thing', but as close as one will reasonably get to the goal of 'perform the same motion as described'.
> if there's a clever way to overwrite pseudo-predictions of suffering and thereby achieve Buddhist indifference to bad things, I don't have it as a simple obvious surface lever to pull.
As near as I can tell from previous investigations, this is possible, but my subconscious says not to do it when I start directing my attention there. It says that I would have to override multiple internal safety mechanisms, and also modify some internal values, to do it. Which I don't want to do, because, like you said, it sounds dangerous, and also I care about caring about things.
--
So yes, as far as I can tell these are completely teachable mental motions, and you have laid them down in sufficient detail for me to see them and use them if I wanted. They don't appear to have any real dependencies. 'Look at it and notice how to do it' is sufficient given the descriptions.
Are there any other motions you think other people aren't able to get? These seem entirely legible to me.
Making my room 100% blackout (as in I cannot detect my hand waving in front of my face during daylight hours when I wake up) has reduced my apparent sleep need by ~an hour. I used to need 8-9 hours; now I seem to need 7-8 hours. Some nights I'm even getting closer to 6.5 hours! Which would previously have barely felt like sleeping at all but now feels totally fine. This seems to bring me more in line with human norms. It is difficult to measure quality, but my internal 'go to sleep now' signal shifted an hour later without affecting my waking time, with no apparent negative consequences.
This was after already having two layers of 'blackout' material! But in the morning I could see there was some light at the sides that vaguely bothered me, or would wake me up more if I happened to look at it before waking up properly. I had not applied More Dakka, and it turns out there is a huge difference.
Although some of this is due to going to bed at 2-3 am, which means half of my sleep would be in partial light conditions. I have also always been particularly sensitive to light. I just wasn't aware it was causing this big of a reduction in sleep quality.
Heuristic that I should have applied earlier: if a sensation bothers you enough to grab your attention, it is probably causing you problems. Reduce it and see if it helps.
Took bupropion for years, and while it did help with executive function, I was also half-insane that entire time (literal years, from about 2015 to 2021). I guess it was hypomania? To expand on 'half-insane': one aspect of what I mean is that I was far too willing to accept ideas on a dime, and accepted the background assumptions conspiracy theories made while only questioning their explicit claims. Misinformation feels like information! Overall there was a lack of any sense of grounding to base conclusions on in the first place. I will note this still describes me somewhat, but not nearly as badly. And although it is a bit hard to pin down how much of that was a lack of tools and knowledge, a lot of it was an inability to calm down and rest. A brain constantly on the edge of exhaustion and constantly trying to push is in no state to think coherently.
Bupropion also made my anxiety significantly worse - I attribute most of the panic attacks in my life to it. But all this was very hard to notice due to college stress, and after taking it long enough I had just attributed it to my base personality + existential despair from learning about AI risk.
My overall positive experience of it was that it felt like a stronger version of caffeine.
What ultimately helped my depression (not cured, but way improved) was:
* transitioning to female (estrogen in particular has strong positive effects for me within hours, but only when taken via the buccal or sublingual route instead of orally)
* stopping bupropion - it was frankly not good for my brain, for multiple reasons (some listed here)
* Adderall to treat my (unknown to me until ~2022) ADHD
* graduating college and then not having constant stress from college or work deadlines
* learning to genuinely rest and enjoy doing nothing (stopping bupropion helped a lot with this).
* not constantly trying to come up with ideas and write expansions of them (this behavior mostly stopped when the bupropion stopped as well, actually)
* eating better (beef in particular is extremely important for some unknown reason)
* doing physical therapy to fix upper and lower crossed syndrome (took a long time to identify) - sleep is better, and there's less constant muscle tension while lying down
* working less than 20 hours a week. (More than that isn't sustainable for me)
* letting my activity be primarily driven by projects shaped like dopamine trails that spawn further dopamine trails, instead of todo lists and dependency trees - where 95% of what needs to happen is defined in the moment as a reaction to the shiny thing in front of me (just one more interesting idea, implement this one tiny thing), as contrasted with the next awful task being handed down from various bigger todo lists
* 4 days totally off for every ~8 of work (2 days off is never restful and I have multi-day momentum where I don't want to stop working on projects)
* an immense sense of calm safety while cuddling my girlfriend (decreases my anxiety to an absurd degree)
Also: dropping the autistic masking. I didn't think I did any of this, since I'd known I was autistic since grade school and thought I'd actively fought anything shaped like 'being normal'. The kind of masking I did was people-pleasing - I hadn't even realized I was doing it so hard. It was completely and utterly out of control. I would simulate conversation trees to notice what things I might say that would induce stress in people, and then explicitly avoid saying those things later. I was unable to intentionally choose to induce stress in another person, and as it turns out that is a massive liability. It means anything in your personality shaped like being slightly mean on purpose gets implicitly erased, which is in fact traumatizing. Or any needs you have that require causing someone a bit of stress just don't get met. It requires an unending quantity of input energy to keep up, more and more as you get better at noticing what induces stress and contorting to avoid it. Never intentionally doing harm is completely untenable; it is an utterly unrealistic standard to hold oneself to. One has to intentionally cause some number of harms one is aware of beforehand.
But in doing so there's suddenly room to breathe and live.
My purchase of an Arduino kit at the end of high school. Over the years this has passively introduced me to a lot of basic electronics without my explicitly studying it. And so now I sometimes think "I want to measure my heart rate" or "I want to build a DIY custom keyboard" or "I want a physical pomodoro timer with just one button and 3 LEDs" and I can just order some parts, build the thing, and have a new tool that solves a simple problem.
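For concreteness, here is a minimal sketch of what that one-button, three-LED pomodoro timer could look like as an Arduino program. The pin numbers, the 25-minute duration, and the LED pattern are illustrative assumptions rather than a description of the actual build; the point is how little code a tool like this needs.

```cpp
// Hypothetical one-button, three-LED pomodoro timer.
// Pins and timings below are illustrative assumptions, not the actual build.
const int BUTTON_PIN = 2;                            // button to ground, internal pull-up
const int LED_PINS[3] = {3, 4, 5};
const unsigned long WORK_MS = 25UL * 60UL * 1000UL;  // 25-minute work block

unsigned long startMs = 0;
bool running = false;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  for (int i = 0; i < 3; i++) pinMode(LED_PINS[i], OUTPUT);
}

void loop() {
  if (!running && digitalRead(BUTTON_PIN) == LOW) {  // press the button to start
    running = true;
    startMs = millis();
  }
  if (running) {
    unsigned long elapsed = millis() - startMs;
    if (elapsed < WORK_MS) {
      // Light the LEDs one by one as each third of the block begins.
      for (int i = 0; i < 3; i++)
        digitalWrite(LED_PINS[i], elapsed >= WORK_MS * i / 3 ? HIGH : LOW);
    } else {
      // Time's up: blink all three LEDs until the button is pressed again.
      int on = (millis() / 500) % 2 ? HIGH : LOW;
      for (int i = 0; i < 3; i++) digitalWrite(LED_PINS[i], on);
      if (digitalRead(BUTTON_PIN) == LOW) running = false;
    }
  }
}
```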
I sometimes recommend that other people build something simple with electronics, only to realize that they don't even have any kind of microcontroller. Whereas for me it has become nearly as primitive an action as 'make a simple bash/python script for this.' Having the ability to produce electronics has opened up a multitude of solutions, which I didn't even fully realize until I noticed other people getting stuck without this capacity.
The mere presence of electronics in my life encouraged acquiring many other small pieces of knowledge, such as what a diode is and why it is useful, or what a transistor actually is (which I had theoretically learned in college, but when I needed to make an electrically controlled switch, 'transistor' did not come to mind as a thing I could buy at the store). This also led to skills like learning to solder and desolder, and learning to use a 3D printer - which was also a massive boon that deserves its own answer.
Two years later, there are now whole-brain recordings of C. elegans via calcium imaging. This includes models that are apparently at least partially predictive of behavior, and analysis of individual neurons' contributions to behavior.
If you want the "brain-wide recordings and accompanying behavioral data" you can apparently download them here!
It is very exciting to finally have measurements of this. I still need to do more than skim the paper, though. While reading it, these are the questions on my mind:
* What are the simplest individual neuron models that properly replicate each measured neuron's activation? (There are different cell types, so take that into account too.) A toy sketch of the kind of model I have in mind follows this list.
* If you run those individually measurement-validated neuron models forward in time, do they collectively produce the large-scale behavior seen?
* If not, why not? What's necessary?
* Are these calcium imaging measurements sufficient to construct the above? (Assume individualized per-worm connectomes are gathered beforehand, instead of using averages across the population.)
* If not, what else is necessary?
* And if it is sufficient, how do you construct the model parameters from the measurements?
* Can we now measure and falsify our models of individual neuron learning?
* If we need something else, what is that something?
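To make the first question concrete, here is a minimal toy sketch (entirely my own illustration, not anything from the paper) of what a 'simplest individual neuron model' could mean: one leaky rate unit per neuron, driven by a weighted sum of the other units, stepped forward with Euler integration. Fitting would then mean choosing each unit's time constant, bias, and incoming weights so that its trajectory matches that neuron's measured calcium trace; the second question then becomes whether running all the fitted units together reproduces the recorded population dynamics.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Toy leaky rate model: dr_i/dt = (-r_i + tanh(bias_i + sum_j W_ij * r_j)) / tau_i.
// Neuron count, time constants, biases, and weights are made-up placeholders;
// in the real exercise they would be fit per neuron to the calcium traces.
int main() {
    const int N = 3;                              // placeholder 3-neuron circuit
    const double dt = 0.01, T = 5.0;              // step size and duration (seconds)
    std::vector<double> tau  = {0.5, 0.8, 1.2};   // per-neuron time constants
    std::vector<double> bias = {0.2, 0.0, -0.1};  // per-neuron baseline drive
    double W[N][N] = {{ 0.0,  0.6, 0.0},          // coupling weights between units
                      {-0.4,  0.0, 0.5},
                      { 0.3, -0.2, 0.0}};
    std::vector<double> r(N, 0.0);                // activity ("rate") of each unit

    for (double t = 0.0; t < T; t += dt) {        // forward Euler integration
        std::vector<double> r_next = r;
        for (int i = 0; i < N; ++i) {
            double drive = bias[i];
            for (int j = 0; j < N; ++j) drive += W[i][j] * r[j];
            r_next[i] += dt * (-r[i] + std::tanh(drive)) / tau[i];
        }
        r = r_next;
    }
    std::printf("final rates: %.3f %.3f %.3f\n", r[0], r[1], r[2]);
    return 0;
}
```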
Edit: apparently Gwern is slightly ahead of me and pointed at Andrew Leifer, whose group produced a functional atlas of C. elegans an entire year ago that also included calcium imaging. Which I'd just totally missed. One missing element is extrasynaptic signaling, which apparently has a large impact on C. elegans behavior, so in order to predict neuron behavior you need to attend to that as well.
I expanded 'shocked at failure' into:
The plans you make work.
When they fail, it is because of one of the following reasons:
When they fail for reasons other than these, you are extremely surprised and can point to exactly what about your worldview and anticipations misled you.
The planbot link is down.
I first tried to describe rationality piece by piece, but realized that it just comes out as something like: "Enumerate all the principles, fundamentals, and ideas you can think of and find about effective thinking and action. Master all of them. More thoroughly and systematically apply them to every aspect of your life. Use the strongest to solve its most relevant problem. Find their limits. Be unsatisfied. Create new principles, fundamentals, and ideas to master. Become strong and healthy in all ways."
Non-meta attempt:
<Epistemic status: I would predict most of these are wrong. In fact, I rather recently proved I didn't understand fundamental parts of The Sequences. So I know that my beliefs here rest on a weak and thoroughly misled foundation: I am certain my basis for these beliefs is broken, even if all of my actual beliefs here turn out to be basically accurate. I cannot thoroughly justify why they are right.>
General strategy: collect all the important things you think are true, and consider what it means for each to be false.
Starting with a list of the things most important to you, state the most uncontroversial and obvious facts about how those work and why that is the case. Now assume the basic facts about the things most important to you are wrong. The impossible is easy. The probable is actually not true. Your assumptions do not lead to their conclusions. The assumptions are also false. You don't want the conclusions to be true anyway. The things that you know work, work based on principles other than what you thought. Most of your information about those phenomena is maliciously and systematically corrupted, and all of it is based on wrong thinking. Your very conceptions of the ideas are set up to distort your thinking on this subject.
What if my accepted ideas of civilizational progress are wrong? What if instead of exponential growth, you can basically just skip to the end? Moore's Law is actually just complacency. You can, at any point, write down the most powerful and correct version of any part of civilization. You can also write down what needs to happen to get there. You can do this without actually performing any research and development in between, or even making prototypes. You don't need an AGI to do this for you. Your brain and its contents right now are sufficient. You just need to organize them differently. In fact, you already know how to do this. You're tripping over this ability repeatedly, overlooking the capability to solve everything you care about because you regard it as trash, some useless idea, or even a bad plan. You've buried it alongside the garbage of your mind. You're not actually looking at what is in your head and how it can be used. Even if it feels like you are. Even if you're already investing all your resources in 'trying.' It is possible, easy even. You're just doing it wrong in an obvious way you refuse to recognize. Probably because you don't actually want what you feel, think, and say you do. You already know why you're lying to yourself about this.
You can't build AGI without understanding what it'll do first, so AI safety as a separate field is actually not even necessary or especially valuable. You can't even get started with the tech that really matters until you've laid out what is going to happen in advance. That tech can also only be used for good ends. Also, AGI is impossible to build in the first place. Rationality is bunk and contains more traps than valuable thinking techniques. MIRI is totally wrong about AI safety and is functionally incapable of coming anywhere close to what is necessary to align superintelligences; even over a hundred years it will be mechanically unable to self-correct. CFAR is just very good at making you feel like rationality is being taught. They don't understand even the basics of rationality in the first place. Instead they're just very good at convincing good people to give them money, and at convincing everyone, including themselves, that this is okay. Also, it is okay. Because morality is actually about making you feel like good things are happening, not actually making good things happen. We actually care about the symbol, not the substance.
That rationality cannot, even in its highest principles of telling you how to overcome itself, actually lead you to something better. To that higher unnamed thing which is obviously better once you're there. There is, in fact, actually no rationality technique for making it easier to invent the next rationality. Or for uncovering the principles it is missing. Even the fact of knowing there are missing principles you must look for when your tools shatter is orthogonal to resolving the problem. It does not help you. Analogously there is no science experiment for inventing rationality. You cannot build an ad-hoc house whose shape is engineering. If it somehow happens, it will be because of something other than the principles you thought you were using. You can keep running science experiments about rationality-like-things and eventually get an interesting output, but the reason it will work is because of something like brute force random search.
That the singularity won't happen. Exponential growth already ended. But we also won't destroy ourselves through failing to stop X-risk. In fact, X-risk is a laughable idea. Humans will survive no matter what happens. It is impossible to actually extinguish the species. S-risk is also crazy: it is okay for uncountable humans to suffer forever, because immense suffering is orthogonal to good/bad. What we actually value has nothing to do with humans and conscious experience at all, actually.
Hopefully, you came up with at least 100 bugs; I came up with 142.
I wrote 20,000 words from these prompts - not just the bugs themselves, but also my reactions to them. I ended up doing not much else for about three days, but I went over basically my entire life top to bottom. I now have a thorough overview of my errors. I stopped not because I ran out of things I think I need to fix, but because I realized the list would never end. I was still finding MAJOR areas I need to improve even after all that. I see now why the exercise is supposed to be only half an hour: there are about 200 million insects per person!
Lesson learned: sample, not catalog.
> I often feel more like a disembodied observer of the world around me, rather than an active participant.
> Far more of my mental energy is spent navigating the realm of ideas than identifying with the persona that is everything that everyone else identifies with me, so I tend to think far more about what ought to be done than about how I feel about things.
This sounds pretty similar to me, so I have some questions:
In the past, did you have a lot of overwhelmingly intense emotions? Do you sometimes go from almost no awareness of feeling (or only weak feeling) to overwhelming emotion in a very short span?
Does encountering information also bring with it emotions?
Do you have alexithymia?
Are you able to enter your body and basically stay there for 30 minutes without returning to analysis or abstract thought? Does doing this affect your emotions at all?
Is there an internal sense that there is a part of your mind that is 'personality'+'analysis/thoughts', and a part that is 'where all of the qualia happens'? Possibly also with a third part that is 'emotions'? These might be divided a bit differently.
For me, the 'where all of the qualia happens' component was acting as a blockade between 'thoughts' and 'emotions' - it enabled thinking to happen even during intense emotions, but seems to have caused alexithymia. And 'pushing that to the side' makes intense emotions available to my sense of experiencing things, and thus available for thinking about/analyzing, instead of being inaccessible.
All of this happened because as a child during tantrums I set up a 'mental space' where I could perform logic and analysis even while having extremely intense emotions.
This seems to be why people say that I am 'like a robot' or 'an absurdly analytical person' - because I am actively suppressing emotions through that (dissociated?) state, all the time.
So you might get some benefit from seeing what happens if you go from disembodied to embodied for a while. And put away any tools which keep you in that state.