I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.
I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
I admit I didn't read the whole thing, but I'm pretty sure from skimming that you're simply describing memory reconsolidation in Jungian language.
ETA (10:51): So, uh, not to be too pointed here, but why not just say you're doing memory reconsolidation? Why invent a new word?
Yes, exactly. For what it's worth, what you're getting at in this post is roughly why I wrote Fundamental Uncertainty (or am still writing it, as the final version is still under revision), where I argue that epistemic uncertainty matters a lot, is pervasive and unavoidable, and therefore causes problems when you try to build AI that's aligned. In the book I don't spend much time on AI, but I wrote it because, while working on AI alignment, I saw how much this issue mattered, so I set out to convince others it's important. My hope is that once the book is published I'll have time to focus more on the AI side of things, using the book as a referent for loading up the worldview where uncertainty is foundational (which seems surprisingly hard to do for a bunch of reasons).
I forget what's in the deliberate grieving post, but based on what you say here, I'll note that what I have in mind is largely about identity, not plans. As in, the root of emotional processing is attachment not to an idea about plans but an idea about the self. When one thinks "this is a great plan" the second thought is often "and I'm a great person for coming up with such a great plan". If the plan isn't great, then the person might not be either, and that's way more painful than the plan not being great.
Based on a lot of observations, I see rationalists sometimes manage to get around this because they are far enough on the autism spectrum to just not form a strong sense of identity. More often, though, they LARP at not having a strong sense of identity, and actually have to first get in touch with who they are (as opposed to who they wish they were) before they can develop the skills to do actual emotional processing instead of bypassing it (and suffering all the usual consequences of suppressing a part of one's being).
Interested to see what you have to say here. In my experience, the emotional processing stage is often the bottleneck, both in that it's the one people have the hardest time with, and in that it creates bottlenecks in the other stages by disincentivizing giving them much energy, since they generate more emotional processing. Once emotional processing is quick, the other stages start to speed up because orienting to something new is no longer painful.
While it may keep its usual alignment with US positions, it will not accept a US hegemony over the AI era. France has proved time and time again that it will stand alone if it needs to.
This seems wrong to me. France was able to "stand alone" for a long time thanks to a relatively large population (less true now) and lots of farmland and other key resources. But that hardly means it has, or can get, the power to reject US hegemony. It can say it rejects it, but it has little real ability to do anything about it other than try to opt out (and be left behind), and it seems clear that France will continue to backslide in relevance unless it manages to grab more real power instead of simply trying to maintain a level of autonomy it imagines it deserves based on stories about the past.
(If this seems mean, I think I have equally mean critiques about how every other country is screwing up. I'm just calling out France here because the claims in the post are about France.)
Recognizing that you would hesitate to go on a vengeance rampage is a sign that you aren’t truly in love with the person you’re with. Maybe people avoid looking at that because realizing they aren’t in love with their partner would be very inconvenient.
I think this is typical minding. Yes, I'm sure some people really do love this way and aren't feeling love and hide from this fact. But this isn't the only kind of love. You're describing a kind of clinging love that demands to keep the other and would do anything to defend them. In this story, that clinging love is made out to be noble, but in a parallel story with slightly altered details, it could be turned into jealous rage.
For example, my understanding is that most teleosemantic theories try to ground our notions of purpose/agency in biological evolution. My feeling is that this is overly restrictive.
As a person who makes a teleosemantic argument, it seems silly to argue that the only source of purpose is evolution, but it also seems right to say that, in some sense, all human purposes, and the purposes imbued to things created by humans, have their ultimate origin in evolution making creatures who care about survival and reproduction (not care as in psychologically care, though they may do that, but care as in be oriented towards achieving the goals of survival and reproduction). The problem with swamp-man counterexamples is that swamp men don't exist.
That said, obviously things can get purpose from somewhere other than evolution, so this is not an argument that evolution is somehow special as the only source of purpose; evolution is just one of many processes that can create purpose. It's only special in that, on Earth, it's the process from which most other purposes derive.
No, The Sort is succeeding in Europe, it's just that Europe is on the low end of The Sort in most cases, and most of the Sorted are getting relocated to other places that offer better pay and amenities.
This happens because most of Europe has made it effectively illegal to be upper middle class thanks to an aggressive tax regime. The upper middle class can only exist where marginal labor can generate enough income to push people into the bottom rungs of the upper class.
I think The Sort is something different but related.
When I think of the rat race, I think of how people feel disconnected from the value of their labor. They feel like their work has no meaning because they are removed from the real impact by several layers. They feel like all they do is push paper (or now, send emails) and never see how their work connects to the real world.
The rat race is perhaps a byproduct of the processes that create The Sort, but we could have a rat race without The Sort (I think it'd be fair to say Japan has this even when it was insulated from The Sort).
I don't really buy the premise of over-reliance. I can only rely on a technology too much with respect to some goal. For example, if I need to be able to survive in the wilderness, then using a microwave causes my skill at starting fires to dwindle from lack of practice.
I instead think in terms of what I'm trying to accomplish. If all I want is a picture, not to learn how to draw, and the alternative was no picture, then I'm pretty happy to get a diffusion model to draw me a picture. Similarly, when writing, if the choice is fail to finish a blog post because I'm stuck or get unstuck with Claude's help, I choose Claude, even if that means I get marginally worse at solving writer's block on my own. As long as I have Claude around, this isn't an issue, and if I lose it, I can go back to doing things the hard way.
For another example, I've been doing ML research lately, in that I'm trying to train new models from scratch. I don't have much background in ML, but I am a professional programmer and have some adjacent experience with data analytics. So I'm using Claude Code to vibe code a lot of this stuff for me, but I'm always setting the direction. As a result, I'm not really learning PyTorch, but then if I had to learn PyTorch I probably wouldn't even bother to invest the time to train new models because it would be more than I had time to take on. So instead I vibe code to see if I can get results, and then learn as I go, filling in my knowledge by looking at the code, fixing problems, and trying to understand how it all works after it's already working.
The big change would be if Claude got better at writing in my style. The most painful and time-consuming part is still editing. Claude can only roughly approximate good writing, much less good writing of the type I would personally produce. It would be a big speedup if I could just dump thoughts and have Claude sort them out and turn them into easily readable text. Even being able to write in the maximally Gordon way that's not fit for other people to read (which is a little of what you're getting in this comment; it's stream-of-thought and not polished for concision or readability) and have that turned into nice text would be a big deal.