I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.
I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
On one hand, at every individual step these things made sense and (I have to admit) they worked, in that they pushed us over some difficult hurdles and got us to accomplish what seems to me like genuinely useful stuff. But on the other hand, ingredients like these are what Stockholm syndrome is made of, and I saw it taking hold in myself and those around me.
In some Buddhist lineages, like Zen, the relationship between student and teacher is meant to be like that between child and parent. However, this is a relationship that normally develops over months and years, and at first you're treated more like a lost child who's a guest that might end up staying and getting adopted or might wander on. Many teachers won't let new students attend sesshins (retreats) both because their practice might not be strong enough to handle it and because the relationship between teacher and student is not yet firmly established.
Personally, I think Zen's cautious approach is better than throwing people into the deep end. Best I can tell, the risk of psychosis is much higher with Goenka-style retreats, although I don't have hard numbers, only anecdotal evidence and theory that suggests it should be more common.
As much as I want it to be true that progress is stalling, I think Sora and Atlas and the rest are mostly signs that OpenAI is trying to become an everything app via both vertical and horizontal integration. These are just a few of the building blocks they would need, and the business case for targeting an everything app seems strong given their valuation: the valuation is only defensible if they look likely to eat multiple existing business sectors, or create new ones as big as many existing ones.
I’ve come to the conclusion that none of this would truly help me, and that, one way or another, I’m going to die anyway. What difference does it make whether I die in 60 years or in 10,000? In the end, I’ll still be dead.
The difference is 9,940 years of living! Who knows what you might get up to.
Perhaps it's a difference of opinion, but the value of life is in the living of it, not in how it ends.
The problems with logical positivism seem to me... kinda important philosophically, but less so in practice.
Yes, most of the time they don't matter, but then sometimes they do! In particular, I think the wrongness of logical positivism matters a lot if you're trying to solve a problem like proving that an AI is aligned with human flourishing. There's a specific, technical answer you want to guarantee, but it requires formalizing a lot of concepts that normally squeak by because all the formal work is being done by humans who share assumptions. When you need the AI to share those assumptions, things get dicier.
Oh, yes, I forgot about Kevin's blog! I also forgot about the ToC! I'm unfortunately not the best person to be writing an accurate history, as I tend to forget details that were once quite important. I guess that's why I promised an oral history, both because it was a talk and because you should treat this as a data point from which one might construct a complete history.
Regardless of what's happening at Anthropic, I totally believe that 90% of code could be written by AI, because 90%+ of my code is written by AI, and specifically by Claude Code.
Of course, what this means exactly has some nuance. Most of the time Claude is acting as something like a more heavily automated keyboard for me. It's not actually doing that much on its own; it's doing what I tell it to do under relatively close supervision. So it nominally writes 90%+ of the lines of code, but only a much smaller fraction of the time, maybe 10%, does it do something interesting on its own that makes it into production without heavy rewriting.
I like this. I notice you don't mention religion in this post, but I think one of the things religions do really well is try to provide access to all three of a scene, a clique, and a team at the same time (though I wouldn't have known to put it this way before reading your post!).
Why I say this:
I've previously made a case that rationalists should be more religious, and being able to talk in more detailed terms about the community benefits religions offer is helpful!
Yes. This is something I frequently try to emphasize when someone is meditation-curious but not already committed to doing it. I say that for most people it's great, but some people have trouble, and if you're in the category of people who might have trouble (especially people at high risk of schizophrenia), then you should avoid doing it.
I constantly find myself needing to give opposing advice because I'll read something and feel like it leaves out the other side. So, someone says meditation is great, and I'm like, whoa, there are risks. Someone says meditation sucks or isn't worth doing, and I extol its virtues.
Apparently I'm forever cursed to push people back towards the middle way. 😜