I'm an admin of LessWrong. Here are a few things about me.
Randomly: if you ever want to talk to me about anything you like for an hour, I'm happy to be paid $1k to do that.
Thanks for writing the post!
William's recent & excellent (& totally spoiling, don't read if you haven't read HPMOR) review of HPMOR comes to mind. Here's his summary of what the story is about (vague-but-meaningful spoilers):
He gets to explain the thing he’s missing in life in Chapter 6:
“I know it doesn’t sound like much,” Harry defended. “But it was just one of those critical life moments, you see? I mean, I knew that not thinking about something doesn’t stop it from happening, I knew that, but I could see that Mum really thought that way.” Harry stopped, struggling with the anger that was starting to rise up again when he thought about it. “She wouldn’t listen. I tried to tell her, I begged her not to send me out, and she laughed it off. Everything I said, she treated like some sort of big joke...” Harry forced the black rage back down again. “That’s when I realised that everyone who was supposed to protect me was actually crazy, and that they wouldn’t listen to me no matter how much I begged them, and that I couldn’t ever rely on them to get anything right.” Sometimes good intentions weren’t enough, sometimes you had to be sane...
And later in the same chapter:
I’ve been isolated my whole life. Maybe that has some of the same effects as being locked in a cellar. And I’m too intelligent to look up to my parents the way that children are designed to do. My parents love me, but they don’t feel obliged to respond to reason, and sometimes I feel like they’re the children - children who won’t listen and have absolute authority over my whole existence.
What he wants is to have someone he can look up to, the way a child “is designed to” look up to his parents. This is his wish.
And, because this is the kind of story this is, the wish is granted by the devil.
Similarly, the feeling I occasionally have interacting with many people in the world is "they are insane". In your post you talk about a software dev you met who has only ever used an LLM ~once. Not having bothered to use LLMs a bunch is, like, not being involved in the most important thing happening in the world. We invented new intelligences (not quite life forms, but still!) and they're ~freely available to interact with! You can use them to do useful stuff! They're still getting smarter! A software dev can use them for their job!
I want boundaries between me and people who are that out of touch with the real world. I don't particularly trust their judgment, or want them to have power over my life, or to have them involved in my life. And that's a simplification; also, they're here and we all have to work something out.
Now, are rationalists reliably sane? Nope. But sometimes they are. A momentary lapse into reason can occur, and sometimes it extends for hours or months or even years, and that's exciting.
And also the culture is way better. Around these parts, it's odd to be so out of touch with reality as to not have used LLMs (or at least, if you haven't, you'll have some good account of why, rather than it just never having seemed worth trying). So the group-level incentives point toward being in touch with reality. If someone writes up an argument that you're screwing up in some behavior or key part of your life, the expected thing to do is to respond with a counterargument, not to dismiss them for being impolite. This is a force toward engaging with reality that most people do not experience.
I don't quite know where I'm going with this, but I felt an impulse to express some of this attitude toward most people vs. rationalists.
Relatedly, I've been thinking about building a schedule-this-post-for-publication feature. If I finish a post at 10pm, it's often better for visibility to publish it the next morning. My guess is this would be useful for Inkhaven Residents who finish writing near midnight.
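To gesture at what I mean, here's a minimal sketch of how the scheduling check might work. Everything in it (the `Post` shape, the `PostStore` interface, the field names) is hypothetical illustration, not the actual LessWrong codebase:

```typescript
// Hypothetical sketch of a scheduled-publication check, run periodically
// (e.g. by a cron job every few minutes). None of these names come from
// the real codebase.

interface Post {
  id: string;
  title: string;
  draft: boolean;
  scheduledPublishAt?: Date; // author-chosen future publication time
}

interface PostStore {
  // Drafts whose scheduledPublishAt is at or before `now`.
  findScheduledDrafts(now: Date): Promise<Post[]>;
  publish(postId: string, publishedAt: Date): Promise<void>;
}

// Publish every draft whose scheduled time has arrived.
async function publishScheduledPosts(store: PostStore): Promise<void> {
  const now = new Date();
  const due = await store.findScheduledDrafts(now);
  for (const post of due) {
    // Stamping publishedAt with the scheduled time (rather than `now`)
    // keeps the post's position in time-ordered feeds predictable,
    // even if the job runs a few minutes late.
    await store.publish(post.id, post.scheduledPublishAt ?? now);
  }
}
```

Under this setup, a post finished at 10pm with a 9am scheduled time would sit as a draft overnight and go live the next morning when the periodic job next runs.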
I will read it when you publish it!
We had mostly ruled out political advocacy, so there was no one trying to do the "make connections with congresspeople" work that would have caused us to discover that someone had been thinking of this as an important issue for years.
It is the case that I thought there was little point in building political connections on this issue; but the earlier, more fundamental failure was that for the last two decades there have only ever been a handful of people working seriously on this problem to begin with, which means most balls would be dropped regardless.
How the heck has this guy been in Congress the whole time and we've not heard about him / he's not been in contact with the AI x-risk scene?
I'm somewhat confused... which reply about why I disagree makes what clear?
Also, to repeat, I don't see that many strengths—I only gave it +3. (And on reflection, I'd give the sequel a negative vote if it had been upvoted much in the first round of the review.)
These posts are about transformative psychological experiences that leave a person with a different sense of meaning in their lives, or ethics, or attitudes toward people / the world. That seems to me to be a real part of the world and human experience, and I think it's worthwhile exploring it and trying to understand its causes and what implications it may have for ethics/meaning/etc.
I think it's worth exploring religions as having been shaped substantially by some transformative psychological experiences, and as purporting to bring those experiences to adherents. But it seems like a somewhat impoverished analysis of religions to treat this as the primary thing they're built around, setting aside everything else that shapes them (e.g. the memetic success of the stories, the political competence of the institutions, etc.). These posts read to me like they're quietly implying that this is the true core of religions, in a way that wants all religions to get along, and they ignore a lot of what's going on with religions.
That is a weakness, but I vote on posts for presence of strengths more than for absence of weaknesses.
I think it helps these posts to include personal accounts of transformative psychological experiences. I kind of wish there were more of that, relatively.
The attempted translations in the sequel post didn't help me much. Goodness of reality, karma, morality being clear / immorality being confusion, stuff about the afterlife... perhaps this would help someone else attempting the project of analyzing religious messages, but I didn't find it very insightful or helpful.
I think I'll give this post a +3 and the sequel no vote (or +0).
I fear you are leaning on a piece of public infrastructure, the "personal list of disclaimers", that doesn't exist.
My guess is that this is a misreading of Eliezer's stance that you should not aim for CEV as your first goal with a superintelligence that you think is plausibly aligned. Quoting the wiki:
CEV is rather complicated and meta and hence not intended as something you'd do with the first AI you ever tried to build.
If you were talking about Owen's essay, that's not what this thread is about. (And if so, please take this as a datapoint against commenting with low context.)