npostavs

Hard RSI: AI modifies itself in a way that is different from just changing numerical values of its weights. It creates a new version of itself [...]
In hard RSI there is no danger of misalignment since AI doesn't create a successor, but rather modifies itself. In easy RSI there is danger of misalignment, [...]
I don't think I understand how "creates a new version of itself" is different from "create a successor"?
Oh, LLMs also suggested SCP-3125, but I thought they were wrong because "U" didn't seem like a plausible typo for "SCP". I wasn't aware of the alternate U-3125 naming.
But rebelling against a globalized techno-political-memeplex is like rebelling against U-3125.
Is "U-3125" referencing something?
I think I'm mostly following now, but when you write stuff like:
In the higher education system, I expect it would take the form of increasing the swathe of universities which taught a complete curriculum, as well as evening out the distribution of staff.
I wonder, is the undergraduate curriculum really significantly different between top-tier universities and others? Instead of spending space on the rocket analogy, it would be more useful to establish that sort of thing about the actual subject. More generally, the post is missing a lot of detail about universities and has far too much detail about rockets.
(I haven't cast any votes on your post)
You are talking so much about rockets that I can't even tell what point you're trying to make about universities. The post would probably be a lot clearer without this analogy.
This is an accidental double post of https://www.lesswrong.com/posts/FJxc4Lk6mijiFiPp2/the-big-nonprofits-post-2025 (also double posted on the wordpress site: https://thezvi.wordpress.com/2025/11/26/the-big-nonprofits-post-2025/ and https://thezvi.wordpress.com/2025/11/27/the-big-nonprofits-post-2025-2/)
Seems understandable to me (although I guess I'm somewhat primed by reading the previous versions).
I think most instances of "you" can be omitted in English as well:
Imagine: you study an immature AI in depth. Decode its mind entirely. Develop a great theory of how it works. Validate this theory on a bunch of examples. Use that theory to predict how the AI’s mind will change as it ascends to superintelligence and gains (for the first time) the very real option of grabbing the world for itself. Even then, you are, fundamentally, using a new and untested scientific theory to predict the results of an experiment that has not yet run, about what the AI will do when it really, actually, for real has the opportunity to grab power from the humans.
This seems to be an accidental repost of https://www.lesswrong.com/posts/9TPEjLH7giv7PuHdc/crime-and-punishment-1 from April. (It's also reposted on https://thezvi.wordpress.com/2025/11/03/crime-and-punishment-1-2/, but not thezvi.substack.com/).
It sounded kind of... rehearsed? Not sure if I should take this as a real position.