Da_Peach
Da_Peach has not written any posts yet.

I agree with the content of the essay, but I disagree with the name it settles on.
Thus, I propose that people use the term pipeline in their discussions instead, since it's an established word for the same notion of moving people from one state to another, and it correctly conveys that there may be multiple stations along the way. I also suspect the essay could have been much shorter if it had opened with a single sentence about the "alt-right pipeline" or something similar, since that would instantly clue readers in to the crux of the topic being discussed.
I wonder if we can compare the experiences of F->M transitioners with those of M->F transitioners to see whether things like perception of saturation, etc., are consistent with the hormonal changes, or whether they are just effects of the body experiencing chemical imbalances, or of people simply being happier after transitioning so the world looks less gloomy.
I think that's not the same, since the code publishing flow looks like this: the agent suggests code, the employee reviews it, and the employee publishes it under their own name.
Instead of this: the agent writes the code and publishes it directly, on its own authority.
At least that's how I've seen coding agent integration work in my limited experience.
The fundamental problem being talked about in the post is accountability being shifted to the AI: since it naturally can't be held accountable for mistakes, you have to build architecture that prevents problems from popping up in the first place. But the way companies are currently deploying Claude Code et al. is the employee-centric flow, which doesn't require any additional internal security measures, since accountability for the code hasn't changed at all.
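To make the distinction concrete, here is a minimal, purely hypothetical sketch of that employee-centric gate (the Change/review/publish names are made up for illustration, not any real tool's API): the agent can only propose a diff, and nothing ships until a named employee signs off, so accountability never moves to the AI.

```python
# A purely hypothetical sketch of the "employee-centric" publishing flow:
# the agent can only *propose* changes; nothing is published until a named
# employee signs off, so accountability stays with that employee.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Change:
    diff: str
    author: str                        # e.g. "ai-agent" for agent-written code
    approved_by: Optional[str] = None  # the employee who signed off, if any


def review(change: Change, reviewer: str, approve: bool) -> Change:
    """The employee inspects the diff and either signs off or rejects it."""
    if approve:
        change.approved_by = reviewer
    return change


def publish(change: Change) -> None:
    """Publishing is gated on human approval, never on the agent alone."""
    if change.approved_by is None:
        raise PermissionError("no employee has signed off; refusing to publish")
    print(f"published change by {change.author}; accountable reviewer: {change.approved_by}")


# The agent proposes, the employee reviews, and only then does the code ship.
proposal = Change(diff="+ fix off-by-one in pagination", author="ai-agent")
publish(review(proposal, reviewer="jane.doe", approve=True))
```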
I think "stereotype of the stereotype" is a mouthful and also not very clear, so I propose the term "presumed stereotype" instead. Using it in a live conversation should immediately signal to the other party what you mean: that their description is what everyone presumes the stereotype to be, while the actual stereotype (which you would then describe to them) is something else, maybe even the polar opposite.
Could you kindly provide some examples of what you're talking about?
I'm not American myself, but I am interested in hearing about these dysfunctional rules.
That's a cool way to frame damage risks, but I think your distribution for AI damage is for ASI, not AGI. I think it's very reasonable that an AGI-based system may cause the type of damage that I am talking about.
Even if you believe that as soon as we achieve AGI we'll accelerate to ASI because AGI is by definition self-improving, it still takes time to train a model, and research is slow. I hope the window between AGI & ASI is large enough for such a "Hiroshima event" to occur, so that humanity wakes up to the risks of misaligned AI systems.
PS: Sorry for the late response; I was offline for a couple of days.
To sway public opinion about AI safety, let us consider the case of nuclear warfare—a domain where long-term safety became a serious institutional concern. Nuclear technology wasn’t always surrounded by protocols, safeguards, and watchdogs. In the early days, it was a raw demonstration of power: the bombs dropped on Hiroshima and Nagasaki were enough to show the sheer magnitude of destruction possible. That spectacle shocked the global conscience. It didn’t take long before nation after nation realized that this wasn't just a powerful new toy, but an existential threat. As more countries acquired nuclear capabilities, the world recognized the urgent need for checks, treaties, and oversight. What began as an arms race...
That's an interesting idea. The military would undoubtedly care about AI alignment — they'd want their systems to operate strictly within set parameters. But the more important question is: do we even want the military to be investing in AI at all? Because that path likely leads to AI-driven warfare. Personally, I'd rather live in a world without autonomous robotic combat or AI-based cyberwarfare.
But as always, I will pray that some institution (like the EU) leads the charge & starts instilling into people's heads that this is a problem we must solve.
I think this particular issue has less to do with public sentiment & more to do with problems that require solutions which would inconvenience you today for a better tomorrow.
Like climate change: it is an issue everyone recognizes will massively impact the future negatively (to the point where multiple forecasts suggest trillions of dollars of losses). Still, since fixing this issue will cause prices of everyday goods to rise significantly and force people into switching to green alternatives en masse, no one advocates for solutions. News articles get released each year reporting record-high temperatures & natural disaster rates; people complain that the seasons have been getting more extreme each passing year...
I don't get why this was curated; am I missing something? The piece basically says that what you want to do & what society expects you to do are two separate things (a topic which has been explored since time immemorial). Then it says that you should evaluate what you really want to do based on rational thinking & long-term planning (also something incredibly obvious). Is there anything more to it?
I thought the only novel bit was the passage about oxytocin, which is barely 10% of the article.