Confused about the disagreements. Is it because of the AI output or just the general idea of an AI risk chatbot?
Here's a riddle: A woman falls in love with a man at her mother's funeral, but forgets to get contact info from him and can't get it from any of her acquaintances. How could she find him again? The answer is to kill her father in hopes that the man would come to the funeral.
It reminds me of [security mindset](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html), in which thinking like an attacker exposes leaky abstractions and unfounded assumptions, something that is also characteristic of being agentic and "just doing things."
In fact, Claude 3 Opus is still available.
Is Judaism not also based around disputation of texts?
I pretty much agree with this. I just posted this as a way of illustrating how simulacrum stages could be generalized to be more than just about signalling and language. In a way, even stocks are stage 4 since they cash out in currency, so something can be one stage in one respect but a different stage in another.
Simulacrum stages as various kinds of assets:
I made a Manifold market for how many pre-orders there will be!
I'm generally confused by the notion that Buddhism entails suppressing one's emotions. Stoicism maybe, but Buddhism?
Buddhism is about what to do if one has no option but to feel one's emotions.
I think the prior for aliens having visited Earth should be lower, since a priori it seems unlikely to me that aliens would interact with Earth but not to an extent which makes it clear to us that they have. My intuition is that it's probably rare to get to other planets with sapient life before building a superintelligence (which would almost certainly be obvious to us if it did arrive), and even if aliens did manage to reach other planets with sapient life, I doubt they would refrain from contacting us if they're anything like humans.