I know this is an old comment, but it's expressing a popular sentiment under a popular post, so I'm replying mainly for others' sake.
There's an organization called PauseAI that lobbies for an international treaty against building powerful AI systems. It's an international organization, but the U.S. branches in particular could use a lot of help.
I think this post suffers from a lack of rigor about the limits of its advice. One limit is that, if you let your vibes steer you away from interpersonal interactions, you'll also screen out interactions that have higher-than-average upside potential.
In most cases, most people's perceptions are similar to yours. (E.g., if you think that the guy who asked you out is weird, then most of the other women he asked out probably think so, too.) Consequently, if you and everyone else in the same situation are steered by vibes, then your failures of judgement will be correlated. In other words, some interactions will be systematically undervalued.
If you weren't steered by vibes, then you could have harvested that difference in value. To piggyback off of the examples that Said Achmiz gave:
When choosing whether to follow your vibes, remember that you're in a game whose Nash equilibrium mixes both strategies. If everyone else follows their vibes, then your best option is to interrogate yours (as Said describes). If most people are ignoring their vibes, then your best option is to follow yours. Neither strategy is dominant.
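As a toy illustration (all the payoff numbers below are invented for the sake of the example, not anything from the post), here's a Python sketch of a game where the best response flips with the population's strategy, so neither option dominates:

```python
# Toy model: the payoff of "follow" vs. "interrogate" depends on what
# fraction of other people follow their vibes. All payoff numbers here
# are invented purely for illustration.

def payoff(strategy: str, share_following: float) -> float:
    if strategy == "follow":
        # When most others follow vibes, the interactions they screen out
        # stay undervalued, so following the crowd pays less.
        return 1.0 - 0.5 * share_following
    # "interrogate" has a fixed cost, but harvests the interactions that
    # a vibe-following crowd has collectively mispriced.
    return 0.4 + 0.8 * share_following

for share in (0.1, 0.9):
    best = max(("follow", "interrogate"), key=lambda s: payoff(s, share))
    print(f"share following vibes = {share:.0%}: best response = {best}")
# share following vibes = 10%: best response = follow
# share following vibes = 90%: best response = interrogate
```

In this toy model the two strategies' payoffs equalize when roughly 46% of people follow their vibes, which is where the mixed equilibrium sits; the exact figure is an artifact of the made-up payoffs, but the flip in best response is the point.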
I took another look at my source, and I think you're right. The subject of the plot, the Federal Register (FR), lists changes to the Code of Federal Regulations (CFR). It also suffers from the other problem that I identified (repeals counting as new rules).
For anyone who's curious, here's a nice overview of measures of regulatory burden.
In terms of Zvi's 4 levels of legality, I think that your reasoning is a valid argument against crossing the line between levels 2 and 3. However, I don't think that it's a valid argument against what Zvi is actually proposing, which is going from level 1 to level 2. If we have an obligation to help people who have APD, then the most cost-effective (highest-utility) solution might involve making gambling less convenient for the general population.
I agree with the spirit of your comment, but there are a couple of technical problems with it. For one thing, the total number of pages does sometimes decrease. (See the last plot in this document.) Total pages isn't a perfect measure of regulatory burden, but many other measures have the problem of counting repeals as new regulations. (See the same source for a discussion of what counts as a "rule".) Also, most regulations are drafted by executive agencies, not legislatures, especially at the federal level.
What exactly is your hypothesis? Is it something like:

P1) People are irrationally averse to actions that have a positive expected value and a low probability of success.

P2) Self-deception enables people to ignore the low probability of success.

C) Self-deception is adaptive.
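To make P1 concrete, here's a toy expected-value calculation (the probability and payoff are numbers I invented for illustration):

```python
# Invented numbers, purely to illustrate P1: a bet with positive
# expected value but a low probability of success.
p_success = 0.05      # low probability of success
gain_if_success = 30  # stake multiple returned on success
stake = 1             # amount risked

expected_value = p_success * gain_if_success - (1 - p_success) * stake
print(expected_value)  # 0.55: positive EV, yet P1 predicts many people avoid it
```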
I tried to test this reasoning against the research that Daniel Kahneman (co-coiner of the term "planning fallacy") has done on optimism. He has many criticisms of over-optimism among managers and executives, as well as among more ordinary people (e.g., those who pursue self-employment).
However, he also notes that, for a given optimistic individual, their optimism may have a variety of personal, social, and societal benefits, ranging from good mood and health to inspiring leadership and economic innovation. He goes so far as to say, "If you are allowed one wish for your child, seriously consider wishing him or her optimism" (Thinking, Fast and Slow, p. 255).
Altogether, I think I'm missing a subtlety that would enable me to deduce the circumstances in which a bias towards optimism would be beneficial. Given that, I'm unable to test your hypothesis.
I would have liked to see those who disagree with this comment engage with it more substantially. One reason I think we're likely to get a warning shot is that LLM-based AIs are pretty consistently overconfident. Also, AI Control schemes have some probability of catching misaligned AIs.