This is one of those mechanisms which are obvious once you notice them, and really useful to know about, but which weirdly go unnoticed. After reading this I started noticing a lot more of these, and (hopefully) became more open to accepting non-extreme versions of various things that I previously thought horrific.
It's sad that Duncan is Deactivated, as there are multiple posts of his like this one that have made me a better person.
This is an enjoyable, somewhat humorous summary of a very complicated topic, spanning literally billions of years. So it naturally skips and glosses over a bunch of details, while managing to give relatively simple answers to:
I really appreciated the disclaimers at the top - every time I discuss biology, I bump into these limitations, so it's very appropriate for an intro article to explicitly state them.
Wealth not equaling happiness works both ways. It's the idea of losing wealth that's driving sleep away, and the goal of buying insurance here is to minimize the risk of losing wealth. The real thing stopping you from sleeping is not whether you have insurance, it's how likely it is that something bad happens which will cost more than you're comfortable losing. Having insurance is just one way to minimize that - the problem is stress stemming from uncertainty, not whether you've bought an insurance policy.
The list of misunderstandings is a bit tongue in cheek (at least that's how I read it). So it's not so much disdainful of people's emotions as it is pointing out that whether you have insurance is not the right thing to worry about - it's much more fruitful to work out the probabilities of the various bad things and then calculate how much you should be willing to pay to lower that risk. It's about viewing the world through the lens of probability and deciding these things on the basis of expected value. Rather than have sleepless nights, just shut up and multiply (this is a quote, not an attack). Even if you're very risk averse, you should be able to plug that into the equation and come up with some maximum insurance cost above which it's not worth buying. Then you buy it (or not) and sleep the sleep of the just. The point is to actually investigate it and put some numbers on it, rather than live in stress. This is why it's a mathematical decision with a correct answer. The correct answer will of course be subjective and depend on your utility function, but it's still a mathematical decision.
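To make the "put some numbers on it" step concrete, here's a minimal sketch of what that calculation could look like. None of this is from the original post: the wealth, loss, and probability figures are made up, and log utility is just one common way to encode risk aversion.

```python
import math

# Hypothetical numbers - substitute your own.
wealth = 100_000   # current wealth
loss = 30_000      # size of the bad event if it happens
p_loss = 0.01      # your estimate of the probability it happens this year

def utility(w):
    # Log utility: one simple way to represent risk aversion.
    return math.log(w)

def eu_uninsured():
    # Expected utility without insurance: weigh both outcomes by probability.
    return p_loss * utility(wealth - loss) + (1 - p_loss) * utility(wealth)

def eu_insured(premium):
    # With full cover you pay the premium no matter what, and the loss is reimbursed.
    return utility(wealth - premium)

# Binary search for the premium at which you're indifferent between the two.
lo, hi = 0.0, loss
for _ in range(100):
    mid = (lo + hi) / 2
    if eu_insured(mid) > eu_uninsured():
        lo = mid
    else:
        hi = mid

print(f"Maximum premium worth paying: ~{lo:.2f}")
# With these numbers the break-even premium comes out a bit above the expected
# loss (p_loss * loss = 300), because log utility penalises large losses more.
```

Swap in your own estimates and utility function; the point is just that risk aversion becomes a number you can compute with, rather than a reason for sleepless nights.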
Spock is an interesting example to use, given how very much not rational he is. Here's a lot more on that topic.
It's probably not that large a risk though? I doubt any alien microbes would be that much of a problem for us. It seems unlikely that they would happen to use exactly the same biochemistry as we do, which makes it harder for them to infect or digest us. Chirality is just one of the multitude of ways in which Earth's biosphere is "unique". It's been a while since I was knowledgeable about any of this, but a quick o1 query seems to point in the same direction. Worth going through quarantine just in case, of course. Though quarantine works for Earth pathogens, which tend to die off quickly without hosts to infect - that might well not hold for more interesting environments.
Peter Watts's Rifters series goes a bit into this topic. This is by no means evidence either way; I just wanted to let more people know about it.
A bit of nitpicking: the basic Open Source deal is not that you can do what you want with the product, it's that the source code should be available. The whole point of introducing open source as an idea was to allow corporations etc. to give access to their source code without worrying so much about people doing what you're describing. Deleting a "don't do this bad thing" clause can be prosecuted as copyright infringement (if the whole license gets removed). This is what copyleft was invented for - to subvert copyright law by using it to force companies to publish their code.
There are licenses like MIT which do what you're describing. Others are less permissive, and e.g. only allow you to use the code in non-commercial projects, or stipulate that you have to send any fixes back to the original developer if you're planning on distributing it. The GPL is a fun one, which requires any code derived from it to also be released under the same terms when distributed.
Also, Open Source can very much be a source of liability, e.g. the SCO v. IBM case, which tried to get people to pay for Linux (copyright trolls being what they are), or Oracle v. Google, where Oracle (arguably also trolling) wanted Google to pay billions for use of the Java API (this ended up in the Supreme Court).
It's not that the elite groups are good or bad, it's the desire to be in an elite group that leads to bad outcomes. Like how the root of all evil is the love of money: money in itself isn't bad, it's the desire to possess it that is. Mainly because you start to focus on the means rather than the ends, and so end up in places you wouldn't originally have wanted to be.
It's about status. Being in with the cool kids etc. Elite groups aren't inherently good or bad - they're usually just those who are better at whatever is valued, or at least better at signaling that they are better at whatever is valued, depending on the group phase (the classic description being geeks, mops and sociopaths or Scott Alexander's version). For many people, status is one of the most important things there are. And not just for instrumental reasons, but on a deep terminal level. You can argue that it's an evolutionary instrumental goal, but for them status is a value in and of itself. From what I've read of your comments around here, I'm assuming that's not true of you, especially as your last paragraph comes to the same conclusion as Lewis does.
People for whom status is so important are easy to manipulate by promising them status. They're willing to sacrifice other values for status gains - basically Moloch and moral mazes on a personal level. So the best case scenario of chasing status just for the sake of status is that you spend lots of resources chasing a mirage, as there's always another, higher-status group that you haven't yet joined. Unfortunately, many such status seekers want to join groups that tend towards immoral/illegal/etc. actions, so to join them you have to compromise yourself. The Russian kompromat system is a good example of how this works in practice. Or blackmail schemes, where you get the target to do worse and worse things to avoid the previous action being leaked. Most inner circles are not that blatant, of course. The problem is that if you value joining such inner circles more than your other values, there will probably be points where you have to choose between the two, and too many people prefer to sacrifice their other values on Moloch's altar.
It's not just from https://aisafety.info/. It also uses Arbital, any posts from the Alignment Forum, LessWrong, or the EA Forum that seem relevant and have a minimum karma, a bunch of arXiv papers, and a couple of other sources. This is a relatively up-to-date list of the sources used (it also contains the actual data).
Another, related Machiavellian tactic, when starting a relationship that you suspect will be highly valuable to you, is to have an argument with them as soon as possible, and then patch things up with a (sincere!) apology. I'm not suggesting you go out of your way to start a quarrel, more that it's both a valuable data point as to how they handle problems (as most relationships will have patchy moments) and a good signal to them that you value them highly enough to go through a proper apology.
There seems to be a largish group of people who are understandably worried about AI advances but see no hope of influencing them, and so start panicking. This post is a good reminder that yes, we're all going to die, but since you don't know when, you have to prepare for multiple eventualities.
Shorting life is good if you can pull it off. But the same caveats apply as to shorting the market.