::looks up Fahrenheit the person on Wikipedia::
Apparently, Daniel Fahrenheit was the inventor of the mercury thermometer (a significant improvement over the alcohol-filled thermometers of the era), and his original procedure for calibrating it defined three points on a temperature scale: a solution of ammonium chloride in ice water should be zero degrees, the freezing point of water should be 30 degrees, and human body temperature should be 90 degrees. (That last one was a bit off - holding the other two points constant, it should have been 95.)
People soon noticed that the melting and boiling points of water were very close to 180 degrees apart on this scale, so it was adjusted slightly to make them exactly 180 degrees apart. In the current version of the Fahrenheit scale, the boiling point of water is 212 degrees, the freezing point of water is 32 degrees, and the standard value for human body temperature is 98.6 degrees. It's not an elegant system, but by coincidence it does happen to be a pretty good scale for outdoor temperatures, which tend to range between 0 and 100 degrees (instead of between -20 and 40 degrees on the Celsius scale).
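As a sanity check on those numbers, the standard conversion is F = C × 9/5 + 32; here's a throwaway Python snippet (my own, obviously not part of the Wikipedia summary) verifying the fixed points:

```python
def celsius_to_fahrenheit(c):
    """Standard conversion: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

print(celsius_to_fahrenheit(0))     # 32.0: freezing point of water
print(celsius_to_fahrenheit(100))   # 212.0: boiling point of water
print(celsius_to_fahrenheit(100) - celsius_to_fahrenheit(0))  # 180.0 degrees apart
print(f"{celsius_to_fahrenheit(37):.1f}")  # 98.6: standard human body temperature
```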
350 degrees Fahrenheit, incidentally, happens to be a common temperature for baking food in an oven. 😆
I've seen this called "audience capture" in the context of blogging - optimizing for maximum attention (and therefore maximum revenue) by saying whatever is popular.
Any love for the Stanford Encyclopedia of Philosophy?
This is a lot to think about. My late wife definitely lived in a much darker world than I have, and I really hope I managed to be a ray of light in hers.
Is it better to have a rationality meetup with a baseline level of unskilled ad-hoc conflict resolution, or no rationality meetup? (You can ask the same question about any social event potentially open to the public.) I imagine the answer would depend on the number of people expected to attend - informal methods work better for groups of 10 than for groups of 100.
I actually have found an example of a strategy that doesn't incentivize someone else to self-modify into Hawkbot: https://www.lesswrong.com/posts/TXbFFYpNWDmEmHevp/how-to-give-in-to-threats-without-incentivizing-them
Basically, when you're faced with a probable extorter, you play Cooperate some of the time (so you don't always get nuked), but you Defect or Nuke back often enough that Hawkbot's expected value ends up lower than what it would get from plain Cooperate/Cooperate.
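Here's a minimal sketch of that expected-value comparison, with made-up payoff numbers (mine, not from the linked post):

```python
# Toy payoffs to the extorter, all assumed for illustration:
GIVE_IN = 3.0     # extortion succeeds (victim gives in to the threat)
NUKE = -10.0      # victim resists and the threat is carried out
COOPERATE = 2.0   # plain Cooperate/Cooperate, no extortion attempted

def extorter_ev(p_give_in):
    """Extorter's expected value when the victim gives in with probability p."""
    return p_give_in * GIVE_IN + (1 - p_give_in) * NUKE

# The victim wants the largest p that still keeps extortion unprofitable:
# p * GIVE_IN + (1 - p) * NUKE < COOPERATE
p_max = (COOPERATE - NUKE) / (GIVE_IN - NUKE)  # 12/13 with these numbers
print(f"give in with probability below {p_max:.3f}")
print(extorter_ev(0.9))  # 1.7 < 2.0, so extortion pays worse than cooperating
```

With these numbers you can give in up to about 92% of the time and Hawkbot still nets less than it would have gotten by just cooperating, so there's nothing to gain by self-modifying into it.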
There's a fairly simple and common conflict that I find explains a lot of procrastination:
I want X to have been done.
I do not want to be doing X.
So you either struggle to convince yourself to "voluntarily" do X even though it's unpleasant, or you don't do X and instead struggle with worrying about the consequences of not doing it. :/
Does any process in which they ended up the way they did without considering your decision procedure count as #2? Like, suppose almost all the other agents it expects to encounter are CDT agents that do give in to extortion, and it thinks the risk of nuclear war with the occasional rock or UDT agent is worth it.
Especially in the age of AGI, leaders may no longer need to respect the values of most people, because most people will no longer be economically relevant.
Or militarily relevant. Traditionally, if you were a ruler, you had to at least keep your army happy. However, if you command an entirely automated army that doesn't have any actual people in it, there's no risk of the army turning against you. You have the robot weapons and nobody else does, so you can do whatever the hell you want to people without having to care what anyone else thinks.
I took a college course in probability theory but never took one on information theory that would have given a formal definition of Shannon entropy, so I guess I'm a little bit underqualified for this explanation? I think I can kind of handwave my way through it, though, with the general idea that the lowest entropy a "random" variable can have is zero, which happens when it's constant...
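For what it's worth, the definition itself is short enough to check that intuition directly; a minimal sketch (my own, assuming a discrete distribution):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([1.0]))        # 0.0 bits: a constant "random" variable
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(shannon_entropy([0.25] * 4))   # 2.0 bits: uniform over four outcomes
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin, in between
```

The constant case really is the floor: with one outcome at probability 1, every term vanishes, so the entropy is exactly zero, and any actual uncertainty pushes it above that.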