whateverfor
whateverfor has not written any posts yet.

Do you have some links to calibration training? I'm curious how they handle model error (the error when your model is totally wrong).
For question 10, for example, I'm guessing that many more people would have gotten the correct answer if the question had been something like "Name the best-selling PC game, where best-selling solely counts units rather than gross, the number of box purchases and not subscriptions, and also excludes games packaged with other software" instead of "What is the best-selling computer game of all time?". I'm guessing most people answered WoW or Solitaire/Minesweeper or Tetris, each of which would be the correct answer if you remove one of those...
I've always believed that having an issue with utility monsters comes from either a lack of imagination or a bad definition of utility (if your definition of utility is "happiness," then a utility monster seems grotesque, but that's because your definition of utility is narrow and lousy).
We don't even need to stretch to create a utility monster. Imagine a spacecraft that's been damaged in deep space. There are four survivors: three are badly wounded and one is relatively unharmed. There's enough air for four humans to survive one day or one human to survive four days. The closest rescue ship is three days away. After assessing the situation and verifying the air supply, the...
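To make the arithmetic behind that setup explicit, here is a quick sketch (the variable names are mine; the numbers just restate the scenario):

```python
# Air budget from the scenario: four person-days of air, rescue in three days.
air_supply = 4                 # person-days of air (four people for one day)
days_until_rescue = 3
survivors = 4

air_to_save_everyone = survivors * days_until_rescue   # 12 person-days needed
air_to_save_one = 1 * days_until_rescue                # 3 person-days needed

print(air_to_save_everyone > air_supply)   # True: all four cannot last until rescue
print(air_to_save_one <= air_supply)       # True: a single survivor can
```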
Realistically, Less Wrong is most concerned with epistemic rationality: the idea that having an accurate map of the territory is very important to actually reaching your instrumental goals. If you imagine for a second a world where epistemic rationality isn't that important, you don't really need a site like Less Wrong. There are nods to "instrumental rationality," but those are in the context of epistemic rationality getting you most of the way and being the base you work from; otherwise there's no reason to be on Less Wrong instead of a site dealing specifically with that sub-area.
Also, lots of "building relationships with powerful people" is zero-sum at best, since it resembles influence peddling more than gains from informal trade.
The stuff you want is called Jevity. It's a complete liquid diet used for feeding-tube patients (Ebert, after his cancer, being one of the most famous). It can be consumed orally, and you can buy it in bulk from Amazon. It's been designed by people who are experts in nutrition and has been used for years by patients as a sole food source.
Of course, Jevity only claims to keep you alive and healthy as your only food source, not to trim your fat, sharpen your brain, etc. But I'm fairly sure that has more to do with ethics, a basic knowledge of the subject, and an understanding of the necessity of double-blind studies for medical claims than with someone having discovered the secret to perfect health while forgetting iron and sulfur in their supplement.
The problem is that Objectivism was actually an Ayn Rand personality cult more than anything else, so you can't really get a coherent and complete philosophy out of it. Rothbard goes into quite a bit of detail about this in The Sociology of the Ayn Rand Cult.
http://www.lewrockwell.com/rothbard/rothbard23.html
Some highlights:
"The philosophical rationale for keeping Rand cultists in blissful ignorance was the Randian theory of "not giving your sanction to the Enemy." Reading the Enemy (which, with a few carefully selected exceptions, meant all non- or anti-Randians) meant "giving him your moral sanction," which was strictly forbidden as irrational. In a few selected cases, limited exceptions were made for leading cult members who could prove that... (read more)
You could try "adulterating" the candy with something non-edible, like colored beads. It would fix the volume concerns, be easily adjustable, and possibly add a bit of variable reinforcement.
OK, so all that makes sense and seems basically correct, but I don't see how you get from there to being able to map confidence for persons across a question the same way you can for questions across a person.
Adopting that terminology, I'm saying that a typical Less Wrong user likely has a similar understanding-the-question module. This module will be right most of the time and wrong some of the time, so they correctly apply the outside-view error correction to each of their estimates. Since the understanding-the-question module is similar for each person, though, the actual errors aren't evenly distributed across questions, so people will underestimate on "easy" questions and overestimate on "hard" ones, where easy and hard are determined afterwards by the percentage of people who got the answer correct.
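A toy simulation of that mechanism might look like the following (the numbers and names are made up, purely to illustrate how identical per-person corrections produce underconfidence on easy questions and overconfidence on hard ones):

```python
import random

random.seed(0)

N_PEOPLE = 1000
KNOWLEDGE = 0.9        # chance of answering correctly, given the question was parsed right
AVG_MISPARSE = 0.2     # everyone's outside-view estimate of how often their parsing fails

# Trickiness: per-question chance that a typical "understanding-the-question"
# module misreads it.  Similar across people, very different across questions.
tricky = {"easy": 0.05, "medium": 0.20, "hard": 0.60}

# Everyone applies the same outside-view correction, so stated confidence is
# the same for every question.
stated_confidence = KNOWLEDGE * (1 - AVG_MISPARSE)

for label, t in tricky.items():
    correct = 0
    for _ in range(N_PEOPLE):
        parsed_ok = random.random() > t
        answered_right = parsed_ok and random.random() < KNOWLEDGE
        correct += answered_right
    accuracy = correct / N_PEOPLE
    print(f"{label:6s}  stated confidence {stated_confidence:.2f}   actual accuracy {accuracy:.2f}")
```

In this sketch everyone states roughly 0.72 confidence on every question, but actual accuracy comes out near 0.85 on the "easy" question (underconfident) and near 0.35 on the "hard" one (overconfident), even though each individual's correction was reasonable.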