All of whateverfor's Comments + Replies

OK, so all that makes sense and seems basically correct, but I don't see how you get from there to being able to map confidence for persons across a question the same way you can for questions across a person.

Adopting that terminology, I'm saying that a typical Less Wrong user likely has a similar understanding-the-question module. This module will be right most of the time and wrong some of the time, so they correctly apply the outside-view error afterward to each of their estimates. Since the understanding-the-question module is similar for each p... (read more)

0Vaniver
That seems reasonable to me, yes, as an easy way for a question to be 'hard' is if most answerers interpret it differently from the questioner.

Do you have some links to calibration training? I'm curious how they handle model error (the error when your model is totally wrong).

For question 10, for example, I'm guessing that many more people would have gotten the correct answer if the question had been something like "Name the best-selling PC game, where best-selling counts only units sold (not gross revenue), only boxed purchases (not subscriptions), and excludes games packaged with other software" instead of "What is the best-selling computer game of all time?". I'm guessing mos... (read more)

I'm curious how they handle model error (the error when your model is totally wrong).

They punish it. That is, your stated credence should include both your 'inside view' error of "How confident is my mythology module in this answer?" and your 'outside view' error of "How confident am I in my mythology module?"
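One way to sketch how those two errors combine is as a simple mixture. This is a toy model with made-up numbers, not anything the Credence Game itself computes; the function name and the fallback term are my own illustration:

```python
# Toy sketch: folding outside-view model error into a stated credence.
# All numbers are hypothetical, for illustration only.

def combined_credence(p_inside, p_module, p_fallback):
    """Credence to state for an answer, accounting for model error.

    p_inside:   confidence the (e.g. mythology) module assigns to the answer
    p_module:   confidence that the module itself is reliable on this question
    p_fallback: chance the answer is right anyway if the module is wrong
                (e.g. 0.5 for a binary question answered at chance)
    """
    return p_module * p_inside + (1 - p_module) * p_fallback

# Module says 95%, but you're only 80% sure the module applies here;
# if it doesn't, you're at chance on a binary question.
print(combined_credence(0.95, 0.80, 0.50))  # 0.86
```

The point of the game's scoring is that stating the raw 95% gets punished whenever the module fails, while the blended 86% does not.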

One of the primary benefits of playing a Credence Game like this one is it gives you a sense of those outside view confidences. I am, for example, able to tell which of two American postmasters general came first at the 60% leve... (read more)

I've always believed that having an issue with utility monsters reflects either a lack of imagination or a bad definition of utility (if your definition of utility is "happiness", then a utility monster seems grotesque, but that's because your definition of utility is narrow and lousy).

We don't even need to stretch to create a utility monster. Imagine a spacecraft that's been damaged in deep space. There are four survivors: three are badly wounded and one is relatively unharmed. There's enough air for four humans to survive one day, or for one human to surv... (read more)

8Lumifer
Not exactly like that... :-) http://en.wikipedia.org/wiki/R_v_Dudley_and_Stephens

Realistically, Less Wrong is most concerned with epistemic rationality: the idea that having an accurate map of the territory is very important to actually reaching your instrumental goals. If you imagine for a second a world where epistemic rationality isn't that important, you don't really need a site like Less Wrong. There are nods to "instrumental rationality", but those are in the context of epistemic rationality getting you most of the way and being the base you work from; otherwise there's no reason to be on Less Wrong instead of a specif... (read more)

The stuff you want is called Jevity. It's a complete liquid diet used for feeding-tube patients (Roger Ebert, after his cancer, being one of the most famous). It can be consumed orally, and you can buy it in bulk from Amazon. It's been designed by people who are experts in nutrition and has been used for years by patients as a sole food source.

Of course, Jevity only claims to keep you alive and healthy as your only food source, not to trim your fat, sharpen your brain, etc. But I'm fairly sure that has more to do with ethics, a basic knowledge of the subject, a... (read more)

0RomeoStevens
Looks like it is 1c/cal. $20/day is not reasonable.
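The arithmetic behind that figure, assuming a standard 2000 kcal/day diet (Jevity's actual per-calorie price varies by retailer):

```python
# Sanity check on the cost estimate. The 2000 kcal/day figure is an
# assumed typical intake, not from the original comment.
price_per_cal = 0.01   # dollars per kcal (the quoted "1c/cal")
cals_per_day = 2000    # assumed daily intake

daily_cost = price_per_cal * cals_per_day
print(daily_cost)  # 20.0 dollars per day
```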

The problem is that Objectivism was actually an Ayn Rand personality cult more than anything else, so you can't really extract a coherent and complete philosophy from it. Rothbard goes into quite a bit of detail about this in The Sociology of the Ayn Rand Cult.

http://www.lewrockwell.com/rothbard/rothbard23.html

Some highlights:

"The philosophical rationale for keeping Rand cultists in blissful ignorance was the Randian theory of "not giving your sanction to the Enemy." Reading the Enemy (which, with a few carefully selected exceptions, meant all non- o... (read more)

You could try "adulterating" the candy with something non-edible, like colored beads. It would fix the volume concerns, be easily adjustable, and possibly add a bit of variable reinforcement.