Comment author: Vaniver 04 January 2015 10:12:09PM 6 points [-]

I'm curious how they handle model error (the error when your model is totally wrong).

They punish it. That is, your stated credence should include both your 'inside view' error of "How confident is my mythology module in this answer?" and your 'outside view' error of "How confident am I in my mythology module?"

One of the primary benefits of playing a Credence Game like this one is that it gives you a sense of those outside-view confidences. I am, for example, able to tell which of two American postmasters general came first at the 60% level, simply by using the heuristic of "which of these names sounds more old-timey?", but am at the 50% level (i.e. pure chance) when determining which sports team won a game by comparing their names.
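One way to get that sense is just bookkeeping: log which heuristic you used on each round and compute its empirical hit rate afterwards. A minimal sketch, with hypothetical round data chosen to match the 60%/50% figures above:

```python
from collections import defaultdict

# Hypothetical log of credence-game rounds: (heuristic used, answered correctly?)
rounds = [
    ("old-timey name", True), ("old-timey name", True), ("old-timey name", False),
    ("old-timey name", True), ("old-timey name", False),
    ("team name", True), ("team name", False), ("team name", False), ("team name", True),
]

tally = defaultdict(lambda: [0, 0])  # heuristic -> [hits, total]
for heuristic, correct in rounds:
    tally[heuristic][0] += correct  # bool counts as 0/1
    tally[heuristic][1] += 1

for heuristic, (hits, total) in tally.items():
    print(f"{heuristic}: {hits}/{total} = {hits / total:.0%} hit rate")
```

The per-heuristic hit rate is exactly the outside-view number you should feed into your stated credence the next time that heuristic is all you have.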

But it seems hard to guess beforehand that the question you thought you were answering wasn't the question that you were being asked!

This is the sort of thing you learn by answering a bunch of questions from the same person, or by having a lawyer-sense of "how many qualifications would I need to add or remove to this sentence to be sure?".

Comment author: whateverfor 04 January 2015 10:49:59PM 0 points [-]

OK, so all that makes sense and seems basically correct, but I don't see how you get from there to being able to map confidence for persons across a question the same way you can for questions across a person.

Adopting that terminology, I'm saying that a typical Less Wrong user likely has a similar understanding-the-question module. This module will be right most of the time and wrong some of the time, so they correctly apply the outside-view error afterwards to each of their estimates. Since the understanding-the-question module is similar for each person, though, the actual errors aren't evenly distributed across questions: they will underestimate on "easy" questions and overestimate on "hard" ones, if easy and hard are determined afterwards by the percentage of people who get the answer correct.

In response to 2014 Survey Results
Comment author: whateverfor 04 January 2015 09:48:56PM 7 points [-]

Do you have some links to calibration training? I'm curious how they handle model error (the error when your model is totally wrong).

For question 10, for example, I'm guessing that many more people would have gotten the correct answer if the question had been something like "Name the best-selling PC game, where best-selling counts only units and not gross, box purchases and not subscriptions, and also does not count games packaged with other software" instead of "What is the best-selling computer game of all time?". I'm guessing most people answered WoW, Solitaire/Minesweeper, or Tetris, each of which would be the correct answer if you removed one of those constraints.

But it seems hard to guess beforehand that the question you thought you were answering wasn't the question you were being asked! So you'd end up distributing that model error relatively evenly over all the questions, and you'd end up underconfident on the questions where the model was straightforward and correct and overconfident when the question wasn't as simple as it appeared.

Comment author: whateverfor 20 August 2013 12:54:15AM 4 points [-]

I've always believed having an issue with utility monsters is either a lack of imagination or a bad definition of utility (if your definition of utility is "happiness" then a utility monster seems grotesque, but that's because your definition of utility is narrow and lousy).

We don't even need to stretch to create a utility monster. Imagine a spacecraft that's been damaged in deep space. There are four survivors: three are badly wounded and one is relatively unharmed. There's enough air for four humans to survive one day, or for one human to survive four days. The closest rescue ship is three days away. After assessing the situation and verifying the air supply, the three wounded crewmembers sacrifice themselves so the one can be rescued.

To quote Nozick from wikipedia: "Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility." That is exactly what happens on the spaceship, but most people here would find it pretty reasonable. A real utility monster would look more like that than some super-happy alien.

Comment author: RomeoStevens 13 July 2013 04:43:19AM 26 points [-]

It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is to build relationships with people more powerful than oneself. To the extent that this easily trumps the vast majority of other failings (epistemic-rationality-wise) discussed on LW. So why aren't we discussing how to do better at this regularly? A couple of explanations immediately leap to mind:

  1. Not a core competency of the sort of people LW attracts.

  2. Rewards not as immediate as the sort of epiphany porn that some of LW generates.

  3. Ugh fields. Especially in regard to things that are considered manipulative when reasoned about explicitly, even though we all do them all the time anyway.

Comment author: whateverfor 13 July 2013 08:24:01PM 2 points [-]

Realistically, Less Wrong is most concerned with epistemic rationality: the idea that having an accurate map of the territory is very important to actually reaching your instrumental goals. If you imagine for a second a world where epistemic rationality isn't that important, you don't really need a site like Less Wrong. There are nods to "instrumental rationality", but those are in the context of epistemic rationality getting you most of the way and being the base you work from; otherwise there's no reason to be on Less Wrong instead of a specific site dealing with the sub-area.

Also, lots of "building relationships with powerful people" is zero sum at best, since it resembles influence peddling more than gains from informal trade.

Comment author: Eliezer_Yudkowsky 21 May 2013 08:39:56PM 13 points [-]

Build a better one yourself? I'm tired of eating.

Comment author: whateverfor 22 May 2013 03:13:24AM 19 points [-]

The stuff you want is called Jevity. It's a complete liquid diet that's used for feeding tube patients (Ebert after cancer being one of the most famous). It can be consumed orally, and you can buy it in bulk from Amazon. It's been designed by people who are experts in nutrition and has been used for years by patients as a sole food source.

Of course, Jevity only claims to keep you alive and healthy as your only food source, not to trim your fat, sharpen your brain, etc. But I'm fairly sure that has more to do with ethics, a basic knowledge of the subject, and an understanding of the necessity of double blind studies for medical claims than someone finding out the secrets to perfect health who forgot iron and sulfur in their supplement.

Comment author: Zaine 14 May 2013 09:16:11PM *  3 points [-]

I don't understand why objectivists seem to be held in low regard here. My exposure is limited to browsing a forum of objectivists[1] - they were indistinguishable from those here, though much more focussed on personal instrumental rationality in their topics.

I know they are formally a closed loop belief system limited by the writings of Ayn Rand (which I've not read), and have heard this belief system is flawed in some way. That sounds like a straw man.

I'm only interested in the steel man. What is the difference between rationality and objectivism?
The only one that comes to mind: Objectivism implies there is only one true way of some things, while rationality allows for individual variety in thinking processes (resulting from different information, experiences, terminal values, etc.)

However, if for one person their most desired thing is happiness - which can only be achieved through quasi-altruistic deeds - then I cannot see it as anything but objective and rational to carry out those deeds. Objectivism applied to the fulfilment of one's desires appears indistinguishable from rationality to me. Where am I wrong on this - or am I playing semantics?

[1] Knowing what to look for, I discovered the site again. They indeed are very skilled at applying instrumental rationality to various areas of their lives (exempli gratia what type of plastic surgery yields the most natural results?) - however in the Philosophy and Ethics sections, Ayn Rand philosophy abounds. They are describing the intent of an Important Figure (scary), without doing so for the purposes of then breaking it down; those that try the latter attack straw-man versions and are refuted.

Comment author: whateverfor 15 May 2013 01:09:34AM 6 points [-]

The problem is Objectivism was actually an Ayn Rand personality cult more than anything else, so you can't really get a coherent and complete philosophy out of it. Rothbard goes into quite a bit of detail about it in The Sociology of the Ayn Rand Cult.

http://www.lewrockwell.com/rothbard/rothbard23.html

Some highlights:

"The philosophical rationale for keeping Rand cultists in blissful ignorance was the Randian theory of "not giving your sanction to the Enemy." Reading the Enemy (which, with a few carefully selected exceptions, meant all non- or anti-Randians) meant "giving him your moral sanction," which was strictly forbidden as irrational. In a few selected cases, limited exceptions were made for leading cult members who could prove that they had to read certain Enemy works in order to refute them."

"The psychological hold that the cult held on the members may be illustrated by the case of one girl, a certified top Randian, who experienced the misfortune of falling in love with an unworthy non-Randian. The leadership told the girl that if she persisted in her desire to marry the man, she would be instantly excommunicated. She did so nevertheless, and was promptly expelled. And yet, a year or so later, she told a friend that the Randians had been right, that she had indeed sinned and that they should have expelled her as unworthy of being a rational Randian."

This is not to say Rand didn't have any valid insights, but since Rand really believed that the things she said were by definition rational because she was rational (and, as a bonus, the only possible rational things)... there's a lot of junk and cruft in there, so there's no good reason to take on the whole label.

Comment author: Intrism 15 April 2013 12:00:30AM *  1 point [-]

Could you try using smaller candy?

The way the feeder is built, that wouldn't really help. It dispenses a constant volume, not a set number of candies. I could try to reduce the dispensed volume further, but I think other techniques would be best to try first.

if the reward is in the system, I tend not to wait very long before using it.

This seems OK to me.

It's not a problem except insofar as it interferes with some of the rules.

Or perhaps giving 1 candy per N points? Or giving a candy with probability 1/N?

These are the two big options I'm considering for next time. I'm leaning towards the "1 candy per N points" model, because that allows me to "gamify" the system with a big XP bar.
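The two options can be sketched side by side. A minimal sketch, with hypothetical function names and numbers; the probabilistic version is what the behavioral literature calls a variable-ratio schedule:

```python
import random

def fixed_ratio(points, n):
    """"1 candy per N points": candies earned once the running total crosses
    each multiple of N (an XP-bar style schedule)."""
    return points // n

def variable_ratio(events, n, rng=random):
    """"Candy with probability 1/N": each scoring event independently pays out
    with probability 1/N."""
    return sum(1 for _ in range(events) if rng.random() < 1 / n)

# Both schedules pay out ~events/n candies on average; the probabilistic one
# trades the predictable XP bar for the unpredictability that variable-ratio
# reinforcement is known for.
print(fixed_ratio(25, 10))                              # 2 candies for 25 points
print(variable_ratio(1000, 10, random.Random(1)))       # roughly 100 over 1000 events
```

Which one works better presumably depends on whether the visible progress of the XP bar or the slot-machine unpredictability is the stronger motivator for the person being trained.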

Comment author: whateverfor 15 April 2013 06:03:12AM 2 points [-]

You could try "adulterating" the candy with something non-edible, like colored beads. It would fix the volume concerns, be easily adjustable, and possibly add a bit of variable reinforcement.