Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: jschulter 26 January 2014 12:28:02PM 0 points [-]

Wait, so is this on Monday the 3rd or Tuesday the 4th?

Comment author: wallowinmaya 21 April 2012 09:42:55AM 8 points [-]

For example, he must have thought that writing HPMoR was a good use of time, and therefore must have (correctly) predicted that it would be quite popular were he to write it.

Isn't the simpler explanation that he just enjoys writing fiction?

Comment author: jschulter 23 April 2012 04:46:34AM 0 points [-]

The enjoyment of the activity factors into whether it is a good use of time.

Comment author: Eliezer_Yudkowsky 20 April 2012 01:46:34AM 13 points [-]

I'll update by putting more trust in mainstream modern physics - my probability that something like string theory is true would go way up after the detection of a Higgs boson, as would my current moderate credence in dark matter and dark energy. It's not clear how much I should generalize beyond this to other academic fields, but I probably ought to generalize at least a little.

Comment author: jschulter 23 April 2012 04:19:36AM *  4 points [-]

my probability that something like string theory is true would go way up after the detection of a Higgs boson

I'm not sure that this should be the case: the Higgs is a Standard Model prediction, and string theory is an attempt to extend that model. The accuracy of the former says little about whether the latter is sensible or accurate. For a concrete example, this is like letting the accuracy of Newtonian mechanics (via, say, a confirmed prediction of a planetary body from anomalous orbital data) influence your confidence in General Relativity before the latter had predicted Mercury's precession or the Michelson-Morley experiment had been done.

EDIT: Unless of course you were initially under the impression that there were flaws in the basic theory which would make the extension fall apart, which I just realized may have been the case for you regarding the Standard Model.

Comment author: taryneast 25 March 2012 08:49:33AM *  0 points [-]

I agree, but... purposely self-identifying with a reference class that supposedly has the skills you are trying to acquire does seem to make you more likely to actually develop those skills. E.g., "I'm a hard-working person and hard-working people wouldn't just give up" is a way of convincing (/tricking) yourself into actually being a hard-working person.

EDIT: that being said - it certainly wouldn't be consequentialist. :)

Comment author: jschulter 04 April 2012 05:02:19AM 1 point [-]

But it is near-consequentialist: "I'm a hard-working person and hard-working people wouldn't just give up" --> "the act of giving up will make me feel less like a hard-working person and therefore make me less likely to work hard in the future"

In response to comment by Zaine on SotW: Be Specific
Comment author: Eliezer_Yudkowsky 03 April 2012 08:03:31AM 1 point [-]

Hm. I especially note the concept of handing out some sort of non-monetary gift at the end of the session to someone. I wonder if that would be productive or counterproductive...

Comment author: jschulter 04 April 2012 04:26:45AM 0 points [-]

That particular element seems like it would incentivize campers to spend the period hyper-aware of their own and others' specificity, which seems counterproductive to me. The goal is an increase in the specificity of statements made casually, which could be entirely unrelated. Extending the period to, say, a week might work to prevent this: at that point it would be a long-term incentive rather than a prize.

In response to SotW: Be Specific
Comment author: Vaniver 03 April 2012 05:37:51AM 3 points [-]

Continuing with the "adapt a classic" suggestions:

Surely there's some way to adapt charades to this. You give someone a complicated concept, and have them try to communicate it in as few examples as possible. We have similar problems as with the twenty-questions suggestion, though: a lot of specificity depends on deep knowledge of the subject matter. If you get a concept that you or the guessers have never heard of, then you're dead in the water, and that'll be frustrating.

The skill goals appear to be mostly "articulate the knowledge you have" and "model your interlocutor's knowledge." If I get a card that says "a startup that tracks viewing statistics and can combine them with parameters from the site to get detailed knowledge of how different classes of customers interact with the site", explaining that concept is going to be odd, and the communication failure between the Entrepreneur and Graham may have been that E didn't realize that G didn't know the startup was about combining viewing statistics with user data. If the card has only one meaningful datapoint, things are too easy: the 'knowledge' and 'model' parts are done for you, and you just need to articulate. But if the card has lots of meaningful datapoints, things are too hard: now you need to find the needle in a haystack, probably in a field that's not relevant to you.

That suggests an alternative approach: give two people two different cards, with related but distinct concepts on them. Now each has to figure out what the other person's concept is in as few examples as possible. Both players state whether each example fits their own rule or not. If you score, consider counting the number of examples the other person gives toward your score, so that you have an incentive to give examples that convey as much information about your concept as possible while also extracting as much information about theirs.

For example, I might have the card that reads "a blue polygonal shape with fewer than six sides" while you might have the card that reads "a red polygonal shape with at least four sides."

I begin by saying "a blue pentagon fits my rule." You respond with "A blue pentagon does not fit my rule," and then follow up with "a red square fits my rule." I respond with "a red square does not fit my rule," then follow with "a red hexagon does not fit my rule." You respond with "A red hexagon does fit my rule," then follow with "A red triangle does not fit my rule." I respond that a red triangle does not fit my rule, and the game continues.

Once I think I have your rule, I write it down and stop providing examples. You can continue to provide as many examples as you like, until you also write down a rule. We then reveal rules and, if scoring, get points for guessing correctly, possibly with points taken away for every example provided.

This looks like it'll work well with rules based on geometric concepts and similar mathematical objects, so long as everyone is familiar with them. It should be extendable to fuzzier concepts as well (adding a "maybe fits my rule" answer or the like will help).
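The exchange above can be sketched in code. This is a minimal illustration, not a spec: shapes are assumed to be (color, sides) pairs, and the first rule is taken as "fewer than six sides" so that the blue pentagon (five sides) genuinely fits it.

```python
# A minimal sketch of the two-card rule-guessing game described above.
# Shapes are encoded as (color, sides) pairs; each hidden rule is a predicate.

def my_rule(shape):
    """'A blue polygonal shape with fewer than six sides.'"""
    color, sides = shape
    return color == "blue" and sides < 6

def your_rule(shape):
    """'A red polygonal shape with at least four sides.'"""
    color, sides = shape
    return color == "red" and sides >= 4

def exchange(shape, rules):
    """Both players announce whether the example fits their own rule."""
    return {name: rule(shape) for name, rule in rules.items()}

rules = {"mine": my_rule, "yours": your_rule}

print(exchange(("blue", 5), rules))  # blue pentagon: fits mine only
print(exchange(("red", 4), rules))   # red square: fits yours only
print(exchange(("red", 6), rules))   # red hexagon: fits yours only
print(exchange(("red", 3), rules))   # red triangle: fits neither
```

Each call replays one round of the worked example: a player names a shape, and both players report whether it satisfies their hidden rule, narrowing down the hypothesis space.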

In response to comment by Vaniver on SotW: Be Specific
Comment author: jschulter 04 April 2012 04:03:02AM 1 point [-]

This activity seems like it would tie in well with a unit on hypothesis and experiment generation as well; it reminds me of the 2-4-6 test. Perhaps have two different scoring rules: when trying to teach specificity, give points for getting your partner to guess; when teaching how to find the right hypotheses and tests, give points for guessing correctly.
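For readers unfamiliar with it, the 2-4-6 task (Wason's rule-discovery experiment) can be sketched as follows; the hidden rule and the sample guesses are the standard textbook version of the task, not anything from this thread:

```python
# A minimal sketch of Wason's 2-4-6 task. Subjects must discover the
# experimenter's hidden rule by proposing triples and getting yes/no feedback.

def hidden_rule(triple):
    """The actual rule: any strictly ascending triple of numbers."""
    a, b, c = triple
    return a < b < c

# A subject who only tests confirming instances of their own hypothesis
# ("even numbers ascending by two") never sees a disconfirmation:
confirming_guesses = [(2, 4, 6), (4, 6, 8), (10, 12, 14)]
print(all(hidden_rule(t) for t in confirming_guesses))  # True: no surprises

# Testing would-be *violations* of the hypothesis is what's informative:
print(hidden_rule((1, 2, 3)))  # True: refutes "even numbers only"
print(hidden_rule((6, 4, 2)))  # False: supports "must be ascending"
```

The parallel to the card game is that both reward choosing examples for their information value rather than for confirming what you already believe.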

Comment author: jschulter 30 July 2011 11:33:02PM 0 points [-]

I unfortunately won't be able to make it up for this one, despite my strong interest in the subject. Would someone be willing to host me via Skype, though?

Comment author: jschulter 21 July 2011 07:43:07PM 1 point [-]

This sounds very interesting. Do you have anybody who's particularly experienced with TDT or other decision theories committed to come? And which business will it be at, as that appears to be a mall of some sort?

Also, is anyone from down here in Tucson looking for a ride up, or already has one with an extra seat?

Comment author: pjeby 18 July 2011 11:06:00PM 4 points [-]

Perhaps I wasn't clear; I wasn't asking for your conclusions (which were already stated) or your hypothesized mechanisms for those conclusions, but rather, I was asking for evidence and definitions. Would you be willing to share the evidence that led you to formulate the above hypotheses?

I am particularly concerned because some of what you have said sounds like the sort of thing that one might anticipate about the process, but which is not actually the case at all. For example, I have seen no evidence of a reinforcement process such as you describe. (Quite the opposite in fact.) So, if you have actually measured or demonstrated such a reinforcement effect, I would be most curious to know how.

There are other things you're saying that also appear to me to be contrary to actual fact (as opposed to one's intuitive expectations that are easily confirmation-biased into appearing real), so I would really like to find out what specific evidence you have and what contrary explanations you've tested, because I don't wish the efficacy of the technique to be overstated. (Thereby presenting others with something to criticize, never mind that I wasn't the one who made the overstated claim(s).)

Thanks.

Comment author: jschulter 21 July 2011 06:16:26PM 1 point [-]

Okay, thanks for clarifying the question. I've essentially already stated all the "evidence" I'm using for the claim: it's almost entirely anecdotal, and there are certainly no actual studies that I've used to support this particular bullet point. So there is a good chance I've stated things in a way which seems overconfident, and I may in fact be overconfident regarding this particular claim, especially considering that I've not tested alternate explanations for the efficacy I've had. I'd be more than willing to have a detailed discussion of both of our experiences/intuitions with the method, but I feel as though this probably isn't the place (I've already messaged you), though I'd be happy to update the wording of the article afterwards if necessary.

Comment author: pjeby 15 July 2011 08:53:26PM 1 point [-]

Establishing "pull motivation" works best with strong visualization, and is reinforced upon experiencing the completion of the task.

To be clear, are you making a general statement here, or describing experimental results from the conference? And if this is from experimental results, could you elaborate on the specific evidence that led to these conclusions?

That is, what specifically do you mean by "strong visualization" and "reinforced", not to mention "experiencing the completion"? Thanks!

Comment author: jschulter 18 July 2011 04:20:43PM 0 points [-]

The statement about strong visualization (essentially simulating experiences as closely as possible) is taken from the video and from personal (and anecdotal) experience with the method. The reinforcement from actual completion refers to how, once you've completed the task you were motivating yourself to do, you should get the feeling of reward you were imagining in order to motivate yourself. Actually experiencing the reward makes it easier to simulate if you need to become motivated again later. Additionally, the mental connection you'll make between completing the task and the reward makes it less likely that you'll need to repeat the exercise for that task, unless it has an extremely high activation cost: the next time you go to do the task, one of the first things that comes to mind will likely be the reward you felt the last time(s) you performed it.
