Comment author: Eliezer_Yudkowsky 29 March 2012 10:39:13PM 13 points [-]

We've actually noticed in our weekly sessions that our nice official-looking yes-we're-gathering-data rate-from-1-to-5 feedback forms don't seem to correlate with how much people visibly enjoy the session - mostly the ratings seem pretty constant. (We're still collecting useful data from the verbal comments.) If anyone knows a standard fix for this then PLEASE LET US KNOW.

Comment author: AShepard 31 March 2012 09:11:21PM 2 points [-]

Another thing you could do is measure in a more granular way - ask for an NPS rating for particular sessions. You could do this after each session or at the end of each day. This would help you narrow down which sessions are and are not working, and why.

You do have to be careful not to overburden people by asking them for too much detailed feedback too frequently, otherwise they'll get survey fatigue and the quality of responses will markedly decline. Hence, I would resist the temptation to ask more than 1-2 questions about any particular session. If there are any that are particularly well/poorly received, you can follow up on those later.

Comment author: AShepard 30 March 2012 02:00:05AM *  14 points [-]

I'd suggest measuring the Net Promoter Score (NPS) (link). It's used in business as a better measure of customer satisfaction than more traditional metrics; see here for evidence (sorry for the non-free link). It consists of two questions:

  1. "On a scale of 0-10, how likely would you be to recommend the minicamp to a friend or colleague?"
  2. "What is the most important reason for your recommendation?"

To interpret, split the responses into 3 groups:

  • 9-10: Promoter - people who will be active advocates.
  • 7-8: Passive - people who are generally positive, but aren't going to do anything about it.
  • 0-6: Detractor - people who are lukewarm (which will turn others off) or who will actively advocate against you.

NPS = [% who are Promoters] - [% who are Detractors]. Good vs. bad NPS varies by context, but +20-30% is generally very good. The follow-up question is a good way to identify key strengths and high-priority areas to improve.
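The scoring rule above is simple enough to sketch in a few lines. This is a minimal illustration, not part of the original comment; the function name and example ratings are made up:

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 ratings.

    Promoters: 9-10, Passives: 7-8, Detractors: 0-6.
    Returns NPS as a percentage in [-100, 100].
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical survey: 5 promoters, 3 passives, 2 detractors out of 10
print(net_promoter_score([9, 10, 9, 9, 10, 7, 8, 7, 3, 6]))  # -> 30.0 (50% - 20%)
```

Note that passives (7-8) drop out of the numerator entirely but still count in the denominator, which is what makes NPS harsher than a simple average rating.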

Experience with Lumosity?

5 AShepard 18 March 2012 10:51PM

I just saw a commercial for Lumosity, which is a mental-skills training website. It seems like something that someone on LessWrong would have tried, but some googling of the site turns up only some passing mentions. Has anyone actually signed up and used it? Have you had results, and are they worth the subscription cost? (~$5-15/month, depending on subscription length).

Comment author: Ezekiel 16 February 2012 01:31:59AM 5 points [-]

I like the idea of capping the length of an admissible chain of hearsay, but whenever I hear about a rule like that, I always think of the risk that you'll miss an obviously true conclusion just because the evidence wasn't admissible. Of course, that's a silly argument, since we have lots of such limits and they're not something I disagree with.

The obvious solution to this entire debate is to teach people a basic understanding of practical probability, but I guess you work with what you've got...

Incidentally, is the title a deliberate play on "Lies, damn lies, and statistics"? I couldn't work it out.

Comment author: AShepard 17 February 2012 12:22:08AM 1 point [-]

I think it's just the standard "a thing, another thing, and yet one more additional thing" construction - a common species, of which "lies, damned lies, and statistics" is another example.

Comment author: AShepard 11 January 2012 05:52:53AM 11 points [-]

This is an odd post. It starts out with a suggestion for how to structure group brainstorming, then veers into an argument for why cannabis use enhances creativity. I think you would be better served splitting those arguments into separate posts.

Comment author: AShepard 30 December 2011 04:07:12PM *  6 points [-]

Off-topic: in a number of places where you've used italics, the spaces separating the italicized words from the rest of the text seem to have been lost (e.g. "helpedanyone at all.") Might just be me though?

Comment author: AShepard 31 December 2011 08:47:08PM 1 point [-]

Addendum: This is apparently a known issue with the LW website.

In response to Uncertainty
Comment author: AShepard 01 December 2011 09:03:09PM 3 points [-]

I'm having difficulties with your terminology. You've given special meanings to "distinction", "prospect", and "deal" that IMO don't bear any obvious relationship to their common usage ("event" makes more sense). Hence, I don't find those terms helpful in evoking the intended concepts. Seeing "A deal is a distinction over prospects" is roughly as useful to me as seeing "A flim is a fnord over grungas". In both cases, I have to keep a cheat-sheet handy to understand what you mean, since I can't rely on an association between word and concept that I've already internalized. Maybe this is accepted terminology that I'm not aware of?

Comment author: AShepard 22 November 2011 06:04:06AM 1 point [-]

It looks like a couple of footnotes got cut off.

Comment author: KPier 09 October 2011 05:31:04PM 9 points [-]

Since I first read about calibration on LessWrong, I've been trying this with tests and debate tournaments.

With a sample size of about 50: 95% of my estimated test grades are within 3% of my actual test grades.

On debate, however, if I am 60% confident I won a round, I won it 90% of the time; if I am 80% confident I won, I win 100% of the time. Other people seem to be much better than me at assessing the probability I won a debate round (if they observed it).

It seems that I am really good at some forms of estimating, and really bad in other situations, which means that overall switching from Inside View to Outside View wouldn't necessarily be an improvement, but that in certain situations it would help me enormously. Has anyone else encountered this?
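The check described above - grouping predictions by stated confidence and comparing against the observed win rate - can be sketched in a few lines. The records below are hypothetical, not KPier's actual data:

```python
from collections import defaultdict

def calibration_table(predictions):
    """Given (stated_confidence, outcome) pairs, return the observed
    success frequency for each stated confidence level."""
    buckets = defaultdict(list)
    for confidence, won in predictions:
        buckets[confidence].append(won)
    return {
        conf: sum(outcomes) / len(outcomes)  # True counts as 1
        for conf, outcomes in sorted(buckets.items())
    }

# Hypothetical debate-round records: (stated confidence, actually won)
records = [(0.6, True), (0.6, True), (0.6, False), (0.8, True), (0.8, True)]
print(calibration_table(records))  # observed win rate per confidence level
```

A well-calibrated predictor's table would sit close to the diagonal (0.6 confidence implies roughly 0.6 observed frequency); the pattern KPier describes, where 0.6 confidence corresponds to a 0.9 win rate, shows up as entries well above it.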

Comment author: AShepard 09 October 2011 09:54:52PM 4 points [-]

Interesting that your debate predictions tend too low. In my debate experience, nearly everyone consistently overestimated their likelihood of winning a given round. This bias tended to increase the better the debaters perceived themselves to be.
