This is an idea that just occurred to me. We have a large community of people who think about scientific problems recreationally, many of whom are in no position to go and investigate them. Hopefully, though, some other community members are in a position to investigate them, or know people who are. The idea here is to let people propose relatively specific ideas for experiments, which can be upvoted if people think they are wise, and commented on and refined by others. Grouping these ideas together in an easily identifiable, organized way, where people can offer approval and suggestions, seems like it may actually help advance human knowledge, and with its high sanity waterline and (kind of) diverse readership, this community seems like an excellent place to implement the idea.
These should be relatively practical, with an eye toward giving some aspiring grad student or professor enough of an idea that they could go implement it. You should explain the general field (physics, AI, evolutionary psychology, economics, psychology, etc.) as well as the question the experiment is designed to investigate, in as much detail as you reasonably can.
If this proves popular, a new thread can be started whenever one of these reaches 500 comments, or quarterly, depending on the volume. I expect this to help people refine their understanding of various sciences, and if it ever gets turned into even a few good experiments, it will prove immensely worthwhile.
I think it's best to make these distinct from the general discussion thread because they have a very narrow purpose. I'll post an idea or two of my own to get things started. I'd also encourage people to post not only experiment ideas, but criticism and suggestions regarding this thread concept. I'd also suggest that people upvote or downvote this post if they think this is a good or bad idea, to better establish whether future implementations will be worthwhile.
Where specifically have I done that? (Is it the "applause light" part? Do you think it obviously false that the thesis serves as an applause light?)
Are you tapping out? This is frustrating as hell. Crocker's Rules, dammit - feel free to call me an idiot, but please point out where I'm being one!
Without outside help I can certainly go on doubting - holding off on believing what others seem to believe. But I want something more - I want to form positive knowledge. (As one fictional rationalist would have it, "My bottom line is not yet written. I will figure out how to test the magical strength of Muggleborns, and the magical strength of purebloods. If my tests tell me that Muggleborns are weaker, I will believe they are weaker. If my tests tell me that Muggleborns are stronger, I will believe they are stronger. Knowing this and other truths, I will gain some measure of power.")
Yeah, good catch. The 10x ratio is supposed to hold for workgroup-sized samples (10 to 20). The source population is less clearly defined. A 1983 quote from Mills refers to "programmers certified by their industrial position and pay", and we could go with that: anyone who gets full-time or better compensation for writing code and whose job description says "programmer" or a variation thereof.
We can add "how large is the programmer population" to our list of questions. A quick search turns up an estimate from Watts Humphrey of 3 million programmers in the US about ten years ago.
So let's assume those parameters hold - population size of 3M and sample size of 10. Do we now have a testable hypothesis?
What is the math for finding out what distribution of "productivity" in the overall population gives rise to a typical 10x best-to-worst ratio when you take samples of that size? Is that even a useful line of inquiry?
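One way to attack this question numerically, rather than analytically, is by simulation: pick a candidate distribution for individual productivity, draw workgroup-sized samples from a simulated population, and see what best-to-worst ratio typically comes out. The sketch below assumes a lognormal distribution (a common but by no means settled modeling choice for productivity); the function names and the `sigma` parameter are my own illustrative assumptions, not anything established in the thread.

```python
import random
import statistics

def median_best_worst_ratio(population, sample_size=10, trials=5000, seed=0):
    """Draw many workgroup-sized samples from `population` and return
    the median best-to-worst productivity ratio across those samples."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(trials):
        group = rng.sample(population, sample_size)
        ratios.append(max(group) / min(group))
    return statistics.median(ratios)

# Hypothetical population of "productivity" scores. The population size
# matters little once it is much larger than the sample; 100k stands in
# for the estimated 3M US programmers. sigma=0.8 is an assumed free
# parameter to be tuned until the median workgroup ratio lands near 10x.
rng = random.Random(42)
population = [rng.lognormvariate(0, 0.8) for _ in range(100_000)]

print(median_best_worst_ratio(population, sample_size=10))
```

Running this for a grid of `sigma` values would answer the question empirically: which spread parameters are consistent with a typical 10x ratio in samples of 10. It would not, of course, tell us the distribution's *shape* is right, only whether a given shape-plus-spread is compatible with the observation.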
The misinterpretation that stood out to me was:
I'm not sure whether you meant "design" to refer to e.g. internal API or overall program behavior, but they're both relevant in the same way:
The important metric of "rate of output" is how fast a prog...