Okay, so I like everyone else's comments, but they feel complicated compared with what I came up with:

  1. Harry convinces himself of #2 enough to say it in Parseltongue.

  2. Harry says "I think I understand the prophecy you're trying to avoid, and I believe killing me makes it happen. I would say more, but you'd probably use it to kill me" in Parseltongue.

  3. Harry stays silent.

The karma has spoken. I've registered proveitforreal.com. Thank you!

I think a trademarked "proved" image will do nicely for use on labels :)

"Did you kill yourself at any point during the last 24 hours?" is not likely to produce anything useful at all.

I see. Right now the system doesn't have any defined questions. I believe that suitable questions will be found, so I'm focusing on the areas where I have a solid background.

If a product is unsafe in a literal way, shipping it to consumers (or offering it for sale) is of course illegal. However, when considering a sous vide cooker in the past, I have always worried about the dangers of potentially eating undercooked food (e.g. diarrhea, nausea, and light-headedness), which is how I took your meaning previously: "Product is safe for use, but accidental use might lead to undesirable outcomes". As I mentioned in our discussion here, this project is not intended to be a replacement for the FDA.

shipping a product to random people and asking them "Is it useful?" ... is not likely to produce anything useful

I agree that "is it useful" is not a particularly useful question to ask, but I don't see any harm in supporting it. If you are looking for a better question, "80% of users used the product twice a week or more three months after receiving it" sounds like information that would personally help me make a buying decision. (Have you used the product today?)

So perhaps frequency of use might be a better question? I wasn't haggling over what questions to ask because it was your example.
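A frequency-of-use metric like the one above is straightforward to compute from usage logs. Here's a minimal sketch in Python; the data shapes (`usage_log`, `received`) and the function name are hypothetical, not part of any existing codebase:

```python
from datetime import date, timedelta

def retention_metric(usage_log, received, week_of_interest=12, threshold=2):
    """Fraction of users who used the product at least `threshold` times
    during the given week after receiving it (week 12 ~= three months).

    usage_log: {user_id: [date of each use, ...]}  (hypothetical shape)
    received:  {user_id: date the user received the product}
    """
    qualifying = 0
    for user, start in received.items():
        week_start = start + timedelta(weeks=week_of_interest)
        week_end = week_start + timedelta(days=7)
        uses = sum(1 for d in usage_log.get(user, [])
                   if week_start <= d < week_end)
        if uses >= threshold:
            qualifying += 1
    return qualifying / len(received) if received else 0.0
```

The answer to "80% of users used the product twice a week or more three months after receiving it" would then just be `retention_metric(...) >= 0.8`.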

never mind a proper scientific study

I think rigor in data collection and data processing is what makes something scientific. For example, you could do a rigorous study on "do you think the word turtle is funny?".

The link I gave to the data collection webapp describes the data collection in more depth, which I believe is what you are asking about between 6 and 7.

From that url:

Core function:

  • Every day an SMS/email is sent to participants with a securely generated one time URL.
  • The participant visits this URL and is greeted with a list of questions to answer.

Potential changes to this story:

  • If the URL is not used within 16 hours, it expires forever.
  • If a participant does not enter data for more than 3 days, they are automatically removed from the study.
  • If a participant feels that they need to be removed from the study, they may do so at any time. They will be prompted to provide details on their reasons for doing so. These reasons will be communicated to the study organizer.
  • The study organizer may halt the study for logistical or ethical reasons at any time.

I described an overview in a different thread, but that was before a lot of discussion happened.

I'll use this as an opportunity to update the script based on what has been discussed. This is of course still non-final.

  1. The creator of the sous vide machine (Tester) would register his study and agree to the terms.
  2. The Tester would register this as a food-related study, automatically adding required safety questions.
  3. The Tester would perform a search of our questions database and locate customer satisfaction related questions.
  4. The Tester would click "start the experiment".
  5. Our application would post an ad seeking participants.
  6. The participants would register for the study; once a critical mass was reached, our app would create a new instance of the data collection webapp.
  7. Once the study period is complete, the data collection app signs and transfers the participant data back to our main app. The analysis is performed, the study is posted publicly, and the Tester is notified of the results via email.
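The seven steps above amount to a simple study lifecycle. A minimal state-machine sketch (all class and state names are hypothetical, not the actual implementation):

```python
from enum import Enum, auto

class StudyState(Enum):
    DRAFT = auto()       # steps 1-3: registered, terms agreed, questions chosen
    RECRUITING = auto()  # steps 4-5: "start the experiment" clicked, ad posted
    COLLECTING = auto()  # step 6: critical mass reached, webapp instance created
    COMPLETE = auto()    # step 7: data signed/transferred, results posted

class Study:
    def __init__(self, min_participants):
        self.state = StudyState.DRAFT
        self.min_participants = min_participants
        self.participants = []

    def start(self):
        assert self.state is StudyState.DRAFT
        self.state = StudyState.RECRUITING  # post the recruitment ad here

    def register(self, participant):
        assert self.state is StudyState.RECRUITING
        self.participants.append(participant)
        if len(self.participants) >= self.min_participants:
            # critical mass: spin up the data collection webapp instance
            self.state = StudyState.COLLECTING

    def finish(self):
        assert self.state is StudyState.COLLECTING
        self.state = StudyState.COMPLETE  # analyze, publish, notify Tester
```

The point of making the states explicit is that the safety-question and contract requirements can be enforced as preconditions on the `DRAFT` to `RECRUITING` transition.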

Okay, I hope this (and the link to my earlier user story) helps make things clearer. If you see issues with this, please do bring them up -- finding and fixing issues early is the reason I started this thread.

Awesome, great link. Example study here.

I think the needs for this project are still substantially different. Genomera trusts the scientists, which is usually a fine thing to do. I've applied for a beta invite, but don't have access yet. Based on the example study I've linked, it seems like they are more focused on assisting in data gathering (which, based on my recent experience, seems like the easiest of the things we are considering).

Okay, sorry I've been away from the thread for a while. I spent the last half day hacking together a rough version of the data collection webapp. This seemed reasonable because I haven't heard any disagreement on that part of the project, and I hope that having some working code will excite us :)

The models are quite good and well tested at this point, but the interface is still a proof of concept. I'll have some more time tomorrow evening, which will hopefully be enough time to finish off the question rendering and SMS sending. I think with those two features added, we will have a reasonable v1.

We will still need to create

  • The main project page & study creation interfaces
  • Questions for use in our initial experiments
  • Participant location and screening criteria
  • Data analysis routines
  • Legal contracts
  • Paper describing what we did -- Erdős numbers don't grow on trees :p

Repo is: https://github.com/GeneralBiotics/GLaDyS (I'll move it away from the GB github once we finalize a project name).

Update: Question rendering now works, demo app can be viewed at http://gladys-example.herokuapp.com/

Ahh, okay. That one goes on the scrap heap.

I think if you change the price of something by an order of magnitude you get a fundamental change in what it's used for. The examples that jump to mind are letters -> email, hand copied parchment -> printing press -> blogs, and SpaceX. If you increase the quality at the same time you (at least sometimes) get a mini-revolution.

I think a better example might be online courses. It can be annoying that you can't ask the professor any questions (customize the experience), but they are still vastly better than nothing.

Hmm. I'm confused. Let's look at something slightly more extreme than what we're talking about and see if that helps.

Level 0: Imagine we make a product study as good as possible, then allow anyone to perform the same study with a different product. Some products "shouldn't" be tested that way, but I don't see how a protocol like that will produce garbage (such studies will merely establish "no effect").

Level 1: We broaden to support more companies, and allow anyone to perform those studies as well.

Level 2: After a sufficient number of companies have had their experiment created, we take the protocols and create a "build your own experiment constructor kit" which allows for an increasingly large number of products to receive a good test.

Level 3: As we add more and more products to the adaptable system, we reach the point where most product claims have a community-ratified question for them, and the protocol is stable. You might not be able to test 20% of the things that you'd like, but the other 80% you can test just fine.

Please let me know where you believe that plan breaks. The actual plan will likely differ, of course, but we need something concrete to talk about, or I'll keep failing to understand what sounds like potentially constructive methodology advice.

P.S. Oh, and a random thought. What do you think 4Chan will do with your "webapp"? X-)

Perform some hilarious experiments! Hopefully we get publicity from them :D
