Social scientists think humans operate under one of two sets of norms, depending on the circumstances: "market norms" or "social norms". The basic idea is that when formally exchanging money for goods and services, it's considered okay to be much more calculating and self-interested than when informally exchanging favors with friends. You can read this blog post by Dan Ariely for more.
It's often considered rude to introduce market norms in an area where they don't traditionally apply, for example by charging money for your presence at a barbecue.
This is a thread where it's okay to talk about trading money for goods and services with other Less Wrong users, which might otherwise be considered rude because you'd be inappropriately introducing market norms. Things you're encouraged to do include:
- Post your resume
- Advertise a product sold by you or your company
- Advertise a service provided by you or your company
- Advertise an open position working for you or your company
The argument for having a thread like this is as follows. Less Wrong users have a variety of goals they wish to accomplish. Some of these goals involve engaging in marketplace transactions. It's plausible that a thread facilitating marketplace transactions between LW users will buy at least as much collective goal accomplishment per unit of attention consumed as a traditional Less Wrong thread.
Anecdotally, introducing market norms seems to take a certain amount of chutzpah. For example, it apparently takes a certain kind of person to name a dollar figure in a sales conversation, which is why a professional salesperson accompanies the sales engineer when selling a technical product. One LWer friend of mine struggled for a while before she was able to get herself to charge money for talk therapy she had been providing to friends for free.
To counteract this, please feel free to upvote folks who post in this thread; they likely overcame some akrasia in promoting their offer.
To discuss the concept of this thread, as opposed to advertising a transaction you wish to engage in, please reply to this comment.
I'm looking for someone to help me, on a paid basis, with statistical analysis. I have problems like the following:
1. When to inspect?
I have ~10k documents per month streaming to office staff for data entry in offices scattered around the world. Trained staff at HQ inspect the data entry performed by the office staff, detecting errors and updating the fields in which they detect them. I will soon have random re-checking by HQ inspectors of entries already checked by other HQ staff.
The HQ staff currently detect errors on ~15% of documents (between nearly none and ~6% errors on particular fields). I don't yet have a good estimate of how many of those detections are false positives, or how many errors go undetected entirely. Users show learning (we detect fewer errors from users who have entered data on more documents) that continues over their first 2000 or so documents (where I start running out of data).

Required: I need to decide

- when a document can skip secondary inspection;
- when users (HQ or practice users) don't understand something and need training (their error rate is high relative to the difficulty of data entry on that field);
- whether a change I make to the user interface helped or hurt, with future error prediction (after I change the data entry environment) recovering quickly.
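To give a flavour of the kind of analysis involved, here is a minimal sketch of one possible starting point: a Beta-Binomial estimate of each user's per-document error rate, with secondary inspection skipped once the posterior is confidently below a chosen rate. This is only an illustration, not a specification; the prior parameters, the 2% threshold, and the example counts are all made up.

```python
# Minimal sketch: Beta-Binomial estimate of a user's per-document error rate,
# used to decide whether their documents can skip secondary inspection.
# The prior and the skip threshold below are illustrative placeholders.
from scipy.stats import beta

# Prior roughly matching the observed ~15% overall error rate
# (3 / (3 + 17) = 0.15); a prior weight of 20 pseudo-documents is a guess.
PRIOR_ALPHA, PRIOR_BETA = 3.0, 17.0

def error_rate_posterior(errors_found: int, documents_inspected: int):
    """Posterior Beta distribution over this user's per-document error rate."""
    return beta(PRIOR_ALPHA + errors_found,
                PRIOR_BETA + documents_inspected - errors_found)

def can_skip_inspection(errors_found: int, documents_inspected: int,
                        max_rate: float = 0.02, confidence: float = 0.95) -> bool:
    """Skip secondary inspection if we are at least `confidence` sure that
    the user's error rate is below `max_rate`."""
    posterior = error_rate_posterior(errors_found, documents_inspected)
    return posterior.cdf(max_rate) >= confidence

# Example: a user with 1 detected error in their last 300 inspected documents.
print(can_skip_inspection(errors_found=1, documents_inspected=300))
```

A real solution would also need to handle the learning curve over a user's first ~2000 documents and the undetected errors mentioned above, which this sketch ignores.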
2. What works?
We have a number of businesses that sell stuff, and we often change how that's done and how we promote it (promotions, press placements (which I can work to get), changes in price, changes in product, changes to business websites, training for our salespeople, etc.). I'd like to learn more than I currently do from the things we change, so that I can focus our efforts where they work best. There is a huge amount of noise in this data.
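Again purely to illustrate the shape of the problem (the weekly figures below are invented), here is a minimal sketch of checking whether an observed lift after a single change is distinguishable from noise, using a permutation test:

```python
# Minimal sketch: permutation test for whether weekly sales after a change
# (e.g. a price change or press placement) differ from sales before it,
# or whether the observed lift is within the noise.
import random

before = [120, 95, 130, 110, 105, 98, 125, 115]    # weekly sales before the change (made up)
after = [135, 150, 118, 142, 160, 128, 139, 155]   # weekly sales after the change (made up)

def permutation_test(before, after, iterations=10_000, seed=0):
    """Estimate the probability of a difference in mean weekly sales at least
    this large arising if the change had no effect (labels exchangeable)."""
    rng = random.Random(seed)
    observed = sum(after) / len(after) - sum(before) / len(before)
    pooled = before + after
    at_least_as_extreme = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        fake_before = pooled[:len(before)]
        fake_after = pooled[len(before):]
        diff = sum(fake_after) / len(fake_after) - sum(fake_before) / len(fake_before)
        if abs(diff) >= abs(observed):
            at_least_as_extreme += 1
    return observed, at_least_as_extreme / iterations

lift, p_value = permutation_test(before, after)
print(f"observed lift: {lift:.1f} units/week, p ~ {p_value:.3f}")
```

In practice the changes often overlap in time and the noise is much larger than in this toy example, which is part of why I'm looking for outside help.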
Proposals should be sent to jobs@trikeapps.com, should reference this comment, and should include answers to the following two questions (and please don't post your answers to the questions on this site):
In my first example job above, across 200 users the average error rate in their first 10 documents was 12% (that is, of the set of 2000 documents made up of the first 10 documents entered by each of 200 users, 12% contained at least one error). Across so few documents per user (only 10) there is only a small indication that the error rate on the 10th document is lower than the error rate on the first (learning might be occurring, but isn't large across 10 documents). A new user has entered 9 documents without any errors. What is the probability that they will make an error on their next document?
What question should I ask here to work out who will be good at doing this work? What question will effectively separate those who understand how to answer questions like this with data from those who don't understand the relevant techniques?