Two minor comments:
I posted a handful of example questions I have been/would be interested in on Raemon's bounty question. I think these examples address several of the challenges in section 4:
I see that second point as the biggest advantage of bounties over a marketplace: just paying for the best answer means I don't need to go to the effort of finding someone who's competent and motivated and so forth. I don't need to babysit someone while they work on the problem, to make sure we're always on the same page. I don't need to establish careful criteria for what counts as "finished". I can just throw out my question, declare a bounty, and move on. That's a much lower-effort investment on the asker's side than a marketplace.
In short, with a bounty system, competition between answerers solves most of the trust problems which would otherwise require lots of pre-screening and detailed contract specifications.
Bounties will also likely need to be higher to compensate for answer-side risk, but that's a very worthwhile tradeoff for those of us who have some money and don't want to deal with hiring and contracts and other forms of baby-sitting.
I agree with this argument for bounties over marketplace.
I currently lean toward the best norm being less "I give the best answer a full bounty" and more "I distribute a fixed amount of money among people who contributed significantly to answering the question", since I think in many cases part of the work will be refactoring the question into pieces.
Totally on board with that. The important point is eliminating risk-management overhead by (usually) only having to reward someone who contributes value, in hindsight.
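The "distribute among contributors" norm above is essentially a proportional split. A minimal sketch, assuming the asker assigns rough contribution weights after the fact (the function name and weights are illustrative, not an actual LW feature):

```python
# Hypothetical sketch of the "distribute a fixed bounty among contributors"
# norm. Contribution weights are whatever the asker judges them to be.

def split_bounty(bounty_cents: int, contributions: dict[str, float]) -> dict[str, int]:
    """Split a bounty (in cents) proportionally to judged contribution weights."""
    total = sum(contributions.values())
    return {who: round(bounty_cents * w / total) for who, w in contributions.items()}

# e.g. a $100 bounty where one answerer did ~3x the work of another
shares = split_bounty(10_000, {"alice": 3, "bob": 1})
print(shares)  # {'alice': 7500, 'bob': 2500}
```

The point is that the split happens in hindsight, once the value of each contribution is visible, which is exactly what removes the up-front risk-management overhead.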
Uncertainty around payment.
Possible solution: avoid duplicated effort, and the risk of being beaten to the mark by short, hasty answers, by letting an answerer reserve the right to answer first in exchange for contributing some of their own money (20% of the current bounty?) to the bounty. If their answer isn't Accepted, that contribution is lost, so they have to be confident.
You'd get a finite amount of time to answer in; maybe the contribution percentage should be an increasing function of the amount of time you reserved. It should be set by the asking party.
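The reservation pricing described above can be sketched in a few lines. This is a hypothetical illustration of the proposal, not an implemented feature; the 20% base and the per-day rate are assumed numbers that the asking party would set:

```python
# Hypothetical sketch of the proposed answer-reservation pricing:
# the required contribution grows with how long you reserve.

def reserve_contribution(bounty: float, days_reserved: float,
                         base_rate: float = 0.20,
                         rate_per_day: float = 0.05) -> float:
    """Money an answerer must add to the bounty to reserve exclusive
    answering rights for `days_reserved` days; forfeited if the
    answer is not Accepted."""
    return bounty * (base_rate + rate_per_day * days_reserved)

# Reserving a $100 bounty for two days costs about 30% of the bounty
print(reserve_contribution(100, 2))
```

An increasing curve like this discourages squatting on a question for a long time unless you are genuinely confident you can answer it.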
How much control do you want to give to the asking party? Smart people ask lots of questions, but so do stupid people. You can't guarantee that they'll have good judgement about what qualifies as a valid answer. I also see conflicts of interest: the asker might decline the confident answerer's answer, copy the text, and answer the question themselves; not only would they get the answer for free, they'd make a profit from the reserve contribution.
I suppose it really cannot be left to a single judge. Maybe we should ask why the question asker has any right to judge answer validity at all; maybe that should be left to the epistemic community.
I actually suspect that the biggest market for this would be EA, not LW. There are already a large number of people who want to work in EA, and many organisations which would like to see certain questions answered.
I do expect a lot of the bounty money to come from the EA community, and for some of the value to come from the Q&A feature also getting use on the EA forum.
But I have reasons to be particularly optimistic about it on LW: the LessWrong community is about thinking, in a more direct way than the EA community is. (i.e. the EA community filters for people who want to do good, and then it turns out answering hard questions is a thing you need for that, so you invest in that; whereas the LW community filters directly for people who like thinking, and who improve their thinking-capacity as a hobby.)
There are also natural clusters: even within EA space, work on AI Alignment and human rationality has historically clustered on LW rather than in EA spaces. So insofar as those are areas that EA funders are interested in, that work is more likely to be happening on LW (although it could also happen on the EA forum).
Just thought I'd add: I suspect that support for referencing/footnotes in LW articles would move the content that is posted further in the direction you seem to desire, while depending on far fewer assumptions. You might want to try that first.
I agree that's a good feature too (and it's something we're planning on getting to sooner or later). FYI, we just added footnotes to the markdown editor, although they're trickier to implement UI-wise in the rich editor.
One major thing the team had talked about for "why Q&A" though is generating more, and clearer, demand for content. I touched on this in another comment:
Right now on LW you might be vaguely interested in writing posts to contribute, but it's not clear what topics people are interested in. If you have a clear idea of a blogpost to write you certainly can do that, but the generator for such posts is "what things are you already thinking about?"
By contrast, the Q&A system gives you clear visibility into "what topics do people actually want to know more about?" The value is not just that you can answer specific questions, but that, as you do, you learn about topics in a way that can lead to more generation of content. This seems potentially valuable as a hedge against future years where "the people with lots of good ideas are mostly doing things other than writing blogposts" (such as what happened in 2016 or so). I'm hoping the Q&A system makes the LW community more robust.
FYI we just added footnotes to the markdown editor
Is this documented anywhere? What is the syntax, etc.?
Syntax is based on the markdown-it footnotes plugin: https://github.com/markdown-it/markdown-it-footnote
I will add it to my to-do list to generally update our editor guides, and make them more discoverable. Currently not documented anywhere.
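For reference, the markdown-it-footnote plugin linked above uses the common footnote-extension syntax: a bracketed caret reference in the text, and a matching definition line anywhere in the document.

```markdown
Here is a sentence with a footnote reference.[^1]

[^1]: And here is the footnote text, rendered at the bottom of the post.
```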
Context
1. This is the first in a series of internal LessWrong 2.0 team documents we are sharing publicly (with minimal editing) in an effort to help keep the community up to date with what we're thinking about and working on.
2. Caveat! This is an internal document and does not represent any team consensus or conclusions; it was written by me (Ruby) alone and expresses my in-progress understanding and reasoning. To the extent that the models/arguments of the other team members are included here, they've been filtered through me and aren't necessarily captured with high fidelity or strong endorsement. Since it was written on March 17th, it isn't even up to date with my own thinking.
3. I, Ruby (Ruben Bloom), am trialling with the LessWrong 2.0 team in a generalist/product/analytics capacity. Most of my work so far has been trying to help evaluate the hypothesis that Q&A is a feasible mechanism to achieve intellectual progress at scale. I've been talking to researchers; thinking about use-cases, personas, and jobs to be done; and examining the data so far.
Epistemic status: this is one of the earlier documents I wrote in thinking about Q&A and my thinking has developed a lot since, especially since interviewing multiple researchers across EA orgs. Subsequent documents (to be published soon) have much more developed thoughts.
In particular, subsequent docs have a much better analysis of the uncertainties and challenges of making Q&A work than this one does. This document is worth reading in addition to them mostly for an introduction to thinking about the different kinds of questions, our goals, and how things are going so far.
Originally written March 17th
I’ve been thinking a lot about Q&A the past week since it’s a major priority for the team right now. This doc contains a dump of many of my thoughts. In thinking about Q&A, it also occurred to me that an actual marketplace for intellectual labor could do a lot of good and is strong in a number of places where Q&A is weak. This document also describes that vision and why I think it might be a good idea.
1. Observations of Q&A so Far.
First off, pulling some stats from my analytics report (numbers as of 2019-03-11):
Note: "viewCount" is a little unreliable on LW2 (I think it might double-count sometimes); "num_distinct_viewers" refers only to logged-in viewers.
Spreadsheet of Questions as of 2019-03-08
List of Q&A Uncertainties
See Appendix for all my workings on the Whiteboard
Q&A might be a single feature/product in the UI and in the codebase, but there are multiple distinct uses for the single feature. Different people trying to accomplish different things. Going over the questions, I see rough clusters, listed pretty much in order of descending prevalence:
These questions are roughly ordered from "high prevalence + easier to answer" to "low prevalence + harder to answer".
A few things stick out. I know the team has noticed already, but I want to list them here anyway as part of the bigger argument. The questions which are most prevalent are those which are:
What is apparent is that questions which break from the above trends -- e.g. questions which can be hard to explain (taking a long time to write up), require skill/expertise to answer, can’t be answered purely from an answerer’s existing knowledge (unless by fluke they’re an expert in a niche area), and require more effort than simply typing an answer or explanation -- are really of a very different kind. They’re a very different category, and both asking and answering such questions are very different activities from engaging with the other kind.
What we see is that LessWrong’s Q&A is doing very well with the first kind -- the kind of questions people are already used to asking and answering elsewhere. There’s been roughly a question per day for the three months Q&A has been live, but the overwhelming majority are requests for recommendations and advice, opinions, and philosophical discussion. Only a small minority (no more than a couple dozen) are solid research-y questions.
There’ve been a few of the “help me understand”/confusions type you might see on StackExchange (which I think are really good). And a few pure research-y type questions, but around half of those were asked by the LessWrong team and friends. Around 10% of questions -- really on the order of 10 questions or fewer in the last three months, by my count.
I think these latter questions are more the sort we’d judge to be “actual serious intellectual progress”, or at least, those are the questions we’d love to see people asking more. They’re the kinds of questions that predominantly the LessWrong team is creating rather than users.
2. Our vision for Q&A is getting people to do a new and effortful thing. That’s hard.
The previous section can be summarized as follows:
The thing about the LW vision for Q&A is that it means getting people to do a new and different thing from what they’re used to, plus that thing is way more effort. It’s not impossible, but it is hard.
It’s not a new and better way to do something they’re already doing; it’s a new thing they haven’t even dreamt of. Moreover, it looks like something else which they are used to, e.g. Quora, StackExchange, Facebook -- so that’s how they use it and how they expect others to use it by default. The term “category creation” comes to mind, if that means anything. AirBnB was a new category. LessWrong is trying to create a new category, but it looks like existing categories.
3. Bounties: the potential solution and its challenges
The most straightforward way to get people to expend effort is to pay them. Or create the possibility of payment. Hence bounties. Done right, I think bounties could work, but I think it’s going to be a tough uphill battle to implement them in a way which does work.
[Edited: Raemon has asked a question about incentives/bounties for answering "hard questions." It fits into the paradigm here, and we'd really value further answers.]
4. Challenges facing bounties (and Q&A in general)
5. What would it take to get it to work
Thinking about the challenges, it seems it could be made to work if the following happens:
Even then, I think getting it to work will depend on understanding which research questions can be well handled by this kind of system.
6. My Uncertainties/Questions
Much of what I’m saying here is coming from thinking hard about Q&A for several days, using models from startups in general, and some limited user interaction. I could just be wrong about several of the assumptions being used above.
Some of the key questions I want answered to be more sure of models are:
There are other questions, but that’s a starter.
7. Alternative Idea: Marketplace for Intellectual Labor
Once you’re talking about paying out bounties for people researching answers, you’re most of the way towards just outright hiring people to do work. A marketplace. TaskRabbit/Craigslist for intellectual labor. I can see that being a good idea.
How it would work
Why this is a good idea
You could build up several dozen or a few hundred worker (laborer?) profiles before you approach highly acclaimed researchers and say “hey, we’ve got a list of people willing to offer intellectual labor - interested in taking a look?” Or “we’ve got tasks from X, Y, Z - would you like to look and see if you can help?”
[redacted]: “I’d help [highly respected person] with pretty much whatever.” Right now [highly respected person] has no easy way to reach out to people who might be able to do work for them. I’m sure X and Y [redacted] wouldn't mind a better way for people to locate their services.
In the earlier stages, LessWrong could do a bit of matchmaking. Using our knowledge and connections to link up suitable people to tasks.
Existing services like this (where the platform is kind of a matchmaker), such as TaskRabbit and Handy, struggle because people use the platform initially to find someone, e.g. a house cleaner, but then bypass the middleman to book subsequent services. But we’re not trying to make money off of this; we don’t need to be in the middle. If a task happens because of the LW marketplace and then two people have an ongoing work relationship - that is fantastic.
Crazy places where this leads
You could imagine this ending up with LessWrong playing the role of some meta-hirer/recruiting agency type thing. People create profiles, upload all kinds of info, get interviewed - and then they are rated and ranked within the system. They then get matched with suitable tasks. Possibly only 5-10% of the entire pool ever gets work, but it’s more surface area on the hiring problem within EA.
80k might offer career advice, but they’re not a recruiting agency and they don’t place people.
Why it might not be that great (uncertainties)
It might turn out that all the challenges of hiring people generally apply when hiring just for more limited tasks, e.g. trusting them to do a good job. If it’s too much hassle to vet all the profiles vying to work on your task, learn how to interact with a new person around research, etc., then people won’t do it.
If it turns out that it is really hard to discretize intellectual work, then a marketplace idea is going to face the same challenges as Q&A. Both would require some solution of the same kind.
I’m sure there’s a lot more to explore here. I’ve only spent a couple of hours thinking about this as of 3/17.
Q&A + Marketplace: Synergy
I think there could be some good synergies: ways in which each blends into and supports the other. Something I can imagine is that there’s a “discount” on intellectual labor hired if those engaged in the work allow it to be made public on LW. The work done through the marketplace gets “imported” as a Q&A where further people can come along, comment, and provide feedback.
Or someone is answering your question, you like what they’ve said, but you want more. You could issue an “invite” to hire them to work more on your task. Here you’d get the benefits of a publicly posted question anyone can work on, plus the benefits of a dedicated person you’re paying and working closely with. This person, if they become an expert in the topic, could even begin managing the question thread, freeing up the important person who asked the question to begin with.
8. Appendix: Q&A Whiteboard Workings