I got excited about this briefly. I think it's too simple to be interesting, today. Incentivizing curation won't have much impact at these scales. Incentivizing production would, but it makes no attempt to identify and credit creators.
You get money for writing posts that people like. Upvoting posts doesn't get you money. I imagine that creates an incentive to write posts. Maybe I'm misunderstanding you?
Uh, you get money for having your submissions upvoted, right? And most of the articles that are upvoted won't be written on the site, they'll be linked, so the submitter will get the money instead of the author. Submission is curator work.
Oh, got it.
I mean, that still sounds fine to me? I'd rather know about a cool article because it's highly upvoted (with the submitter getting money for that) than not know about the article at all.
If the money starts being significant I can imagine authors migrating to the sites where they can get money for their writing. (I imagine this has already happened a bit with things like substack)
I don't think people are going to be motivated by the monetary incentive to post much more than they already do. People seem to already like sharing stuff they think is good.
If the money starts being significant I can imagine authors migrating to the sites where they can get money for their writing. (I imagine this has already happened a bit with things like substack)
Maybe. But that transition could be accelerated by having a credit assignment system (where money is set aside for the author of the post even before they're aware of the site and able to collect it), and you're going to need a credit assignment system later anyway, when people start reposting things and trying to claim credit for them.
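A minimal sketch of what that kind of credit assignment could look like (the escrow idea, the function names, and keying on a canonical URL are my own assumptions, not a description of any existing site):

```python
# Hypothetical sketch: set aside reward for an author who may not have an
# account yet, keyed on a canonical identifier for the linked work.
from collections import defaultdict

escrow = defaultdict(float)    # canonical_url -> unclaimed author reward
balances = defaultdict(float)  # user_id -> claimed balance

def credit_submission(canonical_url: str, submitter: str, reward: float,
                      curator_share: float = 0.2) -> None:
    """Split a post's reward between the submitter (curation work) and the
    original author (production work), even if the author isn't registered."""
    balances[submitter] += reward * curator_share
    escrow[canonical_url] += reward * (1 - curator_share)

def claim_authorship(canonical_url: str, author: str) -> None:
    """Once the author proves they control the URL, release the escrowed reward."""
    balances[author] += escrow.pop(canonical_url, 0.0)
```

A side benefit: the same table of canonical URLs gives you somewhere to hang repost detection later, since a resubmission of an already-credited URL can be routed to the existing entry instead of creating a new one.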
There's also Hive (formerly Steemit), which tries to reward posters of highly upvoted things, and early upvoters who correctly predict what will become big.
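For illustration, the general shape of that "reward early, correct upvoters" idea looks something like the toy payout rule below; this is made up for concreteness, not Hive's actual algorithm:

```python
# Toy sketch: split a post's curation reward among its upvoters, weighting
# earlier votes more heavily, so correctly predicting a hit pays the most.
def curation_payouts(upvoters: list[str], reward_pool: float) -> dict[str, float]:
    """upvoters is the list of user ids in the order they voted."""
    weights = [1.0 / (rank + 1) for rank in range(len(upvoters))]  # earlier -> larger
    total = sum(weights)
    return {user: reward_pool * w / total for user, w in zip(upvoters, weights)}

# Example: the first voter on a post that later blows up earns the largest share.
print(curation_payouts(["alice", "bob", "carol"], reward_pool=10.0))
# alice gets ~5.45, bob ~2.73, carol ~1.82
```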
I think empirically money-based social media hasn't really taken off, but I suspect it's mostly due to transaction costs, bad UI, and the public goods problem (as information is freely copied). These are all solvable!
Upvoted for an interesting take on an important question: why do people do things that don't involve money? I disagree with the framing that quantified exchange and debt are the "base case" for human interaction, and with the implication that most groups would be improved by imposing such requirements.
I first came across the word "autotelic" in the rulebook for an '80s nerdy card game; it amused me that they felt the need to say it about a game, but the word has stuck with me. My view is that a whole lot of human interactions are self-rewarding. People like to explain, be explained to, argue, and generally be part of things. At a certain scale, and above a certain production value / cost to produce, payment and legible value need to be tracked, because it stops being autotelic.
But trying to make it legible before that point actually RUINS the experience for a lot of people. It's no longer casual participation that anyone can do as much or as little of as they enjoy, and no longer people seeking places to interact with compatible others; now it's a job, or a quantified entertainment expense.
A lot of sites walk the line pretty well - non-obvious revenue through ads or donations or all-you-can-eat subscriptions, some amount of paid moderation/curation, possibly even some amount of paid content. But still keeping most of the illegible value in autotelic posting and reading of non-curated content. And a lot of sites TRY to walk the line, but utterly fail. A few of them fail so spectacularly that they accidentally take over the world, get very rich, and are exposed as clearly evil.
Note that debt is logically downstream of exchange, and exists to time-shift a part of the transaction. It's not core to the transaction, in most cases. 'Debtlessness' is just a side-effect of 'paymentlessness', which is itself an abstraction over 'unquantified or untracked value exchange'. There are some places that DO track some level of value exchange without money - upload/download ratios or karma requirements are a bit of this.
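As a concrete example of that kind of money-free tracking, a private-tracker-style ratio check is only a few lines; the thresholds and names here are arbitrary placeholders, not any real tracker's policy:

```python
# Toy example of money-free value tracking, in the spirit of private-tracker
# upload/download ratios. Thresholds are arbitrary placeholders.
def may_download(uploaded_bytes: int, downloaded_bytes: int,
                 min_ratio: float = 0.5, grace_bytes: int = 10 * 2**30) -> bool:
    """Allow downloading freely up to a grace allowance; beyond that, require
    the user to have uploaded at least min_ratio of what they've downloaded."""
    if downloaded_bytes <= grace_bytes:
        return True
    return uploaded_bytes / downloaded_bytes >= min_ratio
```

Karma gates ("you need N karma before you can post links") are the same shape: a quantified but non-monetary ledger.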
My view is that a whole lot of human interactions are self-rewarding. People like to explain, be explained to, argue, and generally be part of things.
If you don't pay, you only get people who like doing X. From their perspective, not being paid provides a sense of freedom. (If you don't pay me, I can write about topics I like. If you paid me, it would imply an obligation to write about topics you like.)
There is always a risk that people will do an unpaid job for the wrong reasons. People may volunteer as authors because they have a product to sell, or volunteer as moderators because they want to win the virtual space for their political tribe.
But if you start paying, then on top of this there is a risk of attracting people who don't care about the project, and just want the money.
I think it makes sense to pay if you want someone to do a full-time job, such as rewriting the LessWrong software from scratch. Or maybe a moderator on Reddit. Basically, if you want a guaranteed nontrivial amount of work, to be done even when it stops being fun.
But trying to make it legible before that point actually RUINS the experience for a lot of people.
Being strictly unable to enjoy things that people pay you to do is obviously not healthy. I think you're mostly just describing pathologies of present-day labor relations that don't actually follow from tracking credit. We can imagine a world where people are allowed to just work as much as they feel like and where the jobs available are fairly enjoyable, and we must.
In practice, yes, we don't really know how to track debt in systems of dialogue and collaborative filtering, but that could change very quickly as the internet becomes more amenable to experiments in that sort of thing.
I think there are situations where the overhead of tracking debt will always be too much: peer-to-peer protocols, maybe, or interactions between bacteria. For anything higher-level, I doubt it's the best we can do.
Being strictly unable to enjoy things that people pay you to do is obviously not healthy.
I think this is a crux (or an important communication failure). I didn't say that payment ruins the experience; I said that the attempt to make it legible in order to calculate payment ruins it. Or at least changes it enough that it's not the same experience, and I'm likely to look elsewhere for the illegible parts, while gladly accepting payment for the legible (but not as valuable as before) parts.
My main point is that the overhead of debt tracking or payment handling is small, compared to the mental model changes and reconfiguration of expectations when the value is framed in measured units rather than personal, incomparable enjoyable activities.
So, hypothetically speaking, do you think that paying people without measuring them would be harmless? Though I am not sure whether this question even makes sense in the real world, because you at least need to decide how long you keep paying someone (unless you just committed to paying them forever, regardless of whether they do something or not), which implies some form of measurement.
I don't know how it would be possible to pay without measuring. At the extreme of "unmeasured payments", I presume a national or global UBI wouldn't significantly harm any given community or shared space that isn't related to the UBI criteria. Narrower payment schemes require AT LEAST objective identification of who pays (or is paid), why, and some exclusion mechanism to prevent overpayment. It's possible this could be minimally-invasive, but improbable.
And, to the extent it's not distortionary, it's also not motivating. Payment is (almost always) intended to have an effect on behavior. When it has no effect, there's not much reason for payment (there may be reason for grants or gifts as rewards, thank-yous, or kindness, but if they're regular enough they become expected and impactful on future motivation).
Relatedly, the legibility and universality of currency will cause any payment scheme to be considered in terms of scaling: how do we make these measured dimensions bigger (or keep them smaller, in some cases), regardless of the impact on all the unmeasured value that people take from the interactions?
When I started this subthread, I didn't think this was the same as Goodhart, but maybe it is: if people like doing something, and you measure part of it for monetary purposes, you've got an imperfect proxy for that value.
Although most of the text produced by humans and published on the internet has the dynamic you describe, most of the hours of video watched on YouTube are of videos produced by people making a living from them or (realistically) expecting to start making a living soon, and IMHO there's tons of great information on YT (although it can be hard to find).
Unsure whether the dominance of YouTube is due to YouTube paying creators or due to it having a good recommender algorithm. Considering the amount of content I watch that's just people talking, I'm fairly sure it's mainly the algorithm.
YouTube is certainly dominant on measures like person-hours, but I'm not interested in that: I'm interested in how to make the internet more informative to the kind of people who will read this conversation (i.e., smart, knowledgeable people). (So, for example, how many people use YT to veg out at the end of the day is not what I'm interested in.)
I'm not seeing how YT's recommendation algorithm is good for the kind of people who will read this. (More precisely, there was an interval of a few months during which the algorithm was quite informative in my experience, but that ended many months ago, and now AFAIK there is no good way to discover the great content on YT without wasting a lot of time wading through mostly-worthless content.) I doubt you mean the way the algorithm influences the kind of content that creators choose to create. (Creators tend to learn a lot about the algorithm because it has a large effect on their view count and consequently their revenue.)
Also, I'm confused about your "just people talking": a huge fraction of what I consider great YT content is just people talking. I don't find that strange or regrettable and fail to see how that supports your point.
The quality of the recommender is highly variable and depends on the... psychological resilience... of the subject. If it sees a way to melt you into a passive consumer, it will. Mine seems to have turned recently. I didn't watch enough of the lecture videos in my watchlater. It sensed weakness.
The point was that even if they stopped paying people, many people would continue producing People Talking content.
Which likely suggests that an algorithm of the same strength as the youtube algorithm, applied to written content, would create a compelling system for discovering interesting stuff.
And for some people, supposedly, Google News is that, but I was never able to get it to recommend stuff I wanted, and it seems to have a hard bias toward a closed list of newspapers (it's never going to recommend something from a Substack), which is creepy.
A lot depends on what you mean by "algorithm of the same strength". YouTube is a closed loop - they know how much of which videos you watched, what you searched on, what you responded to and didn't respond to. And they use that information to pay content producers in proportion to the "success" of the content via their algorithms (roughly the kind of split sketched below). And the additional feedback loop of knowing what videos you're watching allows them to charge more for the ads you're shown.
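That "pay in proportion to success" step is, at its core, something like the following; this is a deliberate simplification with made-up numbers, not YouTube's actual formula:

```python
# Simplified sketch of sharing ad revenue in proportion to a measured notion
# of success (here, monetized watch time). The creator_cut value is made up.
def revenue_shares(watch_minutes: dict[str, float], ad_revenue: float,
                   creator_cut: float = 0.55) -> dict[str, float]:
    total = sum(watch_minutes.values())
    if total == 0:
        return {creator: 0.0 for creator in watch_minutes}
    pool = ad_revenue * creator_cut
    return {creator: pool * minutes / total
            for creator, minutes in watch_minutes.items()}
```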
It's VERY good at optimizing for what it measures (people's willingness to watch targeted ads around a given piece of content). I'd argue that's more about data acquisition than algorithmic power. I'd further argue that it's absolutely not what I want to be optimized in noncommercial interactive discussion spaces.
It MAY BE what I want in curated, directed, long- and short-form text publication. I could see Substack evolving to a similar model (where in addition to subscriptions to authors, you have it recommend per-read or ad-supported articles). I'd love it if an engine could aggregate dozens of magazines and publishers into that model, but I don't think most of the current participants will agree to that level of central control.
(Hmm, does the lay meaning of "algorithm" encompass the data, especially any ongoing recurring effects it would have? I think it must. An ML model is a product of its data.)
I think the trick with these systems is letting users talk back to the algorithm and help it out. Likes, or more meaningful signals of appreciation, help. Reddit got by without a recommender system because users were expected to essentially explicitly communicate all of their interests by subscribing to subreddits. Ranking is another way.
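A toy sketch of how such explicit feedback could feed a ranking score; the particular signals and weights are arbitrary assumptions, just to make the idea concrete:

```python
# Toy ranking that blends explicit feedback signals. Weights are arbitrary.
def rank_score(likes: int, subscribed_to_author: bool,
               explicit_rating: float | None = None) -> float:
    score = float(likes)
    if subscribed_to_author:
        score += 5.0                    # a subscription is a strong stated preference
    if explicit_rating is not None:
        score += explicit_rating * 3.0  # e.g. a 1-5 rating supplied by the reader
    return score
```

The point is just that the user supplies the signal deliberately, rather than the system inferring everything from passive behavior.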
> an algorithm of the same strength as the youtube algorithm, applied to written content, would create a compelling system for discovering interesting stuff.
We have very different assessments. My guess is that the reason the YouTube recommendation algorithm used to be better than any of the engines for searching text is that the engines cannot distinguish the high-quality textual pages from the low-quality ones (plus the sheer number of low-quality pages). Although there is certainly a lot of bad information and temptation to waste your time on YouTube, it is easier for an engine to avoid it and surface the informative content (though YouTube is not even trying to do that anymore), basically because the vast majority of the temptations to waste your time on YouTube aren't even trying to deceive anybody that they're informative.
Also, the average YouTube video that is sincerely trying to be informative rather than just entertaining is IMHO of distinctly higher quality than the average textual web page trying for the same thing.
Some good thoughts here.
My thinking is that participation in online communities is mostly incentivised through status and inclusion. Upvotes or informal status mechanisms enable someone to be perceived as a valued member of an online community that they identify with. But power and status are subtly different - moderators have the power to ban, admonish, censor, and sometimes signal-boost, but they don't necessarily gain respect or status based on this - in fact, it's often the opposite.
Creators acquire status based on the quality of their output (through formal (Karma) and informal (general reputation) mechanisms), but the power that this affords is usually quite indirect (extra upvoting power on LW and EA forums). This can potentially transform into real-world power over the moderators in the case of a coup or a protest, but I'd say that this attracts a different kind of person to moderation.
I'm not sure how useful the "duty vs. privilege" framing is, but the question of whether some of these activities are over- or under-incentivised is an important one. I'd have thought that moderation would be under-incentivised, which is why I've always been a bit fascinated by voluntary moderation. 4chan-type forums are the most bizarre example; it has always perplexed me that someone takes on this "responsible" social role to make sure that /pol stays "on-topic" despite the sub-forum being an anarchic cesspit, probably getting influxes of hate from censored/banned participants while doing so. But the existence of moderation suggests that there must be a type of person who genuinely enjoys the power that it affords.
Writing high-quality original posts is probably appropriately incentivised - there are enough people who like writing, and it provides internal and external validation. As with meta-analysis and replication in academia, there are probably some curation tasks that are under-incentivised in most online spaces, but LW/EA seem pretty good at that.
I think that there is a good idea here. My first thought is that debt requires an authority to enforce the collection of debts. In communities where accounts can be pseudonymous, there is little at stake and therefore little that can be staked.
Another thought is to make a comparison with free markets. Is it a duty or privilege to buy or sell something? I think this is highly context dependent.
I notice that there's a lot of work done in online communities that's hard to price. We know so little about its price that it's a bit ambiguous as to whether it should be paid for or charged for:
Reflecting on this, I think it might not be a coincidence that all of the most common roles in online communities have benefits equal to or greater than their costs: we don't have ways of compensating for an imbalance. There's no process for tracking and repaying debt (and even with better technologies, crediting the production of information, a non-excludable good, will remain difficult), which explains why it would be the case that:
It seems as if little is done, in an online community, unless it's from the narrow category of acts that are in themselves self-compensating.
Which is concerning, because most conceivable beneficial exchanges pair acts which are not in that category.