Epistemic Status: Spitballing

A thought, inspired by this, Facebook's reaction options, and the failure of some previous attempts by websites to disambiguate "I agree" from "I thought this was funny" and "I think the reasoning here was sound."

A while back, OKCupid tried having "personality" and "looks" as separate rating categories. In practice, people seemed to lump them together, in a vague halo-effect phenomenon (spoilers: it turns out people mostly cared about looks).

Slashdot's separation of "funny" and "insightful" helps with some of this, but I think people still tend to downvote ideological opponents when given the chance, and it takes special effort to reward ideological opponents who are making good arguments.

But I just noticed that Facebook's reaction options force a choice: you might find something sad, and heartwarming, and funny all at the same time, but you still ultimately have to pick just one of the reactions to give. This forces me to think a bit about how something made me feel beyond "I liked it."

I agree with Zvi about the barrier-to-entry/complexity costs of the OP here, but a lot of that is because Slack doesn't have built-in tools to make it clear what the various symbols mean.

But in a hypothetical Less Wrong 2.0 or Arbital 2.0 or whatever, you could have multiple upvote/downvote options and be forced to pick one (maybe at most one upvote, and at most one downvote).

So upvotes might be:

Funny
Agree
Well-Reasoned

And some downvotes might be:

Irrelevant
Poorly-Reasoned
Inflammatory

Being forced to pick one would maybe short-circuit the halo effect. And the main difference between the OP and this is that the icons would be clearly visible, with mouse-over explanations, so that people can learn as they go.
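To make the "pick one" mechanic concrete, here's a minimal sketch of the at-most-one-per-polarity rule. The category names come from the lists above; the class name, storage, and replace-on-repick behavior are all my assumptions, not anything specified here.

```python
# Sketch of "at most one upvote type, at most one downvote type"
# per (user, comment) pair. Category names come from the lists above;
# the ReactionBallot class and its behavior are hypothetical.

UPVOTES = {"Funny", "Agree", "Well-Reasoned"}
DOWNVOTES = {"Irrelevant", "Poorly-Reasoned", "Inflammatory"}

class ReactionBallot:
    """Tracks one user's reactions to one comment."""

    def __init__(self):
        self.upvote = None    # at most one entry from UPVOTES
        self.downvote = None  # at most one entry from DOWNVOTES

    def react(self, category: str) -> None:
        # Picking a new category in the same polarity replaces the old
        # one, which is what forces the "pick just one" choice.
        if category in UPVOTES:
            self.upvote = category
        elif category in DOWNVOTES:
            self.downvote = category
        else:
            raise ValueError(f"unknown reaction category: {category}")

ballot = ReactionBallot()
ballot.react("Funny")
ballot.react("Well-Reasoned")  # replaces "Funny"; you can't hold both
assert ballot.upvote == "Well-Reasoned"
```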

You could also do things to make Funny and Agree very viscerally rewarding to click (maybe with cool animations and bigger icons), but have Well-Reasoned actually be weighted higher in the default sorting algorithm.
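A hedged sketch of that weighting idea: Well-Reasoned counts for more than Funny or Agree when the site computes the default sort. All the weights below are invented for illustration; nothing here is a real scoring scheme.

```python
# Hypothetical default-sort scoring: Well-Reasoned outweighs Funny and
# Agree, even though the latter might be more fun to click. All weights
# are made up for illustration.

WEIGHTS = {
    "Funny": 1.0,
    "Agree": 1.0,
    "Well-Reasoned": 3.0,
    "Irrelevant": -1.0,
    "Poorly-Reasoned": -3.0,
    "Inflammatory": -2.0,
}

def sort_score(reaction_counts: dict[str, int]) -> float:
    """Collapse a comment's reaction tallies into one sortable number."""
    return sum(WEIGHTS.get(cat, 0.0) * n for cat, n in reaction_counts.items())

comments = [
    {"id": 1, "reactions": {"Funny": 10}},
    {"id": 2, "reactions": {"Well-Reasoned": 4}},
]
# Comment 2 sorts first: 4 * 3.0 = 12.0 beats 10 * 1.0 = 10.0.
comments.sort(key=lambda c: sort_score(c["reactions"]), reverse=True)
```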

Also, I noticed something a little interesting just now:

Facebook has Like, Love, Haha, Wow, Sad, and Angry.

Some people use Love as a 'strong like'. Sometimes I use it as an 'oh my god, the thing you're experiencing seems really profoundly human and resonates with me and I'm connecting with you right now' (and this is most interesting to me when the thing they're posting is an abstract philosophical idea or argument that, for some reason, I find heart-inducing).

But the Love is not costless. It carries... a similar quality, but lower magnitude, to the social awkwardness of hugging someone you don't know. It feels kinda weird (to me).

How willing I am to Love something somebody posted varies with how well I know the person and how strongly the thing resonates with me. And this... seems kinda... good? The barrier to entry is rooted in something more human. Or something.

[/musing]

My impression of these ideas is that they carry a high complexity cost: everyone involved has to track a lot of signals and a lot of things, and the barriers to entry get higher. Automoderation is already at the high end of reasonable complexity costs. A similar analogy could be made with bandwidth costs, since explicit communication is attention-taxing.

I do agree that these are important contrasts, and important things to communicate; in a one-on-one conversation, it is good to say explicitly which mode you are in.

Another note: it is not obvious that people agreeing with you for different reasons should be presumed stronger evidence than people agreeing for the same reason. It means there is another reason out there, but it also means the person did not endorse your reason, which may or may not be because they did not know about it.

True, I didn't think about the added burden. This is especially important for a group with frequent newcomers.

I try hard to communicate these distinctions, and distinctions about amount and type of evidence, in conversation. However, it does seem like something more concrete could help propagate norms of making these sorts of distinctions.

And, you make a good point about these distinctions not always indicating the evidence difference that I claimed. I'll edit to add a note about that.

It's a bit of a misleading name: this is not moderation, auto or otherwise; it's a signaling system focused on expressing intent and giving feedback.

In the online environment it basically gives you a set of easy-to-express signals with more or less defined meanings. You can build something like that into the karma system (as Slashdot did a very long time ago with its +funny and +insightful) or just use it as shorthand in the body of the comments.

These signals could be used outside of Automoderation; I didn't focus on the moderation aspect. Automoderation itself really does seem like a moderation system, though: it is an alternative way to address the concerns that would normally be handled by a moderator.

Not a long note or a detailed dissection, but just a reminder: whenever you take single-dimensional data and make it multidimensional, it becomes harder and more subjective to analyze. (EDIT: To clarify, you can represent multidimensional data multidimensionally. But mapping multidimensional data to a lower-dimensional space usually involves finding a fit, which can introduce error, and mapping it to a lower-dimensional space is usually an important step in explaining it.) I suspect that if you give people this many dimensions to respond with, you'll get lots of different-looking representations of the same underlying signal.
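A toy illustration of that fit-error point, with invented numbers: collapsing multidimensional reaction tallies to a single sort score is a projection, and distinct reaction profiles can collide at the same score, so the projection necessarily discards information. The weights and profiles here are made up.

```python
# Toy illustration (invented numbers): projecting multidimensional
# reaction data down to one dimension loses information. Two comments
# with very different reaction profiles can map to the same score.

WEIGHTS = {"Funny": 1.0, "Agree": 1.0, "Well-Reasoned": 3.0}

def project(reactions: dict[str, int]) -> float:
    """Map a multidimensional reaction profile to a single score."""
    return sum(WEIGHTS[cat] * n for cat, n in reactions.items())

a = {"Funny": 6, "Agree": 0, "Well-Reasoned": 0}  # pure joke appeal
b = {"Funny": 0, "Agree": 0, "Well-Reasoned": 2}  # pure argument quality
assert project(a) == project(b) == 6.0  # same score, different signals
```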

Maybe that's not bad: the default sort order is newest-to-oldest (basically arbitrary), and in most cases "generally positive" and "generally negative" signals will still sort in the right order. But I still feel some suspicion, because it's just one UI feature and it took you about two pages of words to pitch it.