Update: This has now been pushed to the live site. For now, on Desktop, strong upvotes require click-and-hold. Mobile users just tap multiple times. This is most likely a temporary solution as we get some feedback about how the respective modes work under realistic conditions.
Over on our dev-site, lessestwrong.com, we have a fairly major feature-branch: Recalibrated voting power, and Strong Upvotes/Downvotes.
tl;dr – Normal votes on the test server range in power from 1 to 3 (depending on your karma). You have the option of holding down the up/downvote button for a strong vote, which ranges in power from 1 to 15.
We're looking for feedback on the UI, and some implementation details.
Flow vs Stock
This post by Jameson Quinn notes that there are two common reasons to upvote or downvote things. This is similar to my own schema:
- Conversational Flow – When you like (or dislike) the effect a comment has on a conversation, and you want to give the author a metaphorical smile of appreciation, or awkward silence/stare.
- "Ah, good point" (+)
- "Hmm, this gives me something to think about" (+)
- "This comment cited sources, which is rare. I want to reward that." (+)
- "This was clever/funny." (+)
- "I think this post contains an error." (–)
- "This comment is technically fine but annoying to read" (–)
- "I don't think the author is being very charitable here" (–)
- Some combination of the above (upvote or downvote, depending)
- Signifying Importance – When you think other people should go out of their way to read something (or definitely should not). Ideally, these are posts and comments that contribute to the longterm stock of value that LessWrong is accumulating.
- "I learned something new and useful" (++)
- "The argumentation or thought process illustrated by this post helped me learn to think better." (++)
- "This post contains many factual errors" (––)
- "This comment is literal spam" (––)
- "The reasoning here is deeply bad." (––)
People instinctively use upvoting to cover both Flow and Importance, and this often results in people upvoting things because they were a good thing to say in a conversation. But then later, if you want to find the most useful comments in a discussion, you end up sifting through a bunch of not-actually-useful stuff.
People also often unreflectively upvote things they like, without paying much attention to whether the arguments are good, or whether it's good for the longterm health of the site. This means people who think hard about their upvotes get counted just as much as people casually clicking.
So the idea here is that by default, clicking results in a Normal Upvote. But, if you hold the button down for a couple seconds, you'll get a Strong Upvote. (And same for downvotes).
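To make the interaction concrete, here's a minimal sketch of one way a click-vs-hold distinction could be wired up in the browser. This is not the actual site code: the one-second hold threshold, the names, and the event handling are all assumptions (and, per the update above, mobile uses repeated taps rather than holding).

```typescript
// Hypothetical sketch (not the actual LessWrong implementation):
// distinguish a quick click (normal vote) from a press-and-hold (strong vote).
const STRONG_VOTE_HOLD_MS = 1000; // assumed threshold; the real delay may differ

type VoteStrength = "normal" | "strong";

function attachVoteButton(
  button: HTMLElement,
  onVote: (strength: VoteStrength) => void
): void {
  let holdTimer: ReturnType<typeof setTimeout> | null = null;
  let firedStrong = false;

  button.addEventListener("pointerdown", () => {
    firedStrong = false;
    holdTimer = setTimeout(() => {
      firedStrong = true;
      onVote("strong"); // held long enough: strong vote
    }, STRONG_VOTE_HOLD_MS);
  });

  button.addEventListener("pointerup", () => {
    if (holdTimer !== null) clearTimeout(holdTimer);
    if (!firedStrong) onVote("normal"); // released early: normal vote
  });

  button.addEventListener("pointerleave", () => {
    // Cancel the pending strong vote if the pointer leaves the button.
    if (holdTimer !== null) clearTimeout(holdTimer);
  });
}
```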
Can you technically Strong Upvote everything? Well, we can't stop you. But we're hoping a combination of mostly-good-faith + trivial inconveniences will result in people using Strong Upvotes when they feel it's actually important.
I have some more thoughts on "what good effects precisely are we aiming for here", which I'll flesh out in the comments and/or the final blogpost when we actually deploy this change to production.
Vote-Power by Karma
Quick overview of the actual numbers here (vote power – karma required):
Normal votes
- 2 – 1,000 karma
- 1 – 0 karma
Strong Votes
- 16 – 500,000 (i.e. thousand-year-old vampire, the level above Eliezer)
- 15 – 250,000
- 14 – 175,000
- 13 – 100,000
- 12 – 75,000
- 11 – 50,000
- 10 – 25,000
- 9 – 10,000
- 8 – 5,000
- 7 – 2,500
- 6 – 1,000
- 5 – 500
- 4 – 250
- 3 – 100
- 2 – 10
- 1 – 0
(We considered using a log scale, but log base 5 didn't quite give us the granularity we wanted, and smaller bases produced weird numbers that didn't really correspond to the effect we wanted. So we just picked some numbers that felt right.)
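For concreteness, here's a small lookup sketch built directly from the table above. It's illustrative only: `votePower` and the threshold arrays are invented names, not the site's actual schema or code.

```typescript
// Thresholds copied from the table above; names are made up for illustration.
const STRONG_VOTE_THRESHOLDS: Array<[minKarma: number, power: number]> = [
  [500_000, 16], [250_000, 15], [175_000, 14], [100_000, 13],
  [75_000, 12], [50_000, 11], [25_000, 10], [10_000, 9],
  [5_000, 8], [2_500, 7], [1_000, 6], [500, 5],
  [250, 4], [100, 3], [10, 2], [0, 1],
];

const NORMAL_VOTE_THRESHOLDS: Array<[minKarma: number, power: number]> = [
  [1_000, 2], [0, 1],
];

function votePower(karma: number, strong: boolean): number {
  const table = strong ? STRONG_VOTE_THRESHOLDS : NORMAL_VOTE_THRESHOLDS;
  for (const [minKarma, power] of table) {
    if (karma >= minKarma) return power;
  }
  return 1; // fallback for negative karma
}

// e.g. votePower(3000, true) === 7, votePower(3000, false) === 2
```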
Feedback
We're still hashing out the exact UI here – in particular, the UI for helping users discover the feature. (Posts basically have little-to-no discoverability; comments have a little hover-over message.)
Check out lessestwrong.com and note your feedback here. (If you created your user account recently, you may need to create an alternate account on the lessestwrong development database)
It is not difficult to make people notice the feature exists; cf. the GreaterWrong implementation. (Some people will, of course, still fail to notice it, somehow. There are limits to how much obliviousness can be countered via reasonable UX design decisions.)
[emphasis mine]
This is a good point, but a subtle and easily-mistakable one.
There is a misinterpretation of the bolded claim, which goes like this:
The UI should not permit an action which the user would not want to take.
The response to this, of course, is that the designers of the UI do not necessarily know in advance what actions the user does or does not want to take. Therefore let the UI permit all manner of actions; let the user decide what he wishes to do.
But that is not what (I am fairly sure) nshepperd meant. Rather, the right interpretation is:
The UI should not permit an action which the user, having taken, will (predictably) be informed was a wrong action.
In other words, if it’s known, by the system, that a certain action should not be taken by the user, then make it so that action cannot be taken! If you know the action is wrong, don’t wait until after the user does it to inform him of this! Say, in advance: “No, you may not do this.”
And with this view I entirely agree.
It is my understanding that some or all of the LW team (as well as, possibly, others?) do not take this view. As I understand it, the contrary view is that the purpose of voting is to adjust the karma that a post/comment ends up with to some perceived “proper” value, rather than to express an independent opinion of it. The former may involve voting up, or down, strongly or weakly… I confess that I find this view perplexing, myself, so I will let its proponents defend it further, if they wish.
I don't think it's super productive to go into this in a ton of depth, but I do also think that voting is for expressing preferences; it's just better to model the preference as "on a scale from 1 to 1000, how good is this post?", instead of "is this post good or bad?". You implement the former by upvoting when the post's current score is below your assessment and downvoting when it is above, with the strong version used when the score is particularly far from your assessment. This gives you access to a bunch more data […]
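Read literally, that rule amounts to something like the following sketch of how a voter might decide. The function name and the gap threshold are arbitrary illustrations, not anything specified in the comment.

```typescript
// Hedged sketch of the "vote toward your assessed score" rule described above.
type Vote = "strongUpvote" | "upvote" | "none" | "downvote" | "strongDownvote";

function voteTowardAssessment(
  currentScore: number,
  myAssessment: number,
  strongVoteGap = 20 // assumed: how far off the score must be to warrant a strong vote
): Vote {
  const gap = myAssessment - currentScore;
  if (gap === 0) return "none";
  if (gap > 0) return gap >= strongVoteGap ? "strongUpvote" : "upvote";
  return -gap >= strongVoteGap ? "strongDownvote" : "downvote";
}
```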