Experiment: Reaction-Ballot Voting

This open thread is using a new experimental voting system: reaction-ballot voting.

In addition to voting on a comment's overall quality, you can also vote separately on a number of axes, and apply a small set of emoji reactions. Try out and discuss this voting system here! Notes:

  • Voting on each axis is weighted according to the voter's karma, the same way votes are weighted in the regular voting system. All axes can be strong-voted with click-and-hold. The commenter's karma is only affected by your vote on the "overall" axis, not by your votes on any of the other axes. (See the illustrative sketch after these notes.)
  • This is one experiment among several. Bugs are possible. We're interested in what effect this has on the quality of conversation, what the experience of voting in this system is like, and what the experience of skimming a thread and seeing these scores is like.
  • The user interface for this voting system doesn't work well on touch/mobile devices. Third-party clients such as GreaterWrong will only be able to see and cast overall votes, but should work fine otherwise.
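
For concreteness, here is a minimal sketch of what karma-weighted, per-axis tallying could look like. It is purely illustrative: the Vote shape, the voteWeight formula, and the numbers in it are assumptions for this sketch, not the actual LessWrong implementation.

```typescript
// Illustrative only: voteWeight() and the Vote type are placeholders,
// not the real LessWrong voting code.
type Axis = "overall" | "truth" | "aim" | "clarity" | "seeking";

interface Vote {
  voterKarma: number;
  axis: Axis;
  direction: 1 | -1;
  strong: boolean; // cast via click-and-hold
}

// Placeholder weighting: assume vote power grows with the voter's karma.
function voteWeight(karma: number, strong: boolean): number {
  const base = karma >= 1000 ? 2 : 1;
  return strong ? base * 4 : base;
}

// Sum weighted votes separately for each axis.
function tally(votes: Vote[]): Record<Axis, number> {
  const scores: Record<Axis, number> = {
    overall: 0,
    truth: 0,
    aim: 0,
    clarity: 0,
    seeking: 0,
  };
  for (const v of votes) {
    scores[v.axis] += v.direction * voteWeight(v.voterKarma, v.strong);
  }
  return scores;
}

// Only the "overall" axis feeds back into the commenter's karma.
function karmaDelta(votes: Vote[]): number {
  return tally(votes).overall;
}
```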

Please tell us what you think! Love it/hate it/think it should be different? Let us know.

Regular Open Thread Boilerplate

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.

[-]Zach Stein-Perlman2y46
4Truth
9Aim
18Clarity
7Seeking
🤨 6
❤️ 4
😮 1

Please tell us what you think! Love it/hate it/think it should be different? Let us know.

I think it's a fine experiment but... right now I'm closest to "hate it," at least if it was used for all posts (I'd be much happier if it was only for question-posts, or only if the author requested it or a moderator thought it would be particularly useful, or something).

  • It makes voting take longer (with not much value added).
  • It makes reading comments take longer (with not much value added). You learn very little from these votes beyond what you learn from reading the comment.
  • It's liable to make the more OCD among us go crazy. Worrying about how other people vote on your writing is bad enough. I, for one, would write worse comments in expectation if I was always thinking about making everyone else believe that my comments were true and well-aimed and clear and truth-seeking &c.

If this system was implemented in general, I would almost always prefer not to interact with it, so I would strongly request a setting to hide all non-karma voting from my view.


Edit in response to Rafael: for me at least the downside isn't anxiety but mental effort to optimize for comment quality rather than votes and mental effort to ignore votes on my own comments. I'm not sure if the distinction matters; regardless, I'd be satisfied with the ability to hide non-karma votes.

[-]__nobody2y20
2Truth
-6Aim
-9Seeking
🎉 1

I largely agree with this. Multi-axis voting is probably more annoying than useful for the regulars who have a good model of what is considered "good style" in this place. However, I think it'd be great for newbies. It's rare that your comment is so bad (or good) that someone bothers to reply, so mostly you get no votes at all or occasional downvotes, plus the rare comment that gets lots of upvotes. Learning from so little feedback is hard, and this system has the potential to get you much more information.

So I'd suggest yet another mode of use for this: Offer newbies the option to enable this on their comments and posts (everywhere). If the presence of extended voting is visible even if no votes were cast yet, then that's a clear signal that this person is soliciting feedback. That may encourage some people to provide some, and just clicking a bunch of vote buttons is way less work than writing a comment, so it might actually happen.

3MikkW2y
I broadly agree, but I'd say I consider myself a regular (have been active for nearly 2 years, have deeper involvement with the community beyond LW, have a good bit of total karma), and I still expect this to provide me with useful information.
3MikkW2y
I agree that it should be an option to turn this off for oneself, but I currently feel that this will be net-positive for most users
2__nobody2y
Defaults matter: Opt-in may be better than opt-out. For opt-out, you only know that people who disabled it care enough about not wanting it to explicitly disable it. If it's enabled, that could be either because they're interested or because they don't care at all. For opt-in, you know that they explicitly expended a tiny bit of effort to manually enable it. And those who don't care sit in the group with those who don't want it. That means it's much more likely that your feedback is actually appreciated and not wasted. Additionally, comments with extended voting enabled will stand out, making them more likely to catch feedback. (And there will probably still be plenty of comments with extended votes for passive learning.)
2Gunnar_Zarncke2y
The ability to disable the voting by the user is valuable. An alternative would be to make it optional to enable for authors. Or require a minimum karma.
[-]Adam Zerner2y22
2Aim
2Clarity
12Seeking
🎉 6

I was talking to a friend recently who is an experienced software developer looking to get into AI safety. Both of us have been reading LessWrong for a long time, but were unclear on various things. For example, where can you go to see a list of all job and funding opportunities? Would jobs be ok with someone with a software engineering background learning AI-related things on the job? Would grants be ok with that? What remote opportunities are available? What if there is a specific type of work you are interested in? What does the pay look like?

These are just a few of the things we were unclear on. And I expect that if you interviewed other people in similar boats, there would be different things that they are unclear on, and that this results in lots of people not entering the field of AI safety who otherwise would. So then, perhaps having some sort of comprehensive career guide would be a high level action that would result in lots more people entering the field.

Or, perhaps there are good resources available, and I am just unaware of them. Anyone have any tips? I found 80,000 hours' career review of AI safety technical research and johnswentworth's post How To Get Into Indepen... (read more)

6Chris_Leong2y
Are you aware of AI Safety Support? You can book a call with Frances Lorenz or JJ.
6Adam Zerner2y
I am not, it looks awesome, thanks for sharing! I will pass it along to my friend.
5Brendan Long2y
Thanks for posting this! Yeah, my experience has been that there are a lot of posts talking about how AI safety companies want engineers, but it seems like it's all wanting engineers who live in Berkeley, San Francisco, or NYC, or wanting people to retrain as researchers coming at problems from a specific direction. The "How to get into independent research" post is more useful, but assumes you're financially independent and/or have an extreme tolerance for risk (step 1: quit your job and self-finance education for a few months). I'm currently in the process of saving up enough money to be able to do this, but it seems like I must not be the only one stuck here.
[-]nmehndir2y15
5Seeking

[deleted]

[-]jimrandomh2y13
5Aim
3Clarity
17Seeking
🤨 1
🎉 10
❤️ 1

The idea behind this voting system is to act as a culture-shaping tool: the ballot you see when you hover over the vote buttons is meant to tell you what we think makes for good and bad comments. Ideally, this message comes across even if you aren't voting much.

I've given some thought to the specific axes and reactions, but they should still be treated very much as a first draft. I'm very interested in comparing other people's lists of axes and reactions, and in arguments about what should and shouldn't be included. What makes for a good comment? What should people be paying attention to? What have you wanted to communicate to authors, which you wish you had a button for instead of having to write a whole comment?

A big uncertainty I have about this voting system is how much of a problem the extra complexity is. Is seeing the extra score components on comments distracting? Does having a bunch of extra axes to vote on make voting too time consuming or overwhelming? Feedback on this is appreciated.

And of course, now that we have a setup in place where we can try out alternative voting systems, we're interested in any original ideas people have for different ways of voting on comments that we haven't thought of.

[-]MondSemmel2y19
1Truth
3Aim
1Clarity
6Seeking
🎉 1

Feedback:

  • I do find the added complexity distracting and overwhelming.
  • But that's less of a problem if such a voting system would only become selectively enabled in contexts where the benefits are worth the cost of extra complexity.
  • And obviously that feeling of being overwhelmed is only partly due to the increased complexity, and partly because it's an experimental unpolished feature.
  • That said, four axes plus extra emojis really is a lot, and I'd imagine a final system would be more like 2-3 axes (or a dropdown with some orthogonal options) than as many as here.
  • Regarding that point, I do think it's important that all axes in such a system are reasonably orthogonal. General Upvote, Truth, Aim, Truthseeking, and Clarity all seem far too correlated to be on separate axes.
  • Also, I'm slightly wary that some of these votes maybe impute too much motive into a comment? If you vote "seeks conflict", that implies that you think the comment author was intentionally unvirtuous. Whereas if your vote was an Angry Face à la Facebook's system, that's obviously at least partly about your own state of mind, and your reaction to the comment. (Not that Facebook's system would be particularly aligned
... (read more)
4Pattern2y
It took a while to read this, because I'd have said 'hits the mark/misses the point' and clear/muddled were the ones that seemed perhaps too similar. Then I noticed that you mentioned those - processing what these words mean quickly is going to take a bit to get down. A comment, or post, could be clear, and yet, the point might not be. (And I might comment asking there.) Yes. I'm fine with the complexity now, but in a big thread, with loads of comments, that are very long...that's going to be more challenging. Hopefully the new system will make some things more clear, so that the process of understanding gets easier, but at first? It'll be a lot.
2hamnox2y
Agree it's overwhelming. Agree it'll get better if limited to relevant contexts and polished up. Agree the axes are difficult to distinguish from one another. True speech, truth-seeking speech, precisely specified speech, and accurately aimed speech are all distinctly important! buuuut they're strongly correlated, so the distinctions are usually only useful to point out on the extreme ends of the quality spectrum, or on very short comments. There's an axis? reaction? that is not quite muddled or conflict-seeking or missing the point or false, nor does it warrant skepticism or surprise. It's just... an ugh field. It's the category of too much text, too far outside my base context, too ugly, too personally triggering, too why-should-I-even-try. My browser does not display the skepticism or enthusiasm icons, so I too have great difficulty identifying their meaning.
2MondSemmel2y
Good point. I would not consider all those quite the same axis, but they're sure orthogonal to the axes we have here. Here are some potential word pairs to name this axis: Energizing/Inspiring<->Exhausting, Enjoyable/Joyful<->Ugh, Polished<->Mess(y)/Unfocused/Meandering.
2hamnox2y
if i had to redesign the system right now based on these thoughts, I'd go for 3 sections of feedback. First, reactions: Skepticism, Enthusiasm, Surprise, Empathy, Ugh, Wrath. Second, upvote/downvote. Third, rubric breakdown. this is collapsed by default, if you voted Strong in either direction then it automatically opens.
  • False | True
  • Muddled | Clear
  • Irrelevant* | On the Mark
  • Seeds Discord | Truth Converging
  *possible alternative: out of bounds?
2MikkW2y
Personally, I don't find these 4 axes to be too much to handle. I don't necessarily agree that the axes have to be very orthogonal. The point of this system is to promote LW's desired culture of seeking truth, so it makes sense that the axes are going to have that all in common. The important thing is that each axis should have some significance that is not communicated by any of the other axes, which I feel at least 3 of the 4 axes accomplish ("true" is about whether something is actually true; "clarity" is about how well the thoughts are expressed, regardless of the truth; "seeking" is about demonstrating proper epistemic hygiene, which overlaps slightly with clarity, but clarity is more about having a line of thought that can be followed, with less emphasis on the quality of the tools of reasoning, while truth-seeking emphasizes using tools that give good results, with less focus on how clear their use is, or the actual resulting thesis). I'd say I have the hardest time distinguishing "aim" from "truth", because ultimately something that hits the mark is true, though "misses the point" seems not quite the same as "false". Actually, now that I think about it, "hits the mark" and "misses the point" don't really feel complementary to me: 'hits the mark' is basically about agreement, while 'misses the point' seems to be more about how well the thoughts in a comment understand and relate to the conversation it is a part of. I would maybe suggest adjusting "hits the mark" to also be on this axis, highlighting not just truth, but relating to the broader context of the conversation in a good way.
8MondSemmel2y
Bug: So apparently in my Android smartphone's Firefox browser (v95.2.0), the Skepticism emoji is rendered as a grey rectangle. The other three emojis display fine, though they look different than on my desktop Firefox. (Which means they probably look different in other browsers too, right? It seems slightly weird to let browsers decide part of the aesthetics of one's website.)
4MondSemmel2y
Trying new voting systems in open threads is a fine idea, but it has the unfortunate side effect of crowding out normal Open Thread comments. That seems bad to me, since these threads have a legitimate purpose of being the place where new users can introduce themselves, ask basic questions, etc.
4Adam Zerner2y
My biggest thought is that the bar for experimenting is a lot lower than the bar for, say, committing to this site-wide for 12 months. And with that said, it's hard for me to imagine this not being promising enough to experiment with. Eg. by enabling it on select posts and seeing what the results and feedback are.
4Ruby2y
My first experience trying to react to your comment was feeling that none of the axes felt applicable, but then "enthusiasm" did capture how I felt; however, the icon for it felt discordant since I associate party-hat/streamers with "celebration" rather than "enthusiasm". I don't know what icon I'd use for enthusiasm.
6jimrandomh2y
Some possibilities: clapping hands, sparkles, smiling face with open hands, 100. I'm a bit worried about having too many yellow-circle-face icons, because they're hard to tell apart when they're small.
[-]Ruby2y19
4Truth
12Aim
2Clarity
🤨 2
🎉 2
❤️ 6

I think for any lasting system I want zero yellow-circle-face icons, if for no other reason than to preserve LessWrong's aesthetic (or my sense of it).

[-]TurnTrout2y10
4Aim
2Seeking
🤨 1
🎉 2

In addition, maybe any emoji should be grayscale so as to be less distracting?

3Pattern2y
A user setting to toggle off the grayscale would be useful as well, though it makes things more complicated.
1MikkW2y
I generally think having toggles is good design, as long as things function well for everybody without needing to use toggles
7Ruby2y
I notice that I am uncertain how to interpret the "hearts" on this comment. Do they mean that people have empathy for my feeling or that they "love" this comment (a la strong upvote) with the meaning that heart reacts have in other places?
4hamnox2y
I interpreted it as "vibing with this" or "mood". Feeling a moment of connection with another human being through their words, either because it matches your own experience or because they painted a foreign vista vivid enough to inhabit.
1MikkW2y
I agree that ❤️ and "empathy" don't really match with each other
1Alex Vermillion2y
The heart matching "empathy" makes me think it's intended to show emotional support for someone, like if I had a post about my dog dying or something. You might not "agree" since there might not be anything factual going on, but it would still be nice to be able to somehow let me know you noticed.
9Gunnar_Zarncke2y
Don't use 100 for enthusiasm. Numbers should be reserved for numeric stuff, e.g. 100% of something.
2hamnox2y
I am getting rectangle boxes for both Enthusiasm and Skepticism.
1MikkW2y
I think sparkles works well, clapping hands is also okay (and I'm actually personally fine with the 🎉 icon). 100 doesn't feel like it matches "enthusiasm" very well. Hugging face kinda works, but I prefer the others (aside from 100), and I agree that yellow faces should be minimized (though I'm probably less opposed to it than Ruby)
4Gunnar_Zarncke2y
I also used Enthusiasm, though mostly because of the post, not the comment. I am delighted to see a voting mechanism added. I missed the old polling feature from LW 1.0. 
3John_Maxwell2y
I wrote a comment here arguing that voting systems tend to encourage conformity. I think this is a way in which the LW voting system could be improved. You might get rid of the unlabeled quality axis and force downvoters to be specific about why they dislike the comment. Maybe readers could specify which weights they want to assign to the remaining axes in order to sort comments.

I think Agree/Disagree is better than True/False, and Understandable/Confusing would be better than Clear/Muddled. Both of these axes are functions of two things (the reader and the comment) rather than just one (the comment), and the existing labels implicitly assume that the person voting on the comment has a better perspective on it than the person who wrote it. I think the opposite is more likely true -- speaking personally at least, my votes tend to be less thoughtful than my comments.

Other axis ideas: constructive/nonconstructive, important/unimportant. Could also try a "thank" react, and an "intriguing" or "interesting" react (probably replacing "surprise" -- I like the idea of reinforcing novelty but the word "surprise" seems like too high of a bar?). Maybe also reacts for "this comment should've been longer/shorter"?
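
A minimal sketch of the reader-weighted sorting idea above, assuming each comment carries per-axis scores; the type and function names here are hypothetical, not an existing LW API.

```typescript
// Hypothetical sketch: sort comments by weights the reader chooses per axis.
interface AxisScores {
  truth: number;
  aim: number;
  clarity: number;
  seeking: number;
}

interface ScoredComment {
  id: string;
  scores: AxisScores;
}

function weightedScore(scores: AxisScores, weights: AxisScores): number {
  return (
    scores.truth * weights.truth +
    scores.aim * weights.aim +
    scores.clarity * weights.clarity +
    scores.seeking * weights.seeking
  );
}

function sortByReaderWeights(
  comments: ScoredComment[],
  weights: AxisScores
): ScoredComment[] {
  return [...comments].sort(
    (a, b) => weightedScore(b.scores, weights) - weightedScore(a.scores, weights)
  );
}

// A reader who mostly cares about clarity might sort with:
// sortByReaderWeights(comments, { truth: 1, aim: 1, clarity: 3, seeking: 1 });
```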
2MikkW2y
I wish the buttons for 'truth' said 'agree / disagree' instead, because while sometimes the truth is objective, other times comments are more subjective, and I desire to communicate that I disagree ('false' feels more icky, because I feel they are honestly communicating what they feel, I just don't agree with their perspective). In the case that some people say 'true', and an equal number say 'false', I would appreciate it if the 'truth' box (same for the other boxes) was still visible and said 'truth 0', instead of disappearing. That way, one can distinguish between no box (which means no one has expressed an opinion) and a divided opinion.
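
A small sketch of the display rule being asked for here, distinguishing "no votes on this axis" from "votes that cancel out"; the field names are assumptions about the data model, not the site's actual schema.

```typescript
// Hypothetical rendering rule: hide an axis only when nobody voted on it.
interface AxisTally {
  netScore: number;  // sum of weighted up/down votes on this axis
  voteCount: number; // how many votes were cast on this axis at all
}

function axisLabel(name: string, tally: AxisTally): string | null {
  if (tally.voteCount === 0) return null; // no votes: hide the box entirely
  return `${name} ${tally.netScore}`;     // a divided opinion still shows, e.g. "Truth 0"
}
```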
2hamnox2y
Polar opposite opinion on the truth buttons. Agree / Disagree is not a relevant axis of quality on LessWrong. True / False is so relevant when a comment contains explicit or implicit claims about reality to fact-check. I tentatively infer that the use case you're thinking of is some kind of Quick Poll, where someone shares subjective anecdata and others can quickly chime in with whether their anecdata is alike or in contrast to that example. This would be an incredibly valuable tool; I really want to have that. What I don't want is to have that tool in place of a quality control system.
2Yoav Ravid2y
Wasn't clear to me what "aim" was referring to. Based on seeing "seeking" too I can guess that it's for hits the mark / misses the point, but otherwise it's not clear.
1MikkW2y
I believe 'aim' refers to "hits the mark / misses the point"
1Joe Rocca2y
I really like this experimentation. Some thoughts:
  • Regarding finding the ideal set of axes: I wonder if it would make sense to give quite a few of them (that seem plausibly good), and then collect data for a month or so, and then select a subset based on usage and orthogonality. Rather than tentatively trying new axes in a more one-by-one fashion, that is. You'd explicitly tell users that the axes are being experimented with, and to vote on the axes which seem most appropriate. This might also be a way to collect axis ideas - if the user can't find the axis that they want, they can click a button to suggest one. Relying on the in-the-moment intuitions of users could be a great way to quickly search the "axis space".
  • I really like the "seeks truth/conflict" axis. A comment has an inherent "gravity" to it which makes it inappropriate/costly for pointing out "small" things. If a comment is very slightly hostile, then there's a kind of social cost to pointing it out, since it really isn't worth a whole comment. This results in a threshold under which incivility/conflict-seeking can simmer - being essentially immune to criticism.
  • One weird experiment that probably wouldn't work, but which I'd love to see, is for the reactions to be more like a tag system, where there are potentially hundreds of different tags. They're essentially "quick comments", and could be quite "subtle" in their meaning. It would be a bit like platforms that allow you to react with any emoji, except that you can be much more precise with your reactions - e.g. "Unnecessary incivility" or "Interesting direction" or "Good steelman" or "Please expand" or "Well-written" or "Hand-wavy" or "Goodhart's Law" (perhaps implying that the concept is relevant in a way that's unacknowledged by the author). There could also be some emergent use-cases with tags. For example, tags could be used as a way for a commenter to poll the people reading the comment by asking them to tag a digit between 1 and 5, for

Twitter has announced a new policy of deleting accounts which have had no activity for a few years. I used the Wayback Machine to archive Grognor's primary twitter account here. Hal Finney's wife is keeping his account alive. 
I do not know who else may have died, or cryo-suspended, over the years of LW; nor how long the window of action is to preserve the accounts.
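
For anyone who wants to archive other accounts the same way, here is a rough sketch using the Wayback Machine's public "Save Page Now" URL. Treat it as an assumption-laden sketch: the endpoint may rate-limit requests or require login for some capture options.

```typescript
// Request a snapshot of a page via https://web.archive.org/save/<url>.
// Assumption: an unauthenticated GET still triggers a capture.
async function archiveUrl(url: string): Promise<void> {
  const response = await fetch(`https://web.archive.org/save/${url}`);
  if (!response.ok) {
    throw new Error(`Archive request failed with status ${response.status}`);
  }
  console.log(`Requested snapshot of ${url}`);
}

// Example (hypothetical account name):
// archiveUrl("https://twitter.com/SomeAccount");
```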

[-]Adam Zerner2y12
2Truth
2Clarity
🎉 5
❤️ 2

Feature idea: Integrating https://excalidraw.com/ into the editor so that users can quickly and easily draw sketches and diagrams. I have been doing so a little bit, eg. this diagram in this post.

I'm a big fan of visual stuff. I think it is pretty useful. And their GitHub repo says it isn't that hard to integrate.

Try out @excalidraw/excalidraw. This package allows you to easily embed Excalidraw as a React component into your apps.
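
Under that assumption, an embed might look roughly like the sketch below; the exact export shape and props depend on the @excalidraw/excalidraw version, so this is a sketch rather than a working LW integration.

```tsx
// Illustrative embed only; check the package README for the current API.
import React from "react";
import { Excalidraw } from "@excalidraw/excalidraw";

export function CommentSketchpad() {
  // The canvas fills its parent, so give the container an explicit height.
  // A real integration would also need to export the drawing as an image
  // and attach it to the comment editor.
  return (
    <div style={{ height: 400 }}>
      <Excalidraw />
    </div>
  );
}
```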

2Donald Hobson2y
It's a vector drawing package. I don't think it's the best one out there. There are quite a few pieces of software people might use. A majority of the users won't use this feature because their favourite software is better in some way. I mean if you have unlimited developer time, go for it. But I think the delta usability / delta dev time is low. The software needs to have all the basic features and have a good UI before it starts offering any advantage at all. And half your users will still use other software, because that's what they know how to use, or it has some obscure feature they like.
2Adam Zerner2y
I'm not clear on what you mean here. It sounds like you are saying that even if Excalidraw was integrated into the LW text editor, users would still find their favorite drawing software and use it instead. But that almost never happens currently, so I don't see why adding Excalidraw would change that. If what they said in the docs is correct, it wouldn't actually require too much dev time. Usability-wise, I've found Excalidraw to be one of the rare pieces of software that is intuitive to use out of the box, and doesn't have much of a learning curve.
2Donald Hobson2y
From my experience of making a few diagrams for my posts, most of the work is in thinking what you want to represent, and making it. Not in saving and uploading a file. So I am predicting the rise in diagrams to be fairly small. Maybe as much as 50% more.  Ok, if you are planning to copy and paste existing software, that could involve less dev time. Or it could be integration hell. And the learning curve on it is pretty shallow. Such a feature would encourage more quick scribbly diagrams.
2Adam Zerner2y
My prediction is that the rise in diagrams would be much larger, based on the following model. 1) Making diagrams is currently not a thing that crosses people's minds, but if it were an option in the text editor it would cross their minds. 2) Having to save and upload a file is a trivial inconvenience that is a large barrier for people.
2Donald Hobson2y
You may have a point. The trivial inconvenience effects are probably large for some people.
2NunoSempere2y
Woah.
[-]Ruby2y12
-11Truth
24Aim
-17Clarity
14Seeking
🤨 17
🎉 17
❤️ 10
😮 15

This is a test comment. You may react to it with impunity! Vote at will!

5MondSemmel2y
Oh, apparently you can (strong-) upvote all new categories for your own comments. What a deeply true, clear, and truthseeking comment this is. And boy does it hit the mark!
1Alex Vermillion2y
Darn, I tried to strong surprise-react and it didn't take. Color me... surprised
[-]MikkW2y10
-2Truth
6Aim
6Seeking

I'd appreciate it if clicking on the regular upvote / downvote didn't open the more complex dialog, and rather just did a simple up / down vote, and instead there was a button to access the more detailed voting. That way, by default, voting is easy and I can ignore the more nuanced system unless I deliberately wanted to use it.

(Also, since we're on the topic of the voting UI, I've mentioned to multiple members of the LW team that strong upvoting is broken on iPad, since the OS says long press = select text. On iPhone, a different gesture is used, but it's activated based on screen size, so it doesn't work on iPad. This should be easily fixable by simply adding a check for OS that makes the double-tap always work on iOS (though things are often not as simple as one may expect). I'm a little frustrated that this hasn't been fixed yet, though I also understand that dev resources are limited)
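
A sketch of the kind of platform check described above; the detection heuristics and gesture names are illustrative guesses, not the site's actual code.

```typescript
// Choose the strong-vote gesture by platform rather than by screen width.
function isIOSLike(): boolean {
  const iPhoneOrIPad = /iPad|iPhone|iPod/.test(navigator.userAgent);
  // iPadOS Safari reports itself as a Mac but still exposes multiple touch points.
  const iPadAsMac = navigator.platform === "MacIntel" && navigator.maxTouchPoints > 1;
  return iPhoneOrIPad || iPadAsMac;
}

function strongVoteGesture(): "double-tap" | "long-press" {
  // Long-press conflicts with text selection on iPad, so prefer double-tap there.
  return isIOSLike() ? "double-tap" : "long-press";
}
```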

[-]steven04612y9
1Truth
1Clarity
2Seeking
❤️ 1
😮 1

It took a minute to "click" for me that the green up marks and red down marks corresponded to each other in four opposed pairs, and that the Truth/Aim/Clarity numbers also corresponded to these axes. Possibly this is because I went straight to the thread after quickly skimming the OP, but most threads won't have the OP to explain things anyway. So my impression is it should be less opaque somehow. I do like having votes convey a lot more information than up/down. I wonder if it would be best to hide the new features under some sort of "advanced options" in... (read more)

[-]Measure2y9
7Truth
1Aim
1Seeking
🤨 3
❤️ 1

It's easy to write a comment that's net positive overall. It's hard to write one that's separately net positive on each axis. I expect a system like this would lead to me spending more time crafting my comments and posting fewer (better) comments overall.

[-]iceman2y7
-1Truth
2Clarity
-2Seeking
🎉 2

After this and the previous experiments on jessicata's top level posts, I'd like to propose that these experiments aren't actually addressing the problems with the karma system: the easiest way to get a lot of karma on LessWrong is to post a bunch (instead of working on something alignment related), and the aggregate data is kinda meaningless and adding more axes doesn't fix this. The first point is discussed at length on basically all sites that use upvotes/downvotes (here's one random example from reddit I pulled from Evernote), but the second isn't. Give... (read more)

3MondSemmel2y
I don't think it's a problem that people can get karma by posting a bunch? The only reward a user gets for having tons of karma is that their votes are worth a bit more; I don't know the exact formula, but I don't expect it to be so egregious that it would be worth farming karma for. And it's certainly not the intention on the content-agnostic Less Wrong website that alignment posts should somehow be privileged over other content; that's what the Alignment Forum is there for.

As I understand it, just like on Reddit, the primary goal of the karma system is content discoverability - highly upvoted content stays on the frontpage for longer and is seen by more people; and similarly, highly upvoted comments are sorted above less upvoted comments. Upvoting something means stuff like "I like this", "I agree with this", "I want more people to see this", etc. However, this breaks down when people e.g. want to indicate their appreciation (like an act of courage of speaking out), even if they believe the content is low quality or something. In that case, it seems like one voting axis is obviously not enough.

I understand that sockpuppeting and vote manipulation is a big problem on Reddit, but why do you think it is a relevant problem on LW? I'd expect this kind of thing to only become an important problem if LW were to get orders of magnitude more users.
2iceman2y
The only formal reward. A number going up is its own reward to most people. This causes content to tend closer to consensus: content people write becomes a Keynesian beauty contest over how they think people will vote. If you think that Preference Falsification is one of the major issues of our time, this is obviously bad. I mentioned the Eugene Nier case, where a person did Extreme Botting to manipulate the scores of people he didn't like, which drove away a bunch of posters. (The second was redacted for a reason.)
1MikkW2y
I hadn't seen the experiments on Jessicata's posts before, and I assume others haven't either, so here's a link to one of the posts featuring the experiment. (It's a two-axis thing, with 'overall' and 'agreement' as the two axes. Part of me prefers that setup to the one used in this experiment.)
[-]Jon Garcia2y7
6Truth
6Aim
1Clarity
4Seeking
🎉 2
❤️ 1

I like the idea of using the Open Thread for testing new karma systems.

Adding multidimensionality to it certainly seems like a good idea. In my experience, karma scores on comments seem to be correlated not just to quality of content but also to how well it aligns with the community narrative, to entertainment value, to the prior status of the commenter, and even to the timing of the comment relative to that of the post. Disentangling these would be helpful.

But then, what is it we really want karma to represent? If community members are not vigilant in ho... (read more)

[-]MondSemmel2y6
1Truth
3Clarity
2Seeking
🎉 1
😮 1

Some tiny bugs on this Walled Garden LW page:

  • Clicking on the "Walled Garden" banner leads to this URL, which doesn't exist.
  • The second sentence here is incomprehensible: "If you have an event invite link, please use that to enter. If you have been granted full-access, to log in."
[-]Rafael Harth2y5
2Clarity
2Seeking

Some thoughts on the individual axes:

  • True/False -- this seems not always applicable and similar to agree/disagree in the cases where it is? Could be missing something, but I don't think I like this.
  • Hits the Mark/Misses the point -- SlateStarCodex had the "two of {true/kind/necessary}" policy; going off topic was fine as long as it was kind and true. My impression has been that something like this is true on LessWrong as well, which seems good? I don't think this one is a good idea at all, at least not if people vote on it all the time.
  • Clear/Muddled --
... (read more)
[-]hamnox2y4
🎉 1

Epistemic Status: groping around at an event idea I hope others are interested in

I don't know how to communicate this yet, but there's a ritual I want to do with friends this summer. The following describes some inspirations and gestures toward the general aesthetic.

  • It was part of my step-family's lore to learn camping skills and wilderness survival, at one point even giving little "merit badges" for demonstrating mastery. With a similar spirit they would also host summer 'art shows' where we'd learn about a different culture and put things we made that
... (read more)

I don’t like this voting feature on mobile. It makes it impossible to press the normal vote arrow without zooming in because I keep fat-fingering something other than the regular vote arrow.

[-]Chris_Leong2y4
1Truth
❤️ 1

I'm not a fan of this voting system tbh. I guess I just find it too distracting.

Reaction-ballot voting has a "you make what you measure" feel to me.

  1. You make what you measure.

I learned this one from Joe Kraus. [3] Merely measuring something has an uncanny tendency to improve it. If you want to make your user numbers go up, put a big piece of paper on your wall and every day plot the number of users. You'll be delighted when it goes up and disappointed when it goes down. Pretty soon you'll start noticing what makes the number go up, and you'll start to do more of that. Corollary: be careful what you measure.

http://www.paulgraham.

... (read more)
1Alex Vermillion2y
On the contrary, I wonder if this might be useful in highlighting "true but conflict-seeking" things or whatnot. When I see a user with -10 because they were being a jerk, maybe now they could be at -20 conflict and +10 truth. To note: I do kind of expect people to (accidentally?) correlate on the axes (like a halo-effect sort of thing), but the current system FORCES that at all times, so I think it would still be better to be 75% correlated instead of 100%
[-]Gunnar_Zarncke2y4
2Aim
❤️ 1
😮 1

I love it. 

I reviewed my voting proposal and see my suggestions covered except for the checkmark suggestion.

4hamnox2y
I read through your proposal, and I don't understand how all your suggestions are covered. Can you run through which of your proposed elements
  • lightbulb: is used for surprising or insightful information
  • exclamation mark: is used to warn about something that requires attention
  • question mark: flags open questions that should be answered
  • trend (up/down): information about a general positive/negative trend
  • checkmark: different from an up-vote; indicates that something was completed and does not need further attention
you see as corresponding to which elements in the current setup?
2Gunnar_Zarncke2y
  • lightbulb: surprise
  • exclamation mark: Skepticism or Hits the Mark
  • question mark: Seeks Truth
  • trend (up/down): Up/Down - if indicated by the OP to be used that way - though I agree that messes with the karma system
  • checkmark: missing

Reflecting a bit more on it, I think my original excitement must have clouded my judgment: I no longer think that the mapping is obvious or even halfway clear.
[-]nmehndir2y3
2Seeking
❤️ 2

[deleted]

2Dirichlet-to-Neumann2y
I'd be interested, although I'm in France, UTC+1, so it may be a bit difficult to arrange a meeting twice a day. I'm a PhD student in mathematics.
1nmehndir2y
PM'd!

I kind of feel like there should be a funny/not funny axis. Sometimes I read a good joke or a fun take in a comment, and I would like to signal I liked it, but the overall karma does not seem like a good way to signal that.

Also, "true" and "hits the mark" do not seem orthogonal to me. Can something be false and still get the point?
 

[-]Pattern2y3
6Aim
🎉 1
❤️ 1

1. Experiments

The missed opportunity to be able to vote on the post itself this new way stands out - so I'll put it here: 🎉!


2.

The old voting problem is still present: Roughly, the longer the comment (or post), the more likely the same person has different views of parts.


3.

The (modular) anonymity aspect (and the dichotomy aspect) limit some combinations as being clearly from the same person.


Seeks Truth/Seeks Conflict

Combine both and you can get something. Seeks Cruxes? (Currently blocked by exclusive restriction.)


Skeptical + Enthusiastic*

Not quite the same... (read more)

Another weird bug, found here. Also see the comment by Zack_M_Davis.

The bug: this is an intentionally broken unclickable link. It's supposed to link to "http:// asdf", and it seems like leaving a space in it is enough to make it unclickable.

[-]MondSemmel2y2
2Clarity

Tiny bug with the LW 2020 Review progress bar: The dates on the progress bar disappear depending on horizontal window size. At its full size, the bar ends on "Feb 1st"; at a slightly smaller window size, the bar ends on "Final Voting", with "Feb 1st" out of view; and once the window is small enough, and the progress bar is displayed at the top, then it no longer displays dates at all.

Here is a gif of the problem.

epistemic status: felt sense poetry

Think about a tree. A tree with roots going deep into the ground, and leaves spread out to catch as much sun as it can. Hold that tree in mind.

We often dream of leaving the earth and solar system under our own power. It's an important goal. It's not, however, immediately achievable. We are, for now, tied to this pale blue dot. Sol III, Terra, the world that birthed us. And when we do leave, we will take much of it with us. Some of it we will take intentionally, because we're sentimental like that. But some we will take in... (read more)

[-]MondSemmel2y2
2Clarity

Bug: In this post, there's one footnote, but its return-from-footnote link does not work. That is, when I click on it, the browser screen doesn't move back to the footnote link. However, when I load that return-from-footnote link in a new tab, it does correctly center the screen on the footnote link.

[-]gjm2y2
2Seeking
🎉 1

My immediate (kneejerk System-1-ish emotional) reaction to the experimental voting popup is along the lines of "meh, too much effort, won't bother".

My slightly less immediate reaction turns out to be much the same. So e.g. I think "I should give this a try", take a look at the comment currently presented to me first, and ... well, there are now 9x as many things to decide about it as before (overall opinion, 4 "axes", and 4 possible reaction emoji), and all but the first feel as if they require substantially more mental work to arrive at a useful opinion a... (read more)

[-]gjm2y2
6Aim
❤️ 1

I'm not sure what it says about LW that in the current Open Thread there is only one comment that isn't either (1) about the voting-system experiment or (2) deleted.

(And I slightly suspect that that one comment may have been inspired at least slightly by the voting-system experiment, not that there's anything at all wrong with that.)

I actually really enjoyed these voting axes.

I wouldn't be opposed to them being rewritten, but I really liked being able to separate these things out. I will say that not knowing from the overview whether or not I voted on an axis is annoying (like how you can see green or red arrows on a post when you regular-vote it).

[-]Zian2y1
-2Clarity

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8297542/ (bias: authors include employees of one of the companies being evaluated, Xpert) might help us choose between all the different tests floating around for sale. It was published in July 2021 and discusses products from the following firms:

  • Accula
  • BioFire
  • cobas
  • Cue
  • ID NOW
  • Lucira
  • Xpert
  • Visby

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7785428/ (Bias: "Cue Health provided readers and cartridges for the study.") was published in May 2021 and evaluates Cue Health.

[-]PeterL2y1
2Truth

I agree that voting might be a little bit annoying.

On the other side, it could potentially make the search for specific qualities of comment much easier if automated (by sorting). (E.g. "Now I am not in the mood for solving difficult concepts so I want something with high clarity evaluation." or "Now I am too tired to argue/fight so I want something empathic now.")

[-]Nnotm2y1
2Seeking
😮 1

Is there a post as part of the sequences that's roughly about how your personality is made up of different aspects, and some of them you consider to be essentially part of who you are, and others (say, for example, maybe the mechanisms responsible for akrasia) you wouldn't mind dropping without considering that an important difference to who you are?

For years I was thinking Truly Part Of You was about that, but it turns out, it's about something completely different.

Now I'm wondering if I had just imagined that post existing or just mentally linked the wrong title to it.

3hamnox2y
I don't remember reading anything like that. If I had to make a wild guess of where to find that topic I'd assume it was part of the Luminosity sequence.
1Nnotm2y
I haven't read the luminosity sequence, but I just spent some time looking at the list of all articles seeing if I can spot a title that sounds like it could be it, and I found it: Which Parts are "Me"? - I suppose the title I had in mind was reasonably close.
[-]MikkW2y1
2Clarity

The UI for the reactions works pretty well on iPhone, the only issue is that it's tricky to dismiss the dialog, though it can generally be done with less than 10 seconds of fiddling (usually closer to 1 or 2 seconds). If there was a button to dismiss the dialog, that could make it a lot smoother to use (and should work well on other platforms as well, even if it's not strictly needed on other platforms)
