Can someone please provide hard data on trolling, to assess its kind and scale? I can only remember a single example of repeated apparent trolling - comments made by private_messaging and presumed sockpuppets. I'm not very active though, and miss many discussions while they're still unvoted-on.
Seconded. The first time that I saw any indications of LW having a troll problem was a couple of days back, when people started complaining about us having one.
This rule is asinine.
If I see a post at -3 that I desire to reply to, I am incentivized to upvote it so that I may enter my comment.
Furthermore, it stifles debate.
Look at this post of Eliezer's at -19. Under the new system, the worthwhile replies to that post are not encouraged; instead of expressing their disagreement, people will simply not want to reply. The negatives of this system grossly outweigh any upsides.
I have not noticed a worsening trolling problem. Does anyone have any evidence of such a claim?
Problem:
General signal to noise. Tags on articles are used very badly. This makes finding interesting content on a topic harder than it needs to be.
Idea:
Let's let users with enough karma edit tags on articles! Seriously, why aren't we doing this already?
It would make research for writing new articles easier. It would help people interested in a particular topic read more about it. I use tags a lot, and even as poorly used as they currently are, I've found a lot of interesting material through them.
Also most importantly it would be a step towards better indexing.
Instead of trying to stop noise, you can filter it. Instead of designing to prevent errors, you can design to be robust to them.
I'll repeat something I said in the other thread:
To the extent that all the griping over signal to noise is about a desire to control what you see, and not control what others see or say, there are decades-old solutions to discussion filtering. The fancy shmancy Web has been a marked devolution of capabilities in this regard. It's pitiful. No web discussion forum I know of has filtering capabilities even in the ballpark of Usenet, which was available in the 80s. Pitiful.
I also suggest that any solution which is not fundamentally about user customization is a failure under my assumption above, because one man's noise is another man's signal.
You've made me understand the root of one of my own dissatisfactions with the current system. If I look through my post history and roughly group my posts into bins based on how I would summarize them, this is what I see:
Silly posts in the HP:MOR threads: ~ +20 karma
Posts of mine having little content except to express agreement with other high-karma posts: ~ +10 karma
Important information or technical corrections in serious discussions: ~ +1 karma
Posts in which I try to say something technical, which I retrospectively realize were poorly worded but could have been clarified if someone had pointed out an issue instead of just downvoting: ~ -5 karma
Perhaps I exaggerate slightly but my point is that if I were to formulate a posting strategy aimed at obtaining karma, then I would avoid saying anything technical or discussing anything serious and stick to applause lights and fluff.
On top of this, I tend to watch how the karma of my most recent comments behaves, and so I notice that, for example, a comment might have +5 upvotes and -3 downvotes, with no replies. This is just baffling to me. Was there something wrong with the post that three people noticed? Were the three separa...
Perhaps I exaggerate slightly but my point is that if I were to formulate a posting strategy aimed at obtaining karma, then I would avoid saying anything technical or discussing anything serious and stick to applause lights and fluff.
That's about right. Also, stick to high traffic threads. Hit the HPMOR threads hard!
Just as I pointed out that people want different things out of the list, you finish by pointing out that the karma votes themselves are clearly used differently by different people. They're also used to a different extent by different people.
One nice thing that Slashdot does is limit your karma votes. That keeps individual Karma Kops from having a disproportionate effect on total score. But I don't think the Slashdot system of multiple scores is that helpful.
From my experience in the grand old days of Usenet, the most useful filters were on people, and the important ease of use features were a single screen view of all threads, expand and contract, sort by date or thread, and sort by date for a subset of threads.
The recent implementation of a -5 karma penalty for replying to comments that are at -3 or below has clearly met with some disagreement and controversy.
How about we wait a couple of weeks to try the new feature instead of jumping up in outrage and proposing even more complicated schemes?
I'd be in favor of an official "no complaining about feature X for the first two weeks" rule, after which a post could be created for discussion. That way the discussion could be about what actually happened, and not about what people imagine might happen.
It's not as if two weeks of using an experimental feature were some unbearable burden.
I'm much more comfortable with this sort of intervention as a "We think this will improve the forums, let's test this for a month or two" rather than a "LessWrong sucks but this will fix it guys, trust us."
It's not a huge burden, but we're already seeing some negative effects worth discussing. For example, I have now twice paid the 5 karma penalty for replying to downvoted comments that were not trolling at all; they were downvoted because people disagreed with what they were proposing.
If we decide to wait two weeks, we need to decide on specific criteria that we will judge in two weeks' time to decide whether to modify or remove the new feature. If the new feature stays anyway because a few people decide unilaterally, then we might as well discuss it now.
Wiki
We would benefit from more and better wiki articles since they seem the best way to compress information that is often scattered across several articles and dozens of comments. This should help us maintain our level of discussion by making it easier to bring users up to speed on topics.
I used to think the most straightforward fix would be:
Eliminate the trivial inconvenience of creating a separate account for the wiki. Let's just make it so you use your LW login. Also, perhaps limit edits to people with more than 100 karma, since I hear they had some problems with spamming.
Let people upvote and downvote edits. Let karma whoring work for us!
But when I talked about this on IRC with gwern, he thought it probably wouldn't do much good and isn't worth the effort to implement. What do fellow rationalists think might be a good way to encourage more quantity and quality in the wiki?
(as I noted in the buried thread)
The mental model being applied appears to be sculpting the community in the manner of sculpting marble with a hammer and chisel. Whereas how it'll work will be rather more like sculpting human flesh with a hammer and chisel. Giving rather a lot of side effects and not quite achieving the desired aims. Sculpting online communities really doesn't work very well. But, geeks keep assuming the social world is simple, even when they've been in it for years.
Why on Earth do people keep saying this? Sending out a party invite via email is a technical solution to a social problem, and it's great! For God's sake, taking the train to see a friend is a technical solution to a social problem. This phrase seems to have gained currency through repetition despite being trivially, obviously false on the face of it.
Burglar alarms, voting, Pagerank? Pagerank is definitely a very technological solution to a serious conflict of interest problem, and its effectiveness is a key driver of Google's initial success. Why would you expect technology not to be helpful here?
When I hear people say "you're proposing a technical improvement to a social problem", they are not cheering on the effort to continually tweak the technology to make it more effective at meeting our social ends; they are calling for an end to the tweaks. From what you say above, that's the wrong direction to move in. Pagerank got worse as it was attacked and needed tweaking, but untweaked Pagerank today would still be better than untweaked AltaVista. "This improvement you're proposing may be open to even greater improvement in the future!" doesn't seem like a counter argument.
In many instances, the technology doesn't directly try to determine the best page, or candidate; it collects information from people. The technology is there to make a social solution to a social problem possible. That's what we're trying to do here.
Consider the prior art, here. The first place I saw the "reply to any negatively-scored comment inherits the parent's score" concept was at SensibleErection. That policy has been in place there longer than LW has been a forum (possibly longer than reddit), so it seems to work for them.
Prior art for special "oldschool/karma-proven" sections: hacker news. Paul Graham is intensely interested in keeping a high-quality forum going, and is very willing to experiment. Here's the normal frontpage, here's the oldschool view, and here's the recent members view. Hacker news also has several thresholds for voting privileges.
One more step HN took is hiding comment scores, while continuing to sort comments from highest to lowest. It's dramatic, almost draconian, but it definitely had an effect on the karma tournament system.
Could someone please point out some examples of trolling to me? I find this discussion surprising because I perceived the trolling rate as low to non-existent. Perhaps I've frequented the wrong threads.
I'm not proposing a solution. I'm thinking about the problem for five minutes.
edit: Well, it didn't even take five minutes!
We need a reliable predictor of troll-nature. I mean, I'm not even sure that P( troll comment | at -3 ) is above, say, 0.25 - much less anywhere high enough to be comfortable with a -5 penalty.
Of course, I'd be comfortable with asserting that P( noise comment | at -3 ) is pretty high, like 0.6 or something. Still not high enough to justify a penalty, in my opinion, but high enough that I can see how another's opinion might be that it justifies a penalty. If that is the case, well, the discussion is being severely negatively impacted by conflating noise and trolling.
I might go and figure out how to get some data off of LessWrong's commenting system, to try and determine a good indicator for troll-nature. (I don't plan to try and figure out noise-nature. That's the problem the Internet has faced for the last 15 years; I'm not that hubristic.) That in turn would put some numbers into this discussion. I don't know that arguing over how many genuine comments can be inadvertently caught in a filter is any better than arguing over whether there should be a filter at all, but to my mind it's more constructive.
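To make the estimate concrete, here is a back-of-the-envelope sketch of what that calculation would look like, assuming a hypothetical hand-labeled sample of comments (the data and field layout here are invented purely for illustration, not real LessWrong data):

```python
# Estimate P(troll comment | score <= -3) from a hypothetical labeled sample.
# Each record is (comment_score, is_troll); the sample below is made up.
sample = [
    (-5, True), (-4, False), (-3, False), (-3, True),
    (-1, False), (0, False), (2, False), (7, False),
]

# Restrict to comments at or below the -3 trigger, then take the troll fraction.
flagged = [is_troll for score, is_troll in sample if score <= -3]
p_troll_given_downvoted = sum(flagged) / len(flagged)
print(p_troll_given_downvoted)  # 2 trolls out of 4 downvoted comments -> 0.5
```

With a real labeled sample, this single conditional frequency is exactly the number the discussion above is missing: whether P(troll | at -3) is anywhere near high enough to justify a blanket penalty.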
Proposed solution: remove the karma penalty and do exactly the same thing we were doing before. That is, if someone is pretty sure that they will not benefit from reading the replies to a particular thread, they don't read them. No disincentives from posting such a reply needed. What is the problem with that system?
Edit: As of this edit, if one more person decides they don't like my comment, then no one can tell me why they don't like my comment without losing 5 karma. One of many reasons the new system is terrible.
To address the Big Bad Threads problem specifically (as opposed to other problems), what we need is ability to close threads in some sense, but not necessarily as a side effect of voting on individual comments.
For example, moderators could be given the power to declare a thread (or a post) a Toll Thread, so that making a comment within that thread would start costing 5 Karma or something like that, irrespective of what you reply to or what the properties of your user account are. This would work like a mild variant of the standard (but not on LW) closed thread feature.
I'm active (I read literally everything on Less Wrong, or at least skim) but I'm timid. I don't know what I am and am not supposed to be banning/editing, so I confine banning to spam and editing to obvious errors of formatting or spelling/grammar.
In June I asked Eliezer for moderation guidelines, since there has been an uptick in trolling and time-wasting, poorly-informed ranters, but he just said that he thought it needed a software fix (the recent controversial one).
You don't deter SuperTrolls. You ban them and move on. This is a very simple problem that you guys are vastly over-complicating.
Ban him and ostracize him socially.
You're right. It seems silly to say that nothing, with emphasis, will stop Will when banning him and any obvious sockpuppets hasn't even been tried. (This isn't particularly advocating that course of action, just agreeing that Luke's prediction is absurd.)
Warning: a rant follows!
The general incompetence of the replies to the OP is appalling. Fantastically complicated solutions with many potential harmful side effects are offered and defended. My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down. This reminds me of the many pieces of software I had a misfortune to browse the source code of: half-assed solutions patched over and over to provide some semblance of the desired functionality without breaking into pieces.
For comparison, I have noted a trivial low-risk one-line patch that would fix a potential exploit in the recent (and also easy to implement) anti-troll feature: paying with 5 karma to reply to comments downvoted to -3 or lower (patch: only if the author has negative 30-day karma). Can you do cheaper and better? If not, why bother suggesting something else?
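The proposed patch really is a one-line guard. A minimal sketch, assuming hypothetical names for the relevant quantities (this is not LW's actual codebase):

```python
def reply_penalty(parent_score, author_karma_30d, threshold=-3, penalty=5):
    """Charge the reply penalty only when the parent comment is at or below
    the threshold AND its author has negative 30-day karma.
    All names here are illustrative, not LessWrong's real code."""
    if parent_score <= threshold and author_karma_30d < 0:
        return penalty
    return 0

print(reply_penalty(-4, -12))  # downvoted comment by a negative-karma author -> 5
print(reply_penalty(-4, 50))   # downvoted comment by an established user -> 0
```

The extra condition is what closes the exploit: an established user whose comment is merely unpopular no longer drags repliers into the penalty.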
After a long time in the software business, one of the lessons I have learned (thanks, Steve McConnell) is that every new feature can be implemented cheaply or expensively, with very little effect on its utility. Unfortunately, I have not heard of any university teaching design simplification beyond using some Boolean a...
The proposals I have (not all of which are mutually exclusive) are:
Make comments within highly downvoted subthreads not appear on recent comments. Since the main problem with trolling is drowning out of recent comments, this will solve many of the issues. Moreover, it will discourage continued replies.
Have a separate section of the website where threads can be moved to or have a link to continue to. This section would have its own recent changes section. Moderators could move threads there or make it so that replies went to that section, and would be used for subthreads that are fairly downvoted. This has the advantage of quarantining the worst threads. This is a variation of an old system used at The Panda's Thumb which works well for that website.
Use the -5 penalty system but adjust either the trigger level or the penalty size. It isn't obvious that -3 and -5 are the best values for such a system, if it is a good idea at all. The fact is that -3 isn't that negative as comment scores go, so something like -3 can be obtained without saying that much about a comment's quality. -5 and -5, or -5 and -1, may be better values. The second would offer softer discouragement for more
Downvoted for putting more than one suggestion in a single comment.
Punish me for this anti-social act if you must, but as one of the dudes who tries to act after reading these suggestions (and tries hard to discount his own opinion and be guided by the community), this practice makes it much harder for me to judge community support for ideas. Does your comment having a score of 10 suggest 2.5 points per suggestion? ~10 points per suggestion? 15 points each for 3 of your suggestions and -35 for one of them (and which one is the -35?)?
Can we please adopt a community norm of atomicity in suggestions?
I think #1 is the way to go here, and the only method that will have any effect in most cases.
The recent implementation of a -5 karma penalty for replying to comments that are at -3 or below has clearly met with some disagreement and controversy. See http://lesswrong.com/r/discussion/lw/eb9/meta_karma_for_last_30_days/7aon .
Be sure to distinguish between the controversy surrounding Eliezer's provocative comments and the policy as he declares he wishes it, and actual disagreement with the implementation as it currently stands. I, for example, am tentatively in favour of the current implementation---but not in favour of the policy as he intends it to be implemented.
Under the new rule, if I reply to a post that is later downvoted to -3, am I docked the 5 points when that happens? No chance to opt in to the deduction at that point.
Surely this makes it very tough for a non-trolling user to figure out what was wrong with his post? Few people are going to explain it to him. You need to be familiar with LW jargon before you can expect to write a technical comment and not be downvoted for it, so this would very easily deter a lot of new users. "These guys all downvoted my post and nobody will explain it to me. Jerks. I'll stick to rationalwiki."
As it's currently implemented it appears that replies to the -3 comments still start at a rating of 0. Why not match the -5 karma and set the new comment's rating to -5 as well? This would be a strong disincentive to others replying to the new 0-rated comment and extending the thread at no cost.
This would be an improvement since then one's karma would still remain in principle obtainable by summing the karma of all one's comments and posts. But then, why have the arbitrary numbers -3 and -5? Wouldn't it be better if a reply to a negatively rated comment started at the same karma as the parent comment? Smooth rewarding schemes usually work better than those with thresholds and steps.
(I still don't support karma penalties for replies in general.)
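The smooth scheme suggested above is simple to state precisely. A minimal sketch, purely illustrative (LW's actual scoring code is not shown anywhere in this thread):

```python
def initial_reply_score(parent_score):
    """Smooth variant: a reply to a negatively rated comment inherits the
    parent's negative score as its starting point; replies to comments at
    zero or above still start at 0. Illustrative only."""
    return min(0, parent_score)

print(initial_reply_score(-7))  # reply to a -7 comment starts at -7
print(initial_reply_score(4))   # reply to an upvoted comment starts at 0
```

This removes the arbitrary -3/-5 thresholds: the disincentive scales continuously with how badly received the parent is, and total karma remains the sum of all of one's comment scores.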
Reading troll comments has negative utility. Replying to a troll means causing that loss of utility to each reader who wants to read the reply (times the probability that they read the troll when reading the reply).
That's exactly the kind of consideration that should lead people to downvote responses to "trolls." If you think someone is stupidly "feeding trolls," you should downvote them.
It seems that E.Y. is miffed that readers aren't punishing troll feeders enough and that he's personally limited to a single downvote. As an end-run around this sad limitation, he seeks to multiply his downvote by 6 by instituting an automatic penalty for this class of downvotable comment.
Nothing is so outrageously bad about troll feeding that it can't be controlled by the normal means of karma allocation. The bottom line is that readers simply don't mind troll feeding as much as E.Y. minds it; otherwise they'd penalize it more by downvotes. E.Y. is trying to become more of an autocrat.
Solution: Ban their IP addresses. This actually works, I'll tell you why. Not because they can't get new ones, but because they can't infinitely get new ones. If you've ever sought an unsecured proxy (a key way of obscuring your IP address) you'll know that it's tough to find a good proxy, they're slow, and they frequently leak your IP address regardless. Even programs like Tor only have so many IP addresses. To make it worse, (for them) it's no fun to use proxies that are far away - they're slow as all get out. This technique worked on spammers on a...
There should be a different discussion forum which is readable to all but can only be posted in by those with over, say 1000 karma. This solution seems "obvious" as we already have a robust karma system, and it's very difficult to acquire that much karma without actually being a good poster (I don't have that much and I've been here for over a year).
This system could lead to the use of the "open" discussion forum as a kind of training grounds where you prove your individual signal-to-noise ratio and "rationality" is high enou...
There should be a different discussion forum which is readable to all but can only be posted in by those with over, say 1000 karma.
Actually I'd find restrictions on who can or can't vote on the comments to be a more interesting option. What would a forum look like if only those with over 1000 karma on LW could vote?
I didn't originally propose this for LW in general, but a different forum or section. People can earn their LW karma elsewhere. But let us for the sake of this exchange suppose here we make this a general rule. I actually like it much more than what I had in mind at first!
It should be emphasised that the reverse of what you describe is constantly happening. It is easier and easier to amass 1000 karma as LessWrong grows. Comparing older to newer articles shows clear evidence of ongoing karma inflation.
There aren't that few people with karma over 1000; I'd guesstimate there are at least 100 of them. Many of those are currently active. But again, making it harder to get over 1000 karma in order to vote might be a good thing. A key feature of the Eternal September problem is that when newcomers to a community interact mostly with other new members, old norms have a hard time taking root. And since users take the karma mechanism, especially negative votes, so seriously, it is a very strong kind of interaction. Putting the karma mechanism in the hands of proven members should produce better posting quality. It somewhat alleviates the problems of rapid growth.
It also further subsidizes the creation of new articles. Recall your karma from writing a Main Article is boosted 10 fold.
The reverse is happening precisely because there are so many new users who are voting. I'd say that the way LW started out could be used as an estimate of what that would look like. It was very rare for a comment to reach as many as 5 upvotes, and if you see an old comment that has more than that, most likely it had help from someone more recently upvoting it.
I agree LW in, say, 2010 seems an OK proxy for what it would be like, with one key difference: posting Main articles is much more karma-rewarding than it was back then. Articles did get over 10 or 20 karma even back then.
We should remember that we don't really care how many of the lurkers become posters. Growing the number of users is not a goal in itself, though I think for some communities it becomes a lost purpose. What we actually care about is having as much high quality content that has as many readers as possible.
I would argue the median high quality comment is already made by a 1000+ user. In any case the limit is something we can easily change based on experience and isn't something that should be set without at least first seeing a graph of karma distribution among users.
On prior forums I have been on, attempts to split into restricted-posters and all-posters forums have ended badly.
When there are enough high-class posters, everything goes into the high-class forum and the open forum collapses, leaving no worthwhile "in" for new users. When there are too few high-class users, everyone double-posts to both forums in order to get discussion, and you wind up with a functional one-forum system, except with lots of links, more burden, and more top-level menus.
I have not seen an open/closed forum system with exactly the goldilocks number of high-class users to maintain stable equilibria in both forums.
I think we need to have a better way to separate true trolls (an admittedly loose category) from people who can be reasoned with and/or raise interesting points but are being downvoted for other reasons (like poor grammar/writing). Once we have this, we need to convince people to stop feeding the trolls. One way that I have seen proposed is tagged karma (e.g. +1 insightful, -1 trolling). Additionally, sockpuppets haven't been a major problem here, but given the role they tend to play in website decline, we should have a strategy for dealing with them in advance.
Two more variations on the penalty system:
Have the penalty dependent on history, e.g. replies to a comment with -3 score are penalised only if the original commenter has negative karma over the last 30 days (or maybe if the commenter has total karma less than 10 or 50 etc.). (also suggested by shminux)
Use comment score to compute the penalty, so a comment with -2 only takes 2 karma to reply to, while one with -10 takes 10. (Obviously some other proportionality factor could be used, or even a different relationship between comment karma and reply penalty.)
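The second variation is a one-liner in any implementation. A sketch under the same assumption as above (illustrative names, not LW's real code):

```python
def proportional_penalty(parent_score, factor=1.0):
    """Penalty proportional to how far below zero the parent sits:
    replying to a -2 comment costs 2 karma, to a -10 comment costs 10.
    A different factor gives a gentler or steeper slope. Illustrative."""
    return max(0.0, -parent_score) * factor

print(proportional_penalty(-2))        # -> 2.0
print(proportional_penalty(-10, 0.5))  # halved slope -> 5.0
print(proportional_penalty(3))         # upvoted parent -> 0.0
```

A proportional cost also removes the cliff at -3, where one extra downvote suddenly flips the reply cost from 0 to 5.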
Please keep your suggestions programmatically very simple. There are all sorts of bright ideas we could be trying, but the actual strong filter by which only a very few are implemented is that programming resources for LW are very scarce and very expensive.
(This filter is so strong that it's the main reason why discussion of potential LW features didn't, in advance, seem to me like a very public matter - most suggestions are too complicated, and the critical discussion is the one where SIAI/CFAR decides what we can actually afford to pay for. It hadn't occurred to me that anyone would dislike this particular measure, and I'll try to update more in that direction in the future.)
Can't you just not read the replies to downvoted comments? How is it hurting anybody when someone replies to a comment with a score at or below -3? I don't see a reason to disincentivize it.
If the problem is with spam in the 'recent comments' sidebar, then it seems like we should fix that. I would be on board with a rule that posts in hidden sub-threads don't show up in the 'recent comments' sidebar. If we can remove posts from the sidebar, then perhaps posts that drop to -3 should be removed from it as well.
If you want to nuke trolling, use the Metafilter strategy: new accounts have to pay $5 (once). Troll too much, lose your account and pay $5 for a new one. Hurts a lot more than downvotes.
This will deter some (a lot?) of non-trolls from making new accounts. It will slow community growth. On the other hand, it will tighten the community and align interests. Casual users don't contribute to Less Wrong's mission: we need more FAI philanthropist/activists. Requiring a small donation will make it easier for casual users to make the leap to FAI philanthropist/ac...
If you want to nuke trolling, use the Metafilter strategy: new accounts have to pay $5 (once).
I don't know if I would have made my account here if I had to pay $5 to do so. I would pay $5 now to remain a member of the community- but I've already sunk a lot of time and energy into it. I mean, $5 is less cost to me than writing a new post for main!
I am deeply reluctant to endorse any strategy that might have turned me away as a newcomer.
If I encountered an unfamiliar blog or forum and wanted to leave a comment, I wouldn't give my phone number to do so, even if it seemed quite interesting. Then I would probably leave the site.
An unintended side-effect: readers without credit/debit cards may find it harder to join the site. This disproportionately affects younger people, a demographic that may be more open to LW ideas.
Another unintended side-effect is that it may increase phyg pattern-matching. Now new recruits have to pay to join the site, and surely that money is being secretly funneled into EY's bank account.
That said, I think that on balance this is a good policy proposal. I also think that the similar proposal using phone verification is plausible, and doesn't run into the above two problems.
I don't think anyone at SI agrees with you about Less Wrong's mission. The site is supposed to be about rationality. There is hope (and track record) of the Less Wrong rationality community having helpful spinoffs for SI's mission, but those benefits depend on it having its own independent life and mission. An open forum on rationality and a closed board for donors to a charity aren't close substitutes for one another.
we need more FAI philanthropist/activists
Who is "we"?
I think the percentage of "casual" users who participate on this site because they enjoy intelligent conversations on rationality-related topics while having no FAI agenda is non-negligible. I suspect that reinforcing the idea of equality between LW and FAI activism will make many of them leave. It may be a net negative even if LW's mission is FAI activism as there are positive externalities of greater diversity of both discussion topics and participant opinions (less boredom, more new ideas, better critical scrutiny of ideas, less danger of community evaporative cooling, greater ability to attract new readers...)
Also, I don't like the idea of LW's mission being FAI activism. The header still reads "A community blog devoted to refining the art of human rationality", and I'd appreciate it if I could continue believing that description. Of course I realise that the owners of the site are FAI enthusiasts, but that's not true of the community as a whole. LW is a great rationality blog even without all its FAI/philanthropy stuff, not only for the texts already written, but also for the productive debating standards used here and the many intelligent people around. I would regret it if I had to leave, which I would if LW turned into a solely FAI-activist webpage.
Casual users don't contribute to Less Wrong's mission: we need more FAI philanthropist/activists.
The tagline is still "A community blog devoted to refining the art of human rationality". If you want FAI and philanthropy, you should I suspect be asking for those specifically up front.
If you want to nuke trolling, use the Metafilter strategy: new accounts have to pay $5 (once). Troll too much, lose your account and pay $5 for a new one. Hurts a lot more than downvotes.
It's a good idea. Some variations, like associating accounts with mobile phone numbers, may slow good growth less. Maybe it would help to have multiple options to signal being a legitimate new user.
Casual users don't contribute to Less Wrong's mission: we need more FAI philanthropist/activists.
I would like to see more x-risk philanthropists/activists, but I don't want to make that a requirement for LW users. It would be good to have more users who want to be stronger because they have something to protect, rather than thinking rationality is shiny.
associating accounts with mobile phone numbers
I don't have a phone, and if I did I would refuse to give it out in case someone did something horrible like call me. I'm not the only phone-hater around; we overlap with phone-hater demographics a fair amount.
Problem:
Karma inflation due to more users means old articles aren't as upvoted as they should be. Also, because they are old, they don't get read or updated as much as they should. We tried to at least correct for people not reading the sequences with reruns. It didn't exactly work.
Idea:
Currently karma earned from posting a Main article is boosted by a factor of 10. Let's boost the value of karma by a factor of 2, or some other low value, for any new comments on articles older than 2 years.
We tried to at least correct for people not reading the sequences with reruns. It didn't exactly work.
I also didn't like how they fragmented the commentary. Many found that a feature rather than a bug. I found it plain annoying that when reading an old article I had to do a search to see if there was any recent discussion in the rerun threads too.
I mean, surely eventually some of the things we wrote back in 2007 or 2009 will turn out to have been plain wrong, obsolete, or incomplete, right? It would be neat to see that noted at least in their comment sections.
Many people read through the sequences much like they would a textbook. We practically encourage them to do so. New, well-written comments on old articles might be very useful.
I am soon to post a well thought out solution to "endless September" that will cover this. It's nearly finished in my drafts right now.
Problem:
There are many, many polite or on-topic posts that are not very good, or even inane, which hover at 0 or 1 karma. For many readers they simply aren't worth the opportunity cost.
Idea:
Set the default visible level not to 0 but to 2 karma or some such number, much like people can currently set negative comments to unhidden. The exception to this should be when "Sort By" is set to "New".
The recent implementation of a -5 karma penalty for replying to comments that are at -3 or below has clearly met with some disagreement and controversy. See http://lesswrong.com/r/discussion/lw/eb9/meta_karma_for_last_30_days/7aon . However, at the same time, it seems that Eliezer's observation that trolling and related problems have over time gotten worse here may be correct. It may be that this is an inevitable consequence of growth, but it may be that it can be handled or reduced with some solution or set of solutions. I'm starting this discussion thread for people to propose possible solutions. To minimize anchoring bias and related problems, I'm not going to include my ideas in this header but in a comment below. People should think about the problem before reading proposed solutions (again to minimize anchoring issues).