PhD in Computer Science, Software Engineer, Climber, Go-player, socially awkward human being and rationalist in training.
Should we consider a mechanism to reduce conformity bias? For example, we could allow users to blind themselves to (the nature of) existing reactions until they choose to reveal them or react themselves.
Such a mechanism may come with its own drawbacks, of course. And it's possible I'm just overthinking this. But I hadn't seen the idea discussed yet, so I thought I'd bring it up.
Yes, definitely. If we want to be really rigorous about this, Context wouldn't be a mere logical predicate, but a probability mass function of some kind. And we'd want to sort the list by:
But it may not be worth the added complexity. At least not right away. :-)
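To make the idea concrete, here is a toy sketch of what such a sort might look like. Everything here is my own illustration (the item names, the fields, and the expected-value-per-unit-cost criterion are assumptions, not anything specified above):

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    cost: float       # effort/money to prepare this item
    reward: float     # benefit if the relevant context actually occurs
    p_context: float  # probability mass assigned to that context

def priority(item: Item) -> float:
    """One plausible (purely illustrative) criterion:
    expected reward per unit of cost."""
    return item.p_context * item.reward / item.cost

items = [
    Item("water supply", cost=1.0, reward=10.0, p_context=0.05),
    Item("go-bag", cost=2.0, reward=8.0, p_context=0.02),
    Item("first-aid kit", cost=0.5, reward=6.0, p_context=0.10),
]

# Highest expected value per unit cost first.
items.sort(key=priority, reverse=True)
print([i.name for i in items])
# → ['first-aid kit', 'water supply', 'go-bag']
```

A real version would need the probability mass function itself, which is exactly the added complexity in question.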
I think it's unintentional. I don't see how to parse that as a valid English sentence. (Even though it starts out so promising: "And still, it is not 'a', given that ...")
And there are some other errors too:
I'm not aware of any such guide, but it's a good idea. Here are some thoughts on how we might break this down if we ever want to start such a guide:
Those would be useful aspects to track. People who are generally interested in becoming more prepared could filter by context, start with the low-cost high-reward items and work their way down. Furthermore, it would be useful if each item could be separately discussed and voted up/down by the community.
Thoughts?
Oh, I don't think there's a disagreement here. I strong-upvoted the comment I responded to. "We can ban a Nazi because they're a Nazi." is a bad rule.
What I'm trying to add to the conversation (apart from an attempted steel-man of that footnote) is that the actual reason we ban people from communities is not because of what they've done in the past, but because of what they're likely to do in the future if they stay.
Usually we need to observe someone's actions before we can make such a determination, so it almost always makes sense to give people a fair chance; even a second and third. But I can imagine scenarios where a utility maximizer can be confident much earlier. Even if those scenarios are contrived, it seems important to keep an eye on our terminal values (e.g., keeping the community healthy and prospering), and recognize that our instrumental values may admit of exceptions, lest we become prisoners of our own rules.
You accidentally another word: "We are open to unusual ideas are willing to doubt conventional wisdom."
My most charitable interpretation of footnote 1 is this: It's possible to imagine a profile picture, bio or first post so beyond the pale that the best course of action is to ban that person outright. And if you cannot imagine such a profile picture, bio or first post, then you have a poor imagination.
That would be quite a high bar for me, though. There would have to be overwhelming evidence that this person is going to be a net-negative influence. "They are a self-professed Nazi" would not clear that bar.
These are good questions that would need to be answered if it weren't for "and the like", which makes the rule fuzzy again no matter how unambiguously we define "Nazi".
But would this account for a cumulative 8 pairs per person per year? Socks that end up in a sibling's drawer, fall on the floor, are carelessly mispaired, or are lost in the dryer would eventually find their way back to where they belong, so they wouldn't make a difference in the long term.
I can think of several explanations for that number being a bit too high. It seems possible, for example, that Samsung is counting socks that were lost but then found soon after. Why else would their innovative AddWash™ system (a small door to add extra items to an ongoing wash cycle) be proposed as a solution?
But I think I prefer to believe that the average is skewed by a small number of pet ferrets.
I want to like this idea, but I'm not sure yet. The process of writing down your own reasoning and assumptions seems incredibly valuable to me. But I wonder how much the framing of this exercise would actually help someone who is already introspective enough to attempt it.
Do you think it could mitigate certain cognitive biases? I can easily imagine different people writing contradictory children's picture books, just as they write contradictory blog posts. Not because they're lying, but because of confirmation bias.
Also, if you take the framing too literally, there may be the temptation to oversimplify. Your global warming example has a lot of complexity for a children's picture book. :-)