There were times when the internet was something for nerds, something they could enjoy in their free time, and suddenly it's a place where all the village idiots gather and where your future employers will run background checks on you. I miss the times when the internet was... not even anonymous, but more like you could have used your real name or a fake name and no one gave a fuck anyway.
Today, any asshole can take your quote out of context, retweet it to thousands of people in a few minutes, and they are going to contact your employer and relatives, and there is no way to stop that or explain... it feels as if the whole world were covered by hidden cameras that can take any random snapshot of your life and show it on television screens across the whole country.
Okay, two different aspects here:
-- annoying idiots; generally harmless, but can take your time and make you frustrated when they appear in your comments
-- the hysterical internet culture, where making a mountain out of a molehill is actually a great way to make AdSense bucks
Maybe these two are somewhat in tension. Being able to link online identities to real-world identities would help one get rid of the annoying idiots once and for all; but it would also be a great tool for punishing people in real life for things they said online. On the other hand, anonymity means you can't stop someone from coming back after being banned, or from making an army of sock puppets. (In theory I can imagine a hypothetical solution: such as a third party registering pseudonymous handles for people on various web discussions, but only one handle per person per website. But that would be easy to abuse: by the third party, by its random employees, by the government.)
In real life, I don't use the same persona for everyone. I behave differently and talk about different topics with my family vs with my friends. I talk differently with rationalists vs with less sane people I still need to keep good relationships with. On many platforms it is too easy to link these things together. Or, they will allow me to hide in a tiny group of my friends... but of course, I can't make new friends while hidden.
Real life sometimes provides... uhm... partial social filters. For example, if I go to a sci-fi convention, I am likely to meet other sci-fi fans I don't know (so it's open towards new people), but I am also unlikely to meet my grandma there, or people who completely dislike sci-fi (so it's mostly closed towards people who hate the topic itself). This is what I usually want -- a setting sufficiently open that I can discuss a topic with new people who care about the topic; and yet sufficiently closed that people who don't care about the topic are not watching me. And the risk that someone at the sci-fi con would film me on camera, and later share the video with my grandma (or ten years later with my potential employers) is close to zero.
If people hide from public squares, they can’t be exposed to Socrates’ questions.
The historical Socrates was also permabanned from the planet for sealioning. :D
I think this is key:
It’s almost as though the issue were accountability.
A blog is almost a perfect medium for personal accountability. It belongs to you, not your employer, and not the hivemind. The archives are easily searchable. The posts are permanently viewable. Everything embarrassing you’ve ever written is there. If there’s a comment section, people are free to come along and poke holes in your posts. This leaves people vulnerable in a certain way. Not just to trolls, but to critics.
I do want accountability. I do want criticism. I want to be made less wrong and more right. I want a community that helps me become more the person I want to be. This project seems well-suited to getting that, and terribly important for getting communal intellectual norms right.
And yet, I'm not going to cross-post everything on my blog to LessWrong. Here's why.
Sometimes I'm writing something where I'm fairly sure of the conceptual framework. In this case, most moderately intelligent, informed criticism will be helpful. But if I'm writing not to explain but to explore - or just using a nonstandard model and not taking great care to lead people in baby steps - criticism has to meet a much higher standard to be helpful. If I'm engaged in the act of concept-formation, I'm trying to point to the center of a cluster of things, and get other people to look there and report what they see. If someone responds by literal-mindedly assessing my initial pointer's fit to reality instead of taking a look at where I'm pointing and reporting what they see, they actively pull attention away from where it needs to go.
This is not the same as how certain I am that my conclusion is correct. If I'm uncertain about the facts or the inferences I'm drawing from them, detail-oriented criticism is helpful. If I'm uncertain about the basic conceptual framework, I don't understand the thing well enough to evaluate criticism, and I need collaborative seeing first.
I'm worried that this will be obscured by the superficially similar thing you're pointing to here:
You can preempt embarrassment by declaring that you’re doing something shitty anyhow.
I don't currently trust, based on experience, that the level of discourse at Less Wrong will be an interesting place to work out the ideas I can only barely point to on the verbal level. I don't see moving my whole intellectual life here. I feel some need to hold onto my own blog where I play with things unaccountably. I think it's important to have space for unaccountable displays of personality as part of the exploration process.
That said, I want more accountability, I want more spaces with accountability, and I plan to post things where I have some hope that criticism and accountability will help move the ball forward.
Maybe we could start tagging such stuff with epistemic status: exploratory or epistemic status: exploring hypotheses or something similar? Sort of the opposite of Crocker's rules, in effect. Do you guys think this is a community norm worth adding?
We have a couple concepts around here that could also help if they turned into community norms on these sorts of posts. For example:
triangulating meaning: If we didn't have a word for "bird", I might provide a penguin, an ostrich, and an eagle as the most extreme examples which share only their "birdness" in common. If you give 3+ examples of the sort of thing you're talking about, generally people will be able to figure out what the 3 things have in common, and can narrow things down to more or less the same concept you are trying to convey to them.
Principle of Charity: I think we pretty much have this one covered. We do have a bad nitpicking habit, though, which means...
Steel manning: If I'm trying to build up an idea that's only in the formative stages, it's going to have a lot of holes, most of which will be fairly obvious, since explaining a formative idea means making a lot of sweeping generalizations.
These are literally just the first couple things that popped into my head, so feel free to suggest others or criticize my thoughts.
In general, it seems like such discussions should be places to share related anecdotes, half-baked thoughts on the matter, and questions. Criticism might be rephrased as questions about whether the criticism applies in this instance. Those who don't "get" what is being gestured at might be encouraged to offer only questions.
I remember some study about innovation, which found that a disproportionate amount happened around the water cooler. Apparently GPS was invented by a bunch of people messing around and trying to figure out if they could triangulate Sputnik's position, and someone else wondering whether they could do the reverse and triangulate their own position from satellites with known orbits. We need places for that sort of aimless musing if we want to solve candle problems.
More broadly, we could start applying some of these norms to Discussion. After all, it's supposed to be for, you know, discussion. :p I think it's long overdue.
There's a discussion of a related split in norms at Status 451, Splain It to Me:
Here’s a series of events that happens many times daily on my favorite bastion of miscommunication, the bird website. Person tweets some fact. Other people reply with other facts. Person complains, “Ugh, randos in my mentions.” Harsh words may be exchanged, and everyone exits the encounter thinking the other person was monumentally rude for no reason. [...] I’ll name “ugh, randos” Sue and an archetypal “rando” Charlie. [...]
From Sue’s perspective, strangers have come out of the woodwork to demonstrate superiority by making useless, trivial corrections. Some of them may be saying obvious things that Sue, being well-versed in the material she’s referencing, already knows, and thus are insulting her intelligence, possibly due to their latent bias. This is not necessarily an unreasonable assumption, given how social dynamics tend to work in mainstream culture. People correct others to gain status and assert dominance. An artifice passed off as “communication” is often wielded as a blunt object to establish power hierarchies and move up the ladder by signaling superiority. Sue responds in anger as part of this social game so as not to lose status in the eyes of her tribe.
From Charlie’s perspective, Sue has shared a piece of information. Perhaps he already knows it, perhaps he doesn’t. What is important is that Sue has given a gift to the commons, and he would like to respond with a gift of his own. Another aspect is that, as he sees it, Sue has signaled an interest in the topic, and he would like to establish rapport as a fellow person interested in the topic. In other words, he is not trying to play competitive social games, and he may not even be aware such a game is being played. When Sue responds unfavorably, he sees this as her spurning his gift as if it had no value. This is roughly as insulting to Charlie as his supposed attempt to gain status over Sue is to her. At this point, both people think the other one is the asshole. People rightly tend to be mean to those they are sure are assholes, so continued interaction between them will probably only serve to reinforce their beliefs the other is acting in bad faith.
Not every stranger who responds to Sue is necessarily deemed by her a “rando,” however. Those who emphatically agree with her, for instance, before presenting their own information tend to get a much warmer response. It’s a social cue, a way to signal friendliness. Additionally, people she sees as belonging to her in-group may be given the benefit of the doubt, while out-groupers are more often judged to be malicious or stupid. Even her tweeting “ugh, randos” serves as peer bonding, like complaining with strangers about bad weather or train delays, an announcement soliciting solidarity from those who have been in her situation and sympathy from those who have not.
The reason these responses are seen as good-faith participation is that this model of communication emphasizes harmonious emotional experience. The responses that don't attempt to establish emotional rapport are merely coming from a different context, one in which communication is about information sharing.
For nerds, information sharing is the most highly valued form of communication possible.
I say all this having once been an “ugh, randos” person myself. I thought the “randos in my mentions” were playing social games, so I often responded harshly, and when they met my hostility with hostility, I felt vindicated in my initial judgment. But as my perspective shifted from “ugh, randos” to “awesome, information,” virtually every signal I’d relied on to indicate an interaction was going to be unpleasant completely fell apart. Even many replies I’d pegged as obvious, blatant misogyny turned out to have been people eagerly offering me gifts of information bewildered that I rudely rejected them.
Two particular exchanges stick out in my mind, both “corrections” in response to a series of tweets of mine on esoteric Unix history. One of these replies was simply wrong, while the other assumed I was mistaken about something that I actually knew but had glossed over. I responded to both with information of my own that demonstrated this, and both people acknowledged that I was right. I thought they were attacking my credibility and that I’d defeated their attacks and won the status game. Except: both of them were happy I corrected them. This seemed bizarre to me at the time, and the only explanation I could come up with was that they must be attempting to save face. But, now that I realize they likely weren’t attempting to play the status game at all, their responses make perfect sense. All they saw was that I had accepted their gift and given them yet another.
I think there are two forces involved here.
It’s almost as though the issue were accountability.
And I think this is one of them. Under a Hanson hat, Talk isn't about Information. That is, for most things most people say on the net, this:
Author: “Hi! I just said a thing!”
is their only genuine content, no matter what words they happen to pick to express it. The fear is that others will hold them accountable for what they said rather than what they meant. They're playing the "I just said a thing!" game, but on a personal blog they might get accosted by people playing the "let's have discussions" game, and that would be awkward because one of the conceits of the former is that it pretends to be the latter.
In short, blogs signal the wrong things to non-nerds, they're the wrong kind of conversation. Our signal is their noise.
For one rather public and hilarious example, witness Scott Alexander's flight from LessWrong.
But I think something different is going on here, and with other diasporists. Scott et al are clearly playing the discussion game, really well. I think the second force driving people from forums to blogs and from blogs to social media is convenience.
Not having direct control of your posting environment is a trivial inconvenience. Having to run your own posting environment is also a trivial inconvenience, once the novelty of owning it wears off. Tumblr and twitter are extremely convenient. Especially twitter; you don't have to feel bad about emitting opinions without thought if the format makes depth of thought impossible! Both even make it possible to Say A Thing without saying any thing!
Never bet against convenience. Discussion moves from formats that ask more of the discussants to those that ask less. This rule is good when applied to the process of posting, and bad when applied to the content of posting, but in practice applies equally to both.
Well, not for everyone on LW, but certainly for some, especially those at CFAR trying to revive it, having an open place for discussion par excellence is a crucial part of learning how to enhance group rationality and coordination in an online environment. If something is a crucial part of reducing x-risk, I can imagine many thinking "convenience be damned! This needs to get done!"
So, I think a key question is: how do we make LW more convenient? Or, rather, since rewriting the codebase will take a while yet, and I imagine people want to move discussion back to LW before several months go by, what can we do to make LW more attractive to overcome the trivial inconveniences of being here rather than on social media, other blogs, etc.? What are some robust incentives we can implement to draw people back? Are there any better/more suggestions than "generate good blog content", "ask people what they want to read about and then blog it", and "stop trolls/increase moderation/fix voting system" we can generate for making LW more magnetic?
You can't have friendly debates on a web forum containing a stalker psycho who will take offense at opinions you expressed and then will keep "punishing" you. It's simply not fun. And people come here for intelligent debate and fun.
In the "before Eugine" era, we also had debates, once in a while, on more or less political topics. People expressed their opinions, some agreed, some disagreed, then we moved on. "Politics is the mindkiller" was a reminder not to take this too seriously. Some people complained about these debates, but they had the option of simply avoiding them. And whatever you said during the political debate stopped being relevant when you changed the topic.
The karma system was here to allow feedback, and I think everyone understood that it was an imperfect mechanism, but still better than nothing. (That it's good to have a convenient mechanism for saying "more of this, please" and "less of this, please" without having to write an explanation every time, and potentially derailing the debate by doing so.) The idea of using sockpuppets to win some pissing contest simply wasn't out there.
Essentially, the karma feedback system is quite fragile, because it assumes being used in good faith. It assumes that people upvote stuff they genuinely like, downvote stuff they genuinely dislike, and that there is only one vote per person. With these assumptions, negative karma means "most readers believe this comment shouldn't be here", which is a reason to update or perhaps ask for an explanation. Without these assumptions, negative karma may simply mean "Eugine doesn't like your face", and there is nothing useful to learn from that.
(At this moment I notice that I am confused -- how does Reddit deal with the same kind of downvote abuse? Do their moderators have better tools, e.g. detecting sockpuppets by IP addresses, or seeing who made the votes? I could try to find out...)
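As a minimal sketch of the kind of tooling moderators could use, here is one naive way to surface Eugine-style downvote abuse: look for accounts whose downvotes are overwhelmingly concentrated on a single target. All names, thresholds, and the vote format here are hypothetical illustrations, not a description of how Reddit or LW actually does it.

```python
from collections import defaultdict

def flag_targeted_downvoting(votes, min_votes=10, threshold=0.8):
    """Flag voters whose downvotes are concentrated on one target author.

    votes: list of (voter_id, target_author_id, value) tuples, where
    value is +1 or -1. Thresholds are illustrative guesses, not tuned.
    """
    downvotes = defaultdict(lambda: defaultdict(int))  # voter -> author -> count
    totals = defaultdict(int)                          # voter -> total downvotes
    for voter, author, value in votes:
        if value < 0:
            downvotes[voter][author] += 1
            totals[voter] += 1
    flagged = []
    for voter, per_author in downvotes.items():
        if totals[voter] < min_votes:
            continue  # too little data to judge this voter
        top_share = max(per_author.values()) / totals[voter]
        if top_share >= threshold:
            flagged.append(voter)
    return flagged
```

A real system would also need to cluster sockpuppets (e.g. by IP or behavior) before counting, since splitting the downvotes across ten accounts defeats this per-account check.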
Articles and debates influence each other. People come to debate here, because they want to debate the articles posted here. But people decide to post their articles here, because they expect to see a good discussion here... and I believe this may simply not be true anymore for many potential contributors.
At the moment, downvoting is disabled, but that exposes us to the complementary risk -- drowning in noise, because we have removed the only mechanism we had against it. We have yet to see how that develops.
Some people complained about these debates, but they had the option of simply avoiding them.
Not if we wanted to use the "recent comments" page, and not if we were worried about indirect effects on the site, e.g. through drawing in bad commenters.
Well... that's a good point... and perhaps also how Eugine got here. :(
On the other hand, I am also surprised that we didn't attract someone Eugine-like much sooner. For example, LW was openly antireligious from the first days. Why did no religious fanatic start the same kind of crusade against us? Or why didn't any of the crackpots that appeared on the website once in a while, and got downvoted, start a sockpuppet war to promote their theory? Uhm, what I am trying to say is that the possibility of getting the attention of undesired people was always there. Even if we increased it further.
A couple of random observations.
First, with regard to privacy, I think it took a remarkably long time to sink in that "I hope you know this goes down on your permanent record". Internet activity is publishing, it is, in a large number of cases, both forever and searchable. And, of course, "anything you say can and will be used against you".
For nobodies that's not much of a problem. But for people with something to lose it is. The net effect is evaporative cooling where smart, interesting, important people either withdraw from the open 'net or curate their online presence into sterility.
Second, "let's play at discussions" vs sealioning and "randos in my mentions" -- I think a major issue here is the value of time. There is a rather obvious inverse relationship between the value of someone's time and how much of it that someone spends wandering the 'net and random-commenting things which catch his eye. So random comments are generally low-value and worthless -- which means that people who value their time are not only going to not make them, they are also not going to pay much attention to them.
In the golden age covered by the mists of time (aka before the Eternal September) the barriers to entry were high and the people who made it inside were both smart and similar. Thus the early 'net was a very high-trust club. But that changed. Oh, how that changed.
The issue, of course, is discovery: how do you locate and identify new interesting people in the sea of hyperactive idiots? It's an interesting problem. You can create walled gardens. You can set out bait and filter mercilessly, or just hang out in places where only interesting-to-you people are likely to wander into. You can try to follow connections from people you know piggybacking on their filtering skills. Any other ways come to mind?
Invite-only clubs. Perhaps this is a special case of "walled gardens". If LW is too full of randos, one could make LW-Exclusive where you invite only people who meet some subjective criterion of rationality. This would even give you the ability to provisionally invite people you weren't quite sure about, on a trial period, and kick them out if they don't adapt quickly/aren't rational enough/aren't cool.
I thought about doing this, but I don't think I rank highly enough in the community that {top five high-visibility rationalists} would even acknowledge my invitation. cries
I thought about doing this, but I don't think I rank highly enough in the community that {top five high-visibility rationalists} would even acknowledge my invitation.
I think you may be looking at this from a wrong angle. What exactly would be your "garden"? If you simply create a private Slack channel and invite Eliezer and Anna Salamon there, I would understand if they either refuse, or say "uhm, this actually seems like a good idea" and then create their own official CFAR channel. But that's simply because you did what anyone else could easily do. (And even then you would actually have a chance of success, only a very small one, if they decided that joining your channel was more convenient than creating their own.)
But if you came up with some kind of LW replacement which had all the current LW features and none of its greatest problems... and you started populating it with "cool, but not highest-status" writers first... and it worked... I think there is a big chance that gradually everyone would switch to your solution.
I feel like I should put some kind of "epistemic status" tag on this indicating that I am arguing for a position that I don't really hold so much as I find the idea of it interesting.
I think whether or not it's the wrong angle depends on the goals.
Pretend for the moment that I am respected enough that Eliezer and Anna Salamon would accept my invitation to a private Slack channel or private forum. Further stipulate that we agree that we will only invite or accept a new member to the Slack channel if we all look over the candidate's history and agree. We establish a constitution such that new members also gain some measure of voting power in admitting new members. Members who have been admitted but turn out to be jerks can be ejected by some method.
This allows all the uncontroversially good, smart, conscientious LWers and outlying rationalists to be admitted relatively quickly. At a certain point, perhaps LW-Exclusive members start sponsoring newer folks who don't have established track records of good posts or accomplishments, and these people can be admitted on a provisional basis.
It's true that "anybody could do this", but anybody could create a new forum and call it Less Wrong 2.0 and imbue it with better features, and that wouldn't necessarily be a stupid thing to do.
If you did that but didn't actually make it a properly walled garden, and everyone indeed switched over, then that means all the lower-quality posters and randos also switch over, which somewhat defeats the purpose.
This is not directly addressing your point, just inspired by it:
If you make a private Slack channel, how will other potentially valuable members know there is something they should desire to join? I mean, at this moment we would be drawing on the knowledge existing outside the channel, but a few years later, to the outside world your private Slack channel would seem like some kind of a black hole, where smart people disappear and you never hear about them again.
Of course, unless there is also some kind of output -- for example a blog without a comment section, where the members of the private Slack sometimes publish their wisdom -- that other people can see. Now it's about convincing the people on the channel that they should publish an article for the outside world once in a while, instead of just debating comfortably within their bubble.
Now a more direct answer:
Yes, it's true that sometimes a very simple plan, executed well, has great value, while a grand design may be ruined by an unforeseen but fatal flaw. I still believe that, statistically speaking, doing some work upfront is a signal of being serious about something.
Like, there are two different issues here: (1) whether your plan will work well, on condition that people will join you, and (2) whether you can convince people about it, so they actually will join you. You could easily succeed in the first step and fail in the second one. Being a respected celebrity can make the second step easier. Doing some work upfront is solving the second step the hard way.
For nobodies that's not much of a problem. But for people with something to lose it is. The net effect is evaporative cooling where smart, interesting, important people either withdraw from the open 'net or curate their online presence into sterility.
I'm flagging this as a really important failure mode nobody noticed. It strikes me as surprising that something which seems so obvious in hindsight wasn't considered as a failure mode by so many former top contributors I know. They didn't anticipate that, as they got older and advanced in their social circles and their careers, they'd go from being nobodies to being somebodies. Scott Alexander is a psychiatrist now; he has to watch what he says on the internet more than Scott the pre-med/philosophy student needed to several years ago. Many of the legacy contributors on LW, like Eliezer Yudkowsky, Anna Salamon, Carl Shulman, Luke Muehlhauser and Andrew Critch, work for nonprofits with budgets over a million dollars a year, part of the EA community, which seems hyper-conscious of status and prestige, and in a way thrusts all of them into the limelight even more.
I think it took a remarkably long time to sink in that "I hope you know this goes down on your permanent record".
Yeah. I remember times when web pages mostly kept disappearing after a few years. Like, you would look at an old page you made 3 or 5 years ago, click on the hyperlinks, and most of them would show "404 not found". That is not the case anymore.
The net effect is evaporative cooling where smart, interesting, important people either withdraw from the open 'net or curate their online presence into sterility.
Yeah, it seems like a binary choice: either you expect your online record to matter or not, but you cannot go halfway.
My children will not be allowed to ever use their real names online. They will thank me later. (Problem is, 20 years later, there will probably be technology to connect you even to texts you wrote under pseudonym.)
The problem is, too many websites today insist on you providing your real name. Yeah, you can provide a fake name. And be ready to lose your account at any random moment, if the website decides to ask you for some information to confirm your identity. But sometimes a fake name is not an option, e.g. when you need someone to pay you money, for example Google Play. But the Google Play account is also linked to YouTube, and Google+, and... yeah, the "don't be evil" days are gone, too.
You can create walled gardens.
There are different ways how to implement a garden. For example, it can be open to reading but closed to writing. Or it can have a public discussion, and a private chat. This way you can announce your existence to new people who may share your interests, but protect yourself from exposing too much. You could make pseudonyms mandatory. (For example, you could have a rule that a username is a five-digit number, and also display a gravatar-like picture to visually distinguish similar usernames. After a while, all insiders would know the persona of "red triangle 48873", but to outsiders it would mean nothing.) You could have inner-circle and outer-circle membership, where new members go to the outer circle, and have to somehow prove their worth; for example on LW it could be by writing a few good articles. (There would be a private chat for all members, and a separate private chat for inner-circle members. Or maybe a rule that outer-circle members are anonymous, but inner-circle members must meet in person. Or whatever you consider best for your group.)
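The mandatory-pseudonym idea above can be sketched concretely: derive each member's five-digit handle and visual marker deterministically from a per-site secret plus their identity, so one person always gets the same persona on one site but personas can't be linked across sites without the secret. The function name, shape/color lists, and handle format here are all hypothetical illustrations.

```python
import hashlib

# Illustrative palettes for the gravatar-like visual marker.
SHAPES = ["circle", "triangle", "square", "star"]
COLORS = ["red", "green", "blue", "purple", "orange"]

def site_handle(site_secret: str, user_id: str) -> str:
    """Derive a stable pseudonymous persona for one user on one site.

    The same (secret, user) pair always yields the same handle, so a
    persona can accumulate reputation; different sites use different
    secrets, so handles can't be cross-linked from the outside.
    """
    digest = hashlib.sha256(f"{site_secret}:{user_id}".encode()).digest()
    number = int.from_bytes(digest[:4], "big") % 90000 + 10000  # five digits
    shape = SHAPES[digest[4] % len(SHAPES)]
    color = COLORS[digest[5] % len(COLORS)]
    return f"{color} {shape} {number}"
```

This still has the trust problem raised earlier in the thread: whoever holds the site secret (the third party, its employees, a government that subpoenas it) can de-anonymize everyone.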
too many websites today insist on you providing your real name
The only one that I know of and which matters is Facebook. Google doesn't ask for your real name -- it used to for a while with Google+ and then gave up on this idea.
It doesn't matter, we know your real name anyway
The contemporary trend is to insist on your phone number which is trivially connected to the real name, etc.
There are different ways how to implement a garden.
All true, though it's a fair amount of work to set up something that customized -- and if you're a non-technical person, you're pretty screwed here.
But I would say that these disadvantages are necessary evils that, while they might be possible to mitigate somewhat, go along with having a genuinely public discourse and public accountability.
I'm often afraid of being an unwanted participant, so I've thought about this particular point somewhat. The worst case version of this phenomenon is the Eternal September, when the newbies become so numerous that the non-newbies decide to exit en masse.
I think there's something important that people miss when they think about the Eternal September phenomenon. From Wikipedia:
Every September, a large number of incoming freshmen would acquire access to Usenet for the first time, taking time to become accustomed to Usenet's standards of conduct and "netiquette". After a month or so, these new users would either learn to comply with the networks' social norms or tire of using the service.
The lever that everyone has already thought to pull is the 'minimize number of new users' lever, primed perhaps by the notion that not pulling this lever apparently resulted in the destruction of Usenet. Additionally, social media platforms often have moderation features that make pulling this lever very easy, and thus even more preferable.
But cultures don't have to leave new users to learn social norms on their own; you could pull the 'increase culture's capacity to integrate new users' lever. It makes sense that that lever hasn't been pulled, because it requires more coordination than the alternative. This post calls for a similar sort of coordination, so it seems like a good place for me to mention this possibility.
This applies not just to social norms, but to shared concepts, especially in cultures like this one, where many of the shared concepts are technical. It's easy to imagine that everyone who decreases discussion quality lacks the desire or wherewithal to become someone who increases discussion quality, but some newbies may have the aspirations and capability to become non-newbies, and it's better for everyone if that's made as easy as possible. In that way, I find that some potential improvements are not difficult to imagine.
As far as "playing the comments game", I admit I am guilty of that. At a deeper level it comes from a desire to connect with like-minded people. I may even be doing it right now.
We like to think people post because they are genuinely intellectually engaged in the material we've written, but the truth is people post comments for a myriad of different reasons, including wanting to score comment 'points' or 'karma' or to engage in a back-and-forth with a figure they admire. People like getting attention. [even shy nerdy people who are socially isolated or socially awkward, for whom commenting on an internet blog may count as a significant social engagement] As you point out, the 'comments game' motivation isn't necessarily bad in terms of its consequences -- it gets debate and discussion going. Given the importance of the topics discussed on LW and elsewhere, even low-quality discussion is better than no discussion, or shutting people out.
Obviously, though, there is a tension in the 'rational-sphere' between wanting to draw in lots of new people and wanting to maintain a sense of community among people who are on the 'same wavelength'. This tension is not at all unique to rationalism, and it typically leads to some type of fragmentation -- people who want to 'spread rationalism' and grow the movement go one way, and people who want to maintain a sense of community and maintain purity go another. I've seen the same dynamic at work in the Libertarian party and in Christian churches. I think we have to accept that both sides have good points.
But getting back to your post, it seems like you are more on the 'we need to maintain a sense of community' side. Personally I haven't been very active in forums or online communities, but from what I have seen, maintaining a community online is possible, but it takes work -- considerable organization, active moderators and administrators, etc. Some platforms are more conducive to it than others. I can't really comment on the viability of LW, since I'm kinda new here, but it seems to be a good place.
As a side note, I'm not sure how much 'social trust' is required for commenting. While I might be very hesitant to talk to someone at a cocktail party for fear of annoying them, or because I don't trust them to take me seriously, I don't feel that way about commenting, or if I do, it's to a much lesser extent. There is a difference: talking to someone in real life means interrupting them and taking their time, while writing a comment doesn't really interrupt anyone -- they can always ignore it if they want to. What you said about more socially privileged people being more trusting or confident is definitely true, though.
people who want to 'spread rationalism' and grow the movement go one way and the people who want to maintain a sense of community and maintain purity go another. I've seen the same dynamic at work in the Libertarian party and in Christian churches. I think we have to accept both sides have good points.
I believe the proper solution is like a eukaryotic cell -- with an outer circle, and inner circle(s). In Christianity, the outer circle is being formally a Christian and visiting a church on (some) Sundays. The inner circles are various monastic orders, or becoming a priest, or this kind of stuff. Now you can provide both options for people who want different things. If you just want the warm fuzzy feelings of belonging to a community, here you go. If you want some hardcore stuff, okay, come here.
These two layers need to cooperate: the outer circle must respect the inner circle, and the inner circle must provide some services for the outer circle. -- In the case of LW, such services would mostly be writing articles or making videos.
The outer circle must be vague enough that anyone can join, but the inner circles must be protected from invasion by charlatans; they must cooperate with each other so that they are able to formally declare someone "not one of us" if a charlatan tries to take over the system, or just to benefit from claiming to be part of it. In other words, the inner circles need some way to formally recognize who belongs to an inner circle of the system and who does not.
Looking at the rationalist community today, "MIRI representatives" and "CFAR representatives" seem like inner circles, and there are also a few obvious celebrities such as Yvain of SSC. But if the community is going to grow, these people are going to need some common flag to distinguish them from anyone else who decides to make "rationality" their applause light and gather followers.
But if the community is going to grow, these people are going to need some common flag to make them different from anyone else who decides to make "rationality" their applause light and gather followers.
What, you are not allowed to call yourself a rationalist if you are not affiliated with MIRI, even if you subscribe to branches of Western philosophy descended from Descartes, Kant, and the Vienna Circle...?
I think there should exist a name for the cluster in thingspace that is currently known here as "the rationalist community". That is my concern. What specifically it is called is less important; we just have to coordinate on using the same name.
Generic "subscribing to branches of Western philosophy descended from Descartes and Kant and Vienna circle" is not exactly the same thing.
there should exist a name
LW crowd.
"The rationalist community" sounds way too hoity-toity and pretentious to me.
Viliam is right that unless we have a name for the cluster in thingspace that is the rationalist community, it's difficult to talk about. While I can understand why one might be alarmed, I think MIRI/CFAR representatives mostly want to be identifiable in a clearly delineated way, so that they and only they can claim to speak on behalf of those organizations on matters such as AI safety, existential risk reduction, or their stance on various parts of the rationality community now that they're trying to re-engage it. I think everyone can agree that nobody is better off if people -- whether they identify with the LW/rationality community or stand outside it -- are confused about what MIRI/CFAR actually believe, re: their missions and goals.
This is probably more important to MIRI's/CFAR's relationships with EA and academia than to people merely involved with LW/rationalists, since what's perceived as the positions of these organizations could affect how much funding they receive, and their crucial relationships with other organizations working on the same important problems.
What, you are not allowed to call yourself a rationalist if you are not affiliated with MIRI
The rationality police will come and use the rationality spray on you, leaving you writhing on the floor crying "Oh, my eyes! It burns, IT BURNS!"
LessWrong itself doesn't have as much activity as it once did, but the first users on LessWrong have pursued their ideas on artificial intelligence and rationality through the Machine Intelligence Research Institute (MIRI) and the Center for Applied Rationality (CFAR), respectively, and they now have a lot more opportunity to impact the world than they did before. If those are the sorts of things you -- or anyone, really -- are passionate about, getting abreast of what these organizations are doing now and greatly expanding on it on LW itself can lead to jobs. (Well, it'd probably help to be able to work in the United States and to have a degree, if you want to work at either CFAR or MIRI.) I've known several people who've gone on to collaborate with them by starting on LW. Still, personally I'd find the most exciting part to be shaping the future of ideas, regardless of whether it led to a job or not.
I think it's much easier to say now that becoming a top contributor on LW can be a springboard to much greater things. Caveat: whether those things are greater depends on what you want. Of course, there are all manner of readers and users on LW who don't particularly pay attention to what goes on in AI safety, or at CFAR/MIRI. I shouldn't say building connections through LW is unusually likely to lead to great things if most LessWrongers might not think the outcomes so great after all. If LW became the sort of rationality community conducive to other slam-dunk examples of systematic winning, like a string of successful entrepreneurs, that would make the site much more attractive.
I know several CFAR alumni who have credited the rationality skills they learned at CFAR with contributing to their success as entrepreneurs or on other projects. That's entirely different from finding the beginnings of that sort of success on this website itself. If all manner of aspiring rationalists pursued and won in all manner of domains, with the beginnings of their success attributed to LW, that would really be something else.
Oops, went on a random walk there. Anyway, my point is that even shy nerdy people...
[even shy nerdy people who are socially isolated or socially awkward, for whom commenting on an internet blog may count as a significant social engagement]
...can totally think of LW as significant social engagement if they want to, because I know dozens of people for whom down the road it's brought them marriages, families, careers, new passions, and whole new family-like communities. That's really more common among people who attended LW meetups in the past, when those were more common.
To re-engage the old-timers... maybe reopening SL4 would help? I really liked its cleanliness, and the ability to participate directly via e-mail.
Three claims here:
(1) The public internet is better for accountability than social media.
(2) People fled the public internet for social media because of 1.
(3) 2 is bad and we should undo it.
1 seems right. I want to argue with 2. It's obviously right in some cases; the "randos in my mentions" thing does happen. But sometimes people go to Facebook simply because that's where the people are. Facebook has been optimizing for this, and almost exclusively this, for a long time. So any discourse-building strategy that doesn't take this competitor into account will hit walls it can't see.
I hadn't sufficiently considered that the long-term changes in LW occurred within the context of the overall changes in the internet. Thank you very much for pointing it out. Reversing the harm of Moloch in this situation is extremely important.
I remember posting in the old vBulletin days, when a person would use a screen name but anonymity was much higher, and the environment itself felt much better to exist in. Oddly enough, the places I posted back then were hardly non-hostile: they had a subpopulation who would go out of their way to deliberately and intentionally insult people as harshly as possible. And yet... for some reason I felt substantially safer, more welcome, and more accepted there than I have anywhere else online.
To at least some extent, there was a sort of compartmentalization going on in those places: serious conversation happened in one area, while pure-fluffy, friendly, jokey banter went on in another. Attempting to use a single area for both sounds like a bad idea to me, and it's the sort of thing that LessWrong was trying to avoid (for good reason) in order to maintain high standards and value of conversation, but that places like Tumblr allow and possibly encourage. (I don't really know about Tumblr, since I avoid it, but that's what it looks like from the outside.) There may also be the factor that I had substantially more in common with the people who were around at that time, whereas the internet today is full of a far more diverse set of people who have far less interest in acculturating into strange new environments.
The short-term thinking, slight pain/fear avoidance, and trivial conveniences that shifted everyone from older styles like vBulletin or LiveJournal to places like Reddit and Tumblr ultimately pattern-match to Moloch in my mind, if they lead to things like less widescale discussion of rationality or decreased development of rationalist-beloved areas. Ending or slowing down open, long-term conversations on important topics is very bad, and I hope that LW does get reignited to change that progression.
I have a different experience with my blog. It seems no one feels comfortable commenting (or many people simply find nothing to say, but seem to pass around the links enough). I move to these more obscure sites to find my audience and to have dialogue about the content/arguments. But I am not looking for students so much as teachers... people who might correct me.
Also, I have found and learned from very significant writers and writings through these different social networks that I wouldn't have found otherwise.
I think, then, we can reconcile the two views by suggesting that we are each looking for something, and these different outlets provide different solutions -- though the reasons behind our preferences for any one of them are complex and inextricable.
I came to LW for these sentiments, but the hoops and protocol I have to jump through just to make a submission aren't related to why I am here (or I don't understand them!) ;p
Epistemic Status: Casual
It’s taken me a long time to fully acknowledge this, but people who “come from the internet” are no longer a minority subculture. Senators tweet and suburban moms post Minion memes. Which means that talking about trends in how people socialize on the internet is not a frivolous subject; it’s relevant to how people interact, period.
There seems to have been an overall drift towards social networks over blogs and forums in general, and in particular things like:
At the moment I’m not empirically tracking any trends like this, and I’m not confident in what exactly the major trends are — maybe in future I’ll start looking into this more seriously. Right now, I have a sense of things from impression and hearsay.
But one thing I have noticed personally is that people have gotten intimidated by more formal and public kinds of online conversation. I know quite a few people who used to keep a “real blog” and have become afraid to touch it, preferring instead to chat on social media. It’s a weird kind of perfectionism — nobody ever imagined that blogs were meant to be masterpieces. But I do see people fleeing towards more ephemeral, more stream-of-consciousness types of communication, or communication that involves no words at all (reblogging, image-sharing, etc.). There seems to be a fear of becoming too visible as a distinctive writing voice.
For one rather public and hilarious example, witness Scott Alexander’s flight from LessWrong to LiveJournal to a personal blog to Twitter and Tumblr, in hopes that somewhere he can find a place isolated enough that nobody will notice his insight and humor. (It hasn’t been working.)
What might be going on here?
Of course, there are pragmatic concerns about reputation and preserving anonymity. People don’t want their writing to be found by judgmental bosses or family members. But that’s always been true — and, at any rate, social networking sites are often less anonymous than forums and blogs.
It might be that people have become more afraid of trolls, or that trolling has gotten worse. Fear of being targeted by harassment or threats might make people less open and expressive. I’ve certainly heard many writers say that they’ve shut down a lot of their internet presence out of exhaustion or literal fear. And I’ve heard serious enough horror stories that I respect and sympathize with people who are on their guard.
But I don’t think that really explains why one would drift towards more ephemeral media. Why short-form instead of long-form? Why streaming feeds instead of searchable archives? Trolls are not known for their patience and rigor. Single tweets can attract storms of trolls. So troll-avoidance is not enough of an explanation, I think.
It’s almost as though the issue were accountability.
A blog is almost a perfect medium for personal accountability. It belongs to you, not your employer, and not the hivemind. The archives are easily searchable. The posts are permanently viewable. Everything embarrassing you’ve ever written is there. If there’s a comment section, people are free to come along and poke holes in your posts. This leaves people vulnerable in a certain way. Not just to trolls, but to critics.
You can preempt embarrassment by declaring that you’re doing something shitty anyhow. That puts you in a position of safety. I think that a lot of online mannerisms, like using all-lowercase punctuation, or using really self-deprecating language, or deeply nested meta-levels of meme irony, are ways of saying “I’m cool because I’m not putting myself out there where I can be judged. Only pompous idiots are so naive as to think their opinions are actually valuable.”
Here’s another angle on the same issue. If you earnestly, explicitly say what you think, in essay form, and if your writing attracts attention at all, you’ll attract swarms of earnest, bright-but-not-brilliant, mostly young white male, commenters, who want to share their opinions, because (perhaps naively) they think their contributions will be welcomed. It’s basically just “oh, are we playing a game? I wanna play too!” If you don’t want to play with them — maybe because you’re talking about a personal or highly technical topic and don’t value their input, maybe because your intention was just to talk to your friends and not the general public, whatever — you’ll find this style of interaction aversive. You’ll read it as sealioning. Or mansplaining. Or “well, actually”-ing.
I think what’s going on with these kinds of terms is something like:
Author: “Hi! I just said a thing!”
Commenter: “Ooh cool, we’re playing the Discussion game! Can I join? Here’s my comment!” (Or, sometimes, “Ooh cool, we’re playing the Verbal Battle game! I wanna play! Here’s my retort!”)
Author: “Ew, no, I don’t want to play with you.”
There’s a bit of a race/gender/age/educational slant to the people playing the “commenter” role, probably because our society rewards some people more than others for playing the discussion game. Privileged people are more likely to assume that they’re automatically welcome wherever they show up, which is why others tend to get annoyed at them.
Privileged people, in other words, are more likely to think they live in a high-trust society, where they can show up to strangers and be greeted as a potential new friend, where open discussion is an important priority, where they can trust and be trusted, since everybody is playing the “let’s discuss interesting things!” game.
The unfortunate reality is that most of the world doesn’t look like that high-trust society.
On the other hand, I think the ideal of open discussion, and to some extent the past reality of internet discussion, is a lot more like a high-trust society where everyone is playing the “discuss interesting things” game, than it is like the present reality of social media.
A lot of the value generated on the 90’s and early 2000’s internet was built on people who were interested in things, sharing information about those things with like-minded individuals. Think of the websites that were just catalogues of information about someone’s obsessions. (I remember my family happily gathering round the PC when I was a kid, to listen to all the national anthems of the world, which some helpful net denizen had collated all in one place.) There is an enormous shared commons that is produced when people are playing the “share info about interesting stuff” game. Wikipedia. StackExchange. It couldn’t have been motivated by pure public-spiritedness — otherwise people wouldn’t have produced so much free work. There are lower motivations: the desire to show off how clever you are, the desire to be a know-it-all, the desire to correct other people. And there are higher motivations — obsession, fascination, the delight of infodumping. This isn’t some higher plane of civic virtue; it’s just ordinary nerd behavior.
But in ordinary nerd behavior, there are some unusual strengths. Nerds are playing the “let’s have discussions!” game, which means that they’re unembarrassed about sharing their take on things, and unembarrassed about holding other people accountable for mistakes, and unembarrassed about being held accountable for mistakes. It’s a sort of happy place between perfectionism and laxity. Nobody is supposed to get everything right on the first try; but you’re supposed to respond intelligently to criticism. Things will get poked at, inevitably. Poking is friendly behavior. (Which doesn’t mean it’s not also aggressive behavior. Play and aggression are always intermixed. But it doesn’t have to be understood as scary, hostile, enemy.)
Nerd-format discussions are definitely not costless. You get discussions of advanced/technical topics being mobbed by clueless opinionated newbies, or discussions of deeply personal issues being hassled by clueless opinionated randos. You get endless debate over irrelevant minutiae. There are reasons why so many people flee this kind of environment.
But I would say that these disadvantages are necessary evils that, while they might be possible to mitigate somewhat, go along with having a genuinely public discourse and public accountability.
We talk a lot about social media killing privacy, but there’s also a way in which it kills publicness, by allowing people to curate their spaces by personal friend groups, and retreat from open discussions. In a public square, any rando can ask an aristocrat to explain himself. If people hide from public squares, they can’t be exposed to Socrates’ questions.
I suspect that, especially for people who are even minor VIPs (my level of online fame, while modest, is enough to create some of this effect), it’s tempting to become less available to the “public”, less willing to engage with strangers, even those who seem friendly and interesting. I think it’s worth fighting this temptation. You don’t get the gains of open discussion if you close yourself off. You may not capture all the gains yourself, but that’s how the tragedy of the commons works; a bunch of people have to cooperate and trust if they’re going to build good stuff together. And what that means, concretely, on the margin, is taking more time to explain yourself and engage intellectually with people who, from your perspective, look dumb, clueless, crankish, or uncool.
Some of the people I admire most, including theoretical computer scientist Scott Aaronson, are notable for taking the time to carefully debunk crackpots (and offer them the benefit of the doubt in case they are in fact correct.) Is this activity a great ROI for a brilliant scientist, from a narrowly selfish perspective? No. But it’s praiseworthy, because it contributes to a truly open discussion. If scientists take the time to investigate weird claims from randos, they’re doing the work of proving that science is a universal and systematic way of thinking, not just an elite club of insiders. In the long run, it’s very important that somebody be doing that groundwork.
Talking about interesting things, with friendly strangers, in a spirit of welcoming open discussion and accountability rather than fleeing from it, seems really underappreciated today, and I think it’s time to make an explicit push towards building places online that have that quality.
In that spirit, I’d like to recommend LessWrong to my readers. For those not familiar with it, it’s a discussion forum devoted to things like cognitive science, AI, and related topics, and, back in its heyday a few years ago, it was suffused with the nerdy-discussion-nature. It had all the enthusiasm of late-night dorm-room philosophy discussions — except that some of the people you’d be having the discussions with were among the most creative people of our generation. These days, posting and commenting is a lot sparser, and the energy is gone, but I and some other old-timers are trying to rekindle it. I’m crossposting all my blog posts there from now on, and I encourage everyone to check out and join the discussions there.
(Cross-posted from my blog, https://srconstantin.wordpress.com/)