Do you have a source on the claims about echo chambers? I feel like most studies I encounter on it say that echo chambers are an overrated issue, with people tending to interact with contradictory views, but I haven't looked into it in detail.
The way echo chambers work seems to be popularly mis-explained.
How it’s explained: everyone you encounter agrees with you
How it actually works: everyone you encounter who you disagree with appears to be insane or evil. Next time you encounter someone who disagrees with you, you expect them to be insane or evil, causing you to act in a way that seems to them to be insane or evil. Iterate.
That second phenomenon seems to be a thing, though I wouldn't use the word "echo chamber" to refer to it. More like "polarization" or "radicalization".
I've seen papers like this: https://www.researchgate.net/profile/Walter-Quattrociocchi/publication/331936299_Echo_Chambers_on_Facebook/links/5c93b14b299bf111693e20f4/Echo-Chambers-on-Facebook.pdf. However, given the difficulties in testing for their existence, I assign only a weak probability that echo chambers are real enough to affect human behavior.
Would you mind sharing the studies suggesting they are an overrated issue? I would love to adjust my position.
The study you linked smells fishy to me. They found that the overwhelming majority of users were deeply ingrained in either a science community or a conspiracy community; but that doesn't match my experience on Facebook, where most people just seem to share stuff from their life. Is it possible that they specifically sampled from science communities/conspiracy communities? (Which would obviously overstate the degree of polarization and echo chambers by a huge amount.) They don't seem to describe how they sampled their users, unless I'm missing something, but given the context I would guess that they specifically sampled users who were active in the communities they looked at.
Regarding the studies that said it was overstated: as I said, I haven't looked into it in detail. I just follow a bunch of social science people on Twitter who have discussed this, with the ones who seem more trustworthy converging on the view that the echo chamber issue is overrated and based on misinformation. But based on your comment I decided to look at the studies more closely, and they seemed a lot less convincing than I had expected, sometimes updating me in the opposite direction. Probably the Twitter user with the most comprehensive set of links is Rolf Degen, but I've also seen it from other sources, e.g. Michael Bang Petersen.
Thanks for sharing those twitter handles, I'll check 'em out.
They found that the overwhelming majority of users were deeply ingrained in either a science community or a conspiracy community;
Yes, they mention this specifically in how they set up their experiment -- they sampled FB groups that were explicitly about either conspiracy theories or science -- so their sample is not representative of the larger population.
I think their main finding is this: "In the discussions here, users show a tendency to seek out and receive information that strengthens their preferred narrative (see the reaction to trolling posts in conspiracy echo chambers) and to reject information that undermines it (see the failure of debunking)". However, it does seem to me that their method practically guarantees this finding: if you go out looking for echo chambers, you will end up finding them, because there are over four billion users on social media.
This all seems obviously correct to me, and good reason to worry about engaging with such things. I've never liked advertising of any kind, and crossing advertising with addictive apps just looks terrible.
Advertising is the reason why more addictive is more profitable.
If instead you had a model where users pay a monthly fee, it would not matter how much time they spend at the screen. Actually, less would be somewhat better, because running the servers would cost less.
(Epistemic status: guesswork)
Hypothesis: more addictive may well not actually be more profitable. I wouldn’t be at all surprised if social media engagement were Pareto-distributed, with the most active people quite possibly not the most lucrative customers for advertisers to reach. This is immaterial if effectiveness-of-advertising is Goodharted into “page impressions”, but a pile of Nash equilibria which only works because multiple parties are Goodharting against noisy proxies can collapse abruptly.
So, kinda like... most people never click on any Facebook ads, but there is a small fraction of morons that click on all of them... but there is no simple known method to make the morons stay on the website longer without also making everyone else stay longer? (Exaggerated to make a point.) Sounds possible.
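The guess above can be made concrete with a toy Monte Carlo sketch (pure illustration with made-up distributions, not real platform data): if time-on-site is heavy-tailed but ad value per user is drawn independently of it, the heaviest users hold most of the engagement yet only a proportional slice of the ad value.

```python
import random

random.seed(0)
N = 100_000

# Invented distributions, for illustration only: time-on-site is Pareto-
# distributed, ad value per user is independent of engagement.
engagement = [random.paretovariate(1.16) for _ in range(N)]  # 80/20-ish tail
ad_value = [random.expovariate(1.0) for _ in range(N)]

# Take the top 10% of users by engagement and compare their share of
# total time-on-site with their share of total ad value.
users = sorted(zip(engagement, ad_value), key=lambda u: u[0], reverse=True)
top10 = users[: N // 10]

share_of_time = sum(e for e, _ in top10) / sum(engagement)
share_of_value = sum(v for _, v in top10) / sum(ad_value)

print(f"top 10% by engagement hold {share_of_time:.0%} of time-on-site")
print(f"...but only {share_of_value:.0%} of ad value")
```

Under these assumptions the most active decile dominates engagement metrics while contributing roughly its headcount share of ad revenue, which is the wedge between "optimize time-on-site" and "optimize profit" that the hypothesis points at.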
I suspect that the only way out is to provide a solution that has all the advantages of e.g. Facebook, without most of the disadvantages. Because there are advantages, especially for people who are less tech-savvy. Facebook allows them to communicate online with many people, and requires only minimum technical knowledge.
Before social networks, I usually communicated with people by e-mail. It was nice, but it required me to install and set up an e-mail program. (This problem was also solved by GMail.) Instant messengers were also nice, but again required installing and setup. Plus there were multiple instant messengers, and then you had some open-source client that could connect to all of them, but you still had to create the accounts and configure the contact lists. Using these programs required some technical skills, or having someone with those skills in your family. I also visited all kinds of web forums.
Facebook is like GMail + instant messenger + web forums, all in one, and requires minimum setup. And although I hate the policy of providing your actual name and photo, it makes maintaining the contact lists easier. You do not have to install anything, which among other things means you can access Facebook at work from the company computer (unless it is specifically blocked); but there is also the optional smartphone app.
Blocking users solves the problem of spam. (Although you get ads, which are another form of spam.)
Multiple applications are, on one hand, "yay competition" and less vendor lock-in, but on the other hand they make maintaining contacts difficult in the long term. If I have someone's ICQ number, but the person has already moved elsewhere, how am I supposed to know, and how am I supposed to contact them again?
A possible way out would be to make a non-evil (or maybe just less evil) application that provides all of these services. And then somehow convince everyone to switch over. And if it isn't just as easy, people are definitely not going to switch over.
I prefer to have things sorted by topic, like one Reddit forum for this, and another Reddit forum for that. But from the perspective of a lazy publisher (or a publisher with near-zero technical skills), throwing everything at the wall is the easy way. So we need to allow this, at least as a default. (But of course, Facebook also supports joining groups, and writing on group walls.)
Maybe Facebook has already figured all of this out (they do spend a lot of money on research), and the non-evil alternative would be surprisingly similar, only with more options and fewer ads.
Now another question is how to pay for the costs. Suppose you are not a profit maximizer, but you are not going to generate yourself a loss. And most people are not willing to pay for something they can get for free at Facebook. Oops, are we stuck? Maybe not. Maybe we could allow advertising as a default alternative... and if you pay, dunno, $5 a month, the ads get turned off. Also, the ads would be less annoying, because we are not trying to generate profit, only to cover the costs of non-paying users.
Then we have the problem of policing content... you may prefer free speech, but at some moment the government is going to hold you responsible for something. We probably need to address users impersonating real people... first because it goes against the value of simple maintenance of contact lists, second because at some moment the impersonated people will sue you for libel. (That means, using a pseudonym would be ok, but using another real person's or organization's name would not.)
Sounds like a lot of work.
Another aspect: if you built software intended to deliberate on people’s needs and problems and then formulate plans and collect volunteers, the result would look fairly thoroughly not like Facebook. Any system for collating, corralling and organising different opinions and evidence would, also, look not at all like Facebook. You might end up with an argument map[1], or some “garden and the stream”[2] mix of dialogue and accumulated wisdom.
TL;DR: social software intended to avoid or ameliorate the problems we see with Facebook might function very little like Facebook does.
[1] https://en.wikipedia.org/wiki/Argument_map
[2] https://hapgood.us/2015/10/17/the-garden-and-the-stream-a-technopastoral/
I like the warren & plaza description of bi-level communities for productive discussion. 'Garden and stream' seems to overemphasize wikis as a mechanism, when it's really about persistence & specialization for filtering (eg Usenet discussion groups feeding into FAQs).
That's a good point. Another way to look at the difference between Facebook and such alternatives would be that Facebook/Twitter/etc. lean heavily on self-expression. Very little of the content on those sites actually aims to contribute to something, like a dialogue or a body of knowledge. I think this is why communities focused around specific goals, say, writers, weight lifters, or rationalists, do not do their work over Facebook/Twitter/etc. Some might use those to stay in touch, but the serious work gets done on ye olde phpBB forums and the like, where self-expression is not the main point.
Wow, thanks for those links. I've spent a few hours going down the garden/stream rabbit hole. I can't believe I hadn't seen it before - though I've seen tools like Roam or the Zettelkasten and such, and of course I've read the Vannevar Bush article, but somehow it never occurred to me that maybe we already have working, albeit relatively unpopular, systems that work very differently than Facebook.
You laid out the problem and all its sub-problems quite well.
I suspect that the only way out is to provide a solution that has all the advantages of e.g. Facebook, without most of the disadvantages.
I'm writing a post about a potential solution like this that I picked up from reading a paper :). It's a very interesting space, which I feel doesn't get a lot of focus because a) most people don't want to build/program, b) most people are satisfied with things as-is, and c) the problem space is huge and searching it is hard. But with the malleability of software, I think that once we hit a new, working set of ideas, they will take over as quickly as social media pushed out traditional forums/mailing lists/link rings back in the '00s.
Facebook is like GMail + instant messenger + web forums, all in one, and requires minimum setup.
Well put. It's like Facebook/Twitter/etc. are an extra layer above everything else, a "layer 8" in the OSI model, that allows people to completely not care about all the protocols, filesystems, name-spacing schemes, storage requirements that sit underneath it. Just point your browser to this one page and you get it all (no installer, no plug'n'play, no versions/updates, no fees...).
Now another question is how to pay for the costs. (...) Then we have the problem of policing content... (...)
I feel this very accurately points out that we're dealing with issues on both the technical layer and the social one. My gut tells me that purely technical solutions like Mastodon will never take over because they don't address any of the social issues like usability, moderation, accountability, etc. I don't have a good example of something that would work well on the social layer but not on the technical one.
Currently, I place a lot of hope on stuff like the push for decentralization and web3. (I need to read up more about it though, as right now these are just utopian ideas in my head.) If we were able to get the efficiencies of a centralized platform on a decentralized one, then that, I think, would have a good chance of winning in the sense of migrating over hundreds of millions of users. I imagine it would work by allowing users to very precisely price/pay for what they use, e.g. the average user would most likely pay most of their fee for photo/video storage, while a power user would dedicate most of their fee toward specific features, or even characteristics like uptime. In both cases, they could still enjoy living on a higher level of abstraction than running their own email servers, but they get as much value as the price they're willing to pay.
Yeah, speaking about decentralization... I would recommend using one default server (which will be used by all the people who do not know what "server" means, i.e. most of the population), but allowing the protocol to connect to independent servers. They should be handled kind of like alternative app stores on smartphones. You connect to an alternative server, you get all the warnings, and then you choose your mode of contact-making with the alternative server: whether only you can actively seek friends there (because your phishing resistance is zero, and you only wanted to connect with one specific person there), or whether other people can send you friend requests. (When you get a friend request, it is clearly shown that it comes from an alternative server.)
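A minimal sketch of that contact policy in code (all names here, like `DEFAULT_SERVER` and `ServerPolicy`, are invented for illustration, not part of any real protocol): requests from the default server go through normally, while requests from an alternative server are delivered only if the user opted into incoming contact there, and are always flagged as external.

```python
from dataclasses import dataclass

DEFAULT_SERVER = "main.example"  # hypothetical default-server name

@dataclass
class ServerPolicy:
    """Per-server contact mode the user chose when connecting."""
    server: str
    accept_incoming: bool  # False = "only I can seek friends there"

def handle_friend_request(sender_server: str,
                          policies: dict[str, ServerPolicy]) -> tuple[bool, bool]:
    """Return (delivered, flagged_as_alternative_server)."""
    if sender_server == DEFAULT_SERVER:
        return True, False
    policy = policies.get(sender_server)
    if policy is None or not policy.accept_incoming:
        # The user never opted into incoming requests from this server.
        return False, True
    return True, True  # delivered, but clearly marked as external

# Example: user connected to one alternative server in "outgoing only" mode.
policies = {"alt.example": ServerPolicy("alt.example", accept_incoming=False)}
print(handle_friend_request("main.example", policies))  # (True, False)
print(handle_friend_request("alt.example", policies))   # (False, True)
```

The point of the sketch is that the safe default (reject and flag) applies to every server the user hasn't explicitly configured, matching the "zero phishing resistance" user.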
OK, now we just need to make this happen. ;)
After writing this, I wasn't happy enough with the result to post it. However, reading spkoc's comment on a recent COVID update made me think that I should share it because it seems as relevant as it did when I typed it out.
This post is my second attempt to explain and check the worry I feel about social media. The feedback from the first one told me that instead of clarifying my thoughts, I only managed to convey a vague sense of unease. Let's give this one more shot then.
Mainstream opinion of services like Facebook, Twitter, etc. is trending negative. It is said that, on a personal level, they're addictive, wasting people's time and harming their self-esteem. At the group level, they create echo chambers, fuel conspiracy theories, and even enable ethnic cleansing. But I think all of this misses a crucial point–that all these things are connected, parts of a single system, operating within normal parameters. And the negative effects I described? Merely expected by-products, like car exhaust or traffic jams.
This error leads us to underestimate just how much these platforms degrade global epistemic conditions. To illustrate my point, let me describe three mechanisms that I consider core to social media.
As Digital Nicotine
I used to smoke back in high school. It made me feel good–calm but awake–especially between classes when I worried about my grades and my future. Cigarettes also helped me connect with other people. Smoking signaled that I was alright, that I wasn't taking life too seriously, and that I could be trusted.
Social media feels similar. Seeing photos of friends partying or people liking your post activates some ancient circuits in the brain. The ones that help us work together, build relationships with others, and track our position in hierarchies. The same ones which allowed our ancestors to form stable hunter-gatherer bands, which turned out to be such a good survival strategy that we've taken over the world. That's why it feels so good to consume all that the social web has to offer: cute babies, pretty bodies, and outrageous news.
It would never have worked out without smartphones. They freed users from stationary, beige boxes that had to be shared with parents or siblings. Suddenly, everyone had a private device, a little portal into the social universe, so they could check in whenever and wherever they were. It was handy, instantaneous relief from boredom, kind of like having a quick smoke.
As Video Game
In games, players often create avatars, join groups, fight monsters, and collect experience points. On social media, they create profiles, join echo chambers, fight people from the outgroup, and collect followers. Both are fun and engaging and difficult to put down.
There's another genre of games that are hard to quit: slot machines. Casinos have greatly advanced the art of making them more addictive. For example, they play faster and with shorter breaks, increasing the number of games played per unit of time. That's clever, but still within the realm of acceptable practice. But how about this piece of psychological black magic: slot machines simulate the player almost winning, evoking the emotion of "almost getting it", tricking them into playing another round. The only purpose of these techniques is to keep people playing, even if there's no end-game.
In a similar fashion, the most important metric for a social media platform is engagement–how much time users spend actively using their site or app. To increase that metric, platforms often introduce features like videos or payments. But sometimes, they go for something a little more clever, like algorithmic feeds. These track a user's interactions and use that to display content that's likely to generate more interactions, no matter if it makes users feel good or bad. What matters is if the user will come back and play some more. Luckily for the platforms, there is no end-game.
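The feed logic described above can be sketched in a few lines (illustrative only; `rank_feed` and the interaction rates are invented, not any platform's actual algorithm): content is ordered purely by predicted interaction, so whatever provokes the most reactions rises to the top, regardless of how it makes the user feel.

```python
def rank_feed(posts, interaction_rate):
    """Order posts by the user's past interaction rate with each source.

    posts: list of (post_id, source) pairs
    interaction_rate: dict mapping source -> observed interaction rate
    """
    return sorted(posts,
                  key=lambda p: interaction_rate.get(p[1], 0.0),
                  reverse=True)

# Hypothetical history: the user reacts far more often to outrage content
# than to friends' updates or plain news.
posts = [("p1", "friend"), ("p2", "outrage_page"), ("p3", "news")]
rates = {"friend": 0.05, "outrage_page": 0.40, "news": 0.10}

print(rank_feed(posts, rates))
# outrage content surfaces first because it has the highest predicted engagement
```

Nothing in the objective distinguishes "engaged because delighted" from "engaged because furious", which is the asymmetry the paragraph above points at.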
As Attention Harvester
In "The Attention Merchants", Tim Wu tells the story of modern advertising. It begins when one man, a printer by trade, noticed an opportunity in the daily paper market: if he could get people to pay him to print stories about their products, he could drop the price from 5-6 cents to just 1 cent. But to do that, he would have to ensure people were reading those stories. And he had just the idea to make that happen.
His paper printed scandals, crime reports, and sometimes complete fabrications. It was the first paper to hire a reporter dedicated solely to hanging around the courthouse and producing stories about all the wild and ugly cases that flowed through those walls. It worked so well, in fact, that the paper's circulation exceeded that of any other local competitor. As others copied his model and achieved success, the advertising industry was born.
Over time, the model was applied to other media like radio, TV, and finally the Web. And while it was refined and expanded every step of the way, the core remained the same: capture the attention of an audience and sell it. There are three parties to this transaction: someone who wants to sell a product or service; an audience seeking entertainment; and the middleman who captures the attention of the second group and sells it to the first.
(This is where the phrase "If you are not paying for it, you're not the customer; you're the product being sold" originates.)
Social media companies are the attention merchants. They've refined the harvesting process by collecting data about their users and using it to display ads that are most likely to engage them. When one describes this in a positive light, they point out that these are ads about things people really care about. But attention merchants are locked into competition over a finite resource. So there's no natural limit to how far they're willing to "optimize" their harvesting. That is, at least in Wu's take, until the "product" revolts, like it has a few times before.
The Crux
This then is the crux of my worry: social media is an inherently exploitative medium. There's a lot of systematic effort–researcher/engineer hours–that goes into making it grab as much of people's attention as possible, at any cost. Because we ignore this point, we look at social media's side-effects as separate, contained externalities, which in turn suppresses any society-level response to the problem. Until we find an alternative, basically a new set of Schelling points for online presence and advertising, I expect these problems to continue getting worse.