Lumifer comments on Turning the Technical Crank - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Is this a good summary of your argument?
NNTP was a great solution to a lot of the problems caused by mailing lists.
We are facing similar problems now. A lot of people have their own sites where they host their own content. Either we miss out on great content because we don't trawl through a ton of different sites, or we try to make Less Wrong a central source for content, which comes with problems of its own.
NNTP would solve the problems we have now in a similar way to how it solved the problems with mailing lists. That is, it would provide a central repository for content and a way to access this content.
I currently think the best way to read the last point is that we should set up a Web API similar to the Blogger Web API. Discussing NNTP, at least to me, makes the solution appear a lot more complicated than it needs to be. Although I don't know much about NNTP, so I could be overlooking something very important, and I am interested in what your future posts will explore.
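For concreteness on what NNTP actually provides: it is a small line-oriented text protocol in which a server's OVER response summarizes each article as one tab-separated line (article number, Subject, From, Date, Message-ID, References, byte count, line count, per RFC 3977). A minimal parsing sketch — the sample line is invented data, not a real server response:

```python
# Sketch: parse one line of an NNTP OVER/XOVER response into a dict.
# Field order follows the standard overview format (RFC 3977 / RFC 2980).

OVERVIEW_FIELDS = [
    "number", "subject", "from", "date",
    "message_id", "references", "bytes", "lines",
]

def parse_overview_line(line: str) -> dict:
    """Split a tab-separated overview line into named fields."""
    parts = line.rstrip("\r\n").split("\t")
    # Pad in case optional trailing fields are absent.
    parts += [""] * (len(OVERVIEW_FIELDS) - len(parts))
    return dict(zip(OVERVIEW_FIELDS, parts))

# Example line as a server might send it (hypothetical data):
raw = ("3000\tTurning the Technical Crank\talice@example.org\t"
       "Sat, 23 Apr 2016 12:00:00 GMT\t<abc123@example.org>\t\t2048\t42")
article = parse_overview_line(raw)
print(article["subject"], article["message_id"])
```

The point is only that the whole content model is this simple and documented, which is what lets any client or server interoperate.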
With a Less Wrong Web API, websites could be created that act like views in a database. They would show only the content from a particular group or author. This content would, of course, be styled according to the style rules on the website.
These websites could be free, DNS name and web development costs aside, using services like GitHub Pages. This is because there should be no need for a back-end, as the content and user information are all hosted on Less Wrong. You post, retrieve content and vote using the API. It should also be fairly easy to create more complicated websites that aggregate and show posts based on user preferences, or even to create mobile applications.
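A sketch of the "view" idea, assuming a hypothetical API that returns all posts as JSON; the in-memory list below stands in for the response of an invented endpoint like GET /api/posts, and filtering it is all a front-end site would need to do:

```python
# Sketch: a front-end site as a "view" over a central content store.
# The posts list stands in for JSON fetched from a hypothetical
# Less Wrong API endpoint (endpoint and field names are made up).

posts = [
    {"id": 1, "author": "alice", "group": "rationality", "title": "A"},
    {"id": 2, "author": "bob",   "group": "meta",        "title": "B"},
    {"id": 3, "author": "alice", "group": "meta",        "title": "C"},
]

def view(posts, *, author=None, group=None):
    """Return only the posts matching the site's filter, like a DB view."""
    return [
        p for p in posts
        if (author is None or p["author"] == author)
        and (group is None or p["group"] == group)
    ]

# A static site serving as alice's blog would render just this subset,
# styled however that site likes:
alice_posts = view(posts, author="alice")
print([p["title"] for p in alice_posts])  # → ['A', 'C']
```

Since the filtering and styling are pure front-end work, such a site really could be static hosting with no back-end of its own.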
The solution to reading all that content is RSS. The solution to, basically, cross-linking comments hasn't been devised yet, I think.
So, that's Reddit with more freedom to set up custom CSS for subreddits? Or there are deeper differences?
As far as I see it, there are two basic classes of solutions.
The first type of solution is something like Reddit or Facebook's news feed, which involves two concepts: linkposts, which are links to or cross-posts of outside content, and normal posts, which are hosted by the site itself. Making use of RSS or Atom can automate the linkposts.
The second type of solution is something like the Blogger API with extended functionality that lets you access any content that has been posted using the API. It would also include, for example, the ability to list top posts based on some ranking system.
In the first type of solution, LessWrong.com is a hub that provides links to or copies of outside content. Smooth integration of comments and content hosted outside the site would, I think, be hard to do. Searching the linked content and handling its permissions nicely would be difficult as well.
In the second type of solution, LessWrong.com is just another site in the Less Wrong sphere. The functionality of all the sites in this sphere would be driven by the API. You post and retrieve using the API, which means that all posts and comments, regardless of their originating sites, are available globally. Creating a prototype for this type of solution shouldn't be too hard either, which is good.
The deeper difference is the elimination of linkposts. All content posted using the API can be retrieved using the API. It is not linked to. It is pulled from the one source using the API.
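The contrast can be sketched in a few lines; everything here is illustrative, not a real Less Wrong API:

```python
# Sketch contrasting the two solution classes (all names invented).

# API model: every post lives once in one canonical store.
store = {"p42": {"body": "canonical text", "author": "carol"}}

def retrieve(post_id: str) -> dict:
    """Any site in the sphere resolves the same record from the one source."""
    return store[post_id]

# Linkpost model: the hub stores only a pointer to outside content,
# so search, comments and permissions stay fragmented across sites.
linkpost = {"url": "https://example.org/some-post"}

print(retrieve("p42")["body"])  # → canonical text
```

In the API model every site that calls `retrieve` gets the identical record, which is what eliminates the linkpost as a concept.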
The closest existing solutions are off-site comment management systems like Disqus. But they're proprietary comment storage providers, not a neutral API. And each such provider has its own model of what comments are and you can't change it to e.g. add karma if it doesn't do what you want.
Disqus is just a SaaS provider for a commenting subsystem. The trick is to integrate comments for/from multiple websites into something whole.
Solving such integration and interoperability problems is what standards are for. At some point the Internet decided it didn't feel like using a standard protocol for discussion anymore, which is why it's even a problem in the first place.
(http is not a discussion protocol. Not that I think you believe it is, just preempting the obvious objection)
That's an interesting point. What are the reasons NNTP and Usenet got essentially discarded? Are some of these reasons good ones?
Usenet is just one example of a much bigger trend of the last twenty years: the Net - standardized protocols with multiple interoperable open-source clients and servers, and services being offered either for money or freely - being replaced with the Web - proprietary services locking in your data, letting you talk only to other people who use that same service, forbidding client software modifications, and being ad-supported.
Instant messaging with multi-protocol clients and some open protocols was replaced by many tens of incompatible services, from Google Talk to Whatsapp. Software telephony (VOIP) and videoconferencing, which had some initial success with free services (Jingle, the SIP standards) was replaced by the likes of Skype. Group chat (IRC) has been mostly displaced by services like Slack.
There are many stories like these, and many more examples I could give for each story. The common theme isn't that the open, interoperable solution used to rule these markets - they didn't always. It's that they used to exist, and now they almost never do.
Explaining why this happened is hard. There are various theories, but I don't know if any of them is generally accepted as the single main cause; maybe there are a lot of things all pushing in the same direction.
There are other possibilities, too, which I don't have the time to note right now. This is late in the night for me, so I apologize if this comment is a bit incoherent.
The Web, of course, is nothing but a standardized protocol with multiple interoperable open-source clients and servers, and services being offered either for money or freely. I am not sure why you would want a lot of different protocols.
The net's big thing is that it's dumb and all the intelligence is at the endpoints (compare to the telephone network). The web keeps that vital feature.
That's not a feature of the web as opposed to the 'net. That's business practices and they are indifferent to what your underlying protocol is. For example you mention VOIP and that's not the "web".
Never do? Really? I think you're overreaching in a major way. Nothing happened to the two biggies -- HTTP and email. There are incompatible chat networks? So what, big deal...
Sigh. HTTP? An ad-based service would prefer a welded-shut client, but in practice the great majority of ads are displayed in browsers which are perfectly capable of using ad-blockers. Somehow Google survives.
No, not really. Here: people like money. Also: people are willing to invest money (which can be converted into time and effort) if they think it will make them more money. TANSTAAFL and all that...
This is like asking why, before HTTP, we needed different protocols for email and IRC and usenet, when we already had standardized TCP underneath. HTTP is an agnostic communication protocol like TCP, not an application protocol like email.
The application-level service exposed by modern websites is very rarely - and never unintentionally or 'by default' - a standardized (i.e. documented) protocol. You can't realistically write a new client for Facebook, and even if you did, it would break every other week as Facebook changed their site.
I use the example of Facebook advisedly. They expose a limited API, which deliberately doesn't include all the bits they don't want you to use (like Messenger), and is further restricted by TOS which explicitly forbids clients that would replace a major part of Facebook itself.
That's true. But another vital feature of the net is that most traffic runs over standardized, open protocols.
Imagine a world where nothing was standardized above the IP layer, or even merely nothing about UDP, TCP and ICMP. No DNS, email, NFS, SSH, LDAP, none of the literally thousands of open protocols that make the Net as we know it work. Just proprietary applications, each of which can only talk to itself. That's the world of the web applications.
(Not web content, which is a good concept, with hyperlinks and so forth, but dynamic web applications like facebook or gmail.)
I mentioned VOIP exactly because I was talking about a more general process, of which the Web - or rather modern web apps - is only one example.
The business practice of ad-driven revenue cares about your underlying protocol. It requires restricting the user's control over their experience - similarly to DRM - because few users would willingly choose to see ads if there was a simple switch in the client software to turn them off. And that's what would happen with an open protocol with competing open source clients.
Email is pretty much the only survivor (despite inroads by webmail services). That's why I said "almost" never do. And HTTP isn't an application protocol. Can you think of any example other than email?
Google survives because the great majority of people don't use ad blockers. Smaller sites don't always survive and many of them are now installing ad blocker blockers. Many people have been predicting the implosion of a supposed ad revenue bubble for many years now; I don't have an opinion on the subject, but it clearly hasn't happened yet.
That doesn't explain the shift over time from business models where users paid for service, to ad-supported revenue. On the other hand, if you can explain that shift, then it predicts that ad-supported services will eschew open protocols.
Huh? HTTP is certainly an application protocol: you have a web client talking to a web server. The application delivers web pages to the client. It is by no means an "agnostic" protocol. You can, of course, use it to deliver binary blobs, but so can email.
The thing is, because the web ate everything, we're just moving one meta level up. You can argue that HTTP is supplanting TCP/IP and a browser is supplanting OS. We're building layers upon layers matryoshka-style. But that's a bigger and a different discussion than talking about interoperability. HTTP is still an open protocol with open-source implementations available at both ends.
You are very persistently ignoring reality. The great majority of ads are delivered in browsers which are NOT restricting the "user's control over their experience" and which are freely available as "competing open source clients".
Sure. FTP for example.
Why is that a problem? If they can't survive they shouldn't.
The before-the-web internet did not have a business model where users paid for service. It pretty much had no business model at all.
This from here seems pretty accurate for Usenet:
Regarding NNTP for Less Wrong, I think we also have to take into account that people want to control how their content is displayed/styled. Their own separate blogs easily allow this.
Not just about how it's displayed/styled. People want control over what kinds of comments get attached to their writing.
I think this is the key driver of the move from open systems to closed: control. The web has succeeded because it clearly defines ownership of a site, and the owner can limit content however they like.
My opinion? Convenience. It's more convenient for the user to not have to configure a reader, and it's more convenient for the developer of the forum to not conform to a standard. (edit: I would add 'mobility', but that wasn't an issue until long after the transition)
And it's more convenient for the owner's monetization not to have an easy way to clone their content, or to view it without ads. What Dan said elsewhere about all the major IM players ditching XMPP applies.
[Edited to add: This isn't even just an NNTP thing. Everything has been absorbed by HTTP these days. Users forgot that the web was not the net, and somewhere along the line developers did too.]
I find it difficult to believe that mere convenience, even amplified with the network effect, would have such a drastic result. As you say, HTTP ate everything. What allowed it to do that?
It's more appropriate to say that the Web ate everything, and HTTP was dragged along with it. There are well known reasons why the Web almost always wins out, as long as the browsers of the day are technologically capable of doing what you need. (E.g. we used to need Flash and Java applets, but once we no longer did, we got rid of them.)
Even when you're building a pure service or API, it has to be HTTP or else web clients won't be able to access it. And once you've built an HTTP service, valid reasons to also build a non-HTTP equivalent are rare: high performance or efficiency or full duplex semantics. These are rarely needed.
Finally, there's a huge pool of coders specializing in web technologies.
HTTP eating everything isn't so bad. It makes everything much slower than raw TCP, and it forces the horribly broken TLS certificate authority model, but it also has a lot of advantages for many applications. The real problem is the replacement of open standard protocols, which can be written on top of HTTP as well as TCP, with proprietary ones.
I've been asking for them and got nothing but some mumbling about convenience. Why did the Web win out in the 90s? Do you think it was a good thing or a bad thing?
If you specify that your client is a browser, well, duh. That is not always the case, though.
But you've been laying this problem at the feet of the web/HTTP victory. So HTTP is not the problem?
Just a guess: having to install a special client? The browser is everywhere (it comes with the operating system), so you can use web pages on your own computer, at school, at work, at neighbor's computer, at web cafe, etc. If you have to install your own client, outside of your own computer, you are often not allowed to do it. Also, many people just don't know how to install programs.
And when most people use browsers, most debates will be there, so the rest will follow.
That doesn't explain why people abandoned Usenet. They had the clients installed, they just stopped using them.
The number of people using the Internet and the Web has been increasing geometrically for more than two decades. New users joined new services, perhaps for the reasons I gave in my other comment. Soon enough the existing Usenet users were greatly outnumbered, so they went to where the content and the other commenters were.
Yes, the network effect. But is that all?
The e-mail client that came pre-installed with Windows 95 and several later Windowses also included newsgroup functionality.