The Web, of course, is nothing but a standardized protocol with multiple interoperable open-source clients and servers, and services offered either for money or freely. I am not sure why you would want a lot of different protocols.
This is like asking why, before HTTP, we needed different protocols for email and IRC and usenet, when we already had standardized TCP underneath. HTTP is an agnostic communication protocol like TCP, not an application protocol like email.
The application-level service exposed by modern websites is very rarely - and never unintentionally or 'by default' - a standardized (i.e. documented) protocol. You can't realistically write a new client for Facebook, and even if you did, it would break every other week as Facebook changed their site.
I use the example of Facebook advisedly. They expose a limited API, which deliberately doesn't include all the bits they don't want you to use (like Messenger), and is further restricted by TOS which explicitly forbids clients that would replace a major part of Facebook itself.
The net's big thing is that it's dumb and all the intelligence is at the endpoints (compare to the telephone network). The web keeps that vital feature.
That's true. But another vital feature of the net is that most traffic runs over standardized, open protocols.
Imagine a world where nothing was standardized above the IP layer, or even just nothing above UDP, TCP, and ICMP. No DNS, email, NFS, SSH, LDAP, none of the literally thousands of open protocols that make the Net as we know it work. Just proprietary applications, each of which can only talk to itself. That's the world of web applications.
(Not web content, which is a good concept, with hyperlinks and so forth, but dynamic web applications like facebook or gmail.)
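The point about open protocols can be made concrete with email, the canonical survivor. Because the message format is documented (RFC 5322), any client can produce a message that any other client can parse. A minimal sketch using only the Python standard library; the addresses and subject are invented for illustration:

```python
# Email survives as an open protocol because its message format is
# documented, so independent clients interoperate by construction.
# Addresses here are made-up examples.
from email.message import EmailMessage
from email import message_from_string

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Interoperability"
msg.set_content("Any RFC-compliant client can read this.")

wire = msg.as_string()              # the standardized on-the-wire form
parsed = message_from_string(wire)  # a "different client" parsing it
print(parsed["Subject"])            # -> Interoperability
```

Nothing about a proprietary web application gives you this property: there is no documented wire form a second client could target.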
That's not a feature of the web as opposed to the 'net. Those are business practices, and they are indifferent to what your underlying protocol is. For example, you mention VOIP, and that's not the "web".
I mentioned VOIP exactly because I was talking about a more general process, of which the Web - or rather modern web apps - is only one example.
The business practice of ad-driven revenue cares about your underlying protocol. It requires restricting the user's control over their experience - similarly to DRM - because few users would willingly choose to see ads if there was a simple switch in the client software to turn them off. And that's what would happen with an open protocol with competing open source clients.
Never do? Really? I think you're overreaching in a major way. Nothing happened to the two biggies -- HTTP and email. There are incompatible chat networks? So what, big deal...
Email is pretty much the only survivor (despite inroads by webmail services). That's why I said "almost" never do. And HTTP isn't an application protocol. Can you think of any example other than email?
Sigh. HTTP? An ad-based service would prefer a welded-shut client, but in practice the great majority of ads are displayed in browsers which are perfectly capable of using ad-blockers. Somehow Google survives.
Google survives because the great majority of people don't use ad blockers. Smaller sites don't always survive and many of them are now installing ad blocker blockers. Many people have been predicting the implosion of a supposed ad revenue bubble for many years now; I don't have an opinion on the subject, but it clearly hasn't happened yet.
People like money. Also: people are willing to invest money (which can be converted into time and effort) if they think it will make them more money. TANSTAAFL and all that...
That doesn't explain the shift over time from business models where users paid for service to ad-supported revenue. On the other hand, whatever does explain that shift also predicts that ad-supported services will eschew open protocols.
HTTP is an agnostic communication protocol like TCP, not an application protocol like email.
Huh? HTTP is certainly an application protocol: you have a web client talking to a web server. The application delivers web pages to the client. It is by no means an "agnostic" protocol. You can, of course, use it to deliver binary blobs, but so can email.
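The request/response structure is easy to demonstrate: HTTP has verbs, headers, and status codes, which is application-level machinery a raw byte pipe like TCP lacks. A throwaway sketch using only the Python standard library, with a local server on an ephemeral port and an arbitrary payload:

```python
# HTTP as an application protocol: a client issues a verb against a
# resource and gets back a status code, headers, and a body.
# The payload "hello" is arbitrary.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)                  # application-level status
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # ephemeral local port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")                         # protocol verb + resource
resp = conn.getresponse()
text = resp.read().decode()
print(resp.status, text)                         # -> 200 hello
conn.close()
server.shutdown()
```

Whether that makes HTTP "agnostic" is exactly the dispute above: the protocol is standardized, but the application semantics layered on top of it by a given site usually are not.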
The thing is, because the web ate everything, we're just moving one meta-level up. You can argue that HTTP is supplanting TCP/IP and the browser is supplanting the OS. We're building layers upon layers, matryoshka-style...
A few months ago, Vaniver wrote a really long post speculating about potential futures for Less Wrong, with a focus on the idea that the spread of the Less Wrong diaspora has left the site weak and fragmented. I wasn't here for our high water mark, so I don't really have an informed opinion on what has socially changed since then. But a number of complaints are technical, and as an IT person, I thought I had some useful things to say.
I argued at the time that many of the technical challenges of the diaspora were solved problems, and that the solution was NNTP -- an ancient, yet still extant, discussion protocol. I am something of a crank on the subject and didn't expect much of a reception. I was pleasantly surprised by the 18 karma it generated, and tried to write up a full post arguing the point.
I failed. I was trying to write a manifesto, didn't really know how to do it right, and kept running into a vast inferential distance I couldn't seem to cross. I'm a product of a prior age of the Internet, from before the http prefix assumed its imperial crown; I kept wanting to say things that I knew would make no sense to anyone who came of age this millennium. I got bogged down in irrelevant technical minutiae about how to implement features X, Y, and Z. Eventually I decided I was attacking the wrong problem: I was thinking about 'how do I promote NNTP', when really I should have been going after 'what would an ideal discussion platform look like, and how does NNTP get us there, if it does?'
So I'm going to go after that first, and work on the inferential distance problem, and then I'm going to talk about NNTP, and see where that goes and what could be done better. I still believe it's the closest thing to a good, available technological Schelling point, but it's going to take a lot of words to get there from here, and I might change my mind under persuasive argument. We'll see.
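Part of why NNTP makes a plausible technological Schelling point is how small the protocol surface is: it's a set of documented text commands and numeric responses (RFC 3977), so writing a client is straightforward. A minimal offline sketch; the group name and server response below are invented, and nothing touches the network:

```python
# Formatting and parsing NNTP protocol lines per RFC 3977.
# The group name and the 211 response here are hypothetical examples;
# no connection is made.

def nntp_command(verb, *args):
    """Format one NNTP command line (CRLF-terminated, per RFC 3977)."""
    return " ".join((verb,) + args) + "\r\n"

def parse_response(line):
    """Split an NNTP response into (numeric status code, remaining text)."""
    code, _, rest = line.strip().partition(" ")
    return int(code), rest

cmd = nntp_command("GROUP", "lesswrong.discussion")   # hypothetical group
code, rest = parse_response("211 1234 3000 4233 lesswrong.discussion")
print(cmd.strip())   # GROUP lesswrong.discussion
print(code)          # 211 = group selected; counts and range follow
```

That an afternoon's work gets you a working protocol skeleton is the whole contrast with scraping a proprietary web application.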
Fortunately, this is Less Wrong, and sequences are a thing here. This is the first post in an intended sequence on mechanisms of discussion. I know it's a bit off the beaten track of Less Wrong subject matter. I posit that it's both relevant to our difficulties and probably more useful and/or interesting than most of what comes through these days. I just took the 2016 survey and it has a couple of sections on the effects of the diaspora, so I'm guessing it's on topic for meta purposes if not for site-subject purposes.
Less Than Ideal Discussion
To solve a problem you must first define it. Looking at the LessWrong 2.0 post, I see the following technical problems, at a minimum; I'll edit this with suggestions from comments.
I see these meta-technical problems:
Slightly Less Horrible Discussion
"Solving" community maintenance is a hard problem, but to the extent that pieces of it can be solved technologically, the solution might include these ultra-high-level elements:
As with the previous, I'll update this from the comments if necessary.
Getting There From Here
As I said at the start, I feel on firmer ground talking about technical issues than social ones. But I have to acknowledge one strong social opinion: I believe the greatest factor in Less Wrong's decline is the departure of our best authors for personal blogs. Any plan for revitalization has to provide an improved substitute for a personal blog, because that's where everyone seems to end up going. You need something that looks and behaves like a blog to the author or casual readers, but integrates seamlessly into a community discussion gateway.
I argue that this can be achieved. I argue that the technical challenges are solvable and the inherent coordination problem is also solvable, provided the people involved still have an interest in solving it.
And I argue that it can be done -- and done better than what we have now -- using technology that has existed since the '90s.
I don't argue that this actually will be achieved in anything like the way I think it ought to be. As mentioned up top, I am a crank, and I have no access whatsoever to anybody with any community pull. My odds of pushing through this agenda are basically nil. But we're all about crazy thought experiments, right?
This topic is something I've wanted to write about for a long time. Since it's not typical Less Wrong fare, I'll take the karma on this post as a referendum on whether the community would like to see it here.
Assuming there's interest, the sequence will look something like this (subject to reorganization as I go along, since I'm pulling this from some lengthy but horribly disorganized notes; in particular I might swap subsequences 2 and 3):
(Meta-meta: This post was written in Markdown, converted to HTML for posting using Pandoc, and took around four hours to write. I can often be found lurking on #lesswrong or #slatestarcodex on workday afternoons if anyone wants to discuss it, but I don't promise to answer quickly because, well, workday.)
[Edited to add: At +10/92% karma I figure continuing is probably worth it. After reading comments I'm going to try to slim it down a lot from the outline above, though. I still want to hit all those points but they probably don't all need a full post's space. Note that I'm not Scott or Eliezer, I write like I bleed, so what I do post will likely be spaced out]