(Part of a sequence on discussion technology and NNTP. As last time, I should probably emphasize that I am a crank on this subject and do not actually expect anything I recommend to be implemented. Add whatever salt you feel is necessary)[1]


If there is one thing I hope readers get out of this sequence, it is this: The Web Browser is Not Your Client.

It looks like you have three or four viable clients -- IE, Firefox, Chrome, et al. You don't. You have one. It has a subforum listing with two items at the top of the display; some widgets on the right hand side for user details, RSS feed, meetups; the top-level post display; and below that, replies nested in the usual way.

Changing your browser has the exact same effect on your Less Wrong experience as changing your operating system, i.e. next to none.

For comparison, consider the Less Wrong IRC, where you can tune your experience with a wide range of different software. If you don't like your UX, there are other clients that give a different UX to the same content and community.

That is how the mechanism of discussion used to work, and it no longer does. Today, your user experience (UX) in a given community is dictated mostly by the admins of that community, and software development is often neither their forte nor something they have time for. I'll often find myself snarkily responding to feature requests with "you know, someone wrote something that does that 20 years ago, but no one uses it."

Semantic Collapse

What defines a client? More specifically, what defines a discussion client, a Less Wrong client?

The toolchain by which you read LW probably looks something like this; anyone who's read the source please correct me if I'm off:

Browser -> HTTP server -> LW UI application -> Reddit API -> Backend database.

The database stores all the information about users, posts, etc. The API presents subsets of that information in a way that's convenient for a web application to consume (probably JSON objects, though I haven't checked). The UI layer generates a web page layout and content using that information, which is then presented -- in the form of (mostly) HTML -- by the HTTP server layer to your browser. Your browser figures out what color pixels go where.

All of this is a gross oversimplification, obviously.
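For illustration only (this is a guess at the general shape, not the actual Reddit/LW API), the kind of object such an API might hand to the UI layer looks something like:

# Hypothetical comment payload, roughly what a forum API might return as JSON.
comment = {
    "id": "abc123",          # made-up identifier
    "parent_id": "xyz789",   # the post or comment this replies to
    "author": "somebody",
    "created_utc": 1467331200,
    "body": "lorem ipsum nonsensical statement involving plankton....",
    "score": 19,
}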

In some sense, the browser is self-evidently a client: It talks to an HTTP server, receives hypertext, renders it, etc. It's a UI for an HTTP server.

But consider the following problem: Find and display all comments by me that are children of this post, and only those comments, using only browser UI elements, i.e. not the LW-specific page widgets. You cannot -- and I'd be pretty surprised if you could make a browser extension that could do it without resorting to the API, skipping the previous elements in the chain above. For that matter, if you can do it with the existing page widgets, I'd love to know how.

That isn't because the browser is poorly designed; it's because the browser lacks the semantic information to figure out what elements of the page constitute a comment, a post, an author. That information was lost in translation somewhere along the way.

Your browser isn't actually interacting with the discussion. Its role is more akin to an operating system than a client. It doesn't define a UX. It provides a shell, a set of system primitives, and a widget collection that can be used to build a UX. Similarly, HTTP is not the successor to NNTP; the successor is the plethora of APIs, for which HTTP is merely a substrate.

The Discussion Client is the point where semantic metadata is translated into display metadata; where you go from 'I have post A from user B with content C' to 'I have a text string H positioned above visual container P containing text string S.' Or, more concretely, when you go from this:

Author: somebody
Subject: I am right, you are mistaken, he is mindkilled.
Date: timestamp
Content: lorem ipsum nonsensical statement involving plankton....

to this:

<h1>I am right, you are mistaken, he is mindkilled.</h1>
<div><span align=left>somebody</span><span align=right>timestamp</span></div>
<div><p>lorem ipsum nonsensical statement involving plankton....</p></div>

That happens at the web application layer. That's the part that generates the subforum headings, the interface widgets, the display format of the comment tree. That's the part that defines your Less Wrong experience, as a reader, commenter, or writer.
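A toy sketch of that translation step, using the field names from the example above (illustrative only, not LW's actual rendering code):

import html

def render_post(post):
    # Semantic fields go in; anonymous strings in a layout come out.
    return (
        "<h1>" + html.escape(post["subject"]) + "</h1>\n"
        "<div><span align=left>" + html.escape(post["author"]) + "</span>"
        "<span align=right>" + html.escape(post["date"]) + "</span></div>\n"
        "<div><p>" + html.escape(post["content"]) + "</p></div>"
    )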

That is your client, not your web browser. If it doesn't suit your needs, if it's missing features you'd like to have, well, you probably take for granted that you're stuck with it.

But it doesn't have to be that way.

Mechanism and Policy

One of the difficulties forming an argument about clients is that the proportion of people who have ever had a choice of clients available for any given service keeps shrinking. I have this mental image of the Average Internet User as having no real concept for this.

Then I think about email. Most people have probably used at least two different clients for email, even if it's just Gmail and their phone's built-in mail app. Or perhaps Outlook, if they're using a company system. And they (I think?) mostly take for granted that if they don't like Outlook they can use something else, or if they don't like their phone's mail app they can install a different one. They assume, correctly, that the content and function of their mail account is not tied to the client application they use to work with it.

(They may make the same assumption about web-based services, on the reasoning that if they don't like IE they can switch to Firefox, or if they don't like Firefox they can switch to Chrome. They are incorrect, because The Web Browser is Not Their Client)

Email does a good job of separating mechanism from policy. Its format is defined in RFC 5322 and its transmission protocol is defined in RFC 5321. Neither defines any conventions for user interfaces. There are good reasons for that from a software-design standpoint, but more relevant to our discussion is that interface conventions change more rapidly than the objects they interface with. Forum features change with the times; but the concepts of a Post, an Author, or a Reply are forever.

The benefit of this separation: If someone sends you mail from Outlook, you don't need to use Outlook to read it. You can use something else -- something that may look and behave entirely differently, in a manner more to your liking.

The comparison: If there is a discussion on Less Wrong, you do need to use the Less Wrong UI to read it. The same goes for, say, Facebook.

I object to this.

Standards as Schelling Points

One could argue that the lack of choice is for lack of interest. Less Wrong, like Reddit on which it is based, has an API. One could write a native client. Reddit does have them.

Let's take a tangent and talk about Reddit. Seems like they might have done something right. They have (I think?) the largest contiguous discussion community on the net today. And they have a published API for talking to it. It's even in use.

The problem with this method is that Reddit's API applies only to Reddit. I say problem, singular, but it's really problem, plural, because it hits users and developers in different ways.

On the user end, it means you can't have a unified user interface across different web forums; other forum servers have entirely different APIs, or none at all.[2] It also makes life difficult when you want to move from one forum to another.

On the developer end, something very ugly happens when a content provider defines its own provision mechanism. Yes, you can write a competing client. But your client exists only at the provider's sufferance, subject to their decision not to make incompatible API changes or just pull the plug on you and your users outright. That isn't paranoia; in at least one case, it actually happened. Using an agreed-upon standard limits this sort of misbehavior, although it can still happen in other ways.

NNTP is a standard for discussion, like SMTP is for email. It is defined in RFC 3977 and its data format is defined in RFC 5536. The point of a standard is to ensure lasting interoperability; because it is a standard, it serves as a deliberately-constructed Schelling point, a place where unrelated developers can converge without further coordination.

Expertise is a Bottleneck

If you're trying to build a high-quality community, you want a closed system. Well kept gardens die by pacifism, and it's impossible to fully moderate an open system. But if you're building a communication infrastructure, you want an open system.

In the early Usenet days, this was exactly what existed; NNTP was standardized and open, but Usenet was a de-facto closed community, accessible mostly to academics. Then AOL hooked its customers into the system. The closed community became open, and the Eternal September began.[3] I suspect, but can't prove, that this was a partial cause of the flight of discussion from Usenet to closed web forums.

I don't think that was the appropriate response. I think the appropriate response was private NNTP networks or even single servers, not connected to Usenet at large.

Modern web forums throw the open-infrastructure baby out with the open-community bathwater. The result, in our specific case, is that if we want something not provided by the default Less Wrong interface, it must be implemented by Less Wrongers.

I don't think UI implementation is our comparative advantage. In fact I know it isn't, or the Less Wrong UI wouldn't suck so hard. We're pretty big by web-forum standards, but we still contain only a tiny fraction of the Internet's technical expertise.

The situation is even worse among the diaspora; for example, at SSC, if Scott's readers want something new out of the interface, it must be implemented either by Scott himself or his agents. That doesn't scale.

One of the major benefits of a standardized, open infrastructure is that your developer base is no longer limited to a single community. Any software written by any member of any community backed by the same communication standard is yours for the using. Additionally, the developers are competing for the attention of readers, not admins; you can expect the reader-facing feature set to improve accordingly. If readers want different UI functionality, the community admins don't need to be involved at all.

A Real Web Client

When I wrote the intro to this sequence, the most common thing people insisted on was this: Any system that actually gets used must allow links from the web, and those links must reach a web page.

I completely, if grudgingly, agree. No matter how insightful a post is, if people can't link to it, it will not spread. No matter how interesting a post is, if Google doesn't index it, it doesn't exist.

One way to achieve a common interface to an otherwise-nonstandard forum is to write a gateway program, something that answers NNTP requests and does magic to translate them to whatever the forum understands. This can work and is better than nothing, but I don't like it -- I'll explain why in another post.

Assuming I can suppress my gag reflex for the next few moments, allow me to propose: a web client.

(No, I don't mean write a new browser. The Browser Is Not Your Client.[4])

Real NNTP clients use the OS's widget set to build their UI and talk to the discussion board using NNTP. There is no fundamental reason the same cannot be done using the browser's widget set. Google did it. Before them, Deja News did it. Both of them suck, but they suck on the UI level. They are still proof that the concept can work.

I imagine an NNTP-backed site where casual visitors never need to know that's what they're dealing with. They see something very similar to a web forum or a blog, but whatever software today talks to a database on the back end, instead talks to NNTP, which is the canonical source of posts and post metadata. For example, it gets the results of a link to http://lesswrong.com/posts/message_id.html by sending ARTICLE message_id to its upstream NNTP server (which may be hosted on the same system), just as a native client would.
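A minimal sketch of that lookup using Python's nntplib (in the standard library through 3.12); the upstream host name is made up:

import nntplib

def fetch_article(message_id):
    # Ask the upstream server for the canonical copy of the post.
    with nntplib.NNTP("news.example.org") as server:           # assumed upstream host
        _resp, info = server.article("<" + message_id + ">")   # sends ARTICLE <message-id>
        return b"\r\n".join(info.lines).decode("utf-8", errors="replace")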

To the drive-by reader, nothing has changed. Except, maybe, one thing. When a regular reader, someone who's been around long enough to care about such things, says "Hey, I want feature X," and our hypothetical web client doesn't have it, I can now answer:

Someone wrote something that does that twenty years ago.

Here is how to get it.



  1. Meta-meta: This post took about eight hours to research and write, plus two weeks procrastinating. If anyone wants to discuss it in realtime, you can find me on #lesswrong or, if you insist, the LW Slack.

  2. The possibility of "universal clients" that understand multiple APIs is an interesting case, as with Pidgin for IM services. I might talk about those later.

  3. Ironically, despite my nostalgia for Usenet, I was a part of said September; or at least its aftermath.

  4. Okay, that was a little shoehorned in. The important thing is this: What I tell you three times is true.

Comments

I completely agree with everything you've said in the first half of your post. But I strongly disagree that NNTP is a good choice for a backend standard. (At this point you can say that you'll argue your case in a future post instead of replying to this comment.)

The NNTP model differs from modern forums in a crucial respect: it is a distributed system. (I use this in the sense of 'decentralized'.) More precisely, it is an AP system in CAP-theorem terms: it doesn't provide consistency in synchronizing messages between servers (and it makes messages immutable, so it gets the worst of both worlds, really). This directly leads to all the problems we'd have in using NNTP for a forum, such as no true editing or deleting of messages. Because a message is not tied to a domain (or a server), and is not referenced but copied to other servers, authentication (proving you own an identity and wrote a post) and authorization (e.g. mod powers) become nontrivial. Messages don't have globally unique IDs, or even just global IDs. Implementing something like karma becomes an interesting computer science exercise involving decentralized consensus algorithms, rather than a trivial feature of a centralized database. And so on.

But we don't need to deal with the problems of distributed systems, because web forums aren't distributed! What we want is a standard that will model the way forums already work, plus or minus some optional or disputed extensions. Making NNTP resemble a forum would require adding so many things on top that there's no point in using NNTP in the first place: it just doesn't fit the model we want.

A good forum model would tie users and messages to a particular server. It would make messages mutable (or perhaps expose an immutable history, but make the 'current' reference mutable, like git does). It would at least provide a substrate for mutable metadata that karma-like systems could use, even if these systems were specified as optional extensions to the standard. It would allow for some standardized message metadata (e.g. Content-Encoding and Content-Type equivalents). It would pretty much look like what you'd get if you designed the API of a generalized forum, talking JSON over HTTP, while trying not to imagine the client-side UI.

There's probably an existing standard or three like this somewhere in the dustbin of history.

NNTP also has a lot of more minor ugliness that I'd be happy to argue against. It's one of the HTTP/MIME/email family of header-body encodings, which are well known for producing fragile implementations (quick, recite the header folding rules!) and are all subtly different from one another, to make sure everyone's sufficiently confused. It relies on sometimes complex session state instead of simple separate requests. There's a bunch of optional features (many of historical interest), but at the same time the protocol is extremely underspecified (count how many times it says you SHOULD but not MUST do something, and MAY do quite the opposite instead). Any client-server pair written from scratch inevitably ends up speaking a highly restricted dialect, which doesn't match that of any other client or server.

Given all of this, the only possible value of using NNTP is the existing software that already implements it. But there's no implementation of an NNTP client in Javascript (unless you want to use emscripten), if only because Javascript in a browser can't open a raw TCP socket, so until the recent advent of websocket-to-tcp proxies, nobody could write one. And implementing a new HTTP-based server to a new (very simple) standard, basically just CRUD on a simple schema, is much easier than writing an NNTP JS client - IF you're willing to not make a distributed system.

A final note: one may well argue that we do want a distributed, decentralized system with immutable messages (or immutable old-message-versions), because such systems are inherently better. And in an ideal world I'd agree. But they're also far, far harder to get right, and the almost inevitable tradeoffs are hard to sell to users. I'm not convinced we need to solve the much harder distributed version of the problem here. (Also, many decentralization features can be added in a secondary layer on top of a centralized system if the core is well designed.)

At this point you can say that you'll argue your case in a future post instead of replying to this comment.

I will, but I'll answer you here anyway -- sorry for taking so long to reply.

I strongly disagree that NNTP is a good choice for a backend standard

I feel I should clarify that I don't think it's "good", so much as "less bad than the alternatives".

But we don't need to deal with the problems of distributed systems, because web forums aren't distributed!

Well, yes and no. Part of what got me on this track in the first place is the distributed nature of the diaspora. We have a network of more-and-more-loosely connected subcommunities that we'd like to keep together, but the diaspora authors like owning their own gardens. Any unified system probably needs to at least be capable of supporting that, or it's unlikely to get people to buy back in. It's not sufficient, but it is necessary, to allow network members to run their own server if they want.

That being said, it's of interest that NNTP doesn't have to be run distributed. You can have a standalone server, which makes things like auth a lot easier. A closed distribution network makes it harder, but not that much harder -- as long as every member trusts every other member to do auth honestly.

The auth problem as I see it boils down to "how can user X with an account on Less Wrong post to e.g. SSC without needing to create a separate account, while still giving SSC's owner the capability to reliably moderate or ban them." There are a few ways to attack the problem; I'm unsure of the best method but it's on my list of things to cover.

Given all of this, the only possible value of using NNTP is the existing software that already implements it.

This is a huge value, though, because most extant web forum, blogging, etc software is terrible for discussions of any nontrivial size.

There's probably an existing standard or three like this somewhere in the dustbin of history.

Is there?

That's a serious question, because I'd love to hear about alternative standards. My must-have list looks something like "has an RFC, has at least three currently-maintained, interoperable implementations from different authors, and treats discussion content as its payload, unmixed with UI chrome." I'm only aware of NNTP meeting those conditions, but my map is not the territory.

I feel I should clarify that I don't think it's "good", so much as "less bad than the alternatives".

Your proposal requires a lot of work: both coding, and the social effort of getting everyone to use new custom software on their backends. So we should compare it not to existing alternatives, but to potential solutions we could implement at similar cost.

Let's talk about a concrete alternative: a new protocol, using JSON over HTTP, with an API representing CRUD operations over a simple schema of users, posts, comments, et cetera; with some non-core features provided over existing protocols like RSS. An optional extension could provide e.g. server push notifications, but that would be for performance or convenience, not strictly for functionality.

It would be simpler to specify (compared to contorting NNTP), and everyone's used to JSON/HTTP CRUD. It would be simpler to implement - almost trivial, in fact - in any client or server language, easier than writing an HTTP to NNTP gateway even though NNTP servers already exist. It would better match the existing model of forums and users. And it would (more easily) allow integration with existing forum software, so we don't have to tell everyone they have to find a Linux host and install custom software, rather than finding a Wordpress+MySql host and installing this one plugin.
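To make that concrete, one guess at what such an API's surface could look like; every path here is illustrative, not part of any existing spec:

# Hypothetical CRUD surface for a forum: plain JSON over HTTP.
ENDPOINTS = [
    "GET    /posts?after=<id>&limit=<n>",   # list recent posts
    "GET    /posts/<id>",                   # read one post
    "POST   /posts",                        # create a post
    "PUT    /posts/<id>",                   # edit a post
    "DELETE /posts/<id>",                   # retract a post
    "GET    /posts/<id>/comments",          # comment tree for a post
    "POST   /posts/<id>/comments",          # reply
    "GET    /users/<name>",                 # profile, karma, etc.
]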

Part of what got me on this track in the first place is the distributed nature of the diaspora. We have a network of more-and-more-loosely connected subcommunities that we'd like to keep together, but the diaspora authors like owning their own gardens. Any unified system probably needs to at least be capable of supporting that, or it's unlikely to get people to buy back in. It's not sufficient, but it is necessary, to allow network members to run their own server if they want.

I think the current model is fine. Posts and comments are associated with forums (sites), and links to them are links to those sites. (As opposed to a distributed design like NNTP that forwards messages to different hosts.) User accounts are also associated with sites, but sites can delegate authentication to other sites via Google/Facebook login, OpenID, etc. Clients can aggregate data from different sites and crosslink posts by the same users on different sites. A site owner has moderator powers over content on their site, including comments by users whose account is registered at a different site.

The UXs for posters, commenters, readers, and site owners all need to be improved. But I don't see a problem with the basic model.

That being said, it's of interest that NNTP doesn't have to be run distributed. You can have a standalone server, which makes things like auth a lot easier.

Then you suffer all the problems of NNTP's distributed design (which I outlined in my first comment) without getting any of the benefits.

The auth problem as I see it boils down to "how can user X with an account on Less Wrong post to e.g. SSC without needing to create a separate account, while still giving SSC's owner the capability to reliably moderate or ban them." There are a few ways to attack the problem; I'm unsure of the best method but it's on my list of things to cover.

It seems easy to me. The user account lives on LW, but the actual comment lives on SSC, so an SSC mod can moderate it or ban the user from SSC. There are plenty of competing cross-site authentication systems and we don't even have to limit ourselves to supporting or endorsing one of them.

Also, we can just as easily support non-site-associated accounts, which are authenticated by a pubkey. System designers usually don't like this choice because it's too easy to create lots of new accounts, but frankly it's also very easy to create lots of Google accounts. SSC even allows completely auth-less commenting, so anyone can claim another's username, and it hasn't seemed to hurt them too badly yet.

This is a huge value, though, because most extant web forum, blogging, etc software is terrible for discussions of any nontrivial size.

I'll just repeat my core argument here. Extant NNTP software is far more terrible, if you penalize it for things like not supporting incoming hyperlinks, not allowing editing of posts, not having karma, having no existing web clients, etc. Adding those things to NNTP (both the protocol and the software) requires more work than building a new Web-friendly forum standard and implementations, and would also be much more difficult for site admins to adopt and install.

That's a serious question, because I'd love to hear about alternative standards. My must-have list looks something like "has an RFC, has at least three currently-maintained, interoperable implementations from different authors, and treats discussion content as its payload, unmixed with UI chrome."

I don't know of any concrete ones, but I haven't really searched for them either. It just feels as though it's likely there were some - which were ultimately unsuccessful, clearly.

Having an RFC isn't really that important. There are lots of well-documented, historically stable protocols with many opensource implementations that aren't any worse just because they haven't been published via the IETF or OASIS or ECMA or what have you.

Your proposal requires a lot of work

Well, yes. That's more or less why I expect it to never, ever happen. I did say I'm a crank with no serious hopes. ;-)

a new protocol, using JSON over HTTP, with an API representing CRUD operations over a simple schema of users, posts, comments, et cetera

While I don't object in theory to a new protocol, JSON over HTTP specifically is a paradigm I would like to destroy.

(which is kind of hilarious given that my day job involves an app with exactly that design)

Some kind of NNTP2 would be nice. The trouble with taking that approach is that, if the only implementation is your own, you haven't actually gained anything.

Admittedly every protocol has to start somewhere.

sites can delegate authentication to other sites via Google/Facebook login, OpenID

I had actually forgotten about OpenID until you and Lumifer mentioned it. Also, since you mention it, I'm a huge fan of pubkey-based auth and am bitterly disappointed that almost nothing I use supports it.

I'll just repeat my core argument here. Extant NNTP software is far more terrible, if you penalize it for things like...

I think this is our core disagreement. I find web forum software worse even after penalizing NNTP for everything you mention. Well, partially penalizing it; I don't acknowledge the lack of editing (supersedes exist), and it turns out links to netnews posts also exist. Which is something else that I'd forgotten. Which is funny, because following such a link is how I discovered Usenet.

Having an RFC isn't really that important.

Agreed. Any spec would do as long as it's widely implemented and can't be pulled out from under you. The RFC "requirement" is really trying to rule out cases where one party has de-facto control of the spec and an incentive to abuse it.

Well, yes. That's more or less why I expect it to never, ever happen. I did say I'm a crank with no serious hopes. ;-)

It's a pity that whatever energy exists on LW for discussing technological changes to the diaspora is exhausted on a non-serious proposal.

When you argue for something you don't expect to be accepted, you lose any reason to make reasonable compromises, lowering the chances of finding a mutually beneficial solution.

While I don't object in theory to a new protocol, JSON over HTTP specifically is a paradigm I would like to destroy.

I may share your feelings. But if you want an API to be accessible to Web clients, it pretty much has to be JSON over HTTP. Any other format you support will have to be in addition, not instead of that.

JSON isn't actually bad as a slightly-structured, self-describing, human-readable format. Maybe you prefer YAML or something, but I don't feel there's a lot of difference to be had. Certainly it's far better than non-self-describing, non-textual formats unless you really need to optimize for parsing performance or for size on the wire. And I'd argue you don't, for this use case.

HTTP is horrible (and I say this as someone who wrote a lot of low-level HTTP middleware, even a parser once). Using maybe 50% of the semantics, and pretty much 0% of the syntax, and adding the features of HTTP/2 and some others they couldn't fit into the spec, would be wonderful. But we don't really have that option; we're stuck with it as something we can't change or avoid using. And I too hate having to do that.

But you know what? The same is true of TCP, and of IPv4, and of the BSD socket API, and a thousand other fundamental API designs that have won in the marketplace. At some point we have to acknowledge reality to write useful software. A forum / discussion protocol doesn't conflict with JSON over HTTP (much). We need to focus on designing a good API, whatever it runs on.

If it helps, you can put the HTTP/JSON encoding in a separate specification, and be the happy user of a different-but-compatible encoding over a gateway.

I think this is our core disagreement. I find web forum software worse even after penalizing NNTP for everything you mention.

You don't address the point I feel is most important: the NNTP model (distributed immutable messages, users not tied to servers, mod powers and karma not in the spec, ...) just isn't the one we use and want to keep using on discussion forums.

it turns out links to netnews posts also exist.

But they don't work with supersedes, because they link to immutable message IDs. So the server has to dereference the link, has to have kept all the old (superseded) versions, and has to prove the supersede chain's validity to the client in case of signed messages. This is just unnecessarily ugly.

Besides, they are URIs, not URLs. That's not something the web can handle too well. You can include a server in the link, making a URL, but NNTP doesn't have a concept of an authoritative host (origin), so once again, why use NNTP if you're not going to move messages between servers, which is the whole point of the protocol? If you just want to store them at a single place, it would make as much sense to use shared IMAP. (Which is to say, not much.)

Before we get deep into protocols, is there any kind of a spec sheet anywhere?

Saying you want better software for discussions is... horribly vague. I have a strong feeling that we should figure out things like lists of desirable features, lists of undesirable misfeatures, choices of how one list will be traded off against the other list, etc. before we focus all the energy on stomping JSON into tiny little pieces.

Here's my shortlist of requirements:

Basic architecture: network of sites sharing an API (not an interface). A site can have a web client as part of the site (or several), but at least some clients can be written independently of a site. Users can choose to use different/customizable clients, and in particular, aggregate and cross-link content and users across sites. It should be possible, at least in theory, to write a non-web cross-site client with lots of custom features and use it as one's only interface to all discussion forums without any loss of functionality.

We need at least feature parity with LW, which is the most feature-full of diaspora blogs and forums; other sites tend to have subsets of the same features, so they should be able to disable e.g. private messages if they want to. So: top-level posts with trees of comments, both of which can be edited or retracted; posts have special status (tags, categories, require permissions to post, etc.); authenticated users (unless the site allows anonymous or pseudonymous comments), so a user's comments can be collated; permalinks to posts and comments; RSS feeds of various things; etc.

Users should follow the user@host pattern, so they can be followed across sites. Different authentication methods can be integrated (Local/Google/Facebook/OpenID/...) but the spec doesn't concern itself with that. User permissions should be stored at each site, and be powerful enough to allow different configurations, mod and admin powers, etc. Posts and messages should allow pubkey signatures, and users should be able to configure a signing key as part of their account, because some people really enjoy that.

In the LW 2.0 discussions, people proposed different variations on karma. The API should include the concept of a user's karma(s) on a site, but for voting etc. it should probably limit itself to storing and querying data, and let the implementation decide how to use it. So e.g. the server implementation could disallow posting to a user with insufficient karma, or the client implementation could hide downvoted comments. The API would specify the mechanism, not the policy.

Finally, there need to be implementations that are pain-free and cost-free for site admins to install. At the very least, it should not involve running completely custom server software, or completely rewriting existing web clients and their UX. Ideally, there would be easy adapters/plugins/... for existing client and/or server software.

I agree with most of this, with the exception that top-level posts should not have any special status at the protocol level other than not having a parent. Clients are free to present them specially, though, including whatever 'default' interface each site has. Whatever moderation layer exists may do the same.

I also dislike private messaging systems -- not so much because they shouldn't exist, but because they should be implemented as email accounts that only deliver mail among local users, so you can handle them in your regular email client if you want.

[Edit: Note that tags and a lot of other post metadata could be implemented as extra headers in a news article. Not karma, though.]

Your description of basic architecture in particular is an excellent summary of what I want out of a discussion protocol.

top-level posts should not have any special status at the protocol level other than not having a parent.

Those are implementation details. The point is that top-level or parent-less posts have a special semantic status: they start a new conversation.

I also dislike private messaging systems -- not so much because they shouldn't exist, but because they should be implemented as email accounts that only deliver mail among local users, so you can handle them in your regular email client if you want.

It's a matter of integration: I want the same settings, and client software, that you use for the rest of the forum to apply to privmsgs. For instance, blocking a user's messages, sending privmsgs as replies to forum threads (and displaying that correctly in the client), ...

And I don't want to have to use two different client applications at the same time (email & forum) for private vs public messages.

And most people only use webmail, and you can't tell gmail.com to display messages that live on the lesswrong.com IMAP server, if that's what you intended.

It's a matter of integration: I want the same settings, and client software, that you use for the rest of the forum to apply to privmsgs.

I don't share the preference, but I don't think this represents a conflict. There's no reason a web client couldn't present one UI to its users while doing two different things on the back end, IMAP for PMs and whatever else for the forum. Newsreaders do exactly that to support reply-by-email, and it works fine from what I've seen.

I'm very willing to engage in this. (And I described what I want in some of my other comments). I'll post my spec sheet (which I think includes most of Error's) in a separate reply. But first, before we get deep into feature lists and spec sheets:

Suppose we agree on a protocol (or whatever). Suppose it's so good that we can convince most people it's technologically and socially superior to existing solutions - not counting the unavoidable costs of using custom software and of changing things, which are significant.

Given all that, how likely will we be to 1) write all the code needed, to the quality of a production project (actually, multiple ones), and provide support etc. for the foreseeable future (or convincing others to help us do so); 2) convince enough diaspora site admins, and readers/commenters/users if applicable, to switch over?

Obviously this depends on how much our proposal improves (or promises to improve) on what we have now.

See my answer to Error, but for the "how likely" question the only possible answer that I can see is "One step at a time".

First you need an idea that's both exciting and gelled enough to have some shape which survives shaking and poking.

If enough people are enthusiastic about the idea, you write a white paper.

If enough people (or the right people) are enthusiastic about the white paper, you write a spec sheet for software.

If enough people continue to be enthusiastic about the idea, the white paper, and the spec sheet, you start coding.

If you get this far, you can start thinking about OSS projects, startups, and all these kinds of things.. :-)

P.S. Oh, and you shouldn't think of this project as "How do we reanimate LW and keep it shambling for a bit longer". You should think about it as "What kind of a new discussion framework can we bestow on the soon-to-be-grateful world" :-)

I get the impression most projects do that backwards, and that that's a large part of how we got into this giant mess of incompatible discussion APIs.

Somewhere later in this sequence I'm going to address the social problem of convincing people to buy back in. The very short version is: Make it more powerful than what they've got, so they have an incentive to move. Make sure they are still running their own shows, because status and sovereignty matter. And make it more convenient to migrate than to manage what they've got, because convenience is everything.

Once you get three or four diasporists back, network effects do the rest. But it needs to be an improvement for the individual migrant even if nobody else moves, otherwise the coordination problem involved is incredibly hard to beat.

You should think about it as "What kind of a new discussion framework can we bestow on the soon-to-be-grateful world" :-)

Sometimes I think the best way to promote my ideas would be to start an NNTP-backed forum hosting service. I know it's within my capabilities.

Then I realize that 1. that would be a lot of work, and I have a day job, 2. nobody cares except me, and 3. I would be competing with Reddit.

I had a list of...not features, exactly, but desirable elements, in the first post. I intended to update it from comments but didn't.

I want higher and deeper X-)

Higher in the sense of specifying desirables from some set of more-or-less terminal goals. For example, you say "centralized from the user perspective" -- and why do we want this? What is the end result you're trying to achieve?

Deeper in the sense of talking about base concepts. Will there be "posts" and "comments" as very different things? If so, will the trees be shallow (lots of posts, mostly with few comments, no necroing) or deep (few posts, mostly with lots of comments, necroing is encouraged)?

Will there be a "forum"? "subforums", maybe? Or will there be a pile of tagged pieces of text from which everyone assembles their own set to read? Will such concept as "follow an author" exist? How centralised or decentralised will things be? Who will exercise control and what kind of powers will they have?

That's not a complete set of questions at all, just a pointer at the level which will have to decided on and set in stone before you start discussing protocols.

When you argue for something you don't expect to be accepted, you lose any reason to make reasonable compromises, lowering the chances of finding a mutually beneficial solution.

If it helps, any compromises I make or don't make are irrelevant to anything that will actually happen. I don't think anyone in a position to define LW2.0 is even participating in the threads, though I do hope they're reading them.

I figure the best I can hope for is to be understood. I appreciate your arguments against more than you may realize -- because I can tell you're arguing from the position of someone who does understand, even if you don't agree.

Maybe you prefer YAML or something

YAML's the least-bad structured format I'm aware of, though that may say more about what formats I'm aware of than anything else. It's certainly easier to read and write than JSON; you could conceivably talk YAML over a telnet session without it being a major hassle.

I agree that non-textual formats are bad for most cases, including this one.

If it helps, you can put the HTTP/JSON encoding in a separate specification, and be the happy user of a different-but-compatible encoding over a gateway.

I wouldn't object to that, as long as 1. the specs evolved in tandem, and 2. the gateway was from http/json to (NNTP2?), rather than the other way around.

The temptation that arrangement is intended to avoid is devs responding to demands for ponies by kludging them into the HTTP/JSON spec without considering whether they can be meaningfully translated through a gateway without lossage.

But they don't work with supersedes, because they link to immutable message IDs.

This...might trip me up, actually. I was under the impression that requests for a previous message ID would return the superseding message instead. I appear to have gotten that from here but I can't find the corresponding reference in the RFCs. It's certainly the way it should work, but, well, should.

I need to spin up INN and test it.

You don't address the point I feel is most important: the NNTP model (distributed immutable messages, users not tied to servers, mod powers and karma not in the spec, ...) just isn't the one we use and want to keep using on discussion forums.

We either disagree on the desirable model or else on what the model actually is. I'm ambivalent about distributed architecture as long as interoperability is maintained. Mod powers not in the spec seems like a plus to me, not a minus. Today, as I understand it, posts to moderated groups get sent to an email address, which may have whatever moderation software you like behind it. Which is fine by me. Users not being tied to a particular server seems like a plus to me too. [edit: but I may misunderstand what you mean by that]

Karma's a legitimately hard problem. I don't feel like I need it, but I'm not terribly confident in that. To me its main benefit is to make it easier to sort through overly large threads for the stuff that's worth reading; having a functioning 'next unread post' key serves me just as well or better. To others...well, others may get other things out of it, which is why I'm not confident it's not needed.

I'll have to get back to you on immutability after experimenting with INN's response to supersedes.

If it helps, any compromises I make or don't make are irrelevant to anything that will actually happen.

That depends on how much you're willing to compromise before you see it as wasted effort to participate. Somewhere in the space of ideas there might be a proposal that everyone would accept as an improvement on the status quo.

I don't think anyone in a position to define LW2.0 is even participating in the threads, though I do hope they're reading them.

Someone is upvoting your posts besides me. This one is at +19.

I wouldn't object to that, as long as 1. the specs evolved in tandem, and 2. the gateway was from http/json to (NNTP2?), rather than the other way around.

I meant we could have one spec chapter describing types, messages, requests and responses, and then one or more 'encoding' chapters describing how these messages are represented in JSON over HTTP, or in... something else. So all encodings would be equal; there could be gateways, but there also could be servers supporting different encodings.

I don't think this is necessary, but if you insist on non-json/http encodings, it's probably better to do it this way rather than by translation.

I'm ambivalent about distributed architecture as long as interoperability is maintained.

A distributed system necessarily has fewer features and weaker guarantees or semantics than a non-distributed one. Distributed systems can also be much harder to implement. (NNTP is easy to implement, because it has very few features: messages are immutable, users are not authenticated...) So if you don't need a true distributed system, you shouldn't use one.

Mod powers not in the spec seems like a plus to me, not a minus.

As long as comments are stored on private servers, then mods (=admins) can delete them. A spec without mod powers has to store data where no-one but the poster can remove or change it. We're getting into distributed system design again.

Well, actually, there are ways around that. We could put all comments into a blockchain, which clients would verify, and you can't retroactively remove a block without clients at least knowing something was removed, and anyone with a copy of the missing block could prove it was the real one. But why?

Today, as I understand it, posts to moderated groups get sent to an email address, which may have whatever moderation software you like behind it.

We're talking about two different schemes. You're describing moderated mailing lists; messages need to be approved by mods before other members see them. I'm talking about the LW model: mods can retroactively remove (or, in theory, edit) messages. This too stems from the basic difference between systems with and without mutable messages. In a mailing list or an NNTP group, once clients got their copies of a post, there's no way for a mod to force them to forget it if they don't want to.

Users not being tied to a particular server seems like a plus to me too.

By "tied to a server" I mean authentication tied to the DNS name. To authenticate someone as foo@gmail.com using Google login or OpenID or an actual email-based auth system, you talk to gmail.com. The gmail.com admin can manipulate or revoke the foo account. And there's only one foo@gmail.com around.

Whereas in NNTP, if I understand correctly, I can put any string I like in the From: field. (Just like in classical email.) I might say I'm foo@gmail.com, but NNTP software won't talk to gmail.com to confirm that.

Someone is upvoting your posts besides me. This one is at +19.

Touche. It's kind of a shame that Main is out of commission, or I'd be earning a gazillion karma for this.

I meant we could have one spec chapter describing types, messages, requests and responses, and then one or more 'encoding' chapters describing how these messages are represented in JSON over HTTP, or in... something else.

Hrm. I actually really like this idea; it fits right in with my separate-form-from-function philosophy, and I think standardizing semantics is much more important than standardizing the format of messages over the wire (even though I do have strong preferences about the latter). You'd have to be awfully careful about what went into the spec, though, to allow for a wide range of representations. e.g. if you have a data structure that's an arbitrarily-nested dictionary, you're limiting yourself to formats that can represent such a type; otherwise you have the same sort of potential lossage you'd get through a gateway.

But in principle I like it.

[edit: If you were really careful about the spec, you might even be able to get an NNTP-compatible representation "for free"]

Whereas in NNTP, if I understand correctly, I can put any string I like in the From: field.

True with respect to the protocol. I was going to write about this in a future post but maybe it's better to talk about it now, if only to expose and (hopefully) repair flaws beforehand.

Yes, you can forge From headers, or mod approval headers, or anything really. But the forged message has to enter the network through a server on which you have an account, and that system knows who you are and can refuse to originate messages where the From header doesn't match the authenticated user. On Usenet this is ineffective; the network is too large. But in a small private network it's possible for node owners to collectively agree "none of us will allow our users to forge From headers."

Moderated groups theoretically work like the mailing lists you describe; articles get redirected to a moderation email address. Direct posts are only accepted by the moderator. The address can be (probably is) monitored by software rather than a person, and that software can enforce a policy like "reject posts by users on the Banned list, reject posts with no parent from users not on the Local Sovereign list, accept all other posts."

As I understand it, cancels and supersedes are also posts in their own right and go through the same moderation queue, so you can extend that policy with "accept cancels or supersedes by the same user as the original post, accept cancels or supersedes by users on the Moderator list, reject all other cancels or supersedes."

I think this works as long as the From header can be trusted -- and, as above, that can be arranged on a closed network (and only on a closed network).
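A rough sketch of what that moderation policy might look like in code; the list names and the original_poster_of helper are hypothetical, and it assumes the From header can be trusted (i.e. a closed network):

# A sketch of the policy described above, with made-up list names.
BANNED = set()            # users whose posts are rejected outright
LOCAL_SOVEREIGNS = set()  # users allowed to start new top-level threads
MODERATORS = set()        # users allowed to cancel or supersede anything

def accept(article, original_poster_of):
    # 'article' maps header names to values; 'original_poster_of' is a
    # hypothetical helper that looks up the From of the targeted article.
    sender = article["From"]                     # trustworthy only on a closed network
    if sender in BANNED:
        return False
    control = article.get("Control", "")
    if "Supersedes" in article or control.startswith("cancel"):
        target = article.get("Supersedes") or control.split()[-1]
        return sender in MODERATORS or sender == original_poster_of(target)
    if "References" not in article:              # no parent: a top-level post
        return sender in LOCAL_SOVEREIGNS
    return True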

I probably haven't covered all bases on this; how would you sneak a forged message through such a setup?

In a mailing list or an NNTP group, once clients got their copies of a post, there's no way for a mod to force them to forget it if they don't want to.

I consider that a feature, not a bug, but I think I'm misunderstanding you here; no system that permits local caching can prevent clients from keeping old versions of posts around. And web pages are certainly cached (and archived). So I don't think you mean what it sounds like you mean.

Any unified system probably needs to at least be capable of supporting that

It also has to have clear advantages over the default of just having a browser with multiple tabs open.

The auth problem as I see it boils down to "how can user X with an account on Less Wrong post to e.g. SSC without needing to create a separate account, while still giving SSC's owner the capability to reliably moderate or ban them."

That's an old problem. Google and Facebook would love to see their accounts be used to solve this problem and they provide tools for that (please ignore the small matter of signing with blood at the end of this long document which mentions eternity and souls...). There is OpenID which, as far as I know, never got sufficiently popular. Disqus is another way of solving the same problem.

I think this problem is hard.

most extant web forum, blogging, etc software is terrible for discussions of any nontrivial size.

That's a rather strong statement which smells of the nirvana fallacy and doesn't seem to be shared by most.

I think this problem is hard.

It's hard to solve better than it's been solved to date. But I think the existing solution (as described in my other reply) is good enough, if everyone adopts it in a more or less compatible fashion.

That's a rather strong statement which smells of the nirvana fallacy and doesn't seem to be shared by most.

FWIW I completely agree with that statement - as long as it says "most" and not "nearly all".

It would make messages mutable (or perhaps expose an immutable history, but make the 'current' reference mutable, like git does)

As an aside, git is about as good a fit as NNTP (which is to say, neither is really all that good in my opinion).

Git has immutable messages, but it also has mutable references (branches) for edits, and the references can be deleted for retractions. It has a tree structure for comments. It has pseudonymous authentication (sign your commits). It has plenty of room for data and metadata (e.g. specify a standard mapping filenames to headers). It can run over HTTP and has existing servers and clients including Javascript ones. It can be deployed in a centralized model (everyone pushes to the same server) but others can mirror your server using the same protocol, and there are RSS and email gateways available. Messages (commits) have globally unique IDs, allowing for incoming links. It makes your server state trivial to backup and to mirror. I could go on.

In fact, someone has already thought of this and wrote a similar system, called GitRap! (I didn't know about it before I checked just now.) It doesn't do exactly what I described, and it's tied to github right now, but you can view it as a POC.

To be clear: I am 95% not serious about this proposal. Solving the vastly simpler centralized problem is probably better.

I think that's a terrible idea and it is awesome that it exists. :-P

I think you are making the argument for what was known as "the semantic web" -- the term seems to have fallen into disuse, though.

I also think that my browser is a client. It's not a client for structured raw information, though, because there is no server which feeds it that (a client is just one half of a client-server pair, after all. A server-less client is not of much use). My browser is a client for web pages which used to mean mostly HTML and nowadays mean whatever JS can conjure.

By the way, where does RSS fit into your picture of the world?

I use RSS all the time, mostly via Firefox's subscribe-to-page feature. I've considered looking for a native-client feed reader, but my understanding is that most sites don't provide a full-text feed, which defeats the point.

I dislike that it's based on XML, mostly because, even more so than JSON, XML is actively hostile to humans. It's no less useful for that, though.

So far as I know it doesn't handle reply chains at all, making it a sub-par fit for content that spawns discussion. I may be wrong about that. I still use it as the best available method for e.g. keeping up with LW.


But consider the following problem: Find and display all comments by me that are children of this post, and only those comments, using only browser UI elements, i.e. not the LW-specific page widgets. You cannot -- and I'd be pretty surprised if you could make a browser extension that could do it without resorting to the API, skipping the previous elements in the chain above. For that matter, if you can do it with the existing page widgets, I'd love to know how.

This is actually trivial (but it breaks readily if LW changes its stylesheet):

from lxml import html

# Depends on LW's current stylesheet; class names may change.
root = html.parse("http://lesswrong.com/lw/njn/the_web_browser_is_not_your_client_but_you_dont/").getroot()
for meta in root.cssselect(".comment-meta"):
    authors = meta.cssselect("span.author")
    if authors and authors[0].text_content() == "Error":
        print(meta.text_content())

Kuro5hin's corpse finally disintegrated and there is some interesting discussion on HN about online forums and their evolution.

But consider the following problem: Find and display all comments by me that are children of this post, and only those comments, using only browser UI elements, i.e. not the LW-specific page widgets. You cannot -- and I'd be pretty surprised if you could make a browser extension that could do it without resorting to the API, skipping the previous elements in the chain above. For that matter, if you can do it with the existing page widgets, I'd love to know how.

If you mean parse the document object model for your comments without using an external API, it would probably take me about a day, because I'm rusty with WatiN (the tool I used to use for web scraping when that was my job a couple years ago). About four hours of that would be setting up an environment. If I was up to speed, maybe a couple hours to work out the script. Not even close to hard compared to the crap I used to have to scrape. And I'm definitely not the best web scraper; I'm a non-amateur novice, basically. The basic process is this: anchor to a certain node type that is the child of another node with certain attributes and properties, then search all the matching nodes for your user name, then extract the content of some child nodes of all the matched nodes that contain your post. (A rough sketch follows after the tool links below.)

WatiN: http://watin.org/

Selenium: http://www.seleniumhq.org/

These are the most popular tools in the Microsoft ecosystem.
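Translated into Selenium's Python bindings, the process might look roughly like this; the selectors are guesses at LW's markup rather than anything verified:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Illustrative only: the selector names (.comment, .author, .comment-body) are guesses.
driver = webdriver.Firefox()
driver.get("http://lesswrong.com/lw/njn/the_web_browser_is_not_your_client_but_you_dont/")
for comment in driver.find_elements(By.CSS_SELECTOR, "div.comment"):
    authors = comment.find_elements(By.CSS_SELECTOR, "span.author")
    if authors and authors[0].text == "Error":
        print(comment.find_element(By.CSS_SELECTOR, ".comment-body").text)
driver.quit()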

As someone who has the ability to control how content is displayed to me (tip: hit F12 in Google Chrome), I disagree with the statement that a web browser is not a client. It is absolutely a client, and if I were sufficiently motivated I could view this page in any number of ways. So can you. Easy examples you can do with no knowledge are to disable the CSS, disable JS, etc.

Sure, and if I want a karma histogram of all of my posts I can scrape my user page and get them. But that requires moving a huge amount of data from the server to me to answer a fairly simple question, which we could have computed on the server and then moved to me more cheaply.

There's no extra load on the server; you're just parsing what the page already had to send you. If your goal is just to see the web page rather than to collect data, the solution is different but still feasible.

What you can do is create a simple browser plugin that injects jQuery into the page to get all the comments by a given name. I'll go into technical detail a bit: inject an extra copy of jQuery into the page (one whose code you know and control, in case lesswrong changes its own version of jQuery). Then use jQuery selectors to anchor to all your posts, using a technique similar to the one I described for the scraper. Then transform the page so that it consists of nothing but the anchored comments you acquired via jQuery.

You could make this a real addon where you push a button in the top right of your Chrome browser, type a username, and then see nothing but the posts by that user on a given page.

Same principle as Adblock Plus or other browser addons.

There's no extra load on the server; you're just parsing what the page already had to send you.

If I look at 200 comments pages, doesn't that require the server processing my request and sending me the comments page 200 times? Especially if telling it something like "give me 10 comments by user X after comment abc" means that it's running a SQL query that compares the comment id to abc.

I do agree that there are cool things you can do to manipulate comments on a page.

If I look at 200 comments pages, doesn't that require the server processing my request and sending me the comments page 200 times?

As for finding your comments regardless of the thread they are on, that is already a feature of Reddit's platform - click on your username, then click "comments" to get to the LW implementation of that feature.

Regardless, that isn't what you were describing earlier. It would not put extra load on the server to have jQuery transform this thread, which has all the comments, to show only your comments on the thread. It's a client-side task. That's what you originally said was not feasible.

All this talk has actually made me consider writing an addon that makes Slashdot look clean and in-line like LW, Reddit, Y Combinator, etc.

That's what you originally said was not feasible.

Are you confusing me with Error? What I said was inefficient was writing a scraper to build a karma histogram over every comment (well, I wrote "post") that I've ever written.

All this talk has actually made me consider writing an addon that makes Slashdot look clean and in-line like LW, Reddit, Y Combinator, etc.

I do think that'd be a cool tool to have (though I don't use Slashdot).

Upvoted for actually considering how it could be done. It does sort of answer the letter if not the spirit of what I had in mind.

I admit I didn't think it all the way through. If your goal isn't ultimately data collection, you would make a browser addon and use JavaScript injection (JavaScript being the frontend scripting language that renders web pages). I replied to another commenter with loose technical details, but you could create a browser addon where you push a button in the top right corner of your browser, type a username, and have it transform the page to show nothing but posts by that user, by leveraging the page's own frontend scripting.

So there's a user-friendly way to transform your browser's rendering without APIs, clunky web scrapers or excess server load. It's basically the same principle that adblockers work on.

Epistemic status: devil's advocate

The web browser is your client, because the display is the content.

Why did web forums, rather than closed NNTP networks, become the successor to Usenet? One possibility is that the new internet users weren't savvy enough to install a program without having CDs of it stacked on every available public surface. But another is that web sites, able to provide a look and feel appropriate to their community, plainly outcompeted networks of plaintext content. The advantages aren't necessarily just aesthetic; UI 'nudges' might guide users to pay attention to the same things at the same times, allowing a more coordinated and potentially more tailored set of discussion norms.

Notice that on mobile, users have rejected the dominance of the browser--in favor of less standardization and interoperability, via native apps that dispense with HTML.

Put another way, a web community does have a comparative advantage at designing a UI for itself, because UIs are not interchangeable.

But another is that web sites, able to provide a look and feel appropriate to their community, plainly outcompeted networks of plaintext content.

Ah, yes, porn as the engine of technology.

The web had pictures, "networks of plaintext content" did not. Case closed.

Objection: I'm pretty sure Usenet had a colossal amount of porn, at least by the standards of the day. Maybe even still the case. I know its most common use today is for binaries, and I assume that most of that is porn.

I'm pretty sure Usenet had a colossal amount of porn

Of course it had. But, compared to the web, it was (1) less convenient to get; and (2) separated from the textual content. Think about the difference between a web page and a set of files sitting in a directory.

I'm not sure I'm getting out of that comparison what you meant to put into it. I find the set of files in a directory a heck of a lot more convenient.

I'll often find myself snarkily responding to feature requests with "you know, someone wrote something that does that 20 years ago, but no one uses it."

You and me both, buddy.

But you forgot "Now get offa my lawn!"

Use a standardized API that properly models your data and its usage, and whaddya know, we already have one in NNTP. Let people put whatever clients they want for presentation on top of that.

Sounds about right to me.

Thinking about it, I doubt that NNTP would have modeled edits or karma. I think posts were write once. And looking around a little, I've seen Reddit called Usenet 2.0. Maybe they've made things better. They've probably extended the discussion model beyond what NNTP had.

And as a practical matter, probably the shortest distance to a new client for LW would be to expose the reddit API service, then use existing reddit clients that can be pointed to whatever reddit server you want.

Could LW expose the reddit API as a web service?
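If it did, even reddit's convention of appending .json to listing URLs would go a long way for simple clients. A hedged sketch, assuming (hypothetically) that LW served reddit-style JSON listings at a path like the one below:

import json
import urllib.request

# Hypothetical endpoint, modeled on reddit's ".json" listing convention.
url = "http://lesswrong.com/user/Error/comments/.json"
with urllib.request.urlopen(url) as resp:
    listing = json.load(resp)

# Reddit listings wrap each comment in {"kind": "t1", "data": {...}}.
for child in listing["data"]["children"]:
    print(child["data"]["body"][:72])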

I think posts were write once.

The netnews standards allow cancellation and superseding of posts. These were sometimes used by posters to retract or edit their posts. They were often used for spam control and moderation. Cancels and supersedes were occasionally also used for hamhanded attempts at censorship ... by entities clueless enough to think they could get away with it.

Cancels and supersedes are themselves posts, and don't travel through the network any faster than the original post. And as sites can restrict whom they'll accept posts from, they can also restrict whom they'll accept or process cancel or supersede messages from.
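For the curious: a supersede is just an ordinary article carrying a Supersedes header that names the post it replaces. A minimal sketch using Python's standard-library nntplib (present through Python 3.12), with a placeholder server and message-ID:

import nntplib
from email.message import EmailMessage

# Build the replacement article; Supersedes names the post being edited.
msg = EmailMessage()
msg["From"] = "error@example.invalid"
msg["Newsgroups"] = "misc.test"
msg["Subject"] = "Edited post (was: Original post)"
msg["Supersedes"] = "<original-article-id@example.invalid>"  # placeholder message-ID
msg.set_content("This text replaces the superseded article.")

# Placeholder server; real servers may refuse supersedes from other posters.
with nntplib.NNTP("news.example.com") as server:
    server.post(msg.as_bytes())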

I didn't know there were supersedes in NNTP. Cool.

Super pedantic nitpick: the netnews medium predates NNTP.

Use a standardized API that properly models your data and its usage, and whaddya know, we already have one in NNTP.

You didn't even ask about my data and its usage, and you are already sure NNTP is the answer?

I intended that as a succinct summary of the article as a proposition for action.

In addition to Brotherzed's solution to this:

But consider the following problem: Find and display all comments by me that are children of this post, and only those comments, using only browser UI elements, i.e. not the LW-specific page widgets. You cannot -- and I'd be pretty surprised if you could make a browser extension that could do it without resorting to the API, skipping the previous elements in the chain above. For that matter, if you can do it with the existing page widgets, I'd love to know how.

It would require a relatively simple XPath/XSLT-based browser extension. I had the XPath expression written, but removed it because it could be used for evil. (I feel mentioning the possibility is safe because the expression is sufficiently ugly that only those who would already think of it, or those devoted enough to solve the problem anyway by whatever route, are going to be able to write it.)

I'm having trouble parsing your purpose. What's the objective here? Are we looking at ways to include non-LW content in LW?

Do an HTTP GET, then run some simple XSLT on the response. For Slate Star Codex, <xsl:variable name="body" select="div[@class='pjgm-postcontent']"/>, then do whatever it is you want to do in the for-each.

(I expect my XSLT will get mauled, so edits may be required.)
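Along the same lines, here is a rough Python sketch of that GET-then-XSLT step using lxml, which can apply a stylesheet directly to a parsed page; the pjgm-postcontent selector comes from the comment above, and everything else is illustrative:

import urllib.request
from lxml import etree, html

# Stylesheet: copy out every div carrying SSC's post-content class.
xslt = etree.XML(b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <xsl:for-each select="//div[@class='pjgm-postcontent']">
      <xsl:copy-of select="."/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
""")

# HTTP GET the page, parse it leniently as HTML, then run the transform.
with urllib.request.urlopen("http://slatestarcodex.com/") as resp:
    page = html.parse(resp)
result = etree.XSLT(xslt)(page)
print(str(result))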

Sure, but what's the drawback to simply adding an API (that allows other clients) on top of existing website infrastructure?

Edit: I had this page open for some time and hadn't refreshed, so sorry for the duplication with buybuydandavis's comment. Haha, UI problems.