See some earlier discussion on Facebook.


I've been doing more writing recently and have been heavily leaning on LLMs to help. This has been useful, but it seems clear that the models' underlying abilities are far outpacing the frontends and tools that use them.

Here's a quick list of ideas that I'd like to see in new writing platforms. I came up with these quickly, so I'm sure there are many other great ideas.

  1. Predictive Clustering: Whenever your writing is predictable (for example, when responding to something or after the first few sentences of a new post), an LLM could roughly predict the points you might make. It could cluster these points, letting you point and click on the relevant cluster. For instance, in a political piece, you might first click, "I [Agree | Disagree | Raise Interesting Other Point | Joke]." You then select "Raise Interesting Other Point," and it presents you with 5-20 points you might want to raise, along with a text box to add your own. Once you add your point, you can choose a length. (A rough sketch of this follows the list.)
  2. Facial Feedback: A video camera could monitor your facial expressions. If you're grimacing or frowning, it would automatically realize something is wrong and suggest ways to improve the text. This feature would be optional, of course.
  3. Parallel Writing: Drafting should happen in parallel. For long posts, a simple orchestrator could split the post into sections and have a separate LLM call draft each one (see the sketch after this list).
  4. Incremental Edits: Edits to long texts, like "Make it all 20% more polite," shouldn't require rewriting the entire thing from scratch. We really need diffs (see the sketch below).
  5. Multiple Drafts: There should be several attempts at each piece of writing for you to choose between. Maybe start with a few widely different styles, then narrow that down.
  6. Attribute Sliders: Similar to the buttons idea, there could be custom sliders for attributes based on the text in question. For instance, when writing a fiction scene, the LLM might be uncertain how smart you want one of the characters to be, so it brings up a slider for "how smart is character X in this scene?". As you move the slider, the text changes.
  7. Sidebar Suggestions: In any long document, there should be a sidebar with a long list of potential changes. The exact interface could vary, but there are many options.
  8. Research Agents: We could use AI agents that gradually research and explain content. Such a system would keep two representations of the information: 1) the text (or versions of the text) that the user wants to present, and 2) saved material from web research or brainstorming that is useful for writing 1). A simple prioritization system could then identify the most useful next improvements to a research post.
  9. Response Predictions: As you write your post, you should see predictions of the responses it will receive. What typical mistakes will readers make? What will sentiment and comments look like across different demographics?
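
As a rough illustration of how small the first idea could be to prototype: the sketch below generates candidate points with a chat model, embeds them, and clusters them with k-means. It assumes the OpenAI Python SDK plus scikit-learn; the model names, prompt wording, and cluster count are placeholder choices rather than recommendations.

```python
# Sketch of idea 1: predict candidate points, then cluster them for point-and-click selection.
# Assumes the OpenAI Python SDK (`pip install openai scikit-learn numpy`); any chat +
# embedding API would work the same way. Model names and prompts are placeholders.
import numpy as np
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()

def candidate_points(context: str, n: int = 15) -> list[str]:
    """Ask the model for short candidate points the writer might make next."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"The user is drafting a reply to this text:\n\n{context}\n\n"
                       f"List {n} distinct points they might want to make, one per line.",
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip().lstrip("-•0123456789. ") for line in lines if line.strip()]

def cluster_points(points: list[str], k: int = 4) -> dict[int, list[str]]:
    """Embed the candidate points and group them into k clickable clusters."""
    emb = client.embeddings.create(model="text-embedding-3-small", input=points)
    X = np.array([d.embedding for d in emb.data])
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
    clusters: dict[int, list[str]] = {}
    for point, label in zip(points, labels):
        clusters.setdefault(int(label), []).append(point)
    return clusters

if __name__ == "__main__":
    points = candidate_points("Draft post arguing for more funding of AI-assisted writing tools.")
    for label, group in cluster_points(points).items():
        print(f"Cluster {label}: {group}")
```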

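The parallel-writing idea (3), and the multiple-drafts idea (5) along with it, is mostly plumbing. Here's a minimal sketch using asyncio, assuming the OpenAI Python SDK's async client; the outline, styles, and model name are placeholders.

```python
# Sketch of ideas 3 and 5: draft an outline's sections in parallel, in several styles,
# then let the writer pick. Assumes the OpenAI Python SDK's async client.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def draft_section(title: str, brief: str, style: str) -> str:
    """Draft one section; many of these run concurrently."""
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Write the '{title}' section of a blog post in a {style} style.\nBrief: {brief}",
        }],
    )
    return resp.choices[0].message.content

async def draft_post(outline: list[tuple[str, str]], style: str) -> str:
    # Idea 3: one LLM call per section, all in flight at once.
    sections = await asyncio.gather(*(draft_section(t, b, style) for t, b in outline))
    return "\n\n".join(sections)

async def main() -> None:
    outline = [
        ("Why writing tools lag the models", "frontends trail raw LLM ability"),
        ("Ideas for better writing platforms", "summarize the list above"),
    ]
    # Idea 5: also produce a few stylistically different full drafts, themselves in parallel.
    styles = ["terse and direct", "conversational"]
    drafts = await asyncio.gather(*(draft_post(outline, s) for s in styles))
    for style, draft in zip(styles, drafts):
        print(f"--- {style} ---\n{draft}\n")

if __name__ == "__main__":
    asyncio.run(main())
```
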
These ideas make me want to experiment with some prototypes. It seems very easy to make some neat demos here.
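
As one example of how small such a demo could be, the diff idea (4) could amount to asking the model for targeted find/replace edits rather than a full rewrite, applying them locally, and displaying a unified diff. A minimal sketch, again assuming the OpenAI Python SDK; the JSON edit format and "draft.txt" are placeholder conventions.

```python
# Sketch of idea 4: ask the model for targeted find/replace edits instead of a full
# rewrite, apply them locally, and show the result as a diff.
import difflib
import json
from openai import OpenAI

client = OpenAI()

def propose_edits(text: str, instruction: str) -> list[dict[str, str]]:
    """Return a list of {"find": ..., "replace": ...} edits for the instruction."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": "Edit the text below per the instruction, but do NOT rewrite it. "
                       'Return JSON: {"edits": [{"find": "<exact substring>", "replace": "<new text>"}]}.\n\n'
                       f"Instruction: {instruction}\n\nText:\n{text}",
        }],
    )
    return json.loads(resp.choices[0].message.content)["edits"]

def apply_edits(text: str, edits: list[dict[str, str]]) -> str:
    for edit in edits:
        # Replace only the first occurrence so edits stay local.
        text = text.replace(edit["find"], edit["replace"], 1)
    return text

if __name__ == "__main__":
    original = open("draft.txt").read()  # placeholder input file
    revised = apply_edits(original, propose_edits(original, "Make it all 20% more polite."))
    print("\n".join(difflib.unified_diff(original.splitlines(), revised.splitlines(), lineterm="")))
```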

At the same time, I personally still think that "compositional AI reasoning systems" are generally more exciting than "human-centric AI-assisted writing systems." Tools that focus too heavily on human-first workflows could quickly become obsolete.

However, these ideas do make me more optimistic for the future of writing. I think it could get a whole lot faster and better with a bit of R&D work and current LLMs.

I'd be excited to see more work in this area, particularly if it could be funded from outside the EA ecosystem. (This could make for some interesting startups or small businesses, for example).

Also, I'd be curious if others have good ideas! Do add them in the comments.

4 comments:

Being Wrong on the Internet: The LLM generates a flawed forum-style comment, such that the thing you've been wanting to write is a knockdown response to this comment, and you can get a "someone"-is-wrong-on-the-internet drive to make the points you wanted to make. You can adjust how thoughtful/annoying/etc. the wrong comment is.

Target Audience Personas: You specify the target audience that your writing is aimed at, or a few different target audiences. The LLM takes on the persona of a member of that audience and engages with what you've written, with more explicit explanation of how that persona is reacting and why than most actual humans would give. The structure could be like comments on Google Docs.
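
A rough sketch of what this could look like, assuming the OpenAI Python SDK; the personas, prompt wording, and "draft.txt" are just placeholders.

```python
# Sketch of the persona idea: have the model react to each paragraph as a named
# audience member, returning doc-comment-style notes.
from openai import OpenAI

client = OpenAI()

PERSONAS = [
    "a skeptical ML engineer who has seen too many AI writing demos",
    "a non-technical reader encountering these ideas for the first time",
]

def persona_comments(draft: str, persona: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"You are {persona}. For each paragraph of the draft below, write a short "
                       "comment saying how you react and why, as if commenting in a shared doc.\n\n"
                       + draft,
        }],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    draft = open("draft.txt").read()  # placeholder input file
    for persona in PERSONAS:
        print(f"=== {persona} ===\n{persona_comments(draft, persona)}\n")
```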

Heat Maps: Color the text with a heat map of how interested the LLM expects the reader to be at each point in the text, or how confused, how angry, how amused, how disagreeing, how much they're learning, how memorable it is, etc. Could be associated with specific target audiences.
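
A minimal sketch of the heat-map idea, assuming the OpenAI Python SDK; the 0-100 interest scale, sentence splitting, coloring scheme, and "draft.txt" are arbitrary placeholder choices.

```python
# Sketch of the heat-map idea: score each sentence for predicted reader interest
# and render it as colored HTML spans.
import html
import json
import re
from openai import OpenAI

client = OpenAI()

def interest_scores(sentences: list[str], audience: str) -> list[int]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": f"For a reader who is {audience}, rate how interesting each sentence is "
                       '(0-100). Return JSON: {"scores": [..]} with one score per sentence.\n\n'
                       + json.dumps(sentences),
        }],
    )
    return json.loads(resp.choices[0].message.content)["scores"]

def heat_map_html(text: str, audience: str) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    spans = []
    # zip truncates if the model returns the wrong number of scores; fine for a sketch.
    for sentence, score in zip(sentences, interest_scores(sentences, audience)):
        # Redder background = more predicted interest.
        spans.append(f'<span style="background: rgba(255,0,0,{score / 100:.2f})">'
                     f'{html.escape(sentence)}</span>')
    return " ".join(spans)

if __name__ == "__main__":
    print(heat_map_html(open("draft.txt").read(), "a busy startup founder"))
```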

I'd mention my Nenex. A good phrase here is "Photoshop for text": https://interconnected.org/home/2024/05/31/camera

It would be a good idea to just look at what existing LLM writing services like Sudowrite or NovelAI already offer. ChatGPT/Claude-3 may be the most convenient & powerful LLMs to use, but they are obviously not going to be the best for writing: their interfaces are simple and not the focus of their companies, and the RLHF/assistant-tuning devastates their creative writing ability.

Predictive Clustering: Whenever your writing is predictable (for example, when responding to something or after the first few sentences of a new post), an LLM could vaguely predict the points you might make. It could cluster these points, allowing you to point and click on the relevant cluster. For instance, in a political piece, you might first click, "I [Agree | Disagree | Raise Interesting Other Point | Joke]." You then select "Raise interesting point," and it presents you with 5-20 points you might want to raise, along with a text box to add your own. Once you add your point, you can choose a length.

This seems like something that is very likely to come into existence in the near future, but that I hope does not. Not only does it rob people of the incredibly useful practice of crafting their own arguments, but I also think that putting better words in the user's mouth than they planned to say can influence the way the user actually thinks.

I also use LLMs (Claude, mostly) to help with writing and there are so many things that I find frustrating about the UX. Having to constantly copy/paste things in, the lack of memory across instances, the inability to easily parallelize generation, etc.

I'm interested in prototyping a few of these features and potentially launching a product around this — is that something you'd want to collaborate on?