Judging by the low-quality articles that seem to be appearing with increasing regularity, and as mentioned in a few recent posts, AI-generated posts are likely to be a permanent feature of LW (and of most online forums, I expect).  I wonder if we should focus on harm reduction (or actual value creation, in some cases) rather than trying to disallow something that people clearly want to do.

I wonder how feasible it would be to have a LessWrong-specific workflow for using any of the major LLM platforms to assist with (but not fully write) a LW question, a LW summary-of-research post, or a LW rationalist-exploration-of-a-question post (and/or others).  This could simply be a help page with sample prompts for "how to generate and use a summary paragraph", "how to generate and modify an outline/thesis sketch", and "how to use the summary and outline to flesh out your ideas on a subtopic".

I've played with these techniques, but I tend to do it all in my captive meatware LLM rather than using an external one, so I don't have a starter example.  Do any of you?
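To gesture at the kind of thing I have in mind (a rough mechanical sketch rather than a real starter example, assuming the OpenAI Python client, with purely illustrative prompt wording and model name):

```python
# Rough sketch of a "generate and use a summary paragraph" helper.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summary_paragraph(draft: str) -> str:
    """Produce a one-paragraph summary for the author to edit, not to publish verbatim."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You help an author condense their own draft. Do not add new claims."},
            {"role": "user",
             "content": "Summarize the following draft in one paragraph, "
                        "preserving the author's framing and hedges:\n\n" + draft},
        ],
    )
    return response.choices[0].message.content
```

Returning a paragraph the author is expected to edit rather than publish verbatim is the point: assist with, not fully write.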

3 Answers

habryka


We are currently in a sprint where we are experimenting with integrating LLM systems directly into LW in various ways.

A thing I've been thinking about is building tools on LW so that it's easy to embed LLM-generated content, but in a way where any reader can see the history of how that content was generated (i.e. seeing whatever prompt or conversational history led to that output). My hope would be that instead of people introducing lots of LLM slop and LLM-style errors into their reasoning, the experience of the threads where people use LLMs becomes more one of "collectively looking at the output of the LLM and being curious about why it gave that answer", which I feel has better epistemic and discourse effects.
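As a very rough sketch (not the actual implementation, just to illustrate the shape of it), the embedded content would carry its full generation transcript along with it, and the text rendered inline would simply be the final model turn:

```python
# Sketch only: one way an embedded LLM block could carry its generation history,
# so any reader can expand it and see the prompts that produced the output.
from dataclasses import dataclass, field


@dataclass
class LLMTurn:
    role: str      # "system", "user", or "assistant"
    content: str


@dataclass
class EmbeddedLLMBlock:
    model: str                                                # e.g. "gpt-4o"; purely illustrative
    transcript: list[LLMTurn] = field(default_factory=list)   # full prompt/response history

    @property
    def output(self) -> str:
        """The text shown inline is the last assistant turn; the rest is viewable history."""
        assistant_turns = [t for t in self.transcript if t.role == "assistant"]
        return assistant_turns[-1].content if assistant_turns else ""
```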

We've also been working with base models and completion models where the aim is to get LLM output that really sounds like you and picks up on your reasoning patterns, and also more broadly understands what kind of writing we want on LW.

This is all in relatively early stages but we are thinking pretty actively about it.

That's awesome.  One of my worries about this (which applies to most harm-reduction programs) is that I'd rather have less current-quality-LLM-generated stuff on LW overall, and making it a first-class feature makes it seem like I want more of it.

Having a very transparent not-the-same-as-a-post mechanism solves this worry very well.

ChristianKl


I don't think there should be a help page that says "This is the official LW way to generate a summary paragraph", but at the same time I would appreciate individual users sharing knowledge about how they use LLMs for these tasks.

LLM capabilities differ and evolve quite fast, so a help page might be out of date pretty soon.

One example is how to deal with sources. I recently wanted to explore a question about California banning the use of hypnosis for medical purposes by non-licensed people. ChatGPT was able to give me actual links to the relevant portions of the legal code. 

Claude was not able to do that, and I think there's a good chance that the capability is quite recent in ChatGPT's history and comes with their push to build their own search engine.

As far as standard prompts go, I would expect something along the lines of "What are the most likely objections people on LessWrong are going to have to the following post I want to write, and what's the merit of those objections: '... My draft ...'" to be a prompt that would be good for most people to run before they publish a post.
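If someone wanted to run that kind of check routinely, a minimal sketch of the mechanics (assuming the OpenAI Python client; the model name is just an example, and as noted above, capabilities differ across models and change quickly):

```python
# Sketch of running a "likely objections" check against a draft before posting.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def likely_objections(draft: str) -> str:
    """Ask the model to anticipate LessWrong-style objections and assess their merit."""
    prompt = (
        "What are the most likely objections people on LessWrong are going to have "
        "to the following post I want to write, and what's the merit of those objections:\n\n"
        + draft
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```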

Richard_Kennaway


I’m not sure if we have formally written guidelines about the purpose of LW and what is appropriate to post here, but there certainly have been postings to that effect. Would such guidelines themselves be a suitable prompt?

In other words, make it easy for people to ask the bot to write what LW wants to see.

2 comments

Damn, I hate to think about this, but it is unavoidable anyway, isn't it?

From my perspective, LLM-generated texts vs. human-generated texts are kinda like spam e-mails vs. letters written on paper. It was possible to get a stupid and annoying letter on paper, but it usually happened rarely, because there was a time cost to writing one, and writing a letter takes more time than reading it. With spam e-mails, it is the other way round: sending the message takes a few minutes, and the collective time spent reading it (even if most people immediately decide to delete it) could be hours or more.

Emotionally, I would hate to spend my time reading an article, and then decide that I either don't like it or feel ambivalent about it, and maybe write a comment that tries to be helpful and explain why... and then find out that it took me 15 minutes to read the article, think about it, and write a response, but it only took the author 1 minute to generate it and post it.

But what other options are there? Should I stop reading insufficiently interesting articles after the first half page, and just downvote them and close without commenting? Yeah, that would be better for me, but it would make LW a worse experience for the new authors.

If we could trust that people honestly disclose in a standardized way at the beginning of the article (or even better, click a checkbox when posting it, and an icon would appear next to the title) that the article was written by an LLM, then I would be harsh to the LLM posts and lenient to the new authors. But this creates an obvious incentive to lie about using LLMs. (Possible solution: severe punishment for authors who use an LLM and fail to disclose it. But also: false positives.)

I like habryka's proposed solution, to integrate the LLMs with LW and publish the prompts. Preferably at the top of the page, so that I can read the prompt first and decide whether I want to read the article at all. For example, if the prompt already specifies the bottom line, I probably won't pay much attention to the arguments the LLM makes, because I know it is not even trying to paint an objective picture.

I can imagine legitimate uses of LLMs, for example if someone describes something in the abstract, and then uses an LLM to suggest specific examples (and then carefully reviews those examples). Articles with examples are usually better than abstract arguments, and sometimes the examples just don't come to my mind, or they are all similar to each other and the LLM might notice something different.

Maybe the parts written by LLM should be displayed using a different font (e.g. monospace)?

Dagon

I feel like more and more LLM use is inevitable, and at least some of it is no different from the age-old (as far as "old" applies to LessWrong and online forums) problem of filtering new users who are more enthusiastic than organized, and generate a lot of spew before really slowing down and writing FOR THIS AUDIENCE (or getting mad and going away, in the case of bad fit).  LLMs make it easy to increase that volume of low-value stuff.

I really want to enable the high-value uses of LLM, because I want more from many posts and I think LLMs CAN BE a good writing partner.  My mental model is that binary approaches ("identify LLM-generated content") are going to fail, because the incentives are wrong, but also because it discourages the good uses.

We have two main tactics to use against the problem of voluminous bad posts.
1) identification and filtering (downvoting and admin intervention).  This works today (though it's time-consuming and uncomfortable), and is likely to continue to work for quite some time.  I haven't really played with using an LLM to evaluate/summarize things I read and comment on, but I think I'm going to try it.  I wonder if I can get GPT to estimate how much effort went into a post...

2) assistance and encouragement of posters to think and write for this audience, and to engage with comments (which are sometimes very direct).  This COULD include recommendations for LLM critiques or assistance with organization or presentation, along with warnings that an LLM can't actually predict your ideas or your reason for posting - you have to prompt it well and concretely.

We need both of these, and I'm not sure the balance changes all that much in an LLM world.