LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
TAP for fighting LLM-induced brain atrophy:
"send LLM query" ---> "open up a thinking doc and think on purpose."
What a thinking doc looks like varies by person. Also, if you are sufficiently good at thinking, just "think on purpose" is maybe fine, but I recommend having a clear sense of what it means to think on purpose and whether you are actually doing it.
I think having a doc is useful because it makes it easier to establish a context switch that supports thinking.
For me, "think on purpose" means:
I do plan to post in a couple of other places, but I think I do need people with both good artistic taste and good familiarity with LessWrong. (I'm planning to ask on Bountied Rationality.)
For this role to actually save us work, they need to not require much onboarding. We could hypothetically train someone less familiar with LessWrong, but I think that would take more time than it's worth. (We need someone who can understand the existing LessWrong aesthetic, what we're going for with that aesthetic, and when/how/why it'd be appropriate to deviate from it. Most of the work involves figuring out what broad choices would be appropriate for a given piece, so we need to be able to give pretty vague instructions and have them figure it out from context.)
(The particular project I'm looking to hire for is designing cover art and ~6 illustrations for a Sequence Highlights book, which involves figuring out an overall unifying motif for the book that is somewhat-distinct from the usual LessWrong vibe but compatible with it.)
This didn't feel particularly informative/useful to me. What do you think you learned (or I should have learned) from the chat transcript?
I lean towards not using models directly as "conversation participants" (which feels too likely to spiral out of control), and instead doing things like having whitelisted, specific popups that the model decides when to trigger (rough sketch at the bottom of this comment).
I'm so torn about "for like 75% or maybe 99% of humans, the chatbot saying 'are you sure you want to say that?' is probably legit an improvement. But... it just feels so slippery-slope-Orwellian to me." (In particular, if you build that feature, you need to be confident not only that the current leadership of your company won't abuse it, but that all future leadership won't either, and that the AI company you're renting models from won't enshittify in a way you don't notice.)
(I am saying this as, like, a forum-maintainer who is actually taking the idea seriously and trying to figure out how to get the good things from the idea, not just randomly dunking on it. Interested in more variants or takes)
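To gesture at what I mean by whitelisted popups, here's a minimal sketch. Everything here is hypothetical (the popup IDs, `classifyDraft`, `callModel`); it's not an actual LessWrong feature, just one possible shape of the idea: the model only ever selects from pre-approved popup IDs, and never writes text that gets shown to the user.

```typescript
// Minimal sketch of the "whitelisted popups" idea (hypothetical names throughout;
// not actual LessWrong code). The model never writes user-facing text; it can
// only pick from a fixed set of popup IDs, or nothing.

type PopupId = "slow-down-prompt" | "tone-check-prompt" | "none";

const POPUPS: Record<Exclude<PopupId, "none">, string> = {
  "slow-down-prompt": "You've posted several comments in quick succession. Take a breather?",
  "tone-check-prompt": "Are you sure you want to say that? (Pre-written copy, not model output.)",
};

// `callModel` is a hypothetical wrapper around whatever model API the site uses.
// The function returns only a PopupId, so anything off-whitelist is ignored.
async function classifyDraft(
  draft: string,
  callModel: (prompt: string) => Promise<string>
): Promise<PopupId> {
  const raw = await callModel(
    `Reply with exactly one of: slow-down-prompt, tone-check-prompt, none.\n\nDraft comment:\n${draft}`
  );
  const cleaned = raw.trim() as PopupId;
  return cleaned in POPUPS ? cleaned : "none";
}

async function maybeShowPopup(draft: string, callModel: (prompt: string) => Promise<string>) {
  const id = await classifyDraft(draft, callModel);
  if (id !== "none") {
    console.log(POPUPS[id]); // in a real UI this would render the pre-approved popup
  }
}
```

The point of the design is that a misbehaving model can at worst trigger the wrong pre-approved popup; it can't inject its own wording into the conversation.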
Up for sharing your system prompt?
This advice is sorta reasonable, but I think it's just not the right next step for people in (my guess of) the OP's situation.
LessWrong mods get ~20 people per day who describe themselves as "collaborating with AIs, finding important patterns worth exploring," whose ideas don't really make much sense. AIs have infinite patience to follow along and encourage you down basically any path you come up with, and I think the right move if you've been in that situation is to just get some distance from LLMs, not to keep doing the same thing with some adjustments.
Ah, yeah, my eyes kinda glossed over the footnote. I agree that, all else equal, it's good to establish that we do ever follow up on our deals, and I'm theoretically fine with donating $100 to AMF. I'm not sure I'd be comfortable donating to some other charity that I don't know and that is plausibly part of a weird long game.
I think this is one of the standard rebuttals to this position: GPTs are Predictors, not Imitators
I'm often in situations where either
a) I do basically expect the LLMs to get the right answer, and for it to be easily checkable (like, I do in fact have a lot of boilerplate code to write),
and/or b) my current task is sufficiently tree-structured that it's pretty cheap to spin up an LLM to tackle one random subproblem while I mostly focus on a different thing (see the sketch after this comment). The speedup from this is pretty noticeable. Sometimes the subproblem is something I expect it to get right, sometimes I don't really expect it to, but there's a chance it will, and meanwhile I have something else to do.
(During a recent project, I had 3 different copies of my git repo open, and spent ~half my time managing 3 different "junior dev LLM employees")
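As a toy illustration of pattern (b), kicking an LLM off on a subproblem and only checking its answer later: `queryLLM` and the task strings below are hypothetical stand-ins for whatever API or agent tool you actually use.

```typescript
// Toy illustration of the pattern in (b): fire off an LLM on a subproblem,
// keep doing your own work, and only check the result when you're ready.
// `queryLLM` is a hypothetical stub standing in for a real model or agent call.

async function queryLLM(task: string): Promise<string> {
  // Placeholder: in practice this would call an actual model/agent.
  return `draft solution for: ${task}`;
}

function doTheHardPartMyself() {
  console.log("working on the part the LLM probably won't get right");
}

async function main() {
  // Kick off the subproblem without awaiting it yet.
  const subproblem = queryLLM("write the boilerplate parser for the config format");

  // ...meanwhile, focus on the work that actually needs my attention...
  doTheHardPartMyself();

  // Now check the LLM's attempt; it might be wrong, but checking is cheap.
  const draft = await subproblem;
  console.log(draft);
}

main();
```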
I'm also just trying to specialize a bit in "be an early LLM adopter/pioneer who tries to anticipate what more powerful LLM+human pairs will be able to do in 6 months. Try to figure out what cognitive habits are adaptive for that world, so that I can distill out tips/tools for others as capabilities rise."