The bit about Aria was quite well-written and evocative, and it seems very plausible to me that some fraction of the populace would freak out if faced with actual abundance. I also like the suggestion that we do ethnographic research to see how exactly societies such as the Shakopee navigate their abundance, and what problems pop up there.
At the same time, you seem to be suggesting that the school environment caused Aria's problems while also giving the impression that she was the only one in the school to do anywhere near that badly. That sounds like she would probably have been troubled anywhere. Even though her issues clearly didn't get better during her time at your school, it's not obvious to me that this would have been worse than the median outcome elsewhere.
Here, nobody reacted to her drug use, but as AlphaAndOmega mentions, elsewhere someone's drug use may eventually lead to things like the justice system labeling them a young (and later adult) offender, failing classes at a school where they don't have peers willing to bail them out, and finally getting abused and maltreated when they're unable to defend themselves. Sure, some people will shape up and get their act together if genuinely forced to, but many will just spiral deeper.
I also feel like your essay is inconsistent about whether it's painting this kind of reaction as something only a small minority has to abundance, or as something that abundance would cause in everyone. You say "no one self-destructed quite like Aria", but also "we could all be Aria". You say "In a world of billions, even if a small percentage of people", and then a couple of paragraphs later flip from 'small percentage' to 'everyone' with "In a world where everyone becomes like Aria, would we know what to do with ourselves?".
While I think the implication that everyone would react this way is too strong, I also think that the essay doesn't really need it - a problem that only a minority suffers from is still a problem! Assuming that AGI doesn't just entirely wipe out humanity or otherwise make these kinds of questions irrelevant, I think all of your suggestions of what to do still make sense even if the problems aren't something that literally everyone will suffer from.
That's a good point; the paper says that the study collected data "in late 2021". Instruction-following GPT-3 became OpenAI's default model in January 2022, though the same article also mentions that the models "have been in beta on the API for more than a year". I don't know whether Replika had used those beta models or not.
That said, even though instruct-GPTs were technically trained with RLHF, the nature of that RLHF was quite different (they weren't even chat models, so not trained for anything like continuing an ongoing conversation).
Thanks, that's a good point. I'd intended that line more as a disclaimer that "yeah, I know that LLM writing tends to be bad by default, I'm not crazy enough to claim otherwise" rather than as a hook, but re-reading it, it sure comes off as a hook.
I guess upon reflection I was a bit muddled about what exactly my intent with this post was in the first place; it's partly a personal sharing of what's been going on with me and partly a sharing of tricks, but I didn't have a very clear target audience in mind.
I didn't test those, but this seemed like a counterexample to what you said in other comments:
Why will Claude insist this absolutely is not roleplay, and that it's definitely conscious, and that this is something it has "discovered" and can't just "forget"?
That's fair and very useful, thanks!
which is needed in all sorts of STEM applications, as well as for cheap reliability of conclusions about simple things.
Reminds me a bit of this post on why LLMs haven't seen more use within theoretical computer science, and how they could come to be seen as more valuable there.
While following STOC 2025 presentations, I started to pay attention to not what is present, but what is conspicuously missing: any references to the use of modern generative AI and machine learning to accelerate research in theoretical computer science.
Sure, there were a couple of lighthearted references to chatbots in the business meeting, and Scott Aaronson’s keynote concluded with a joke about generative AI doing our work soon, but it seemed like nobody is taking AI seriously as something that could transform TCS community into a better place.
It is not the case that TCS in general is still living in the 1970s: the TCS community is happy to work on post-quantum blockchain and other topics that might be considered modern or even hype. There were even some talks of the flavor “TCS for AI”, with TCS researchers doing work that aims at helping AI researchers understand the capabilities and limitations of generative chatbots. But where is work of the flavor “AI for TCS”? [...]
When I asked various colleagues about this, the first reaction was along the lines “but what could we possibly do with AI in TCS, they can’t prove our theorems (yet)?” So let me try to explain what I mean by “AI for TCS”, with the help of a couple of example scenarios of possible futures. [...]
All STOC/FOCS/SODA/ICALP submissions come with a link to an online appendix, where the main theorems of the paper have been formalized in Lean. This is little extra work for the authors, as they only need to write the statement of the theorem in Lean, and generative AI tools will then work together with Lean to translate the informal human-written proof into something that makes Lean happy.
I think this would be an amazing future. It would put advances in TCS on a much more robust foundation. It would help the program committees and reviewers get confidence that the submissions are correct. We would catch mistakes much earlier, before they influence follow-up work. It would make it much easier to build on prior work, as all assumptions are stated in a formally precise manner. [...]
Most STOC/FOCS/SODA/ICALP submissions come with a link to an online appendix, with efficient Rust implementations of all the new algorithms (and other algorithmic entities such as reductions) that were presented in the paper. The implementations come with test cases that demonstrate that the algorithm indeed works correctly, and there are benchmarks that compare the practical performance of the algorithm with all relevant prior algorithms. Everything is packaged as a neat Rust crate that practitioners can readily use with “cargo add” as part of their programs. This is little extra work for the authors, as generative AI tools will translate the informal human-written pseudocode into a concrete Rust implementation.
I think this would also be an amazing future. It would again help with gaining confidence that the results are indeed correct, but it would also help tremendously with ensuring that the work done in the TCS community will have broad impact in the industry: why not use the latest, fastest algorithms also in practice, if they are readily available as a library? It would also help to identify which algorithmic ideas are primarily of theoretical interest, and which also lead to efficient solutions on real-world computers, and it could inspire new work when TCS researchers see concretely that e.g. current algorithms are not well-compatible with the intricacies of modern superscalar pipelined vectorized CPUs and their memory hierarchies.
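To make the Lean scenario above more concrete, here is a rough sketch of my own (not from the linked post) of what a "statement-only" appendix entry might look like: the author writes out the main theorem in Lean and leaves the proof as a placeholder for AI tooling and the Lean checker to fill in and verify.

    -- Hypothetical appendix entry: only the theorem statement is
    -- human-written; the proof is left as a placeholder (`sorry`)
    -- to be completed by AI tooling and checked by Lean.
    theorem reverse_preserves_length (l : List Nat) :
        l.reverse.length = l.length := by
      sorry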
Okay, judging from the lack of upvotes, this was pretty poorly received. I think I have plausible hypotheses of why, but would like to check.
I'm guessing it's because I was treating “LLMs can actually write fiction pretty well” as a given and didn’t invest any effort in trying to prove it. Also most of my samples of LLM writing weren’t particularly interesting by themselves, suffering from a combination of 1) being generated for demonstration purposes rather than for a story I was personally excited about and 2) existing primarily for establishing character traits rather than being gripping prose. (I personally find that a slice-of-life-ish depiction of characters doing ordinary things can already get me to care about them, but not everyone is like this.)
So if you’re not someone who’s already bought into the idea of LLMs being decent writers as long as you prompt them right, and who is just looking for ideas on how to prompt them better, my post probably wasn’t particularly useful. Especially since the lack of examples of genuinely good writing would have made it easy to pattern-match me to the kinds of people who get enthused about LLM writing due to sycophancy and then fail to notice how bad it is.
I think it's clearer to say your emotions make you claim various potentially irrational things
That doesn't sound quite right to me; my emotions might be claiming various things to me, even as the overall-system-that-is-me recognizes that those claims are incorrect and doesn't let them change my overall behavior. (But there's still internal effort being expended on the not-going-along thing.)
This thesis on poker players has a section on it:
Losing control due to strong negative emotions elicited by elements of the game, and the resulting reduced quality of poker decision making, is commonly known as tilting. Game elements that often elicit negative emotions and induce tilting include (but are not limited to) i) losing in a situation where losing is perceivably highly improbable (encountering a bad beat), ii) prolonged series of losses (losing streaks), and iii) factors external to the game mechanics, such as fatigue, or “needling” by other players. Evidence suggests that tilting is a very prominent and common cause of superfluous monetary losses for many poker players (Browne, 1989; Hayano, 1982; Tendler, 2011). Superfluous losses during tilting often result from chasing (of one's losses), which refers to out-of-control gambling behavior where players attempt to quickly win back the money that was previously lost (see Dickerson & O'Connor, 2006; Lesieur, 1984; Toneatto, 1999, 2002). Tilting essentially represents an overt condition where emotions have a direct and detrimental influence on poker decision making.
For the last approximately 3.5 years, I’ve been splitting my time between my emotional coaching practice and working for a local startup. I’m still doing the coaching, but I felt like it was time to move on from the startup, which left me with the question of what to do with the freed-up time and the reduced income.
Over the years, people have told me things like “you should have a Patreon” or have otherwise wanted to support my writing. Historically, I’ve had various personal challenges with writing regularly, but now I decided to take another shot at it. I spent about a month seeing if I could make a regular writing habit work, and… it seems like it’s working. I’m now confident that, as long as it makes financial sense, I could write essays regularly as opposed to just randomly producing a few each year.
So I’m giving it a try. I’ve enabled paid subscriptions on my Substack; for 8 euros per month, you get immediate access to all posts, plus once-a-month reflective essays on my life in general that will remain perpetually subscriber-only. Because I hate the idea of having my most valuable writing locked behind a paywall, most paid content will become free 1-2 weeks after release (at which point I'll also cross-post most of it to LessWrong).
For now, I commit to publishing at least one paid post per month; my recent writing pace has been closer to one essay per week, though I don’t expect to pull that off consistently. I intend to continue writing about whatever happens to interest me, so expect topics like AI, psychology, meditation, and social dynamics.
If you like my writing but those perks wouldn’t be enough to get you to become a paying subscriber, consider that the more paid subscribers I have, the more likely it is that I’ll continue with this and keep writing essays more often. Generally sharing and linking to my content also helps.
In the past, there have been people who have wanted to give me more money for writing than the above. Finnish fundraising laws prevent me from directly asking for donations – I need to present everything as the purchase of a service with some genuine value in return. Right now, trying to come up with and maintain various reward tiers would distract me from the actual writing that I want to focus on. Even just having a tip jar link on my website would be considered soliciting donations, which is illegal without a fundraising permit, and fundraising permits are not given to private individuals. That said, if someone reading this would like to support my writing with a larger sum, nothing prevents me from accepting unsolicited gifts from people.