The new feed doesn't load at all for me.
There's another way in which pessimism can serve as a coping mechanism: it can be an excuse to avoid addressing personal-scale problems. A belief that one is doomed to fail, or that the world is inexorably getting worse, can justify giving up, on the grounds that comparatively small-scale problems will be swamped by uncontrollable societal forces. Compared to confronting those personal-scale problems, giving up can seem very appealing, and a comparison to a large-scale but abstract problem can act as a pretext for surrender. You probably know someone who spends substantial amounts of their free time watching videos, reading articles, and listening to podcasts that blame all of the world's problems on "capitalism," "systemic racism," "civilizational decline," or something similar, all while their bills are overdue and dishes pile up in their sink.
This use of pessimism as a coping mechanism is especially pronounced in the case of apocalypticism. If the world is about to end, every other problem becomes much less relevant in comparison, including all those small-scale problems that are actionable but unpleasant to work on. Apocalypticism can become a blanket pretext for giving in to your ugh fields. And while you're giving in to them, you end up thinking you're doing a great job of utilizing the skill of staring into the abyss (you're confronting the possibility of the end of the world, right?) when you're actually doing the exact opposite. Rather than anything related to preverbal trauma, this usefulness as a coping mechanism is the more likely source of the psychological appeal of AI apocalypticism for many people who encounter it.
Another experiment idea: testing whether the reduction in hallucinations that Yao et al. achieved with unlearning can be made robust.
Do LLMs perform better at later games in the Pokemon series? If difficulty interpreting pixel art is what's holding them back, it should be less of a problem in later games, which have higher-resolution sprites.
Have you tried seeing how ChatGPT responds to individual lines of code from that excerpt? There might be an anomalous token in it along the lines of " petertodd".
Occasionally something will happen on the train that I want to hear, like the conductor announcing a delay. But not listening to podcasts on the train has more to do with not wanting to have earbuds in my ears or carry headphones around.
I hardly ever listen to podcasts. Part of this is because I find earbuds very uncomfortable, but the bigger part is that podcasts don't fit into my daily routines very well. When I'm walking around or riding the train, I want to be able to hear what's going on around me. When I do chores, it's usually in short segments, and I don't want to have to repeatedly pause and unpause a podcast as I stop and start. When I'm not doing any of those things, I can watch videos that have visual components instead of just audio, or read interview transcripts in much less time than listening to a podcast would take. The podcast format doesn't have any comparative advantage for me.
Metroid Prime would work well as a difficult video-game-based test for AI generality.
There are no current AI systems that are anywhere close to being able to autonomously complete Metroid Prime. Such a system would probably have to be at or near the point where it could automate large portions of human labor.
I recently read This Is How You Lose the Time War, by Max Gladstone and Amal El-Mohtar, and had the strange experience of thinking "this sounds LLM-generated" even though it was written in 2019. Take this passage, for example:
You wrote of being in a village upthread together, living as friends and neighbors do, and I could have swallowed this valley whole and still not sated my hunger for the thought. Instead I wick the longing into thread, pass it through your needle eye, and sew it into hiding somewhere beneath my skin, embroider my next letter to you one stitch at a time.
I found that passage just by opening to a random page without having to cherry-pick. The whole book is like that. I'm not sure how I managed to stick it out and read the whole thing.
The short story on AI and grief feels very stylistically similar to This Is How You Lose the Time War. They both read like they're cargo-culting some idea of what vivid prose is supposed to sound like. They overshoot the target of how many sensory details to include, while at the same time failing to cohere into anything more than a pile of mixed metaphors. The story on AI and grief is badly written, but it's badly written in a way that human authors sometimes are too, even in novels like This Is How You Lose the Time War that sell well and become famous.
How soon do I think an LLM will write a novel I would go out of my way to read? As a back-of-the-envelope estimate, such an LLM is probably about as far away from current LLMs in novel-writing ability as current LLMs are from GPT-3. If I multiply the 5 years between GPT-3 and now by a factor of 1.5 to account for a slowdown in LLM capability improvements, I get an estimate of that LLM being 7.5 years away, so around late 2032.
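A minimal sketch of that arithmetic, assuming GPT-3's mid-2020 release as the baseline, a "now" of roughly mid-2025, and treating the 1.5x slowdown as a single multiplier (all of these are just the rough guesses from above):

```python
# Back-of-the-envelope estimate: scale the GPT-3-to-now gap by a slowdown factor.
GPT3_RELEASE = 2020.4    # GPT-3 came out around mid-2020
NOW = 2025.4             # assumed "now" of roughly mid-2025
SLOWDOWN = 1.5           # rough guess for how much capability progress has slowed

gap_so_far = NOW - GPT3_RELEASE          # ~5 years from GPT-3 to current LLMs
years_remaining = gap_so_far * SLOWDOWN  # ~7.5 years for an equal-sized jump
print(f"Estimated year: {NOW + years_remaining:.1f}")  # ~2032.9, i.e. late 2032
```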
This strategy suggests that decreasing ML model sycophancy should be a priority for technical researchers. It's probably the biggest current barrier to the usefulness of ML models as personal decision-making assistants. Hallucinations are probably the second-biggest barrier.