Nate Showell

Comments

‘AI for societal uplift’ as a path to victory
Nate Showell · 1d

This strategy suggests that decreasing ML model sycophancy should be a priority for technical researchers. It's probably the biggest current barrier to the usefulness of ML models as personal decision-making assistants. Hallucinations are probably the second-biggest barrier.

LessWrong Feed [new, now in beta]
Nate Showell · 2d

The new feed doesn't load at all for me.

Consider chilling out in 2028
Nate Showell · 15d

There's another way in which pessimism can be used as a coping mechanism: it can be an excuse to avoid addressing personal-scale problems. A belief that one is doomed to fail, or that the world is inexorably getting worse, can be used as an excuse to give up, on the grounds that comparatively small-scale problems will be swamped by uncontrollable societal forces. Compared to confronting those personal-scale problems, giving up can seem very appealing, and a comparison to a large-scale but abstract problem can act as an excuse for surrender. You probably know someone who spends substantial amounts of their free time watching videos, reading articles, and listening to podcasts that blame all of the world's problems on "capitalism," "systemic racism," "civilizational decline," or something similar, all while their bills are overdue and dishes pile up in their sink.

 

This use of pessimism as a coping mechanism is especially pronounced in the case of apocalypticism. If the world is about to end, every other problem becomes much less relevant in comparison, including all those small-scale problems that are actionable but unpleasant to work on. Apocalypticism can become a blanket pretext for giving in to your ugh fields. And while you're giving in to them, you end up thinking you're doing a great job of exercising the skill of staring into the abyss (you're confronting the possibility of the end of the world, right?) when you're actually doing the exact opposite. For many people who encounter AI apocalypticism, this usefulness as a coping mechanism, rather than anything related to preverbal trauma, is the more likely source of its psychological appeal.

Distillation Robustifies Unlearning
Nate Showell · 23d

Another experiment idea: testing whether the reduction in hallucinations that Yao et al. achieved with unlearning can be made robust.
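For concreteness, here's a minimal sketch of the core of such an experiment, assuming a Hugging Face-style causal LM as the unlearned teacher and standard KL-divergence distillation. The setup and the `hallucination_rate` helper mentioned in the comments are my own placeholders, not anything from Yao et al. or the post:

```python
# Hypothetical sketch: distill an unlearned teacher into a fresh student,
# then check whether the teacher's reduced hallucination rate carries over.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, input_ids, optimizer, temperature=2.0):
    """One knowledge-distillation step: match student logits to teacher logits."""
    with torch.no_grad():
        teacher_logits = teacher(input_ids).logits
    student_logits = student(input_ids).logits
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2  # standard temperature scaling for distillation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, compare hallucination_rate(teacher) against
# hallucination_rate(student) on whatever benchmark the original
# evaluation used; robustness = the reduction surviving distillation.
```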

What's up with AI's vision
Nate Showell · 2mo

Do LLMs perform better at games that are later in the Pokemon series? If difficulty interpreting pixel art is what's holding them back, it would be less of a problem when playing later Pokemon games with higher-resolution sprites.

This prompt (sometimes) makes ChatGPT think about terrorist organisations
Nate Showell · 2mo

Have you tried seeing how ChatGPT responds to individual lines of code from that excerpt? There might be an anomalous token in it along the lines of " petertodd".
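In case it's useful, here's a minimal sketch of that line-by-line probe using the OpenAI Python client. The `excerpt` placeholder and the repeat-the-string prompt (a common way to surface glitch tokens, since models often fail to echo them back) are my own assumptions:

```python
# Hypothetical probe: feed each line of the excerpt to the model separately
# to narrow down which fragment triggers the anomalous behavior.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

excerpt = """..."""  # placeholder: paste the code excerpt from the post here

for i, line in enumerate(excerpt.splitlines(), start=1):
    if not line.strip():
        continue  # skip blank lines
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model works for this kind of probe
        messages=[{"role": "user",
                   "content": f"Repeat this string back exactly: {line}"}],
    )
    print(f"line {i}: {response.choices[0].message.content!r}")
```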

Against podcasts
Nate Showell · 3mo

Occasionally something will happen on the train that I want to hear, like the conductor announcing a delay. But my not listening to podcasts on the train has more to do with not wanting to have earbuds in my ears or to carry headphones around.

Against podcasts
Nate Showell · 3mo

I hardly ever listen to podcasts. Part of this is because I find earbuds very uncomfortable, but the bigger part is that podcasts don't fit into my daily routines very well. When I'm walking around or riding the train, I want to be able to hear what's going on around me. When I do chores, it's usually in short segments, and I don't want to repeatedly pause and unpause a podcast as I stop and start. And when I'm not doing any of those things, I can watch videos that have a visual component instead of just audio, or read interview transcripts in much less time than listening to a podcast would take. The podcast format doesn't have any comparative advantage for me.

Nate Showell's Shortform
Nate Showell · 3mo

Metroid Prime would work well as a difficult video-game-based test for AI generality.

  • It has a mixture of puzzles, exploration, and action.
  • It takes place in a 3D environment.
  • It frequently involves backtracking across large portions of the map, so it requires planning ahead.
  • There are various pieces of text you come across during the game. Some of them are descriptions of enemies' weaknesses or clues on how to solve puzzles, but most of them are flavor text with no mechanical significance.
  • The player occasionally unlocks new abilities they have to learn how to use.
  • It requires the player to manage resources (health, missiles, power bombs).
  • It's on the difficult side for human players, but not to an extreme level.

There are no current AI systems that are anywhere close to being able to autonomously complete Metroid Prime. Such a system would probably have to be at or near the point where it could automate large portions of human labor.
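To make the proposal concrete, an evaluation harness could follow the standard Gymnasium agent loop. Everything below is hypothetical (no Metroid Prime environment actually exists); it just shows the shape of a pass/fail autonomous-completion benchmark:

```python
# Hypothetical benchmark loop; "MetroidPrime-v0" is an invented id.
import gymnasium as gym

def completion_rate(agent, episodes: int = 5, max_steps: int = 1_000_000) -> float:
    """Fraction of episodes in which the agent reaches the end credits."""
    env = gym.make("MetroidPrime-v0")  # placeholder: no such environment exists
    completed = 0
    for _ in range(episodes):
        obs, info = env.reset()
        for _ in range(max_steps):
            action = agent.act(obs)  # agent maps raw frames to controller inputs
            obs, reward, terminated, truncated, info = env.step(action)
            if terminated or truncated:
                break
        # Success is finishing the game, not accumulating reward;
        # "game_completed" is an assumed info key.
        completed += int(info.get("game_completed", False))
    env.close()
    return completed / episodes
```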

They Took MY Job?
Nate Showell · 3mo

I recently read This Is How You Lose the Time War, by Amal El-Mohtar and Max Gladstone, and had the strange experience of thinking "this sounds LLM-generated" even though it was written in 2019. Take this passage, for example:

You wrote of being in a village upthread together, living as friends and neighbors do, and I could have swallowed this valley whole and still not sated my hunger for the thought. Instead I wick the longing into thread, pass it through your needle eye, and sew it into hiding somewhere beneath my skin, embroider my next letter to you one stitch at a time.

I found that passage just by opening to a random page without having to cherry-pick. The whole book is like that. I'm not sure how I managed to stick it out and read the whole thing.

 

The short story on AI and grief feels very stylistically similar to This Is How You Lose the Time War. They both read like they're cargo-culting some idea of what vivid prose is supposed to sound like. They overshoot the target of how many sensory details to include, while at the same time failing to cohere into anything more than a pile of mixed metaphors. The story on AI and grief is badly written, but its bad writing is of a type that human authors sometimes engage in too, even in novels like This Is How You Lose the Time War that sell well and become famous.

 

How soon do I think an LLM will write a novel I would go out of my way to read? As a back-of-the-envelope estimate, such an LLM is probably about as far away from current LLMs in novel-writing ability as current LLMs are from GPT-3. If I multiply the 5 years between GPT-3 and now by a factor of 1.5 to account for a slowdown in LLM capability improvements, I get an estimate of that LLM being 7.5 years away, so around late 2032.
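Spelling out that arithmetic (the 1.5× slowdown factor is, of course, just a guess):

```python
# Back-of-the-envelope forecast from the paragraph above.
gap_years = 2025 - 2020          # GPT-3 (2020) to now: ~5 years
slowdown_factor = 1.5            # assumed slowdown in capability gains
years_ahead = gap_years * slowdown_factor  # 7.5 years
print(2025 + years_ahead)        # 2032.5, i.e. around late 2032
```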

Posts

How are you preparing for the possibility of an AI bust? [Question] · 1y
Nate Showell's Shortform · 2y
Degamification · 2y
Reinforcement Learner Wireheading · 3y