The link is to a particular timestamp in a much longer podcast episode. This segment plays immediately after the interview with Kat Woods (co-founder of Nonlinear), skipping over the part about requesting donations. In it, the podcast host John Sherman specifically calls out the apparent lack of instrumental rationality on the part of the Rationalist and Effective Altruism communities when it comes to stopping our impending AI doom. In particular, he criticizes our reluctance to use the Dark Arts, or at least symmetric weapons (like "marketing"), in the interest of maintaining our epistemic "purity".

(For those not yet aware, Sherman was persuaded by Yudkowsky's TIME article and created the For Humanity Podcast in an effort to spread the word about AI x-risk and thereby reduce it. This is an excerpt from Episode #24, the latest at the time of writing.)

I have my own thoughts about this, but I'm not fully aware of trends in the broader community, so I thought I'd create a space for discussion. Is the criticism fair? Are there any Rationalist/EA projects Sherman is unaware of that might change his mind? Have we failed? Are we just not winning hard enough? Should we change? If so, what should we change?

My (initial) Thoughts

I'm less involved with the EA side, but I feel that LessWrong in particular is a bastion of sanity in a mad world, and that this is worth protecting, even if it means that LessWrong proper doesn't get much done. Maxims like "Aim to explain, not persuade" are good for our collective epistemics, but they also seem to prohibit some of the prerequisites of collective action.

I think this is fine? Politics easily become toxic; they risk poisoning the well. There's no prohibition on rationalists building action- or single issue–focused institutions outside of LessWrong. There have been reports of people doing this. (I even kind of co-founded one, starting from LessWrong, but it's not super active.) Announcing what they're starting, doing postmortems on how things went, or explaining organizational principles seem totally kosher for LessWrong to me. I feel like I'm seeing some of this happening too, but maybe not enough?

What I'm not seeing is any kind of pipeline for skilling up our group rationality, especially of the instrumental flavor. That's not to say there's been zero effort.

Also, I'm personally not a marketer, or even very skilled socially. The kind of action Sherman seems to be asking for is probably not my comparative advantage. Should I be doing something else to contribute? Or should I skill up in whatever seems the most important? I'm not sure, and I expect my answer won't be the same for everyone.

1 comment:

youtube channels

https://www.youtube.com/@RationalAnimations (lesswrong stuff)

https://www.youtube.com/@RobertMilesAI (ai safety in particular)

https://www.youtube.com/@aiexplained-official (less of a particular perspective, more "the only sober analysis of current ai landscape on youtube")

incomplete results of stuff sponsored by givewell

(I was doing this search, but it's annoying to find the actual results, so to save others time, here are some of them)

We Now Have TOO MANY Bees (You Read That Right) | Lightning Round

The Lifesaving Tech Drivers Hate

The worst vulnerability of the decade?

Steve Hsu on the Future of Everything

Which Energy Source is Best w/ Age of Miracles

DECONSTRUCTION - Terrible Writing Advice

2023: A Year In Climate Change

The Crustacean Tier List

Conservative Populism's Gospel Of Victimhood w/ Paul Elliott Johnson - 12/20/21 | MR Live

Thameslink: London’s Other Cross-City Railway

📈 Chris Rufo vs Claudine Gay #podcast #economics #economy #politics #international #conservative

(editorial note: I include the above link to show that it happened, but I very much hesitated to do so, given that the people there would like me dead)

How Life Survives Inside Underwater Volcanoes

I accidentally found some nearly-lost Scooby-Doo stories (and now they're yours!)

Geosynchronous Orbits are WEIRD

Hiatus.

Balaji Srinivasan and Nathan Labenz on the Future of AI, AI Gods, and AI Control

In Defense of Fairytale Magic

The TRUE VILLAIN of Christmas

How Humans Made Malaria So Deadly

incomplete results of stuff sponsored by 80k hours:

(same as above, but with this search)

Why Doesn’t the Palo Verde Tree Need Water?

Physics Is Nearly Complete.

The Dev's Creed: Being Wrong is Essential

The Questionable Engineering of Oceangate

Crossing the Street Shouldn't Be Deadly (but it is)

The Moon Isn't As Dead As You Think

The Environmentally Friendly Fuel That Can Kill You | Lightning Round

What if Death was a Person?

Why Continents Are High

The Little Prince: Adulthood is a Scam

What’s Up With the Weird Pockmarks Up and Down the East Coast?

Does Antimatter Create Anti-Gravity?

Oppenheimer's warning lives on

6-month-old Steak, Ice Cream Bread & more debunking | How To Cook That Ann Reardon

Why Giants Aren't Actually Monsters

The Best Reading Skill No One Ever Taught You

I Read 2,216 Resumes. Here’s How You Stand Out 🚀

The Problem With Britain's Economy

6 Inventors Who Were Killed By Their Own Inventions

How Altruism Evolved in Humans

Trains’ Weirdly Massive Problem with Leaves

Is The Twilight Zone Still Good?

Why No One’s Sure If This Is Part Of The US Constitution

Can you trick your own brain?

Why 'pudding' refers to sausages and desserts

Ask Adam: Why is European food bland? Are closed mussels actually bad? Career advice? (PODCAST E19)

Johnny Harris Is Wrong About Inflation

The Insane Rise of YEAT

Are The First Stars Really Still Out There?