If you have listening stats, it would be useful to make those publicly visible somehow. It is generally useful to me as an author to know how much traction different posts get, and if audio versions end up being a major channel then I'll want to be able to compare performance between different posts in that channel specifically.
Nice! I don't know how well it works to listen to the top posts as a playlist. The top 4 includes Luke's textbook list and Eliezer's preface to R:AZ. The first isn't even an essay, and the second is very good as a preface to the book but doesn't have much value as a standalone essay. I think converting sequences to audio and offering them as playlists would be the best route.
I said on the previous post that it would be nice if this were integrated into LessWrong, but until that happens, it would be nice to have a bot that posts a comment on each post when it gets an audio version, so audio versions are easy to discover (and I would also have it post that comment retroactively on every post already converted). Just make sure this doesn't spam the site - I would consult the admins to see if that's fine, and whether they can keep the bot's comments off the frontpage.
Thanks for the feedback! We think a bot could make sense as well - we're exploring this internally.
A few other issues that - as someone who stays away from podcasts and video at (not literally) all costs - drive me away from voice and video.
The intent of this is 'possible other things to solve, or to determine that TLW is just weird'.
Of course, I am someone who is already reading LW via written text, so beware selection bias.
Update #1: It’s a rite of passage to binge the top LessWrong posts of all time, and now you can do it on your podcast app.
We (Nonlinear) made “top of all time” playlists for LessWrong, the EA Forum, and the Alignment Forum. Each contains roughly 400 of the most upvoted posts.
Update #2: The original Nonlinear Library feed includes top posts from the EA Forum, LessWrong, and the Alignment Forum. Now, by popular demand, you can get forum-specific feeds:
Stay tuned for more features. We’ll soon be launching channels by tag, so you can listen to specific subjects, such as longtermism, rationality, animal welfare, or global health. Enter your email here to get notified as we add more channels.
Below is the original explanation of The Nonlinear Library and its theory of change.
We are excited to announce the launch of The Nonlinear Library, which allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs.
In the rest of this post, we’ll explain our reasoning for the audio library, why it’s useful, why it’s potentially high impact, its limitations, and our plans. You can read it here or listen to the post in podcast form here.
Listen here: Spotify, Google Podcasts, Pocket Casts, Apple, or elsewhere
Or, just search for it in your preferred podcasting app.
Goal: increase the number of people who read EA research
A koan: if your research is high quality, but nobody reads it, does it have an impact?
Generally speaking, the theory of change of research is that you investigate an area, come to better conclusions, people read those conclusions, and they make better decisions, all ultimately leading to a better world. So the answer to the koan is no: barring some edge cases (1), if nobody reads your research, you usually won’t have any impact.
Nonlinear is working on the third step of this pipeline: increasing the number of people engaging with the research. By increasing the total number of EA and rationalist articles read, we’re increasing the impact of all of that content.
This step is often relatively neglected because researchers typically prefer doing more research to promoting their existing output. Some EAs seem to think that if their article was promoted once, in one location such as the EA Forum, then surely most of the community saw and read it. In reality, it is rare for more than a small percentage of the community to read even the top posts. It is an expected-value tragedy when a researcher puts hundreds of hours of work into an important report that only a handful of people read, dramatically reducing its potential impact.
Here are some purely hypothetical numbers just to illustrate this way of thinking:
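A minimal sketch of that arithmetic, with every figure below invented solely for illustration (none are real estimates):

```python
# All numbers are made up purely to show the shape of the calculation.
hours_invested = 300           # researcher hours spent on a report
value_per_engaged_reader = 1   # arbitrary "impact units" per person who reads it

readers_without_audio = 20
readers_with_audio = 20 + 200  # same readers, plus hypothetical audio listeners

impact_without_audio = readers_without_audio * value_per_engaged_reader
impact_with_audio = readers_with_audio * value_per_engaged_reader

# Same research effort, roughly 11x the impact once the audio channel is added.
print(impact_with_audio / impact_without_audio)
```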
Another way the audio library is high expected value is that instead of acting as a multiplier on just one researcher or one organization, it acts as a multiplier on nearly the entire output of the EA research community. This allows for two benefits: long-tail capture and the power of large numbers and multipliers.
Long-tail capture. The value of research is extremely long-tailed, with a small fraction of the research having far more impact than the rest. Unfortunately, it’s not easy to do highly impactful research or to predict in advance which topics will get the most traction. If you as a researcher want to do research that dramatically changes the landscape, your odds are low. However, if you increase the impact of most of the EA community’s research output, you also “capture” the impact of the long tails when they occur. Your probability of applying a multiplier to very impactful research is actually quite high.
Power of large numbers and multipliers. If you apply a multiplier to a bigger number, you have a proportionately larger impact. This means that even a small multiplier applied to a large base can lead to outsized improvements. For example, if a single researcher toiled away to increase their readership by 50%, that would likely have a smaller impact than the Nonlinear Library increasing the readership of the EA Forum by even 1%. This is because 50% of a small number is still very small, whereas 1% of a large number is actually quite large. And there’s reason to believe that the library could have much larger effects on readership, which brings us to our next section.
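To make that comparison concrete, here is the same arithmetic with both readership figures invented for illustration:

```python
# Invented figures: one researcher's readership vs. forum-wide reads.
single_researcher_reads = 200
forum_wide_reads = 100_000

# A heroic 50% boost to one researcher's audience...
extra_reads_single = single_researcher_reads * 0.50   # +100 reads

# ...vs. a modest 1% boost applied across the whole forum.
extra_reads_forum = forum_wide_reads * 0.01            # +1,000 reads

print(extra_reads_forum / extra_reads_single)  # the forum-wide 1% adds 10x more reads
```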
Why it’s useful
EA needs more audio content
EA has a vibrant online community, and there is an amazing amount of well-researched, insightful, and high-impact content. Unfortunately, it’s almost entirely in writing, with very little available in audio.
There are a handful of great podcasts, such as the 80,000 Hours and FLI podcasts, and some books are available on Audible. However, new podcast episodes come out relatively infrequently, and new audiobooks even less so. There are a few other EA-related podcasts, including one for the EA Forum, but a substantial percentage have gone dormant, as is far too common given the considerable effort required to put out episodes.
There are a lot of listeners
The limited availability of audio is a shame because many people love to listen to content. For example, ever since the 80,000 Hours podcast came out, a common way for people to become more fully engaged in EA has been to mainline all of its episodes. Many others got involved by binging the HPMOR audiobook, as Nick Lowry puts it in this meme. We are definitely a community of podcast listeners.
Why audio? Often, you can’t read with your eyes but you can with your ears - for example, when you’re working out, commuting, or doing chores. Sometimes it’s just for a change of pace. In addition, some people find listening easier than reading. Because it feels easier, they end up spending time learning that might otherwise go to lower-value things.
Regardless, if you like to listen to EA content, you’ll quickly run out of relevant podcasts - especially if you’re listening at 2-3x speed - and have to either use your own text-to-speech software or listen to topics that are less relevant to your interests.
Existing text-to-speech solutions are sub-optimal
We’ve experimented extensively with text-to-speech software over the years, and all of the dozens of programs we’ve tried have fairly substantial flaws. In fact, a huge inspiration for this project was our frustration with the existing solutions and the conviction that there must be a better way. Some of the problems that often occur with these apps: robotic-sounding voices, mispronunciations and mangled formatting, and clunky interfaces with limited control over playlists and speeds.
In the end, this leads to only the most motivated people using the services, leaving out a huge percentage of the potential audience. (2)
How The Nonlinear Library fixes these problems
To make it as seamless as possible for EAs to use, we decided to release it as a podcast so you can use the podcast app you’re already familiar with. Additionally, podcast players tend to be reasonably well designed and offer great customizability of playlists and speeds.
We’re paying for some of the best AI voices because the old ones suck. We’ve also spent a bunch of time fixing weird formatting errors and mispronunciations, and we have a system for fixing other recurring ones as they come up. If you spot any frequent mispronunciations or bugs, please report them in this form so we can keep improving the service.
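The post doesn’t describe that system in detail, but as a purely hypothetical sketch, a pre-synthesis substitution pass could look something like this (every entry below is an invented example, not Nonlinear’s actual fix list):

```python
import re

# Hypothetical substitution table: strings a TTS engine tends to mispronounce
# or mangle, mapped to spellings it reads correctly.
SUBSTITUTIONS = {
    r"\bEA\b": "E A",
    r"\bHPMOR\b": "H P M O R",
    r"\bR:AZ\b": "Rationality from A to Z",
    r"&amp;": "and",
}

def preprocess_for_tts(text: str) -> str:
    """Apply recurring-fix substitutions to article text before synthesis."""
    for pattern, replacement in SUBSTITUTIONS.items():
        text = re.sub(pattern, replacement, text)
    return text

print(preprocess_for_tts("EA &amp; rationality posts, e.g. HPMOR"))
```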
Initially, as an MVP, we’re just posting each day’s top upvoted articles from the EA Forum, Alignment Forum, and LessWrong. (3) We are planning on increasing the size and quality of the library over time to make it a more thorough and helpful resource.
Why not have a human read the content?
The Astral Codex Ten podcast and other rationalist podcasts do this. We seriously considered it, but it’s just too time-consuming, and there is a lot of written content. Given the value of EA time, both financially and counterfactually, this wasn’t a very appealing solution. We looked into hiring remote workers, but that would still have cost at least $30 an episode, compared to approximately $1 an episode with text-to-speech software.
Beyond the cost savings, text-to-speech also lets us build a far more complete library. If we did this with humans, even investing a ton of time and management, we might be able to convert seven articles a week. At that rate, we’d never keep up with new posts, let alone include the historical posts that are so valuable. With text-to-speech software, we can keep up with all new posts and convert the old ones, creating a much more complete repository of EA content. Just imagine being able to listen to over 80% of the EA writing you’re interested in, compared to less than 1%.
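To put rough numbers on that trade-off (the per-episode costs and the seven-articles-a-week figure come from above; the weekly post volume is an assumption):

```python
# Per-episode costs are the post's estimates; the weekly post volume is assumed.
COST_PER_EPISODE_HUMAN = 30   # USD, hired remote reader
COST_PER_EPISODE_TTS = 1      # USD, text-to-speech
HUMAN_EPISODES_PER_WEEK = 7   # estimated throughput of a human-narration pipeline
NEW_POSTS_PER_WEEK = 30       # assumed number of qualifying new posts

weekly_cost_human = HUMAN_EPISODES_PER_WEEK * COST_PER_EPISODE_HUMAN   # $210, partial coverage
weekly_cost_tts = NEW_POSTS_PER_WEEK * COST_PER_EPISODE_TTS            # $30, full coverage

print(weekly_cost_human, weekly_cost_tts)
```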
Additionally, the automaticity of text-to-speech fits with Nonlinear’s general strategy of looking for interventions that have “passive impact”. Passive impact is the altruistic equivalent of passive income, where you make an upfront investment and then generate income with little to no ongoing maintenance costs. If we used human readers, we’d have a constant ongoing cost of managing them and hiring replacements. With TTS, after setting it up, we can mostly let it run on its own, freeing up our time to do other high impact activities.
Finally, and least importantly, there is something delightfully ironic about having an AI talk to you about how to align future AI.
On a side note, if for whatever reason you would not like your content in The Nonlinear Library, just fill out this form. We can remove a particular article, or add you to a list so that we never add your content to the library, whichever you prefer.
Future Playlists (“Bookshelves”)
There are a number of sub-projects that we are considering or currently working on.
Who we are
We're Nonlinear, a meta longtermist organization focused on reducing existential and suffering risks. More about us.
Footnotes
(1) Sometimes the researcher is the same person as the person who puts the results into action, such as Charity Entrepreneurship’s model. Sometimes it’s a longer causal chain, where the research improves the conclusions of another researcher, which improves the conclusions of another researcher, and so forth, but eventually it ends in real world actions. Finally, there is often the intrinsic happiness of doing good research felt by the researcher themselves.
(2) For those of you who want to use TTS for a wider variety of articles than what the Nonlinear Library will cover, the ones I use are listed below. Do bear in mind they each have at least one of the cons listed above. There are probably also better ones out there as the landscape is constantly changing.
(3) The current upvote thresholds for which articles are converted are:
25 for the EA Forum
30 for LessWrong
No threshold for the Alignment Forum due to low volume
This is based on the frequency of posts, relevance to EA, and quality at certain upvote levels.
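Expressed as a simple filter (a sketch of the rule above, not the actual selection code):

```python
# Thresholds from the list above; the function itself is an illustrative sketch.
KARMA_THRESHOLDS = {
    "ea_forum": 25,
    "lesswrong": 30,
    "alignment_forum": 0,   # no threshold due to low post volume
}

def should_convert(forum: str, karma: int) -> bool:
    """Return True if a post meets the conversion threshold for its forum."""
    return karma >= KARMA_THRESHOLDS.get(forum, float("inf"))

print(should_convert("lesswrong", 28))       # False: below the 30-karma threshold
print(should_convert("alignment_forum", 5))  # True: no threshold applied
```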