I am not sure how well Project Lawful has been discussed elsewhere, but it is worth reading despite the huge time commitment, even at a fairly high value-of-an-hour. The specifics of how it is written add many elements to the "pros" side of the general pros-and-cons considerations of reading fiction.
It is also probably worth reading even if you've got a low tolerance for sexual themes - as long as that tolerance isn't so low that you'd feel injured by having to read that sort of thing.
If you've ever wondered why Eliezer describes himself as a decision theorist, this is the work that I'd say will help you understand what that concept looks like in his worldview.
I read it first in the Glowfic format, and since enough time had passed after finishing it, when I found the Askwho AI audiobook version I started listening to that as well.
The audiobook was taken off one of its hosting sites over TOS issues, so I've since been following it update to update on Spotify.
Takeaways from both formats:
Glowfic is still superior if you have the internal motivation circuits for reading books in text. The format includes reference images for the characters in different poses/expressions to follow along with the role playing. The text often includes equations, lists of numbers, or things written on whiteboards which are hard to follow in pure audio format. There are also in-line external links for references made in the work - including things like background music to play during certain scenes.
(I recommend listening to the music anytime you see a link to a song.)
This being said, Askwho's AI audiobook is the best member of its format I've seen so far. If you have never listened to another AI-voiced audiobook, I'd almost recommend not starting with this one: you risk not appreciating it as much as it deserves, and you'll simultaneously ruin your chances of happily listening to other AI-voiced audiobooks afterward. This is, of course, a joke. I do recommend listening to it even if it's the first AI audiobook you'll ever listen to - it deserves a shot, even from someone skeptical of the concept.
I think a good compromise position with the audio version is to listen to chapters with lecture content while keeping the glowfic open in another tab, in "100 posts per page" mode, on the page containing the rough start-to-end transcript for that episode. Some of the discussion you will likely be able to follow in working memory while staring at a waiting room wall, but good luck with the heavy-math stuff. If you're driving and hit heavy math, it'd probably be a good idea to have that section open on your phone so you can scroll through those parts again 10 minutes later while you're waiting for your friend to meet you out in the parking lot.
TL;DR - IMO Project Lawful is worth reading for basically everyone, despite its length and other small flinches from content/genre/format. The Glowfic format has major benefits, but Askwho did an extraordinarily good job of making the AI-voiced format work. You should probably have the glowfic open somewhere alongside the audiobook, since some things will be lost if you try to take it in purely as an audiobook.
I gave it a try two years ago, and I really liked the logic lectures early on (basically a narrativization of HAE101 (for beginners)), but gave up soon after. Here are some other parts I learned valuable stuff from:
Do you have recommendations for other sections you found especially insightful or high in potential-to-improve-effectiveness? No need to explain, but a link is appreciated so I can take a look without reading the whole thing.
(edit: formatting on this appears to have gone all to hell and idk how to fix it! Uh oh!)
(edit2: maybe fixed? I broke out my commentary into a second section instead of doing a spoiler section between each item on the list.)
(edit3: appears fixed for me)
Yep, I can do that legwork!
I'll add some commentary, but I'll "spoiler" it in case people don't wanna see my takes ahead of forming their own, or just general "don't spoil (your take on some of) the intended payoffs" stuff.
https://www.projectlawful.com/posts/6334 (Contains infohazards for people with certain psychologies: do not twist yourself into a weird and uncomfortable condition contemplating "Greater Reality" - notice confusion about it quickly and refocus on ideas where you can more easily update your expectations of future experience within the universe you appear to be getting evidence about. "Sanity checks" may be important. The ability to say to yourself "this is a waste of time/effort to think about right now" may also be important.) (This is a section of Planecrash where a lot of the plot-relevant events have already taken place and are discussed, so MAJOR SPOILERS.) (This is the section the "Negative Coalition" tweet came from.)
"No rescuer hath the rescuer. No Lord hath the champion, no mother and no father, only nothingness above." What is the right way to try to become good at the things Eliezer is good at? Why does naive imitation fail? There is a theme here, one whose corners appear all over Eliezer's work - see Final Words for another thing I'd call a corner of this idea. What is the rest? How does the whole picture fit together? Welp. I started by writing a conversation in the style of Gödel, Escher, Bach, or A Semitechnical Introduction to Solomonoff Induction, in which a version of me talks with an internal model of Eliezer I named "Exiezer" - and used that to work my way through connecting all of those ideas in an extended metaphor about learning to craft handaxes. I may do a LessWrong post including it, if I can tie it to a sufficiently high-quality object-level discussion of education and self-improvement.
This is a section titled "the meeting of their minds," where Keltham and Carissa go full "secluded setting, radical honesty, total mindset dump." I think it is one of the most densely interesting parts of the book, and it represents a few techniques more people should try. "How do you know how smart you really are?" Well, have you ever tried writing a character smarter than you think you are, doing something that requires more intelligence than you feel like you have? What would happen if you attempted that? You can have all the time in the world to plan out every little detail, check over your work, list alternatives, study relevant examples/material, etc. This section has the feeling of people actually attempting to run the race they've been practicing for, using the crispest versions of the techniques they've been iterating on. Additionally: "Have you ever attempted to 'meet minds' with someone? What sort of skills would you want to single out to practice? What sort of setting seems like it'd work for that?" This section shows two people working through a really serious conflict. It's a place where their values have come seriously into conflict, and yet, to get more of what they both want, they have to figure out how to cooperate. Also, they've both ended up pretty seriously damaged, and they have things they need to untangle.
This is a section called "to earth with science," and... well, how useful it is depends on how much you'd benefit from thinking more critically about the academic/scientific institutions we have on this planet. It's very much Eliezer doing a pseudo-rant about what's broken here, echoing the tone of something like Inadequate Equilibria. The major takeaway is something like what you get from a piece of accurate satire - the lèse-majesté that shatters some of the memes handed down to you by the wiser-than-thou people who grimly say "we know it's not perfect, but it's the best we have" and expect you not to have follow-up questions about that kind of assertion.
This is my favorite section from "to hell with science." The entire post is a great lecture about the philosophy and practice of science, but this part in particular touches on a concept I expect to come up in more detail later regarding AIs and agency. One of the cruxes of this whole AI debate is whether you can separate out "intelligence" and "agency" - and this part provides an explanation for why that whole idea is something of a failure to conceptualize these things correctly.
This is Keltham lecturing on responsibility, the design of institutions, and how to critique systems from the lens of someone like a computer programmer. This is where you get some of the juiciest takeaways about Civilization as Eliezer envisions it. The "basic sanity check" of "who is the one person responsible for this" & requisite exception handling is particularly actionable, IMO.
"Learn when/where you can take quick steps and plant your feet on solid ground." There's something about feedback loops here, and the right way to start getting good at something. May not be terribly useful to a lot of people, but it stood out as a prescription for people who want to learn something. Invent a method, try to cheat, take a weird shortcut, guess. Then, check whether your results actually work. Don't go straight for "doing things properly" if you don't have to.
Keltham on how to arrive at Civilization from first principles. This is one of the best lectures in the whole series, from my perspective. It's framed as a thought-experiment that I could internalize and play with in spare moments.
Hopefully some of these are interesting and useful to you Mir, as well as others here. There's a ton of other stuff, so I may write a follow-up with more later on if I have more time.
This is awesome, thank you so much! The green leaf indicates that you're new here (or a new alias)? Happy for LW! : )
"But how does Nemamel grow up to be Nemamel? She was better than all her living competitors, there was nobody she could imitate to become that good. There are no gods in dath ilan. Then who does Nemamel look up to, to become herself?"
I first learned this lesson in my youth when, after climbing to the top of a leaderboard in a puzzle game I'd invested >2k hours into, I was surpassed so hard by my nemesis that I had to reflect on what I was doing. Thing is, they didn't just surpass me and everybody else, but instead continued to break their own records several times over.
Slightly embarrassed by having congratulated myself for my merely-best performance, I had to ask "how does one become like that?"
My problem was that I'd always just been trying to get better than the people around me, whereas their target was the inanimate structure of the problem itself. When I broke a record, I said "finally!" and considered myself complete. But when they did the same, they said "cool!" and kept going. The only way to defeat them was to stop trying to defeat them, and instead fight the perceived limits of the game itself.
To some extent, I am what I am today, because I at one point aspired to be better than Aisi.
Two years ago, I didn't realize that 95% of my effort was aimed at answering what were ultimately other people's questions. What happens when I learn to aim all my effort at questions purely arising from bottlenecks I notice in my own cognition?
When he knows that he must justify himself to others (who may or may not understand his reasoning), his brain's background-search is biased in favour of what-can-be-explained. For early thinkers, this bias tends to be good, because it prevents them from bullshitting themselves. But there comes a point where you've mostly learned not to bullshit yourself, and you're better off purely aiming your cognition based on what you yourself think you understand.
— from a comment critiquing the fact that John still has to justify his research priorities in order to be funded
I hate how much time my brain (still) wastes on daydreaming and coming up with sentences optimized for impressing people online. What happens if I instead learn to align all my social-motivation-based behaviours with what someone would praise if they had all the mental & situational context I have, and who is harder to fool than I am? Can my behaviour then be maximally aligned with [what I think is good], and [what I think is good] be maximally aligned with my best effort at figuring out what's good?
I hope so, and that's what Maria is currently helping me find out.
Thanks so much! Glad you are enjoying the audio format. I really agree this story is worth "reading" in some form, it's why I'm working on this project.
Is the recording schedule based on Patreon cash flow? I.e., if more people support, could we get episodes faster? Or is it also limited by your time? (I'm not sure how much manual labour goes into this vs. just paying for the service.) Would it be possible to put money toward a specific project? That could be an interesting incentive for people who'd like to see more of their favourite story sooner. :)
(ElevenLabs reading of this post:)
I'm excited to share a project I've been working on that I think many in the LessWrong community will appreciate - converting some rational fiction into high-quality audiobooks using cutting-edge AI voice technology from ElevenLabs, under the name "Askwho Casts AI".
The keystone of this project is an audiobook version of Planecrash (AKA Project Lawful), the epic glowfic authored by Eliezer Yudkowsky and Lintamande. Given the scope and scale of this work, with its large cast of characters, I'm using ElevenLabs to give each character their own distinct voice. Converting this story has been a labor of love, and I hope that if anyone has bounced off it before, this might be a more accessible version.
Alongside Planecrash, I'm also working on audiobook versions of two other rational fiction favorites:
I'm also putting out a feed where I convert any articles I find interesting, a lot of which are in the Rat Sphere.
My goal with this project is to make some of my personal favorite rational stories more accessible by allowing people to enjoy them in audiobook format. I know how powerful these stories can be, and I want to help bring them to a wider audience and to make them easier for existing fans to re-experience.
I wanted to share this here on LessWrong to connect with others who might find value in these audiobooks. If you're a fan of any of these stories, I'd love to hear your thoughts and feedback! And if you know other aspiring rationalists who might enjoy them, please help spread the word.
What other classic works of rational fiction would you love to see converted into AI audiobooks?