I’m currently writing a fantasy novel to encourage people to join the EA community.
I’ve made a living self-publishing novels for years (mainly Pride and Prejudice fan fiction, but also two LitRPG-esque works), and I’ve been interested in writing something to directly promote Effective Altruism for a while.
I applied for one of Scott Alexander’s ACX grants to fund taking three to five months off from other projects to write this, and I received an ACX+ grant. Now that I’m actively developing the story, I’m hoping to get the community’s thoughts on which ideas (especially around AI safety and regulation) are important to signal boost, and which should be treated as info hazards. I also think it would be cool to brainstorm and bounce my ideas off people who are interested in this sort of fiction outreach project. I’ve never really shown my work to people who might meaningfully critique it while it was still in the outline phase, and I’d like to see whether I get a lot of value from doing that.
I’ve set up a Discord server to talk about the story in, and there is a shared Google Doc with my outline. It’s here — if you are quick, you might be the first person there who isn’t me. Come say hi!
Here is a link to my current outline if you want to look at it and leave comments:
Also, I’m planning to hold a discussion event on the Discord server on Tuesday, April 5 at 6 pm Central European Time (which is, I’m afraid, 9 am in California). The plan is for me to talk a bit more about the project, and then discuss any questions or ideas people have. It would be awesome if you joined; even if you are just vaguely curious, I’d love you to come.
The theory underlying why writing a novel like this is a good idea has two components:
HPMOR, Atlas Shrugged, and a variety of books that did not influence me personally (such as The Alchemist, Who Moved My Cheese?, or Ishmael) demonstrate the value of works of fiction in getting people to take ideas seriously, start talking about issues, and engage with a community.
A good novel for exposing people to these ideas needs to be successful as a novel, and hence writing something that targets an existing market niche makes sense.
Two years ago I wrote a whole essay on using fiction to change the world, which I cross-posted to the EA Forum and LessWrong.
I’m looking for some help in terms of what ideas to fill the novel with:
So, are there ideas around AI risk that people think are robustly likely to make the long-term future better, despite the problem of cluelessness and missing crucial considerations?
If we don’t really have ideas that are robustly positive despite cluelessness, are there things that, even though there are plausible worlds in which they make things worse, the balance of the evidence suggests will improve the odds of a good long-term future, and/or make a good long-term future even better than it would otherwise be?
While I don’t have any ideas that I think can robustly meet a cluelessness constraint, I do think certain things are sufficiently likely to meet this requirement that they are worth promoting, for example:
Strongly promoting basic income and some form of universal ownership of space and deep sea resources.
Successfully increasing collaboration and peaceful feelings among major powers
Convincing lots of AI researchers and the top business people in companies researching AI that AI could be really dangerous (this seems more likely than not to be good, even though there are infohazard-related ways it could backfire)
Passing Luddite-style regulations designed to burden AI research teams with enormous safety-reporting requirements (again, definitely plenty of ways this could backfire)
Expanding the Effective Altruist community’s size and influence
If you have ideas that you think are robustly likely to do good, please tell me (along with the robustness argument).
Or do you have ideas that are likely to be long term good, even if the argument for their goodness is not robust?
I intend to keep the novel neutral on debates within the EA community, instead having different characters express versions of the most popular views.
I also want to have sympathetic characters express the most common objections to EA thinking, including:
Why aren’t you pursuing systemic change?
I feel vastly more certain that I’m doing good when I help nearby (time/space) people.
I have a particular duty towards my own community and projects rather than other communities.
Donating ‘that’ much is just crazy.
What is important is being a kind person, rather than making the biggest difference.
I just can’t think that way about choosing a career or a cause; I want to spend my time doing the things that matter to me personally, even if they aren’t the best possible thing to do.
What are other common criticisms or arguments that are worth having someone express?
This list covers the majority of the things I’m currently trying to include in the novel, and most of them already have a place in the current (fairly tentative and not very detailed) outline.
Donating a larger portion of one’s income (e.g., taking a 10% giving pledge)
The book will have characters act, and think about their actions, in ways that model behavior a normal person can adopt
Conversations and constructing a fictional society designed to normalize donating ten percent as something that ordinary people just decide to do.
It will try to minimize guilt and perfectionism appeals that might make someone feel really bad or filled with regrets after reading this.
Evaluating giving opportunities to improve effectiveness
Cause neutrality
Helping distant people is just as important, and just as much a thing for normal people to do, as helping nearby people
A discussion of longtermism, without the text necessarily siding one way or the other
Expected value calculations
Some interventions are vastly more effective than others
The Importance/Tractability/Neglectedness (ITN) framework
Thinking about direct work in EA terms
Replaceability
Whether talent gaps or funding gaps are bigger
Leveraging existing skills in new ways
Awareness of the basic cause areas
X-risk/ longtermism
AI safety and biosecurity
Cluelessness and crucial considerations
Should potential future humans dominate our decision making
Carl Shulman’s argument that longtermism isn’t necessary to motivate x-risk as an important issue
Uncertainty criticisms
Phil Torres’s messianism and extreme-behavior criticism
Animal welfare
Include the weird ideas like wild animal suffering and insect suffering
Global health and welfare
Self-care for effective altruists
We are not altruistic good-maximizing machines
Any step in the right direction is an improvement, and worth doing
Any good policy is a policy that actual human beings can use.
Secondary Cause areas:
Institutional decision making
Improvements in the scientific process
Other possibly important ideas?
Actual things that have been done to enormously improve the world, and the ability for individuals to make new good things happen
Do people think I’ve left out anything important from this list that is essential to include in an introduction to effective altruism? And reversing the question: are there things people think are essential not to include in an introductory discussion of effective altruism?
I want to note that I don’t think having more advanced topics intermixed with everything else is a bad thing or a problem. The goal is very much for everything to flow in the text, and to be discussed through arguments and dialogues that are fun to read (and that will therefore be forced to simplify the actual ideas and arguments, perhaps occasionally too far).
The other area where I’m looking for help is the publication plan. My current plan is to upload the novel chapter by chapter to Royal Road and the SpaceBattles forums, both of which are popular places to serially publish fantasy/sci-fi fiction for free. Around when the regular posting of chapters reaches the middle of the novel/first book, I plan to publish the whole thing on Amazon and other ebook retailers, while continuing to publish new chapters on the free sites every few days. I’d also probably have a version on my own website that is updated at the same time as the Royal Road and SpaceBattles versions.
I know there are several other popular free fiction sites where I’d like to post, but I am not sure which ones will be worth the time and effort to keep updated. So if people have ideas about that, please tell me.
Here’s the Discord link again:
And here is my current outline:
Finally, I'd like to thank Milan, Richard Horvath, and Gergő Gáspár for reading a draft of this post and offering suggestions and corrections.