We've just passed the half-year mark for this project! If you're reading this, please consider taking this 5-minute survey - all questions are optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone who has responded to this already!
Back to our regularly scheduled intro...
This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
If you'd like to receive these summaries via email, you can subscribe here.
Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!
Object Level Interventions / Reviews
AI
Proposals for the AI Regulatory Sandbox in Spain
by Guillem Bas, Jaime Sevilla, Mónica Ulloa
Author’s summary: “The European Union is designing a regulatory framework for artificial intelligence that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.”
Power laws in Speedrunning and Machine Learning
by Jaime Sevilla
Paper by Epoch. World-record progressions in video game speedrunning fit a power-law pattern very well. Due to a lack of longitudinal data, the authors can't provide definitive evidence of power-law decay in machine learning benchmark improvements (though it is a better model than assuming no improvement over time). However, if this model is assumed, it suggests that a) machine learning benchmarks aren't close to saturation and b) sudden large improvements are infrequent but aren't ruled out.
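To make the power-law model concrete, here is a minimal sketch (not Epoch's actual code - the data and parameter names are invented for illustration) of fitting a power law to a record progression via ordinary least squares in log-log space:

```python
# Illustrative sketch, not Epoch's code: fit a power law
# record(t) ~ a * t^(-b) to a series of world-record values.
# The "days" and "records" data below are made up.
import numpy as np

days = np.array([1, 5, 20, 60, 180, 400, 900], dtype=float)
records = np.array([520.0, 470.0, 430.0, 405.0, 388.0, 377.0, 370.0])

# A power law y = a * x^(-b) is linear in log-log space:
# log y = log a - b * log x, so least squares recovers a and b.
slope, intercept = np.polyfit(np.log(days), np.log(records), deg=1)
a, b = np.exp(intercept), -slope

print(f"fitted power law: record(t) ≈ {a:.1f} * t^(-{b:.3f})")
# Under this model, improvements keep coming but slow down - echoing
# the post's point that benchmarks may be far from saturation even
# if large sudden jumps are infrequent.
print(f"predicted record at t=2000 days: {a * 2000 ** (-b):.1f} s")
```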
No, the EMH does not imply that markets have long AGI timelines
by Jakob
Argues that interest rates are not a reliable instrument for assessing market beliefs about transformative AI (TAI) timelines, for two reasons:
My Assessment of the Chinese AI Safety Community
by Lao Mein
On April 11th, the Cybersecurity Administration of China released a draft of “Management Measures for Generative Artificial Intelligence Services” for public comment. Some in the AI safety community think this is a positive sign that China is considering AI risk and may participate in a disarmament treaty. However, the author argues that it is just a PR statement, that no one in China is talking about it, and that what focus exists is on near-term stability.
They also note that the EA/Rationalist/AI Safety forums in China are mostly populated by expats or people physically outside of China, most posts are in English, and there is little significant AI Safety work in China. They suggest there is a lack of people at the interface of Western EA and Chinese technical work, and that you can’t just copy Western EA ideas over to China due to different mindsets.
AI doom from an LLM-plateau-ist perspective
by Steven Byrnes
Transformative AI (TAI) might come about via a large language model (LLM), something similar to / involving LLMs, or a quite different algorithm. An ‘LLM-plateau-ist’ believes LLMs specifically will plateau in capabilities before reaching TAI levels. The author makes several points:
Transcript and Brief Response to Twitter Conversation between Yann LeCunn and Eliezer Yudkowsky
by Zvi
Transcript of a Twitter conversation between Yann LeCun (Chief AI Scientist at Meta) and Eliezer Yudkowsky. LeCun shares his proposal for making AIs more steerable by optimizing objectives at run time, rejects the idea that matching objectives with human values is particularly difficult, and argues Yudkowsky should stop scaremongering. Yudkowsky counters that inner alignment is difficult and that LeCun is ignoring a real risk.
My views on “doom”
by paulfchristiano
The author puts the chance of humanity irreversibly messing up our future within 10 years of building powerful AI at a total of 46%, split into:
Other Existential Risks (eg. Bio, Nuclear)
Genetic Sequencing of Wastewater: Prevalence to Relative Abundance
by Jeff Kaufman
Identifying future pandemics via sequencing wastewater is difficult because sequencing reads are several steps removed from infection rates. The author and several others at the Nucleic Acid Observatory are working through a plan to understand how relative abundance (the fraction of sequencing reads matching an organism) varies with prevalence (the fraction of people currently infected) and organism (eg. when sampling wastewater you'd expect disproportionately more gastrointestinal than blood pathogens). They've gathered some initial data from papers that deposited it in the Sequence Read Archive and begun cleaning it - they welcome others letting them know if anything looks off in this data.
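As a toy illustration of how these quantities relate (a minimal sketch under invented numbers, not the Nucleic Acid Observatory's model), the simplest starting hypothesis is that relative abundance scales with prevalence by an organism-specific factor:

```python
# Illustrative sketch: assume relative abundance is proportional to
# prevalence, with a per-organism factor. All numbers are made up -
# characterizing these factors empirically is the point of the project.
from dataclasses import dataclass

@dataclass
class Organism:
    name: str
    # Hypothetical reads-per-prevalence factor: gastrointestinal
    # pathogens shed heavily into wastewater, blood-borne ones far less.
    abundance_factor: float

def expected_relative_abundance(org: Organism, prevalence: float) -> float:
    """Fraction of sequencing reads expected to match `org` when a
    `prevalence` fraction of people are infected, assuming linearity."""
    return org.abundance_factor * prevalence

norovirus = Organism("norovirus", abundance_factor=1e-3)  # GI: high shedding
hiv = Organism("HIV", abundance_factor=1e-8)              # blood: low shedding

for org in (norovirus, hiv):
    ra = expected_relative_abundance(org, prevalence=0.01)  # 1% infected
    print(f"{org.name}: expected relative abundance ≈ {ra:.2e}")
```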
Report: Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS)
by JorgeTorresC, Jaime Sevilla, Mónica Ulloa, Daniela Tiznado, Roberto Tinoco, JuanGarcia, Morgan_Rivers, Denkenberger, Guillem Bas
Linkpost for this report. According to Xia et al. (2022) ~75% of the world's population could starve to death in a severe nuclear winter. Argentina has better conditions to survive this scenario than most countries, and is one of the world’s leading producers and exporters of food. Because of this, the authors have put together a strategic proposal recommending initiatives and priority actions for the Argentinian government to consider, including:
Animal Welfare
Developing Farmed Animal Welfare in China - Engaging Stakeholders in Research for Improved Effectiveness
by Jack_S, jahying
Asia holds >40% of farmed land animals and >85% of farmed fish, the majority in China. However, Asian advocates receive only an estimated ~7% of global animal advocacy funding. Good Growth describes two stakeholder-engaged studies they conducted to better understand animal advocates and consumers in China.
Key findings about the animal welfare community:
Key findings about attitudes of the public toward animal welfare:
Key findings about attitudes of the public toward alternative proteins:
These findings got a positive reception from both Chinese and international advocacy organisations. The authors suggest similar stakeholder-engaged and qualitative methods (see post for details on methodologies used) are under-utilized in EA. They’re happy to chat at info@goodgrowth.io with those interested in exploring this.
Global Health and Development
Better weather forecasting: Agricultural and non-agricultural benefits in low- and lower-middle-income countries
by Rethink Priorities, Aisling Leow, jenny_kudymowa, bruce, Tom Hird, JamesHu
Shallow investigation into whether improving weather forecasting could benefit agriculture in low- and lower-middle-income countries. These countries often rely on global numerical weather predictions, which aren't of great quality there. The authors estimate that adding observation stations would not cross Open Philanthropy's cost-effectiveness bar (16x - 162x vs. a bar of 1000x). However, they suggest other interventions could be worthwhile, like identifying where global numerical weather predictions are already performing well (they work better in some areas than others) or extending access to S2S (subseasonal-to-seasonal) databases.
World Malaria Day: Reflecting on Past Victories and Envisioning a Malaria-Free Future
by 2ndRichter, GraceAdams, Giving What We Can
Giving What We Can is running a fundraiser for World Malaria Day, and gives an overview of efforts to date to prevent the disease.
In 2021, over 600,000 people died of malaria. It costs ~$5,000 to save one of these lives via bednets or seasonal medicine. Using data from openbook.fyi, the authors estimate that donations from EAs have saved >25,000 lives from malaria. Some EAs have also actively developed new interventions / implementations (eg. ZZapp Malaria).
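As a rough back-of-the-envelope implication of those two estimates (our arithmetic, not a figure from the post):

$$25{,}000 \text{ lives} \times \$5{,}000\text{/life} \approx \$125 \text{ million}$$

in donations to these interventions, assuming the ~$5,000-per-life figure applies across all of them.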
They also note that almost half of the world's countries have eliminated malaria via public health efforts since 1945, with malaria eliminated from Europe in 1970. 95% of malaria cases now occur in Africa. Recent advances in vaccines and gene drives provide hope for eliminating malaria in the countries still affected.
Rationality, Productivity & Life Advice
What are work practices that you’ve adopted that you now think are underrated?
by Lizka
Top comments include:
No, *You* Need to Write Clearer
by NicholasKross
Suggests the AI alignment and safety community needs to write exceptionally clearly and specifically, spelling out full reasoning and linking pages that explain baseline assumptions as needed. This is because the field is pre-paradigmatic, so little can be assumed and there are no ‘field basics’ to fall back on.
Community & Media
Current plans as the incoming director of the Global Priorities Institute
by Eva
Eva Vivalt is Assistant Professor in the Department of Economics at the University of Toronto, and the new Executive Director of the Global Priorities Institute (GPI). Their current views on what GPI should do more of are:
Suggest candidates for CEA's next Executive Director
by MaxDalton, Michelle_Hutchinson, ClaireZabel
The Centre for Effective Altruism (CEA) is searching for a new Executive Director. You can suggest candidates by May 3rd and/or provide feedback on CEA’s vision and hiring process via this form.
They are open to and enthusiastic about candidates who want to make significant changes (eg. shutting down or spinning off programs, or focusing on specific cause areas vs. promoting general principles) - though this isn't a requirement. It's also not a requirement that candidates have experience working in EA, be unalloyed fans of EA, or live in Oxford. The post also lays out the hiring process, which includes input from an advisor outside of EA.
Seeking expertise to improve EA organizations
by Julia_Wise, Ozzie Gooen
A task force - including the authors and other members of the EA ecosystem still to be determined - is being created to sort through reforms that EA organizations might enact and to recommend the most promising ideas. As part of the process, the authors are keen to gather ideas and best practices from people who know a lot about areas outside EA (eg. whistleblowing, nonprofit boards, COI policies, or the organization and management of sizeable communities). You can recommend yourself or others here.
Life in a Day: The film that opened my heart to effective altruism
by Aaron Gertler
Life in a Day is a 90-minute film showing what different people around the world are doing over the course of a single day. It shows that in many ways we are all the same, and it builds empathy. The author thinks that without watching it, they may not have had the “yes, this is obviously right” experience when hearing about a philosophy dedicated to helping people as much as possible.
Two things that I think could make the community better
by Kaleem
1. CEA’s name should change, because it leads to misunderstandings about what they do and are responsible for. Eg. see these two quotes by executive directors of CEA, which contrast with some community members' perceptions:
In the comments, Ben West (CEA Interim Managing Director) mentions renaming CEA would be a decision for a permanent Executive Director, so won’t happen in the short term.
2. The ‘community health team’ is part of CEA, which might reduce the community’s trust in it. Separating it out would allow it to build an impartial reputation, and reduce worries of:
In the comments, Chana Messinger (interim head of Community Health) mentions they’ve been independently thinking about whether to spin out or be more independent, and gives considerations for and against.
David Edmonds's biography of Parfit is out
by Pablo
A biography of philosopher Derek Parfit has now been published, including coverage of his involvement with effective altruism.
Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)
by Chris Scammell, DivineMango
A collection of resources on staying mentally well as transformative AI approaches. Includes:
Story of a career/mental health failure
by zekesherman
The author shares their personal career story: attempting to switch paths from finance (earning to give) into computer science in order to maximize impact, despite poor personal fit for the latter. This resulted in years of unemployment and poor mental health, which they regret. They also suggest some actions the community could take to reduce these risks, eg. being more proactive about watching out for and checking in on other members of the EA community.