I no longer keep this up to date. You can find my bio on LinkedIn.
https://clay.earth looks interesting! Are you still using it now (7 months later)? Would you still recommend it?
FYI: The link in the first line didn't work for me ("Invalid URL: https://ai-plans.com"). This link works: https://www.ai-plans.com/
Using ChatGPT etc. gives people such an advantage in (some) jobs, and is so easy to use "secretly", that it seems highly unlikely that a significant number of people would boycott it.
My guess is that at most 1-10% of a population would actually adhere to a boycott, and those who do would be in a much worse position to work on AI safety and other important matters.
What about democratically elected non-profit boards?
Most national EA organisations with paid staff (like EA France, EA Norway or EA Germany, just to mention a few) are registered associations whose boards are (re-)elected by their members every 1-2 years. That way, board members can be removed by the association members they represent.
I don't think this is perfect: the average member often does not have enough information to judge a board member's performance, and elections have their own downsides (like sometimes favoring popular and charismatic candidates over the best candidates for the job). But at least for national EA orgs it does seem like the best option to me (medium confidence).
This seems a lot more common in mainland Europe than in the UK or the US. Is this something we should explore more for other nonprofits as well? What other nonprofits have clearly defined members (e.g. beneficiaries, stakeholders, ...) who could elect a board?
In case the organiser does not update this anymore: we'll now meet on Saturday (15 January) from 1pm onwards at Baobab in Santa Cruz, Tenerife. So far seven LWers have indicated interest. Feel free to join! :)
Address: C. Antonio Domínguez Alfonso, 30, 38003 Santa Cruz de Tenerife, Spain
Google Maps Link: https://g.page/baobab-santa-cruz?share
Thanks for writing this up!
This seems really useful for aspiring rationality organisers, will forward it to those I meet.
Thanks a lot for compiling this! This is useful - I'll forward it to some friends who are looking into ML programs right now.
I think a lot of EA community members would be interested in this as well, but many of them may not be active on LessWrong. Maybe it's worth reposting this on the EA Forum? Just this post with links to the LW posts should be enough - I don't think you need to repost everything on the EA Forum. In case you think it's useful but can't repost it yourself for some reason, I can also do it, just let me know (though I think it's better if you repost it).
I've been following Sam Altman's messaging for a while, and it feels like Altman does not have one consistent set of beliefs (like an ethics/safety researcher would) but tends to say different things at different times and in different places, depending on what seems most useful for achieving his goals at the moment. Many CEOs do that, but he seems to do it more than other OpenAI staff or executives at Anthropic or DeepMind. I agree with your conclusion: pay less attention to their messaging and more to their actions.