Hi Alistair! You might want to look into more strategic ways of planning activism work. It's true that many social movements start becoming visible with protests, but there is a lot of background work involved in a protest, or any activism.
It looks like your goal is to slow down AI development.
First, you'll want a small working group who can help you develop your message, analyses, and tactics. A few of your colleagues who are deeply concerned about AI risk would work. When planning most things, it's helpful to have people who can temper your impulses and give you more ideas.
I see that you want to "Develop clear message, and demands, and best approach to this protest. Clear explanation of ai dangers that anybody can understand." I recommend doing this more than 2 weeks out from launching a campaign, with help from your working group. There are many important talking points you can use around AI risk, but if you just pick one clear phrase for your campaign, it can get more traction.
After you're clear on the one most important message for you to spread right now, you want to know who you need to tell and who can help you tell it. This is the time for a stakeholder analysis. Be clear on:
- Constituencies (who you represent the interests of)
- Allies
- Opponents
- Targets (who can change things)
- Secondary targets (who can influence them)
Then, and only then, think about which tactic will best influence your target toward your goal. A protest might not be the best way to convince them, for many reasons that I'll leave up to people who know more about AI and the stakeholders. Protests are one tactic, but so is a well-planned email campaign to officials (with a template for your participants that has gone through rounds of revision and feedback), a social media campaign with text and images about the risk, or visual/literary art about a hypothetical future where AI development does not slow down.
After you've selected a tactic (and made a plan for the project that has been reviewed by people who are experienced in AI safety), you can organize your community to help you carry out that tactic as massively as you can.
This process from start to finish might take a few months, but it is worth getting it right the first time. Then you will have fewer issues to fix, and you can build on your momentum and scale up. The more people who help you think through how to effectively influence your targets toward your goal, the better. It is best if some of the people you work with are deeply familiar with your target. Networks are everything in organizing. Good luck.
Edit: I figured you might want to know why I'm recommending all these extra steps. It's because you're receiving feedback (from people more involved in the issue than I am) that this could cause harm. You said above that "in my view it will highly likely be better than nothing." It might be worse than nothing. Hence the planning. I understand that you want to act fast because this is an urgent threat. But acting fast and making things worse is worse than planning for a few months and making things much better.