Have fewer (and shorter) meetings. Most things don’t need to be a meeting.
Also, if something does need a meeting, send relevant information (or links to it) to participants beforehand, and write a summary afterward and send it to participants.
Sending information before the meeting means that all participants start on the same page. If you skip this, you may spend half of the meeting explaining things to half of the participants that the other half already knows.
Sending a summary creates a written record that participants can review if they forget something, and you can also send it to people who were not present at the meeting. Also, if there was an illusion of understanding (some participants believe you concluded X, others believe you concluded non-X), the summary makes it possible to notice the misunderstanding and raise an objection. If you skip this, you may soon have another meeting on exactly the same topic, because people forget or get confused.
EDIT:
By the way, I think there is nothing wrong with some people preferring to explain things verbally in a meeting, but the meeting should be clearly marked as such, and a recording should be made if it may also concern other people. For example, instead of holding one meeting on "what is X and how we could use it in our project", split it into two: "what is X" and "brainstorming about how to use X in our project". The first one can be skipped by people who already know what X is, and its invitation should include links to existing resources explaining X. Its result should be at least a video recording, but preferably a written record. The invitation to the second one should include the recording of the first, plus the links to the external resources.
This is great advice. Items 1 and 2 in particular seem foundational for anyone trying to reliably move the needle a notable amount in the right direction.
I spent the first half to three-quarters of 2022 focused on AIS field-building projects. In the last few months, I've been focusing more on understanding AI risk threat models and on strategy/governance research projects.
Before 2022, I was a PhD student researching scalable mental health interventions (see here).
Have you written or published any ML-related papers? Perhaps you are working on that now? Why did you choose to switch from mental health to AI Alignment?
Here are 10 helpful pieces of advice I received in 2022:
Disclaimer: It’s plausible that the people I “credit” would find my summaries inaccurate or no longer endorse the advice.