A small group of AGI existential safety field-builders and I are starting research to explore a potential initiative to inform the public and/or important stakeholders about the risks of misaligned AI and the difficulties of aligning it.
We are aware that a public communication initiative like this carries risks (including harming the AGI x-safety community’s reputation, sparking animosity and misunderstandings between communities, or drawing attention to ways to misuse or irresponsibly develop scalable ML architectures). We are still evaluating whether/how this initiative would be good to pursue.
We are posting this on the forum to avoid the scenario where someone else starts a similar project at the same time and we end up duplicating work.
How you can get involved:
If you are currently undertaking work similar to this or are interested in doing so, message me your email address along with a bit of context about yourself/what you are doing.
We are drafting a longer post to share our current considerations and open questions. Message me if you would like to review the draft.
We are looking for one or two individuals who are excited to facilitate a research space for visiting researchers. The space will run in Oxford (one week in Sep ’22) and in Prague (9-16 Oct ’22), with accommodation and meals provided. As facilitator, you would receive a gross monthly income of $2-3K for 3 months and get to spend most of that time on your own research in this area (finding ways to clarify unresolved risks of transformative AI to/with other stakeholders). If you are interested, please message me with a brief description of your research background (as relevant to testing approaches for effective intergroup communication, conflict resolution, and/or consensus-building).