Thanks, Jacob! This is helpful. I've made the relevant changes to my copy of the poster.
Regarding the 'biological anchors' point, I intended the word 'relevant' to capture the notion that it is not just the level/amount of computation that matters. When expanding on that point in conversation, I am careful to point out that generating high levels of computation isn't sufficient for creating human-level intelligence. I agree with what you say. I also think you're right that the term "biological anchors" isn't very meaningful to my audience. Given that, in my experience, many academics read the poster but don't ask questions, it's probably a good idea for me to replace this term with another. Thanks!
Totally! I'll make sure to include such a section next time I present on AI safety or AI governance. After a quick Google search, I found the following link post, which would have been useful prior to the PPE Society poster session: https://forum.effectivealtruism.org/posts/kvkv6779jk6edygug/some-ai-governance-research-ideas
I think that's right, but who I cite in this case matters a lot to whether people take it seriously. This is why I chose not to cite Miles or Yudkowsky, though I'm aware that this is academically bad practice. In hindsight, I could have included a quote from Peter Railton, but it doesn't feel right to do so just for the sake of adding an authority to the list of citations. Thanks!