As plex said, getting GPT or the like to simulate current top researchers, so you can use it as a research assistant, would be hugely beneficial given how talent-constrained we are. Getting more direct data on the actual process of coming up with AI alignment ideas seems robustly good, and I'm currently working on this.
Can you expand on which readings you think are dumb and wrong?