Why we're not founding a human-data-for-alignment org
TL;DR: We (two recent graduates) spent about half of the summer exploring the idea of starting an organisation that produces custom human-generated datasets for AI alignment research. Most of our time went into customer interviews with alignment researchers to determine whether they have a pressing need for such...
A relevant classic paper from Steven Levitt. Abstract [emphasis mine]: