COO @ ought.org. By default, please assume I am uncertain about everything I say unless I say otherwise :)
All good, thanks for clarifying.
This was really interesting, thanks for running and sharing! Overall this was a positive update for me.
Results are here
I think this just links to PhilPapers, not your survey results?
and Ought either builds AGI or strongly influences the organization that builds AGI.
"strongly influences the organization that builds AGI" applies to all alignment research initiatives right? Alignment researchers at e.g. DeepMind have less of an uphill battle but they still have to convince the rest of DeepMind to adopt their work.
I also appreciated reading this.
I found this post beautiful and somber in a sacred way. Thank you.
This was really helpful and fun to read. I'm sure it was nontrivial to get to this level of articulation and clarity. Thanks for taking the time to package it so everyone else can benefit.
If anyone has questions for Ought specifically, we're happy to answer them as part of our AMA on Tuesday.
I think we could play an endless and uninteresting game of "find a real-world example for / against factorization."
To me, the more interesting discussion is around building better systems for updating on alignment research progress -
Thanks for that pointer. It's always helpful to have analogies in other domains to take inspiration from.
Sure! Prior to this survey I would have thought:
I was also encouraged that the majority of people thought the majority of research is crap.
...Though I'm not sure how that math works out exactly. Unless people are self-aware that they're publishing crap :P