jungofthewon

coo @ ought.org. by default please assume i am uncertain about everything i say unless i say otherwise :)


Sure! Prior to this survey I would have thought:

  1. Fewer NLP researchers would have taken AGI seriously, identified understanding its risks as a significant priority, and considered those risks potentially catastrophic. 
    1. I found it particularly interesting that underrepresented researcher groups were more concerned (though that's less surprising in hindsight, especially given the diversity of interpretations of catastrophe). I wonder how well the alignment community is doing with outreach to those groups. 
  2. There were more scaling maximalists (a prediction the survey respondents also made)

I was also encouraged that the majority of people thought the majority of research is crap.

...Though I'm not sure how that math works out exactly. Unless people are aware that they themselves publish crap :P

This was really interesting, thanks for running and sharing! Overall this was a positive update for me. 

> Results are here

I think this just links to PhilPapers, not your survey results? 

> and Ought either builds AGI or strongly influences the organization that builds AGI.

"Strongly influences the organization that builds AGI" applies to all alignment research initiatives, right? Alignment researchers at e.g. DeepMind have less of an uphill battle, but they still have to convince the rest of DeepMind to adopt their work. 

I also appreciated reading this.

I found this post beautiful and somber in a sacred way.  Thank you.

This was really helpful and fun to read. I'm sure it was nontrivial to get to this level of articulation and clarity. Thanks for taking the time to package it for everyone else to benefit from. 

If anyone has questions for Ought specifically, we're happy to answer them as part of our AMA on Tuesday.

I think we could play an endless and uninteresting game of "find a real-world example for / against factorization."

To me, the more interesting discussion is around building better systems for updating on alignment research progress:

  1. What would it look like for this research community to effectively update on results and progress? 
  2. What can we borrow from other academic disciplines? E.g. what would "preregistration" look like? 
  3. What are the ways more structure and standardization would be limiting / taking us further from truth? 
  4. What does the "institutional memory" system look like? 
  5. How do we coordinate the work of different alignment researchers and groups to maximize information value?

Thanks for that pointer. It's always helpful to have analogies in other domains to take inspiration from.
