Shivam
  • PhD in Geometric Group Theory >> Postdoc in Machine Learning >> Independent AI safety and AI alignment research.
  • Looking for mentors and paid positions that would enable me to continue working in AI safety.
  • Please feel free to contact me at shivamaroramath@gmail.com

An important line of work in AI safety would be to prove the equivalence of various capability benchmarks to risk benchmarks, so that when AI labs show their model crossing a capability benchmark, it is automatically shown to be crossing an AI safety level. Then we wouldn't have two separate reports from them: one saying that the model is a PhD-level scientist, and the other saying that studies show the CBRN risk posed by the model is no greater than that of an internet search.