Comments

I feel pretty sympathetic to the desire not to do things by text; I suspect you get much more practiced and checked-over answers that way.

which privacy skills you are able to execute.


This link goes to a private Google Doc, just FYI.

This is great!

I really like this about slack:

  • If you aren’t maintaining this, err on the side of cultivating this rather than doing high-risk / high-reward investments that might leave you emotionally or financially screwed.
    • (or, if you do those things, be aware I may not help you if it fails. I am much more excited about helping people that don’t go out of their way to create crises)


Seems like a good norm and piece of advice.

I'm confused about how much I should care about who commissions an impact assessment. The main thing I generally look for is whether the assessment / investigation is independent. Is the argument that, because AISC is paying for it, the assessors will be influenced?

I have not read most of what there is to read here; I'm just jumping in on "illegal drugs" → ADHD meds. Chloe's comment spoke to weed as the illegal drug on her mind.

"AI has immense potential, but also immense risks. AI might be misused by China, or get out of control. We should balance the needs for innovation and safety." I wouldn't call this lying (though I agree it can have misleading effects; see Issue 1).


Not sure where this slots in, but there's also a sense in which this has a missing positive mood about how unbelievably good (aligned) AI could or will be, and how much we're losing by not having it earlier.

Interesting how many of these are "democracy / citizenry-involvement"-oriented. Strongly agree with 18 (whistleblower protection) and 38 (simulate cyber attacks).

20 (good internal culture), 27 (technical AI people on boards), and 29 (three lines of defense) sound good to me; I'm excited about 31 if mandatory interpretability standards exist.

42 (on sentience) seems pretty important, but I don't know what it would mean.

The top 6 in the paper (the ones I think got >90% "somewhat or strongly agree", listed below) seem pretty similar to me. Are there important reasons people might support one over another?

  • Pre-deployment risk assessments
  • Evaluations of dangerous capabilities
  • Third-party model audits
  • Red teaming
  • Pre-training risk assessments
  • Pausing training of dangerous models

Curious if you have any updates!
