Looks promising to me. Technological development isn't by default good.
Though I agree with the other commenters that this could fail in various ways. For one thing, if a policy like this is introduced without guidance on how to analyze societal implications, people will interpret the requirement in wildly different ways. ML researchers aren't by default going to have the training to analyze societal consequences. (Then again, who does? We should develop better tools here.)
Agreed. I think of this as sending a signal that at least a limited concern for safety is important. I'm sure we'll see a bunch of papers with sections addressing this that won't be great, but over time it stands some chance of normalizing consideration of safety and ethics concerns in ML work, such that safety work becomes more widely accepted as valuable. So even without much guidance or strong evaluative criteria, this seems like a small win to me: at worst, it causes some papers to have extra fluff sections whose authors are pretending to care about safety rather than ignoring it completely.
So is this a good thing or a bad thing? Is wokeness a stepping stone towards some future enlightened morality, one that will help bring AI/ML along to that destination (in which case perhaps we should excuse its current excesses), or will it ultimately collapse while doing a lot of damage in the meantime (like communism)?
NeurIPS (formerly NIPS) is a top conference in machine learning and computational neuroscience. The recently published call for papers for NeurIPS 2020 includes the following (which did not appear in previous years):