> ~All ML researchers and academics who care have already made up their minds regarding whether they prefer to believe in misalignment risks or not. Additional scary papers and demos aren't going to make anyone budge.
I think this mostly shows that the approach used so far has been ineffective. I don't think it's evidence that academics are incapable of changing their minds. Papers and demos seem like the intuitive way to persuade academics, but if that were sufficient, how could they ever have come to the conclusion that AI is safe by default, a position that isn't supported by evidence either?

I think the most useful approach right now would be to find out why some researchers are so unconcerned with safety. When you understand why someone believes what they do, it is much easier to change their mind.