I'm curious what the LW community as a whole thinks about the work that Hugging Face is doing. Their main MO seems to be taking whatever new breakthrough in AI there is and making it open-source and accessible to the public (to a first-order approximation).
I see a few aspects here:
Ignoring AGI risk, it's better to have open-source models than closed-source APIs. This should be pretty straightforward.
Including AGI risk, an open-source superintelligent AGI is arguably more dangerous than a closed-source one.
At least one person involved in the alignment research space is trying to rebrand Hugging Face as "CreepyFace", presumably because of the "AI Art vs Human Artists" conflict, in which Hugging Face implicitly (?) sides with the former by hosting the relevant datasets and models.
This aspect can be taken as a more general criticism of the open-source approach: if you think it's unethical to train models on data obtained without explicit consent, and that hosting platforms have a responsibility to verify this, then the criticism applies to Hugging Face as well.
If we're concerned about the short-term, "trivial" (i.e. non-existential) risks of AI misuse, then open-sourcing powerful models can put them in the wrong hands (a criticism I saw very often when Stable Diffusion was in the news).