LLaMA was leaked "by design". Given the way it was distributed, it would have been impossible for it not to leak.
And yes, if a really powerful AI were to leak, someone would make a Skynet with it within an hour just to see if they can. So I would prefer that someone, anyone, control it alone.
Agreed. I got the weights very quickly after filling out the form, even though I simply wrote "None" in the (required!) "Previous related publications" field. (It still felt great to get it, so thx Meta!)
Okay, so you believe this was a marketing stunt?
OPT-175B still has not leaked, despite also being "accessible-by-request" only.
I'm unsure whether it's a good thing that LLaMA exists in the first place, but given that it does, it's probably better that it leak than that it remain private.
What are the possible bad consequences of inventing LLaMA-level LLMs? I can think of three. However, #1 and #2 are of a peculiar kind where the downsides are actually mitigated rather than worsened by greater proliferation. I don't think #3 is a big concern at the moment, but this may change as LLM capabilities improve (and please correct me if I'm wrong in my impression of current capabilities).
It was reported that Meta's LLaMA models were leaked; someone even opened a PR adding the magnet link to their official repository.
Now the public has access to a model that is apparently as powerful as, or even more powerful than, GPT-3 on most benchmarks.
Is this a good or a bad event for humanity?
Are powerful models better kept behind closed doors, used only by the corporations that produced them, or does public access even out the playing field, despite the potential misuse by bad actors?
Should this continue to happen? Should the AI field have its own Snowdens, blowing the whistle when they notice something that it is in the public interest to know?
What if they work at Large Corporation X and believe the first AGI has been invented? Is it better for humanity that the AGI be used solely by that CEO (or the board of directors, or the ultra-rich able to pay billions of dollars to that AI company for exclusive use) for the next five years, amassing as much power as possible until they monopolize not just the industry but potentially the whole world? Or is leaking the AGI weights to the public the lesser of two evils, and in fact a moral responsibility, so that the whole of humanity is upgraded to AGI capabilities instead of one person or a small group?
Let's discuss.