Jáchym Fibír

Hmm... I see the connection to 1984. But isn't it useful to have the words to spot something that is obviously already happening? (like drug clinical trial data demonstrably being presented, or even altered, with malicious/monetary intent in mind)

Thank you for starting a discussion about this. I have two things to say:

1) In the post above, the adjective "inftoxic" very much refers to false, incorrect information. It also implies that the falseness was intentionally "put into" the data or story with an intent to mislead, cause harm, etc. So the term is in fact different from (and, to me personally, more useful than) the term "malinformation" (which I likewise find quite unhelpful).

2) Regardless of the usefulness of the terminology I used as an example, do you think we could use new words in and around information that would improve how we conduct the debate, in an attempt to be less wrong?

Thank you for such a detailed write-up! I have to admit that I am teetering on the issue of whether or not to ban open-source LLMs, and as a co-founder of an AI-for-drug-design startup I had taken the increased biosecurity risk as probably the single most important consideration. So I think the conversation sparked by your post is quite valuable.

That said, even considering all that you presented, I am still leaning towards banning powerful open-source LLMs, at least until we get much more information and, most importantly, until we establish other safeguards against global pandemics (like "airplane lavatory detectors" etc.).

First, I think there is definitely a big difference between merely having online information about actually making lethal pathogens and having the full assistance of an LLM; the latter helps enormously, especially for people starting from near zero.

Then, if I consider all of my ideas for what kind of damage could be done with the new capabilities, especially by combining LLMs with generative bioinformatics AIs... I think a lot of caution is surely warranted.

Ultimately, if you take the potential benefits of open-source LLMs over fine-tuned LLMs (not crazily significant in my opinion, though of course we don't have that data either) and compare them to the risks posed by essentially removing all of the guardrails and safety measures everyone is working on in AI labs... I think at least waiting some time before open-sourcing is the right call now.