PhD in Technical AI Safety / Alignment at Dalhousie University.
Thanks for reaching out; this is all great feedback that we will definitely address. I will DM you about the vaccine implementation, as we are currently working on this as well, and to see what would be useful for code sharing, since we are a wee bit away from having a shareable replication of the whole paper.
Some answers:
I’ll follow up privately, but feel free to respond here as well for additional clarification. Your comment is much appreciated.
Thanks for pointing this out - I think it's a critical point.
I'm not imagining anything in particular (and yes, in that paper we very much do "baby"-sized attacks).
Generally, yes, this is a problem we need to work out: what is the relationship between a defence's strength and the budget an attacker would need to overcome it, and, for large groups that have the budget to train from scratch, would defences like this even make an impact?
I think you're right that large-budget groups who can just train from scratch would not be impacted by defences of this nature. While that seems to really poo-poo this whole endeavour, I think it's still promising and valuable, as you point out, to prevent this from happening at all budgets that preclude training from scratch.
A scenario that may help with thinking about why this endeavour is still valuable:
Maybe in the future we are talking about billion- or trillion-dollar models, where compute governance allows us to trace whether training from scratch is occurring somewhere, in which case defences that push attacker spend toward that limit are indeed quite valuable.
I think people underestimate the formal study of research methods, like reading texts or taking a course on research methodology, for improving their research abilities.
There are many concepts within control, experimental design, validity, and reliability, like construct validity or conclusion validity, that you would learn from a research methods textbook and that are super helpful for improving the quality of research. I think many researchers implicitly learn these things without ever knowing exactly what they are, but usually through trial and error (peer-review rejections and embarrassment), which could be avoided by looking into research methods texts.
Of course, this doesn't help with the discovery aspect of research, which I think your article outlines well, but at some point research questions need to be investigated, and understanding research design makes it really obvious what kinds of work you need to do in order to have a high-quality investigation.
Thanks for the pointer! Yes, RL has a lot of research of this kind; as an empirical researcher I just sometimes get stuck in translation.
For my own clarity: What is the difference between mathematical approaches to alignment and other technical approaches like mechanistic interpretability work?
I imagine the focus is on in-principle arguments or proofs regarding the capabilities of a given system rather than on empirical or behavioural analysis, but you mention RL, so I just wanted to get some colour on this.
Any clarification here would be helpful!
If someone did this, it would be nice to collect preference data over answers that are helpful to alignment and not helpful to alignment… that could be a dataset that is interesting for a variety of reasons, like analyzing current models' abilities to help with alignment, gaps in being helpful w.r.t. alignment, and of course providing a mechanism for making models better at alignment… a model like this could also maybe work as a specialized type of Constitutional AI, collecting feedback from model preferences that are more “alignment-aware” so to speak… none of this, of course, is a solution to alignment, as the OP points out, but interesting nonetheless.
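As a rough sketch of what one record in such a dataset could look like (all field names and example text below are hypothetical, just to make the idea concrete):

```python
# Hypothetical schema for one "alignment-helpfulness" preference record:
# a prompt, two candidate answers, and a label for which answer is more
# helpful for alignment work. Everything here is illustrative.
from dataclasses import dataclass

@dataclass
class AlignmentPreferenceRecord:
    prompt: str            # e.g. an alignment research question
    answer_a: str          # candidate answer A
    answer_b: str          # candidate answer B
    preferred: str         # "a" or "b": which answer is more alignment-helpful
    rationale: str = ""    # optional free-text justification from the annotator

record = AlignmentPreferenceRecord(
    prompt="How could we detect deceptive behaviour in a language model?",
    answer_a="You can't; models are black boxes.",
    answer_b="One starting point is probing internal activations for "
             "inconsistencies between stated beliefs and behaviour.",
    preferred="b",
    rationale="Answer B engages with concrete research directions.",
)
```

The same records could then feed either analysis of current models' alignment-helpfulness or reward-model training.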
I’d be interested in participating in this project if other folks set something up…
I'm struggling to understand how this is different from “we will build aligned AI to align AI”. Specifically: Can someone explain to me how human-like AI and AGI are different? Can someone explain to me why human-like AI avoids typical x-risk scenarios (given that those human-like systems could, say, clone themselves, speed themselves up, rewrite their own software, and easily become unbounded)? Why isn't an emulated cognitive system a real cognitive system… I don't understand how you can emulate a human-like intelligence and it not be the same as fully human-like.
Currently, my reading of this is: we will build human-like AI because humans are bounded, so it will be too, and those bounds are (1) sufficient to prevent x-risk and (2) helpful for (and maybe even the reason for) alignment. Isn't a big, wide-open, unsolved part of the alignment problem “how do we keep intelligent systems bounded”? What am I missing here?
I guess one supplementary question as well is: how is this different from normal NLP capabilities research, which is fundamentally about developing and understanding the limits of human-like intelligence? Most folks in the field, say those who publish at ACL conferences, would explicitly think of this as what they are doing, and not as trying to build anything more capable than humans.
What are your thoughts on prompt tuning as a mechanism for discovering optimal simulation strategies?
I know you mention condition generation as something to touch on in future posts, but I'd be eager to hear where you think prompt tuning comes in, considering continuous prompts are differentiable and so can be learned/optimized for specific simulation behaviour.
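For concreteness, here is a minimal sketch of the kind of prompt tuning I mean (in the style of Lester et al., 2021): the base model stays frozen and only a small matrix of continuous prompt embeddings is optimized to raise the likelihood of some target behaviour. The model name, target text, and hyperparameters are placeholders, not anything from your post.

```python
# Minimal prompt-tuning sketch: freeze the LM, learn a soft prompt that is
# prepended to the input embeddings and optimized toward a toy "simulation
# target" text. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM with input embeddings works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)  # freeze the base model

n_prompt_tokens = 20
embed_dim = model.get_input_embeddings().embedding_dim
# Learnable continuous prompt, randomly initialized.
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

# Toy target: text whose likelihood we want the prompted model to increase.
target = tokenizer("The assistant stays in character as a careful scientist.",
                   return_tensors="pt")
target_ids = target["input_ids"]
target_embeds = model.get_input_embeddings()(target_ids)

for step in range(100):
    # Prepend the soft prompt to the target's token embeddings.
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), target_embeds], dim=1)
    # Ignore the loss on the soft-prompt positions (-100 is the ignore index).
    labels = torch.cat(
        [torch.full((1, n_prompt_tokens), -100), target_ids], dim=1)
    loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

My question is basically whether optimizing in this continuous space is a reasonable way to search for simulation strategies that discrete prompts can't easily express.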
Hey there,
I was just wondering how you deal with the hallucination and faithfulness issues of large language models from a technical perspective? The user-experience perspective seems clear - you can give users control and consent over what Elicit is suggesting, and so on.
However, we know LLMs are prone to issues of faithfulness and factuality (Pagnoni et al., 2021, as one example for abstractive summarization), and this seems like it would be a big issue for research, where factual correctness is very important. In a biomedical scenario, if a user of Elicit gets an output that presents a wrongly extracted figure (say, pulled from a preceding sentence, or hallucinated as the highest-log-likelihood token based on previous documents), this could potentially have very dangerous consequences. I'd love to know more about how you'd address that.
My current thinking on the matter is that in order to address these safety issues in NLP for science, we may need models that "self-criticize" their outputs, so to speak, i.e. provide counterfactual outputs that could be checked, or something like this. Especially since GopherCite (Menick et al., 2022) and similar self-supporting models seem to show that self-support is also prone to issues and doesn't totally address factuality (in their case as measured on TruthfulQA), not to mention self-explaining approaches, which I believe suffer from the same issues (i.e. hallucinating an incorrect explanation).
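As a toy illustration of the cheapest kind of check I have in mind (not anything Elicit actually does, as far as I know): flag any figure in a generated summary that never appears in the cited source passage. This only catches the crudest hallucinated or mis-copied numbers, but it is the flavour of "output that could be checked" I mean.

```python
# Naive faithfulness check: report numbers in the summary that never occur
# in the source passage. Illustrative baseline only.
import re

def unsupported_numbers(summary: str, source: str) -> list[str]:
    """Return numbers in `summary` that never occur in `source`."""
    number_pattern = r"\d+(?:\.\d+)?%?"
    source_numbers = set(re.findall(number_pattern, source))
    return [n for n in re.findall(number_pattern, summary)
            if n not in source_numbers]

source = "The vaccine showed 67.3% efficacy in the trial of 4,200 participants."
summary = "Efficacy was 76.3% across 4,200 participants."
print(unsupported_numbers(summary, source))  # ['76.3%'] -- the flipped figure
```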
Menick, J., Trebacz, M., Mikulik, V., Aslanides, J., Song, F., Chadwick, M., ... & McAleese, N. (2022). Teaching language models to support answers with verified quotes. arXiv preprint arXiv:2203.11147.
Pagnoni, A., Balachandran, V., & Tsvetkov, Y. (2021). Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. arXiv preprint arXiv:2104.13346.
Thanks for this great work - I certainly agree with you on the limits of unlearning as it’s currently conceived for safety. I do wonder if a paradigm of “preventing learning” is a way to get around these limits.