Yeah, this is probably true. In training, I aimed to allow the model to offer slightly broader alternatives, but not more specific ones, which is what this one is.
Since all groupthink is a form of consensus, under the rules I've been following it would be acceptable for it to highlight "groupthink" and provide "consensus" as an alternative, but not the other way around.
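That one-directional rule can be sketched roughly like this. To be clear, the mapping and function names below are hypothetical illustrations, not anything from my actual model:

```python
# Map each emotionally loaded term to a broader, more neutral parent term.
# (Illustrative entries only.)
BROADER_THAN = {
    "groupthink": "consensus",  # all groupthink is a form of consensus
    "regime": "government",
}

def allowed_alternative(term: str, alternative: str) -> bool:
    """Allow only substitutions that move from specific to broad,
    never from broad to specific."""
    return BROADER_THAN.get(term) == alternative

print(allowed_alternative("groupthink", "consensus"))  # True: broadening is acceptable
print(allowed_alternative("consensus", "groupthink"))  # False: narrowing is not
```

The asymmetry matters because a broader alternative is always still true of the thing described, while a narrower one can smuggle in claims the original text never made.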
I'd be interested to see the results you got with Gemini. The 2k character limit isn't a hard limit for my model; it's just a cap I set to avoid copyright issues and excessive costs for this proof of concept.
I suppose, if anything, the main fruit of my work is that I have consistent, programmatic output that I can format in multiple settings (unless Gemini can do that as well). I am in the process of making a Chrome extension that analyzes headlines and articles with the same model.
It is true that, over the long process of finetuning this model, AI technology has advanced a long way since I began. I'm not opposed to using alternative methods.
I know it's been a long time since anyone posted here, but I just stumbled across this post. For the past year and a half, I have been training a finetuned ChatGPT model to identify and reverse examples of Russell conjugation in a given text. There are still improvements that need to be made, so I'll try to include some of these examples in the training set for the new model.
I released the tool last week, and anyone can try it out here: https://russellconjugations.com/
Seeing how competent these newer models are was quite helpful, and I have started using ChatGPT o4-mini-high to generate training sets for a new finetuned model. It does quite well, and once I get the hang of this, I should be able to distill the results and improve the performance of my model much more quickly. Wish me luck!
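For anyone curious what that distillation step looks like in practice, OpenAI's fine-tuning API accepts training data as JSONL, one chat-format example per line. The system prompt and the conjugation pair below are made up for illustration; they are not from my actual training set:

```python
import json

# One illustrative fine-tuning example in OpenAI's chat-format JSONL.
# A real training file contains one JSON object like this per line,
# with the "assistant" message being the output distilled from the larger model.
example = {
    "messages": [
        {"role": "system",
         "content": "Identify Russell conjugations and suggest neutral rewrites."},
        {"role": "user",
         "content": "The senator was pig-headed about the bill."},
        {"role": "assistant",
         "content": 'pig-headed -> firm ("I am firm, you are obstinate, he is pig-headed")'},
    ]
}

line = json.dumps(example)          # one line of the .jsonl training file
print(json.loads(line)["messages"][2]["role"])  # assistant
```

Generating these examples with a stronger model and then fine-tuning a smaller one on them is exactly the distillation loop described above.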