tl;dr I want to join you! I've been spending pretty much all of my free time thinking about, or playing with, the OpenAI API and the available chat and image generation models. I'm not an ML expert; I'm a front-end web developer, and my degree is in neuroscience.

Like many others, I'm currently fascinated by how effectively these models expose cultural bias. I've been somewhat alarmed by the ethical top layer that OpenAI and Anthropic have so far placed on their models to steer them toward less problematic conversations, partly because it feels like, in their current form, these layers might do more harm than good: they seem like surface-level alterations, since the underlying biases still determine the nuanced content of responses. The superficial moralizing seems to obfuscate the underlying data rather than... idk, highlight it helpfully? I want to contribute to alignment research!