There is all sorts of false science in the wild, such as "the earth is flat" and "the pandemic is a government-driven lie," extending its influence even as the rationalist community struggles to promote the rational thinking paradigm, which I believe is a strong prerequisite for the Alignment question to be solved.
So the question is:
- How to define "absolute truth" under the rational thinking paradigm?
- How to forcefully educate the general public about this "absolute truth", so that a common consensus can be presumed before any meaningful discussion can take place?
Edit: I found "forcefully" particularly not suiting under the context, but I can't bring a more proper expression at this time. This question is mainly on the first point, not by educating/convincing but by estimating a pre-aligned value.
Second Edit: This question might not be suitable, because it is a long-standing question merged with a manipulative intention I did not intend to express. I've changed the title to a more suitable one. Thanks for replying.
*Background: I've recently been trying to educate some fellow artists with no ML background about generative ML models, with little luck.*
I'd enjoy (perhaps; it'll depend on what you actually mean) more exploration of the specifics of your shared truth-seeking with your fellow artists about generative ML models. I don't think it makes for very good general discussion until you have some successes in smaller, more direct interactions.
I am a bit concerned that you frame this as how to "educate" the public or your fellow artists, rather than in terms of any cooperative or agency-respecting mechanism.
"educate" is used here because I found these kinds of discussions would not be easily conducted if the AI part were introduced before any actual progress can be made. Or, to frame it that way, my fellow tends to panic if they are introduced to generative ML models and related artwork advancements, and refuse to listen to any technical explanation that might make them more understood. There was no manipulative intention, and I'm willing to change my interaction method if the current one seems manipulative.