There are very few groups of people I would trust to correctly choose which "absolute truths" warrant forcibly shoving down everyone's throat, and I would expect none of them to remain in power for long (if they gained it in the first place). Therefore, any mechanism for implementing this can be expected, sooner or later, to be used to shove falsehoods, propaganda, and other horrible things down everyone's throat, and is a bad idea.
Maybe people with ideas like yours need to be forcefully educated about the above truth. :-P
Maybe you meant "forcefully" less literally than I interpreted it.
Yes, that's also why I avoided bringing this question up for quite a long time: "forcefully" sounds too close to what a propaganda machine would do.
What I mean in this particular context is some objective truth, like physical law, that should be a matter of consensus. Criticism from my peers suggests that physical law itself may not be an absolute objective truth, so I'm genuinely curious from that standpoint: if there's no common ground one can reach, how would any discussion prove useful?
People generally reject the teachings of mainstream culture because they belong to a subculture that tells them to. But rationalism is itself a culture with some contrarian beliefs, so why would rationalists know how to fix the problem?
Yes, I've done some reflection and recognized that my question is really "how to spread the rationalist thinking paradigm", a long-standing problem the rationalist community hasn't solved. My post was just another, less well-posed way of asking it.
False premise. There is no “absolute truth”. I don’t want to come across as condescending, but please have a look at any somewhat recent science textbook if you doubt this claim.
I would suggest reframing to: how can we establish common ground that a) all/most people can agree on and b) facilitates productive inquiry.
And that raises a question: if there's no "absolute truth", then how "relative" is the truth that most people agree on (such as 1+1=2 mathematically)?
Sorry if this question seems too naive; I'm at an early stage of exploring philosophy, and views other than objectivity under positivism don't seem convincing to me yet.
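One way to make the "relative" part concrete: in a formal system, 1+1=2 is true *relative to* the axioms you start from, not as a free-floating absolute. Here is a minimal sketch in Lean 4 (the theorem name is my own):

```lean
-- In Lean's Peano-style construction of Nat, both sides of
-- 1 + 1 = 2 reduce to the same numeral, so reflexivity proves it.
theorem one_plus_one_eq_two : (1 : Nat) + 1 = 2 := rfl
```

Swap the axioms (e.g., work in arithmetic mod 2, where 1 + 1 = 0) and the "same" sentence comes out differently, which is one precise sense in which even mathematical truth is framework-relative.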
I'd enjoy (perhaps; it'll depend on what you actually mean) more exploration of the specifics of your shared truth-seeking with your fellow artists about generative ML models. I don't think it makes for very good general discussion until you have some successes in smaller, more direct interactions.
I am a bit concerned about your framing of how to "educate" the public or your fellow artists, rather than any sort of cooperative or agency-respecting mechanisms.
"educate" is used here because I found these kinds of discussions would not be easily conducted if the AI part were introduced before any actual progress can be made. Or, to frame it that way, my fellow tends to panic if they are introduced to generative ML models and related artwork advancements, and refuse to listen to any technical explanation that might make them more understood. There was no manipulative intention, and I'm willing to change my interaction method if the current one seems manipulative.
All sorts of false science actually circulate in the wild, such as "the earth is flat" and "the pandemic is a government-driven lie", extending their influence even as the rationalist community struggles to promote the rational thinking paradigm, which I believe is a strong prerequisite for solving the Alignment question.
So the question is:
Edit: I find "forcefully" particularly ill-suited to this context, but I can't come up with a better expression at the moment. This question is mainly about the first point: not educating/convincing, but estimating a pre-aligned value.
Second Edit: This question might not be well-posed, because it merges a long-standing question with a manipulative intention I never meant to express. I've changed the title to a more suitable one. Thanks for replying.
*Background: I've recently been struggling to educate some fellow artists with no ML background about generative ML models, but with little luck.*