A "moonshot idea" I saw brought up is getting Yudkowsky's Harry Potter fanfiction translated into Chinese (please never ever do this).
This has already been done; the translation has pretty good reviews and has sparked some discussion.
I've looked through the EA/Rationalist/AI Safety forums in China
If these are public, could you post the links to them?
There is only one group doing technical alignment work in China.
Do you know the name of the group, and what kinds of approaches they are taking toward technical alignment?
Are there any alignment approaches that try to replicate how children end up loving their parents (or vice versa), except with AI and humans? Alternatively, approaches that look like getting an AI to do Buddhist lovingkindness?
For automatically rendering and de-rendering LaTeX fragments in Emacs Org mode, see https://github.com/io12/org-fragtog.
For drawing images inline, you could try https://github.com/misohena/el-easydraw.
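A minimal setup sketch for both packages (each is on MELPA; the hook and setup function names below follow the projects' READMEs, so double-check against the versions you install):

```elisp
;; org-fragtog: auto-toggle LaTeX fragment previews as point
;; enters and leaves them in Org buffers.
(add-hook 'org-mode-hook #'org-fragtog-mode)

;; el-easydraw: enable inline SVG drawing links in Org.
(with-eval-after-load 'org
  (require 'edraw-org)
  (edraw-org-setup-default))
```

org-fragtog assumes `org-latex-preview` already works in your Emacs (i.e., you have a working LaTeX toolchain), since it only toggles the built-in previews.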
I like this idea and think it is worth exploring. It is not even just about training new models; an AGI has to worry about misalignment with every self-modification and every interaction with the environment that changes it.
Perhaps there are even ways to deter an AGI from self-improvement, by making misalignment more likely.
Some caveats are:
Escape. Invest in space travel and escape the solar system before they arrive.
If your AI timelines are long, this may be a viable strategy for preserving the human species in the event of unaligned AGI.
If your AI timelines are short, a budget solution is to just send human brains into space and hope they will be found and revived by other powerful species (hopefully at least one of them is "benevolent").
Given an aligned AGI, to what extent are people OK with letting the AGI modify us? Examples of such modifications include (feel free to add to the list):
What exact parts of being "human" do we want to preserve?