That would be very helpful; I expect we could relatively easily solve the technical problem if we could read their research notes.
As for the goal design, the intuitive approach of "just hardcode your values... [in their full complexity, having also determined what they 'really' refer to in the true ontology of reality (which includes figuring out the true ontology of reality), and having made sure you really endorse this as your final choice]" is actually not doable when you are as time-pressed as we are; though maybe an alien civilization capable of solving alignment would not be so time-pressed, and could work that out carefully over very many years.
Known alternatives which avoid that hardness, and so are more appealing at least under time pressure, include:
Both of these have the property of being copyable by us / not working only for the aliens' values.
I wasn't really thinking about a specific algorithm. Well, I was kind of thinking about LLMs and the alien shoggoth meme.
But yes. I know this would be helpful.
But I'm thinking more about what work remains. Is it an idiot-proof five-minute change? Or does it still take MIRI ten years to adapt the alien code?
Also.
Domain-limited optimization is a natural thing. The prototypical example is Deep Blue or similar: lots of optimization power over a very limited domain. But any teacher who optimizes the class schedule without thinking abou...
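To make the notion concrete, here is a minimal sketch in Python (all names and data are hypothetical, purely for illustration): a brute-force scheduler that applies genuine optimization pressure, but whose entire world is the timetable. Anything outside its domain simply does not exist for it.

```python
from itertools import product

# Hypothetical toy data: the classes, the available time slots,
# and which classes each student takes.
CLASSES = ["math", "physics", "history", "art"]
SLOTS = ["9am", "10am"]
STUDENTS = {
    "ann": {"math", "physics"},
    "ben": {"math", "art"},
    "cam": {"history", "physics"},
}

def clashes(schedule):
    """Count clashes: two of one student's classes in the same slot."""
    slot_of = dict(schedule)
    return sum(
        1
        for taken in STUDENTS.values()
        for a in taken
        for b in taken
        if a < b and slot_of[a] == slot_of[b]
    )

# Exhaustive search over every possible timetable: strong optimization
# pressure, but confined to a tiny, fully specified domain. The optimizer
# cannot even represent effects outside the schedule, let alone pursue them.
best = min(
    (list(zip(CLASSES, slots)) for slots in product(SLOTS, repeat=len(CLASSES))),
    key=clashes,
)
print(best, "->", clashes(best), "clashes")
```

Deep Blue is the same shape with a vastly bigger search; the point is that the domain boundary, not the amount of search, is what keeps the optimization contained.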
Right now, the answer is "it largely depends on what happened", but my prior is that it would be very useful, at least as a clarification of how they did it; at minimum, it would guide our own efforts by showing us what actually works.
Imagine aliens on a distant world. They have values very different from humans'. Moreover, their values are complicated, and they don't exactly know their own values.
Imagine these aliens are doing well at AI alignment. They are just about to boot up a friendly (to them) superintelligence.
Now imagine we get to see all their source code and research notes. How helpful would this be for humans solving alignment?