I was going to respond saying I didn't think that would work as a method, but now I'm not so sure.
My counterargument would have been that there's no goal system which couldn't arbitrarily come about through Fisherian Runaway, so our AI's acausal trade partners could be optimising for pretty much anything. But thinking about it a bit more, I'm not entirely sure the Fisherian Runaway argument is all that robust: there is, for example, presumably no Fisherian Runaway route to a goal of immediate self-annihilation.
If there's some sort of structure to the space of possible goal systems, there may very well be a universally derivable distribution of goals that our AI could find and share with all its interstellar brethren. But the space would need a lot of structure before the AI could start acting on their behalf: otherwise the space would still be huge, and the prior probability of any given goal system would be dwarfed by the direct evidence the AI has about the goals of its native civilisation.
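To make the "dwarfed by the evidence" point concrete, here's a toy Bayesian sketch. Every quantity in it (the number N of candidate goal systems, the priors, the likelihoods) is invented purely for illustration, not an estimate of anything:

```python
# Toy Bayesian sketch of the argument above. All numbers are invented
# for illustration; nothing here is a real estimate.

# Hypothesis A: some specific "universally derivable" goal system is the
# one to act on. Even a fairly structured goal space might leave N live
# candidates, so a symmetric prior gives each one weight 1/N.
N = 10**12
prior_universal = 1 / N

# Hypothesis B: act on the goals of the native civilisation, which the
# AI has been handed directly.
prior_native = 0.5

# Suppose, generously, that both hypotheses explain the AI's observations
# (its builders' behaviour, its training data) equally well.
likelihood_universal = likelihood_native = 0.9

posterior_odds = (prior_native * likelihood_native) / (
    prior_universal * likelihood_universal
)
print(f"odds in favour of native goals: {posterior_odds:.2e}")  # ~5e+11 to 1

# For the universal goal to win, the likelihood ratio in its favour would
# have to exceed roughly N -- which is what "a lot of structure" in the
# goal space would have to buy.
```

The point of the sketch: unless structure shrinks the candidate space (or the derivation yields astronomically strong evidence for one candidate), the 1/N prior guarantees the native civilisation's goals dominate.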
There's a plot for a Cthulhonic horror tale lurking in here, whereby humanity creates an AI, which proceeds to deduce a universal goal preference for eliminating civilisations like humanity. Incomprehensible alien minds from the stars, psychically sharing horrible secrets written into the fabric of the universe.
Except for the eliminating-humans part, the Cthulhonic outcome seems almost like the default. We build an AI, prove that it implements our reflectively stable wishes, and it still proceeds to pay very little attention to what we thought we wanted.
One thing that might push in the opposite direction: if humans have heavily path-dependent preferences (which seems pretty plausible), or are in some way selfish with respect to currently existing humans, then an AI built to implement our wishes might not be willing to trade much of humanity away in exchange for resources far away.