(Caveat: as an aspiring AI Safety researcher myself, I'm both qualified and unqualified to answer this. Also, I'll focus on technical AI Safety, because it's the part of the field I'm most interested in.)
As a first approximation, there is the obvious advice: try it first. Many of the papers/blog posts are freely available on the internet (which might not be a good thing, but that's a question for another time), and thus any aspiring researcher can learn what is going on and try to do some research.
Now, to be more specific about AI safety, I see at least two sub-questions here:
Am I the right "kind" of researcher for working in AI Safety? Here, my main intuition is that the field needs more "theory-builders" than "problem-solvers", to borrow the archetypes from Gowers's Two Cultures of Mathematics. By that I mean that AI Safety has not yet crystallized into a field where the main approaches and questions are well understood and known. Almost every researcher has a different perspective on what is fundamental in the field. Therefore, the most useful works will be the ones that clarify, deconfuse and ch...
I'd say a pretty good way is to try out AI alignment research as best you can, and see if you like it. This is probably best done by interning at a research group, but sadly those spots are limited. Perhaps one could factor the question into "do I enjoy AI research at all", which is easier to gain experience in, and "am I interested in the research questions in AI alignment", which you can hopefully determine by reading AI alignment research papers and introspecting on how much you care about their contents.
In my mind it's something like you need:
I think people tend to emphasize the technical skills the most, and I'm sure other answers will offer more specific suggestions there. But I also think there's an important aspect of having the right mindset for this kind of work, such that a person with the right technical skills might not make much progress on AI safety without these other "soft" skills.