If you use some form of noncausal decision theory, it can make a difference.
Suppose Omega flips a quantum coin. If it's tails, Omega asks you for £1; if it's heads, Omega gives you £100 if and only if it predicts that you would have given it £1 had the coin landed tails.
There are some decision algorithms that would pay the £1 if and only if they believed in quantum many worlds. A CDT agent would never pay, whereas a UDT agent would always pay.
It is of course possible to construct agents that want to do X if and only if quantum many worlds is true. It is also possible to construct agents that do the same thing whether it's true or false (e.g. AlphaGo).
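To make the payoffs concrete, here is a minimal sketch (not from the original answer) that averages a fixed policy over the two coin outcomes, assuming a fair quantum coin and a perfect predictor; the function name `expected_payoff` is illustrative.

```python
# Toy expected-value calculation for the quantum-coin counterfactual mugging above.
# Assumes a fair coin and a perfect predictor; the stakes (£1, £100) come from the scenario.

def expected_payoff(pays_on_tails: bool) -> float:
    """Average payoff over both coin outcomes for a fixed policy."""
    tails_branch = -1 if pays_on_tails else 0    # hand over £1, or refuse
    heads_branch = 100 if pays_on_tails else 0   # Omega pays only if it predicts you'd have paid
    return 0.5 * tails_branch + 0.5 * heads_branch

print(expected_payoff(pays_on_tails=True))   # 49.5 -- the policy a UDT agent commits to
print(expected_payoff(pays_on_tails=False))  # 0.0  -- what always refusing (as CDT does on tails) earns
```

Averaged over both outcomes, the paying policy comes out ahead, which is why UDT pays; CDT, looking only at the causal consequences in the tails branch it finds itself in, refuses.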
The answer to this question depends on which wave function collapse theory you use. There are a bunch of quantum superposition experiments in which we can detect that no collapse is happening: if photons collapsed their superposition in the double slit experiment, we wouldn't get an interference pattern (see the numerical sketch after the next paragraph). Collapse theories postulate some set of circumstances, which we haven't probed experimentally yet, under which collapse happens. If you believe that quantum collapse only happens when 10^40 kg of mass are in a single coherent superposition, this belief has almost no effect on your predictions.
If you believe that you can't get 100 atoms into superposition, then you are wrong: current experiments have already tested that. If you believe that collapse happens at the 1 gram level, then future experiments could test this. In short, there are collapse theories in which collapse is so rare that you will never spot it, there are theories where collapse is so common that we would have already spotted it (so we know those theories are wrong), and there are theories in between. The in-between theories will make different predictions about future experiments; in particular, they will not expect large quantum computers to work.
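To make the double-slit point above concrete, here is a minimal numerical sketch (not part of the original answer; the wavelength, slit separation, and screen distance are illustrative values). It compares adding the two slits' amplitudes coherently, as when no collapse happens between the slits and the screen, with adding their probabilities, as you would if each photon collapsed to a definite slit.

```python
# Minimal double-slit illustration: coherent amplitudes give fringes,
# collapsed (incoherent) probabilities give a flat distribution.
import numpy as np

wavelength = 500e-9        # 500 nm light (illustrative value)
slit_separation = 50e-6    # 50 micrometres (illustrative value)
screen_distance = 1.0      # metres (illustrative value)
x = np.linspace(-0.02, 0.02, 1001)   # positions on the screen

# Path-length difference between the two slits for each screen position (small-angle approximation)
delta = slit_separation * x / screen_distance
phase = 2 * np.pi * delta / wavelength

amp1 = np.ones_like(x, dtype=complex)    # amplitude from slit 1 (idealised equal magnitude)
amp2 = np.exp(1j * phase)                # amplitude from slit 2, with relative phase

coherent = np.abs(amp1 + amp2) ** 2                  # |psi1 + psi2|^2 -> interference fringes
collapsed = np.abs(amp1) ** 2 + np.abs(amp2) ** 2    # |psi1|^2 + |psi2|^2 -> no fringes

print(coherent.min(), coherent.max())    # ~0 to ~4: fringes
print(collapsed.min(), collapsed.max())  # 2 everywhere: flat
```

The coherent sum oscillates between roughly 0 and 4 (fringes), while the collapsed sum is flat at 2 (no fringes), which is the observable difference the paragraph above points to.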
Another difference is that current QFT doesn't contain gravity. In the search for a true theory of everything, many worlds and collapse might suggest different successor theories. This seems important for human understanding, though it wouldn't make a difference to an agent that could consider all possible theories.
I recommend reading the paper on Functional Decision Theory to get an intuition for what an answer to this might look like. I think the question you're interested in is whether we should think of our action as actually having an effect on observers in another universe (or world, in MWI). This might seem absurd if you have the intuition that you can only affect things that are causally dependent on your actions. But if you drop the assumption that the dependence must be causal, you can say that their decision is subjunctively dependent on yours.