My sense is that no, they're not. If anything, they're accelerating it very slightly by bumping up the priority of simulated/synthetic data, but mostly they don't have a significant impact. Serious-business AGI algorithms will be able to cope with this problem just fine: shrinking the training dataset a bit amounts to a difference of a few hours of training time. It's still all about scaling, algorithms, and compute; the current sense that we're data-limited won't last, since stronger models will be able to compensate for data limitations. Compare AlphaGo vs. AlphaZero (the latter needed no human game data at all).
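For concreteness, here's a rough back-of-envelope sketch using the published Chinchilla loss fit, L(N, D) = E + A/N^α + B/D^β (coefficients from Hoffmann et al. 2022); the baseline model size, token count, and the 10% data cut are hypothetical numbers chosen just to show the shape of the effect, not a claim about any actual lawsuit's impact:

```python
# Chinchilla-style loss fit: L(N, D) = E + A / N^alpha + B / D^beta
# Coefficients taken from Hoffmann et al. (2022); treat them as illustrative.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model with n_params parameters
    trained on n_tokens tokens, under the fitted scaling law."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Hypothetical baseline: a Chinchilla-like model, 70B params on 1.4T tokens.
N, D = 70e9, 1.4e12
base = loss(N, D)

# Suppose lawsuits carve 10% out of the usable training set.
D_small = 0.9 * D
hit = loss(N, D_small)

# How many parameters recover the original loss on the smaller dataset?
# Solve E + A / N_comp^alpha + B / D_small^beta = base for N_comp.
N_comp = (A / (base - E - B / D_small**beta)) ** (1 / alpha)

print(f"baseline loss:            {base:.4f}")
print(f"loss with 10% less data:  {hit:.4f}")
print(f"params needed to recover: {N_comp / 1e9:.1f}B (vs {N / 1e9:.0f}B)")
# Compute scales roughly as 6 * N * D, so the relative compute overhead is:
print(f"extra compute to recover: {(6 * N_comp * D_small) / (6 * N * D) - 1:+.1%}")
```

Under those (assumed) numbers, a 10% data cut is recovered by a somewhat larger model at a single-digit-percent compute overhead, which is the sense in which I mean "a few hours of training time" rather than a real bottleneck.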
More precisely: are copyright lawsuits against companies developing large language models slowing down the creation of AGI?
For example, there's this ongoing lawsuit: https://githubcopilotlitigation.com/.
You could also imagine this kind of lawsuit extending to other groups whose data are used, such as artists.