UDT doesn't handle non-base-level maximization vantage points (previously "epistemic vantage points") in blackmail cases: you can blackmail a UDT agent because it treats your strategy as fixed, and doesn't realize you're only blackmailing it because your simulation of it shows it to be blackmailable.
I'm not so sure about this one... It seems that UDT would be deciding between "pay if blackmailed" and "don't pay if blackmailed" before knowing whether it will actually be blackmailed. Assuming it knows the payoffs the other agent receives, it would reason: "If I pay when blackmailed... I get blackmailed, whereas if I don't pay when blackmailed... I don't get blackmailed. I should therefore never pay if blackmailed", unless there's something I'm missing.
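To make that comparison concrete, here's a toy version in Python. The payoff numbers and the simple "blackmailer only threatens when it's profitable" model are placeholders of my own, not anything from the post; the point is just that the "never pay" policy is never blackmailed at all.

```python
# Toy sketch of the policy comparison (payoffs are made-up placeholders).
# The blackmailer simulates your policy and only issues the threat when
# doing so nets them something.

BLACKMAIL_COST = 10      # what you lose by paying up
THREAT_COST = 100        # what you lose if the threat is carried out
BLACKMAIL_FEE = 1        # what it costs the blackmailer to issue a threat

def blackmailer_blackmails(you_pay_if_blackmailed: bool) -> bool:
    """Having simulated your policy, the blackmailer only threatens
    if the expected gain beats the cost of threatening."""
    gain = BLACKMAIL_COST if you_pay_if_blackmailed else 0
    return gain - BLACKMAIL_FEE > 0

def your_payoff(you_pay_if_blackmailed: bool) -> int:
    if not blackmailer_blackmails(you_pay_if_blackmailed):
        return 0                        # never blackmailed in the first place
    return -BLACKMAIL_COST if you_pay_if_blackmailed else -THREAT_COST

# UDT picks the policy before knowing whether it will be blackmailed:
for policy in (True, False):
    print("pay-if-blackmailed =", policy, "->", your_payoff(policy))
# pay-if-blackmailed = True  -> -10
# pay-if-blackmailed = False -> 0
```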
Perhaps we should also ask "Why do we feel we have free will?". The simplest answer, of course, is that we actually do. That said, it wouldn't be beyond the reach of human biases to make us believe we have it even if we don't. Ultimately, if we were certain that we couldn't feel any more like we have free will than we already do, then we could reduce the question "Do we have free will?" to "Would someone without free will feel any differently than we do?".
Superior (or even infinite) computing power does not imply that Omega can persuade you to cooperate, only that he knows whether you will. If there exist any words he could say to convince you to cooperate, he will say them and defect. However, if you cooperate only upon seeing Omega cooperate (or upon proving that he will), he will cooperate.
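A small sketch of that last point (entirely my own toy model, with made-up payoff numbers): if Omega best-responds to your policy after simulating it perfectly, a policy that can be talked into cooperating unconditionally gets defected against, while "cooperate only if Omega provably cooperates" gets mutual cooperation.

```python
# (you, Omega) payoffs in a one-shot prisoner's dilemma (placeholder numbers)
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def omega_best_response(your_policy):
    """Omega simulates you perfectly: for each move he could commit to,
    he predicts your reply and picks whatever pays him the most."""
    best = None
    for omega_move in ("C", "D"):
        your_move = your_policy(omega_move)
        omega_payoff = PAYOFF[(your_move, omega_move)][1]
        if best is None or omega_payoff > best[0]:
            best = (omega_payoff, omega_move, your_move)
    return best[1], best[2]

# A persuadable agent: cooperates regardless of what Omega actually does.
persuadable = lambda omega_move: "C"
# A conditional agent: cooperates only if Omega (provably) cooperates.
conditional = lambda omega_move: "C" if omega_move == "C" else "D"

for name, policy in [("persuadable", persuadable), ("conditional", conditional)]:
    omega_move, your_move = omega_best_response(policy)
    print(name, "-> you:", your_move, "Omega:", omega_move,
          "payoffs:", PAYOFF[(your_move, omega_move)])
# persuadable -> Omega defects and you get exploited
# conditional -> Omega cooperates
```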
As one of the players who submitted a cooperatebot (yes, I see the irony), allow me to explain my reasoning for doing so. I scoped out the comments to see what bots were being suggested (mimicbots, prudentbots, etc.) and saw much more focus on trying to enforce mutual cooperation than on trying to exploit bots that can be exploited. I metagamed accordingly, hypothesizing that the other bots would cooperate with a cooperatebot but possibly fail to cooperate with each other. My hypothesis was incorrect, but worth testing IMO.
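For what it's worth, here's roughly the contrast I was weighing, as a toy round-robin. These are plain Python functions standing in for the actual tournament bots (which read each other's source code), and the payoffs are placeholders; the point is just that a cooperatebot gets full cooperation from conditional bots while being wide open to defectors.

```python
# Toy round-robin (my own illustration, not the tournament code or scoring).
# Bots here are functions from the opponent *function* to a move, which
# sidesteps the source-code reading the real bots did.

PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def cooperate_bot(opponent):
    return "C"                      # ignores the opponent entirely

def defect_bot(opponent):
    return "D"

def simulating_bot(opponent):
    # Cooperate iff the opponent cooperates against a pure cooperator.
    # (A crude stand-in for mimic/prudent bots; the real ones have to
    # handle two simulators simulating each other.)
    return "C" if opponent(cooperate_bot) == "C" else "D"

bots = {"cooperate": cooperate_bot, "defect": defect_bot,
        "simulating": simulating_bot}
scores = {name: 0 for name in bots}
for a_name, a in bots.items():
    for b_name, b in bots.items():
        if a_name == b_name:
            continue
        pa, _ = PAYOFF[(a(b), b(a))]
        scores[a_name] += pa
print(scores)
```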