There's a Haskell implementation of modal UDT.
Previous discussion of modal UDT: 1, 2.
FDT is a family of decision theories; modal UDT is a specific algorithm. As stated it requires a hypercomputer, but it has bounded variants.
The main obstacle is that these problems are almost always grossly underspecified. FDT depends a lot more upon the structure of a scenario than simpler decision theories.
A lesser but still valid obstacle is that there is no specification language for describing all the interrelations between the components of the scenario.
A third obstacle is that while FDT is relatively simple to define in mathematical theory, in practice it can result in enormous state spaces to search even when there is only one binary action to decide.
Just to follow up on that third point a little more: FDT depends upon counterfactual responses, how you would have responded to inputs that you didn't in fact observe.
If you go into a scenario where you can observe even as little as 6 bits of information, then there are 2^6 = 64 possible inputs to your decision function. FDT requires that you adopt the function with the greatest expected value over the weighted probabilities of every input, not just the one you actually observed. In the simplest possible case, each output is just one of two deterministic actions, which already gives 2^64 possible decision functions to compare.
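To put rough numbers on that, here is a minimal Python sketch (purely illustrative, not taken from any FDT implementation) that brute-forces the search described above: it enumerates every deterministic mapping from observations to a binary action and scores each one by its expected utility. The utility function and weights are made-up placeholders.

```python
from itertools import product

def best_policy(n_bits, utility, weights):
    """Enumerate every deterministic policy (a map from each possible
    observation to a binary action) and return the one with the highest
    expected utility under the given weights over observations."""
    observations = list(range(2 ** n_bits))
    best, best_score = None, float("-inf")
    # There are 2 ** (2 ** n_bits) candidate policies, which explodes fast.
    for actions in product((0, 1), repeat=len(observations)):
        policy = dict(zip(observations, actions))
        score = sum(weights[o] * utility(o, policy[o]) for o in observations)
        if score > best_score:
            best, best_score = policy, score
    return best, best_score

# Toy placeholder problem: utility rewards matching the parity of the
# observation, with uniform weights over the 2 ** n observations.
n = 3  # 3 observed bits -> 8 inputs -> 2 ** 8 = 256 candidate policies
weights = {o: 1 / 2 ** n for o in range(2 ** n)}
utility = lambda o, a: 1.0 if a == bin(o).count("1") % 2 else 0.0

policy, value = best_policy(n, utility, weights)
print(value)  # 1.0 for this toy problem
```

At n = 5 the loop already has about four billion policies to score, and at n = 6 it is the 2^64 figure above, which is the sense in which even a single binary action can blow up.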
FDT operates on a causal graph. (Maybe the other requirements are also hard to satisfy?* I think this is the major obstacle.) You would probably have to create the graph for the problem yourself, then pass it to the program (if you write one).
One could argue that the trick in real-world situations is figuring out the causal graph anyway.
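As a hedged sketch of what "create the graph and pass it to the program" might involve, here is one possible hand-rolled encoding of Newcomb's problem in Python. The node names, the "logical" edge label, and the evaluation function are all assumptions made up for illustration, not part of any existing FDT tool; the point is that the subjunctive dependence of the prediction on the agent's policy has to be written into the specification somewhere, because an ordinary causal graph doesn't carry it.

```python
# Hypothetical encoding of Newcomb's problem as a dependency graph.
# NODES only declares the structure; evaluate() hand-codes the
# propagation that a real solver would have to derive from it.

NODES = {
    "policy":     {"parents": [],                       "kind": "logical"},
    "prediction": {"parents": ["policy"],               "kind": "logical"},
    "action":     {"parents": ["policy"],               "kind": "causal"},
    "payoff":     {"parents": ["prediction", "action"], "kind": "causal"},
}

def evaluate(policy):
    """Hand-coded propagation of a policy setting through the graph above."""
    prediction = policy   # a perfect predictor mirrors the agent's policy
    action = policy       # the agent simply runs its own policy
    box_b = 1_000_000 if prediction == "one-box" else 0
    payoff = box_b + (1_000 if action == "two-box" else 0)
    return payoff

# FDT-style choice: intervene on the policy node, which the prediction
# subjunctively depends on, rather than only on the downstream action node.
best = max(["one-box", "two-box"], key=evaluate)
print(best, evaluate(best))   # one-box 1000000
```

That both the structure declaration and the propagation have to be written by hand for each new problem is essentially the point above: most of the work is in building and interpreting the graph, not in the final argmax.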
"This is rather disconcerting."
When people talk about Bayesian methods here, they don't seem to be actually running code for the Monte Carlo stuff, or anything like that, either.
Edit:
*JBlack's answer indicates that the other requirements are an issue as well.
I occasionally see a question like "what would FDT recommend in ....?" and I am puzzled that there is no formal algorithm to answer it. Instead humans ask other humans, and the answers are often different and subject to interpretation. This is rather disconcerting. For comparison, you don't ask a human what, say, a chessbot would do in a certain situation, you just run the bot. Similarly, it would be nice to have an "FDTbot" one can feed a decision theory problem to. Does something like that exist? If not, what are the obstacles?