I'm not sure if you're aware that my interest in these problems is mostly philosophical to begin with. For example, I wrote the post that is the first link in my list in 1997, when I had no interest in AI at all and was thinking about how humans would deal with probabilities once mind copying becomes possible. Do you object to philosophers trying to solve philosophical problems in general, or just to AI builders making use of philosophical solutions or thinking like philosophers?
Philosophical thinking is usually done in terms of concepts that are later found to be irrelevant (or that are known to be irrelevant to begin with). What I object to is philosophers' arrogance, in the form of a gross overestimate of the relevance of philosophical 'problems' and philosophical 'solutions' to anything.
If the philosophical notion of causality has a problem with abstracting away irrelevant low-level details of the method of control of a manipulator, that is a problem with the philosophical notion of causality, not a problem with the design of inte...
I noticed that I recently wrote several comments of the form "UDT can be seen as a step towards solving X", and thought it might be a good idea to list in one place all of the problems that helped motivate UDT1 (not including problems that came up subsequent to that post).