PhilGoetz comments on How can we compare decision theories? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (41)
If that's so, why do we spend so much time talking about Newcomb problems? Should we ban Omega from our decision theories?
Omega is relevant because AGIs might show each other their source code, at which point they gain the predictive powers, vis-a-vis each other, of Omega.
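To see concretely why a near-perfect predictor changes the answer, here is a minimal sketch of the expected payoffs in Newcomb's problem as a function of predictor accuracy. The payoff numbers are the standard ones from the thought experiment ($1,000,000 in the opaque box iff one-boxing was predicted, $1,000 always in the transparent box); the function name is just illustrative.

```python
# Expected payoff in Newcomb's problem, given predictor accuracy p.
# Opaque box holds $1,000,000 iff one-boxing was predicted;
# the transparent box always holds $1,000.

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars, where p = probability the predictor is correct."""
    if one_box:
        # Predictor correct (prob p): opaque box is full.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # Predictor correct (prob p): opaque box is empty, keep only $1,000.
        return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

# With an Omega-grade predictor (p near 1), one-boxing dominates:
print(expected_payoff(True, 0.99))   # 990000.0
print(expected_payoff(False, 0.99))  # 11000.0
```

Solving for the break-even point gives p ≈ 0.5005, so even a modestly reliable predictor (which one AGI reading another's source code could plausibly be) is enough to make one-boxing the better bet.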
On the other hand, an AGI running CDT would self-modify to UDT/TDT if running UDT/TDT led to better outcomes, so maybe we can leave the decision-theoretic work to our AGI.
The issue there is that a 'proof' of friendliness might rely on a lot of decision theory.
If you want to build a smart machine, decision theory seems sooo not the problem.
Deep Blue just maximised its expected success. That worked just fine for beating humans.
We have decision theories. The main problem is implementing approximations to them with limited spacetime.
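The "limited spacetime" point can be made concrete with a toy depth-limited minimax search, the basic shape of what Deep Blue did: when the resource budget (search depth) runs out, fall back on a cheap evaluation function instead of computing the exact game value. The tree representation and evaluation function below are illustrative stand-ins; real engines add alpha-beta pruning, move ordering, and so on.

```python
# Depth-limited minimax: an exact decision theory (game-theoretic value)
# approximated under a resource bound (the depth cutoff).

def minimax(state, depth, maximizing, children, evaluate):
    """Return the minimax value of `state`, searching at most `depth` plies."""
    moves = children(state)
    if depth == 0 or not moves:
        # Resources exhausted (or terminal state): stop and estimate.
        return evaluate(state)
    values = [minimax(s, depth - 1, not maximizing, children, evaluate)
              for s in moves]
    return max(values) if maximizing else min(values)

# Tiny example tree: integers are leaves; lists branch.
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s if isinstance(s, int) else 0
tree = [[3, 5], [2, 9]]
print(minimax(tree, 2, True, children, evaluate))  # 3
```

The maximizer picks the branch whose worst case is best (min(3, 5) = 3 beats min(2, 9) = 2), which is all "maximise expected success under a search budget" amounts to here.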
IMO, this is probably all to do with craziness about provability, originating from paranoia.
Obsessions with the irrelevant are potentially damaging, because excessive caution carries risks of its own.