private_messaging comments on List of Problems That Motivated UDT - Less Wrong

24 Post author: Wei_Dai 06 June 2012 12:26AM




Comment author: private_messaging 06 June 2012 10:24:14AM *  -2 points

Look at how robot controllers are implemented, look at real control theories, and observe that treating copies as extra servos is a trivial change that works. It also works when the copies are not exact and can distinguish themselves from each other. Also, re-learn that values in a theory are theoretical and are not homologous to the underlying physical implementation; the fact that the action A is present in N physically independent systems is of no more interest than the fact that the action A is a real number while the hardware uses binary floating point.

Philosophers have a tendency to pick some random, minor implementation detail and manufacture a philosophical problem out of it. For example, the world may be deterministic, a minor implementation detail, and the philosophers go "where's my free will?". It is exactly the same with decision theories. The same theoretical action variable may represent several different objects: it could be two robot arms wired in parallel, or two controllers with identical state each wired to its own arm. Everything works the same, but in the latter case the philosophers go "where's my causality?". Never mind that physics is reversible at the fundamental level and, for everyone else, the notion of causality is just a cognitive tool.
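[Editorial illustration, not part of the original thread.] The "copies as extra servos" point above can be sketched in a few lines: the control law computes one abstract action, and whether that action drives one effector, two effectors wired in parallel, or two identical controller copies is an implementation detail invisible to the theory. All names here are hypothetical.

```python
def control_law(sensor_reading: float) -> float:
    """Toy proportional controller: the action is one abstract value A."""
    return -0.5 * sensor_reading

class Servo:
    """A physically independent effector that receives the action."""
    def __init__(self):
        self.position = 0.0

    def apply(self, action: float):
        self.position += action

def step(servos, reading):
    a = control_law(reading)   # one theoretical action A...
    for s in servos:           # ...delivered to N independent effectors
        s.apply(a)

# One arm, or two arms wired in parallel: the control law is unchanged.
single = [Servo()]
parallel = [Servo(), Servo()]
step(single, 1.0)
step(parallel, 1.0)
assert single[0].position == parallel[0].position == parallel[1].position
```

The number of physical systems carrying the action enters only at the point where the action fans out, not in the control law itself, which is the abstraction the comment argues for.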

Comment author: khafra 06 June 2012 01:32:18PM 5 points

This reminds me of the debate between programmers who want to design an elegant system that accomplishes all the desired functions as consequences of a fundamentally simple design, and the programmers who just want to make it work and ship. Depending on the problem you're solving, and the constraints you're working under, I think either approach can be appropriate. Peter Norvig's sudoku solver is in the "elegant" school, but if I were writing one from scratch, I'd do better to build something ugly and keep testing it until it seemed reliable.

I'm sorta leaning toward the "natural and elegant" approach for decision theories, since they'd have to face unknown new challenges without breaking, but patching CDT with cybernetics and such might work as well.

Comment author: David_Gerard 06 June 2012 03:02:34PM 0 points

More to the point, actually solving some of these problems may well be NP-complete. But what do we and evolution do in practice, when we have to solve the problem and throwing up our hands is not an option? We and it use a numerical approximation which works pretty darned well. Worse is, in fact, better.

Comment author: private_messaging 07 June 2012 11:26:44AM *  -1 points

> This reminds me of the debate between programmers who want to design an elegant system that accomplishes all the desired functions as consequences of a fundamentally simple design, and the programmers who just want to make it work and ship. Depending on the problem you're solving, and the constraints you're working under, I think either approach can be appropriate.

I think the resemblance is only superficial. There is nothing inelegant in treating two robotic arms wired in parallel and controlled by the same controller in the same way regardless of whether the controller is the same "real physical object", especially considering that we live in a world where, if you have two electrons (or two identical anythings), their being separate objects is purely in the eye of the beholder.

The whole point is that you abstract away inelegant details such as whether the identical controllers are physically one system or not. This abstraction is not at odds with mathematical elegance; it is the basis of mathematical elegance. It is, however, at odds with philosophical compactness-by-confusion: this abstraction leaves no room for a notion of causality that has been oversimplified to the point of irrelevance.

Comment author: Wei_Dai 07 June 2012 05:05:53PM 3 points

I'm not sure if you're aware that my interest in these problems is mostly philosophical to begin with. For example I wrote the post that is the first link in my list in 1997, when I had no interest in AI at all, but was thinking about how humans would deal with probabilities when mind copying becomes possible in the future. Do you object to philosophers trying to solve philosophical problems in general, or just to AI builders making use of philosophical solutions or thinking like philosophers?

Comment author: private_messaging 07 June 2012 05:58:39PM *  -1 points

Philosophical thinking is usually done in terms of concepts that are later found to be irrelevant (or which are known to be irrelevant to begin with). What I object to is philosophers' arrogance, in the form of a gross overestimate of the relevance of philosophical "problems" and philosophical "solutions" to anything.

If the philosophical notion of causality has a problem with abstracting away irrelevant low-level details of how a manipulator is controlled, that is a problem with the philosophical notion of causality, not a problem with the design of intelligent systems. Philosophy seems to be a failure mode of intelligences that is incredibly difficult to avoid, whereby the intelligence fails to establish the relevant concepts and proceeds to reason in terms of faulty ones.

Comment author: Wei_Dai 07 June 2012 06:59:20PM *  3 points

What's your opinion of von Neumann and Morgenstern's work on decision theory? Do you also consider it to have been "later found irrelevant", or do you consider it an exception to "usually"? Or do you not consider it to be "philosophical"? What about philosophical work on logic (e.g., Aristotle's first steps toward formalizing correct reasoning)?