
Comment author: alexflint 18 February 2014 07:57:42PM 0 points [-]

Me, 50%

Comment author: alexflint 24 December 2013 04:02:43AM *  2 points [-]

I think we should be at least mildly concerned about accepting this view of agents, in which the agent's internal information processes are separated by a bright red line from the processes happening in the outside world. Yes, I know you accept that they are both grounded in the same physics, and that they interact with one another via ordinary causation, but if you believe that bridging rules are truly inextricable from AI, then you really must completely delineate this set of internal information-processing phenomena from the external world. Otherwise, if you do not delineate anything, what are you bridging?

So this delineation seems somewhat difficult to remove, and I don't know how to collapse it, but it's at least worth asking whether this is the point at which we should start saying "hmmmm..."

One way to start to probe this question (although this does not come close to resolving the issue) is to think about an AI already in motion. Let's imagine an AI built out of gears and pulleys, which is busy sensing, optimizing, and acting in the world, as all well-behaved AIs are known to do. In what sense can we delineate a set of "internal information processing phenomena" within this AI from the external world? Perhaps such a delineation would exist in our model of the AI, where it would be expedient indeed to postulate that the gears and pulleys are really just implementing some advanced optimization routine. But that delineation sounds much more like something that should belong in the map than in the territory.

What I'm suggesting is that starting with the assumption of an internal sensory world delineated by a bright red line from the external world should at least give us some pause.

Comment author: Coscott 30 September 2013 04:05:07AM 3 points [-]

I agree with that response to the sleeping beauty problem, and the way you set up the payoff structure will probably make this problem equivalent to the St. Petersburg Paradox.

Comment author: alexflint 05 October 2013 03:49:50AM *  0 points [-]

This is unlike the St. Petersburg paradox because it involves amnesia, so assigning probabilities arguably forces you to take a stand on some SIA/SSA-like quandary. But I do agree that making this into a decision problem is the key.

Comment author: alexflint 01 October 2013 03:23:13AM *  2 points [-]

Another way to think about Dave's situation is that his utility function assigns the same value to all possible futures (i.e. zero) because the one future that would've been assigned a non-zero value turned out to be unrealizable. His real problem is that his utility function has very little structure: it is zero almost everywhere.

I suspect our/my/your utility function is structured such that even if broad swaths of possible futures turn out to be unrealizable, the remainder will still contain gradients and local maxima, so there will be some more desirable and some less desirable possibilities.

Of course this is not guaranteed, but most utility functions have gradients and local maxima over most sets. You need a very special utility function and a very special set of realizable futures in order for all futures to be assigned exactly the same value.

Comment author: cousin_it 30 September 2013 07:57:00PM *  2 points [-]

The UDT solution says: "instead of drawing a graph containing <you>, draw one that contains <your abstract decision algorithm> and you will see that the independence between beliefs and decisions is restored!"

Can you try to come up with a situation where that independence is not restored? If we follow the analogy with correlations, it's always possible to find a linear map that decorrelates variables...

Comment author: alexflint 01 October 2013 02:35:19AM 0 points [-]

Ha, indeed. I should have made the analogy with finding a linear change of variables such that the result decomposes into a product of independent distributions -- i.e., if (x,y) is distributed on a narrow band about the unit circle in R^2 then there is no linear change of variables that renders this distribution independent, yet a (nonlinear) change to polar coordinates does give independence.
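A quick numerical illustration of that point (my own throwaway sketch, nothing canonical): x and y on a noisy unit circle are essentially uncorrelated, so there is no linear structure to exploit, yet they are strongly dependent, while the polar coordinates r and theta are independent by construction.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    theta = rng.uniform(0.0, 2 * np.pi, n)      # angle, uniform around the circle
    r = 1.0 + 0.05 * rng.standard_normal(n)     # radius, narrow band around 1
    x, y = r * np.cos(theta), r * np.sin(theta)

    print(np.corrcoef(x, y)[0, 1])        # ~0: no linear change of variables helps
    print(np.corrcoef(x**2, y**2)[0, 1])  # ~-1: yet x and y are highly dependent
    print(np.corrcoef(r, np.cos(theta))[0, 1])  # ~0: r and theta were drawn independently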

Perhaps the way to construct a counterexample to UDT is to try to create causal links between <your decision algorithm> and <the world> of the same nature as the links between <you> and <the world> in e.g. Newcomb's problem. I haven't thought this through any further.

Comment author: alexflint 28 September 2013 12:47:02PM 3 points [-]

I have also been leaning towards the existence of a theory more general than probability theory, based on a few threads of thinking.

One thread is anthropic reasoning, where it is sometimes clear how to make decisions, yet probabilities don't make sense and it feels to me that the information available in some anthropic situations just "doesn't decompose" into probabilities. Stuart Armstrong's paper on the sleeping beauty problem is, I think, valuable and greatly overlooked here.
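To make that concrete, here is a toy version of the betting framing (my own illustration, not anything from Armstrong's paper): once the payoff structure is fixed, the right policy is well-defined even while "the probability of heads" stays ambiguous.

    # Sleeping Beauty as a decision problem. Fair coin; heads: one awakening;
    # tails: two awakenings with amnesia in between. At each awakening Beauty
    # may pay `c` for a ticket paying 1 if the coin landed tails.

    def expected_profit(c, settle_per_awakening=True):
        heads_branch = -c                   # exactly one awakening on heads
        if settle_per_awakening:
            tails_branch = 2 * (1 - c)      # bought and paid out at both awakenings
        else:
            tails_branch = 1 - c            # duplicate decisions settled once
        return 0.5 * heads_branch + 0.5 * tails_branch

    print(expected_profit(2/3, settle_per_awakening=True))   # ~0: bet as if P(tails)=2/3
    print(expected_profit(1/2, settle_per_awakening=False))  # 0: bet as if P(tails)=1/2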

Another thread is the limited-computation issue. We would all like to have a theory that pins down ideal reasoning, and then work out how to efficiently approximate that theory on a Turing machine as a completely separate problem. My intuition is that things just don't decompose this way. I think that a complete theory of reasoning will make direct reference to models of computation.

This site has collected quite a repertoire of decision problems that challenge causal decision theory. They all share the following property (including your example in the comment above): in a causal graph containing <you> as a node, there are links from <you> to <the world> that do not go via <your action> (for Newcomb-like problems) or that do not go via <your observations> (anthropic problems). In other words, your decisions are not independent of your beliefs about the world. The UDT solution says: "instead of drawing a graph containing <you>, draw one that contains <your abstract decision algorithm> and you will see that the independence between beliefs and decisions is restored!". This feels to me like a patch rather than a full solution, similar to saying "if your variables are correlated and you don't know how to deal with correlated distributions, try a linear change of variables -- maybe you'll find one that de-correlates them!". This only works if you're lucky enough to find a de-correlating change of variables. An alternative approach would be to work out how to deal with non-independent beliefs/decisions directly.
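As a toy illustration of what redrawing the graph buys you (my numbers, not anyone's canonical formulation): with the prediction treated as independent of the action, two-boxing dominates; with the prediction modelled as a noisy copy of the decision algorithm's output, one-boxing wins for any reasonably accurate predictor.

    # Toy Newcomb's problem. Box A always holds 1,000; box B holds 1,000,000
    # iff the predictor predicted one-boxing. ACC is the assumed accuracy.
    ACC = 0.99
    A, B = 1_000, 1_000_000

    def ev_agent_level(q):
        """Graph with <you>: the prediction (B full with probability q) is
        treated as causally independent of the action, so two-boxing dominates."""
        return {"one-box": q * B, "two-box": q * B + A}

    def ev_algorithm_level():
        """Graph with <your abstract decision algorithm>: the prediction is a
        noisy copy of the algorithm's output, so choosing the output moves it."""
        return {"one-box": ACC * B, "two-box": (1 - ACC) * B + A}

    print(ev_agent_level(q=0.5))    # two-box wins by exactly A, for every q
    print(ev_algorithm_level())     # one-box wins for any reasonably accurate predictor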

One thought experiment I like to do is to ask probability theory to justify itself in a non-circular way. For example, let's say I propose the following Completely Stupid Theory Of Reasoning. In CSTOR, belief states are represented by a large sheet of paper where I write down everything that I have ever observed. What is my belief state at time t, you ask? Why, it is simply the contents of the entire sheet of paper. But what is my belief state about a specific event? Again, the contents of the entire sheet of paper. How does CSTOR update on new evidence? Easy! I simply add a line of writing to the bottom of the sheet. How does CSTOR marginalize? It doesn't! Marginalization is just for dummies who use probability theory, and, as you can see, CSTOR can do all the things that a theory of reasoning should do without need for silly marginalization.
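For concreteness, here is what CSTOR looks like next to a minimal Bayesian agent (a throwaway sketch of my own): both "update" on coin-flip observations, but only one of them produces anything a decision rule could consume.

    class CSTOR:
        """The big sheet of paper: updating is appending, queries return everything."""
        def __init__(self):
            self.sheet = []
        def update(self, observation):
            self.sheet.append(observation)   # add a line at the bottom
        def belief(self, event=None):
            return list(self.sheet)          # the whole sheet, whatever you ask

    class BetaBernoulli:
        """A minimal probabilistic agent for the same coin-flip observations."""
        def __init__(self):
            self.heads, self.tails = 1, 1    # uniform prior
        def update(self, observation):
            if observation == "H":
                self.heads += 1
            else:
                self.tails += 1
        def belief(self, event="H"):
            p = self.heads / (self.heads + self.tails)
            return p if event == "H" else 1 - p

    cstor, bayes = CSTOR(), BetaBernoulli()
    for obs in ["H", "H", "T", "H"]:
        cstor.update(obs)
        bayes.update(obs)

    print(cstor.belief("will the next flip be H?"))  # ['H', 'H', 'T', 'H'] -- not actionable
    print(bayes.belief("H"))                         # 0.666..., something you can bet with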

So what really distinguishes CSTOR from probability theory? I think the best non-circular answer is that probability theory gives rise to a specific algorithm for making decisions, whereas CSTOR doesn't. So I think we should look at decision making as primary, and then figure out how to decompose decision making into some abstract belief representation, plus some abstract notion of utility, plus some abstract algorithm for making decisions.

Comment author: TrE 22 July 2013 12:36:56PM 2 points [-]

Do the routers even "play"? Do the routers, just executing their programming, count as "agents" with "goals"? Assuming that the users don't normally change their router's firmware, this seems "merely" like an optimization problem, not like a problem of game theory.

Comment author: alexflint 22 July 2013 01:47:25PM *  4 points [-]

You're right - most users don't rewrite their TCP stack. But suppose you're designing the next version of TCP and you think "hey, instead of using fixed rules, let's write a TCP stack that optimizes for throughput". You will face a conceptual issue when you realize that the global outcome is now total network breakdown. So what do you optimize for instead? Superrationality says: make decisions as though you were deciding the output for all nodes at the current information set. This is conceptually helpful because it tells you what you should be optimizing for.

Now if you start out from the beginning (as in the paper) by thinking of optimizing over algorithms, with the assumption that the output will be run on every node, then you're already doing superrationality. That's all superrationality is!

Superrationality and network flow control

16 alexflint 22 July 2013 01:49AM

Computers exchanging messages on a network must decide how fast or slow to transmit messages. If everyone transmits too slowly then the network is underutilized, which is bad for all. If everyone transmits too quickly then most messages on the network are actually flow control messages of the form "your message could not be delivered, please try again later", which is also bad for everyone.

Unfortunately, this leads to a classic prisoner's dilemma. It is in each node's own self-interest to transmit as quickly as possible, since each node has no information about when exactly an intermediate node will accept/drop a message, so transmitting a message earlier never decreases the probability that it will be successful. Of course, this means that the Nash equilibrium is a near-complete network breakdown in which most messages are flow control messages, which is bad for everyone.

Interestingly, some folks at MIT noticed this, and also noticed that the idea of superrationality (which originated with Douglas Hofstadter, and is the grandfather of TDT and friends) is one way to get past prisoner's dilemmas --- at least if everyone is running the same algorithm, which, on many networks, people mostly are.

The idea put forward in the paper is to design flow control algorithms with this in mind. There is an automated design process in which flow control algorithms with many different parameter settings are sampled and evaluated. The output is a program that gets installed on each node in the network.
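For a feel of what that design loop amounts to, here is a heavily simplified sketch (mine, not the paper's actual algorithm or parameterization): candidate send rates are sampled at random, each candidate is scored by simulating a network in which every node runs it, and the best-scoring rule is the one that ships.

    import random

    CAPACITY = 100.0   # messages per tick the network can deliver
    NODES = 10         # every node will run the same rule

    def global_utility(rate):
        """Total throughput minus a penalty for the retransmission storm that
        appears once aggregate demand exceeds capacity."""
        demand = NODES * rate
        delivered = min(demand, CAPACITY)
        congestion = max(0.0, demand - CAPACITY)
        return delivered - 2.0 * congestion

    random.seed(0)
    candidates = [random.uniform(0.0, 30.0) for _ in range(1000)]
    best = max(candidates, key=global_utility)
    print(round(best, 2))   # lands near CAPACITY / NODES = 10.0

    # The superrational move is baked into the evaluation: a candidate is never
    # scored against a network of differently-behaving nodes, only against
    # copies of itself.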

Now, to be fair, this isn't exactly TDT: the end-product algorithms do not explicitly consider the behavior of other nodes in the network (although they were designed taking this into account), and the automated design process itself is really just maximizing an ordinary utility function since it does not expect there to be any other automated designers out there. But nevertheless, the link to superrationality, and the fact that the authors themselves picked up on it, was, I thought, quite interesting.

Comment author: alexflint 04 July 2013 05:42:28PM 9 points [-]

It's easy to give an algorithm that generates a proof of a mathematical theorem that's provable: choose a formal language with definitions and axioms, and for successive values of n, enumerate all sequences of mathematical deductions of length n, halting if the final line of a sequence is the statement of the desired theorem. But the running time of this algorithm is exponential in the length of the proof, and the algorithm is infeasible to implement except for theorems with very short proofs.

Yes, this approach is valid, but modern theorem provers more commonly reduce theorem proving to a sequence of SAT problems. Very roughly, for a first-order sentence P, the idea is to search for counterexamples alternately to P and ~P in models of size 1, 2, .... SAT solvers have improved rapidly over the past decade (http://www.satcompetition.org/), though they are still not good enough to facilitate theorem provers that can solve interesting math problems.
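Here is a toy propositional version of that reduction (my sketch; real provers, including those in the handbook recommended below, handle first-order logic and use genuine SAT engines rather than the brute-force search here): a formula is a theorem exactly when its negation is unsatisfiable.

    from itertools import product

    def satisfiable(clauses, n_vars):
        """Clauses are DIMACS-style lists of non-zero ints: k means variable k is
        true, -k means it is false. Brute-force search over all assignments."""
        for bits in product([False, True], repeat=n_vars):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    def is_theorem(negated_cnf, n_vars):
        """P is propositionally valid iff ~P has no satisfying assignment."""
        return not satisfiable(negated_cnf, n_vars)

    # Prove ((p -> q) and p) -> q.  Its negation in CNF is (~p or q) and p and ~q;
    # with p = 1 and q = 2 that is:
    print(is_theorem([[-1, 2], [1], [-2]], n_vars=2))   # True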

I highly recommend the concise and excellent "Handbook of Practical Logic and Automated Reasoning" http://www.cl.cam.ac.uk/~jrh13/atp/

Comment author: alexflint 01 July 2013 01:15:38PM 4 points [-]

You're asking people to execute a program, but you should be asking people to write a program.
