Superrationality and network flow control

16 alexflint 22 July 2013 01:49AM

Computers exchanging messages on a network must decide how fast or slow to transmit messages. If everyone transmits too slowly then the network is underutilized, which is bad for all. If everyone transmits too quickly then most messages on the network are actually flow control messages of the form "your message could not be delivered, please try again later", which is also bad for everyone.

Unfortunately, this leads to a classic prisoner's dilemma. It is in each node's own self-interest to transmit as quickly as possible, since each node has no information about when exactly an intermediate node will accept/drop a message, so transmitting a message earlier never decreases the probability that it will be successful. Of course, this means that the Nash equilibrium is a near-complete network breakdown in which most messages are flow control messages, which is bad for everyone.
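To make the dilemma concrete, here is a toy payoff matrix for two nodes each choosing a transmission rate. The numbers are purely illustrative (they are not drawn from any real network measurement), but they capture the structure: whatever the other node does, transmitting fast yields more goodput, yet mutual fast transmission is worse for both than mutual restraint.

```python
# Toy prisoner's-dilemma payoff matrix for two nodes choosing a
# transmission rate. Entries are (node 0 goodput, node 1 goodput);
# the numbers are illustrative only.
payoffs = {
    ("slow", "slow"): (3, 3),  # network well utilized, little congestion
    ("slow", "fast"): (1, 4),  # the fast sender grabs most of the capacity
    ("fast", "slow"): (4, 1),
    ("fast", "fast"): (2, 2),  # congestion: most messages are retries
}

def best_response(their_choice):
    # Node 0's best reply, given what node 1 does.
    return max(["slow", "fast"], key=lambda mine: payoffs[(mine, their_choice)][0])

# "fast" dominates: it is the best reply to either choice by the other node...
assert best_response("slow") == "fast"
assert best_response("fast") == "fast"
# ...yet the resulting equilibrium (2, 2) is worse for both than (3, 3).
assert payoffs[("fast", "fast")][0] < payoffs[("slow", "slow")][0]
```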

Interestingly, some folks at MIT noticed this, and also noticed that the idea of superrationality (originating with Douglas Hofstadter, and a grandfather of TDT and friends) is one way to get past prisoner's dilemmas --- at least if everyone is running the same algorithm, which, on many networks, people mostly are.

The idea put forward in the paper is to design flow control algorithms with this in mind. There is an automated design process in which flow control algorithms with many different parameter settings are sampled and evaluated. The output is a program that gets installed on each node in the network.
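The design loop can be caricatured in a few lines. Everything below (the one-parameter "algorithm", the crude congestion simulator, the utility function) is invented for illustration and is not the paper's actual method; the point is only that the designer evaluates each candidate under the assumption that *every* node runs it, so over-transmitting is penalized at design time.

```python
import random

def simulate(rate, n_nodes=10, capacity=4.0):
    # All nodes run the same candidate and transmit at `rate`.
    # Goodput saturates at link capacity; offered load beyond capacity
    # is wasted on retransmissions, which costs utility.
    offered = rate * n_nodes
    goodput = min(offered, capacity)
    waste = max(0.0, offered - capacity)
    return goodput - 0.5 * waste

def design(n_samples=1000, seed=0):
    # Sample many parameter settings, score each in the simulator,
    # and emit the best one as "the program installed on each node".
    rng = random.Random(seed)
    candidates = [rng.uniform(0.0, 1.0) for _ in range(n_samples)]
    return max(candidates, key=simulate)

best_rate = design()
# The optimum sits near capacity / n_nodes = 0.4: because the designer
# knows all nodes share the algorithm, pushing harder only hurts.
assert abs(best_rate - 0.4) < 0.05
```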

Now, to be fair, this isn't exactly TDT: the end-product algorithms do not explicitly consider the behavior of other nodes in the network (although they were designed taking this into account), and the automated design process itself is really just maximizing an ordinary utility function since it does not expect there to be any other automated designers out there. But nevertheless, the link to superrationality, and the fact that the authors themselves picked up on it, was, I thought, quite interesting.

Personality tests?

1 alexflint 29 February 2012 09:33AM

Does anyone know of a freely available, short personality test that would be appropriate for estimating pairwise compatibility for wedding seating?

What independence between ZFC and P vs NP would imply

1 alexflint 08 December 2011 02:30PM

Suppose we had a model M that we thought described cannons and cannon balls. M consists of a set of mathematical assertions about cannons, and the hypothesis is that these fully describe cannons in the sense that any question about cannons ("what trajectory do cannon balls follow for certain firing angles?", "Which angle should we pick to hit a certain target?") can be answered by deriving statements from M. Suppose further that M is specified in a certain mathematical system called A, consisting of axioms A1...An.

Now there is much to be said about good ways to find out whether M is true of cannons or not, but consider just this particular (strange) outcome: suppose a crucial question about cannons - e.g. Q="Do cannon balls always land on the ground, for all firing angles?" - turns out to be not just unanswerable by our model M but formally independent of the mathematical system A, in the sense that the addition of some axiom A0 implies Q, while the addition of its negation, ~A0, implies ~Q.

What would this say about our model for cannons? Let's suppose that we can take Q as a prima facie substantive question with a definitive yes or no answer regardless of any model or axiomatization. At the very least it seems that M must be an incomplete model of cannons if the system in which it is specified is insufficient to answer the various questions of interest. It seems to me that

If a question about reality turns out to be logically independent of a model M, then M is not a complete model of reality.

Now we have an axiomatization of mathematics -- let's say it's ZFC for now -- and we have a model of computation in reality, which is M="The universe can contain machines that (efficiently) compute F iff there exists a Turing machine that (efficiently) computes F", with appropriate definitions of what exactly a Turing machine is in terms of ZFC. Suppose we want to answer a question like Q="Can the universe contain machines that efficiently solve SAT?"

Under the premise that M is true, the question Q becomes the pure logical question R="Are there Turing machines that efficiently solve SAT?", i.e. the P versus NP problem.

Now suppose that R was shown to be formally independent of ZFC, in the sense that for some axiom A0, ZFC+A0 implies P=NP and ZFC+~A0 implies P!=NP. This would settle the formal status of P versus NP, but the question Q seems like a prima facie concrete question with a definitive yes or no answer that does not rely for its substance on M or ZFC or any other epistemic construct. It would seem that we must have missed something in our description of reality, M.
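One way to lay out the logical structure being assumed here, with the usual consistency caveat made explicit:

```latex
% M : the bridging hypothesis between physical machines and Turing machines
% Q : can the universe contain machines that efficiently solve SAT?
% R : is there a Turing machine that efficiently solves SAT? (P vs NP)
\begin{align*}
  &M \implies (Q \leftrightarrow R) \\
  &\mathrm{ZFC} + A_0 \vdash R \qquad \mathrm{ZFC} + \lnot A_0 \vdash \lnot R \\
  &\Rightarrow\ \mathrm{ZFC} \nvdash R \ \text{ and }\ \mathrm{ZFC} \nvdash \lnot R
  \quad \text{(provided both extensions are consistent)}
\end{align*}
```

If Q nonetheless has a definite yes-or-no answer, then by the first line either M fails or the answer outruns what ZFC can prove.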

Perhaps more controversially, I claim: Under the correct model M' it seems that it's impossible for a substantive question (such as Q) to be unanswerable.

All this adds up to: the P versus NP problem (and questions like it that can be phrased as definitive questions about reality) must have an answer; if that answer turns out to be formally independent of our axioms, then our model of reality is incomplete.

Weight training

6 alexflint 26 August 2011 03:25PM

I'm looking for resources on effective weight training for the purpose of physique building. It's an area with a particularly poor signal-to-noise ratio, so I would value pointers from other rationalists. The kinds of questions I would like to answer are:

  • How to structure work-outs. Should I lift as much weight as possible or do more repetitions at lower weights?
  • How should I trade off frequency of gym visits against length of those visits?
  • What supplements should I take?

Edit: I'm vegetarian, and I now realise this is rather important to answering point three. So far the only supplement I've been taking is soy protein.

Derek Parfit, "On What Matters"

4 alexflint 07 July 2011 04:52PM

Derek Parfit has published his second book, "On What Matters". Here are reviews by Tyler Cowen and Peter Singer.

[link] Bruce Schneier on Cognitive Biases in Risk Analysis

8 alexflint 03 May 2011 06:37PM

A very clear-minded introduction to map-vs-territory ideas in the context of risk analysis. Nothing particularly new here, though the specific examples he gives may be of interest to LW readers.

http://www.ted.com/talks/bruce_schneier.html

What would you do with a solution to 3-SAT?

3 alexflint 27 April 2011 06:19PM

Many experts suspect that there is no polynomial-time solution to the so-called NP-complete problems, though no-one has yet been able to rigorously prove this and there remains the possibility that a polynomial-time algorithm will one day emerge. However unlikely this is, today I would like to invite LW to play a game I played with some colleagues called what-would-you-do-with-a-polynomial-time-solution-to-3SAT? 3SAT is, of course, one of the most famous of the NP-complete problems, and a solution to 3SAT would also constitute a solution to *all* the problems in NP. This includes lots of fun planning problems (e.g. travelling salesman) as well as the problem of performing exact inference in (general) Bayesian networks. What's the most fun you could have?
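For concreteness, here is the interface such a solution would have to implement, shown with an exponential-time brute-force body -- exactly the thing the hypothetical polynomial-time algorithm would replace. The encoding conventions (clauses as lists of signed integers) are my own, not from any particular solver.

```python
from itertools import product

# Brute-force 3-SAT. A clause is a list of nonzero ints:
# k means variable k is true, -k means variable k is false.
def solve_3sat(clauses, n_vars):
    # Try every assignment of the n_vars variables (exponential time).
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits  # a satisfying assignment
    return None  # unsatisfiable

# (x1 or x2 or ~x3) and (~x1 or x3 or x3) and (~x2 or ~x3 or x1)
clauses = [[1, 2, -3], [-1, 3, 3], [-2, -3, 1]]
assert solve_3sat(clauses, 3) is not None      # satisfiable
assert solve_3sat([[1, 1, 1], [-1, -1, -1]], 1) is None  # x1 and ~x1
```

The game, then, is what you would plug into `solve_3sat` if its running time suddenly became polynomial: any NP problem reduces to 3SAT in polynomial time, so travelling-salesman instances and Bayesian-network inference queries all fit through this one function.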

[link] flowchart for rational discussions

0 alexflint 05 April 2011 09:14AM

I came across the following flow chart on a climate change blog and thought LWers would be interested to comment:

http://thoughtcatalog.com/wp-content/uploads/2011/03/A-Flowchart-to-Help-You-Determine-if-Yoursquore-Having-a-Rational-Discussion.jpg

The AI-box for hunter-gatherers

9 alexflint 02 April 2011 12:09PM

The following is a minor curiosity that occurred to me regarding real-world analogies to the AI-box concept.

Fundamentally, the reason that we fear a randomly-chosen super-intelligent AI is twofold:

  1. It would be smarter than us, so it could outwit us no matter what its goals.
  2. We have no reason to expect its goals to exactly coincide with ours, so we expect its actions to be detrimental to us.

Now, let's say you're playing a game with a wide range of outcomes, over which you have a well-defined utility function. Let's say you can choose a partner from some pool of potential partners, each with a random utility function, and that the intelligence of each player is known to you. It would be unwise to choose a player more intelligent than yourself because:

  1. They could outwit you.
  2. You have no reason to expect their goals to coincide with yours.

On the other hand, if you pick a less intelligent player then perhaps you could trick them into furthering your own goals, or at least ascertain whether their motives coincide with yours. At the very least you would be able to keep them from subverting your own goal-directed actions.

I conjecture that this explains why people sometimes fear those more intelligent than themselves, and also why people sometimes act dumb. Imagine a hunter-gatherer group considering inviting an outsider to join them. If the outsider's motives are uncertain, then the more intelligent the outsider, the less well-advised the group would be to let them in.

In fact, a situation analogous to an AI-box could arise:

Group member 1: We should let this intelligent outsider in, but we should keep tabs on them in case they act against us.

Group member 2: But the fact that they're more intelligent than us means that you *can't* expect to keep tabs on them.

Group member 1: But couldn't we set a test of their motives and only allow them in if they prove to have motives coinciding with ours?

I want a better memory.

20 alexflint 02 April 2011 11:36AM

I suspect that forgetfulness is the single largest hindrance to me improving my rationality. This isn't something I've seen others report on LessWrong, so I'm suspicious that I'm in some kind of self-serving spiral, or that I'm doing something obviously wrong. So, I'm seeking feedback on (a) whether the above statement is true -- whether forgetfulness is likely to really be a dominant hindrance; (b) what I can do about it; and (c) why others haven't reported this.

Ways that I suspect forgetfulness harms me:

  • I forget past experiences that would otherwise allow me to observe correlations and extrapolate time-series style. Did my mood improve last time I called this friend? Was I more productive during that period where I got less sleep?
  • I fail to recall evidence that would be salient when evaluating probabilities. This includes evidence relevant not only to coffee-time discussions on abstract topics, but also questions that occur to me about akrasia and the like.
  • I can't remember names of authors and papers that could be relevant to ideas that arise during my research. Searching online for papers is fine for major literature reviews, but enormously expensive to evaluate hour-by-hour conjectures that occur to me.
  • When I encounter the same problem multiple times, I forget how I solved the problem last time.
  • When a new productivity/social/rationality strategy *does* work, I forget to keep using it. This isn't only about laziness: sometimes I actually wonder explicitly whether strategy X worked, and I can't remember.
  • I forget names, faces, places, facts and figures. But I understand this one is quite common.
  • I forget all the ways that forgetfulness frustrates me.

Steps I've taken:

  • I keep an elaborate diary with appointments like "bring USB drive home from work" and "purchase bread en route to Sam's house".
  • I keep an elaborate e-notebook where I try to record my "brain state" so that I can more quickly pick up where I left off with my work.
  • Every time I solve a technical problem, I write it down.
  • I use a memory app called Mnemosyne to memorize foreign-language words, names of jitsu techniques, etc.

The write-everything-down strategy has helped some, but it's *orders of magnitude less effective* than recalling stuff right from my brain. It's like replacing a CPU's L1 cache with a magnetic hard drive and expecting performance not to drop.
