About a month ago I accidentally found out that the LW idea of quining cooperation is already studied in academia:

1) Moshe Tennenholtz's 2004 paper Program Equilibrium describes the idea of programs cooperating in the Prisoner's Dilemma by inspecting each other's source code (a minimal sketch of the idea appears after this list).

2) Lance Fortnow's 2009 paper Program Equilibria and Discounted Computation Time describes an analogue of Benja Fallenstein's idea for implementing correlated play, among other things.

3) Peters and Szentes's 2012 paper Definable and Contractible Contracts studies quining cooperation over a wider class of definable (not just computable) functions.
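
To make the source-code-inspection idea concrete, here is a minimal Python sketch of the "mirror" strategy from this line of work. This is my own illustration, not code from Tennenholtz's paper (which is stated game-theoretically); the helper `play` and the bot names are made up:

```python
# Illustrative sketch of program equilibrium: each player submits a
# program that is given the *source code* of the opponent's program.
import inspect

def mirror_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is running this exact program."""
    my_source = inspect.getsource(mirror_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent's source."""
    return "D"

def play(prog_a, prog_b):
    """Run each program on the other's source and return the two moves."""
    return (prog_a(inspect.getsource(prog_b)),
            prog_b(inspect.getsource(prog_a)))

print(play(mirror_bot, mirror_bot))  # ('C', 'C'): mutual cooperation
print(play(mirror_bot, defect_bot))  # ('D', 'D'): deviation is punished
```

The point is that mutual cooperation becomes an equilibrium of the game over programs: submitting anything other than mirror_bot changes the source the opponent sees, and is punished with defection.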

As far as I know, academia still hasn't discovered Loebian cooperation or the subsequent ideas about formal models of UDT, but I might easily be wrong about that. In any case, the episode has given me a mini-crisis of faith, and a new appreciation of academia. That was a big part of the motivation for my previous post.

[-]gwern

In any case, the episode has given me a mini-crisis of faith, and a new appreciation of academia.

What can one say? Academia is terrible at marketing, so you have to exercise the neglected virtue of scholarship (http://lesswrong.com/lw/3m3/the_neglected_virtue_of_scholarship/) yourself. This is far from a new idea to me, which is why I find the occasional realization like this amusing. (Well, yeah. Why do you think people keep getting told to use Google or do some research? NIH indeed.)

Academia is terrible at marketing

Good point, I'll keep it in mind.

Can you give some examples of your recent realizations like that? ETA: oh wait, maybe you meant other people's realizations, in which case never mind.

The illustrations I find most vivid are from the past. Pick some particular technical idea that seems like a "big deal" to you, and then look at its penetration into academic and popular consciousness over time. Diagonalization arguments are a reasonable example. Cantor published over a century ago. The argument is simple enough to explain to an interested 11-year-old in less than two hours, and it repairs basic confusions that many reasonably smart people have about "infinity"... yet it remains nearly unknown among non-mathematicians to the present day.
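
As a sketch of just how little machinery the argument needs (my own illustration, in Python for concreteness):

```python
# Cantor's diagonal argument, computationally flavored: given ANY
# enumeration of binary sequences (seq(i, j) = j-th bit of the i-th
# sequence), the diagonal flip differs from sequence i at position i,
# so it cannot appear anywhere in the enumeration.
def diagonal(seq):
    return lambda j: 1 - seq(j, j)

# Example enumeration: the i-th sequence is the binary expansion of i.
seq = lambda i, j: (i >> j) & 1
missing = diagonal(seq)
assert all(missing(i) != seq(i, i) for i in range(1000))
```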

Now imagine the tiny fraction of the population in 1910, 1930, and 1950 who knew about it, and the tricks they could do that "mere mortals" could not. That kind of stuff propagates slowly. Maybe faster now, what with the internet and paywalls slowly coming down and Wikipedia and so on? But still pretty slowly. Kolmogorov complexity has been an available thought for half a century already!

It would not surprise me if every insight needed for AGI had already been published somewhere, but the separate ideas have just not yet been noticed, collated, synthesized, and reduced to practice. I find this thought sobering.

One of the real tricks is to know what the keywords are, and the best way I know to do that is to participate in academic specialties enough to pick up the "chalkboard culture" of a new research group. Grad school (in addition to all the other stuff) is, in some sense, joining a research group and hanging out with them enough to pick up their chalkboard culture. Not having access to this is one of the problems an autodidact trying to make it with book learning and nothing but book learning can run into. Access to chalkboard cultures is partly valuable for helping you see what other smart people think are important things to have read. There's other value as well (like the way a chalkboard culture is sometimes a locus of knowledge production), but the advice on keywords and authors is a good chunk of the value.

My question was more about examples of unwitting rediscovery, like those described by gwern. Good point about knowing the keywords :-)

[-][anonymous]

It would not surprise me if every insight needed for AGI had already been published somewhere, but the separate ideas have just not yet been noticed, collated, synthesized, and reduced to practice. I find this thought sobering.

John Sowa (2012), The Goal of Language Understanding.

Quote: "All logic is a disciplined special case of analogical reasoning" (page 128).

He knows ;)

[This comment is no longer endorsed by its author]
[-]gwern

Actually, I had one just last night. I was starting an anthology on Buddhist philosophy (Empty Words) which I had downloaded from library.nu, and the very first essay was arguing that Nagarjuna's Verses on the Heart of the Middle Way and Sextus Empiricus's Against books espoused a similar skeptical approach to metaphysical commitments (drawing on Kripke & Wittgenstein) - the exact approach I had mused upon at length before, with some of the same passages I would have chosen.

'Huh', I thought, 'I really should have expected someone to have thought that before, but I guess I didn't because I was surprised and angered (since "my" idea had been stolen) to realize where this essay was going.'

Fortunately, I hadn't gotten around to carefully studying and making notes towards an essay, so I didn't lose too much time to my failure to check whether the idea had already been done. (Unlike the time I re-invented longevity insurance, e.g.)

I was surprised and angered (since "my" idea had been stolen)

My feelings are the opposite. I become exceedingly pleased and amused to find that someone else did it first. Aside from it being instant vindication, it just makes me happy. I feel a sense of kinship knowing that someone, somewhere, had the same data I had and made the same exact logical leap that I did. It's like we're research buddies across time.

Though I do feel silly and a little mad at myself when I've wasted a lot of time synthesizing something I could have just researched instead.

Thanks for the examples! And also for the footnote about Feynman and Hillis reinventing Kimura's work on population genetics.

Wait, I thought that library.nu was shut down back in the spring. What am I missing?

I never said when I downloaded it.

That's what I figured, but I hoped I was wrong, and there's still a super-secret beer-lovers' club which opens if you say "iftahh ya simsim" thrice or something. Assuming you would let me in on a secret, of course.

[-]gwern

Unfortunately, if there was such a secret beer-lovers' club, I couldn't tell a relative stranger like you about it. (Ironically, this is also what I would say if there was no such thing.)

[-][anonymous]

Me too. Without library.nu, research is significantly harder. If any LWer has an invite to a private repository/tracker for scholarly books/textbooks, please share with me.

Library Genesis.

You seem to have quite good research skills. Do you have any advice for someone trying to find out whether academia has already considered something, especially if that someone doesn't know much about the field in question?

Try a bunch of queries; read good-looking papers, not for their results but for their discussion of background information. There's no real good solution to the problem of unknown knowns: "I'm sure this has been researched, but what's the exact phrase or name it's been given in academia?" I suspect this is one of the benefits of being in the community - when you discuss something or mention an idea, they can say "ah yes, X", and now you have not the problem of an unknown known but the much easier problem of a known unknown, which you can just punch into Google Scholar and get to work.

read good-looking papers, not for their results but for their discussion of background information.

That's a really good idea! I usually skip introductions and go straight for the "meat" of the results. I just realized that habit has bitten me several times already; I guess it's time to unlearn it.

Whenever I have an idea that I think may be new, I always do some research to see if other people have already written about it. (I learned this lesson after an embarrassing episode when I was younger.) Did you do that and fail to find these papers (didn't use the right keywords?), or did you just not look?

I looked but didn't use the right keywords. Patrick LaVictoire, a math postdoc, also did a literature search about quining cooperation at my request and didn't find these papers either. Then I stumbled on them by accident while looking for something else.

After that I did a literature search about the next few UDT results and didn't find anything, but I'm not confident that these results are new either. Can you give it a try?

Out of curiosity, via what route did you run across these?

Don't remember exactly, but I think I was looking for papers on decision-making with bounded computation time, and Fortnow's paper came up.

Well, cooperating in the Prisoner's Dilemma against an identical agent is a simple special case of UDT reasoning, and people are writing math papers about it as you can see. Besides, if UDT is trivial, how come it has unsolved problems? Off the top of my head: spurious proofs, agent simulates predictor, logical uncertainty... The fact that already solved problems look trivial isn't strong evidence to me, because I know how much effort it takes to get something like Counterfactual Mugging from the "impossible" pile into the "trivial" pile. Actually I'm quite flattered when people on LW now consider UDT solutions trivial, e.g. in the discussion of Psy-Kosh's problem.

One simple heuristic formula I've heard for approximating problem difficulty is this:

D = log(T)*N

where D is difficulty, T is the length of time the problem has stayed open since being posed, and N is the average number of people working on it.

Rationale:

Since we expect P != NP, finding proofs is an exponential search problem. That means in time T we can only cover problems whose shortest proofs have length about log(T). Assuming linear scaling from adding more processors, we get a factor of N (in reality, scaling is sublinear, but this is a heuristic...).

The point is that just because a field has open problems, doesn't mean those problems are necessarily very hard. It could be that either they haven't been open very long, or not enough people care.

If D is the proof length, surely your reasoning should lead to something like log(T*N) rather than log(T)*N? It seems to me that N people working for T days is T*N person-days, so the two variables should be mostly exchangeable; it's weird to see only one of them under the logarithm...
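
To make the disagreement concrete, here is a quick numerical sketch (my own illustration; both formulas use the natural log) of how far apart the two heuristics land:

```python
import math

def difficulty_original(T, N):
    # The parent comment's heuristic: D = log(T) * N
    return math.log(T) * N

def difficulty_persondays(T, N):
    # The alternative suggested here: D = log(T * N)
    return math.log(T * N)

# The two agree when N = 1 but diverge sharply as N grows:
for T, N in [(10, 1), (10, 100), (1000, 1), (1000, 100)]:
    print(f"T={T:5} N={N:3}  log(T)*N={difficulty_original(T, N):7.1f}"
          f"  log(T*N)={difficulty_persondays(T, N):5.1f}")
```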

[-][anonymous]

Since we expect P != NP, finding proofs is an exponential search problem.

P != NP does not imply that one can't do better than an exponential search for finding proofs. For instance, an O(n^log(n)) algorithm for finding proofs would not contradict P != NP.

Is the default prior on academic competence on LW low/unfavorable? If so, why?

Availability bias from being exposed to bad examples of scholarship?

I wonder if EY is partly to blame.

Probably. He has an unusually negative view of academia, doesn't have much firsthand experience with it (particularly elite academia), and his predictions have tended to be too negative, e.g. regarding the Higgs boson, decision theory, and the skill of top academic AI/machine learning researchers. That inevitably slips into things.

[-][anonymous]

I do have some firsthand experience with it and still feel somewhat negatively about it, though not to the degree EY does.

[-][anonymous]

At the beginning of this year I dove into psychology in my free time. I skimmed hundreds, maybe thousands of papers. I expected to find awesome useful ideas. Let me try to explain how much crap I found instead.

It's easy to nitpick any specific piece of psychology. Fodor argued about empty labels that made no predictions. Computational models of memory were neither efficient nor biologically plausible. The concept literature is a mess of people arguing over the extent to which concepts are built on rules, or relations, or exemplars, or prototypes, or affordances, or schemas, or codelets, or perceptual symbols, or analogies, without ever finding the underlying math which explains why concepts are useful, or when and how any of those kinds of underlying concept-stuff could work.

But maybe those are examples of people failing to make unified theories out of their experimental results. What if we focus on the experiments? Here's the setup of a fairly recent nominal combination experiment: A child is asked to picture a zebra clam, and then to point at the image most similar to their imagined interpretation of the phrase. This is not an unusual experiment for psychology. Asking people what they think, and thereby distilling their entire thought process down to one data point, is the norm in psychology, not the exception. For example, the entire personality psychology research program was built on self- and peer-evaluations, just asking people "are you a pretty confident person?" and so on. That alone is amazing. It's like using the motion of an electron you don't understand to predict the motion of a proton you don't understand. But back to the zebra clam. No one decided to try that study because they thought it would answer any foundational questions of their field. It wasn't valuable information; it was just an easy test which they could do, one that sounded relevant to their favorite child's-interpretation-of-zebra-clam hypothesis.

That's a taste of what I see in psychology research: hundreds of studies that never should have been done, a field that doesn't know what observables to measure (I don't know either! Brains are hard; I'm not saying I would be a great psychologist, just that psychology happens to be bad at present), and fluff arguments about intuitive theories.

Those were some of the fruits of my last research binge. Now I'm looking at logic, decision theory, and game theory. The situation is different there, but not much better. That said, while academics are generally incompetent, they're not relatively incompetent. I don't know anywhere to find people who can reliably solve new, alien problems.

[This comment is no longer endorsed by its author]

For example, the entire personality psychology research program was built on self- and peer-evaluations, just asking people "are you a pretty confident person?" and so on. That alone is amazing. It's like using the motion of an electron you don't understand to predict the motion of a proton you don't understand.

You say that as if it were an obviously bad thing. But, well, if you don't understand the motion of electrons or protons but want to figure them out, what else can you do but start conducting easy experiments and looking for regularities that you could build your theory on?

And yes, many research programs are partially built on self- and peer-evaluations. But those evaluations are also checked for stability between rating occasions and between raters, and then looked at in order to find stable correlations between the evaluation and objective measures, ones which persist after other factors have been controlled for.

Sure, it might be better if you could find some completely objective measures to start out from... but the human brain has great tools to evaluate other human brains. Humans have effectively been handed a ready-made toolkit, parts of which evolved for the express purpose of extracting complex regularities about the behavior of themselves and others and distilling their observations into a highly compressed format. If you devise a measure which achieves good consistency and inter-rater reliability, that's strong evidence of something, and you'd be crazy not to take advantage of it to try to figure out what it means.
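
For concreteness, here is a minimal sketch (my own illustration, not from any particular study) of Cronbach's alpha, one standard statistic for quantifying the kind of consistency described above:

```python
def cronbach_alpha(raters):
    """Internal-consistency estimate for k raters scoring the same subjects.

    raters: list of k lists, where raters[i][j] is rater i's score for
    subject j. Values near 1 indicate the raters largely agree.
    """
    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    k = len(raters)
    totals = [sum(scores) for scores in zip(*raters)]  # per-subject sums
    return (k / (k - 1)) * (1 - sum(variance(r) for r in raters)
                            / variance(totals))

# Three raters who largely agree about five subjects:
print(cronbach_alpha([[4, 2, 5, 3, 1],
                      [5, 2, 4, 3, 1],
                      [4, 3, 5, 2, 1]]))  # ~0.95: high consistency
```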

Now I'm not saying that psychology would be perfect, or even great. There is a lot of crap, and the theories are still a mess. But I don't think it's as hopelessly bad as you're implying it to be, either.

What do you think about statistics/machine learning? Also what exactly do you mean when you say the situation in "logic" is scarcely better than in psychology? Do you mean mathematical logic/model theory? GOFAI? Logic in analytic philosophy?

One area where psychology journals are said to be leading the way is encouraging a discussion of effect sizes. It's shocking to me that all journals don't do this.

[-][anonymous]

I think Eliezer had too low a prior for academic competence in physics, which rubbed off on everyone else because he can be pretty damn convincing. His Bayesian Conspiracy stories show a small dedicated group working much faster than academia. The quantum physics sequence made a similar point. He has since changed his mind a little. IMO it's not enough, and most of LW still doesn't appreciate how competent you have to be to make good new science.

[This comment is no longer endorsed by its author]

a confused group can have all sorts of variations of halting problem or incompleteness theorem as 'open problems'.

If you're talking about the LW decision theory group rather than some hypothetical group, can you show how any of the open problems above is equivalent to the halting problem or one of the incompleteness theorems?

Here's a handy, very short introduction to game theory and also program equilibrium:

Wooldridge - Computation and the Prisoner's Dilemma.

Other interesting papers I found by checking papers that cited Tennenholtz (2004):

Cognitive Equilibrium:

We show that whenever a decision maker reasons about an optimal decision he is able to find one, even with non-transitive preferences. The existence of a reasoning process allows him to strategically manipulate how he reasons. A reasoning strategy that is robust against (finite) deviations is captured by the notion of cognitive equilibrium. We show that a cognitive equilibrium exists under all complete preferences, and characterize outcomes that can be implemented within it. Cognitive equilibria employ complex cognitive strategies. Simple strategies suffice only under transitive preferences. Robustness of the model is evaluated in the language of von Neumann-Morgenstern stable sets.

Adversarial Leakage in Games:

While the maximin strategy has become the standard, and most agreed-upon solution for decision-making in adversarial settings, as discussed in game theory, computer science and other disciplines, its power arises from the use of mixed strategies, a.k.a. probabilistic algorithms. Nevertheless, in adversarial settings we face the risk of information leakage about the actual strategy instantiation. Hence, real robust algorithms should take information leakage into account. To address this fundamental issue, we introduce the study of adversarial leakage in games. We consider two models of leakage. In both of them the adversary is able to learn the value of b binary predicates about the strategy instantiation. In one of the models these predicates are selected after the decision-maker announces its probabilistic algorithm and in the other one they are decided in advance. We give tight results about the effects of adversarial leakage in general zero-sum games with binary payoffs as a function of the level of leakage captured by b in both models. We also compare the power of adversarial leakage in the two models and the robustness of the original maximin strategies of games to adversarial leakage. Finally, we study the computation of optimal strategies for adversarial leakage models. Together, our study introduces a new framework for robust decision-making, and provides rigorous fundamental understanding of its properties.

These papers don't seem relevant to quining cooperation, because they don't focus on computation or quining. There's no shortage of papers proposing new equilibrium concepts and new games to justify them with; our literature search also turned up many of them, and my prior for their usefulness is low.

No, I didn't mean to imply that these papers focus on quining.