In any case, I think people listened to Eliezer more because he said things like "I have worked out a mathematical analysis of these confusing problems", not just "my intuition says the basic assumptions of game theory don't sound right".
Personally, I thought he made a good case that the basic assumptions of game theory aren't right, or rather won't be right in a future where superintelligent AIs know each other's source code. I don't think I would have been particularly interested if he'd just said "these non-standard assumptions lead to some cool math", since I don't have much interest in math qua math.
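To make that assumption concrete: in the "program equilibrium" setting (Tennenholtz's term), players submit programs that can read each other's source code. Here is a minimal sketch of my own, not anything from Eliezer's analysis, showing the standard "cooperate with exact copies" trick in a one-shot Prisoner's Dilemma:

```python
import inspect

# One-shot Prisoner's Dilemma payoffs, keyed by (my_move, their_move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source code is identical to mine."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Classical game theory's recommendation: always defect."""
    return "D"

def play(a, b):
    """Run one game where each program sees the other's source."""
    move_a = a(inspect.getsource(b))
    move_b = b(inspect.getsource(a))
    return PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]

print(play(clique_bot, clique_bot))  # (3, 3): copies cooperate
print(play(clique_bot, defect_bot))  # (1, 1): and can't be exploited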
Similarly, I explore other seemingly strange assumptions, like the ones in Newcomb's Problem or Counterfactual Mugging, because I think they are abstracted/simplified versions of real problems in FAI design and ethics, designed to isolate and clarify particular difficulties, not because each is "interesting when taken on its own terms".
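Counterfactual Mugging, for instance, strips the question "should you honor a commitment you never got to make?" down to a single coin flip. A quick sketch of the usual $100/$10,000 formulation (with a perfect predictor assumed) shows what's at stake:

```python
# Counterfactual Mugging: Omega flips a fair coin. On tails it asks
# you for $100; on heads it pays you $10,000 iff it predicts you
# would have paid on tails. Expected value of each policy, assuming
# the predictor is perfect:
def expected_value(would_pay: bool) -> float:
    tails_outcome = -100 if would_pay else 0
    heads_outcome = 10_000 if would_pay else 0
    return 0.5 * tails_outcome + 0.5 * heads_outcome

print(expected_value(True))   # 4950.0 -- the paying policy wins on average
print(expected_value(False))  # 0.0
```

The difficulty the puzzle isolates is that by the time you're asked, the coin has already come up tails and the $10,000 branch is counterfactual; the calculation above only settles which policy you'd want to have adopted in advance.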
I guess it appears to you that you are working on these problems because they seem like interesting math, or "interesting when taken on its own terms", but I wonder why you find these particular math problems or assumptions interesting, and not the countless others you could choose instead. Maybe the part of your brain that outputs "interesting" is subconsciously evaluating importance and relevance?
An even more likely explanation is that my mind evaluates reputation gained per unit of effort. Academic math is really crowded, chances ...
From my experience reading and talking about decision theory on LW, it seems that many of the unproductive comments in these discussions can be attributed to a handful of common mistakes.
Mistake #1: Arguing about assumptions
The main reason why I took so long to understand Newcomb's Problem and Counterfactual Mugging was my insistence on denying the assumptions behind these puzzles. I could have saved months if I'd just said to myself, okay, is this direction of inquiry interesting when taken on its own terms?
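Taking Newcomb's Problem on its own terms looks like this: grant the predictor its reliability as a premise and just compute. A toy sketch of mine, with the standard $1,000/$1,000,000 payoffs and a perfect predictor assumed:

```python
# Newcomb's Problem with the predictor's reliability taken as given:
# the opaque box's contents are determined by what you will choose.
def payout(one_box: bool) -> int:
    opaque = 1_000_000 if one_box else 0  # predictor saw your choice coming
    transparent = 1_000
    return opaque if one_box else opaque + transparent

print(payout(True))   # 1000000
print(payout(False))  # 1000
```

Whether such a predictor could exist is a separate question; the puzzle is which decision procedure endorses the winning move once the premise is granted.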
Many fruitful assumptions seemed divorced from real life at first: people dismissed the study of electromagnetism as an impractical toy, and considered number theory hopelessly abstract until cryptography arrived. The only way to make intellectual progress, either individually or as a group, is to explore the implications of interesting assumptions wherever they might lead. Unfortunately, people love to argue about assumptions instead of getting anything done, even though they can't really judge an assumption before exploring its implications in detail.
Several smart people on LW are repeating my exact mistake about Newcomb's Problem now, and others find ways to commit the same mistake when looking at our newer ideas. It's so frustrating and uninteresting to read yet another comment saying my assumptions look unintuitive or unphysical or irrelevant to FAI or whatever. I'm not against criticism, but somehow such comments never blossom into interesting conversations, and that's reason enough to caution you against the way of thinking that causes them.
Mistake #2: Stopping when your idea seems good enough
There's a handful of ideas that decision theory newbies rediscover again and again: pointing out indexical uncertainty as the solution to Newcomb's Problem, or adding randomness to models of UDT to eliminate spurious proofs. These ideas don't work and don't lead anywhere interesting, but that's hard to notice when you've just had the flash of insight and want to share it with the world.
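To see what "spurious proofs" means here, consider the minimal model that usually comes up in these discussions (my paraphrase of the standard setup, not a quote from anyone). The agent searches PA for proofs of sentences of the form "A() = a → U() = u" and takes the action with the best proven utility:

\[
A() \;=\; \operatorname*{arg\,max}_{a \in \{1,2\}} \; \max \{\, u : \mathrm{PA} \vdash (A() = a \rightarrow U() = u) \,\}
\]

The trouble is that if PA proves A() ≠ 1, it also proves "A() = 1 → U() = u" vacuously, for every u:

\[
\mathrm{PA} \vdash A() \neq 1 \quad\Longrightarrow\quad \mathrm{PA} \vdash (A() = 1 \rightarrow U() = u) \ \text{for every } u,
\]

so an unlucky proof-search order can find a "spurious" implication that scares the agent away from action 1 and thereby makes itself sound. Adding ε-exploration blocks the vacuous conditional, because "A() = 1" is then no longer provably false. That's the insight that keeps getting rediscovered; the hard part is making anything further come of it.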
A good strategy in such situations is to always push a little bit past the point where you have everything figured out. Take one extra step and ask yourself: "Can I make this idea precise?" What are its first few implications? What are the obvious extensions? If your result seems to contradict what's already known, work through some of the contradictions yourself. Even if you don't find any mistakes in your idea, you will surely find new formal things to say about it, which always helps.
Mistake #2A: Stopping when your idea actually is good enough
I didn't want to name any names in this post because my status on LW puts me in something of a position of power, but there's a name I can name with a clear conscience. In 2009, Eliezer wrote:

"Formally you'd use a Godelian diagonal to write: Argmax[A in Actions] in Sum[O in Outcomes](Utility(O)*P(this computation yields A []-> O|rest of universe))"
Of course that's not a newbie mistake at all, but an awesome and fruitful idea! As it happens, writing out that Godelian diagonal immediately leads to all sorts of puzzling questions like "but what does it actually do? and how do we prove it?", and eventually to all the decision theory research we're doing now. Given Eliezer's intelligence, he probably could have preempted most of our results. Instead he just declared the problem solved. Maybe he thought he was already at 0.95 formality and that going to 1.0 would be a trivial step? I don't want to insinuate, but IMO he made a mistake.
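In modern notation, the quoted formula reads (my transcription; the ASCII "[]->" is the counterfactual box-arrow):

\[
A() \;=\; \operatorname*{arg\,max}_{a \,\in\, \mathrm{Actions}} \ \sum_{o \,\in\, \mathrm{Outcomes}} \mathrm{Utility}(o) \cdot P\big(\text{this computation yields } a \;\Box\!\!\to\; o \,\big|\, \text{rest of universe}\big)
\]

The Godelian diagonal is needed because "this computation" refers to A() itself, so the definition must quine in a description of its own source. And evaluating the counterfactual for actions the computation provably doesn't yield is exactly where the proof-theoretic puzzles, including the spurious proofs above, begin.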
Since, from the inside, this mistake is indistinguishable from the last one, the remedy for it is the same: ask yourself, "Can I make this idea precise?" Whenever you stake out a small area of knowledge and make it amenable to mathematical thinking, you're likely to find new math that has lasting value. When you stop because your not-quite-formal idea already seems good enough, you squander that opportunity.
...
If this post has convinced you to stop making these common mistakes, be warned that it won't necessarily make you happier. As you learn to see more clearly, the first thing you'll see will be a locked door with a sign saying "Research is hard". It's not very scary or heroic; mostly you just stand there feeling stupid about yourself :-)