Wei_Dai comments on Common mistakes people make when thinking about decision theory - Less Wrong

Post author: cousin_it · 27 March 2012 08:03PM · 40 points




Comment author: Wei_Dai · 28 March 2012 10:24:45AM · 4 points

In any case, I think people listened to Eliezer more because he said things like "I have worked out a mathematical analysis of these confusing problems", not just "my intuition says the basic assumptions of game theory don't sound right".

Personally, I thought he made a good case that the basic assumptions of game theory aren't right, or rather won't be right in a future where superintelligent AIs know each other's source code. I don't think I would have been particularly interested if he had just said "these non-standard assumptions lead to some cool math," since I don't have that much interest in math qua math.
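To make that assumption concrete, here is a minimal sketch in the spirit of program-equilibrium results for the one-shot Prisoner's Dilemma, assuming agents are programs that receive each other's source code; clique_bot and defect_bot are hypothetical names for illustration, not the construction from "Re-formalizing PD":

```python
# Hypothetical illustration: a one-shot Prisoner's Dilemma in which each
# agent is a program that can read the other's source code.
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source is an exact copy of mine."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Ignore the opponent's source and always defect."""
    return "D"

def play(agent_a, agent_b):
    """One round: each agent decides after seeing the other's source."""
    return (agent_a(inspect.getsource(agent_b)),
            agent_b(inspect.getsource(agent_a)))

print(play(clique_bot, clique_bot))  # ('C', 'C'): mutual cooperation
print(play(clique_bot, defect_bot))  # ('D', 'D'): no exploitation
```

Under these assumptions, clique_bot cooperates with an exact copy of itself yet cannot be exploited by a defector, an outcome the classical mutual-defection analysis doesn't predict.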

Similarly, I explore other seemingly strange assumptions, like the ones in Newcomb's Problem or Counterfactual Mugging, because I think they are abstracted/simplified versions of real problems in FAI design and ethics, designed to isolate and clarify particular difficulties, not because each is "interesting when taken on its own terms".
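For concreteness, here is a hypothetical worked example (not from the thread itself) of the tension in Newcomb's Problem, using the standard payoffs: the predictor fills the opaque box with $1,000,000 iff it predicts one-boxing, and the transparent box always holds $1,000.

```python
# Hypothetical worked example: evidential expected values in Newcomb's
# Problem with predictor accuracy p and the standard payoffs.
def ev_one_box(p: float) -> float:
    # With probability p the predictor foresaw one-boxing and filled the box.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # With probability p the predictor foresaw two-boxing and left it empty;
    # the two-boxer keeps the guaranteed $1,000 either way.
    return (1 - p) * 1_000_000 + 1_000

p = 0.99
print(ev_one_box(p))  # 990000.0
print(ev_two_box(p))  # 11000.0
```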

I guess it appears to you that you are working on these problems because they seem like interesting math, or "interesting when taken on its own terms", but I wonder why you find these particular math problems or assumptions interesting, and not the countless others you could choose instead. Maybe the part of your brain that outputs "interesting" is subconsciously evaluating importance and relevance?

Comment author: cousin_it · 28 March 2012 11:00:49AM · 9 points

I guess it appears to you that you are working on these problems because they seem like interesting math, or "interesting when taken on its own terms", but I wonder why you find these particular math problems or assumptions interesting, and not the countless others you could choose instead. Maybe the part of your brain that outputs "interesting" is subconsciously evaluating importance and relevance?

An even more likely explanation is that my mind evaluates reputation gained per unit of effort. Academic math is really crowded; chances are that no one would read my papers anyway. Being in a frustratingly informal field with a lot of pent-up demand for formality allows me to get many people interested in my posts, while my mathematician friends get zero feedback on their publications. Of course it didn't feel so cynical from the inside; it felt more like a growing interest fueled by constant encouragement from the community. If "Re-formalizing PD" had met with a cold reception, I don't think I'd be doing this now.

Comment author: Wei_Dai · 28 March 2012 11:13:04AM · 5 points

In that case you're essentially outsourcing your "interestingness" evaluation to the SIAI/LW community, and I think we are basing it mostly on relevance to FAI.

Comment author: cousin_it · 28 March 2012 01:48:55PM · 3 points

Yeah. Though that doesn't make me adopt FAI as my own primary motivation, just like enjoying sex doesn't make me adopt genetic fitness as my primary motivation.

Comment author: Wei_Dai · 28 March 2012 09:03:30PM · 1 point

My point is that your advice isn't appropriate for everyone. People who do care about FAI or other goals besides community approval should think/argue about assumptions. Of course one could overdo that and waste too much time, but such people clearly can't just work on whatever problems seem likely to offer the largest social reward per unit of effort.

Though that doesn't make me adopt FAI as my own primary motivation

What if we rewarded you for adopting FAI as your primary motivation? :)

Comment author: cousin_it · 28 March 2012 09:46:17PM · 0 points

What if we rewarded you for adopting FAI as your primary motivation? :)

That sounds sideways. Wouldn't that make the reward my primary motivation? =)

Comment author: Wei_Dai · 28 March 2012 10:19:57PM · 4 points

No, I mean: what if we offered you rewards for changing your terminal goals, so that you'd continue to be motivated by FAI even after the rewards end? You should take that deal if we can offer big enough rewards and your discount rate is high enough, right? (Previous related thread.)
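As a toy illustration of the "big enough rewards, high enough discount rate" condition (made-up numbers, assuming exponential discounting):

```python
# Hypothetical numbers: compare keeping your current terminal goals against
# taking an immediate reward plus permanently modified (less valued) goals.
def present_value(per_period: float, discount: float, periods: int = 100) -> float:
    """Discounted sum of a constant per-period value stream."""
    return sum(per_period * discount**t for t in range(periods))

current, modified, reward = 10.0, 8.0, 30.0  # value per period, plus a lump sum

for discount in (0.99, 0.5):  # low vs. high effective discount rate
    keep = present_value(current, discount)
    switch = reward + present_value(modified, discount)
    print(discount, round(keep, 1), round(switch, 1),
          "take the deal" if switch > keep else "refuse")
```

With a patient agent (discount factor 0.99) the future stream dominates and the deal is refused; with heavy discounting (factor 0.5) the up-front reward wins, which is the "discount rate is high enough" condition.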

Comment author: roystgnr · 28 March 2012 10:36:59PM · 2 points

You're trying to affect the motivation of a decision theory researcher by offering a transaction whose acceptance is itself a tricky decision theory problem?

Upvoted for hilarious metaness.

Now, all we need to do is figure out how humans can modify their own source code and verify those modifications in others...

Comment author: cousin_it · 29 March 2012 08:13:52AM · 0 points

That could work, but how would that affect my behavior? We don't seem to have any viable mathematical attacks on FAI-related matters except this one.