arthurlewis comments on Mandatory Secret Identities - Less Wrong

28 points · Post author: Eliezer_Yudkowsky 08 April 2009 06:10PM




Comment author: Eliezer_Yudkowsky 08 April 2009 08:03:33PM 14 points

If you were to say tomorrow "I've been lying about the whole AI programmer thing; I actually live in my parents' basement and have never done anything worthwhile in any non-rationality field in my entire life," then would I have to revise my opinion that you're a very good rationality teacher? Would I have to deny having learned really valuable things from you?

But the fact that reality doesn't disentangle this way is, in a sense, the whole point - it's not a coincidence that things are the way they are.

If we get far enough to have external real-world standards like those you're describing, then yes we can toss the "secret identity" thing out the window, so long as we don't have the problem of most good students wanting only to become rationality instructors themselves as opposed to going into other careers (but a teacher who raised their students this way would suffer on the 'accomplished students' metric, etc.). But on the other hand I still suspect that the instructors with secret identities would be revealed to do better.

Comment author: arthurlewis 09 April 2009 02:59:41PM 2 points

Are you saying that teachers who don't externally practice the thing they're teaching won't make good teachers? Or that they're not worthy of respect at all? If the former, I agree with Yvain and others that we have better metrics for determining teacher quality. If the latter, I'm not sure why this would be the case. The comparison to literary critics doesn't answer that question; it just invokes our cached thoughts about literary critics. What's the problem with people wanting to be literary critics?

The post proposes a required formula for respect, but it never explains what quantity that formula intends to maximize. What's the goal here?