Eliezer_Yudkowsky comments on A Rationalist's Tale - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (305)
So I'm thinking to myself, around six years ago, "I can at least manage to publish timeless decision theory, right? That's got to be around the safest idea I have, it couldn't get any safer than that while still being at all interesting. I mean, yes, there's these possible ways you could let these ideas eat your brain but who could possibly be smart enough to understand TDT and still manage to fall for that?"
Lesson learned.
And this is what several levels above me looks like? I'm not omnipotent, yet, but I have a deed or two to my name at this point; for example, when I write Harry Potter fanfiction, it reliably ends up as the most popular HP fanfiction on the Internet. (Those of you who didn't get here following HPMOR can rule out selection effects at this point.) Several levels above me should make it noticeably easier to show your power in a third-party-noticeable fashion, and the fact that you can't do so should cause you to question yourself.
It's the opposite of the lesson I usually try to teach, but in this one case I'll say it: it's not the world that's mad, it's you.
This doesn't obviously follow to me. There are skill sets which aren't due to rationality. Your own skill sets may be due in part to better writing capability and general intelligence.
Mad skillz doesn't imply rationality. Lack of demonstrable skillz does strongly decrease the probability of mad rashunalitea.
Reading charitably, he may mean you are a rationalist, and the other visiting fellows were peer aspiring rationalists. Also, he did say "nearly."
Thanks; yeah, I wasn't writing carefully, but I didn't mean to say "I am a significantly better rationalist than anybody else on the planet". I meant to say "there are important subskills of rationality where I seem to be at roughly the SingInst Research Fellow level of rationality and well above the Less Wrong poster level of rationality". My apologies for being so unclear.
I don't think he is "mad", at least not if you press him enough. A few weeks ago I posted the following comment on one of his Facebook submissions:
His reply (emphasis mine):
It seems to me that he's still with the rest of humanity when it comes to what he is doing on a daily basis and his underlying desires.
(You argue that the madness in question, if present, is compartmentalized. The intended sense of "madness" (normal use on LW) includes the case of compartmentalized madness, so your argument doesn't seem to disagree with Eliezer's position.)
((For those who haven't seen it yet: http://lesswrong.com/lw/2q6/compartmentalization_in_epistemic_and/ ))
Belatedly.
Hold on. Motivated by what? If its objectives are only implicit in the structure, then why would these objectives include their self-preservation?
Don't hold yourself responsible when people go funny in the head on TDT-related matters. Quantum mechanics and relativity have turned many more brains to mush; does that mean they shouldn't have been published?
That would be a valid argument against publishing, though of course a relatively weak one. Resist the temptation to make issues one-sided.
I got my intuitions from ADT, not TDT, and I would've gotten all the same ideas from Anna/Steve even if you hadn't popularized decision theory. (The general theme had been around since Wei Dai in the early 2000s, no?) So you shouldn't learn that lesson to too great an extent.
Make something idiotproof and the universe will build a better idiot.
You misinterpreted me, I wasn't claiming to be several levels above you. That's my fault for being unclear.
BTW, this is neat: http://arxiv.org/PS_cache/arxiv/pdf/0804/0804.3678v1.pdf
It's an attempt to better unify causal graphs with algorithmic information. The sections about various Markov properties are, I think, very important for explaining differences between CDT and TDT, 'cuz you can talk more clearly about exactly where a decision problem can't be solved due to Markov condition limitations.