PeterS comments on A question of rationality - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (93)
This belongs as a comment on the SIAI blog, not a post on Less Wrong.
Why?
Because Less Wrong is about human rationality, not the Singularity Institute, and not me.
I can see a couple of reasons why the post does belong here:
Psy-Kosh's answer seems perfectly reasonable to me. I wonder why you don't just give that answer, instead of saying the post doesn't belong here. Actually if I had known this was one of the reasons for starting OB/LW, I probably would have paid more attention earlier, because at the beginning I was thinking "Why is Eliezer talking so much about human biases now? That doesn't seem so interesting, compared to the Singularity/FAI stuff he used to talk about."
E.Y. has given that answer before:
Rationality: Common Interest of Many Causes
Then whence came the Q&A with Eliezer Yudkowsky, your fiction submissions (which I think have lately become of questionable value to LW), and other such posts, which properly belong on either your personal blog or the SIAI blog?
I don't think that if any other organization were posting classified ads here that it would be tolerated.
However ugly it sounds, you've been using Less Wrong as a soap box. Regardless of our statement of purpose, you have made it, in part, about you and SIAI.
So I for one think that the OP's post isn't particularly out of place.
Edit: For the record I like most of your fiction. I just don't think it belongs here anymore.
That's like saying the Dialogues don't belong in Godel, Escher, Bach.
To be honest, maybe they didn't. Those crude analogies interspersed between the chapters - some as long as chapters themselves! - were too often unnecessary. The book was long enough without them... but with them? Most could have been summed up in a paragraph.
If you need magical stories about turtles and crabs drinking hot tea before a rabbit shows up with a device which allows him to enter paintings to understand recursion, then you're never going to get it.
On the other hand, if the author needs to introduce stories in that manner to explain his subject or thesis, then something is wrong either with the subject or with his exposition of it.
I know GEB is like the Book around Less Wrong, but what I'm saying here isn't heresy. Admittedly, Hofstadter had to write I Am a Strange Loop because people couldn't understand GEB.
It's a question of aesthetics. Of course math doesn't have to be presented this way, but a lot of people like the presentation.
You should make explicit what you are arguing. It seems to me that the cause of your argument is simply "I don't like the presentation", but you are trying to argue (rationalize) it as a universal. There is a proper generalization somewhere in between, like "it's not an efficient way to [something specific]".
Wait, what? I Am a Strange Loop was written about 30 years later. Hofstadter wrote four other books on mind and pattern in the meantime, so this doesn't make any sense.
An interview with Douglas R. Hofstadter
I'm having some slight difficulty putting perceptions into words - just as I can't describe in full detail everything I do to craft my fictions - but I can certainly tell the difference between that and this.
Since I haven't spent a lot of time here talking about ideas along the lines of Pirsig's Quality, there are readers who will think this is a copout. And if I wanted to be manipulative, I would go ahead and offer up a decoy reason they can verbally acknowledge in order to justify their intuitive perceptions of difference - something along the lines of "Demanding that a specific person justify specific decisions in a top-level post doesn't encourage the spreading threads of casual conversation about rationality" or "In the end, every OB/LW post was about rationality even if it didn't look that way at the time, just as much as the Quantum Physics Sequence amazingly ended up being about rationality after all."

Heck, if I were a less practiced rationalist, I would be inventing verbal excuses like that to justify my intuitive perceptions to myself. As it is, though, I'll just say that I can see the difference perceptually, and leave it at that - after adding some unnecessary ornaments to prevent this reply from being voted down by people who are still too focused on the verbal.
PS: We post classified ads for FHI, too.
You could have just not replied at all. It would have saved me the time spent trying to write up a response to a reply which is nearly devoid of any content.
Incidentally, I don't have "intuitive" perceptions of difference here. It's pretty clear to me, and I can explain why. Though in my estimation, you don't care.
When I read Eliezer's fiction, concepts from dozens of Less Wrong posts float to the surface of my mind, get processed, and their implications become more intuitively grasped. Your brain may be wired somewhat differently, but for me fiction is useful.
PPS: Probing my intuitions further, I suspect that if the above post had been questioning e.g. komponisto's rationality in the same tone and manner, I would have had around the same reaction of offtopicness for around the same reason.
Actually, that's not true, classified ads for both SIAI and the Future of Humanity Institute have been posted. The sponsors of Overcoming Bias and Less Wrong have posted such announcements, and others haven't, which is an intelligible and not particularly ugly principle.
You're right. It is the sponsor's prerogative.
I am going to respond to the general overall direction of your responses.
That is feeble, and for those who don't understand why, let me explain.
Eliezer works for SIAI, a non-profit where his pay depends on donations. Many people on LW are interested in SIAI; some even donate to SIAI, and others potentially could. When your pay depends on convincing people that your work is worthwhile, it is always worth justifying what you are doing. This becomes even more important when it looks like you're distracted from what you are being paid to do. (If you ever work with a VC and their money, you'll know what I mean.)
When it comes to ensuring that SIAI continues to pay you, especially when you are the FAI researcher there, justifying why you are writing a book on rationality - which in no way solves FAI - becomes extremely important.
EY, ask yourself this: what percentage of the people interested in SIAI who donate are interested in FAI? Then ask what percentage are interested in rationality with no clear plan of how that gets to FAI. If the answer to the first is greater than the second, then you have a big problem, because one could interpret the use of your time writing this book on rationality as wasting donated money, unless there is a clear reason how rationality books get you to FAI.
P.S. If you want to educate people to help you out as someone speculated you'd be better off teaching them computer science and mathematics.
Remember, my post drew no conclusions; so, Yvain, I have cast no stones. I merely ask questions.
Even on the margin? There are already lots of standard textbooks and curricula for mathematics and computer science, whereas I'm not aware of anything else that fills the function of Less Wrong.
Rationality is the art of not screwing up - seeing what is there instead of what you want to see, or are evolutionarily susceptible to seeing. When working on a task that may have (literally) earth-shattering consequences, there may not be a skill that's more important. Getting people educated about rationality is of prime importance for FAI.
If you are already a donor to SIAI, I'll be happy to answer you elsewhere.
If not, I am not interested in what you think SIAI donors think. Given your other behavior, I'm also not interested in any statements on your part that you might donate if only circumstances were X. Experience tells me better.