Thank You for Your Participation

I would like to thank you all for your unwitting and unwilling participation in my little social experiment. If I do say so myself, you all performed as I had hoped. I found some of the responses interesting; many of them are goofy. I was honestly hoping that a budding rationalist community like this one would have stopped this experiment midway, but I thank you all for not being that rational. I really did appreciate all the mormon2 bashing; it was quite amusing, and some of the attempts to discredit me were humorous, though unsuccessful. As for the questions I asked, I was curious about the answers, though I did not expect to get any, nor do I really need them, since I have a good idea of what the answers are from simple deductive reasoning. I really do hope EY is working on FAI and is actually able to do it, though I certainly will not stake my hopes or money on it.

Lest there be any suspicion: I am being sincere here.

 

Response

Because I can, I am going to make one final response to this thread I started:

Since none of you understand what I am doing, I will spell it out for you. My posts are formatted, written, and styled intentionally for the response I desire. The point is to give you guys easy ways to avoid answering my questions (things like the tone of the post, spelling, grammar, being "hostile" (not really), etc.). I just wanted to see if anyone here could actually look past that, specifically EY, and post some honest answers to the questions (real answers, again from EY, not pawns on LW). Obviously this was too much to ask, since the general responses, not completely, but for the most part, were cop-outs. I am well aware that EY probably would never answer any challenge to what he thinks; people like EY typically won't (I have dealt with many people like EY). I think the responses here speak volumes about LW and the people who post here (if you can't look past the way content is posted, then you are going to have a hard time in life, since not everyone is going to meet your standards for how they speak or write). You guys may not be trying to form a cult, but the way you respond to a post like this screams cultish, with even some circle-jerk mentality mixed in.

 

Post

I would like to float an argument and a series of questions. Now, before you guys vote me down, please do me the courtesy of reading the post. I am also aware that some, and maybe even many, of you think that I am a troll just out to bash SIAI and Eliezer; that is in fact not my intent. This group is supposed to be about improving rationality, so let's improve our rationality.

SIAI has the goal of raising awareness of the dangers of AI as well as trying to create its own FAI solution to the problem. This task has fallen to Eliezer as the paid researcher working on FAI. What I would like to point out is a bit of a disconnect between what SIAI is supposed to be doing and what EY is doing.

According to EY, FAI is an extremely important problem with global implications that must be solved. It is both a hard math problem and a problem that needs to be solved first by people who take FAI seriously. To that end, SIAI was started, with EY as its AI researcher.

Until about 2006, EY was working on papers like CEV and on designs for FAI, which he has now largely discarded as wrong. He then went through a long period of blogging on Overcoming Bias and LessWrong, and is now working on a book on rationality as his stated main focus. If this be accurate I would ask how does this make sense from someone who has made such a big deal about FAI, its importance, being first to make AI and ensure it is FAI? If FAI is so important, then where does a book on rationality fit? Does that even play into SIAI's chief goals? SIAI spends huge amounts of time talking about the risks and rewards of FAI, yet the person who is supposed to be making the FAI is writing a book on rationality instead of solving FAI. How does this square with being paid to research FAI? How can one justify EY's reasons for not publishing the math of TDT, coming from someone who is committed to FAI? If one is committed to solving that hard a problem, then I would think that publishing one's ideas on it would be a primary goal to advance the cause of FAI.

If this doesn't make sense, then I would ask: how rational is it to spend time helping SIAI if it is not focused on FAI? Can one justify giving to an organization like that when the chief FAI researcher is distracted by writing a book on rationality instead of solving the myriad hard math problems that FAI requires? And if it somehow does make sense, can one then also conclude that FAI is not nearly as important as it has been made out to be, since the champion of FAI feels comfortable taking a break from solving the problem to write a book on rationality (in other words, the world really isn't at stake)?

Am I off base? If this group is devoted to rationality, then everyone should be subject to rational analysis.

94 comments

Upvoted because every group wants to be a cult, and this seems to be a topic particularly susceptible to groupthink when faced with criticism. (That is, for this community.)

I also note posters making comments that deny the question because of who posed it or because of the tone in which it is made, while admitting the basic validity of the question, which is a rationality fail of epic proportions. If the question is valid, then it should be discussed, no matter how unpopular the poster happens to be within the inner circle. Period. Even if he isn't as eloquent and well-blessed with social skills as the most vocal people.

Nor does it look good in the eyes of outsiders to see a valid question dismissed for such superficial reasons. This is separate from the direct effect the dismissal has on our own rationality. It will weaken both the rationalist and SIAI causes in the long run, as outsiders will see the causes as hypocritical and incapable of answering tough questions, respectively. The result is a smaller influx of interested outsiders.

In addition to the main post being voted down and the very validity of the question being denied, mormon2 has made at least one comment containing good point... (read more)

8wedrifid
No. You are either mistaken or you are using some arbitrary definition of 'rationality' that I reject. It is not rational to ignore all social implications of a question. It matters who is asking. It matters how it is asked. And it matters why it is asked. It also matters what you predict the effect of 'feeding' another answer to the questioner will be. No. Just no. Nobody is obliged to answer any question on pain of epic rationality fail. Nobody is obliged to accept belligerent disrespect and respond to it as though a rapport is present. And nobody, not even Eliezer, is required to justify their professional goals at the whim of a detractor.

You are either mistaken or you are using some arbitrary definition of 'rationality' that I reject. It is not rational to ignore all social implications of a question.

Of the two definitions for rationality, I was going for e-rationality. It can certainly be i-rational to take the social implications of a question into account, even to such a degree that one won't consider it at all for fear of the consequences. Simply deciding that evolution must be false since the social consequences of believing otherwise would be too unpleasant, for instance. Or you could admit, in the safety of your own head, that evolution must be true, but hide this belief because you know what the consequences would be. That might be rational both ways. But it is always a failure of e-rationality to refuse to consider a valid question because of social consequences. In the case of communities rather than individuals, "to consider" means discussion within the community.

"Taking into account the social consequences" is fine in theory, but it's an easy path towards rationalizing away every argument from every person you don't like. I would be a bit more understanding if the poster in question would have been really abr... (read more)

9AnnaSalamon
Sorry, Kaj. We have been working on a more fleshed-out "what we've done, what we're doing, and what more money would let us do" webpage, which should be up within the next week. We have had a large backlog of worthwhile activities and a comparative shortage of already-trained person-hours lately. That is part of the idea behind the visiting fellows program.
1Kaj_Sotala
No problem - I do know you folks are probably overworked; I know how it is from plenty of volunteer work for various groups over the years myself. Not to mention that I still have at least one project, from at least a year back, that I was doing for SIAI but never got finished. Just do your best. :)
4wedrifid
It is a failure of e-rationality to alter your beliefs for social purposes. It is not an epic failure of e-rationality to not accept a particular social challenge. Moreover, e-rationality makes no normative claims at all. "If the question is valid, then it should be discussed" is about your preferences and not something required by e-rationality to the degree of 'epic fail, period'. You can have different preferences from me; that's fine. But I take offence at your accusation of an epic failure of rationality based on advocating ignoring a question that you would choose to answer. It is nonsensical. It seems my assertion was ambiguous. I don't mean "need to answer any possible question". I insist that nobody is required to answer any question whatsoever. Substitute "in order to slight him" with "in order not to slight oneself" and that is exactly the story under consideration. It isn't about ignoring a question as a rhetorical ploy to counter an argument. In fact, saying that you would answer such a question under different circumstances serves to waive such a rhetorical use. You are advocating a norm about the social obligations of people to engage with challenges, and you are advocating it using the threat of being considered 'epically irrational'. I absolutely refuse to submit myself to the norm you advocate and take umbrage at the manner of your assertion of it upon me (as a subset of 'us'). I have no objection to you suggesting that answering this particular question may be better than not answering it. You may even be right. I cannot claim to be a master of the intricacies of social politics by any stretch of the imagination. I would like to see some more details of SIAI's approach and progress made public. Perhaps in the form of some extra PR on the SIAI website and posts here that link to it to allow discussion from the many interested LessWrong participants. I don't consider this inappropriate. Karma serves as an organic form of moderation that can alleviat
1Kaj_Sotala
I think we are talking at cross purposes here. And it seems I may have misunderstood part of the comments which led to that "epic rationality fail" line, in which case I apologize. This line of yours first led me to see that we really are talking about two different things: I am puzzled as to why you would want to consider this a "social challenge". The opening post was formulated in a reasonable tone, asking reasonable and fully warranted questions. I had automatically assumed that any aspiring rationalist would, well, treat such a post like any other post by someone else. I certainly hadn't assumed that people would instead prefer to interpret it as a maneuver in some elaborate social battle, and I am utterly puzzled over why anyone would want to take it that way. Not only does that run a major risk of misinterpretation in case the person in question actually meant what they said in the post, it's also stooping down to their level and making things worse in case they did intend it as a social attack. Okay, it appears I was ambiguous as well. I didn't mean that anyone would be required to answer any question, either. The tone I got from the comments was something along the lines of "this is an important question, and I do find it interesting and worthy enough to discuss and consider, but now that you have brought it up, I'll push it out of my mind or at least delay discussion of it until later". Does this mean people find this an important topic and would like to discuss it, but will now forever avoid the question? That would indeed be a rationality fail. Does it mean that some poster of higher status should reword the same questions in his own words, and post them in the open thread / as his own top-level post, and then it would be acceptable to discuss? That just seems petty and pointless, when it could just as well be discussed here. Certainly there's no requirement on anybody to answer any questions if they don't feel like it. But, how should I put th
-1wedrifid
I am not ignoring this, but I will not engage fully with all of it, because to do so effectively would require us to write several posts' worth of background to even be using the same meanings for words. I agree, and rather hope not. I would not be quite as worried, and would perhaps frame it slightly differently. I may share an underlying concern with maintaining an acceptable 'bullshit to content' ratio. Things like ignoring people or arguments can sometimes fall into that bullshit category and are often a concern to me. I think I have a somewhat different conception than you when it comes to "times when ignoring stuff is undesirable".
1MichaelAnissimov
I have been mentioning SIAI and what we're up to on my blog, which has about 3K readers, about every other day for several months. It may not be an "official" SIAI news source, but many people read it and gain knowledge about SIAI via that route.
1Kaj_Sotala
Now that you mention it, yes, your blog is probably the place with the most reporting about what SIAI does. Not enough even there, though.

Vote this up if you think that, as a matter of policy, when a post like this gets voted down far enough (though for some reason it still shows as 0), it's okay to remove it and tell the poster to resubmit it as an Open Thread comment. I would like posts like this to automatically disappear when voted down far enough, but that would take a code change, and those are hard to get.

Vote this up if you disagree with the above policy.

I am reluctant to agree with the aforementioned policy because I do not want to lose the comments on such posts. There have been cases where a '<= 0' post has been flawed but the replies have provided worthwhile insight into the topic being covered. I often search for things I can remember reading months in the past and it would frustrate me if they were not there to be found.

I like the sound of the (unfortunately code-requiring) idea of having them not visible in the sidebar.

0Eliezer Yudkowsky
The mechanism I have for post removal wouldn't prevent the comments from being visible if you knew the old link.
1wedrifid
Would search still work? (I assume it probably would...)
0Eliezer Yudkowsky
I don't know. I too assume it probably would.
0Kaj_Sotala
If there's a link to it from somewhere, Google ought to index it and keep it indexed. A quick Google search returned a discussion suggesting that orphan pages with no incoming links will eventually be de-indexed (assuming the bot had time to index them before they got orphaned), though it might take a long time. That discussion is from 2004, though, and Google has revamped their systems plenty of times since then.
1MrHen
Why isn't this essentially included in the Preferences? There is a box stating, "Don't show me articles with a score less than X." Is that box not working? Or am I misunderstanding what it does? Or...?

My understanding is that the reasoning goes something like this: this is a difficult problem. Eliezer, on his own, might not be smart enough to do it. He can't quite create more fundamentally smart people yet. But maybe he can create rationalists out of some of them, and then some of those may join SIAI. Besides, boosting human rationality overall is a good thing anyway.

Since none of you understand what I am doing, I will spell it out for you. My posts are formatted, written, and styled intentionally for the response I desire.

You are claiming to be a troll?

-10mormon2

mormon2, have you ever read other people on the Internet who write in the style and tone that you've adopted here? Are these people, in your experience, usually writing sense or nonsense? Are they, in your experience, usually worth the time that it takes to read them?

3Cyan
Either mormon2 is a deliberate troll, or we can expect that the Dunning-Kruger effect will prevent him from being able to answer your questions adequately.

Let him who has never used time in a less than maximally-utility-producing way cast the first stone.

4Eliezer Yudkowsky
Spending a year writing a book isn't exactly watching an episode of anime. The question would, under other circumstances, be just - but I don't care to justify myself here. Elsewhere, perhaps.
4Liron
Estimated number of copies of said book I will buy: 30. Just putting it out there.
2wedrifid
(My upvote unfortunately only brings the parent to 0.) This is the approach that I would have taken (and recommended). The author and context of a post matter. Coming from an author who is consistently arrogant and disrespectful changes the meaning of the question from curiosity to challenge. Justifying oneself in response serves to validate the challenge, signalling that such challenges are acceptable and lowering one's status. It is far better to convey, usually through minimal action, that the 'question' is inappropriate. I would not have even framed the topic with the phrase 'be just'. You can explain the reasons motivating a decision (and even explain such reasoning for the purposes of PR) without it being a justification. It's just an interesting question.
3Kaj_Sotala
This sounds like a dodge - yes, we can certainly all agree that there are worse uses for SIAI's time, but the question of whether SIAI is working most effectively is still a valid and important one.

Until about 2006, EY was working on papers like CEV and on designs for FAI, which he has now largely discarded as wrong. He then went through a long period of blogging on Overcoming Bias and LessWrong, and is now working on a book on rationality as his stated main focus. If this be accurate I would ask how does this make sense from someone who has made such a big deal about FAI, its importance, being first to make AI and ensure it is FAI?

I know! It's a Xanatos Gambit!

7wedrifid
You @#$@! I want those hours of my life back. Although I must admit Batman and his Crazy Prepared is kinda cool, and I do love later-series Wesley, seriously Badass. I was also tempted to create an account and add Althalus and Talen to the Divine Date page. What is it about that site that is such a trap? We must anticipate a serious social payoff for engaging with stories and identifying the abstractions they represent. I had been reading for half an hour before I even noticed I was on the TVTropes page and the warning sprang to mind.
5kpreid
Anyone want to write a LW post about (discussing) How To Be Less Unreasonably Attracted To TV Tropes And Other Such Sites? It's certainly a matter of rationality-considered-as-winning, since I've been losing somewhat as a result of wasting time on TV Tropes / Wikipedia / Everything2 / reading the archives of new-to-me webcomics.
2wedrifid
I have had a lot of success with the LeechBlock plugin for Firefox. It provides enough of a stimulus to trigger executive control. Unfortunately, Chrome is just a whole lot faster than my Rigged-For-Web-Development Firefox these days, and so I've lost that tool. I've actually been considering uninstalling Chrome for more or less this reason. Incidentally, one of the sites I have blocked in LeechBlock is Lesswrong.com. This habit kills far more time than tropes does. I then read the posts from the RSS feed, but it block-reminds me of my self-imposed policy when I try to click through to comment.
6Eliezer Yudkowsky
The key to strategy is not to choose a path to victory, but to choose so that all paths lead to a victory.
Roko50

Mormon2:

How would you act if you were Eliezer?

Bear in mind that you could either work directly on the problem, or you could try to cause others to work on it. If you think that you could cause an average of 10 smart people to work on the problem for every 6 months you spend writing/blogging, how much of your life would you spend writing/blogging versus direct work on FAI?

-3mormon2
"How would you act if you were Eliezer?" If I made claims of having a TDT I would post the math. I would publish papers. I would be sure I had accomplishments to back up the authority with which I speak. I would not spend a single second blogging about rationality. If I used a blog it would be to discuss the current status of my AI work and to have a select group of intelligent people who could read and comment on it. If I thought FAI was that important I would be spending as much time as possible finding the best people possible to work with and would never resort to a blog to try to attract the right sort of people (I cite LW as evidence of the failure of blogging to attract the right people). Oh and for the record I would never start a non-profit to do FAI research. I also would do away with the Singularity Summit and replace it with more AGI conferences. I would also do away the most of SIAI's programs and replace them, and the money they cost, with researchers and scientists along with some devoted angel funders.
4Mitchell_Porter
I can see reasons for proceeding indirectly. Eliezer is 30. He thinks his powers may decline after age 40. It's said that it takes 10 years to become expert in a subject. So if solving the problems of FAI requires modes of thought which do not come naturally, writing his book on rationality now is his one chance to find and train people appropriately. It is also possible that he makes mistakes. Eliezer and SIAI are inadequately supported and have always been inadequately supported. People do make mistakes under such conditions. If you wish to see how seriously the mainstream of AI takes the problem of Friendliness, just search the recent announcements from MIT, about a renewed AI research effort, for the part where they talk about safety issues. I have a suggestion: Offer to donate to SIAI if Eliezer can give you a satisfactory answer. (The terms of such a deal may need to be negotiated first.)
0Nick_Tarleton
Do you have a link? I can't find anything seemingly relevant with a little searching.
1Mitchell_Porter
Neither can I - that's the point.
0Nick_Tarleton
No, I mean anything about the renewed AI research effort.
0Mitchell_Porter
It's the MIT Mind Machine Project.
2Roko
A good rationalist exercise is to try to predict what those who do not adopt your position would say in response to your arguments. What criticisms do you think I will make of the statement: ?
0[anonymous]
Couldn't help yourself. The remainder is a reasonable answer.

Well, first of all, the tone of your post is very passive-aggressive and defensive, and if you are trying to encourage a good, rational discussion like you say, then you should maybe be a little more self-conscious about your behavior.

Regarding the content of your post, I think it's a fair question. However, you seem like you are being quite a bit closed-minded and EY-centric about the entire issue. This person is just one employee of SIAI, which presumably can manage its own business, and no doubt has a much better idea of what its employees are do... (read more)

I believe it would be greatly informative to mormon2's experimental result if the profile were blocked. Mormon2 could confirm a hypothesis, and we would be rid of him: everybody wins!

Short answer to the original post:

SIAI has a big Human Resources problem. Eliezer had a really difficult time finding anyone to hire as an assistant/coworker at SIAI who didn't immediately set out to do something really, really stupid. So he's blogging and writing a book on rationality in the hope of finding someone worthwhile to work with.

3Eliezer Yudkowsky
Michael Vassar is much, much better at the H.R. thing. We still have H.R. problems but could now actually expand at a decent clip given more funding. Unless you're talking about directly working on the core FAI problem, in which case, yes, we have a huge H.R. problem. The phrasing above might sound somewhat misleading; it's not that I hired people for A.I. research and they failed at once, or that I couldn't find anyone above the level of basic stupid failures. Rather, it's that it takes a lot more than "beyond the basic stupid failures" to avoid clever failures and actually get stuff done, and the basic stupid failures give you some idea of the baseline level of competence beyond which we need some number of standard deviations.
1CronoDAS
Yeah, sorry for phrasing it wrong. I guess I should have said And yes, I did mean that you had trouble finding people to work directly on the core FAI problem.
2Blueberry
Now I'm really curious: what were the "really, really stupid" things that were attempted?
7CronoDAS
http://lesswrong.com/lw/tf/dreams_of_ai_design/
http://lesswrong.com/lw/lq/fake_utility_functions/
and many, many other archived posts cover this.

The way you use "rationality" here reminds me of the way that commenters at Overcoming Bias so often say "But isn't it a bias that... (you disagree with me about X)". When you speak of rationality or bias, you should be talking about systematic, general means by which you can bend towards or away from the truth. Just invoking the words to put a white coat on whatever position you are defending devalues them.

I believe EY has already explained that he's trying to make more rationalists, so they can go and solve FAI.

Irrelevant questions

The questions are relevant to how you ought to interpret your results. You need to answer them to know what to infer from the reaction to your experiment.

While they may have been irrelevant, the questions were certainly interesting. I could probably think of other irrelevant, interesting questions. I don't suppose you'd be willing to answer them?

I am conducting a social experiment as I already explained. The posts are a performance for effect as part of my experiment.

Have you yourself participated in this kind of experiment when it was being performed on you by a stranger on the Internet who used the style and tone that you've adopted here? If so, what payoff did you anticipate for doing so?

From my understanding, Mr. Yudkowsky has two separate but linked interests: rationality, which predominates in his writings and blog posts, and designing AI, which is his interaction with SIAI. While I disagree with their particular approach (or lack thereof), I can see how it is rational to pursue both simultaneously toward similar ends.

I would argue that rationality and AI are really the same project at different levels and with different stated outcomes. Even if an AI never develops, increasing rationality is a good enough goal in and of itself.

Roko00

You seem emotionally resistant to seeking out criticisms of your own arguments. In the long run, reality will punish you for this. Sorry.

[anonymous]00

I am conducting a social experiment as I already explained. The posts are a performance for effect as part of my experiment.

Some of Robin's recent posts have commented on how giving the appearance of trying hard to secure one's status actually lowers it. Now you are exemplifying this.

[anonymous]-10

If this be accurate I would ask how does this make sense from someone who has made such a big deal about FAI, its importance, being first to make AI and ensure it is FAI?

This sentence confused me; it should probably be reworded. Something like:

"If this be accurate, I would ask how this makes sense from someone who has made such a big deal about FAI and about how important it is to both be the first to make AI and ensure that it is Friendly."

This belongs as a comment on the SIAI blog, not a post on Less Wrong.

3PeterS
Why?
3Eliezer Yudkowsky
Because Less Wrong is about human rationality, not the Singularity Institute, and not me.
PeterS150

Then from whence came the Q&A with Eliezer Yudkowsky, your fiction submissions (which I think lately have become of questionable value to LW), and other such posts which properly belong on either your personal blog or the SIAI blog?

I don't think that if any other organization were posting classified ads here, it would be tolerated.

However ugly it sounds, you've been using Less Wrong as a soapbox. Regardless of our statement of purpose, you have made it, in part, about you and SIAI.

So I for one think that the OP's post isn't particularly out of place.

Edit: For the record I like most of your fiction. I just don't think it belongs here anymore.

2Blueberry
That's like saying the Dialogues don't belong in Gödel, Escher, Bach.
0PeterS
To be honest, maybe they didn't. Those crude analogies interspersed between the chapters - some as long as a chapter themselves! - were too often unnecessary. The book was long enough without them... but with them? Most could have been summed up in a paragraph. If you need magical stories about turtles and crabs drinking hot tea before a rabbit shows up with a device which allows him to enter paintings in order to understand recursion, then you're never going to get it. On the other hand, if the author's introduction of stories in that manner is necessary to explain his subject or thesis, then something is either wrong with the subject or with his exposition of it. I know GEB is like the Book around Less Wrong, but what I'm saying here isn't heresy. Admittedly, Hofstadter had to write I Am a Strange Loop because people couldn't understand GEB.
6Vladimir_Nesov
It's a question of aesthetics. Of course math doesn't have to be presented this way, but a lot of people like the presentation. You should make explicit what you are arguing. It seems to me that the cause of your argument is simply "I don't like the presentation", but you are trying to argue (rationalize) it as a universal. There is a proper generalization somewhere in between, like "it's not an efficient way to [something specific]".
0Blueberry
Wait, what? I Am a Strange Loop was written about 30 years later. Hofstadter wrote four other books on mind and pattern in the meantime, so this doesn't make any sense.
8PeterS
An interview with Douglas R. Hofstadter
1CarlShulman
Actually, that's not true: classified ads for both SIAI and the Future of Humanity Institute have been posted. The sponsors of Overcoming Bias and Less Wrong have posted such announcements, and others haven't, which is an intelligible and not particularly ugly principle.
1PeterS
You're right. It is the sponsor's prerogative.
1Eliezer Yudkowsky
I'm having some slight difficulty putting perceptions into words - just as I can't describe in full detail everything I do to craft my fictions - but I can certainly tell the difference between that and this. Since I haven't spent a lot of time here talking about ideas along the lines of Pirsig's Quality, there are readers who will think this is a copout. And if I wanted to be manipulative, I would go ahead and offer up a decoy reason they can verbally acknowledge in order to justify their intuitive perceptions of difference - something along the lines of "Demanding that a specific person justify specific decisions in a top-level post doesn't encourage the spreading threads of casual conversation about rationality" or "In the end, every OBLW post was about rationality even if it didn't look that way at the time, just as much as the Quantum Physics Sequence amazingly ended up being about rationality after all." Heck, if I was a less practiced rationalist, I would be inventing verbal excuses like that to justify my intuitive perceptions to myself. As it is, though, I'll just say that I can see the difference perceptually, and leave it at that - after adding some unnecessary ornaments to prevent this reply from being voted down by people who are still too focused on the verbal. PS: We post classified ads for FHI, too.
7PeterS
You could have just not replied at all. It would have saved me the time spent trying to write up a response to a reply which is nearly devoid of any content. Incidentally, I don't have "intuitive" perceptions of difference here. It's pretty clear to me, and I can explain why. Though in my estimation, you don't care.
6wedrifid
When I read Eliezer's fiction, the concepts from dozens of LessWrong posts float to the surface of my mind and are processed, and the implications become more intuitively grasped. Your brain may be wired somewhat differently, but for me fiction is useful.
1Eliezer Yudkowsky
PPS: Probing my intuitions further, I suspect that if the above post had been questioning e.g. komponisto's rationality in the same tone and manner, I would have had around the same reaction of offtopicness for around the same reason.

I can see a couple of reasons why the post does belong here:

  • It concerns Less Wrong itself, specifically its origin and motivation. This should be of interest to community members.
  • You (Eliezer) are the most visible advocate and practitioner of human rationality improvement. If it turns out that you are not particularly rational, then perhaps the techniques you have developed are not worth learning.

Psy-Kosh's answer seems perfectly reasonable to me. I wonder why you don't just give that answer, instead of saying the post doesn't belong here. Actually if I had known this was one of the reasons for starting OB/LW, I probably would have paid more attention earlier, because at the beginning I was thinking "Why is Eliezer talking so much about human biases now? That doesn't seem so interesting, compared to the Singularity/FAI stuff he used to talk about."

6timtyler
E.Y. has given that answer before: Rationality: Common Interest of Many Causes
2mormon2
I am going to respond to the general overall direction of your responses. That is feeble, and for those who don't understand why, let me explain. Eliezer works for SIAI, which is a non-profit where his pay depends on donations. Many people on LW are interested in SIAI, some even donate to SIAI, and others potentially could donate. When your pay depends on convincing people that your work is worthwhile, it is always worth justifying what you are doing. This becomes even more important when it looks like you're distracted from what you are being paid to do. (If you ever work with a VC and their money, you'll know what I mean.) When it comes to ensuring that SIAI continues to pay, especially when you are the FAI researcher, justifying why you are writing a book on rationality, which in no way solves FAI, becomes extremely important. EY, ask yourself this: what percentage of the people who are interested in SIAI and donate are interested in FAI? Then ask what percentage are interested in rationality with no clear plan for how that gets to FAI. If the answer to the first is greater than the second, then you have a big problem, because one could interpret the use of your time writing this book on rationality as wasting donated money, unless there is a clear reason how rationality books get you to FAI. P.S. If you want to educate people to help you out, as someone speculated, you'd be better off teaching them computer science and mathematics. Remember, my post drew no conclusions, so for Yvain: I have cast no stones; I merely ask questions.
4Zack_M_Davis
Even on the margin? There are already lots of standard textbooks and curricula for mathematics and computer science, whereas I'm not aware of anything else that fills the function of Less Wrong.
3Eliezer Yudkowsky
If you have previously been a donor to SIAI, I'll be happy to answer you elsewhere. If not, I am not interested in what you think SIAI donors think. Given your other behavior, I'm also not interested in any statements on your part that you might donate if only circumstances were X. Experience tells me better.
-13mormon2
2Kaj_Sotala
Rationality is the art of not screwing up - seeing what is there instead of what you want to see, or are evolutionarily susceptible to seeing. When working on a task that may have (literally) earth-shattering consequences, there may not be a skill that's more important. Getting people educated about rationality is of prime importance for FAI.
Tiiba-30

Eliezer wrote once that he wants to teach a little tiibaism to other FAI designers, so they're less likely to tile the world with smileys. He is an influential person among FAI designers, so perhaps he'll succeed at this. His book will probably be popular among AI programmers, who are the people who matter. And if he doesn't, he'll be the only tiibaist in the world working on a problem that suffers atiibaism poorly. So yes, teaching people how to properly exploit their hundred billion neurons can actually save the world.

Of course, if politicians or judges ... (read more)

1wedrifid
Ok, Tiiba is evidently significant enough to you to inspire your name... but what the? I'm confused.
1Tiiba
What desquires clarifaction?
3wedrifid
Tiiba, tiibaism, tiibaist, atiibaism and tiibal. More specifically, I was wondering whether your use thereof in places where some would use derivatives of 'rational' was making a particular statement about possible misuse of said word, or was perhaps a reference to a meme that I have not been exposed to. Also, I was slightly curious as to whether you were using words like 'desquires' just to be a d@#$ or whether it was more along the lines of good-natured quirkiness. So I know whether to banter or block.
0Tiiba
All right, all right. I tried to make myself look rational by defining rationality so that I would have it by definition. I'm not sure how saying "desquires" makes or can make me a dathashdollar, but I assure you that I hold no malice for anyone. Except people who make intrusive advertising. And people who mis-program traffic lights. And people who make DRM. And bullies. And people who embed auto-starting music in their Web pages. And people who eat chicken. Tiiba is the name of a fairly low-level demon from the anime Slayers. He was imprisoned by the wizard-priest Rezo in a body that looks like an oversized chicken. He stuck in my mind after I read a fanfic where he appeared, briefly but hilariously.
0Morendil
Oh, that's likely to be in TVTropes then... (Hint!)
0wedrifid
Yeah, my googling took me to a TVTropes page. Something about an anime demon chicken. Was it an especially rational demon chicken?
3Tiiba
By chicken standards, I guess so.
0Zack_M_Davis
The "D" and "S" keys are close to the "R" ...?
1Kaj_Sotala
I think the question was "what the heck is Tiiba?"