Upvoted because every group wants to be a cult, and this seems to be a topic particularly susceptible to groupthink when faced with criticism. (That is, for this community.)
I also note posters making comments that deny the question because of who posed it or because of the tone in which it was posed, while admitting the basic validity of the question, which is a rationality fail of epic proportions. If the question is valid, then it should be discussed, no matter how unpopular the poster happens to be within the inner circle. Period. Even if he isn't as eloquent and as well blessed with social skills as the most vocal people.
Nor does it look good in the eyes of outsiders to see a valid question dismissed for such superficial reasons. This is separate from the direct effect the dismissal has on our own rationality. It will weaken both the rationalist and SIAI causes in the long run, as outsiders will come to see them as hypocritical and incapable of answering tough questions, respectively, and the result will be a smaller influx of interested outsiders.
In addition to the main post being voted down and the very validity of the question being denied, mormon2 has made at least one comment containing good point...
You are either mistaken or you are using some arbitrary definition of 'rationality' that I reject. It is not rational to ignore all social implications of a question.
Of the two definitions of rationality, I was going for e-rationality. It can certainly be i-rational to take the social implications of a question into account to such a degree that one won't even consider it, out of fear of the consequences. Simply deciding that evolution must be false because the social consequences of believing otherwise would be too unpleasant, for instance. Or you could admit, in the safety of your own head, that evolution must be true, but hide this belief because you know what the consequences would be. That might be rational both ways. But it is always a failure of e-rationality to refuse to consider a valid question because of its social consequences. In the case of communities rather than individuals, "to consider" means discussion within the community.
"Taking into account the social consequences" is fine in theory, but it's an easy path towards rationalizing away every argument from every person you don't like. I would be a bit more understanding if the poster in question would have been really abr...
Vote this up if, as a matter of policy, when a post like this gets voted down far enough (though for some reason it still shows as 0), it's okay to remove it and tell the poster to resubmit as an Open Thread comment. I would like posts like this to automatically disappear when voted down far enough, but that would take a code change and those are hard to get.
I am reluctant to agree with the aforementioned policy because I do not want to lose the comments on such posts. There have been cases where a '<= 0' post has been flawed but the replies have provided worthwhile insight into the topic being covered. I often search for things I can remember reading months in the past and it would frustrate me if they were not there to be found.
I like the sound of the (unfortunately code-requiring) idea of having them not visible in the sidebar.
My understanding is that the reasoning goes something like this: This is a difficult problem. Eliezer, on his own, might not be smart enough to do this. Fundamentally smart people he can't quite create more of yet. But maybe he can create rationalists out of some of them, and then some of those may join SIAI. Besides, boosting human rationality overall is a good thing anyway.
Since none of you understand what I am doing, I will spell it out for you. My posts are formatted, written, and styled intentionally for the response I desire.
You are claiming to be a troll?
mormon2, have you ever read other people on the Internet who write in the style and tone that you've adopted here? Are these people, in your experience, usually writing sense or nonsense? Are they, in your experience, usually worth the time that it takes to read them?
Let him who has never used time in a less than maximally-utility-producing way cast the first stone.
Until about 2006 EY was working on papers like CEV and working on designs for FAI which he has now discarded as being wrong for the most part. He then went on a long period of blogging on Overcoming Bias and LessWrong and is now working on a book on rationality as his stated main focus. If this be accurate I would ask how does this make sense from someone who has made such a big deal about FAI, its importance, being first to make AI and ensure it is FAI?
I know! It's a Xanatos Gambit!
Mormon2:
How would you act if you were Eliezer?
Bear in mind that you could either work directly on the problem, or you could try to cause others to work on it. If you think that you could cause an average of 10 smart people to work on the problem for every 6 months you spend writing/blogging, how much of your life would you spend writing/blogging versus direct work on FAI?
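To make the trade-off in that question concrete, here is a minimal back-of-the-envelope sketch in Python. The only figure taken from the comment above is the "10 smart people per 6 months of writing"; everything else (the years spent writing, the average contribution per recruit) is a hypothetical assumption for illustration, not an estimate anyone in this thread has endorsed.

```python
# Hypothetical comparison: researcher-years produced by writing/blogging
# (recruiting others) vs. working on the problem directly.
# All figures below are illustrative assumptions, not claims from the thread.

years_spent_writing = 2              # assumed time spent writing/blogging
recruits_per_half_year = 10          # figure floated in the comment above
years_each_recruit_contributes = 3   # assumed average contribution per recruit

# Direct work: one researcher-year per year spent working on FAI yourself.
direct_output = years_spent_writing

# Recruiting: each half-year of writing yields some number of recruits,
# each of whom contributes some number of researcher-years.
recruited_output = (years_spent_writing / 0.5) * recruits_per_half_year \
                   * years_each_recruit_contributes

print(f"Direct work:          {direct_output} researcher-years")
print(f"Writing + recruiting: {recruited_output} researcher-years")
# Under these made-up numbers, recruiting dominates by a wide margin,
# which is the leverage argument the comment is gesturing at.
```

Obviously the conclusion is only as good as the assumed conversion rates, which is exactly what the question asks the reader to supply.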
Well, first of all, the tone of your post is very passive-aggressive and defensive, and if you are trying to encourage a good, rational discussion as you say, then you should maybe be a little more self-conscious about your behavior.
Regarding the content of your post, I think it's a fair question. However, you seem to be quite closed-minded and EY-centric about the entire issue. This person is just one employee of SIAI, which presumably can manage their own business, and no doubt have a much better idea of what their employees are do...
I believe it would be greatly informative to mormon2's experimental result if the profile were blocked. Mormon2 could confirm a hypothesis, and we would be rid of him: everybody wins!
Short answer to the original post:
SIAI has a big Human Resources problem. Eliezer had a really difficult time finding anyone to hire as an assistant/coworker at SIAI who didn't immediately set out to do something really, really stupid. So he's blogging and writing a book on rationality in the hope of finding someone worthwhile to work with.
The way you use "rationality" here reminds me of the way that commenters at Overcoming Bias so often say "But isn't it a bias that... (you disagree with me about X)". When you speak of rationality or bias, you should be talking about systematic, general means by which you can bend towards or away from the truth. Just invoking the words to put a white coat on whatever position you are defending devalues them.
I believe EY has already explained that he's trying to make more rationalists, so they can go and solve FAI.
Irrelevant questions
The questions are relevant to how you ought to interpret your results. You need to answer them to know what to infer from the reaction to your experiment.
While they may have been irrelevant, the questions were certainly interesting. I could probably think of other irrelevant, interesting questions. I don't suppose you'd be willing to answer them?
I am conducting a social experiment, as I already explained. The posts are a performance for effect as part of my experiment.
Have you yourself participated in this kind of experiment when it was being performed on you by a stranger on the Internet who used the style and tone that you've adopted here? If so, what payoff did you anticipate for doing so?
From my understanding, Mr. Yudkowsky has two separate but linked interests: rationality, which predominates in his writings and blog posts, and designing AI, which is his interaction with SIAI. While I disagree with their particular approach (or lack thereof), I can see how it is rational to pursue both simultaneously toward similar ends.
I would argue that rationality and AI are really the same project at different levels and different stated outcomes. Even if an AI never develops, increasing rationality is a good enough goal in and of itself.
You seem emotionally resistant to seeking out criticisms of your own arguments. In the long run, reality will punish you for this. Sorry.
I am conducting a social experiment, as I already explained. The posts are a performance for effect as part of my experiment.
Some of Robin's recent posts have commented on how giving the appearance of trying hard to secure one's status actually lowers it. You are now a case in point.
If this be accurate I would ask how does this make sense from someone who has made such a big deal about FAI, its importance, being first to make AI and ensure it is FAI?
This sentence confused me; it should probably be reworded. Something like:
"If this be accurate, I would ask how this makes sense from someone who has made such a big deal about FAI and about how important it is to both be the first to make AI and ensure that it is Friendly."
Then from whence came the Q&A with Eliezer Yudkowsky, your fiction submissions (which I think lately have become of questionable value to LW), and other such posts which properly belong on either your personal blog or the SIAI blog?
I don't think it would be tolerated if any other organization were posting classified ads here.
However ugly it sounds, you've been using Less Wrong as a soap box. Regardless of our statement of purpose, you have made it, in part, about you and SIAI.
So I, for one, think that the OP's post isn't particularly out of place.
Edit: For the record, I like most of your fiction. I just don't think it belongs here anymore.
I can see a couple of reasons why the post does belong here:
Psy-Kosh's answer seems perfectly reasonable to me. I wonder why you don't just give that answer, instead of saying the post doesn't belong here. Actually if I had known this was one of the reasons for starting OB/LW, I probably would have paid more attention earlier, because at the beginning I was thinking "Why is Eliezer talking so much about human biases now? That doesn't seem so interesting, compared to the Singularity/FAI stuff he used to talk about."
Eliezer wrote once that he wants to teach a little tiibaism to other FAI designers, so they're less likely to tile the world with smileys. He is an influential person among FAI designers, so perhaps he'll succeed at this. His book will probably be popular among AI programmers, who are the people who matter. And if he doesn't, he'll be the only tiibaist in the world working on a problem that suffers atiibaism poorly. So yes, teaching people how to properly exploit their hundred billion neurons can actually save the world.
Of course, if politicians or judges ...
Thank You for Your Participation
I would like to thank you all for your unwitting and unwilling participation in my little social experiment. If I do say so myself, you all performed as I had hoped. I found some of the responses interesting; many of them were goofy. I was honestly hoping that a budding rationalist community like this one would have stopped this experiment midway, but I thank you all for not being that rational. I really did appreciate all the mormon2 bashing; it was quite amusing, and some of the attempts to discredit me were humorous, though unsuccessful. As for the questions I asked, I was curious about the answers, though I did not expect to get any, nor do I really need them, since I have a good idea of what the answers are just from simple deductive reasoning. I really do hope EY is working on FAI and is actually able to do it, though I certainly will not stake my hopes or money on it.
Lest there be any suspicion I am being sincere here.
Response
Because I can, I am going to make one final response to this thread I started:
Since none of you understand what I am doing, I will spell it out for you. My posts are formatted, written, and styled intentionally for the response I desire. The point is to give you guys easy ways to avoid answering my questions (things like the tone of the post, spelling, grammar, being "hostile" (not really), etc.). I just wanted to see if anyone here, specifically EY, could actually look past that and post some honest answers to the questions (real answers, again from EY, not pawns on LW). Obviously this was too much to ask, since the general responses, though not entirely, were for the most part cop-outs. I am well aware that EY probably would never answer any challenge to what he thinks; people like EY typically won't (I have dealt with many people like EY). I think the responses here speak volumes about LW and the people who post here (if you can't look past the way content is posted, then you are going to have a hard time in life, since not everyone is going to meet your standards for how they speak or write). You guys may not be trying to form a cult, but the way you respond to a post like this screams cultish, with even some circle-jerk mentality mixed in.
Post
I would like to float an argument and a series of questions. Now, before you guys vote me down, please do me the courtesy of reading the post. I am also aware that some, and maybe even many, of you think that I am a troll just out to bash SIAI and Eliezer; that is in fact not my intent. This group is supposed to be about improving rationality, so let's improve our rationality.
SIAI has the goal of raising awareness of the dangers of AI as well as trying to create their own FAI solution to the problem. This task has fallen to Eliezer as the paid researcher working on FAI. What I would like to point out is a bit of a disconnect between what SIAI is supposed to be doing and what EY is doing.
According to EY, FAI is an extremely important problem with global implications that must be solved. It is both a hard math problem and a problem that needs to be solved first by people who take FAI seriously. To that end, SIAI was started, with EY as an AI researcher there.
Until about 2006 EY was working on papers like CEV and working on designs for FAI which he has now discarded as being wrong for the most part. He then went on a long period of blogging on Overcoming Bias and LessWrong and is now working on a book on rationality as his stated main focus. If this be accurate I would ask how does this make sense from someone who has made such a big deal about FAI, its importance, being first to make AI and ensure it is FAI? If FAI is so important, then where does a book on rationality fit? Does that even play into SIAI's chief goals? SIAI spends huge amounts of time talking about the risks and rewards of FAI, and the person who is supposed to be making the FAI is writing a book on rationality instead of solving FAI. How does this square with being paid to research FAI? How can one justify EY's reasons for not publishing the math of TDT, coming from someone who is committed to FAI? If one is committed to solving a problem that hard, then I would think that publishing one's ideas on it would be a primary goal, to advance the cause of FAI.
If this doesn't make sense, then I would ask how rational it is to spend time helping SIAI if they are not focused on FAI. Can one justify giving to an organization like that when the chief FAI researcher is distracted by writing a book on rationality instead of solving the myriad hard math problems that need to be solved for FAI? If this somehow makes sense, then can one also state that FAI is not nearly as important as it has been made out to be, since the champion of FAI feels comfortable taking a break from solving the problem to write a book on rationality (in other words, the world really isn't at stake)?
Am I off base? If this group is devoted to rationality, then everyone should be subjected to rational analysis.