(I hope this doesn't come across as overly critical because I'd love to see this problem fixed. I'm not dissing rationality, just its current implementation. You have declared Crocker's Rules before, so I'm giving you an emotional impression of what your recent rationality propaganda articles look like to me, and I hope that doesn't come across as an attack, but something that can be improved upon.)
I think many of your claims of rationality powers (about yourself and other SIAI members) look really self-congratulatory and, well, lame. SIAI plainly doesn't appear all that awesome to me, except at explaining how some old philosophical problems have been solved somewhat recently.
You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?! Frankly, the only publicly visible person who strikes me as having some awesome powers is you, and from reading CSA, you seem to have had high productivity (in writing and summarizing) before you ever met LW.
Maybe there are all these awesome feats I just never get to see because I'm not at SIAI, but I've seen similar levels of confidence in your methods and wea...
Thought experiment
If the SIAI were a group of self-interested/self-deceiving individuals, similar to New Age groups, who had made up all this stuff about rationality and FAI as a cover for fundraising, what different observations would we expect?
I would expect them to:
SIAI does not appear to fit 1 (I'm not sure what the standard is here), certainly does not fit 2 or 3, debatably fits 4, and certainly does not fit 5 or 6. 7 is highly debatable but I would argue that the Sequences and other rationality material are clearly valuable, if somewhat obtuse.
I wouldn't have expected them to hire Luke. If Luke had been a member all along and everything had been planned just to make them look more convincing, that would imply a level of competence at such things that I'd expect better execution all round (which would have helped more than the slightly improved believability gained from faking a lower level of PR competence).
What evidence have you? Lots of New Age practitioners claim that New Age practices work for them. Scientology does not allow members to claim levels of advancement until they attest to "wins".
For my part, the single biggest influence that "their brand of rationality" (i.e. the Sequences) has had on me may very well be that I now know how to effectively disengage from dictionary arguments.
I appreciate the tone and content of your comment. Responding to a few specific points...
You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?!
There are many things we aren't (yet) good at. There are too many things about which to check the science and test things and update. In fact, our ability to collaborate successfully with volunteers on things has greatly improved in the last month, in part because we implemented some advice from the GWWC gang, who are very good at collaborating with volunteers.
the only publicly visible person who strikes me as having some awesome powers is you
Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.
Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them...
I don't think you're taking enough of an outside view. Here's how these accomplishments look to "regular" people:
CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team.
You wrote something 11 years ago which you now consider defunct, and which still is not a mainstream view in any field.
The Sequences are simply awesome.
You wrote a series of esoteric blog posts that some people like.
And he did manage to write the most popular Harry Potter fanfic of all time.
You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?
Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.
You have a guy who is pretty smart. Ok...
The point ...
You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?
It's actually been incredibly useful in establishing credibility in every x-risk argument that I've had with people my age.
"Have you read Harry Potter and the Methods of Rationality?"
"YES!"
"Ah, awesome!"
merriment ensues
topic changes to something about things that people are doing
"So anyway the guy who wrote that also does...."
Again, take the outside view. The kind of conversation you described only happens with people who have read HPMoR--just telling people about the fic isn't really impressive. (Especially if we are talking about the 90+% of the population who know nothing about fanfiction.) Ditto for the Sequences: they're only impressive after the fact. Compare this to publishing a number of papers in a mainstream journal, which is a huge status boost even to people who have never actually read the papers.
Perhaps not, but Luke was using HPMoR as an example of an accomplishment that would help negate accusations of arrogance, and for the majority of "regular" people, hearing that SIAI published journal articles does that better than hearing that they published Harry Potter fanfiction.
Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.
I wasn't aware of Google's AGI team accepting CFAI. Is there a list of organizations that consider the Friendly AI issue important?
My #1 suggestion, by a big margin, is to generate more new formal math results.
My #2 suggestion is to communicate more carefully, like Holden Karnofsky or Carl Shulman. Eliezer's tone is sometimes too preachy.
SI is arrogant because it pretends to be even better than science, while failing to publish significant scientific papers. If that doesn't look like pseudoscience or a cult, I don't know what does.
So please either stop pretending to be so great or prove it! For starters, it is not necessary to publish a paper about AI; you can choose any other topic.
No offense; I honestly think you are all awesome. But there are some traditional ways to prove one's skills, and if you don't accept the challenge, you look like wimps. Even if the ritual is largely a waste of time (all signals are costly), there are thousands of people who have passed it, so a group of x-rational gurus should be able to use their magical powers and do it in five minutes, right?
Yeah. The best way to dispel the aura of arrogance is to actually accomplish something amazing. So, SIAI should publish some awesome papers, or create a powerful (1) AI capable of some impressive task like playing Go (2), or end poverty in Haiti (3), or something. Until they do, and as long as they're claiming to be super-awesome despite the lack of any non-meta achievements, they'll be perceived as arrogant.
(1) But not too powerful, I suppose.
(2) Seeing as Jeopardy is taken.
(3) In a non-destructive way.
How much is that "same length of time"? Hours? Days? If 5 days of work could make LW acceptable in scientific circles, is it not worth doing? Or is it better to keep complaining about why oh why more people don't take SI seriously?
Can some part of that work be outsourced? Just write the outline of the answer, then find some smart guy in India and pay him like $100 to write it? Or if money is not enough for people who could write the paper well, could you bribe someone by offering them co-authorship? Graduate students have to publish papers anyway, so if you give them a complete solution, they should be happy to cooperate.
Or set up a "scientific wiki" on SI site, where the smartest people will write the outlines of their articles, and the lesser brains can contribute by completing the texts.
These are my solutions, which seem rather obvious to me. I am not sure they would work, but I guess trying them is better than doing nothing. Could a group of x-rational gurus find seven more solutions in five minutes?
From outside, this seems like: "Yeah, I totally could do it, but I will not. Now explain to me why people who can do it are perceived as more skilled than me." -- "Because they showed everyone they can do it, duh."
in combination with his lack of technical publication
I think it would help for EY to submit more of his technical work for public judgment. Clear proof of technical skill in a related domain makes claims less likely to come off as arrogant. For that matter it also makes people more willing to accept actions that they do perceive as arrogant.
The claim made that donating to the SIAI is the charity donation with the highest expected return* always struck me as rather arrogant, though I can see the logic behind it.
The problem is firstly that it's an extremely self-serving statement (equivalent to "giving us money is the best thing you can ever possibly do"); even if true, its credibility is reduced by the fact that the claim comes from the same person who would benefit from it.
Secondly, it requires me to believe a number of claims, each of which individually carries a burden of proof, and which demand even more in conjunction. These include: "Strong AI is possible," "friendly AI is possible," "The actions of the SIAI will significantly affect the results of investigations into FAI," and "the money I donate will significantly improve the effectiveness of the SIAI's research" (I expect the relationship between research effectiveness and funding isn't linear). All of which I only have your word for.
Thirdly, contrast this with other charities who are known to be very effective and can prove it, and whose results affect presently suffering people (e.g. the Against Malaria Foundation).
Caveat, I'm not arguing any of the clai...
I feel like I've heard this claimed, too, but... where? I can't find it.
because GWWC's members are significantly x-risk focused
Where is this established? As far as I can tell, one cannot donate "to" GWWC, and none of their recommended charities are x-risk focused.
Having been through a Physics grad school (albeit not of a Caltech caliber), I can confirm that a lack of modesty (real or feigned) is a major red flag and a tell-tale sign of a crank. Hawking does not refer to black-hole radiation as Hawking radiation, and Feynman did not call his diagrams Feynman diagrams, at least not in public. A thorough literature review in the introduction section of any worthwhile paper is a must, unless you are Einstein, or can reference your previous relevant paper where you dealt with it.
Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org (cs.DM or similar), properly referenced and formatted to conform with the prevailing standard (probably LaTeXed), and submit them for conference proceedings and/or into peer-reviewed journals. Anything less would be less than rational.
Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org...
Even Greg Egan managed to copublish papers on arxiv.org :-)
ETA
Here is what John Baez thinks about Greg Egan (science fiction author):
He's incredibly smart, and whenever I work with him I feel like I'm a slacker. We wrote a paper together on numerical simulations of quantum gravity along with my friend Dan Christensen, and not only did they do all the programming, Egan was the one who figured out a great approximation to a certain high-dimensional integral that was the key thing we were studying. He also more recently came up with some very nice observations on techniques for calculating square roots, in my post with Richard Elwes on a Babylonian approximation of sqrt(2). And so on!
That's actually what academics should be saying about Eliezer Yudkowsky if it is true. How does an SF author manage to get such a reputation instead?
For someone who knows how to program, learning LaTeX to a perfectly serviceable level should take at most one day's worth of effort, and most likely that effort would be spread diffusely throughout the process of using it, with maybe a couple of hours' dedicated introduction to begin with.
It is quite possible that, considering the effort required to find an editor and organise for that editor to edit an entire paper into LaTeX, compared with the effort required to write the paper in LaTeX in the first place, the additional effort cost of learning LaTeX may in fact pay for itself after less than one whole paper. It's very unlikely that it would take more than two.
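For what it's worth, the "perfectly serviceable level" really is small. A complete, compilable skeleton (a toy illustration of my own; any real venue would supply its own document class and style files) looks roughly like this:

    \documentclass{article}
    \usepackage{amsmath}  % standard math support
    \title{A Minimal Working Example}
    \author{A. Author}
    \begin{document}
    \maketitle
    \begin{abstract}
    One paragraph stating the result.
    \end{abstract}
    \section{Introduction}
    A brief review of prior work goes here.
    \section{Result}
    The expected utility of an action $a$ over outcomes $o$ is
    \begin{equation}
      U(a) = \sum_{o} P(o \mid a)\, u(o).
    \end{equation}
    \end{document}

Everything beyond that (theorem environments, bibliography styles, journal templates) is incremental on top of this.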
Publishing technical papers would be one of the better uses of his time, editing and formatting them probably is not. If you have no volunteers, you can easily find a starving grad student who would do it for peanuts.
I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.
If this is the case, it sounds like EY has a Chuck Norris problem, i.e., his mythos has spread beyond its reality.
Yes. At various times we've considered hiring EY an advanced math tutor to take him to the next level more quickly. He's pretty damn good at math but he's not Terence Tao.
I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.
I too don't remember that he ever claimed to have remarkable math ability. He's said that he was a "spoiled math prodigy" (or something like that), meaning that he showed precocious math ability while young, but he wasn't really challenged to develop it. Right now, his knowledge seems to be around the level of a third- or fourth-year math major, and he's never claimed otherwise. He surely has the capacity to go much further (as many people who reach that level do), but he hasn't even claimed that much, has he?
There's a phrase that the tech world uses to describe the kind of people you want to hire: "smart, and gets things done." I'm willing to grant "smart", but what about the other one?
The sequences and HPMoR are fantastic introductory/outreach writing, but they're all a few years old at this point. The rhetoric about SI being more awesome than ever doesn't square with the trend I observe* in your actual productivity. To be blunt, why are you happy that you're doing less with more?
*I'm sure I don't know everything SI has actually done in the last year, but that's a problem too.
To educate myself, I visited the SI site and read your December progress report. I should note that I've never visited the SI site before, despite having donated twice in the past two years. Here are my two impressions:
I agree with what has been said about the modesty norm of academia; I speculate that it arises because if you can avoid washing out of the first-year math courses, you're already one or two standard deviations above average, and thus you are in a population in which achievements that stood out in a high school (even a good one) are just not that special. Bragging about your SAT scores, or even your grades, begins to feel a bit like bragging about your "Participant" ribbon from sports day. There's also the point that the IQ distribution in a good physics department is not Gaussian; it is the top end of a Gaussian, sliced off. In other words, there's a lower bound and an exponential frequency decay from there. Thus, most people in a physics department are on the lower end of their local peer group. I speculate that this discourages bragging because the mass of ordinary plus-two-SDs doesn't want to be reminded that they're not all that bright.
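To put a rough number on the "sliced-off Gaussian" point, here is a back-of-the-envelope sketch of my own (the exponential-decay model and the round numbers are my assumptions, not the parent's):

    % Model the department's IQ distribution as the tail of a Gaussian above
    % a cutoff x_0, which decays approximately exponentially with rate \lambda:
    f(x) \propto e^{-\lambda (x - x_0)}, \qquad x \ge x_0 .
    % The median of such a tail lies only \ln 2 / \lambda above the cutoff:
    x_{\text{med}} = x_0 + \frac{\ln 2}{\lambda}.
    % For a cutoff about two SDs above the population mean, \lambda is roughly
    % 2 per SD, so half the department sits within about 0.35 SD (~5 IQ points)
    % of the entry threshold -- i.e. "on the lower end of their local peer group."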
However, all that aside: Are academics the target of this blog, or of lukeprog's posts? Propaganda, to be effective, should reach the masses, not the elite - although there's something to be said for "Get the elite and the masses ...
Well, no, I don't think so. Most academics do not work on impossible problems, or think of this as a worthy goal. So it should be more like "Do cool stuff, but let it speak for itself".
Moderately related: I was just today in a meeting to discuss a presentation that an undergraduate student in our group will be giving to show her work to the larger collaboration. On her first page she had
Subject
Her name
Grad student helping her
Dr supervisor no 1
Dr supervisor no 2
And to start off our critique, supervisor 1 mentioned that, in the subculture of particle physics, it is not the custom to list titles, at least for internal presentations. (If you're talking to a general audience the rules change.) Everyone knows who you are and what you've done! Thus, he gave the specific example that, if you mention "Leon", everyone knows you speak of Leon Lederman, the Nobel-Prize winner. But as for "Dr Lederman", pff, what's a doctorate? Any idiot can be a doctor and many idiots (by physics standards, that is) are; if you're not a PhD it's at least assumed that you're a larval version of one. It's just not a very unusual accomplishment in these circles. To have your first ...
I've recommended this before, I think.
I think that you should get Eliezer to say the accurate but arrogant-sounding things, because everyone already knows he's like that. You yourself, Luke, should be more careful about maintaining an appearance of humility.
If you need people to say arrogant things, make them ghost-write for Eliezer.
Personally, I think that a lot of Eliezer's arrogance is deserved. He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's problems. CFAI was way ahead of its time, as TDT still is. So he can feel smug. He's got a reputation as an arrogant eccentric genius anyway.
But the rest of the organisation should try to be more careful. You should imitate Carl Shulman rather than Eliezer.
I think having people ghost-write for Eliezer is a very anti-optimum solution in the long run. It removes integrity from the process. SI would become insufficiently distinguishable from Scientology or a political party if it did this.
Eliezer is a real person. He is not "big brother" or some other fictional figurehead used to manipulate the followers. The kind of people you want, and have, following SI or lesswrong will discount Eliezer too much when (not if) they find out he has become a fiction employed to manipulate them.
He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's problems.
As a curiosity, what would the world look like if this were not the case? I mean, I'm not even sure what it means for such a sentence to be true or false.
Addendum: Sorry, that was way too hostile. I accidentally pattern-matched your post to something that an Objectivist would say. It's just that, in professional philosophy, there does not seem to be a consensus on what a "problem of philosophy" is. Likewise, there does not seem to be a consensus on what a solution to one would look like. It seems that most "problems" of philosophy are dismissed, rather than ever solved.
Here are examples of these philosophical solutions. I don't know which of these he solved personally, and which he simply summarized others' answers to:
What is free will? Oops, wrong question. Free will is what a decision-making algorithm feels like from the inside.
What is intelligence? The ability to optimize things.
What is knowledge? The ability to constrain your expectations.
What should I do with the Newcomb's Box problem? TDT answers this.
...other examples include inventing Fun theory, using CEV to make a better version of utilitarianism, and arguing for ethical injunctions using TDT.
And so on. I know he didn't come up with these on his own, but at the least he brought them all together and argued convincingly for his answers in the Sequences.
I've been trying to figure out these problems for years. So have lots of philosophers. I have read these various philosophers' proposed solutions, and disagreed with them all. Then I read Eliezer, and agreed with him. I feel that this is strong evidence that Eliezer has actually created something of value.
What should SI do about this?
I think that separating instrumental rationality from the Singularity/FAI ideas will help. Hopefully this project is coming along nicely.
(I was going to write a post on 'why I'm skeptical about SIAI', but I guess this thread is a good place to put it. This was written in a bit of a rush - if it sounds like I am dissing you guys, that isn't my intention.)
I think the issue isn't so much 'arrogance' per se - I don't think many of your audience would care about accurate boasts - but rather your arrogance isn't backed up with any substantial achievement:
You say you're right on the bleeding edge in very hard bits of technical mathematics ("we have 30-40 papers which could be published on decision theory" in one of lukeprogs Q&As, wasn't it?), yet as far as I can see none of you have published anything in any field of science. The problem is (as far as I can tell) you've been making the same boasts about all these advances you are making for years, and they've never been substantiated.
You say you've solved all these important philosophical questions (Newcomb, Quantum mechanics, Free will, physicalism, etc.), yet your answers are never published, and never particularly impress those who are actual domain experts in these things - indeed, a complaint I've heard commonly is that Lesswrong just simply misundersta...
I think Eli, as the main representative of SI, should be more careful about how he does things, and resist his natural instinct to declare people stupid (*especially* if he's basically right).
Case in point: http://www.sl4.org/archive/0608/15895.html That could have been handled more politically and with more face-saving for the victim. Now you have this guy and at least one "friend" with loads of free time going around putting down anything associated with Eliezer or SI on the Internet. With 5 minutes of extra thinking (and not typing) this could have been largely avoided. Eli has to realize that he's in a good position to needlessly hurt his (and our) causes.
Another case in point was handling the Roko affair. There is doing the right thing, but you can do it without being an asshole (also IMO the "ownership" of LW policies is still an unresolved issue, but at least it's mostly "between friends"). If something like this needs to be done Eli needs to pass the keyboard to cooler heads.
Why don't SIAI researchers decide to definitively solve some difficult unsolved mathematics, programming, or engineering problem as proof of their abilities?
Yes, it would waste time that could otherwise be spent on AI-related philosophy, but it would unambiguously demonstrate the competence of SIAI.
There are two recurring themes: peer-reviewed technical results, and intellectual firepower.
If you want to show people intellectual firepower and the awesomeness of your conversations, tape the conversations. Just walk around with a recorder going all day, find the interesting bits later, and put them up for people to listen to.
But... you're not selling "we're super bright," you're selling "we're super effective." And for that you need effectiveness. Earnest, bright people wasting their effort is an old thing, and with goals as large as yours it's difficult to see the difference between progress and floundering.
I don't know how to address your particular signalling problem. But here's a question I need answered for myself: how would I tell the difference between the SIAI folks being "reasonably good at math and science" and "actually being really good - the kind of good they'd need to be for me to give them my money"?
ARE there straightforward tests you could hypothetically take (or which some of you may have taken) which probably wouldn't actually satisfy academics, but which are perfectly reasonable benchmarks we should expect you to be able to complete to demonstrate your equivalent education?
10 points for pointing out that you have gone to school, as if this were evidence of sanity.
I'm not sure, but I think this is roughly how "look, I did great on the GRE!" would sound to someone already skeptical. It's the sort of accomplishment that sounds childish to point out outside of a very limited context.
According to Feynman, he tested at 125 when he was a schoolboy. (Search for "IQ" in the Gleick biography.)
There are a couple reasons to not care about this factoid:
- Feynman was younger than 15 when he took it [....]
- [I]t was one of the 'ratio' based IQ tests - utterly outdated and incorrect by modern standards.
- Finally, it's well known that IQ tests are very unreliable in childhood; kids can easily bounce around compared to their stable adult scores.
...I suspect that this test emphasized verbal, as opposed to mathematical, ability. Feynman received the highest score in the country by a large margin on the notoriously difficult Putnam mathematics competition exam, although he joined the MIT team on short notice and did not prepare for the test. [...] It seems quite possible to me that Feynman's cognitive abilities might have been a bit lopsided -- his vocabulary and verbal ability were well above average, but perhaps not as great as his mathematical abilities. I recall looking at excerpts from a notebook Feynman kept as an undergraduate. While the notes covered very advanced topics -- including general relativity and the Dirac equation --
What are the most egregious examples of SI's arrogance?
Public tantrums, shouting and verbal abuse. Those are status displays that pay off for tribal chieftains and some styles of gang leader. They aren't appropriate for leaders of intellectually oriented charities. Eliezer thinking he can get away with that is the biggest indicator of arrogance that I've noticed thus far.
To be fair, while I personally do perceive the SIAI as being arrogant, I haven't seen any public tantrums. As far as I can tell, all their public discourse has been quite civil.
The most significant example was the Roko incident. The relevant threads and comments were all censored during the later part of his tantrum. Not a good day in the life of Eliezer's reputation.
I still don't think that we could call it a tantrum of Eliezer's.
Whatever you choose to call it the act of shouting at people and calling them names is the kind of thing that looks bad to me. I think Eliezer would look better if he didn't shout or call people names.
but he probably at least thought that the Roko guy was being stupid.
Of course he did. Lack of sincerity is not the problem here. The belief that the other person is stupid and, more importantly, the belief that if he thinks other people are being stupid it is right and appropriate for him to launch into an abusive hysterical tirade is the arrogance problem in this case.
What SIAI could do to help the image problem: Get credible grown-ups on board.
The main team looks to be in their early thirties, and the visiting fellows mostly students in their twenties. With the claims of importance SIAI is making, people go looking for people over forty who are well-established as serious thinkers, AI experts or similarly known-competent folk in a relevant field. There should either be some who are sufficiently sold on the SIAI agenda to actually be on board full-time, or quite a few more in some kind of endorsing partnership role. Currently there's just Ray Kurzweil on the team page, and beyond "Singularity Summit Co-Founder", there's nothing there saying just what his relation to SIAI is, exactly. SIAI doesn't appear to have been suitably convincing to get any credible grown-ups as full-time team members.
There are probably good reasons why this isn't useful for what SIAI is actually trying to do, but the demographic of thirty-somethings leading the way and twenty-somethings doing stuff looks way iffier at a glance for "support us in solving the most important philosophical, societal and technological problem humanity has ever faced once and for all!" than it does for "we're doing a revolutionary Web 3.0 SaaS multi mobile OS cloud computing platform!"
To be honest, I've only ever felt SI/EY/LW's "arrogance" once, and I think that LW in general is pretty damn awesome. (I realize I'm equating LW with SI, but I don't really know what SI does)
The one time is while reading through the Free Will (http://wiki.lesswrong.com/wiki/Free_will) page, which I've copied here: "One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own."
This smacks strongly of "oh look, there's a classic stumper, and I'm the ONLY ONE who's solved it (naa naa naa). If you want to be a true rationalist/join the tribe, you better solve it on your own, too"
I've also heard others mention that HP from HPMoR is an insufferable little twat, which I assume is the same attitude they would have if they were to read LW.
I've written some of my thoughts up about the arrogance issue here. The short version is that some people have strongly developed identities as "not one of those pretentious people" and have strong immune responses when encountering intelligence. http://moderndescartes.blogspot.com/2011/07/turn-other-cheek.html
What are the most egregious examples of SI's arrogance?
Well, you do tend to talk about "saving the world" a lot. That makes it sound like you, Eliezer Yudkowsky, plus a few other people are the new Justice League. That sounds at least a little arrogant...
If it helps at all, another data point (not quite answers to your questions):
There are two obvious options:
The first, boring option is to make fewer bold claims. I personally would not prefer that you take this tack. It would be akin to shooting yourselves in the foot. If all of your claims vis-a-vis saving the world are couched in extremely humble signaling packages, no one will want to ever give you any money.
The second, much better option is to start doing amazing, high-visibility things worthy of that arrogance. Muflax points out that you don't have a Tim Ferriss. Tim Ferriss is an interesting case specifically because he is a huge self-promoter who people actually like despite the fact that he makes his living largely by boasting entertainingly. The reason Tim Ferriss can do this is because he delivers. He has accomplished the things he is making claims about - or at least he convinces you that he is qualified to talk about it.
I really want a Rationality Tim Ferriss who I can use as a model for my own development. You could nominate yourself or Eliezer for this role, but if you did so, you would have to sell that role.
I like the second option better, too.
I'm certainly going to try to be a Rationality Tim Ferriss, but I have a ways to go.
Eliezer is still hampered by the cognitive exhaustion problem that he described way back in 2000. He's tried dozens of things and still tries new diets, sleeping patterns, etc. but we haven't kicked it yet. That said, he's pretty damn productive each day before cognitive exhaustion sets in.
I unfortunately don't have much to offer that can actually be helpful. I (and I feel like this probably applies to many LWers) am not at all turned off by arrogance, and actually find it somewhat refreshing. But this reminds me of something that a friend of mine said after I got her to read HPMOR:
"after finishing chapter 5 of hpmor I have deduced that harry is a complete smarmy shit that I want to punch in the face. no kid is that disrespectful. also he reminds me of a young voldemort....please don't tell me he actually tries taking over the world/embezzling funds/whatever"
ETA: she goes on in another comment (On Facebook), after I told her to give it to chapter 10, like EY suggests, "yeah I'm at chapter 17 and still don't really like harry (he seems a bit too much of a projection of the author perhaps? or the fact that he siriusly thinks he's the greatest thing evarrr/is a timelord) but I'm still reading for some reason?"
Seems to be the same general sentiment, to me. Not specifically the SI, but of course tangentially related. For what it's worth, I disagree. Harry's awesome. ;-)
(a) My experience with the sociology of academia has been very much in line with what Lukeprog's friend, Shminux and RolfAndreassen describe. This is the culture that I was coming from in writing my post titled Existential Risk and Public Relations. Retrospectively I realize that the modesty norm is unusually strong in academia and to that extent I was off-base in my criticism.
The modesty norms have some advantages and disadvantages. I think that it's appropriate for even the best people take the view "I'm part of a vast undertaking; if I hadn't gotte...
I'm pretty sure most everyone here already knows this, but the perception of arrogance is basically a signalling/counter-signalling problem. If you boast (produce expensive signals of your own fitness), that tells people you are not too poor to have anything to boast about. But it can also signal that you have a need to brag to be noticed, which in turn can be interpreted to mean you aren't truly the best of the best. The basic question is context.
Is there a serious danger your potential contributions will be missed? If so, it is wisest to boast. Is there ...
I intended [...]
But some people seem to have read it and heard this instead [...]
When I write posts, I'd often be tempted to use examples from my own life, but then I'd think:
This usually stop...
So, I have a few questions:
- What are the most egregious examples of SI's arrogance?
Since you explicitly ask a question phrased thus, I feel obligated to mention that last April I witnessed a certain email incident that I thought was somewhat extremely bad in some ways.
I do believe that lessons have been learned since then, though. Probably there's no need to bring the matter up again, and I only mention it since according to my ethics it's the required thing to do when asked such an explicit question as above.
(Some readers may wonder why I'm not provi...
A lot of people are suggesting something like "SIAI should publish more papers", but I'm not sure anyone (including those who are making the suggestion) would actually change their behavior based on that. It sounds an awful lot like "SIAI should hire a PhD".
I've been a donor for a long time, but every now and then I've wondered whether I should be - and the fact that they don't publish more has been one of the main reasons why I've felt those doubts.
I do expect the paper thing to actually be the true rejection of a lot of people. I mean, demanding some outputs is one of the most basic expectations you could have.
I consider "donating to SIAI" to be on the same level as "donating to webcomics" - I pay Eliezer for the entertainment value of his writing, in the same spirit as when I bought G.E.B. and thereby paid Douglas Hofstadter for the entertainment value of his writing.
Find someplace I call myself a mathematical genius, anywhere.
(I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern. I don't know what this alarm feels like, so it's hard to guess what sets it off.)
I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.
Some quotes by you that might highlight why some people think you/SI is arrogant:
I tried - once - going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren't visibly incompetent, they had their various research interests and I'm sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?" (Competent Elites)
More:
...I don't mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each o
I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?
I am the wrong person to ask whether "a doctorate in AI would be negatively useful". I guess it is technically useful. And I am pretty sure that it is wrong to say that others are "not remotely close to the rationality standards of Less Wrong". That's of course the case for most humans, but I think that there are quite a few people out there who are at least at the same level. I further think that it is quite funny to criticize the people on whose work your arguments for risks from AI depend.
But that's beside the point. True or not, those statements are clearly mistakes when it comes to public relations.
If you want to win in this world, as a human being, you are either smart enough to be able to overpower everyone else or you actually have to get involved in some fair amount of social engineering, signaling games and need to refine your public relations.
Are you able to solve friendly AI, without much more money, without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then you will at some point eit...
I mostly agree with the first 3/4 of your post. However...
Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing. Many won't even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism, others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.
You can't make everyone happy. Whatever policy a website has, some people will leave. I have run away from a few websites that have "no censorship, except in extreme cases" policy, because the typical consequence of such policy is some users attacking other users (weighing the attack carefully to prevent moderator's action) and some users producing huge amounts of noise. And that just wastes my time.
People leaving LW should be considered on case-by-case basis. They are not ...
The first three statements can be boiled down to saying, "I, Eliezer, am much better at understanding and developing AI than the overwhelming majority of professional AI researchers".
Is that statement true, or false ? Is Eliezer (or, if you prefer, the average SIAI member) better at AI than everyone else (plus or minus epsilon) who is working in the field of AI ?
The prior probability for such a claim is quite low, especially since the field is quite large, and includes companies such as Google and IBM who have accomplished great things. In order to sway my belief in favor of Eliezer, I'll need to witness some great things that he has accomplished; and these great things should be significantly greater than those accomplished by the mainstream AI researchers. The same sentiment applies to SIAI as a whole.
I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?
When I read that, I interpreted it to mean something like "Yes, he does come across as arrogant, but it's okay because everything he's saying is actually true." It didn't come across to me like a separate question - it read to me like a rhetorical question which was used to make a point. Maybe that's not how you intended it?
I think erratio is saying that it's important to communicate in a way that doesn't turn people off, regardless of whether what you're saying is true or not.
But I don't get it. You asked for examples and XiXiDu gave some. You can judge whether they were good or bad examples of arrogance. Asking whether the examples qualify under another, different criterion seems a bit defensive.
Also, several of the examples were of the form "I was tempted to say X" or "I thought Y to myself", so where does truth or falsity come into it?
Okay, let me try again...
XiXiDu, those are good examples of why people think SI is arrogant. Out of curiosity, do you think the statements you quote are actually false?
This isn't a useful counterargument when the subject at hand is public relations. Several organizations have been completely pwned by hostile parties cherry-picking quotes.
Interestingly, the first sentence of this comment set off my arrogance sensors (whether justified or not). I don't think it's the content of your statement, but rather the way you said it.
I believe that. My first-pass filter for theories of why some people think SIAI is "arrogant" is whether the theory also explains, in equal quantity, why those same people find Harry James Potter-Evans-Verres to be an unbearably snotty little kid or whatever. If the theory is specialized to SIAI and doesn't explain the large quantities of similar-sounding vitriol gotten by a character in a fanfiction in a widely different situation who happens to be written by the same author, then in all honesty I write it off pretty quickly. I wouldn't mind understanding this better, but I'm looking for the detailed mechanics of the instinctive sub-second ick reaction experienced by a certain fraction of the population, not the verbal reasons they reach for afterward when they have to come up with a serious-sounding justification. I don't believe it, frankly, any more than I believe that someone actually hates hates hates Methods because "Professor McGonagall is acting out of character".
I once read a book on characterization. I forget the exact quote, but it went something like, "If you want to make your villain more believable, make him more intelligent."
I thought my brain had misfired. But apparently, for the average reader it works.
"Here is a threat to the existence of humanity which you've likely never even considered. It's probably the most important issue our species has ever faced. We're still working on really defining the ins and outs of the problem, but we figure we're the best people to solve it, so give us some money."
Unless you're a fictional character portrayed by Will Smith, I don't think there's enough social status in the world to cover that.
If trying to save the world requires having more social status than humanly obtainable, then the world is lost, even if it was easy to save...
The question is one of credibility rather than capability. In private, public, academic and voluntary sectors it's a fairly standard assumption that if you want people to give you resources, you have to do a little dance to earn it. Yes, it's wasteful and stupid and inefficient, but it's generally easier to do the little dance than convince people that the little dance is a stupid system. They know that already.
It's not arrogant to say "my time is too precious to do a little dance", and it may even be true. The arrogance would be to expect people to give you those resources without the little dance. I doubt the folk at SIAI expect this to happen, but I do suspect they're probably quite tired of being asked to dance.
I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.
My thinking when I read this post went something along these lines, but where you put "made up because" I put "actually consists of". That is, acting in a way that (the observer perceives) is beyond your station is a damn good first approximation of a practical definition of 'arrogance'. I would go as far as to say that if you weren't being arrogant you wouldn't be able to do your job. Please keep on being arrogant!
The above said, there are other behaviors that will provoke the label 'arrogant' which are not beneficial. For example:
Hi Luke,
I think you are correct that SI has an image problem, and I agree that it's at least partially due to academic norm violations (and partially due to the personalities involved). And partially due to the fact that out of possible social organizations, SI most readily maps to a kind of secular cult, where a charismatic leader extracts a living from his followers.
If above is seen as a problem in need of correcting then some possibilities for change include:
(a) Adopting mainstream academic norms strategically. (b) Competing in the "mainstream marketplace of ideas" by writing research grant proposals.
There's the signalling problem from boasting in this culture, but should we also be taking a look at whether boasting is a custom that there are rational reasons for encouraging or dropping?
Since it's been seven months, I'm curious - how much of this, if any, has been implemented? TDT has been published, but it doesn't get too many hits outside of LessWrong/MIRI, for example.
This is the best example I've seen so far:
I actually intend to fix the universe (or at least throw some padding atop my local region of it, as disclaimed above)
The padding version seems more reasonable next to the original statement, but neither of these are very realistic goals for a person to accomplish. There is probably not a way to present grandiosity such as this without it coming across as arrogance or worse.
I still don't get what's actually supposed to be wrong about being arrogant. In all the examples I've found of actual arrogance, it seems a good and sensible reaction when justified, and in the alleged cases of it causing bad outcomes it is never actually the arrogance itself that does so; there is just an overconfidence causing both the arrogance and the bad outcome. Is this just some social taboo because arrogance correlates with overconfidence?
I'd be curious to see your feedback regarding the comments on this post. Do you believe that the answers to your questions were useful ? If so, what are you going to do about it (and if not, why not) ? If you have already done something, what was it, and how effective did it end up being ?
What about getting some tech/science savvy public-relations practitioners involved? Understanding and interacting effectively with the relevant publics might just be a skill worthy of dedicated consideration and more careful management.
Personally, I don't think SI is arrogant, but rather that they should work harder to publish books/papers so that they would be more accepted by the general scientific (and even non-scientific) community. Not that I think they aren't trying already...
Are there subjects and ways in which SI isn't arrogant enough?
Informally, let us suppose perceived arrogance in attempting a task is the perceived competence of the individual divided by the perceived difficulty of the task. SIAI is attempting an impossible task, without infinite competence. Thus, there is no way SIAI can be arrogant enough.
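Spelling out that informal definition (my notation, same content):

    \text{perceived arrogance} \;=\; \frac{\text{perceived competence}}{\text{perceived difficulty of the task}}
    % With an "impossible" task the denominator is effectively unbounded, so the
    % ratio goes to zero no matter how competent SIAI appears -- hence, on this
    % tongue-in-cheek model, SIAI cannot possibly be arrogant enough.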
I intended Leveling Up in Rationality to communicate this:
But some people seem to have read it and heard this instead:
This failure (on my part) fits into a larger pattern of the Singularity Institute seeming too arrogant and (perhaps) being too arrogant. As one friend recently told me:
So, I have a few questions: