I intended Leveling Up in Rationality to communicate this:

Despite worries that extreme rationality isn't that great, I think there's reason to hope that it can be great if some other causal factors are flipped the right way (e.g. mastery over akrasia). Here are some detailed examples I can share because they're from my own life...

But some people seem to have read it and heard this instead:

I'm super-awesome. Don't you wish you were more like me? Yay rationality!

This failure (on my part) fits into a larger pattern of the Singularity Institute seeming too arrogant and (perhaps) being too arrogant. As one friend recently told me:

At least among Caltech undergrads and academic mathematicians, it's taboo to toot your own horn. In these worlds, one's achievements speak for themselves, so whether one is a Fields Medalist or a failure, one gains status purely passively, and must appear not to care about being smart or accomplished. I think because you and Eliezer don't have formal technical training, you don't instinctively grasp this taboo. Thus Eliezer's claim of world-class mathematical ability, in combination with his lack of technical publications, makes it hard for a mathematician to take him seriously, because his social stance doesn't pattern-match to anything good. Reading Eliezer's arrogance as evidence of technical cluelessness was one of the reasons I didn't donate until I met [someone at SI in person]. So for instance, your boast that at SI discussions "everyone at the table knows and applies an insane amount of all the major sciences" would make any Caltech undergrad roll their eyes; your standard of an "insane amount" seems to be relative to the general population, not relative to actual scientists. And posting a list of powers you've acquired doesn't make anyone any more impressed than they already were, and isn't a high-status move.

So, I have a few questions:

 

  1. What are the most egregious examples of SI's arrogance?
  2. On which subjects and in which ways is SI too arrogant? Are there subjects and ways in which SI isn't arrogant enough?
  3. What should SI do about this?

 

307 comments
[-][anonymous]12y740

(I hope this doesn't come across as overly critical, because I'd love to see this problem fixed. I'm not dissing rationality, just its current implementation. You have declared Crocker's Rules before, so I'm giving you an emotional impression of what your recent rationality propaganda articles look like to me, and I hope that comes across not as an attack but as something that can be improved upon.)

I think many of your claims of rationality powers (about yourself and other SIAI members) look really self-congratulatory and, well, lame. SIAI plainly doesn't appear all that awesome to me, except at explaining how some old philosophical problems have been solved somewhat recently.

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?! Frankly, the only publicly visible person who strikes me as having some awesome powers is you, and from reading CSA, you seem to have had high productivity (in writing and summarizing) before you ever met LW.

Maybe there are all these awesome feats I just never get to see because I'm not at SIAI, but I've seen similar levels of confidence in your methods and wea... (read more)

Thought experiment

If the SIAI were a group of self-interested/self-deceiving individuals, similar to New Age groups, who had made up all this stuff about rationality and FAI as a cover for fundraising, what different observations would we expect?

I would expect them to:

  1. Never hire anybody or hire only very rarely
  2. Not release information about their finances
  3. Avoid high-profile individuals or events
  4. Laud their accomplishments a lot without producing concrete results
  5. Charge large amounts of money for classes/training
  6. Censor dissent on official areas, refuse to even think about the possibility of being a cult, etc.
  7. Not produce useful results

SIAI does not appear to fit 1 (I'm not sure what the standard is here), certainly does not fit 2 or 3, debatably fits 4, and certainly does not fit 5 or 6. 7 is highly debatable but I would argue that the Sequences and other rationality material are clearly valuable, if somewhat obtuse.

5private_messaging12y
That goes for self-interested individuals with high rationality, purely material goals, and very low self-deception. The self-deceived case, on the other hand, covers people whose self-interest includes 'feeling important' and 'believing oneself to be awesome' and perhaps even 'taking a shot at becoming the saviour of mankind'. In that case you should expect them to see awesomeness in anything that might possibly be awesome (various philosophy, various confused texts that might be becoming mainstream for all we know, you get the idea), combined with an absence of anything that is definitely awesome and can't be trivial (a new algorithmic solution to a long-standing, well-known problem that others have worked on, something practically important enough, etc.).

I wouldn't have expected them to hire Luke. If Luke had been a member all along and everything had just been planned to make them look more convincing, that would imply a level of competence at such things from which I'd expect all-round better execution (which would have helped more than the slight gain in believability from faking a lower level of PR competence).

3RobertLumley12y
I would not expect their brand of rationality to work in my own life. Which it does.

What evidence have you? Lots of New Age practitioners claim that New Age practices work for them. Scientology does not allow members to claim levels of advancement until they attest to "wins".

For my part, the single biggest influence that "their brand of rationality" (i.e. the Sequences) has had on me may very well be that I now know how to effectively disengage from dictionary arguments.

8FiftyTwo12y
Even if certain rationality techniques are effective, that's separate from the claims about the rest of the organisation. It's similar to the early-level Scientology classes being useful social hacks while the overall structure is less so.
0Blueberry12y
They are? Do you have a reference? I thought they were weird nonsense about pointing to things and repeating pairs of words and starting at corners of rooms and so on.
2RobertLumley12y
Markedly increased general satisfaction in life, better success at relationships, both intimate and otherwise, noticing systematic errors in thinking, etc. I haven't bothered to collect actual data (which wouldn't do much good since I don't have pre-LW data anyway) but I am at least twice as happy with my life as I have been in previous years.
9Karmakaiser12y
This is the core issue with rationality at present. Until and unless some intrepid self-data-collectors track their personal lives post-Sequences, we have a collection of smart people who post nice anecdotes. I admit that, like you, I didn't have the presence of mind to start collecting data, as I can't keep a diary current. But without real data we will have continued trouble convincing people that this works.
3RobertLumley12y
I was thinking the other day that I desperately wished I had written down my cached thoughts (and more importantly, cached feelings) about things like cryonics (in particular), politics, or [insert LW topic of choice here] before reading LW, so that I could compare them now. I don't think I had ever really thought about cryonics, or if I had, I had a node linking it to crazy people. Actually, now that I think about it, that's not true. I remember thinking about it once when I first started in research, when we were unfreezing lab samples, and considering whether or not cryonicists have a point. I don't remember what I felt about it, though.
4Karmakaiser12y
One of the useful things about the internet is its record-keeping ability, combined with humans' natural tendency to comment on things they know nothing about. Are you aware of being on record on a forum or social media site, pre-LW, on issues that LW has dealt with?
2RobertLumley12y
Useful and harmful. ;-) Yes, to an extent. I've had Facebook for about six years (I found HPMOR about 8 months ago, and LW about 7?) but I deleted the majority of easily accessible content and do not post anything particularly introspective on there. I know, generally, how I felt about more culturally popular memes; what I really wish I remembered, though, are my views on things like cryonics or the singularity, to which I never gave serious consideration before LW. Edit: At one point, I wrote a program to click the "Older posts" button on Facebook so I could go back and read all of my old posts, but it's been made largely obsolete by the timeline feature.
1gwern12y
It's probably a bit late for many attitudes of mine, but I have made a stab at this by keeping copies of all my YourMorals.org answers and listing other psychometric data at http://www.gwern.net/Links#profile (And I've retrospectively listed in an essay the big shifts that I can remember; hopefully I can keep it up to date and obtain a fairly complete list over my life.)
0gwern12y
IIRC, wasn't a bunch of data-collection done for the Bootcamp attendees, which was aimed at resolving precisely that issue?

I appreciate the tone and content of your comment. Responding to a few specific points...

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?!

There are many things we aren't (yet) good at. There are too many things about which to check the science, test, and update. In fact, our ability to collaborate successfully with volunteers has greatly improved in the last month, in part because we implemented some advice from the GWWC gang, who are very good at collaborating with volunteers.

the only publicly visible person who strikes me as having some awesome powers is you

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them... (read more)

[-][anonymous]12y540

I don't think you're taking enough of an outside view. Here's how these accomplishments look to "regular" people:

CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team.

You wrote something 11 years ago which you now consider defunct and which still is not a mainstream view in any field.

The Sequences are simply awesome.

You wrote a series of esoteric blog posts that some people like.

And he did manage to write the most popular Harry Potter fanfic of all time.

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.

You have a guy who is pretty smart. Ok...

The point ... (read more)

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

It's actually been incredibly useful in establishing the credibility of every x-risk argument I've had with people my age.

"Have you read Harry Potter and the Methods of Rationality?"

"YES!"

"Ah, awesome!"

merriment ensues

topic changes to something about things that people are doing

"So anyway the guy who wrote that also does...."

[-][anonymous]12y210

Again, take the outside outside view. The kind of conversation you described only happens with people who have read HPMoR--just telling people about the fic isn't really impressive. (Especially if we are talking about the 90+% of the population who know nothing about fanfiction.) Ditto for the Sequences: they're only impressive after the fact. Compare this to publishing a number of papers in a mainstream journal, which is a huge status boost even to people who have never actually read the papers.

3atucker12y
I don't think that that kind of status converts nearly as well as establishing a niche of people who start adopting your values, and then talking to them.
[-][anonymous]12y170

Perhaps not, but Luke was using HPMoR as an example of an accomplishment that would help negate accusations of arrogance, and for the majority of "regular" people, hearing that SIAI published journal articles does that better than hearing that they published Harry Potter fanfiction.

4pjeby12y
The majority of "regular" people don't know what journals are; apart from the Wall Street Journal and the New England Journal of Medicine, they mostly haven't heard of any. If asked about journal articles, many would say, "you mean like a blog?" (if younger) or think you were talking about a diary or a newspaper (if older). They have, however, heard of Harry Potter. ;-)
1private_messaging12y
You know what would be awesome? If Eliezer had written the original Harry Potter to obtain funding for SI. Seriously, there are plenty of people whom I would not pay to work on AI who have accomplished far more than anyone at SI, in more relevant fields.

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

I wasn't aware of Google's AGI team accepting CFAI. Is there a list of organizations that consider the Friendly AI issue important?

I wasn't even aware of "Google's AGI team"...

0lukeprog11y
Update: please see here.
1beoShaffer12y
Building off of this and my previous comment, I think that more, and more visible, rationality verification could help. First off, opening your ideas up to tests generally reduces perceptions of arrogance. Secondly, successful results would have effects similar to the technical accomplishments I mentioned above. (Note that I expect wide-scale rationality verification to increase the amount of pro-LW evidence that can be easily presented to outsiders, not to increase my own confidence. Thus this isn't in conflict with conservation of evidence.)
-2Solvent12y
Eliezer is pretty amazing. He's written some brilliant fiction, and some amazing stuff in the Sequences, plus CFAI, CEV, and TDT.

My #1 suggestion, by a big margin, is to generate more new formal math results.

My #2 suggestion is to communicate more carefully, like Holden Karnofsky or Carl Shulman. Eliezer's tone is sometimes too preachy.

SI is arrogant because it pretends to be even better than science, while failing to publish in significant scientific journals. If this does not seem like pseudoscience or a cult, I don't know what does.

So please either stop pretending to be so great or prove it! For starters, it is not necessary to publish a paper about AI; you can choose any other topic.

No offense; I honestly think you are all awesome. But there are some traditional ways to prove one's skills, and if you don't accept the challenge, you look like wimps. Even if the ritual is largely a waste of time (all signals are costly), there are thousands of people who have passed it, so a group of x-rational gurus should be able to use their magical powers and do it in five minutes, right?

Yeah. The best way to dispel the aura of arrogance is to actually accomplish something amazing. So, SIAI should publish some awesome papers, or create a powerful (1) AI capable of some impressive task like playing Go (2), or end poverty in Haiti (3), or something. Until they do, and as long as they're claiming to be super-awesome despite the lack of any non-meta achievements, they'll be perceived as arrogant.

(1) But not too powerful, I suppose.
(2) Seeing as Jeopardy is taken.
(3) In a non-destructive way.

0Regex8y
2016 update: Go is now also taken. The number of impressive tasks remaining approaches zero as t → ∞! If not to AI or heat death, we're doomed to having already done everything amazing.
2DuncanS12y
There are indeed times you can get the right answer in five minutes (no, seconds), but it still takes the same length of time as for everyone else to write the thing up into a paper.

How much is that "same length of time"? Hours? Days? If 5 days of work could make LW acceptable in scientific circles, is it not worth doing? It is better to complain why oh why more people don't treat SI seriously?

Can some part of that work be outsourced? Just write the outline of the answer, then find some smart guy in India and pay him like $100 to write it? Or, if money is not enough for people who could write the paper well, could you bribe someone by offering them co-authorship? Graduate students have to publish papers anyway, so if you give them a complete solution, they should be happy to cooperate.

Or set up a "scientific wiki" on SI site, where the smartest people will write the outlines of their articles, and the lesser brains can contribute by completing the texts.

These are my solutions, which seem rather obvious to me. I am not sure they would work, but I guess trying them is better than doing nothing. Could a group of x-rational gurus find seven more solutions in five minutes?

From outside, this seems like: "Yeah, I totally could do it, but I will not. Now explain to me why people who can do it are perceived as more skilled than me." -- "Because they showed everyone they can do it, duh."

3Benya12y
Upvoted for clearly pointing out the tradeoff (yes publicly visible accomplishments that are easy to recognize as accomplishments may not be the most useful thing to work on, but not looking awesome is a price paid for that and needs to be taken into account in deciding what's useful). However, I want to point out that if I heard that an important paper was written by someone who was paid $100 and doesn't appear on the author list, my crackpot/fraud meter (as related to the people on the author list) would go ping-Ping-PING, whether that's fair or not. This makes me worry that there's still a real danger of SIAI sending the wrong signals to people in academia (for similar but different reasons than in the OP).

in combination with his lack of technical publications

I think it would help for EY to submit more of his technical work for public judgment. Clear proof of technical skill in a related domain makes claims less likely to come off as arrogant. For that matter it also makes people more willing to accept actions that they do perceive as arrogant.

The claim that donating to the SIAI is the charity donation with the highest expected return* has always struck me as rather arrogant, though I can see the logic behind it.

The problem is firstly that it's an extremely self-serving statement (equivalent to "giving us money is the best thing you can ever possibly do"); even if true, its credibility is reduced by the fact that the claim comes from the same person who would benefit from it.

Secondly, it requires me to believe a number of claims which individually carry a burden of proof, and which carry an even greater one in conjunction. These include: "Strong AI is possible," "friendly AI is possible," "the actions of the SIAI will significantly affect the results of investigations into FAI," and "the money I donate will significantly improve the effectiveness of the SIAI's research" (I expect the relationship between research effectiveness and funding isn't linear). All of which I have only your word for.

Thirdly, contrast this with other charities that are known to be very effective and can prove it, and whose results affect presently suffering people (e.g. the Against Malaria Foundation).

Caveat, I'm not arguing any of the clai... (read more)

-1lukeprog12y
I feel like I've heard this claimed, too, but... where? I can't find it. Here is the latest fundraiser; which line were you thinking of? I don't see it.
[-][anonymous]12y170

I feel like I've heard this claimed, too, but... where? I can't find it.

Question #5.

7lukeprog12y
Yup, there it is! Thanks. Eliezer tends to be more forceful on this than I am, though. I seem to be less certain about how much x-risk reduction is purchased by donating to SI as opposed to donating to FHI or GWWC (because GWWC's members are significantly x-risk focused). But when this video was recorded, FHI wasn't working as much on AI risk (like it is now), and GWWC barely existed. I am happy to report that I'm more optimistic about the x-risk reduction purchased per dollar when donating to SI now than I was 6 months ago. Because of stuff like this. We're getting the org into better shape as quickly as possible.

because GWWC's members are significantly x-risk focused

Where is this established? As far as I can tell, one cannot donate "to" GWWC, and none of their recommended charities are x-risk focused.

2Thrasymachus12y
(Belated reply): I can only offer anecdotal data here, but as one of the members of GWWC, I can say that many of the members are interested. Also, from listening to the directors, most of them are interested in x-risk issues as well. You are right in that GWWC isn't a charity (although it is likely to turn into one), and their recommendations are non-x-risk. The rationale is that recommending charities depends on reliable data, and x-risk is one of those areas where a robust "here's how much more likely a happy singularity will be if you give to us" analysis looks very hard.
1Barry_Cotter12y
Neither can I, but IIRC Anna Salamon did an expected-utility calculation which came up with eight lives saved per dollar donated, no doubt impressively caveated and with error bars aplenty.
7lukeprog12y
I think you're talking about this video. Without watching it again, I can't remember if Anna says that SI donation could buy something like eight lives per dollar, or whether donation to x-risk reduction in general could buy something like eight lives per dollar.

Having been through physics grad school (albeit not of Caltech caliber), I can confirm that a lack of (real or false) modesty is a major red flag, and a tell-tale sign of a crank. Hawking does not refer to black-hole radiation as Hawking radiation, and Feynman did not call his diagrams Feynman diagrams, at least not in public. A thorough literature review in the introduction section of any worthwhile paper is a must, unless you are Einstein, or can reference your previous relevant paper where you dealt with it.

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org (cs.DM or similar), properly referenced and formatted to conform with the prevailing standard (probably LaTeXed), and submitting them to conference proceedings and/or peer-reviewed journals. Anything less would be less than rational.

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org...

Even Greg Egan managed to copublish papers on arxiv.org :-)

ETA

Here is what John Baez thinks about Greg Egan (science fiction author):

He's incredibly smart, and whenever I work with him I feel like I'm a slacker. We wrote a paper together on numerical simulations of quantum gravity along with my friend Dan Christensen, and not only did they do all the programming, Egan was the one who figured out a great approximation to a certain high-dimensional integral that was the key thing we were studying. He also more recently came up with some very nice observations on techniques for calculating square roots, in my post with Richard Elwes on a Babylonian approximation of sqrt(2). And so on!

That's actually what academics should be saying about Eliezer Yudkowsky if it is true. How does an SF author manage to get such a reputation instead?

5gwern12y
That actually explains a lot for me - when I was reading The Clockwork Rocket, I kept thinking to myself, 'how the deuce could anyone without a physics degree follow the math/physics in this story?' Well, here's my answer - he's still up on his math, and now that I check, I see he has a BS in math too.
4arundelo12y
I thought this comment by Egan said something interesting about his approach to fiction: (I enjoyed Incandescence without taking notes. If, while I was reading it, I had been quizzed on the direction words, I would have done OK but not great.) Edit: The other end of the above link contains spoilers for Incandescence. To understand the portion I quoted, it suffices to know that some characters in the story have their own set of six direction words (instead of "up", "down", "north", "south", "east", and "west"). Edit 2: I have a bit of trouble keeping track of characters in novels. When I read on my iPhone, I highlight characters' names as they're introduced, so I can easily refresh my memory when I forget who someone is.
1gwern12y
Yes, he's pretty unapologetic about his elitism - if you aren't already able to follow his concepts or willing to do the work so you can, you are not his audience and he doesn't care about you. Which isn't a problem with Incandescence, whose directions sound perfectly comprehensible, but is much more of an issue with TCR, which builds up an entire alternate physics.
2Pablo10y
What's the source for that quote? A quick Google search failed to yield any relevant results.
2XiXiDu10y
Private conversation with John Baez (I asked him if I am allowed to quote him on it). You can ask him to verify it.
2mwengler12y
To be fair, Eliezer gets good press from Professor Robin Hanson. This is one of the main bulwarks of my opinion of Eliezer and SIAI. (Other bulwarks include having had the distinct pleasure of meeting lukeprog at a few meetups and meeting Anna at the first meetup I ever attended. Whatever else is going on at SIAI, there is a significant amount of firepower in the rooms.)
6ScottMessick12y
Yes, and isn't it interesting to note that Robin Hanson sought his own higher degrees for the express purpose of giving his smart contrarian ideas (and way of thinking) more credibility?
0Viliam_Bur12y
By publishing his results at a place where scientists publish.
3[anonymous]12y
I agree wholeheartedly, of course -- except with the last sentence. There's a not-very-good argument that the opportunity cost of EY learning LaTeX is greater than the opportunity cost of having others edit afterward. There's also a not-very-good argument that EY doesn't lose terribly much from his lack of academic signalling credentials. Together these combine into a weak argument that the current course is in line with what EY wants, or perhaps would want if he knew all the relevant details.

For someone who knows how to program, learning LaTeX to a perfectly serviceable level should take at most one day's worth of effort, and most of that would likely be spread diffusely throughout actual use, with maybe a couple of hours' dedicated introduction to begin with.

It is quite possible that, considering the effort required to find an editor and organise for that editor to edit an entire paper into LaTeX, compared with the effort required to write the paper in LaTeX in the first place, the additional effort cost of learning LaTeX may in fact pay for itself after less than one whole paper. It's very unlikely that it would take more than two.

8dbaupp12y
And one gets all the benefits of a text document while writing it (grep-able, version control, etc.). (It should be noted that if one is writing LaTeX, it is much easier with a LaTeX-specific editor, or one with an advanced LaTeX mode.)
5lukeprog12y
I'm not at all confident that writing (or collaborating on) academic papers is the most x-risk-reducing way for Eliezer to spend his time.
8Bugmaster12y
Speaking of arrogance and communication skills: your comment sounds very similar to, "Since Eliezer is always right about everything, there's no need for him to waste time on seeking validation from the unwashed academic masses, who likely won't comprehend his profound ideas anyway". Yes, I am fully aware that this is not what you meant, but this is what it sounds like to me.
2lukeprog12y
Interesting. That is a long way from what I meant. I just meant that there are many, many ways to reduce x-risk, and it's not at all clear that writing papers is the optimal way to do so, and it's even less clear that having Eliezer write papers is so.
6Bugmaster12y
Yes, I understood what you meant; my comment was about style, not substance. Most people (myself included, to some non-trivial degree) view publication in academic journals as a very strong test of one's ideas. Once you publish your paper (or so the belief goes), the best scholars in the field will do their best to pick it apart, looking for weaknesses that you might have missed. Until that happens, you can't really be sure whether your ideas are correct. Thus, by saying "it would be a waste of Eliezer's time to publish papers", what you appear to be saying is, "we already know that Eliezer is right about everything". And by combining this statement with saying that Eliezer's time is very valuable because he's reducing x-risk, you appear to be saying that either the other academics don't care about x-risk (in which case they're clearly ignorant or stupid), or that they would be unable to recognize Eliezer's x-risk-reducing ideas as being correct. Hence, my comment above. Again, I am merely commenting on the appearance of your post, as it could be perceived by someone with an "outside view". I realize that you did not mean to imply these things.
3wedrifid12y
That really isn't what Luke appears to be saying. It would be fairer to say "a particularly aggressive reader could twist this so that it means..." It may sometimes be worth optimising speech such that it is hard to even willfully misinterpret what you say (or to interpret it based on an already particularly high prior for "statement will be arrogant"), but this is a different consideration from trying not to (unintentionally) appear arrogant to a neutral audience.
5JoshuaZ12y
For what it is worth, I had an almost identical reaction when reading the statement.
0Bugmaster12y
Fair enough; it's quite possible that my interpretation was too aggressive.
0wedrifid12y
It's the right place for erring on the side of aggressive interpretation. We've been encouraged (and primed) to do so!
8mwengler12y
I think the evolution is towards a democratization of the academic process. One could say the cost of academia was so high in the middle ages that the smart move was filtering the heck out of participants to at least have a chance of maximizing the utility of those scarce resources. And now those costs have been driven to nearly zero, with the largest cost being the signal-to-noise problem: how does a smart person choose what to look at? I think putting your signal into locations where the type of person you would like to attract gathers is the best bet. Web publication of papers is one. Scientific meetings are another. I don't think you can find an existing institution more chock full of people you would like to be involved with than the math-science-engineering academic institutions. Market in them. If there is no one who can write an academic math paper who is interested enough in EY's work to translate it into something somewhat recognizable as valuable by his peers, then the emperor is wearing no clothes. As a PhD Caltech applied physicist who has worked with optical interferometers both in real life and in QM calculations (published in journals), EY's stuff on interferometers is incomprehensible to me. I would venture to say "wrong" but I wouldn't go that far without discussing it in person with someone. Robin Hanson's endorsement of EY is the best credential he has for me. I am a Caltech grad and I love Hanson's "freakonomics of the future" approach, but his success at being associated with great institutions is not a trivial factor in my thinking I am right to respect him. Get EY or lukeprog or Anna or someone else from SIAI on Russ Roberts' podcast. Robin has done it. Overall, SIAI serves my purposes pretty well as is. But I tend to view SIAI as pushing a radical position about some sort of existential risk and beliefs about AI, where the real value is probably not quite as radical as what they push. An example from history would be BF Skinner and behaviori
4Adele_L12y
Similarly, the fact that Scott Aaronson and John Baez seem to take him seriously is a significant credential for me.
8[anonymous]12y
I thought we were talking about the view from outside the SIAI?
8lukeprog12y
Clearly, Eliezer publishing technical papers would improve SI's credibility. I'm just pointing out that this doesn't mean that publishing papers is the best use of Eliezer's time. I wasn't disagreeing with you; just making a different point.

Publishing technical papers would be one of the better uses of his time; editing and formatting them probably is not. If you have no volunteers, you can easily find a starving grad student who would do it for peanuts.

3[anonymous]12y
Well, they've got me for free.
0shminux12y
You must be allergic to peanuts.
0[anonymous]12y
Not allergic, per se. But I doubt they would willingly throw peanuts at me, unless perhaps I did a trick with an elephant.
0[anonymous]12y
I'm not disagreeing with you either.
2shminux12y
I would see what the formatting standards are in the relevant journals and find a matching document class or a LyX template. Someone other than Eliezer can certainly do that.
[-][anonymous]12y360

I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.

If this is the case, it sounds like EY has a Chuck Norris problem, i.e., his mythos has spread beyond its reality.

Yes. At various times we've considered hiring EY an advanced math tutor to take him to the next level more quickly. He's pretty damn good at math but he's not Terence Tao.

5[anonymous]12y
So did you ask your friend where this notion of theirs came from?
0Kaj_Sotala12y
I have a memory of EY boasting about how he learned to solve high school/college level math before the age of ten, but I couldn't track down where I read that.
5Kaj_Sotala12y
Ah, here is the bit I was thinking about:
2Desrtopa12y
I don't remember the post, but I'm pretty sure I remember that Eliezer described himself as a coddled math prodigy, not having been made to train seriously and compete, and so he lags behind math prodigies who were made to hone their skills that way, like Marcello.
1mwengler12y
It's in the Wayback Machine link in the post you are commenting on!
0Kaj_Sotala12y
I hadn't read that link before, so it was somewhere else, too.

I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.

I too don't remember that he ever claimed to have remarkable math ability. He's said that he was a "spoiled math prodigy" (or something like that), meaning that he showed precocious math ability while young, but he wasn't really challenged to develop it. Right now, his knowledge seems to be around the level of a third- or fourth-year math major, and he's never claimed otherwise. He surely has the capacity to go much further (as many people who reach that level do), but he hasn't even claimed that much, has he?

7private_messaging12y
This leaves one wondering how the hell one could be this concerned about AI risk yet not study math properly. How can one go on about Bayesian this and Bayesian that but not study? How can one trust one's intuitions about how much computational power is needed for AGI, and not want to improve those intuitions? I've speculated elsewhere that he would likely be unable to implement a general Bayesian belief-propagation graph or even know what is involved (it's an NP-complete problem in general, and the accuracy of the solution depends on heuristics. Yes, heuristics. Biased ones, too). That's very bad when it comes to understanding rationality, as you will start going on with maxims like "update all your beliefs" etc., which look outright stupid to e.g. me (I assure you I can implement a Bayesian belief-propagation graph), and it triggers my "it's another annoying person talking about things he has no clue about" reflex. If you talk about Bayesian this and Bayesian that, you had better know the mathematics very well, because in practice all those equations get awfully hairy on general graphs (not just trees). If you don't know the relevant math very well and you call yourself Bayesian, you are professing a belief in belief. And even if you do not explicitly claim extreme mathematical skill and knowledge, if you go on about Bayesian this and that, other people will have to assume extreme mathematical skill and knowledge out of politeness.
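To make the belief-propagation point concrete, here is a minimal illustrative sketch (mine, not from the thread; the variables and potentials are invented for the example) of exact sum-product message passing on a three-variable chain. On tree-structured graphs this is exact and cheap; on graphs with loops the same message-passing scheme becomes the heuristic approximation ("loopy BP") whose accuracy issues the comment alludes to.

```python
# Illustrative sketch only: exact sum-product belief propagation on a
# 3-node chain A - B - C of binary variables. All potentials are made up.
import numpy as np

# Unary potentials (unnormalized local evidence) for A, B, C.
phi = {
    "A": np.array([0.9, 0.1]),
    "B": np.array([0.5, 0.5]),
    "C": np.array([0.2, 0.8]),
}
# Pairwise potential shared by edges A-B and B-C; it favors agreement.
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])

# Messages passed inward to B, then B's belief (its marginal).
m_A_to_B = psi.T @ phi["A"]      # sum over A of phi(A) * psi(A, B)
m_C_to_B = psi @ phi["C"]        # sum over C of psi(B, C) * phi(C)
belief_B = phi["B"] * m_A_to_B * m_C_to_B
belief_B /= belief_B.sum()

# Brute-force check over all 2^3 joint configurations.
joint = np.einsum("a,b,c,ab,bc->abc",
                  phi["A"], phi["B"], phi["C"], psi, psi)
marginal_B = joint.sum(axis=(0, 2))
marginal_B /= marginal_B.sum()

print(belief_B, marginal_B)  # the two agree exactly, because the graph is a tree
```

On a graph with cycles the same local message updates no longer reproduce the exact marginals, which is why practical systems fall back on approximations and heuristics.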
5David_Gerard12y
Yes.

There's a phrase that the tech world uses to describe the kind of people you want to hire: "smart, and gets things done." I'm willing to grant "smart", but what about the other one?

The Sequences and HPMoR are fantastic introductory/outreach writing, but they're all a few years old at this point. The rhetoric about SI being more awesome than ever doesn't square with the trend I observe* in your actual productivity. To be blunt, why are you happy that you're doing less with more?

*I'm sure I don't know everything SI has actually done in the last year, but that's a problem too.

To educate myself, I visited the SI site and read your December progress report. I should note that I've never visited the SI site before, despite having donated twice in the past two years. Here are my two impressions:

  • Many of these bullet points are about work in progress and (paywalled?) journal articles. If I can't link it to my friends and say, "Check out this cool thing," I don't care. Tell me what you've finished that I can share with people who might be interested.
  • Lots on transparency and progress reporting. In general, your communication strategy seems focused on people who already are aware of and follow SIAI closely. These people are loud, but they're a small minority of your potential donors.
5lukeprog12y
Of course, things we finished before December 2011 aren't in the progress report. E.g. The Singularity and Machine Ethics. Not really. We're also working on many things accessible to a wider crowd, like Facing the Singularity and the new website. Once the new website is up we plan to write some articles for mainstream magazines and so on.
6Paul Crowley12y
"smart and gets things done" I think originates with Joel Spolsky: http://www.joelonsoftware.com/articles/fog0000000073.html

I agree with what has been said about the modesty norm of academia; I speculate that it arises because if you can avoid washing out of the first-year math courses, you're already one or two standard deviations above average, and thus you are in a population in which achievements that stood out in a high school (even a good one) are just not that special. Bragging about your SAT scores, or even your grades, begins to feel a bit like bragging about your "Participant" ribbon from sports day. There's also the point that the IQ distribution in a good physics department is not Gaussian; it is the top end of a Gaussian, sliced off. In other words, there's a lower bound and an exponential frequency decay from there. Thus, most people in a physics department are on the lower end of their local peer group. I speculate that this discourages bragging because the mass of ordinary plus-two-SDs doesn't want to be reminded that they're not all that bright.

However, all that aside: Are academics the target of this blog, or of lukeprog's posts? Propaganda, to be effective, should reach the masses, not the elite - although there's something to be said for "Get the elite and the masses ... (read more)

4Karmakaiser12y
So if I could restate the norms of academia vis-à-vis modesty: "Do the impossible. But don't forget to shut up as well." Is that a fair characterization?

Well, no, I don't think so. Most academics do not work on impossible problems, or think of this as a worthy goal. So it should be more like "Do cool stuff, but let it speak for itself".

Moderately related: I was just today in a meeting to discuss a presentation that an undergraduate student in our group will be giving to show her work to the larger collaboration. On her first page she had:

Subject

Her name

Grad student helping her

Dr supervisor no 1

Dr supervisor no 2

And to start off our critique, supervisor 1 mentioned that, in the subculture of particle physics, it is not the custom to list titles, at least for internal presentations. (If you're talking to a general audience the rules change.) Everyone knows who you are and what you've done! Thus, he gave the specific example that, if you mention "Leon", everyone knows you speak of Leon Lederman, the Nobel-Prize winner. But as for "Dr Lederman", pff, what's a doctorate? Any idiot can be a doctor and many idiots (by physics standards, that is) are; if you're not a PhD it's at least assumed that you're a larval version of one. It's just not a very unusual accomplishment in these circles. To have your first ... (read more)

2asr12y
I have seen this elsewhere in the academy as well. At many elite universities, professors are never referred to as Dr-so-and-so. Everybody on the faculty has a doctorate. They are Professor-so-and-so. At some schools, I'm told they are referred to as Mr or Mrs-so-and-so. Similar effect: "we know who's cool and high-status and don't need to draw attention to it."
1jsteinhardt12y
Wow, I didn't even consciously recognize this convention, although I would definitely never, for instance, add titles to the author list of a paper. So I seem to have somehow picked it up without explicitly deciding to.

I've recommended this before, I think.

I think that you should get Eliezer to say the accurate but arrogant-sounding things, because everyone already knows he's like that. You yourself, Luke, should be more careful about maintaining a humble tone.

If you need people to say arrogant things, make them ghost-write for Eliezer.

Personally, I think that a lot of Eliezer's arrogance is deserved. He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's problems. CFAI was way ahead of its time, as TDT still is. So he can feel smug. He's got a reputation as an arrogant eccentric genius anyway.

But the rest of the organisation should try to be more careful. You should imitate Carl Shulman rather than Eliezer.

I think having people ghost-write for Eliezer would be a deeply suboptimal solution in the long run. It removes integrity from the process. SI would become insufficiently distinguishable from Scientology or a political party if it did this.

Eliezer is a real person. He is not "Big Brother" or some other fictional figurehead used to manipulate the followers. The kind of people you want, and have, following SI or LessWrong will discount Eliezer too much when (not if) they find out he has become a fiction employed to manipulate them.

4Solvent12y
Yeah, I kinda agree. I was slightly exaggerating my position for clarity. Maybe not full-on ghost-writing. But occasionally, having someone around who can say what he wants without further offending anybody can be useful. Like, part of the reason the Sequences are awesome is that he personally claims that they are. Also, Eliezer says: So occasionally SingInst needs to say something that sounds arrogant. I just think that when possible, Eliezer should say those things.

He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's problems.

As a curiosity, what would the world look like if this were not the case? I mean, I'm not even sure what it means for such a sentence to be true or false.

Addendum: Sorry, that was way too hostile. I accidentally pattern-matched your post to something that an Objectivist would say. It's just that, in professional philosophy, there does not seem to be a consensus on what a "problem of philosophy" is. Likewise, there does not seem to be a consensus on what a solution to one would look like. It seems that most "problems" of philosophy are dismissed, rather than ever solved.

Here are examples of these philosophical solutions. I don't know which of these he solved personally, and which he simply summarized others' answers to:

  • What is free will? Oops, wrong question. Free will is what a decision-making algorithm feels like from the inside.

  • What is intelligence? The ability to optimize things.

  • What is knowledge? The ability to constrain your expectations.

  • What should I do about Newcomb's problem? TDT answers this.

...other examples include inventing Fun theory, using CEV to make a better version of utilitarianism, and arguing for ethical injunctions using TDT.

And so on. I know he didn't come up with these on his own, but at the least he brought them all together and argued convincingly for his answers in the Sequences.

I've been trying to figure out these problems for years. So have lots of philosophers. I have read these various philosophers' proposed solutions, and disagreed with them all. Then I read Eliezer, and agreed with him. I feel that this is strong evidence that Eliezer has actually created something of value.

9J_Taylor12y
I admire the phrase "what an algorithm feels like from the inside". This is certainly one of Yudkowsky's better ideas, if it is one of his. I think that one can see the roots of it in G.E.B. Still, this may well count as something novel. Nonetheless, Yudkowsky is not the first compatibilist. One could define the term in such a way. I tend to take an instrumentalist view of intelligence. However, "the ability to optimize things" may well be a thing. You may as well call it intelligence, if you are so inclined. This, nonetheless, may not be a solution to the question "what is intelligence?". It seems as though most competent naturalists have moved past the question. I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate? I have absolutely no knowledge of the history of Newcomb's problem. I apologize. Further apologies for the following terse statements: I don't think Fun Theory is known by academia. Also, it looks like, at best, a contemporary version of eudaimonia. The concept of CEV is neat. However, I think if one were to create an ethical version of the pragmatic definition of truth, "The good is the end of inquiry" would essentially encapsulate CEV. Well, as far as one can encapsulate a complex theory with a brief statement. TDT is awesome. Predicted by the superrationality of Hofstadter, but so what? I don't mean to discount the intelligence of Yudkowsky. Further, it is extremely unkind of me to be so critical of him, considering how much he has influenced my own thoughts and beliefs. However, he has never written a "Two Dogmas of Empiricism" or a Naming and Necessity. Philosophical influence is something that probably can only be seen, if at all, in retrospect. Of course, none of this really matters. He's not trying to be a good philosopher. He's trying to save the world.
3Solvent12y
Okay, the Gettier problem. I can explain the Gettier problem, but it's just my explanation, not Eliezer's. The Gettier problem points out problems with the definition of knowledge as justified true belief. "Justified true belief" (JTB) is an attempt at defining knowledge. However, it falls into philosophy's classic problem of using intuition wrongly, and has a variety of other issues. Lukeprog discusses the weakness of conceptual analysis here. Also, it's only for irrational beings like humans that there is a distinction between "justified" and "belief." An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn't usefully define knowledge anyway. Incidentally, I just re-read this post, which says: So perhaps Eliezer didn't create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Leibniz and calculus, really.
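As a minimal illustration of the "degrees of belief" point (my sketch, not part of the original comment; the numbers are invented), a Bayesian reasoner replaces the binary justified/unjustified distinction with a probability that moves according to the strength of the evidence:

```python
# Illustrative sketch: belief as a probability updated by Bayes' rule.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

belief = 0.5                               # initially undecided about H
belief = bayes_update(belief, 0.8, 0.2)    # strong evidence -> 0.80
belief = bayes_update(belief, 0.55, 0.45)  # weak evidence -> ~0.83
print(belief)
```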
6asr12y
I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable. An AI is very likely to have beliefs or behaviors that are irrational, to have rational beliefs that cannot be effectively proved to be such, and to have no reliable way to distinguish the two.
7XiXiDu12y
Isn't this also true for expected-utility maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question. Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational but computationally limited, yet real artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where indeed there are no faces, just like humans do, but on very different occasions. Or take the answers of IBM Watson. Some were wrong, but in completely new ways. That's a real danger in my opinion.
4wedrifid12y
Honest answer: Yes. For example 1 utilon per paperclip.
3lessdazed12y
I appreciate the example. It will serve me well. Upvoted.
3J_Taylor12y
I am aware of the Gettier Problem. I just do not see the phrase "the ability to constrain one's expectations" as being a proper conceptual analysis of "knowledge." If it were a conceptual analysis of "knowledge", it probably would be vulnerable to Gettierization. I love Bayesian epistemology. However, most Bayesian accounts which I have encountered either do away with knowledge-terms or redefine them in such a way that they entirely fail to match the folk term "knowledge". Attempting to define "knowledge" is probably attempting to solve the wrong problem. This is a significant weakness of traditional epistemology. I am not entirely familiar with Eliezer's history. However, he is clearly influenced by Hofstadter, Dennett, and Jaynes. From just the first two, one could probably assemble a working account which is weaker than, but has surface resemblances to, Eliezer's espoused beliefs. Also, I have never heard of Hooke independently inventing calculus. It sounds interesting, however. Still, are you certain you are not thinking of Leibniz?
0Solvent12y
Oops, fixed. I'll respond to the rest of what you said later.
0MatthewBaker12y
To quickly sum up Newcomb's problem: it's a decision problem in which choosing the seemingly more "rational" option results in a great deal less currency under a traditional probability-based decision theory. TDT takes steps to avoid getting stuck two-boxing (choosing the seemingly more rational of the two options) while still applying in the vast majority of other situations.
0J_Taylor12y
Apologies, I know what Newcomb's problem is. I simply do not know anything about its history and the history of its attempted solutions.
0lessdazed12y
...efficiently. Most readers will misinterpret that. The question for most was/is instead "Formally, why should I one-box on Newcomb's problem?"

What should SI do about this?

I think that separating instrumental rationality from the Singularity/FAI ideas will help. Hopefully this project is coming along nicely.

8lukeprog12y
Yes, we're full steam ahead on this one.

(I was going to write a post on 'why I'm skeptical about SIAI', but I guess this thread is a good place to put it. This was written in a bit of a rush - if it sounds like I am dissing you guys, that isn't my intention.)

I think the issue isn't so much 'arrogance' per se - I don't think many in your audience would care about accurate boasts - but rather that your arrogance isn't backed up with any substantial achievement:

You say you're right on the bleeding edge in very hard bits of technical mathematics ("we have 30-40 papers which could be published on decision theory" in one of lukeprog's Q&As, wasn't it?), yet as far as I can see none of you have published anything in any field of science. The problem is (as far as I can tell) you've been making the same boasts about all these advances you are making for years, and they've never been substantiated.

You say you've solved all these important philosophical questions (Newcomb, quantum mechanics, free will, physicalism, etc.), yet your answers are never published, and never particularly impress those who are actual domain experts in these things - indeed, a complaint I've heard commonly is that LessWrong just simply misundersta... (read more)

3lukeprog12y
No, that wasn't it. I said 30-40 papers of research. Most of that is strategic research, like Carl Shulman's papers, not decision theory work. Otherwise, I almost entirely agree with your comments.

I think Eli, as the main representative of SI, should be more careful about how he does things, and resist his natural instinct to declare people stupid (-> especially <- if he's basically right).

Case in point: http://www.sl4.org/archive/0608/15895.html -- that could have been handled with more political skill and more face-saving for the victim. Now you have this guy and at least one "friend" with loads of free time going around putting down anything associated with Eliezer or SI on the Internet. For 5 minutes of extra thinking and not typing, this could have been largely avoided. Eli has to realize that he's in a good position to needlessly hurt his (and our) own causes.

Another case in point was the handling of the Roko affair. There's doing the right thing, but you can do it without being an asshole (also, IMO the "ownership" of LW policies is still an unresolved issue, but at least it's mostly "between friends"). If something like this needs to be done, Eli needs to pass the keyboard to cooler heads.

9Nick_Tarleton12y
Note: happened five years ago
8Multiheaded12y
Certainly anyone building a Serious & Official image for themselves should avoid mentioning any posteriors not of the probability kind in their public things.
3Dr_Manhattan12y
Already noted, and I'm guessing the situation improved. But it's still a symptom of a harmful personality trait.

Why don't SIAI researchers decide to definitively solve some difficult unsolved mathematics, programming, or engineering problem as proof of their abilities?

Yes, it would take time that could otherwise be spent on AI-related philosophy, but it would unambiguously support the competence of SIAI.

9WrongBot12y
You mean, like decision theory? Both Timeless Decision Theory (which Eliezer developed) and Updateless Decision Theory (developed mostly by folks who are now SI Research Associates) are groundbreaking work in the field, and both are currently being written up for publication, I believe.

There are two recurring themes: peer-reviewed technical results, and intellectual firepower.

If you want to show people intellectual firepower and the awesomeness of your conversations, tape the conversations. Just walk around with a recorder going all day, find the interesting bits later, and put them up for people to listen to.

But... you're not selling "we're super bright," you're selling "we're super effective." And for that you need effectiveness. Earnest, bright people wasting their effort is an old story, and with goals as large as yours it's difficult to see the difference between progress and floundering.

I don't know how to address your particular signalling problem. But a question I need answered for myself: I wouldn't be able to tell the difference between the SIAI folks being "reasonably good at math and science" and "actually being really good - the kind of good they'd need to be for me to give them my money."

ARE there straightforward tests you could hypothetically take (or which some of you may have taken) which probably wouldn't actually satisfy academics, but which are perfectly reasonable benchmarks we should expect you to be able to complete to demonstrate your equivalent education?

1abramdemski12y
Why shouldn't the tests satisfy academics? Why not use something like the GRE with subject tests, plus an IQ test and other relevant tests?

Crackpot Index:

10 points for pointing out that you have gone to school, as if this were evidence of sanity.

I'm not sure, but I think this is roughly how "look, I did great on the GRE!" would sound to someone already skeptical. It's the sort of accomplishment that sounds childish to point out outside of a very limited context.

9asr12y
There are two big problems with standardized tests. First, the standard tests are badly calibrated for measuring the high-performing tail of the distribution. Something like 6% of all GRE takers get a perfect score on the math portion. So GREs won't separate good from very good. Second, aptitude for doing GRE-style or IQ-style math problems isn't known to be a close correlate of real ability. Universities are full of people with stellar test scores who don't ever amount to anything. On the other hand, Richard Feynman, who was very smart and very hard working, had a measured IQ of something like 125, which is not all that impressive as a test score.
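To make that ceiling effect concrete, here is a minimal sketch. The 6% figure is the one quoted above; the rough-normality assumption and the IQ-style scale are purely illustrative:

```python
# Toy ceiling-effect calculation: if ~6% of test-takers hit the maximum
# score, everyone above that cutoff becomes indistinguishable.
from scipy.stats import norm

top_fraction = 0.06
ceiling_z = norm.ppf(1 - top_fraction)  # z-score where the score maxes out

print(f"Ceiling at z = {ceiling_z:.2f} "
      f"(~{100 + 15 * ceiling_z:.0f} on an IQ-style scale)")

# These percentiles all receive the same, maximal observed score:
for pct in (0.94, 0.99, 0.999):
    print(f"{pct:.1%} percentile -> true z = {norm.ppf(pct):.2f}, "
          f"observed score: ceiling")
```

Under those assumptions the test says nothing about differences above roughly the 94th percentile, which is exactly the range the "good vs. very good" question lives in.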
1Dr_Manhattan12y
125???! Sh*t, I've got to start working harder. (source?)
2billswift12y
I don't know a source for the number, but in one of his popular books he mentioned that Mensa contacted him and he responded that his IQ wasn't high enough, which means it was less than 130.
2Dr_Manhattan12y
Knowing Feynman, this might well have been a joke at their expense.

According to Feynman, he tested at 125 when he was a schoolboy. (Search for "IQ" in the Gleick biography.)

Gwern says:

There are a couple reasons to not care about this factoid:

  • Feynman was younger than 15 when he took it [....]
  • [I]t was one of the 'ratio' based IQ tests - utterly outdated and incorrect by modern standards.
  • Finally, it's well known that IQ tests are very unreliable in childhood; kids can easily bounce around compared to their stable adult scores.

Steve Hsu says:

I suspect that this test emphasized verbal, as opposed to mathematical, ability. Feynman received the highest score in the country by a large margin on the notoriously difficult Putnam mathematics competition exam, although he joined the MIT team on short notice and did not prepare for the test. [...] It seems quite possible to me that Feynman's cognitive abilities might have been a bit lopsided -- his vocabulary and verbal ability were well above average, but perhaps not as great as his mathematical abilities. I recall looking at excerpts from a notebook Feynman kept as an undergraduate. While the notes covered very advanced topics -- including general relativity and the Dirac equation --

... (read more)
8wedrifid12y
It is a joke at their expense. The question is whether he based it on a true premise.
0Karmakaiser12y
125 is the average IQ of a Ph.D. I'm not sure what the average is for specific domains, so I can't say whether that is incredibly low for a physics Ph.D.
4Raemon12y
Because people aren't rational and it's silly to pretend otherwise?

What are the most egregious examples of SI's arrogance?

Public tantrums, shouting and verbal abuse. Those are status displays that pay off for tribal chieftains and some styles of gang leader. They aren't appropriate for leaders of intellectually oriented charities. Eliezer thinking he can get away with that is the biggest indicator of arrogance that I've noticed thus far.

2Bugmaster12y
To be fair, while I personally do perceive the SIAI as being arrogant, I haven't seen any public tantrums. As far as I can tell, all their public discourse has been quite civil.

To be fair, while I personally do perceive the SIAI as being arrogant, I haven't seen any public tantrums. As far as I can tell, all their public discourse has been quite civil.

The most significant example was the Roko incident. The relevant threads and comments were all censored during the later part of his tantrum. Not a good day in the life of Eliezer's reputation.

9Bugmaster12y
Fair enough; I was unaware of the Roko incident (understandably so, since apparently it was Sovieted from history). I have now looked it up elsewhere, though. Thanks for the info.
0Solvent12y
I tried to look up this Roko incident, and from what I could see, Eliezer just acted crazily towards someone for saying something that Eliezer thought was dangerous. So his deleting everything could be justified without him necessarily being egotistical. But can you elaborate on what happened, please?
2wedrifid12y
Oh, of course. The deletion just explains Bug's unfamiliarity; it isn't an arrogance example itself. Rationalwiki.
4Prismattic12y
I'm sort of pleased to see that I guessed roughly what this episode was about despite having arrived at LessWrong well after it unhappened.+ But if the Rationalwiki description is accurate, I'm now really confused about something new. I was under the impression that LessWrong was fairly big on the Litany of Gendlin. But an AI that could do the things Roko proposed (something to which I assign vanishingly small probability, fortunately) could also retrospectively figure out who was being willfully ignorant or failing to reach rational conclusions for which they had sufficient priors. It's disconcerting, after watching so much criticism of the rest of humanity finding ways to rationalize around the "inevitability" of death, to see transhumanists finding ways to hide their minds from their own "inevitable" conclusions.

+Since most people who would care about this subject at all have probably read Three Worlds Collide, I think this episode should be referred to as The Confessor Vanishes, but my humor may be idiosyncratic even for this crowd.
2JoshuaZ12y
The primary issue with the Roko matter wasn't so much what an AI might actually do but that the relevant memes could cause some degree of stress in neurotic individuals. At the time it occurred there were at least two people in the general SI/LW cluster who were apparently deeply disturbed by the thought. I expect that the sort who would be vulnerable would be the same sort who, if they were religious, would lose sleep over the possibility of going to hell.
5Humbug12y
The original reasons given: ...and further: (emphasis mine)
3Solvent12y
I should have known that Rationalwiki would be the place to look for dirt on Eliezer. Thanks for the link. Wow, that was fascinating reading. I still don't think that we could call it a tantrum of Eliezer's. I mean, I have no doubt he acted like a dick, but he probably at least thought that the Roko guy was being stupid.

I still don't think that we could call it a tantrum of Eliezer's.

Whatever you choose to call it, the act of shouting at people and calling them names is the kind of thing that looks bad to me. I think Eliezer would look better if he didn't shout or call people names.

but he probably at least thought that the Roko guy was being stupid.

Of course he did. Lack of sincerity is not the problem here. The arrogance problem in this case is the belief that the other person is stupid and, more importantly, the belief that if he thinks other people are being stupid then it is right and appropriate for him to launch into an abusive, hysterical tirade.

1Solvent12y
I agree. Eliezer is occasionally a jerk, and it looks like this was one of those times. Also, I have no idea what went on and you do, so any disagreement from me is pretty dubious. Nitpicking: I don't think that's how we should use the word tantrum. "Tantrum" makes it sound like someone criticized Eliezer and he got mad at them. (I suppose that might have happened, though...) I dunno. I just dislike your choice of words. I would have phrased it as "Eliezer should put more effort into not occasionally being an arrogant dick."
6wedrifid12y
The word 'Tantrum' invokes in my mind a picture of either a child or someone with an overwhelmingly high perception of their status responding to things not going their way by acting out emotionally in violation of usual norms of behavior that apply to everyone else. I did not want to make that point. Acting out when things don't go his way is a distinctly different behavior pattern with different connotations with respect to arrogance. I'm going to stick with tantrum because it just seems to be exactly what I'm trying to convey.
0Raw_Power12y
I think he did the right thing there. He did it badly and clumsily, but had I been in his place I'd have had a hard time getting a grip on my emotions, and we know how sensitive and emotional he is. Rational Wiki are great guys. We try to watch our own step, but it's nice to have someone else watching us too, who can understand and sympathize with what we do.

What SIAI could do to help the image problem: Get credible grown-ups on board.

The main team looks to be in their early thirties, and the visiting fellows mostly students in their twenties. With the claims of importance SIAI is making, people go looking for people over forty who are well-established as serious thinkers, AI experts or similarly known-competent folk in a relevant field. There should either be some who are sufficiently sold on the SIAI agenda to be actually on board full-time, or quite a few more in some kind of endorsing partnership role. Currently there's just Ray Kurzweil on the team page, and beyond "Singularity Summit Co-Founder", there's nothing there saying just what his relation to SIAI is, exactly. SIAI doesn't appear to be suitably convincing to have gotten any credible grown-ups as full-time team members.

There are probably good reasons why this isn't useful for what SIAI is actually trying to do, but the demographic of thirty-somethings leading the way and twenty-somethings doing stuff looks way iffier at a glance for "support us in solving the most important philosophical, societal and technological problem humanity has ever faced once and for all!" than it does for "we're doing a revolutionary Web 3.0 SaaS multi mobile OS cloud computing platform!"

To be honest, I've only ever felt SI/EY/LW's "arrogance" once, and I think that LW in general is pretty damn awesome. (I realize I'm equating LW with SI, but I don't really know what SI does)

The one time was while reading through the Free Will page (http://wiki.lesswrong.com/wiki/Free_will), which I've copied here: "One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own."

This smacks strongly of "oh look, there's a classic stumper, and I'm the ONLY ONE who's solved it (naa naa naa). If you want to be a true rationalist/join the tribe, you better solve it on your own, too"

I've also heard others mention that HP from HPMoR is an insufferable little twat, which I assume is the same attitude they would have if they were to read LW.

I've written some of my thoughts up about the arrogance issue here. The short version is that some people have strongly developed identities as "not one of those pretentious people" and have strong immune responses when encountering intelligence. http://moderndescartes.blogspot.com/2011/07/turn-other-cheek.html

9wedrifid12y
Ewww! That's hideous. It seems to be totally subverting the point of the wiki. I actually just went as far as to log in planning to remove the offending passage, until I noticed that Eliezer put it there himself. I'm actually somewhat embarrassed by the page now that you've brought it to our attention. I rather hope we can remove it and replace it with either just a summary of what free will looks like dissolved, or a placeholder with links to the relevant blog posts.
0thomblake12y
The point of that was that dissolving free will is an exercise (a rather easy one once you know what you're doing), and it probably shouldn't be short-circuited.
5wedrifid12y
My point was that I didn't approve of making that point in that manner in that place. I refrained from nuking the page myself but I don't have to like it. I support Brilee's observation that going around and doing that sort of thing is bad PR for Eliezer Yudkowsky, which has non-trivial relevance to SingInst's arrogance problem.
2ScottMessick12y
One issue is that the same writing sends different signals to different people. I remember thinking about free will early in life (my parents thought they'd tease me with the age-old philosophical question) and, a little later in life, thinking that I had basically solved it--that people were simply thinking about it the wrong way. People around me often didn't accept my solution, but I was never convinced that they even understood it (not due to stupidity, but failure to adjust their perspective in the right way), so my confidence remained high. Later I noticed that my solution is a standard kind of "compatibilist" position, which is given equal attention by philosophers alongside many other positions and sub-positions, fiercely yet politely discussed without the slightest suggestion that it is a solution, or even more valid than other positions except as the one a particular author happens to prefer. Later I noticed that my solution was also independently reached and exposited by Eliezer Yudkowsky (on Overcoming Bias before LW was created, if I remember correctly). The solution was clearly presented as such--a solution--and one which is easy to find with the right shift in perspective--that is, an answer to a wrong question. I immediately and significantly updated upward the likelihood of the same author having further useful intellectual contributions, to my taste at least, and found the honesty thoroughly refreshing.
8ArisKatsaris12y
I also think that HJPEV is an insufferable little twat / horrible little jerk, but I love LW and have donated hundreds of dollars to SIAI. And I've strongly recommended HPMOR itself, even when I warn people it has something of a jerk for a protagonist. Why shouldn't I? Is anyone disputing that he's much less nice than e.g. Hermione is, and that he often treats other people with horribly bad manners? If he's not insufferable, who is actually suffering him, other than Hermione (who has also had to punish him by not speaking to him for a week) or Draco (who found him so insufferable on occasion that he locked him up and Gom-Jabbared him...)?
2Bugmaster12y
I always assumed that this character detail was intentional, especially since some other characters call HP out on it explicitly.
1CronoDAS12y
Well, Professor Quirrell seems to have taken quite a liking to him, but I don't think he counts...
0[anonymous]12y
Got a similar reaction. Well, except the donating dollars part. Though I'm not bothered so much by the way that HJPEV interacts with people but rather by his unique-snowflake/superhero/God-wannabe complex.
-2wedrifid12y
And Hermione's tendency to pull this sort of stunt makes her even more insufferable than Harry. While I might choose to tolerate those two as allies and associate with them for the sake of gaining power or saving the world, I'd say Neville is the only actually likable character that Eliezer has managed to include. Writing about characters who are arrogant prats does seem to come naturally to Eliezer for some reason.
2ArisKatsaris12y
To you maybe, but Hermione is well-liked by lots of other characters: SPHEW, her army, and the professors. "Insufferable know-it-all" is what Ron calls her in canon. In HPMOR she actually is nicer, less dogmatic, and has many more friends than in canon. Compare canon SPEW with SPHEW, and how she goes about doing each.
4wedrifid12y
Yes. It is one thing to write about a character who is an arrogant prat and is perceived as an arrogant prat by the other characters. It is far more telling when obnoxious or poorly considered behavior is portrayed within the story as appropriate or wise and so accepted by all the other characters. I'm not a huge fan of either of them, to be honest. Although MoR!Hermione does get points for doing whichever of those two acronyms is the one that involved beating up bullies. Although now I'm having vague memories of her having a tantrum when Harry saved the lives of the girls she put at risk. Yeah, she's a prat. A dangerous prat. Apart from making her controlling and unpleasant to be around, that ego of hers could get people killed! And what makes it worse is that Hermione's idiotic behavior seems to be more implicitly endorsed as appropriate by the author than Harry's idiotic behavior.
-1ArisKatsaris12y
I don't understand you. The rest of the paragraph seems to be arguing that this was irresponsible idiotic behavior on her part; this sentence seems to be saying it's a point in her favor. I think you're significantly misremembering what she said -- she explicitly didn't mind Harry saving them, she minded that he scared the bejeezus out of her. Do you belong in that small minority of HPMOR readers who only read each chapter once? :-)
1wedrifid12y
I approve of fighting bullying. I don't approve of initiating conflict when Harry saves their lives by pulling a Harry. Because his actions in that situation aren't really any of her business. Harry's actions in that scene are in accordance with Harry's Harriness, and he would have done them without her involvement. They aren't about her (making this situation different in nature from the earlier incident of pretending to be a ghost to stop a gossip). Citation needed. Actually for realz, not as the typical 'nerd comeback': I want to know what chapter to start reading to review the incident. Both because that is one of the most awesome things Harry has done and because I do actually recall Hermione engaging in behavior in the aftermath of the incident that makes me think less of her. Most significantly, she makes Harry give an oath that makes me think less of Harry (and MoR) for submitting to, because he made a promise the adherence to which could make him lose the fight for the universe! I've actually had a discussion with Eliezer on the subject and was somewhat relieved when he admitted that he wrote in the necessary clauses but omitted them only for stylistic reasons.
3pengvado12y
Chapter 75:

What are the most egregious examples of SI's arrogance?

Well, you do tend to talk about "saving the world" a lot. That makes it sound like you, Eliezer Yudkowsky, plus a few other people are the new Justice League. That sounds at least a little arrogant...

If it helps at all, another data point (not quite answers to your questions):

  • I'm a complete SI outsider. My exposure to it is entirely indirect, through Less Wrong, which from time to time seems to function as a PR/fundraising/visibility tool for SI.
  • I have no particular opinion about SI's arrogance or non-arrogance as an organization, or EY's arrogance or non-arrogance as an individual. They certainly don't demonstrate humility, nor do they claim to, but there's a wide middle ground between the two.
  • I doubt I would be noticeably more likely to donate money, or to encourage others to donate money, if SI convinced me that it was now 50% less arrogant than it was in 2011.
  • One thing that significantly lowers my likelihood of donating to SI is my estimate that the expected value of SI's work is negligible, and that the increase/decrease in that EV based on my donations is even more so. It's not clear what SI can really do to increase my EV-of-donating, though.
  • Similar to the comment you quote, someone's boasts:accomplishments ratio is directly proportional to my estimate that they are crackpots. OTOH, I find it likely that without the boasting and related monkey dyna
... (read more)
4Vaniver12y
I am not impressed by those sorts of ploys.
1wedrifid12y
I cannot think of one example of a claim along those lines.
1XiXiDu12y
The closest I can think of right now is the following quote from Eliezer's January 2010 video Q&A: ETA: Skimming over the CEV document, I see some hints that could explain where the idea comes from that Eliezer believes he has the wisdom to transform the world:
8wedrifid12y
You quoted the context of my statement but edited out the part my reply was based on. Don't do that. The very quote of Eliezer that you supply in the parent demonstrates that Eliezer presents himself as actually trying to do those "impossible" transformations, not refraining from doing them for moral reasons. That part just comes totally out of left field, and since it is presented as a conjunction, the whole thing just ends up false.
6TheOtherDave12y
Thanks for clarifying what part of my statement you were objecting to. Mostly what I was thinking of on that side was the idea that actually building a powerful AI, or even taking tangible steps that make the problem of building a powerful AI easier, would result in the destruction of the world (or, at best, the creation of various "failed utopias"), and therefore the moral thing to do (which most AI researchers, to say nothing of lesser mortals, aren't wise enough to realize is absolutely critical) is to hold off on that stuff and instead work on moral philosophy and decision theory. I recall a long wave of exchanges of the form "Show us some code!" "You know, I could show you code... it's not that hard a problem, really, for one with the proper level of vampiric aura, once the one understands the powerful simplicity of the Bayes-structure of the entire universe and finds something to protect important enough to motivate the one to shut up and do the impossible. But it would be immoral for me to write AI code right now, because we haven't made enough progress in philosophy and decision theory to do it safely." But looking at your clarification, I will admit I got sloppy in my formulation, given that that's only one example (albeit a pervasive one). What I should have said was "throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways, one obvious tangible expression of which (that is, actual AI design) he holds back from creating only because he possesses the unusual wisdom to realize that doing so is immoral."
1wedrifid12y
I'd actually be very surprised if Eliezer had ever said that - since it is plainly wrong, and as far as I know Eliezer isn't quite that insane. I can imagine him saying that it is (probably) an order of magnitude easier than making the coded AI friendly, but that is still just placing it lower on a scale of 'impossible'. Eliezer says many things that qualify for the label arrogant, but I doubt this is one of them. If Eliezer thought AI wasn't a hard problem, he wouldn't be comfortable dismissing (particular instances of) AI researchers who don't care about friendliness as "Mostly Harmless"!
2TheOtherDave12y
What I wrote was "it's not that hard a problem, really, for one with (list of qualifications most people don't have)," which is importantly different from what you quote. Incidentally, I didn't claim it was arrogant. I claimed it was a boast, and I brought boasts up in the context of judging whether someone is a crackpot. I explicitly said, and I repeat here, that I don't really have an opinion about EY's supposed arrogance. Neither do I think it especially important.
3wedrifid12y
I extend my denial to the full list. I do not believe Eliezer has made the claim that you allege he has made, even with the list of qualifications. It would be a plainly wrong claim and I believe you have made a mistake in your recollection. The flip side is that if Eliezer has actually claimed that it isn't a hard problem (with the list of qualifications) then I assert that said claim significantly undermines Eliezer's credibility in my eyes.
0TheOtherDave12y
OK, cool. Do you also still maintain that if he thought it wasn't a hard problem for people with the right qualifications, he wouldn't be comfortable dismissing particular instances of AI researchers as mostly harmless?
4wedrifid12y
Yes. And again if Eliezer did consider the problem easy with qualifications but still dismissed the aforementioned folks as mostly harmless it would constitute dramatically enhanced boastful arrogance!
6TheOtherDave12y
OK, that's clear. I don't know if I'll bother to do the research to confirm one way or the other, but in either case your confidence that I'm misremembering has reduced my confidence in my recollection.
5XiXiDu12y
My apologies, it wasn't my intention to do that. Careless oversight.
1Craig_Heldreth12y
Yeah, I remember that, and it was certainly a megalomaniacal slip. But I do not agree that arrogant is the correct term. I suspect "arrogant" may be a brief and inaccurate substitute for "unappealing, but I cannot be bothered to come up with anything specific". In my dictionaries (I checked Merriam-Webster and American Heritage), arrogant is necessarily overbearing. If you are clicking on their website or reading their literature or attending their public function, there isn't any easy way for them to overbear upon you. When Terrell Owens does a touchdown dance in the end zone and the cameras are on him for fifteen seconds until the next play, your attention is under his thumb and he is being arrogant. Eliezer's little slip of on-webcam megalomania is not arrogant. It would be arrogant if he were running for public office and he said that in a debate and the voters felt they had to watch it, but not when the viewer has surfed to that information and getting away is free of any cost and as easy as a click. Almost all of us do megalomaniacal stuff all the time when nobody is looking, and almost all of us expend some deliberate effort trying not to do it when people are looking.
1TheOtherDave12y
OK; I stand corrected about the controversiality.

There are two obvious options:

The first, boring option is to make fewer bold claims. I personally would not prefer that you take this tack. It would be akin to shooting yourselves in the foot. If all of your claims vis-a-vis saving the world are couched in extremely humble signaling packages, no one will want to ever give you any money.

The second, much better option is to start doing amazing, high-visibility things worthy of that arrogance. Muflax points out that you don't have a Tim Ferriss. Tim Ferriss is an interesting case specifically because he is a huge self-promoter who people actually like despite the fact that he makes his living largely by boasting entertainingly. The reason Tim Ferriss can do this is because he delivers. He has accomplished the things he is making claims about - or at least he convinces you that he is qualified to talk about it.

I really want a Rationality Tim Ferriss who I can use as a model for my own development. You could nominate yourself or Eliezer for this role, but if you did so, you would have to sell that role.

I like the second option better, too.

I'm certainly going to try to be a Rationality Tim Ferriss, but I have a ways to go.

Eliezer is still hampered by the cognitive exhaustion problem that he described way back in 2000. He's tried dozens of things and still tries new diets, sleeping patterns, etc., but we haven't kicked it yet. That said, he's pretty damn productive each day before cognitive exhaustion sets in.

9Caspian12y
I had the impression of Tim Ferriss as being no more trustworthy than anyone else who is trying to sell you something. I would expect him to exaggerate how easy something is, exaggerate how likely something is to help, etc. Now, not having read his stuff, that's secondhand and not well informed, but you are asking about how you come across, so it's relevant. The doing-amazing-things part is great if you can manage it.
3NihilCredo12y
I have read about half of his book and skimmed the rest, and I pretty much share that impression. To put it succinctly, that man works a 4-hour workweek only if you adopt a very restrictive definition of what counts as "work".
9WrongBot12y
For what it's worth, that sounds virtually identical to a problem psychologists have told me is ADHD. (I also had a catastrophic school attendance failure in seventh grade, funnily enough.) Adderall has unpleasant side-effects but actually allows me to sit down and work for eight or ten consecutive hours, whenever I want to. Not perfectly, but the effect is remarkable.
1CronoDAS12y
I think prescription antidepressants also tend to have a similar energy-boosting effect.
1thomblake12y
I've observed the same problem and solution as well.
7jswan12y
Please no. Here's an example. When you say stuff like: "As an autodidact who now consumes whole fields of knowledge in mere weeks, I've developed efficient habits that allow me to research topics quickly." http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/ You sound like Tim Ferriss and you make me want to ignore you in the same way I ignore him. I don't want to do this because you seem like a good person with a genuine ability to help others. Don't lose that.
3wedrifid12y
It sounds like you place high importance on public image; in particular, on maintaining a public image that is self-effacing or humble. I wonder if, overall, it is more effective for luke to convey confidence and be up front about his achievements and capabilities, and so gain influence with a wide range of people, or if it is best to optimize his image for that group of people who place high importance on humble decorum. Tim Ferriss is a good person (as far as people go) and he has been able to positively influence far more people by mastering self-promotion than he ever would have if he had restrained himself. Is this about "being a good person and helping others" or about keeping your approval? The two seem to be conflated here. Fortunately for you, when luke says "try to be a Rationality Tim Ferriss" he does not mean anything at all along the lines of "talk like Tim Ferriss". He is talking about being as productive, efficient and resourceful as Tim Ferriss. He's talking about Tim's strong capability for instrumental rationality, not his even stronger capability for self-promotion. (Incidentally, I don't think Tim would make the kind of boast that Luke made there, simply because it is an awkward and poorly implemented boast. Tim boasts by giving a specific example of the awesome thing he has done rather than just making abstract assertions. At least give Tim the credit of knowing how to implement arrogance and boasting somewhat effectively!)
0jswan12y
Yeah, I think you pretty much called it. It doesn't really work for me, but I guess that if such a communication style is the most effective way to go, drive on.
2Solvent12y
That was fascinating to read. Eliezer certainly has toned down the arrogance a bit recently. I look forward to watching this.
2Kaj_Sotala12y
Wow, that link is really interesting. Especially this bit: I don't know if that hypothesis is true, but if it is, I probably have a mild version of it. It would explain a lot about my akrasia issues.
0NancyLebovitz12y
Has he tried anything related to breaking movement/tension habits?

I unfortunately don't have much to offer that can actually be helpful. I (and I feel like this probably applies to many LWers) am not at all turned off by arrogance, and actually find it somewhat refreshing. But this reminds me of something that a friend of mine said after I got her to read HPMOR:

"after finishing chapter 5 of hpmor I have deduced that harry is a complete smarmy shit that I want to punch in the face. no kid is that disrespectful. also he reminds me of a young voldemort....please don't tell me he actually tries taking over the world/embezzling funds/whatever"

ETA: she goes on in another comment (On Facebook), after I told her to give it to chapter 10, like EY suggests, "yeah I'm at chapter 17 and still don't really like harry (he seems a bit too much of a projection of the author perhaps? or the fact that he siriusly thinks he's the greatest thing evarrr/is a timelord) but I'm still reading for some reason?"

Seems to be the same general sentiment, to me. Not specifically about SI, but of course tangentially related. For what it's worth, I disagree. Harry's awesome. ;-)

(a) My experience with the sociology of academia has been very much in line with what Lukeprog's friend, Shminux and RolfAndreassen describe. This is the culture that I was coming from in writing my post titled Existential Risk and Public Relations. Retrospectively I realize that the modesty norm is unusually strong in academia and to that extent I was off-base in my criticism.

The modesty norms have some advantages and disadvantages. I think that it's appropriate for even the best people to take the view "I'm part of a vast undertaking; if I hadn't gotte... (read more)

9XiXiDu12y
I agree with this. I probably would never have voiced any skepticism/criticism if most SI/LW folks were more like Holden Karnofsky, Carl Shulman, Nick Bostrom or cousin_it.

I'm pretty sure most everyone here already knows this, but the perception of arrogance is basically a signalling/counter-signalling problem. If you boast (produce expensive signals of your own fitness), that tells people you are not so poor that you have nothing to boast about. But it can also signal that you have a need to brag to be noticed, which in turn can be interpreted to mean you aren't truly the best of the best. The basic question is context.

Is there a serious danger your potential contributions will be missed? If so, it is wisest to boast. Is there ... (read more)
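A toy Bayesian version of that context point, with entirely made-up numbers (nothing here is taken from the thread; it only shows how the same boast can raise or lower an audience's estimate depending on what else they can see):

```python
# Hypothetical counter-signalling sketch: three quality levels, a boast
# signal, and an independent piece of evidence ("impressive work").
# All priors and likelihoods below are invented for illustration.
types = {
    # name: (prior, quality, P(boast | type), P(impressive work | type))
    "weak":   (0.5, 1, 0.2, 0.05),
    "medium": (0.3, 2, 0.8, 0.50),
    "strong": (0.2, 3, 0.3, 0.95),
}

def expected_quality(boasts, saw_impressive_work=None):
    """Posterior mean quality given whether the person boasts,
    optionally also conditioning on seeing impressive work."""
    weighted_sum, total = 0.0, 0.0
    for prior, quality, p_boast, p_work in types.values():
        p = prior * (p_boast if boasts else 1 - p_boast)
        if saw_impressive_work is not None:
            p *= p_work if saw_impressive_work else 1 - p_work
        weighted_sum += p * quality
        total += p
    return weighted_sum / total

# With no other information, boasting raises the audience's estimate:
print(expected_quality(True), expected_quality(False))              # ~1.90 vs ~1.57
# Among people whose work already looks impressive, staying quiet wins:
print(expected_quality(True, True), expected_quality(False, True))  # ~2.29 vs ~2.62
```

Under these invented numbers the boast helps when the audience knows nothing else and hurts once they already have strong independent evidence, which is the counter-signalling pattern the comment describes.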

I intended [...]

But some people seem to have read it and heard this instead [...]

When I write posts, I'm often tempted to use examples from my own life, but then I think:

  1. Do I really just intend to use myself to illustrate some point of rationality, or do I subconsciously also want to raise my social status by pointing out my accomplishments?
  2. Regardless of what I "really intend", others will probably see those examples as boasting, and there's no excuse (e.g., I couldn't find any better examples) I can make to prevent that.

This usually stop... (read more)

5Bongo12y
You could just tell the story with "me" replaced by "my friend" or "someone I know" or "Bob". I'd hate to miss a W_D post because of a trivial thing like this.

So, I have a few questions:

  1. What are the most egregious examples of SI's arrogance?

Since you explicitly ask a question phrased thus, I feel obligated to mention that last April I witnessed a certain email incident that I thought was, in some ways, extremely bad.

I do believe that lessons have been learned since then, though. Probably there's no need to bring the matter up again, and I only mention it since according to my ethics it's the required thing to do when asked such an explicit question as above.

(Some readers may wonder why I'm not provi... (read more)

6Aleksei_Riikonen12y
Curse me for presenting myself as someone having interesting secret knowledge. Now I get several PMs asking for details. In short, this "incident" was about one or two SIAI folks making a couple of obvious errors of judgment and, in the case of the error that sparked the whole thing, getting heatedly defensive about it for a moment. Other SIAI folks, however, recognized the obvious mistakes as such, so the issue was resolved, even though unprofessional conduct was observed for a moment. The actual mistakes were rather minor, nothing dramatic. The surprising thing was that heated defensiveness took place on the way to those mistakes getting corrected. (And since Eliezer is the SIAI guy most often accused of arrogance, I'll additionally state that here that was not the case: Eliezer was very professional in the email exchange in question.)

A lot of people are suggesting something like "SIAI should publish more papers", but I'm not sure anyone (including those who are making the suggestion) would actually change their behavior based on that. It sounds an awful lot like "SIAI should hire a PhD".

I've been a donor for a long time, but every now and then I've wondered whether I should be - and the fact that they don't publish more has been one of the main reasons why I've felt those doubts.

I do expect the paper thing to actually be the true rejection of a lot of people. I mean, demanding some outputs is one of the most basic expectations you could have.

I consider "donating to SIAI" to be on the same level as "donating to webcomics" - I pay Eliezer for the entertainment value of his writing, in the same spirit as when I bought G.E.B. and thereby paid Douglas Hofstadter for the entertainment value of his writing.

5antigonus12y
Of course it depends on the specific papers and the nature of the publications. "Publish more papers" seems like shorthand for "Demonstrate that you are capable of rigorously defending your novel/controversial ideas well enough that very many experts outside of the transhumanism movement will take them seriously." It seems to me that doing this would change a lot of people's behavior.
0[anonymous]12y
How would someone convince you that it was their true rejection?
0faul_sname12y
Donate to groups that actually demonstrate results.
2[anonymous]12y
Like who? I don't know any other non-profit working on FAI.
0faul_sname12y
If you limit your choice of charity to one working on FAI, I am not aware of any others. However, for a group that has demonstrated results in their domain: Schistosomiasis Control Initiative.
2[anonymous]12y
I don't see why donating to SCI would convince people with thomblake's skepticism.
0faul_sname12y
It would convince them that at least some people donate to organizations with visible outputs (like SCI). (Disclaimer: the lack of publications actually is not my true rejection of donating to SIAI, which has more to do with the lack of evidence that SIAI's cause is not only important, but urgent.)
0[anonymous]12y
Many people already do that through GiveWell, and yet he appears unconvinced.
0TheOtherDave12y
Agreed. Then again, the OP didn't actually pose the question "What would change your behavior?" (Which I assume translates to "What would cause you to donate more to SI and encourage others to do so?")

People tell me SI is arrogant, but I don't see it myself. When you tell someone something and open it up to falsification and criticism, I no longer see it as arrogance (but apparently I am wrong there for some reason).

In any case, what annoys me about the claims made is that they're mostly based on anecdotal evidence and very little has come from research. Also, as a regular guy and not a scientist or engineer, I've noticed a distinct lack of any discussion of SI's viewpoints in the news.

I don't see anyone actively trying to falsify any of the claims in the sequences for ex... (read more)

there are many typo's

Murphy's law: a sentence criticising typos will contain a typo itself.

3tetsuo5512y
Thanks. Google Docs is not flagging any typos; could you point some out for me?
7arundelo12y
Apostrophes are not used to form plurals. (Some style guides give some exceptions, but this is not one of them.) The plural of "typo" is "typos". "Typo's" is a word, but it's the possessive form of "typo" (so it's not the word you want here). (Ninja edit: better link.)
0tetsuo5512y
Thanks, that helped. Too bad the spellchecker missed it.
0prase12y
In what circumstances do we use 's to form a plural? The link doesn't appear to suggest any.
5arundelo12y
Rule 11: If you were looking at the link I posted before editing my comment, search for "tired" and "DO use the apostrophe to form the plural". My 1992 Little, Brown Handbook says:
1wedrifid12y
Correct or not, the style guide is lame. A clearly superior way to prevent the ambiguity (which has an unfortunate default reading) is to use single quotes on both sides of the 'i'. So 'i's, not i's.
0prase12y
I'd missed that, thanks.

Find someplace I call myself a mathematical genius, anywhere.

(I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern. I don't know what this alarm feels like, so it's hard to guess what sets it off.)

I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.

Some quotes by you that might highlight why some people think you/SI is arrogant:

I tried - once - going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren't visibly incompetent, they had their various research interests and I'm sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?" (Competent Elites)

More:

I don't mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each o

... (read more)
4lukeprog12y
I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

I am the wrong person to ask whether "a doctorate in AI would be negatively useful". I guess it is technically useful. And I am pretty sure that it is wrong to say that others are "not remotely close to the rationality standards of Less Wrong". That's of course the case for most humans, but I think that there are quite a few people out there who are at least at the same level. I further think that it is quite funny to criticize the people on whose work your arguments for risks from AI depend.

But that's beside the point. Whatever their truth, those statements are clearly the wrong thing to say when it comes to public relations.

If you want to win in this world, as a human being, you either have to be smart enough to overpower everyone else, or you actually have to get involved in a fair amount of social engineering and signaling games, and you need to refine your public relations.

Are you able to solve friendly AI, without much more money, without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then you will at some point eit... (read more)

I mostly agree with the first 3/4 of your post. However...

Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left Less Wrong and who portray LW/SI negatively. And the number of those people is growing. Many won't even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism; others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.

You can't make everyone happy. Whatever policy a website has, some people will leave. I have run away from a few websites that have a "no censorship, except in extreme cases" policy, because the typical consequence of such a policy is some users attacking other users (weighing the attack carefully to avoid moderator action) and some users producing huge amounts of noise. And that just wastes my time.

People leaving LW should be considered on a case-by-case basis. They are not ... (read more)

7Rain12y
I wish I could decompile my statements of "they need to do a much better job at marketing" into paragraphs like this. Thanks.
5wedrifid12y
Practice makes perfect!
6FeepingCreature12y
I hope you understand that this is not an argument against LW's policy in this matter.
5wedrifid12y
Counterprediction: The optimal degree of implementation of that policy for the purpose of PR maximisation is somewhat higher than it currently is. You don't secure an ideal public image by being gentle.
7XiXiDu12y
Don't start a war if you don't expect to be able to win it. It is much easier to damage a reputation than to build one, especially if you support a cause that can easily trigger the absurdity heuristic in third parties. Being rude to people who don't get it will just cause them to reinforce their opinion and tell everyone that you are wrong instead. Which will work, because your arguments are complex and in support of something that sounds a lot like science fiction. A better route is to just ignore them, if you are not willing to talk the matter over or to explain exactly how they are wrong. And if you consider both routes to be undesirable, then do it like FHI and don't host a public forum.
4wedrifid12y
Being gratuitously rude to people isn't the point. 'Maintaining a garden' for the purpose of optimal PR involves far more targeted and ruthless intervention. "Weeds" (those who are likely to try to sabotage your reputation, otherwise interfere with your goals, or significantly provoke 'rudeness' from others) are removed early, before they have a chance to take root.
3[anonymous]12y
I've had these thoughts for a while, but I undoubtedly would have done much worse in writing them down than you have. Well done.
2TrE12y
Related: http://www.overcomingbias.com/2012/01/dear-young-eccentric.html Don't appear like a rebel; be a rebel. Don't signal rebel-ness; instead, be part of the system and infiltrate it with your ideas. If those ideas are decent, this has a good chance of working.
0lessdazed12y
The problem is will?
0[anonymous]12y
Organizations are made of people. People in highly technical or scientific lines of work are likely to pay less attention to social signaling bullshit and more to the actual validity of arguments or quality of insights. By writing the sequences, Eliezer was talking to those people and, by extension, to the organizations that employ them. A somewhat funny example: there's an alternative keyboard layout called Colemak that was developed about 5 years ago by people from the Internet and later promoted by enthusiasts online. Absolutely no institutional muscle to back it up. Yet it somehow ended up included in the latest version of Mac OS X. Does that mean that Apple started caring about Colemak? I don't think the execs had a meeting about it. Maybe the question of whether an organization "cares" about something isn't that well defined.
6asr12y
I am skeptical of this claim and would like evidence. My experience is that scientists are just as tribal, status-conscious and signalling-driven as anybody else. (I am a graduate student in the sciences at a major research university.)

The first three statements can be boiled down to saying, "I, Eliezer, am much better at understanding and developing AI than the overwhelming majority of professional AI researchers".

Is that statement true, or false? Is Eliezer (or, if you prefer, the average SIAI member) better at AI than everyone else (plus or minus epsilon) who is working in the field of AI?

The prior probability for such a claim is quite low, especially since the field is quite large, and includes companies such as Google and IBM who have accomplished great things. In order to sway my belief in favor of Eliezer, I'll need to witness some great things that he has accomplished; and these great things should be significantly greater than those accomplished by the mainstream AI researchers. The same sentiment applies to SIAI as a whole.

6erratio12y
To repeat something I said in the other thread, truth values have nothing to do with tone. It's the same issue some people downthread have with Tim Ferriss - no one denies that he seems very effective, but he communicates in a way that gives many people an unpleasant vibe. Same goes if you communicate in a way that pattern-matches to 'arrogant'.
6lukeprog12y
Of course. That's why I said I can "smell the arrogance," and then went on to ask a different question about whether XiXiDu thought the claims were false.

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

When I read that, I interpreted it to mean something like "Yes, he does come across as arrogant, but it's okay because everything he's saying is actually true." It didn't come across to me like a separate question - it read to me like a rhetorical question which was used to make a point. Maybe that's not how you intended it?

I think erratio is saying that it's important to communicate in a way that doesn't turn people off, regardless of whether what you're saying is true or not.

But I don't get it. You asked for examples and XiXiDu gave some. You can judge whether they were good or bad examples of arrogance. Asking whether the examples qualify under another, different criterion seems a bit defensive.

Also, several of the examples were of the form "I was tempted to say X" or "I thought Y to myself", so where does truth or falsity come into it?

Okay, let me try again...

XiXiDu, those are good examples of why people think SI is arrogant. Out of curiosity, do you think the statements you quote are actually false?

4[anonymous]12y
.
2[anonymous]12y
I hadn't seen that before. Was it written before the sequences? I ask because it all seemed trivial to my sequenced self and it seemed like it was not supposed to be trivial. I must say that writing the sequences is starting to look like it was a very good idea.
2katydee12y
I believe so; I also believe that post is now considered obsolete.
2amcknight12y
FWIW, I'm not sure why you added the 2nd quote, and the 3rd is out of context. Also, remember that we're talking about 700+ blog posts and other articles. Just be careful you're not cherry-picking.
[-][anonymous]12y210

This isn't a useful counterargument when the subject at hand is public relations. Several organizations have been completely pwned by hostile parties cherry-picking quotes.

4[anonymous]12y
The point was "you may be quote mining" which is a useful thing to tell a LWer, even if it doesn't mean a thing to "the masses".
2amcknight12y
Good point.
0wedrifid12y
I love this quote. Yes, it's totally arrogant, but I love it just the same. It would be a shame if Eliezer had to lose this attitude. (Even though all things considered it may be better if he did.)
1lessdazed12y

Interestingly, the first sentence of this comment set off my arrogance sensors (whether justified or not). I don't think it's the content of your statement, but rather the way you said it.

I believe that. My first-pass filter for theories of why some people think SIAI is "arrogant" is whether the theory also explains, in equal quantity, why those same people find Harry James Potter-Evans-Verres to be an unbearably snotty little kid or whatever. If the theory is specialized to SIAI and doesn't explain the large quantities of similar-sounding vitriol gotten by a character in a fanfiction in a widely different situation who happens to be written by the same author, then in all honesty I write it off pretty quickly. I wouldn't mind understanding this better, but I'm looking for the detailed mechanics of the instinctive sub-second ick reaction experienced by a certain fraction of the population, not the verbal reasons they reach for afterward when they have to come up with a serious-sounding justification. I don't believe it, frankly, any more than I believe that someone actually hates hates hates Methods because "Professor McGonagall is acting out of character".

I once read a book on characterization. I forget the exact quote, but it went something like, "If you want to make your villain more believable, make him more intelligent."

I thought my brain had misfired. But apparently, for the average reader it works.

8thomblake12y
I acquired my aversion to modesty before reading your stuff, and I seem to identify that "thing", whatever it is shared by you and Harry, as "awesome" rather than "arrogant". You're acting too big for your britches. You can't save the world; you're not Superman. Harry can't invent new spells; he's just a student. The proper response to that sort of criticism is to ignore it and (save the world / invent new spells) anyway. I don't think there really is a way to make it go away without actually diminishing your ability to do awesome stuff.
2Matt_Simpson12y
FWIW I don't ever recall having this reaction to Harry, though my memory is pretty bad and I think I'm easily manipulated by stories. It may have something to do with being terse and blunt - this often makes the speaker seem as though they think they're "better" than their interlocutors. I had a Polish professor for one of my calculus classes in undergrad who, being a Pole speaking English, naturally sounded very blunt to our American ears. There were several students in that class who just thought he was an arrogant asshole who talked down to his students. I'm mostly speculating here, though.
2[anonymous]12y
Self-reference and any more than a moderate degree of certainty about anything that isn't considered normal by whoever happens to be listening are both (at least, in my experience) considered less than discreet. Trying to demonstrate that one isn't arrogant probably qualifies as arrogance, too. I don't know how useful this observation is, but I thought it was at least worth posting.

"Here is a threat to the existence of humanity which you've likely never even considered. It's probably the most important issue our species has ever faced. We're still working on really defining the ins and outs of the problem, but we figure we're the best people to solve it, so give us some money."

Unless you're a fictional character portrayed by Will Smith, I don't think there's enough social status in the world to cover that.

If trying to save the world requires having more social status than humanly obtainable, then the world is lost, even if it was easy to save...

The question is one of credibility rather than capability. In the private, public, academic and voluntary sectors it's a fairly standard assumption that if you want people to give you resources, you have to do a little dance to earn them. Yes, it's wasteful and stupid and inefficient, but it's generally easier to do the little dance than to convince people that the little dance is a stupid system. They know that already.

It's not arrogant to say "my time is too precious to do a little dance", and it may even be true. The arrogance would be to expect people to give you those resources without the little dance. I doubt the folk at SIAI expect this to happen, but I do suspect they're probably quite tired of being asked to dance.

5NihilCredo12y
The little dance is not wasteful and stupid and inefficient. For each individual with the ability to provide resources (be they money, manpower, or exposure), there are a thousand projects who would love to be the beneficiaries of said resources. Challenging the applicants to produce some standardised signals of competence is a vastly more efficient approach than expecting the benefactors to be able to thoroughly analyse each and every applicant's exoteric efforts.
5sixes_and_sevens12y
I agree that methods of signalling competence are, in principle, a fine mechanism for allowing those with resources to responsibly distribute them between projects. In practice, I've seen far too many tall, attractive, well-spoken men from affluent backgrounds go up to other tall, attractive, well-spoken men from affluent backgrounds and get them to allocate ridiculous quantities of money and man-hours to projects on the basis of presentations which may as well be written in crayon for all the salient information they contain. The amount this happens varies from place to place, and in the areas where I see it most there does seem to be an improving trend of competence signalling actually correlating with whatever it is the party in question needs to be competent at, but there is still way too much scope for such signalling being as applicable to the work in question as actually getting up in front of potential benefactors and doing a little dance.
-4Epiphany12y
Unless people wake up to the fact that people are requiring an appeal to authority as a prerequisite for important decisions, AND gain the ability to determine for themselves whether something is a good cause. I think the reason people rely on appeals to popularity, authority and the "respect" that comes with status is that they do not feel competent to judge for themselves.
9amcknight12y
This isn't fair. Use a real quote.
6sixes_and_sevens12y
Uh...no. It's in quotation marks because it's expressed as dialogue for stylistic purposes, not because I'm attributing it as a direct statement made by another person. That may make it a weaker statement than if I'd used a direct quote, but it doesn't make it invalid.
4amcknight12y
Arrogance is probably to be found in the way things are said rather than the content. By not using a real example, you've invented the tone of the argument.
6sixes_and_sevens12y
It's not supposed to be an example of arrogance, through tone or otherwise. It's a broad paraphrasing of the purpose and intent of SIAI to illustrate the scope, difficulty and nebulousness of same.
0amcknight12y
OK, sure. But now I'm confused about why you said it. Aren't we specifically talking about arrogance?
2sixes_and_sevens12y
EY made a (quite reasonable) observation that the perceived arrogance of SIAI may be a result of trying to tackle a problem disproportionately large for the organisation's social status. My point was that the problem (FAI) is so large that no one can realistically claim to have enough social status to tackle it.
3Vaniver12y
Typically, when I paraphrase I use apostrophes rather than quotation marks to avoid that confusion. I don't know if that's standard practice or not.
8sixes_and_sevens12y
It's my understanding that there's no formal semantic distinction between single and double quotes as punctuation, and that their usage is a typographic style choice. Your distinction does make sense in a couple of different ways, though. The one that immediately leaps to mind is the distinction between literal and interpolated strings in Perl, et al., though that's a bit of a niche association. Also, single quotes are more commonly used for denoting dialogue, but that has more to do with historical practicalities of the publishing and printing industries than any kind of standard practice. The English language itself doesn't really seem to know what it's doing when it puts something in quotes, hence the dispute over whether trailing commas and full stops belong inside or outside quotations. One convention makes sense if you're marking up the text itself, while the other makes sense if you're marking up what the text is describing. I think I may adopt this usage.
8NihilCredo12y
- NihilCredo

I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.

My thinking when I read this post went something along these lines, but where you put "made up because" I put "actually consists of". That is, acting in a way that (the observer perceives) is beyond your station is a damn good first approximation to a practical definition of 'arrogance'. I would go as far as to say that if you weren't being arrogant you wouldn't be able to do your job. Please keep on being arrogant!

The above said, there are other behaviors that will provoke the label 'arrogant' which are not beneficial. For example:

  • Acting like one is too good to have to update based on what other people say. You've commented before that high status can make you stupid. Being arrogant - acting in an exaggerated high-status manner - certainly enhances this phenomenon. As far as high-status people go you aren't too bad along the "too arrogant to be able to comprehend what
... (read more)
7jeremysalwen12y
Here: http://lesswrong.com/lw/ua/the_level_above_mine/ I was going to go through quote by quote, but I realized I would be quoting the entire thing. Basically:

A) You imply that you have enough brainpower to consider yourself to be approaching Jaynes's level (the "approaching" is alluded to in several instances).
B) You were surprised to discover you were not the smartest person Marcello knew (or, if you consider "surprised" too strong a word, compare your reaction to that of the merely very smart people I know, who would certainly not respond with "Darn").
C) Upon hearing someone was smarter than you, the first thing you thought of was how to demonstrate that you were smarter than them.
D) You say that not being a genius like Jaynes and Conway is a "possibility" you must "confess" to.
E) You frame in equally probable terms the possibility that the only thing separating you from genius is that you didn't study quite enough math as a kid.

So basically, yes, you don't explicitly say "I am a mathematical genius", but you certainly position yourself as hanging out on the fringes of this "genius" concept. Maybe I'll say "Schrödinger's Genius". Please ignore that this is my first post and that it seems hostile. I am a moderate-time lurker, and this is the first time I felt I had relevant information that was not already mentioned.

Hi Luke,

I think you are correct that SI has an image problem, and I agree that it's at least partially due to academic norm violations (and partially due to the personalities involved). And partially due to the fact that out of possible social organizations, SI most readily maps to a kind of secular cult, where a charismatic leader extracts a living from his followers.

If above is seen as a problem in need of correcting then some possibilities for change include:

(a) Adopting mainstream academic norms strategically. (b) Competing in the "mainstream marketplace of ideas" by writing research grant proposals.

There's the signalling problem that boasting creates in this culture, but should we also be asking whether there are rational reasons to keep boasting as a custom, or to drop it?

Since it's been seven months, I'm curious - how much of this, if any, has been implemented? TDT has been published, but it doesn't get too many hits outside of LessWrong/MIRI, for example.

This is the best example I've seen so far:

I actually intend to fix the universe (or at least throw some padding atop my local region of it, as disclaimed above)

The padding version seems more reasonable next to the original statement, but neither of these is a very realistic goal for a person to accomplish. There is probably no way to present grandiosity like this without it coming across as arrogance or worse.

http://lesswrong.com/lw/uk/beyond_the_reach_of_god/nsh

I still don't get what's actually supposed to be wrong with being arrogant. In all the examples I've found of actual arrogance, it seems a good and sensible reaction when justified; and in the alleged cases of it causing bad outcomes, it is never actually the arrogance itself that does so, but an underlying overconfidence causing both the arrogance and the bad outcome. Is this just a social taboo because arrogance correlates with overconfidence?

4TheOtherDave12y
If I behave arrogantly and as a consequence other people are less willing/able to coordinate effectively with me, would you consider that a bad outcome? If so, do you believe that never happens? Or would you say that in that case the cause of the bad outcome is other people's reactions to my arrogance, rather than the arrogance itself? Or something else?
0Armok_GoB12y
Yeah, the only case of that bad outcome is people's bad reactions to it, and furthermore I can't see why people should react badly to it. It seems like an arbitrary and unfair taboo against a perfectly valid personality trait/emotional reaction.
2TheOtherDave12y
So, just making sure I understand: you acknowledge that it does have these negative consequences at the moment, but you're arguing that it's a mistake to conclude that therefore arrogant people ought to change anything about themselves; the proper conclusion is that arrogance-averse folks should get over it and acknowledge the importance of equal treatment for arrogant people. Yes?
-1Armok_GoB12y
Ideally, yes, but that's obviously not going to happen. If I were to propose a course of action it'd be "Realize the question is more complex than it seems, that the situation is likely to require messy compromise and indirection, and hold off on proposing solutions".

I'd be curious to see your feedback regarding the comments on this post. Do you believe that the answers to your questions were useful? If so, what are you going to do about it (and if not, why not)? If you have already done something, what was it, and how effective did it end up being?

What about getting some tech/science savvy public-relations practitioners involved? Understanding and interacting effectively with the relevant publics might just be a skill worthy of dedicated consideration and more careful management.

[-][anonymous]12y00

Personally, I don't think SI is arrogant, but rather that it should work harder to publish books/papers so that they would be more accepted by the general scientific (and even non-scientific) community. Not that I think they aren't trying already...

Are there subjects and ways in which SI isn't arrogant enough?

Informally, let us suppose perceived arrogance in attempting a task is the perceived competence of the individual divided by the perceived difficulty of the task. SIAI is attempting an impossible task, without infinite competence. Thus, there is no way SIAI can be arrogant enough.

[This comment is no longer endorsed by its author]
3[anonymous]12y
I'm pretty sure you flipped the fraction upside-down here. Shouldn't it be perceived difficulty of the task divided by perceived competence? Gifted high-school student who boldly declares that he will develop a Theory of Everything over the course of summer vacation is arrogant (low competence, high difficulty). Top-notch theoretical physicist who boldly declares that he will solve a problem from a high-school math contest is not. So SIAI is actually infinitely arrogant, according to your assumptions.
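To make the corrected quotient concrete, here is a minimal sketch in Python; the function name and all the numbers are my own assumptions, chosen only to illustrate the comparison above, not taken from anything anyone actually measured:

```python
# Corrected informal definition: arrogance = perceived difficulty / perceived competence.
# All values below are made-up illustration numbers on an arbitrary scale.

def perceived_arrogance(perceived_difficulty, perceived_competence):
    return perceived_difficulty / perceived_competence

# Gifted student vs. a Theory of Everything: high difficulty, low competence.
print(perceived_arrogance(100.0, 2.0))          # 50.0 -- reads as arrogant

# Top physicist vs. a high-school contest problem: low difficulty, high competence.
print(perceived_arrogance(1.0, 100.0))          # 0.01 -- reads as modest

# SIAI vs. a task perceived as impossible: the quotient diverges.
print(perceived_arrogance(float("inf"), 50.0))  # inf
```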
2thomblake12y
I'm pretty sure I did too. But the whole explanation seems much less intuitive to me now, so I'll retract rather than correct it.
4komponisto12y
It seems to me that the "perceived arrogance quotient" used by most people is the following:

(status asserted by speaker as perceived by listener) / (status assigned to speaker by listener)

However, I think this is wrong and unfair, and it should instead be:

(status asserted by speaker as perceived by speaker) / (status assigned to speaker by listener)

That is, before you call someone arrogant, you should have to put in a little work to determine their intention, and what the world looks like from their point of view.
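The distinction can be made explicit with a small sketch; the function names, the status scale, and all the numbers below are assumptions of mine for illustration, not anything specified in the comment:

```python
# Two informal "perceived arrogance quotients", following the distinction above.
# Status values are made-up numbers on an arbitrary 0-10 scale.

def arrogance_as_usually_judged(status_asserted_as_heard, status_granted_by_listener):
    # Common usage: how much status the listener *hears* the speaker claim,
    # relative to the status the listener is willing to grant.
    return status_asserted_as_heard / status_granted_by_listener

def arrogance_as_proposed(status_asserted_as_meant, status_granted_by_listener):
    # Proposed fix: how much status the speaker *thinks* they are claiming,
    # relative to the status the listener is willing to grant.
    return status_asserted_as_meant / status_granted_by_listener

# A speaker who means to claim modest status (4) but is heard as claiming a lot (8),
# judged by a listener who grants them a status of 5:
print(arrogance_as_usually_judged(8, 5))  # 1.6 -- sounds arrogant to the listener
print(arrogance_as_proposed(4, 5))        # 0.8 -- not arrogant once intent is considered
```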