
The Singularity Institute's Arrogance Problem

61 Post author: lukeprog 18 January 2012 10:30PM

I intended Leveling Up in Rationality to communicate this:

Despite worries that extreme rationality isn't that great, I think there's reason to hope that it can be great if some other causal factors are flipped the right way (e.g. mastery over akrasia). Here are some detailed examples I can share because they're from my own life...

But some people seem to have read it and heard this instead:

I'm super-awesome. Don't you wish you were more like me? Yay rationality!

This failure (on my part) fits into a larger pattern of the Singularity Institute seeming too arrogant and (perhaps) being too arrogant. As one friend recently told me:

At least among Caltech undergrads and academic mathematicians, it's taboo to toot your own horn. In these worlds, one's achievements speak for themselves, so whether one is a Fields Medalist or a failure, one gains status purely passively, and must appear not to care about being smart or accomplished. I think because you and Eliezer don't have formal technical training, you don't instinctively grasp this taboo. Thus Eliezer's claim of world-class mathematical ability, in combination with his lack of technical publications, makes it hard for a mathematician to take him seriously, because his social stance doesn't pattern-match to anything good. Eliezer's arrogance, taken as evidence of technical cluelessness, was one of the reasons I didn't donate until I met [someone at SI in person]. So for instance, your boast that at SI discussions "everyone at the table knows and applies an insane amount of all the major sciences" would make any Caltech undergrad roll their eyes; your standard of an "insane amount" seems to be relative to the general population, not relative to actual scientists. And posting a list of powers you've acquired doesn't make anyone any more impressed than they already were, and isn't a high-status move.

So, I have a few questions:

 

  1. What are the most egregious examples of SI's arrogance?
  2. On which subjects and in which ways is SI too arrogant? Are there subjects and ways in which SI isn't arrogant enough?
  3. What should SI do about this?

 

Comments (306)

Comment author: IlyaShpitser 14 March 2012 08:57:29PM 2 points [-]

Hi Luke,

I think you are correct that SI has an image problem, and I agree that it's at least partially due to academic norm violations (and partially due to the personalities involved). And partially due to the fact that out of possible social organizations, SI most readily maps to a kind of secular cult, where a charismatic leader extracts a living from his followers.

If the above is seen as a problem in need of correcting, then some possibilities for change include:

(a) Adopting mainstream academic norms strategically.
(b) Competing in the "mainstream marketplace of ideas" by writing research grant proposals.

Comment author: Wei_Dai 21 February 2012 11:43:58PM 5 points [-]

I intended [...]

But some people seem to have read it and heard this instead [...]

When I write posts, I'd often be tempted to use examples from my own life, but then I'd think:

  1. Do I really just intend to use myself to illustrate some point of rationality, or do I subconsciously also want to raise my social status by pointing out my accomplishments?
  2. Regardless of what I "really intend", others will probably see those examples as boasting, and there's no excuse (e.g., I couldn't find any better examples) I can make to prevent that.

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished. I'm not saying that you should do the same since you have different costs and benefits to consider (or I could well be wrong myself and shouldn't care so much about not being seen as boasting), but the fact that people interpret your posts filled with personal examples/accomplishments as being arrogant shouldn't have come as a surprise.

Another point I haven't seen brought up yet is that social conventions seem to allow organizations to be more boastful than individuals. You'd often see press releases or annual reports talking up an organization's own accomplishments, while an individual doing the same thing would be considered arrogant. So an idea to consider is that when you want to boast of some accomplishment, link it to the Institute and not to an individual.

Comment author: Bongo 22 February 2012 08:15:43PM 4 points [-]

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished.

You could just tell the story with "me" replaced by "my friend" or "someone I know" or "Bob". I'd hate to miss a W_D post because of a trivial thing like this.

Comment author: Thrasymachus 05 February 2012 08:32:12PM 15 points [-]

(I was going to write a post on 'why I'm skeptical about SIAI', but I guess this thread is a good place to put it. This was written in a bit of a rush - if it sounds like I am dissing you guys, that isn't my intention.)

I think the issue isn't so much 'arrogance' per se - I don't think many of your audience would care about accurate boasts - but rather your arrogance isn't backed up with any substantial achievement:

You say you're right on the bleeding edge in very hard bits of technical mathematics ("we have 30-40 papers which could be published on decision theory" in one of lukeprog's Q&As, wasn't it?), yet as far as I can see none of you have published anything in any field of science. The problem is (as far as I can tell) that you've been making the same boasts about these advances for years, and they've never been substantiated.

You say you've solved all these important philosophical questions (Newcomb, Quantum mechanics, Free will, physicalism, etc.), yet your answers are never published, and never particularly impress those who are actual domain experts in these things - indeed, a complaint I've heard commonly is that Lesswrong simply misunderstands the basics. An example: I'm pretty good at philosophy of religion, and the sort of arguments Lesswrong seems to take as slam-dunks for Atheism ("biases!" "Kolmogorov complexity!") just aren't impressive, or even close to the level of discussion seen in academia. This itself is no big deal (ditto the MWI, phil of mind), but it makes for an impression of being intellectual dilettantes spouting off on matters you aren't that competent in. (I'm pretty sure most analytic philosophers roll their eyes at all the 'tabooing' and 'dissolving problems' - they were trying to solve philosophy that way 80 years ago!) Worse, my (admittedly anecdotal) survey suggests a pretty mixed reception from domain-experts in stuff that really matters to your project, like probability theory, decision theory etc.

You also generally talk about how awesome you all are via the powers of rationalism, yet none of you have done anything particularly awesome by standard measures of achievement. Writing a forest of blog posts widely reputed to be pretty good doesn't count. Nor does writing lots of summaries of modern cogsci and stuff.

It is not all bad: there are lots of people who are awesome by conventional metrics and do awesome things who take you guys seriously, and meeting these people has raised my confidence that you guys are doing something interesting. But reflected esteem can only take you so far.

So my feeling is basically 'put up or shut up'. You guys need to build a record of tangible/'real world' achievements, like writing some breakthrough papers on decision theory (or any papers on anything) which are published and taken seriously in mainstream science, a really popular book on 'everyday rationality', going off and using rationality to make zillions from the stock market, or whatever. I gather you folks are trying to do some of these: great! Until then, though, your 'arrogance problem' is simply that you promise lots and do little.

Comment author: lukeprog 01 April 2012 07:58:40AM 1 point [-]

"we have 30-40 papers which could be published on decision theory"

No, that wasn't it. I said 30-40 papers of research. Most of that is strategic research, like Carl Shulman's papers, not decision theory work.

Otherwise, I almost entirely agree with your comments.

Comment author: Bugmaster 21 January 2012 12:35:00AM *  9 points [-]

What are the most egregious examples of SI's arrogance?

Well, you do tend to talk about "saving the world" a lot. That makes it sound like you, Eliezer Yudkowsky, plus a few other people are the new Justice League. That sounds at least a little arrogant...

Comment author: wedrifid 20 January 2012 05:30:30PM 13 points [-]

What are the most egregious examples of SI's arrogance?

Public tantrums, shouting and verbal abuse. Those are status displays that pay off for tribal chieftains and some styles of gang leader. They aren't appropriate for leaders of intellectually oriented charities. Eliezer thinking he can get away with that is the biggest indicator of arrogance that I've noticed thus far.

Comment author: Bugmaster 21 January 2012 01:16:26AM 1 point [-]

To be fair, while I personally do perceive the SIAI as being arrogant, I haven't seen any public tantrums. As far as I can tell, all their public discourse has been quite civil.

Comment author: wedrifid 21 January 2012 02:06:49AM 11 points [-]

To be fair, while I personally do perceive the SIAI as being arrogant, I haven't seen any public tantrums. As far as I can tell, all their public discourse has been quite civil.

The most significant example was the Roko incident. The relevant threads and comments were all censored during the later part of his tantrum. Not a good day in the life of Eliezer's reputation.

Comment author: Bugmaster 21 January 2012 02:14:19AM 7 points [-]

Fair enough; I was unaware of the Roko incident (understandably so, since apparently it was Sovieted from history). I have now looked it up elsewhere, though. Thanks for the info.

Comment author: thomblake 20 January 2012 07:25:50PM 2 points [-]

A lot of people are suggesting something like "SIAI should publish more papers", but I'm not sure anyone (including those who are making the suggestion) would actually change their behavior based on that. It sounds an awful lot like "SIAI should hire a PhD".

Comment author: antigonus 22 January 2012 08:07:21AM 4 points [-]

Of course it depends on the specific papers and the nature of the publications. "Publish more papers" seems like shorthand for "Demonstrate that you are capable of rigorously defending your novel/controversial ideas well enough that very many experts outside of the transhumanism movement will take them seriously." It seems to me that doing this would change a lot of people's behavior.

Comment author: Kaj_Sotala 20 January 2012 09:44:17PM *  10 points [-]

I've been a donor for a long time, but every now and then I've wondered whether I should be - and the fact that they don't publish more has been one of the main reasons why I've felt those doubts.

I do expect the paper thing to actually be the true rejection of a lot of people. I mean, demanding some outputs is one of the most basic expectations you could have.

Comment author: CronoDAS 21 January 2012 01:27:45AM 6 points [-]

I consider "donating to SIAI" to be on the same level as "donating to webcomics" - I pay Eliezer for the entertainment value of his writing, in the same spirit as when I bought G.E.B. and thereby paid Douglas Hofstadter for the entertainment value of his writing.

Comment author: Risto_Saarelma 20 January 2012 06:23:55AM *  13 points [-]

What SIAI could do to help the image problem: Get credible grown-ups on board.

The main team looks to be in their early thirties, and the visiting fellows mostly students in their twenties. With the claims of importance SIAI is making, people go looking for people over forty who are well-established as serious thinkers, AI experts or similarly known-competent folk in a relevant field. There should either be some who are sufficiently sold on the SIAI agenda to be on board full-time, or quite a few more in some kind of endorsing partnership role. Currently there's just Ray Kurzweil on the team page, and beyond "Singularity Summit Co-Founder", there's nothing there saying just what his relation to SIAI is, exactly. SIAI doesn't appear to be suitably convincing to have gotten any credible grown-ups as full-time team members.

There are probably good reasons why this isn't useful for what SIAI is actually trying to do, but the demographic of thirty-somethings leading the way and twenty-somethings doing stuff looks way iffier at a glance for "support us in solving the most important philosophical, societal and technological problem humanity has ever faced once and for all!" than it does for "we're doing a revolutionary Web 3.0 SaaS multi mobile OS cloud computing platform!"

Comment author: Vaniver 20 January 2012 01:39:09AM *  15 points [-]

There are two recurring themes: peer-reviewed technical results, and intellectual firepower.

If you want to show people intellectual firepower and the awesomeness of your conversations, tape the conversations. Just walk around with a recorder going all day, find the interesting bits later, and put them up for people to listen to.

But... you're not selling "we're super bright," you're selling "we're super effective." And for that you need effectiveness. Earnest, bright people wasting their effort is an old thing, and with goals as large as yours it's difficult to see the difference between progress and floundering.

Comment author: lsparrish 20 January 2012 02:42:31AM *  6 points [-]

I'm pretty sure most everyone here already knows this, but the perception of arrogance is basically a signalling/counter-signalling problem. If you boast (produce expensive signals of your own fitness), that tells people you are not too poor to have anything to boast about. But it can also signal that you have a need to brag to be noticed, which in turn can be interpreted to mean you aren't truly the best of the best. The basic question is context.

Is there a serious danger your potential contributions will be missed? If so, it is wisest to boast. Is there already an arms race of other boasts to compete with? Is boasting so cheap nobody will pay it any attention? In that case, the best strategy is to stun people with unexpected modesty. You can also save resources that way, as long as nobody interprets that as a need to save resources.

Pulling off the modesty trick can turn out to be harder than an effective boast, which is of course related to why it works. People have to receive the information that you are competent somehow -- a subtle nudge of some kind, preexisting reputation, etc. It also comes to a point of saturation, just like loud/direct boasting does; it's just harder to notice when it does.

So when someone unexpectedly acts arrogant in a niche where modesty has become commonplace, my theory is that it can actually act as a counter-counter-signal. To pull it off they would have to somehow distinguish their arrogance from that of a low-status blowhard who is only making noise because otherwise they wouldn't be noticed.

Logically extrapolating this, we might then get the more seemingly modest counter-counter-counter signaler, who is able to signal (through a supremely sophisticated mechanism) that they don't need to signal arrogance and separate themselves from modest folk who are so pretentious as to signal their modesty by keeping quiet in order to prevent themselves from being confused with blowhards who signal expensively. However, for counter(3)-signaling to be an advantage there would first need to be a significant population of counter(2)-signalers to compete against. I'm guessing this probably just sort of slides into different kinds of signal/counter-signal forms rather than going infinitely meta.

Comment author: Viliam_Bur 19 January 2012 03:28:22PM 39 points [-]

SI is arrogant because it pretends to be even better than science, while failing to publish in significant scientific journals. If this does not seem like pseudoscience or a cult, I don't know what does.

So please either stop pretending to be so great or prove it! For starters, it is not necessary to publish a paper about AI; you can choose any other topic.

No offense; I honestly think you are all awesome. But there are some traditional ways to prove one's skills, and if you don't accept the challenge, you look like wimps. Even if the ritual is largely a waste of time (all signals are costly), there are thousands of people who have passed it, so a group of x-rational gurus should be able to use their magical powers and do it in five minutes, right?

Comment author: Bugmaster 21 January 2012 12:41:04AM *  16 points [-]

Yeah. The best way to dispel the aura of arrogance is to actually accomplish something amazing. So, SIAI should publish some awesome papers, or create a powerful (1) AI capable of some impressive task like playing Go (2), or end poverty in Haiti (3), or something. Until they do, and as long as they're claiming to be super-awesome despite the lack of any non-meta achievements, they'll be perceived as arrogant.

(1) But not too powerful, I suppose.
(2) Seeing as Jeopardy is taken.
(3) In a non-destructive way.

Comment author: DuncanS 19 January 2012 10:00:08PM 2 points [-]

There are indeed times you can get the right answer in five minutes (no, seconds), but it still takes the same length of time as for everyone else to write the thing up into a paper.

Comment author: Viliam_Bur 20 January 2012 08:15:27AM *  14 points [-]

How much is that "same length of time"? Hours? Days? If 5 days of work could make LW acceptable in scientific circles, is it not worth doing? Or is it better to complain about why oh why more people don't take SI seriously?

Can some part of that work be outsourced? Just write the outline of the answer, then find some smart guy in India and pay him like $100 to write it? Or if money is not enough for people who could write the paper well, could you bribe someone by offering them co-authorship? Graduate students have to publish papers anyway, so if you give them a complete solution, they should be happy to cooperate.

Or set up a "scientific wiki" on SI site, where the smartest people will write the outlines of their articles, and the lesser brains can contribute by completing the texts.

These are my solutions, which seem rather obvious to me. It is not certain they would work, but I guess trying them is better than doing nothing. Could a group of x-rational gurus find seven more solutions in five minutes?

From outside, this seems like: "Yeah, I totally could do it, but I will not. Now explain to me why people who can do it are perceived as more skilled than me?" -- "Because they showed everyone they can do it, duh."

Comment author: Benja 30 August 2012 09:32:44AM 2 points [-]

Upvoted for clearly pointing out the tradeoff (yes publicly visible accomplishments that are easy to recognize as accomplishments may not be the most useful thing to work on, but not looking awesome is a price paid for that and needs to be taken into account in deciding what's useful). However, I want to point out that if I heard that an important paper was written by someone who was paid $100 and doesn't appear on the author list, my crackpot/fraud meter (as related to the people on the author list) would go ping-Ping-PING, whether that's fair or not. This makes me worry that there's still a real danger of SIAI sending the wrong signals to people in academia (for similar but different reasons than in the OP).

Comment author: TheOtherDave 19 January 2012 09:24:42PM 7 points [-]

If it helps at all, another data point (not quite answers to your questions):

  • I'm a complete SI outsider. My exposure to it is entirely indirectly through Less Wrong, which from time to time seems to function as a PR/fundraising/visibility tool for SI.
  • I have no particular opinion about SI's arrogance or non-arrogance as an organization, or EY's arrogance or non-arrogance as an individual. They certainly don't demonstrate humility, nor do they claim to, but there's a wide middle ground between the two.
  • I doubt I would be noticeably more likely to donate money, or to encourage others to donate money, if SI convinced me that it was now 50% less arrogant than it was in 2011.
  • One thing that significantly lowers my likelihood of donating to SI is my estimate that the expected value of SI's work is negligible, and that the increase/decrease in that EV based on my donations is even more so. It's not clear what SI can really do to increase my EV-of-donating, though.
  • Similar to the comment you quote, someone's boasts:accomplishments ratio is directly proportional to my estimate that they are crackpots. OTOH, I find it likely that without the boasting and related monkey dynamics, SI would not receive the funding it has today, so it's not clear that adopting a less boastful stance is actually a good idea from SI's perspective. (I'm taking as given that SI wants to continue to exist and to increase its funding.)
  • Just to be clear what I mean by "boasts," here... throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral. I don't think that much is at all controversial, but if you really want specific instances I might be motivated to go back through and find some. (Probably not, though.)

Comment author: wedrifid 20 January 2012 01:59:14PM 1 point [-]

throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral.

I cannot think of one example of a claim along those lines.

Comment author: XiXiDu 20 January 2012 02:26:18PM *  1 point [-]

...throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways...

I cannot think of one example of a claim along those lines.

The closest I can think of right now is the following quote from Eliezer's January 2010 video Q&A:

So if I got hit by a meteor right now, what would happen is that Michael Vassar would take over responsibility for seeing the planet through to safety, and say ‘Yeah I’m personally just going to get this done, not going to rely on anyone else to do it for me, this is my problem, I have to handle it.’ And Marcello Herreshoff would be the one who would be tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don’t know of any other person who could do that, or I’d be working with them. There’s not really much of a motive in a project like this one to have the project split into pieces; whoever can do work on it is likely to work on it together.

ETA

Skimming over the CEV document I see some hints that could explain where the idea comes from that Eliezer believes that he has the wisdom to transform the world:

This seems obvious, until you realize that only the Singularity Institute has even tried to address this issue. [...] Once I acknowledged the problem existed, I didn't waste time planning the New World Order.

Comment author: wedrifid 20 January 2012 02:43:38PM *  6 points [-]

...throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways...

I cannot think of one example of a claim along those lines.

The closest I can think of right now is the following quote from Eliezer's January 2010 video Q&A:

You quoted the context of my statement but edited out the part my reply was based on. Don't do that.

and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral.

The very quote of Eliezer that you supply in the parent demonstrates that Eliezer presents himself as actually trying to do those "impossible" transformations, not refraining from doing them for moral reasons. That part just comes totally out of left field, and since it is presented as a conjunction, the whole thing just ends up false.

Comment author: XiXiDu 20 January 2012 06:27:58PM 4 points [-]

You quoted the context of my statement but edited out the part my reply was based on. Don't do that.

My apologies, it wasn't my intention to do that. Careless oversight.

Comment author: TheOtherDave 20 January 2012 04:13:43PM 5 points [-]

Thanks for clarifying what part of my statement you were objecting to.

Mostly what I was thinking of on that side was the idea that actually building a powerful AI, or even taking tangible steps that make the problem of building a powerful AI easier, would result in the destruction of the world (or, at best, the creation of various "failed utopias"), and therefore the moral thing to do (which most AI researchers, to say nothing of lesser mortals, aren't wise enough to realize is absolutely critical) is to hold off on that stuff and instead work on moral philosophy and decision theory.

I recall a long wave of exchanges of the form "Show us some code!" "You know, I could show you code... it's not that hard a problem, really, for one with the proper level of vampiric aura, once the one understands the powerful simplicity of the Bayes-structure of the entire universe and finds something to protect important enough to motivate the one to shut up and do the impossible. But it would be immoral for me to write AI code right now, because we haven't made enough progress in philosophy and decision theory to do it safely."

But looking at your clarification, I will admit I got sloppy in my formulation, given that that's only one example (albeit a pervasive one). What I should have said was "throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways, one obvious tangible expression of which (that is, actual AI design) he holds back from creating only because he possesses the unusual wisdom to realize that doing so is immoral."

Comment author: wedrifid 20 January 2012 04:46:06PM 1 point [-]

"You know, I could show you code... it's not that hard a problem, really,

I'd actually be very surprised if Eliezer had ever said that - since it is plainly wrong and as far as I know Eliezer isn't quite that insane. I can imagine him saying that it is (probably) an order of magnitude easier than making the coded AI friendly, but that still just places it lower on a scale of 'impossible'. Eliezer says many things that qualify for the label arrogant but I doubt this is one of them.

If Eliezer thought AI wasn't a hard problem he wouldn't be comfortable dismissing (particular instances of) AI researchers who don't care about friendliness as "Mostly Harmless"!

Comment author: TheOtherDave 20 January 2012 05:31:11PM 1 point [-]

What I wrote was "it's not that hard a problem, really, for one with (list of qualifications most people don't have)," which is importantly different from what you quote.

Incidentally, I didn't claim it was arrogant. I claimed it was a boast, and I brought boasts up in the context of judging whether someone is a crackpot. I explicitly said, and I repeat here, that I don't really have an opinion about EY's supposed arrogance. Neither do I think it especially important.

Comment author: wedrifid 20 January 2012 05:36:35PM 2 points [-]

What I wrote was "it's not that hard a problem, really, for one with (list of qualifications most people don't have)," which is importantly different from what you quote.

I extend my denial to the full list. I do not believe Eliezer has made the claim that you allege he has made, even with the list of qualifications. It would be a plainly wrong claim and I believe you have made a mistake in your recollection.

The flip side is that if Eliezer has actually claimed that it isn't a hard problem (with the list of qualifications) then I assert that said claim significantly undermines Eliezer's credibility in my eyes.

Comment author: TheOtherDave 20 January 2012 02:08:53PM 1 point [-]

OK; I stand corrected about the controversiality.

Comment author: Vaniver 20 January 2012 12:44:13AM 3 points [-]

EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral.

I am not impressed by those sorts of ploys.

Comment author: Dr_Manhattan 19 January 2012 02:38:19PM 17 points [-]

I think Eli, as the main representative of SI, should be more careful about how he does things, and resist his natural instinct to declare people stupid (-> Especially <- if he's basically right)

Case in point: http://www.sl4.org/archive/0608/15895.html That could have been handled more politically and with more face-saving for the victim. Now you have this guy and at least one "friend" with loads of free time going around putting down anything associated with Eliezer or SI on the Internet. For 5 minutes of extra thinking and not typing, this could have been largely avoided. Eli has to realize that he's in a good position to needlessly hurt his (and our) own causes.

Another case in point was handling the Roko affair. There is doing the right thing, but you can do it without being an asshole (also IMO the "ownership" of LW policies is still an unresolved issue, but at least it's mostly "between friends"). If something like this needs to be done Eli needs to pass the keyboard to cooler heads.

Comment author: Nick_Tarleton 19 January 2012 03:56:11PM *  6 points [-]

Case in point: http://www.sl4.org/archive/0608/15895.html That could have been handled more politically and with more face-saving for the victim.

Note: happened five years ago

Comment author: Multiheaded 19 January 2012 06:16:27PM 7 points [-]

Certainly anyone building a Serious & Official image for themselves should avoid mentioning any posteriors not of the probability kind in their public things.

Comment author: Dr_Manhattan 19 January 2012 04:49:46PM 2 points [-]

Already noted, and I'm guessing the situation improved. But it's still a symptom of a harmful personality trait.

Comment author: cousin_it 19 January 2012 09:36:20AM *  42 points [-]

My #1 suggestion, by a big margin, is to generate more new formal math results.

My #2 suggestion is to communicate more carefully, like Holden Karnofsky or Carl Shulman. Eliezer's tone is sometimes too preachy.

Comment author: Solvent 19 January 2012 11:10:08AM 20 points [-]

I've reccommended this before, I think.

I think that you should get Eliezer to say the accurate but arrogant sounding things, because everyone already knows he's like that. You yourself, Luke, should be more careful about maintaining a humble tone.

If you need people to say arrogant things, make them ghost-write for Eliezer.

Personally, I think that a lot of Eliezer's arrogance is deserved. He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's problems. CFAI was way ahead of its time, as TDT still is. So he can feel smug. He's got a reputation as an arrogant eccentric genius anyway.

But the rest of the organisation should try to be more careful. You should imitate Carl Shulman rather than Eliezer.

Comment author: J_Taylor 19 January 2012 09:38:33PM *  9 points [-]

He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's problems.

As a curiosity, what would the world look like if this were not the case? I mean, I'm not even sure what it means for such a sentence to be true or false.

Addendum: Sorry, that was way too hostile. I accidentally pattern-matched your post to something that an Objectivist would say. It's just that, in professional philosophy, there does not seem to be a consensus on what a "problem of philosophy" is. Likewise, there does not seem to be a consensus on what a solution to one would look like. It seems that most "problems" of philosophy are dismissed, rather than ever solved.

Comment author: Solvent 20 January 2012 12:50:52AM 11 points [-]

Here are examples of these philosophical solutions. I don't know which of these he solved personally, and which he simply summarized others' answer to:

  • What is free will? Ooops, wrong question. Free will is what a decision-making algorithm feels like from the inside.

  • What is intelligence? The ability to optimize things.

  • What is knowledge? The ability to constrain your expectations.

  • What should I do with the Newcomb's Box problem? TDT answers this.

...other examples include inventing Fun theory, using CEV to make a better version of utilitarianism, and arguing for ethical injunctions using TDT.

And so on. I know he didn't come up with these on his own, but at the least he brought them all together and argued convincingly for his answers in the Sequences.

I've been trying to figure out these problems for years. So have lots of philosophers. I have read these various philosophers' proposed solutions, and disagreed with them all. Then I read Eliezer, and agreed with him. I feel that this is strong evidence that Eliezer has actually created something of value.

Comment author: J_Taylor 20 January 2012 08:45:26AM *  7 points [-]

What is free will? Ooops, wrong question. Free will is what a decision-making algorithm feels like from the inside.

I admire the phrase "what an algorithm feels like from the inside". This is certainly one of Yudkowsky's better ideas, if it is one of his. I think that one can see the roots of it in G.E.B. Still, this may well count as something novel.

Nonetheless, Yudkowsky is not the first compatibilist.

What is intelligence? The ability to optimize things.

One could define the term in such a way. I tend to take an instrumentalist view on intelligence. However, "the ability to optimize things" may well be a thing. You may as well call it intelligence, if you are so inclined.

This, nonetheless, may not be a solution to the question "what is intelligence?". It seems as though most competent naturalists have moved past the question.

What is knowledge? The ability to constrain your expectations.

I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?

What should I do with the Newcomb's Box problem? TDT answers this.

I have absolutely no knowledge of the history of Newcomb's problem. I apologize.

Further apologies for the following terse statements:

I don't think Fun theory is known by academia. Also, it looks like, at best, a contemporary version of eudaimonia.

The concept of CEV is neat. However, I think if one were to create an ethical version of the pragmatic definition of truth, "The good is the end of inquiry" would essentially encapsulate CEV. Well, as far as one can encapsulate a complex theory with a brief statement.

TDT is awesome. Predicted by the superrationality of Hofstadter, but so what?

I don't mean to discount the intelligence of Yudkowsky. Further, it is extremely unkind of me to be so critical of him, considering how much he has influenced my own thoughts and beliefs. However, he has never written a "Two Dogmas of Empiricism" or a Naming and Necessity. Philosophical influence is something that probably can only be seen, if at all, in retrospect.

Of course, none of this really matters. He's not trying to be a good philosopher. He's trying to save the world.

Comment author: Solvent 21 January 2012 12:14:49AM *  3 points [-]

I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?

Okay, the Gettier problem. I can explain the Gettier problem, but it's just my explanation, not Eliezer's.

The Gettier problem is pointing out problems with the definition of knowledge as justified true belief. "Justified true belief" (JTB) is an attempt at defining knowledge. However, it falls into the classic problem with philosophy of using intuition wrong, and has a variety of other issues. Lukeprog discusses the weakness of conceptual analysis here.

Also, it's only for irrational beings like humans that there is a distinction between "justified" and "belief." An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn't usefully define knowledge anyway.

Incidentally, I just re-read this post, which says:

Yudkowsky once wrote, "If there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it." When I read that I thought: What? That's Quinean naturalism! That's Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!

So perhaps Eliezer didn't create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Leibniz and calculus, really.

Comment author: J_Taylor 23 January 2012 09:43:11PM 2 points [-]

I am aware of the Gettier Problem. I just do not see the phrase "the ability to constrain one's expectations" as being a proper conceptual analysis of "knowledge." If it were a conceptual analysis of "knowledge", it probably would be vulnerable to Gettierization. I love Bayesian epistemology. However, most Bayesian accounts which I have encountered either do away with knowledge-terms or redefine them in such a way that they entirely fail to match the folk-term "knowledge". Attempting to define "knowledge" is probably attempting to solve the wrong problem. This is a significant weakness of traditional epistemology.

So perhaps Eliezer didn't create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Hooke and calculus, really.

I am not entirely familiar with Eliezer's history. However, he is clearly influenced by Hofstadter, Dennett, and Jaynes. From just the first two, one could probably assemble a working account which is weaker than, but has surface resemblances to, Eliezer's espoused beliefs.

Also, I have never heard of Hooke independently inventing calculus. It sounds interesting however. Still, are you certain you are not thinking of Leibniz?

Comment author: asr 21 January 2012 01:03:09AM *  4 points [-]

Also, it's only for irrational beings like humans that there is a distinction between "justified" and "belief." An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn't usefully define knowledge anyway.

I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable. An AI is very likely to have beliefs or behaviors that are irrational, to have rational beliefs that cannot be effectively proved to be such, and to have no reliable way to distinguish the two.
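
To make the intractability point concrete, here is a minimal sketch (my own toy illustration; the function names and numbers are made up, not anything SI has proposed). Exact updating over a joint hypothesis space of n binary propositions needs a table of 2^n entries, which doubles with every proposition you add:

    from itertools import product

    def exact_update(n, prior, likelihood, evidence):
        """Exact posterior over all 2**n hypotheses (truth assignments)."""
        hypotheses = list(product([False, True], repeat=n))  # 2**n entries
        unnorm = {h: prior(h) * likelihood(evidence, h) for h in hypotheses}
        z = sum(unnorm.values())
        return {h: p / z for h, p in unnorm.items()}

    # Toy run: uniform prior, likelihood favoring hypotheses that agree with
    # the evidence on proposition 0. At n = 30 the table already has ~10^9
    # entries, so a real system has to approximate rather than update purely.
    posterior = exact_update(
        n=3,
        prior=lambda h: 1.0,
        likelihood=lambda e, h: 0.9 if h[0] == e else 0.1,
        evidence=True,
    )
    print(max(posterior, key=posterior.get))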

Comment author: XiXiDu 21 January 2012 10:25:44AM 5 points [-]

I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable.

Isn't this also true for expected utility-maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.

An AI is very likely to have beliefs or behaviors that are irrational...

Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational but computationally limited; actual artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are in fact no faces, just as humans do, but on very different occasions. Or take the answers of IBM Watson. Some were wrong, but in completely new ways. That's a real danger in my opinion.

Comment author: lessdazed 23 January 2012 03:29:42PM 2 points [-]

As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are in fact no faces, just as humans do, but on very different occasions.

I appreciate the example. It will serve me well. Upvoted.

Comment author: wedrifid 22 January 2012 10:52:11PM 3 points [-]

Is a definition of utility that is precise enough to be usable even possible? Honest question.

Honest answer: Yes. For example 1 utilon per paperclip.
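
As a toy sketch of what "precise enough to be usable" can mean (the actions and outcome probabilities below are made up purely for illustration), an agent can maximize expected utility under exactly that definition:

    def utility(state):
        """1 utilon per paperclip."""
        return state["paperclips"]

    def best_action(actions, outcome_model):
        """Pick the action with the highest expected utility under an explicit outcome model."""
        def expected_utility(action):
            return sum(p * utility(s) for s, p in outcome_model(action))
        return max(actions, key=expected_utility)

    # Made-up outcome distributions, purely for illustration.
    outcomes = {
        "build_factory": [({"paperclips": 100}, 0.5), ({"paperclips": 0}, 0.5)],
        "hand_fold":     [({"paperclips": 30}, 1.0)],
    }
    print(best_action(outcomes, lambda a: outcomes[a]))  # -> build_factory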

Comment author: mwengler 19 January 2012 03:34:29PM 12 points [-]

I think having people ghost-write for Eliezer is a deeply suboptimal solution in the long run. It removes integrity from the process. SI would become insufficiently distinguishable from Scientology or a political party if it did this.

Eliezer is a real person. He is not "big brother" or some other fictional figurehead used to manipulate the followers. The kind of people you want, and have, following SI or lesswrong will discount Eliezer too much when (not if) they find out he has become a fiction employed to manipulate them.

Comment author: Solvent 20 January 2012 12:55:01AM 4 points [-]

Yeah, I kinda agree. I was slightly exaggerating my position for clarity.

Maybe not full on ghost-writing. But occasionally, having someone around who can say what he wants without further offending anybody can be useful. Like, part of the reason the Sequences are awesome is that he personally claims that they are. Also, Eliezer says:

I should note that if I'm teaching deep things, then I view it as important to make people feel like they're learning deep things, because otherwise, they will still have a hole in their mind for "deep truths" that needs filling, and they will go off and fill their heads with complete nonsense that has been written in a more satisfying style.

So occasionally SingInst needs to say something that sounds arrogant.

I just think that when possible, Eliezer should say those things.

Comment author: brilee 19 January 2012 03:59:24PM 8 points [-]

To be honest, I've only ever felt SI/EY/LW's "arrogance" once, and I think that LW in general is pretty damn awesome. (I realize I'm equating LW with SI, but I don't really know what SI does)

The one time is while reading through the Free Will page (http://wiki.lesswrong.com/wiki/Free_will), which I've copied here: "One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own."

This smacks strongly of "oh look, there's a classic stumper, and I'm the ONLY ONE who's solved it (naa naa naa). If you want to be a true rationalist/join the tribe, you better solve it on your own, too"

I've also heard others mention that HP from HPMoR is an insufferable little twat, which I assume is the same attitude they would have if they were to read LW.

I've written some of my thoughts up about the arrogance issue here. The short version is that some people have strongly developed identities as "not one of those pretentious people" and have strong immune responses when encountering intelligence. http://moderndescartes.blogspot.com/2011/07/turn-other-cheek.html

Comment author: ArisKatsaris 20 January 2012 12:33:35AM *  6 points [-]

I've also heard others mention that HP from HPMoR is an insufferable little twat, which I assume is the same attitude they would have if they were to read LW.

I also think that HJPEV is an insufferable little twat / horrible little jerk, but I love LW and have donated hundreds of dollars to SIAI. And I've strongly recommended HPMOR itself even when I warn people it has something of a jerk for a protagonist. Why shouldn't I? Is anyone disputing that he's much less nice than e.g. Hermione is, and he often treats other people with horribly bad manners? If he's not insufferable, who is actually suffering him other than Hermione (who has also had to punish him by not speaking to him for a week) or Draco (who found him so insufferable on occasion that he locked him up and Gom-Jabbared him...)

Comment author: Bugmaster 21 January 2012 01:19:08AM 1 point [-]

I also think that HJPEV is an insufferable little twat / horrible little jerk...

I always assumed that this character detail was intentional, especially since some other characters call HP out on it explicitly.

Comment author: wedrifid 19 January 2012 04:20:23PM 7 points [-]

The one time is while reading through the Free Will page (http://wiki.lesswrong.com/wiki/Free_will), which I've copied here: "One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own."

Ewww! That's hideous. It seems to be totally subverting the point of the wiki. I actually just went as far as to log in planning to remove the offending passage until I noticed that Eliezer put it there himself.

I'm actually somewhat embarrassed by that page now that you've brought it to our attention. I rather hope we can remove it and replace it with either just a summary of what free will looks like dissolved or a placeholder with the links to relevant blog posts.

Comment author: multifoliaterose 19 January 2012 01:50:47PM *  8 points [-]

(a) My experience with the sociology of academia has been very much in line with what lukeprog's friend, Shminux, and RolfAndreassen describe. This is the culture that I was coming from in writing my post titled Existential Risk and Public Relations. Retrospectively I realize that the modesty norm is unusually strong in academia and to that extent I was off-base in my criticism.

The modesty norms have some advantages and disadvantages. I think that it's appropriate for even the best people to take the view "I'm part of a vast undertaking; if I hadn't gotten there first it's not unlikely that someone else would have gotten there within a few decades." However, I'm bothered by the fact that the norm is so strong that innocuous questions/comments which are quite weak signals of immodesty are frowned upon.

(b) I agree with cousin_it that it would be good for SIAI staff to "communicate more carefully, like Holden Karnofsky or Carl Shulman."

Comment author: XiXiDu 19 January 2012 05:17:37PM 8 points [-]

I agree with cousin_it that it would be good for SIAI staff to "communicate more carefully, like Holden Karnofsky or Carl Shulman."

I agree with this. I probably would have never voiced any skepticism/criticism if most SI/LW folks would be more like Holden Karnofsky, Carl Shulman, Nick Bostrom or cousin_it.

Comment author: RolfAndreassen 19 January 2012 06:33:35AM 20 points [-]

I agree with what has been said about the modesty norm of academia; I speculate that it arises because if you can avoid washing out of the first-year math courses, you're already one or two standard deviations above average, and thus you are in a population in which achievements that stood out in a high school (even a good one) are just not that special. Bragging about your SAT scores, or even your grades, begins to feel a bit like bragging about your "Participant" ribbon from sports day. There's also the point that the IQ distribution in a good physics department is not Gaussian; it is the top end of a Gaussian, sliced off. In other words, there's a lower bound and an exponential frequency decay from there. Thus, most people in a physics department are on the lower end of their local peer group. I speculate that this discourages bragging because the mass of ordinary plus-two-SDs doesn't want to be reminded that they're not all that bright.
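
A quick way to check the "sliced off" picture (a toy simulation of my own, with an arbitrary +2 SD cutoff): most of the mass of a truncated normal sits just above the cutoff, so most members of the group really are near its lower end.

    import random

    # One million draws from a standard normal; keep only the "sliced off" top end.
    samples = [random.gauss(0, 1) for _ in range(1_000_000)]
    dept = [x for x in samples if x >= 2.0]
    near_cutoff = sum(1 for x in dept if x < 2.5)  # within 0.5 SD of the bound
    print(len(dept), near_cutoff / len(dept))      # roughly 23,000 and ~0.73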

However, all that aside: Are academics the target of this blog, or of lukeprog's posts? Propaganda, to be effective, should reach the masses, not the elite - although there's something to be said for "Get the elite and the masses will follow", to be sure. Although academics are no doubt over-represented among LessWrong readers and indeed among regular blog readers, still they are not the whole world. Can we show that a glowing listing of not-very-specific awesomenesses is counterproductive to the average LW reader, or the average prospective recruit who might be pointed to lukeprog's post? If not, the criticism rather misses its mark. Academics can always be pointed to the Sequences instead; what we're missing is a quick introduction for the plus-one-SD who is not going to read three years of blog output.

Comment author: Karmakaiser 19 January 2012 03:52:07PM 3 points [-]

So if I could restate the norms of academia vis-à-vis modesty: "Do the impossible. But don't forget to shut up as well."

Is that a fair characterization?

Comment author: RolfAndreassen 19 January 2012 10:45:45PM *  13 points [-]

Well, no, I don't think so. Most academics do not work on impossible problems, or think of this as a worthy goal. So it should be more like "Do cool stuff, but let it speak for itself".

Moderately related: I was just today in a meeting to discuss a presentation that an undergraduate student in our group will be giving to show her work to the larger collaboration. On her first page she had

Subject

Her name

Grad student helping her

Dr supervisor no 1

Dr supervisor no 2

And to start off our critique, supervisor 1 mentioned that, in the subculture of particle physics, it is not the custom to list titles, at least for internal presentations. (If you're talking to a general audience the rules change.) Everyone knows who you are and what you've done! Thus, he gave the specific example that, if you mention "Leon", everyone knows you speak of Leon Lederman, the Nobel-Prize winner. But as for "Dr Lederman", pff, what's a doctorate? Any idiot can be a doctor and many idiots (by physics standards, that is) are; if you're not a PhD it's at least assumed that you're a larval version of one. It's just not a very unusual accomplishment in these circles. To have your first name instantly recognised is a much greater accolade. Doctors are thirteen to the dozen, but there is only one Leon.

Of course this is not really modesty, as such; it's a particular form of status recognition. We don't make much overt show of it, but everyone knows their position in the hierarchy!

Comment author: jsteinhardt 20 January 2012 08:26:25AM 1 point [-]

Wow, I didn't even consciously recognize this convention, although I would definitely never, for instance, add titles to the author list of a paper. So I seem to have somehow picked it up without explicitly deciding to.

Comment author: asr 20 January 2012 02:44:09AM *  2 points [-]

I have seen this elsewhere in the academy as well.

At many elite universities, professors are never referred to as Dr-so-and-so. Everybody on the faculty has a doctorate. They are Professor-so-and-so. At some schools, I'm told they are referred to as Mr or Mrs-so-and-so. Similar effect: "we know who's cool and high-status and don't need to draw attention to it."

Comment author: [deleted] 18 January 2012 11:50:12PM *  49 points [-]

(I hope this doesn't come across as overly critical because I'd love to see this problem fixed. I'm not dissing rationality, just its current implementation. You have declared Crocker's Rules before, so I'm giving you an emotional impression of what your recent rationality propaganda articles look like to me, and I hope that doesn't come across as an attack, but something that can be improved upon.)

I think many of your claims of rationality powers (about yourself and other SIAI members) look really self-congratulatory and, well, lame. SIAI plainly doesn't appear all that awesome to me, except at explaining how some old philosophical problems have been solved somewhat recently.

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?! Frankly, the only publicly visible person who strikes me as having some awesome powers is you, and from reading CSA, you seem to have had high productivity (in writing and summarizing) before you ever met LW.

Maybe there are all these awesome feats I just never get to see because I'm not at SIAI, but I've seen similar levels of confidence in your methods and weak results in the New Age circles I hung out in years ago. Your beliefs are much saner, but as long as you can't be more effective than them, I'll always have a problem taking you seriously.

In short, as you yourself noted, you lack a Tim Ferriss. Even for technical skills, there isn't much I can point at and say, "holy shit, this is amazing and original, I wanna learn how to do that, have all my monies!".

(This has little to do with the soundness of SIAI's claims about Intelligence Explosion etc., though, but it does decrease my confidence that conclusions reached through your epistemic rationality are to be trusted if the present results seem so lacking.)

Comment author: FiftyTwo 19 January 2012 12:32:05AM 21 points [-]

Thought experiment

If the SIAI were a group of self-interested/self-deceiving individuals, similar to new age groups, who had made up all this stuff about rationality and FAI as a cover for fundraising, what different observations would we expect?

Comment author: katydee 19 January 2012 05:34:39PM *  17 points [-]

I would expect them to:

  • 1- Never hire anybody or hire only very rarely
  • 2- Not release information about their finances
  • 3- Avoid high-profile individuals or events
  • 4- Laud their accomplishments a lot without producing concrete results
  • 5- Charge large amounts of money for classes/training
  • 6- Censor dissent on official areas, refuse to even think about the possibility of being a cult, etc.
  • 7- Not produce useful results

SIAI does not appear to fit 1 (I'm not sure what the standard is here), certainly does not fit 2 or 3, debatably fits 4, and certainly does not fit 5 or 6. 7 is highly debatable but I would argue that the Sequences and other rationality material are clearly valuable, if somewhat obtuse.

Comment author: private_messaging 27 July 2012 03:50:03PM *  3 points [-]

That goes for self-interested individuals with high rationality, purely material goals, and very low self-deception. The self-deceived case, on the other hand, covers people whose self-interest includes 'feeling important', 'believing oneself to be awesome', and perhaps even 'taking a shot at becoming the saviour of mankind'. In that case you should expect them to see awesomeness in anything that might possibly be awesome (various philosophy, various confused texts that might be becoming mainstream for all we know, you get the idea), combined with the absence of anything that is definitely awesome and can't be trivial (a new algorithmic solution to a long-standing, well-known problem that others have worked on, something practically important enough, etc.).

Comment author: FAWS 19 January 2012 12:49:42AM 11 points [-]

I wouldn't have expected them to hire Luke. If Luke had been a member all along and everything was just planned to make them look more convincing, that would imply a level of competence at such things that I'd expect all-round better execution (which would have helped more than the slightly improved believability from faking a lower level of PR etc. competence).

Comment author: lukeprog 19 January 2012 01:08:40AM *  17 points [-]

I appreciate the tone and content of your comment. Responding to a few specific points...

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?!

There are many things we aren't (yet) good at. There are too many things about which to check the science and test things and update. In fact, our ability to collaborate successfully with volunteers on things has greatly improved in the last month, in part because we implemented some advice from the GWWC gang, who are very good at collaborating with volunteers.

the only publicly visible person who strikes me as having some awesome powers is you

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.

Aaaaaaaaaand now Carl will slap me for setting expectations too high. But I don't think I'm exaggerating that much. Maybe I'll get by with another winky-face.

;)

Comment author: Karmakaiser 19 January 2012 03:31:14PM 20 points [-]

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

I wasn't aware of Google's AGI team accepting CFAI. Is there a list of organizations that consider the Friendly AI issue important?

Comment author: jacob_cannell 03 March 2012 05:19:30AM 3 points [-]

I wasn't even aware of "Google's AGI team"...

Comment author: [deleted] 19 January 2012 02:47:19AM 38 points [-]

I don't think you're taking enough of an outside view. Here's how these accomplishments look to "regular" people:

CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team.

You wrote something 11 years ago which you now consider defunct and which still is not a mainstream view in any field.

The Sequences are simply awesome.

You wrote a series of esoteric blog posts that some people like.

And he did manage to write the most popular Harry Potter fanfic of all time.

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.

You have a guy who is pretty smart. Ok...

The point I'm trying to make is, muflax's diagnosis of "lame" isn't far off the mark. There's nothing here with the ability to wow someone who hasn't heard of SIAI before, or to encourage people to not be put off by arguments like the one Eliezer makes in the Q&A.

Comment author: atucker 19 January 2012 10:53:59AM *  12 points [-]

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

It's actually been incredibly useful for establishing the credibility of every x-risk argument that I've had with people my age.

"Have you read Harry Potter and the Methods of Rationality?"

"YES!"

"Ah, awesome!"

merriment ensues

topic changes to something about things that people are doing

"So anyway the guy who wrote that also does...."

Comment author: [deleted] 19 January 2012 12:23:54PM 19 points [-]

Again, take the outside outside view. The kind of conversation you described only happens with people who have read HPMoR--just telling people about the fic isn't really impressive. (Especially if we are talking about the 90+% of the population who know nothing about fanfiction.) Ditto for the Sequences, they're only impressive after the fact. Compare this to publishing a number of papers in a mainstream journal, which is a huge status boost even to people who have never actually read the papers.

Comment author: atucker 19 January 2012 12:38:24PM 2 points [-]

I don't think that that kind of status converts nearly as well as establishing a niche of people who start adopting your values, and then talking to them.

Comment author: [deleted] 19 January 2012 01:46:01PM *  14 points [-]

Perhaps not, but Luke was using HPMoR as an example of an accomplishment that would help negate accusations of arrogance, and for the majority of "regular" people, hearing that SIAI published journal articles does that better than hearing that they published Harry Potter fanfiction.

Comment author: pjeby 23 January 2012 06:37:32AM 4 points [-]

for the majority of "regular" people, hearing that SIAI published journal articles does that better than hearing that they published Harry Potter fanfiction

The majority of "regular" people don't know what journals are; apart from the Wall Street Journal and the New England Journal of Medicine, they mostly haven't heard of any. If asked about journal articles, many would say, "you mean like a blog?" (if younger) or think you were talking about a diary or a newspaper (if older).

They have, however, heard of Harry Potter. ;-)

Comment author: beoShaffer 19 January 2012 12:56:06AM 1 point [-]

Building off of this and my previous comment, I think that more (and more visible) rationality verification could help. First off, opening your ideas up to tests generally reduces perceptions of arrogance. Secondly, successful results would have similar effects to the technical accomplishments I mentioned above. (Note I expect wide-scale rationality verification to increase the amount of pro-LW evidence that can be easily presented to outsiders, not to increase my own confidence. Thus this isn't in conflict with conservation of expected evidence.)

Comment author: tetsuo55 19 January 2012 08:10:44PM *  1 point [-]

People tell me SI is arrogant, but I don't see it myself. When you tell someone something and open it up to falsification and criticism, I no longer see it as arrogance (but I am wrong there for some reason).

In any case, what annoys me about the claims made is that they're mostly based on anecdotal evidence and very little has come from research. Also, as a regular guy and not a scientist or engineer, I've noticed a distinct lack of any discussion of SI's viewpoints in the news.

I don't see anyone actively trying to falsify any of the claims in the sequences for example, and I think it's because you cannot really take them all that seriously.

A second problem is that there are many typo's, little mistakes and (due to new experimental evidence) wrong things in the sequences, and they never get updated. I'd rather see the sequences as part of a continually updated wiki-like lesson plan, where feedback is reviewed by a kind of board and the texts are changed accordingly.

The nitpicks mentioned on rationalwiki also contribute to the feeling of cultishness and arrogance:

http://rationalwiki.org/wiki/LessWrong

The part about quantum mechanics could use some extra posts, especially since EY does explain why he makes the claim when you take the whole of the sequences into account. He uses evidence from unrelated fields to prove many worlds.

EDIT: For some unknown reason people are downvoting my comment. If you downvoted this post or can see why, please tell me so I can learn and improve future posts. Private messages are OK if you don't want to do it through a response here.

Comment author: prase 20 January 2012 02:36:28PM 10 points [-]

there are many typo's

Murphy's law: a sentence criticising typos will contain a typo itself.

Comment author: tetsuo55 20 January 2012 04:35:43PM *  2 points [-]

Thanks. Google Docs is not flagging any typos; could you point some out for me?

Comment author: arundelo 20 January 2012 04:47:11PM *  5 points [-]

Apostrophes are not used to form plurals. (Some style guides give some exceptions, but this is not one of them.) The plural of "typo" is "typos". "Typo's" is a word, but it's the possessive form of "typo" (so it's not the word you want here).

(Ninja edit: better link.)

Comment author: paper-machine 19 January 2012 01:18:05AM 29 points [-]

I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.

If this is the case, it sounds like EY has a Chuck Norris problem, i.e., his mythos has spread beyond its reality.

Comment author: Tyrrell_McAllister 19 January 2012 02:23:42PM *  8 points [-]

I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.

I too don't remember that he ever claimed to have remarkable math ability. He's said that he was a "spoiled math prodigy" (or something like that), meaning that he showed precocious math ability while young but wasn't really challenged to develop it. Right now, his knowledge seems to be around the level of a third- or fourth-year math major, and he's never claimed otherwise. He surely has the capacity to go much further (as many people who reach that level do), but he hasn't even claimed that much, has he?

Comment author: private_messaging 27 July 2012 04:16:36PM *  4 points [-]

This leaves one wondering how the hell one could be this concerned about AI risk and yet not study math properly. How the hell can one go on about Bayesian this and Bayesian that but not study? How can one trust one's intuitions about how much computational power is needed for AGI, and not want to improve those intuitions?

I've speculated elsewhere that he would likely be unable to implement general Bayesian belief propagation on a graph, or even know what is involved (it's an NP-complete problem in general, and the accuracy of the solution is up to heuristics. Yes, heuristics. Biased ones, too). That's very bad when it comes to understanding rationality, because you will start going on with maxims like "update all your beliefs" etc., which look outright stupid to, e.g., me (I assure you I can implement a Bayesian belief propagation graph), and it triggers my 'it's another annoying person talking about things he has no clue about' reflex.

If you talk about Bayesian this and Bayesian that, you had better know the mathematics very well, because in practice all those equations get awfully hairy on general graphs (not just trees). If you don't know relevant math very well and you call yourself Bayesian, you are professing a belief in belief. And if you do not claim extreme mathematical skill and knowledge, yet go on about Bayesian this and that, other people will have to assume extreme mathematical skill and knowledge out of politeness.
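
For concreteness, here is a minimal sketch of what "the equations get hairy on general graphs" means in practice: loopy belief propagation on a tiny three-variable cycle, with made-up potentials chosen purely for illustration (the model, the numbers, and the function names are all hypothetical, not anyone's actual code). On a tree these message updates give exact marginals; on a graph with a cycle they are only a heuristic approximation, which is the point.

```python
import numpy as np
from itertools import product

# Toy pairwise model: three binary variables arranged in a cycle,
# with node potentials and an "agreement" edge potential (all made up).
edges = [(0, 1), (1, 2), (2, 0)]
unary = {i: np.array([0.6, 0.4]) for i in range(3)}   # node potentials
pair = np.array([[2.0, 1.0],
                 [1.0, 2.0]])                          # edge potential favoring agreement

def loopy_bp(edges, unary, pair, iters=50):
    """Synchronous loopy belief propagation; approximate on cyclic graphs."""
    # msgs[(i, j)] is the message from node i to node j (a length-2 vector).
    msgs = {(i, j): np.ones(2) for a, b in edges for (i, j) in [(a, b), (b, a)]}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # Product of node i's potential and all incoming messages except the one from j.
            incoming = unary[i].copy()
            for (k, l) in msgs:
                if l == i and k != j:
                    incoming = incoming * msgs[(k, l)]
            m = pair.T @ incoming          # sum over x_i of pair[x_i, x_j] * incoming[x_i]
            new[(i, j)] = m / m.sum()      # normalize for numerical stability
        msgs = new
    # Belief at each node: its potential times all incoming messages, normalized.
    beliefs = {}
    for i in unary:
        b = unary[i].copy()
        for (k, l) in msgs:
            if l == i:
                b = b * msgs[(k, l)]
        beliefs[i] = b / b.sum()
    return beliefs

def exact_marginals(edges, unary, pair):
    """Brute-force enumeration for comparison; only feasible for tiny models."""
    n = len(unary)
    marg = {i: np.zeros(2) for i in range(n)}
    for x in product([0, 1], repeat=n):
        p = 1.0
        for i in range(n):
            p *= unary[i][x[i]]
        for (i, j) in edges:
            p *= pair[x[i], x[j]]
        for i in range(n):
            marg[i][x[i]] += p
    return {i: m / m.sum() for i, m in marg.items()}

print("loopy BP:", loopy_bp(edges, unary, pair))
print("exact:   ", exact_marginals(edges, unary, pair))
```

On graphs with cycles the two need not agree, and loopy BP is not even guaranteed to converge; scaling beyond toy examples is exactly where the heuristics (and their biases) come in.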

Comment author: David_Gerard 27 July 2012 08:46:17PM 4 points [-]

If you don't know relevant math very well and you call yourself Bayesian, you are professing a belief in belief.

Yes.

Comment author: lukeprog 19 January 2012 01:36:49AM 12 points [-]

Yes. At various times we've considered hiring EY an advanced math tutor to take him to the next level more quickly. He's pretty damn good at math but he's not Terence Tao.

Comment author: paper-machine 19 January 2012 01:37:40AM 3 points [-]

So did you ask your friend where this notion of theirs came from?

Comment author: Aleksei_Riikonen 19 January 2012 12:51:56PM 3 points [-]

So, I have a few questions:

  1. What are the most egregious examples of SI's arrogance?

Since you explicitly ask a question phrased thus, I feel obligated to mention that last April I witnessed a certain email incident that I thought was extremely bad in some ways.

I do believe that lessons have been learned since then, though. Probably there's no need to bring the matter up again, and I only mention it since according to my ethics it's the required thing to do when asked such an explicit question as above.

(Some readers may wonder why I'm not providing details here. That's because after some thought, I for my part decided against making the incident public, since I expect it might subsequently get misrepresented to look worse than what's fair. (There might be value in showing records of the incident to new SIAI employees as an example of how not to do things, though.))

Comment author: Aleksei_Riikonen 20 January 2012 01:04:24PM 4 points [-]

Curse me for presenting myself as someone having interesting secret knowledge. Now I get several PMs asking for details.

In short, this "incident" was about one or two SIAI folks making a couple of obvious errors of judgment, and in the case of the error that sparked the whole thing, getting heatedly defensive about it for a moment. Other SIAI folks however recognized the obvious mistakes as such, so the issue was resolved, even though unprofessional conduct was observed for a moment.

The actual mistakes were rather minor, nothing dramatic. The surprising thing was that heated defensiveness took place on the way to those mistakes getting corrected.

(And since Eliezer is the SIAI guy most often accused of arrogance, I'll additionally state that here, that is not the case. Eliezer was very professional in the email exchange in question.)

Comment author: malthrin 19 January 2012 12:48:45AM 27 points [-]

There's a phrase that the tech world uses to describe the kind of people you want to hire: "smart, and gets things done." I'm willing to grant "smart", but what about the other one?

The sequences and HPMoR are fantastic introductory/outreach writing, but they're all a few years old at this point. The rhetoric about SI being more awesome than ever doesn't square with the trend I observe* in your actual productivity. To be blunt, why are you happy that you're doing less with more?

*I'm sure I don't know everything SI has actually done in the last year, but that's a problem too.

Comment author: ciphergoth 20 January 2012 08:09:40AM 3 points [-]

"smart and gets things done" I think originates with Joel Spolsky:

http://www.joelonsoftware.com/articles/fog0000000073.html

Comment author: malthrin 19 January 2012 01:18:42AM 13 points [-]

To educate myself, I visited the SI site and read your December progress report. I should note that I've never visited the SI site before, despite having donated twice in the past two years. Here are my two impressions:

  • Many of these bullet points are about work in progress and (paywalled?) journal articles. If I can't link it to my friends and say, "Check out this cool thing," I don't care. Tell me what you've finished that I can share with people who might be interested.
  • Lots on transparency and progress reporting. In general, your communication strategy seems focused on people who already are aware of and follow SIAI closely. These people are loud, but they're a small minority of your potential donors.
Comment author: lukeprog 19 January 2012 01:33:41AM 6 points [-]

Tell me what you've finished that I can share with people who might be interested.

Of course, things we finished before December 2011 aren't in the progress report. E.g. The Singularity and Machine Ethics.

In general, your communication strategy seems focused on people who already are aware of and follow SIAI closely.

Not really. We're also working on many things accessible to a wider crowd, like Facing the Singularity and the new website. Once the new website is up we plan to write some articles for mainstream magazines and so on.

Comment author: FiftyTwo 18 January 2012 11:51:11PM *  32 points [-]

The claim made that donating to the SIAI is the charity donation with the highest expected return* always struck me as rather arrogant, though I can see the logic behind it.

The problem is, firstly, that it's an extremely self-serving statement (equivalent to "giving us money is the best thing you can ever possibly do"); even if true, its credibility is reduced by the claim coming from the same person who would benefit from it.

Secondly, it requires me to believe a number of claims, each of which individually carries a burden of proof, and the conjunction even more so. These include: "Strong AI is possible," "friendly AI is possible," "The actions of the SIAI will significantly affect the results of investigations into FAI," and "the money I donate will significantly improve the effectiveness of the SIAI's research" (I expect the relationship between research effectiveness and funding isn't linear). All of which I have only your word for.

Thirdly, contrast this with other charities that are known to be very effective and can prove it, and whose results affect presently suffering people (e.g. the Against Malaria Foundation).

Caveat: I'm not arguing that any of the claims are wrong, but all the arguments I have for them come from people with an incentive to get me to donate, so I have reasonable grounds for questioning the whole construct from outside the argument.

*Can't remember the exact wording but that was the takeaway of a headline in the last fundraiser.

Comment author: lukeprog 19 January 2012 01:26:54AM *  1 point [-]

The claim made that donating to the SIAI is the charity donation with the highest expected return* always struck me as rather arrogant

I feel like I've heard this claimed, too, but... where? I can't find it.

Can't remember the exact wording but that was the takeaway of a headline in the last fundraiser.

Here is the latest fundraiser; which line were you thinking of? I don't see it.

Comment author: [deleted] 19 January 2012 01:35:19AM 11 points [-]

I feel like I've heard this claimed, too, but... where? I can't find it.

Question #5.

Comment author: lukeprog 19 January 2012 01:53:28AM *  5 points [-]

Yup, there it is! Thanks.

Eliezer tends to be more forceful on this than I am, though. I seem to be less certain about how much x-risk reduction is purchased by donating to SI as opposed to donating to FHI or GWWC (because GWWC's members are significantly x-risk focused). But when this video was recorded, FHI wasn't working as much on AI risk (as it is now), and GWWC barely existed.

I am happy to report that I'm more optimistic about the x-risk reduction purchased per dollar when donating to SI now than I was 6 months ago. Because of stuff like this. We're getting the org into better shape as quickly as possible.

Comment author: curiousepic 19 January 2012 04:42:07PM *  9 points [-]

because GWWC's members are significantly x-risk focused

Where is this established? As far as I can tell, one cannot donate "to" GWWC, and none of their recommended charities are x-risk focused.

Comment author: Thrasymachus 05 February 2012 07:44:33PM 2 points [-]

(Belated reply.) I can only offer anecdotal data here, but as one of the members of GWWC, I can say that many of the members are interested. Also, listening to the directors, most of them are interested in x-risk issues as well.

You are right that GWWC isn't a charity (although it is likely to turn into one), and their recommendations are non-x-risk. The rationale for recommending charities depends on reliable data, and x-risk is one of those areas where a robust "here's how much more likely a happy singularity will be if you give to us" analysis looks very hard.

Comment author: Barry_Cotter 19 January 2012 01:35:38AM 1 point [-]

I feel like I've heard this claimed, too, but... where? I can't find it.

Neither can I, but IIRC Anna Salamon did an expected-utility calculation which came up with eight lives saved per dollar donated, no doubt impressively caveated and with error bars aplenty.

Comment author: lukeprog 19 January 2012 01:52:57AM 4 points [-]

I think you're talking about this video. Without watching it again, I can't remember if Anna says that SI donation could buy something like eight lives per dollar, or whether donation to x-risk reduction in general could buy something like eight lives per dollar.

Comment author: Eliezer_Yudkowsky 19 January 2012 02:45:33PM 2 points [-]

Find someplace I call myself a mathematical genius, anywhere.

(I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern. I don't know what this alarm feels like, so it's hard to guess what sets it off.)

Comment author: jeremysalwen 02 April 2012 03:20:52AM 6 points [-]

Here: http://lesswrong.com/lw/ua/the_level_above_mine/

I was going to go through quote by quote, but I realized I would be quoting the entire thing.

Basically:

A) You imply that you have enough brainpower to consider yourself to be approaching Jaynes's level (approaching alluded to in several instances).
B) You were surprised to discover you were not the smartest person Marcello knew (or if you consider surprised too strong a word, compare your reaction to that of the merely very smart people I know, who would certainly not respond with "Darn").
C) Upon hearing someone was smarter than you, the first thing you thought of was how to demonstrate that you were smarter than them.
D) You say that not being a genius like Jaynes and Conway is a "possibility" you must "confess" to.
E) You frame in equally probable terms the possibility that the only thing separating you from genius is that you didn't study quite enough math as a kid.

So basically, yes, you don't explicitly say "I am a mathematical genius", but you certainly position yourself as hanging out on the fringes of this "genius" concept. Maybe I'll say "Schrödinger's Genius".

Please ignore that this is my first post and it seems hostile. I am a moderate-time lurker and this is the first time that I felt I had relevant information that was not already mentioned.

Comment author: wedrifid 22 January 2012 06:10:19AM *  9 points [-]

I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.

My thinking when I read this post went something along these lines, but where you put "made up because" I put "actually consists of". That is, acting in a way that (the observer perceives) is beyond your station is a damn good first approximation of a practical definition of 'arrogance'. I would go as far as to say that if you weren't being arrogant you wouldn't be able to do your job. Please keep on being arrogant!

The above said, there are other behaviors that will provoke the label 'arrogant' which are not beneficial. For example:

  • Acting like one is too good to have to update based on what other people say. You've commented before that high status can make you stupid. Being arrogant - acting in an exaggerated high-status manner - certainly enhances this phenomenon. As far as high-status people go, you aren't too bad along the "too arrogant to be able to comprehend what other people say" axis, but "better than most high-status people" isn't the bar you are aiming for.
  • Acting oblivious to what people think of you isn't usually the optimal approach for people whose success (in, for example, saving the @#%ing world) depends on the perceptions of others (who give you the money).

When I saw Luke make this post I thought: ah, Luke is taking his new role seriously and actively demonstrating that he is committed to being open to feedback and to managing public perception. I expected both him and others from SingInst to actively resist the temptation to engage with the (requested!) criticism, so as to avoid looking defensive and undermining the whole point of what he was attempting.

What was your reasoning when you decided to make this reply? Did you think to yourself, "What's the existential-opportunity-maximising approach here? I know! I'm going to reply with aggressive defensiveness and cavalierly dismiss all those calling me arrogant as suffering from bias because they are unable to accept how awesome we are!"? Of course, what you say is essentially correct, yet saying it in this context strikes me as a tad naive. It's also (a behavior that will prompt people to think of you as) rather arrogant.

(As a tangent that I find at least mildly curious I've just gone and rather blatantly condescended to Eliezer Yudkowsky. Given that Eliezer is basically superior to me in every aspect (except, I've discovered, those abilities that are useful when doing Parkour) this is the very height of arrogance. But then in my case the very fate of the universe doesn't depend on what people think of me!)

Comment author: XiXiDu 19 January 2012 05:24:10PM 24 points [-]

I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.

Some quotes by you that might highlight why some people think you/SI are arrogant:

I tried - once - going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren't visibly incompetent, they had their various research interests and I'm sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?" (Competent Elites)

More:

I don't mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each other's partial successes and accumulate hacks as a community. (Above-Average AI Scientists)

Even more:

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified. (So You Want To Be A Seed AI Programmer)

And:

If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't. (Eliezer_Yudkowsky August 2010 03:57:30PM)

Comment author: [deleted] 20 January 2012 10:06:09PM 2 points [-]

(So You Want To Be A Seed AI Programmer)

I hadn't seen that before. Was it written before the sequences?

I ask because it all seemed trivial to my sequenced self and it seemed like it was not supposed to be trivial.

I must say that writing the sequences is starting to look like it was a very good idea.

Comment author: katydee 21 January 2012 04:32:53PM 1 point [-]

I believe so; I also believe that post is now considered obsolete.

Comment author: lukeprog 19 January 2012 08:19:50PM 4 points [-]

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

Comment author: Bugmaster 21 January 2012 01:53:54AM 11 points [-]

The first three statements can be boiled down to saying, "I, Eliezer, am much better at understanding and developing AI than the overwhelming majority of professional AI researchers".

Is that statement true, or false? Is Eliezer (or, if you prefer, the average SIAI member) better at AI than everyone else (plus or minus epsilon) who is working in the field of AI?

The prior probability for such a claim is quite low, especially since the field is quite large, and includes companies such as Google and IBM who have accomplished great things. In order to sway my belief in favor of Eliezer, I'll need to witness some great things that he has accomplished; and these great things should be significantly greater than those accomplished by the mainstream AI researchers. The same sentiment applies to SIAI as a whole.

Comment author: XiXiDu 20 January 2012 10:46:56AM *  31 points [-]

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

I am the wrong person to ask whether "a doctorate in AI would be negatively useful". I guess it is technically useful. And I am pretty sure that it is wrong to say that others are "not remotely close to the rationality standards of Less Wrong". That's of course the case for most humans, but I think that there are quite a few people out there who are at least at the same level. I further think that it is quite funny to criticize people on whose work your arguments for risks from AI depend.

But that's beside the point. Those statements are clearly false when it comes to public relations.

If you want to win in this world, as a human being, you either have to be smart enough to overpower everyone else, or you have to get involved in a fair amount of social engineering and signaling games and refine your public relations.

Are you able to solve friendly AI, without much more money and without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then you will at some point either need much more money or need to convince actual academics to work for you for free. And, most importantly, if you don't think that you will be the first to invent AGI, then you need to talk to a lot of academics, companies and probably politicians to convince them that there is a real risk and that they need to implement your friendly AI theorem.

It is of the utmost importance to have an academic degree and reputation if you want people to listen to you, because at some point it won't be enough to say, "I am a research fellow of the Singularity Institute who wrote a lot about rationality and cognitive biases, and you are not remotely close to our rationality standards." At the point that you utter the word "Singularity" you have already lost. The very name of your charity shows that you underestimate the importance of signaling.

Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks? And do you think you can talk to them as a "research fellow of the Singularity Institute"? If you are lucky, they might ask someone from their staff about you. And if you are really lucky, they will say that you are for the most part well-meaning and thoughtful individuals who never quite grew out of their science-fiction addiction as adolescents (I didn't write that line myself; it's from an email conversation with a top-notch person who didn't give me permission to publish it). In any case, you won't make them listen to you, let alone do what you want.

Compare the following:

Eliezer Yudkowsky, research fellow of the Singularity Institute.

Education: -

Professional Experience: -

Awards and Honors: A lot of karma on lesswrong and many people like his Harry Potter fanfiction.

vs.

Eliezer Yudkowsky, chief of research at the Institute for AI Ethics.

Education: He holds three degrees from the Massachusetts Institute of Technology: a Ph.D in mathematics, a BS in electrical engineering and computer science, and an MS in physics and computer science.

Professional Experience: He worked on various projects with renowned people making genuine insights. He is the author of numerous studies and papers.

Awards and Honors: He holds various awards and is listed in the Who's Who in computer science.

Who are people going to listen to? Well, okay... the first Eliezer might receive a lot of karma on lesswrong; the other doesn't have enough time for that.

Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing. Many won't even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism; others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.

Think about it. Imagine how easy it would have been for me to cause serious damage to SI and the idea of risks from AI by writing different kinds of emails.

Why does that rational wiki entry about lesswrong exist? You are just lucky that they are the only people who really care about lesswrong/SI. What do you think will happen if you continue to act like you do and real experts feel uncomfortable about your statements or even threatened? It just takes one top-notch person, who becomes seriously bothered, to damage your reputation permanently.

Comment author: Rain 21 January 2012 04:16:07AM 6 points [-]

I wish I could decompile my statements of "they need to do a much better job at marketing" into paragraphs like this. Thanks.

Comment author: wedrifid 21 January 2012 06:19:58AM 3 points [-]

I wish I could decompile my statements of "they need to do a much better job at marketing" into paragraphs like this.

Practice makes perfect!

Comment author: Viliam_Bur 20 January 2012 03:17:40PM *  6 points [-]

I mostly agree with the first 3/4 of your post. However...

Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing. Many won't even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism; others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.

You can't make everyone happy. Whatever policy a website has, some people will leave. I have run away from a few websites that have a "no censorship, except in extreme cases" policy, because the typical consequence of such a policy is some users attacking other users (weighing the attack carefully to avoid moderator action) and some users producing huge amounts of noise. And that just wastes my time.

People leaving LW should be considered on case-by-case basis. They are not all in the same category.

Why does that rational wiki entry about lesswrong exist?

To express opinions of rationalwiki authors about lesswrong, probably. And that opinion seems to be that "belief in many worlds + criticism of science = pseudoscience".

I agree with them that "nonstandard belief + criticism of science = high probability of pseudoscience". Except that: (1) among quantum physicists the belief in many worlds is not completely foreign; (2) the criticism of science seems rational to me, and to be fair, don't forget that scholarship is an officially recognized virtue at LW; (3) the criticism of naive Friendly AI approaches is correct, though I doubt the SI's ability to produce something better (so this part really may be crank), but the rest of LW again seems rational to me.

Now, how rational are the arguments on the talk page of rational wiki? See: "the [HP:MoR link] is to a bunch of crap", "he explicitly wrote [HP:MoR] as propaganda and LessWrong readers are pretty much expected to have read it", "The stuff about 'luminosity' and self-help is definitely highly questionable", "they casually throw physics and chemistry out the window and talk about nanobots as if they can exist", "I have seen lots of examples of 'smart' writing, but have yet to encounter one of 'intelligent' writing", "bunch of scholastic idiots who think they matter somehow", "Esoteric discussions that are hard to understand without knowing a lot about math, decision theory, and most of all the exalted sequences", "Poor writing (in terms of clarity)", "[the word 'emergence'] is treated as disallowed vocabulary", "I wonder how many oracular-looking posts by EY that have become commonplaces were reactions to an AI researcher that had annoyed him that day" etc. To be fair, there are also some positive voices, such as: "Say what you like about the esoteric AI stuff, but that man knows his shit when it comes to cognitive biases and thinking", "I believe we have a wiki here about people who pursue ideas past the point of actual wrongness".

Seems to me like someone has a hammer (a wiki for criticizing pseudoscience) and suddenly everything unusual becomes a nail.

You are just lucky that they are the only people who really care about lesswrong/SI.

Frankly, most people don't care about lesswrong or SI or rational wiki.

Comment author: FeepingCreature 20 January 2012 01:20:51PM 3 points [-]

Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing.

I hope you understand that this is not an argument against LW's policy in this matter.

Comment author: TrE 20 January 2012 01:48:48PM 2 points [-]

Related: http://www.overcomingbias.com/2012/01/dear-young-eccentric.html

Don't appear like a rebel; be a rebel. Don't signal rebel-ness; instead, be part of the system and infiltrate it with your ideas. If those ideas are decent, this has a good chance of working.

Comment author: wedrifid 20 January 2012 11:20:36AM 3 points [-]

Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face.

Counterprediction: The optimal degree of implementation of that policy for the purpose of PR maximisation is somewhat higher than it currently is.

You don't secure an ideal public image by being gentle.

Comment author: XiXiDu 20 January 2012 12:39:18PM *  5 points [-]

You don't secure an ideal public image by being gentle.

Don't start a war if you don't expect to be able to win it. It is much easier to damage a reputation than to build one, especially if you support a cause that can easily trigger the absurdity heuristic in third parties.

Being rude to people who don't get it will just cause them to reinforce their opinion and tell everyone that you are wrong instead. Which will work, because your arguments are complex and in support of something that sounds a lot like science fiction.

A better route is to just ignore them, if you are not willing to talk the matter over or to explain how exactly they are wrong. And if you consider both routes undesirable, then do it like FHI and don't host a public forum.

Comment author: wedrifid 20 January 2012 01:07:24PM 3 points [-]

Being rude to people

Being gratuitously rude to people isn't the point. 'Maintaining a garden' for the purpose of optimal PR involves far more targeted and ruthless intervention. "Weeds" (those who are likely to try to sabotage your reputation, otherwise interfere with your goals, or significantly provoke 'rudeness' from others) are removed early, before they have a chance to take root.

Comment author: paper-machine 20 January 2012 01:07:59PM 2 points [-]

I've had these thoughts for a while, but I undoubtedly would have done much worse in writing them down than you have. Well done.

Comment author: erratio 19 January 2012 09:48:12PM 6 points [-]

To repeat something I said in the other thread, truth values have nothing to do with tone. It's the same issue some people downthread have with Tim Ferriss - no one denies that he seems very effective, but he communicates in a way that gives many people an unpleasant vibe. Same goes if you communicate in a way that pattern-matches to 'arrogant'.

Comment author: lukeprog 19 January 2012 10:18:29PM 4 points [-]

Of course. That's why I said I can "smell the arrogance," and then went on to ask a different question about whether XiXiDu thought the claims were false.

Comment author: kbaxter 19 January 2012 10:57:06PM 14 points [-]

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

When I read that, I interpreted it to mean something like "Yes, he does come across as arrogant, but it's okay because everything he's saying is actually true." It didn't come across to me like a separate question - it read to me like a rhetorical question which was used to make a point. Maybe that's not how you intended it?

I think erratio is saying that it's important to communicate in a way that doesn't turn people off, regardless of whether what you're saying is true or not.

Comment author: jmmcd 19 January 2012 11:19:46PM 8 points [-]

But I don't get it. You asked for examples and XiXiDu gave some. You can judge whether they were good or bad examples of arrogance. Asking whether the examples qualify under another, different criterion seems a bit defensive.

Also, several of the examples were of the form "I was tempted to say X" or "I thought Y to myself", so where does truth or falsity come into it?

Comment author: lukeprog 20 January 2012 12:04:13AM 14 points [-]

Okay, let me try again...

XiXiDu, those are good examples of why people think SI is arrogant. Out of curiosity, do you think the statements you quote are actually false?

Comment author: GabrielDuquette 21 January 2012 04:41:45AM *  3 points [-]

I wonder how many of the people who think SI is arrogant and are bothered by it would consider themselves guessers and how many people who note how SI could be perceived as arrogant but aren't bothered by it would consider themselves askers.

Comment author: amcknight 19 January 2012 08:25:55PM *  2 points [-]

FWIW, I'm not sure why you added the 2nd quote, and the 3rd is out of context. Also, remember that we're talking about 700+ blog posts and other articles. Just be careful you're not cherry-picking.

Comment author: paper-machine 20 January 2012 01:01:06PM 13 points [-]

This isn't a useful counterargument when the subject at hand is public relations. Several organizations have been completely pwned by hostile parties cherry-picking quotes.

Comment author: [deleted] 20 January 2012 09:17:05PM 2 points [-]

The point was "you may be quote mining" which is a useful thing to tell a LWer, even if it doesn't mean a thing to "the masses".

Comment author: amcknight 20 January 2012 08:38:14PM 1 point [-]

Good point.

Comment author: Matt_Simpson 19 January 2012 05:44:18PM 10 points [-]

Interestingly, the first sentence of this comment set off my arrogance sensors (whether justified or not). I don't think it's the content of your statement, but rather the way you said it.

Comment author: Eliezer_Yudkowsky 19 January 2012 07:37:32PM *  9 points [-]

I believe that. My first-pass filter for theories of why some people think SIAI is "arrogant" is whether the theory also explains, in equal quantity, why those same people find Harry James Potter-Evans-Verres to be an unbearably snotty little kid or whatever. If the theory is specialized to SIAI and doesn't explain the large quantities of similar-sounding vitriol gotten by a character in a fanfiction in a widely different situation who happens to be written by the same author, then in all honesty I write it off pretty quickly. I wouldn't mind understanding this better, but I'm looking for the detailed mechanics of the instinctive sub-second ick reaction experienced by a certain fraction of the population, not the verbal reasons they reach for afterward when they have to come up with a serious-sounding justification. I don't believe it, frankly, any more than I believe that someone actually hates hates hates Methods because "Professor McGonagall is acting out of character".

Comment author: CharlesR 20 January 2012 06:34:11PM 7 points [-]

I once read a book on characterization. I forget the exact quote, but it went something like, "If you want to make your villain more believable, make him more intelligent."

I thought my brain had misfired. But apparently, for the average reader it works.

Comment author: thomblake 19 January 2012 08:50:45PM *  7 points [-]

I acquired my aversion to modesty before reading your stuff, and I seem to identify that "thing", whatever it is that you and Harry share, as "awesome" rather than "arrogant".

You're acting too big for your britches. You can't save the world; you're not Superman. Harry can't invent new spells; he's just a student. The proper response to that sort of criticism is to ignore it and (save the world / invent new spells) anyway. I don't think there really is a way to make it go away without actually diminishing your ability to do awesome stuff.

Comment author: Matt_Simpson 20 January 2012 01:49:52AM 1 point [-]

FWIW I don't ever recall having this reaction to Harry, though my memory is pretty bad and I think I'm easily manipulated by stories.

It may have something to do with being terse and blunt - this often makes the speaker seem as though they think they're "better" than their interlocutors. I had a Polish professor for one of my calculus classes in undergrad who, being a Pole speaking English, naturally sounded very blunt to our American ears. There were several students in that class who just thought he was an arrogant asshole who talked down to his students. I'm mostly speculating here, though.

Comment author: yew 19 January 2012 08:55:19PM *  2 points [-]

Self-reference, and any more than a moderate degree of certainty about anything that isn't considered normal by whoever happens to be listening, are both (at least in my experience) considered less than discreet.

Trying to demonstrate that one isn't arrogant probably qualifies as arrogance, too.

I don't know how useful this observation is, but I thought it was at least worth posting.

Comment author: sixes_and_sevens 19 January 2012 05:00:44PM 9 points [-]

"Here is a threat to the existence of humanity which you've likely never even considered. It's probably the most important issue our species has ever faced. We're still working on really defining the ins and outs of the problem, but we figure we're the best people to solve it, so give us some money."

Unless you're a fictional character portrayed by Will Smith, I don't think there's enough social status in the world to cover that.

Comment author: amcknight 19 January 2012 08:36:17PM 8 points [-]

This isn't fair. Use a real quote.

Comment author: sixes_and_sevens 19 January 2012 09:15:44PM 5 points [-]

Uh...no. It's in quotation marks because it's expressed as dialogue for stylistic purposes, not because I'm attributing it as a direct statement made by another person. That may make it a weaker statement than if I'd used a direct quote, but it doesn't make it invalid.

Comment author: amcknight 19 January 2012 09:55:33PM 4 points [-]

Arrogance is probably to be found in the way things are said rather than the content. By not using a real example, you've invented the tone of the argument.

Comment author: sixes_and_sevens 19 January 2012 10:38:35PM 4 points [-]

It's not supposed to be an example of arrogance, through tone or otherwise. It's a broad paraphrasing of the purpose and intent of SIAI to illustrate the scope, difficulty and nebulousness of same.

Comment author: Vaniver 19 January 2012 11:59:41PM 2 points [-]

Typically, when I paraphrase I use apostrophes rather than quotation marks to avoid that confusion. I don't know if that's standard practice or not.

Comment author: sixes_and_sevens 20 January 2012 12:39:22AM *  6 points [-]

It's my understanding there's no formal semantic distinction between single- or double-quotes as punctuation, and their usage is a typographic style choice. Your distinction does make sense in a couple of different ways, though. The one that immediately leaps to mind is the distinction between literal and interpreted strings in Perl, et al., though that's a bit of a niche association.

Also, single quotes are more commonly used for denoting dialogue, but that has more to do with historical practicalities of the publishing and printing industries than any kind of standard practice. The English language itself doesn't really seem to know what it's doing when it puts something in quotes, hence the dispute over whether trailing commas and full stops belong inside or outside quotations. One makes sense if you're marking up the text itself, while the other makes sense if you're marking up what the text is describing.

I think I may adopt this usage.

Comment author: NihilCredo 20 January 2012 09:40:54PM *  5 points [-]

LessWrong, at least, has a markup function that is specifically designed for the purpose of quoting.


Comment author: Vladimir_Nesov 19 January 2012 05:17:15PM *  11 points [-]

If trying to save the world requires having more social status than is humanly obtainable, then the world is lost, even if it was easy to save...

Comment author: sixes_and_sevens 19 January 2012 05:40:13PM 13 points [-]

The question is one of credibility rather than capability. In the private, public, academic and voluntary sectors, it's a fairly standard assumption that if you want people to give you resources, you have to do a little dance to earn them. Yes, it's wasteful and stupid and inefficient, but it's generally easier to do the little dance than to convince people that the little dance is a stupid system. They know that already.

It's not arrogant to say "my time is too precious to do a little dance", and it may even be true. The arrogance would be to expect people to give you those resources without the little dance. I doubt the folk at SIAI expect this to happen, but I do suspect they're probably quite tired of being asked to dance.

Comment author: NihilCredo 20 January 2012 09:39:48PM 4 points [-]

The little dance is not wasteful and stupid and inefficient. For each individual with the ability to provide resources (be they money, manpower, or exposure), there are a thousand projects that would love to be the beneficiaries of said resources. Challenging the applicants to produce some standardised signals of competence is a vastly more efficient approach than expecting the benefactors to be able to thoroughly analyse each and every applicant's esoteric efforts.

Comment author: sixes_and_sevens 21 January 2012 12:02:51AM 4 points [-]

I agree that methods of signalling competence are, in principle, a fine mechanism for allowing those with resources to responsibly distribute them between projects.

In practice, I've seen far too many tall, attractive, well-spoken men from affluent backgrounds go up to other tall, attractive, well-spoken men from affluent backgrounds and get them to allocate ridiculous quantities of money and man-hours to projects on the basis of presentations which may as well be written in crayon for all the salient information they contain.

The amount this happens varies from place to place, and in the areas where I see it most there does seem to be an improving trend of competence signalling actually correlating with whatever the party in question needs to be competent at. But there is still far too much scope for such signalling to be no more applicable to the work in question than actually getting up in front of potential benefactors and doing a little dance.

Comment author: beoShaffer 18 January 2012 11:26:18PM *  33 points [-]

in combination with his lack of technical publications

I think it would help for EY to submit more of his technical work for public judgment. Clear proof of technical skill in a related domain makes claims less likely to come off as arrogant. For that matter it also makes people more willing to accept actions that they do perceive as arrogant.

Comment author: shminux 18 January 2012 11:11:04PM *  29 points [-]

Having been through physics grad school (albeit not of Caltech caliber), I can confirm that a lack of (real or false) modesty is a major red flag, and a tell-tale sign of a crank. Hawking does not refer to black-hole radiation as Hawking radiation, and Feynman did not call his diagrams Feynman diagrams, at least not in public. A thorough literature review in the introduction section of any worthwhile paper is a must, unless you are Einstein, or can reference your previous relevant paper where you dealt with it.

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org (cs.DM or similar), properly referenced and formatted to conform with the prevailing standard (probably LaTeXed), and submitting them to conference proceedings and/or peer-reviewed journals. Anything less would be less than rational.

Comment author: XiXiDu 19 January 2012 10:08:26AM *  37 points [-]

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org...

Even Greg Egan managed to copublish papers on arxiv.org :-)

ETA

Here is what John Baez thinks about Greg Egan (science fiction author):

He's incredibly smart, and whenever I work with him I feel like I'm a slacker. We wrote a paper together on numerical simulations of quantum gravity along with my friend Dan Christensen, and not only did they do all the programming, Egan was the one who figured out a great approximation to a certain high-dimensional integral that was the key thing we were studying. He also more recently came up with some very nice observations on techniques for calculating square roots, in my post with Richard Elwes on a Babylonian approximation of sqrt(2). And so on!

That's actually what academics should be saying about Eliezer Yudkowsky, if the claims about him are true. How does an SF author manage to earn such a reputation instead?

Comment author: Pablo_Stafforini 21 March 2014 07:28:09PM 1 point [-]

What's the source for that quote? A quick Google search failed to yield any relevant results.

Comment author: XiXiDu 22 March 2014 09:41:15AM 1 point [-]

What's the source for that quote? A quick Google search failed to yield any relevant results.

Private conversation with John Baez (I asked him if I am allowed to quote him on it). You can ask him to verify it.

Comment author: gwern 25 January 2012 04:05:35AM 2 points [-]

That actually explains a lot for me - when I was reading The Clockwork Rocket, I kept thinking to myself, 'how the deuce could anyone without a physics degree follow the math/physics in this story?' Well, here's my answer - he's still up on his math, and now that I check, I see he has a BS in math too.

Comment author: arundelo 26 January 2012 04:23:46PM *  2 points [-]

I thought this comment by Egan said something interesting about his approach to fiction:

A few reviewers [of Incandescence] complained that they had trouble keeping straight the physical meanings of the Splinterites' [direction words]. This leaves me wondering if they've really never encountered a book before that benefits from being read with a pad of paper and a pen beside it, or whether they're just so hung up on the idea that only non-fiction should be accompanied by note-taking and diagram-scribbling that it never even occurred to them to do this. I realise that some people do much of their reading with one hand on a strap in a crowded bus or train carriage, but books simply don't come with a guarantee that they can be properly enjoyed under such conditions.

(I enjoyed Incandescence without taking notes. If, while I was reading it, I had been quizzed on the direction words, I would have done OK but not great.)

Edit: The other end of the above link contains spoilers for Incandescence. To understand the portion I quoted, it suffices to know that some characters in the story have their own set of six direction words (instead of "up", "down", "north", "south", "east", and "west").

Edit 2: I have a bit of trouble keeping track of characters in novels. When I read on my iPhone, I highlight characters' names as they're introduced, so I can easily refresh my memory when I forget who someone is.

Comment author: gwern 26 January 2012 04:45:52PM 1 point [-]

Yes, he's pretty unapologetic about his elitism - if you aren't already able to follow his concepts or willing to do the work so you can, you are not his audience and he doesn't care about you. Which isn't a problem with Incandescence, whose directions sound perfectly comprehensible, but is much more of an issue with TCR, which builds up an entire alternate physics.

Comment author: mwengler 19 January 2012 03:55:49PM 4 points [-]

To be fair, Eliezer gets good press from Professor Robin Hanson. This is one of the main bulwarks of my opinion of Eliezer and SIAI. (Other bulwarks include having had the distinct pleasure of meeting lukeprog at a few meetups and meeting Anna at the first meetup I ever attended. Whatever else is going on at SIAI, there is a significant amount of firepower in the rooms.)

Comment author: ScottMessick 21 January 2012 11:47:14PM 5 points [-]

Yes, and isn't it interesting to note that Robin Hanson sought his own higher degrees for the express purpose of giving his smart contrarian ideas (and way of thinking) more credibility?

Comment author: paper-machine 19 January 2012 01:15:49AM 2 points [-]

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org (cs.DM or similar), properly referenced and formatted to conform with the prevailing standard (probably LaTeXed), and submit them for conference proceedings and/or into peer-reviewed journals. Anything less would be less than rational.

I agree, wholeheartedly, of course -- except the last sentence. There's a not very good argument that the opportunity cost of EY learning LaTeX is greater than the opportunity cost of having others edit afterward. There's also a not very good argument that EY doesn't lose terribly much from his lack of academic signalling credentials. Together these combine to a weak argument that the current course is in line with what EY wants, or perhaps would want if he knew all the relevant details.

Comment author: Maelin 19 January 2012 01:30:18AM 27 points [-]

For someone who knows how to program, learning LaTeX to a perfectly serviceable level should take at most one day's worth of effort, and most of that would be spread diffusely throughout the process of actually using it, with maybe a couple of hours' dedicated introduction to begin with.

It is quite possible that, considering the effort required to find an editor and organise for that editor to edit an entire paper into LaTeX, compared with the effort required to write the paper in LaTeX in the first place, the additional effort cost of learning LaTeX may in fact pay for itself after less than one whole paper. It's very unlikely that it would take more than two.

Comment author: dbaupp 19 January 2012 02:12:35AM 5 points [-]

It is quite possible that, considering the effort required to find an editor and organise for that editor to edit an entire paper into LaTeX, compared with the effort required to write the paper in LaTeX in the first place, the additional effort cost of learning LaTeX may in fact pay for itself after less than one whole paper

And one gets all the benefits of a text document while writing it (grep-able, version control, etc.).

(It should be noted that writing LaTeX is much easier with a LaTeX-specific editor, or at least one with an advanced LaTeX mode.)
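To make the low barrier concrete, here is a minimal article skeleton; this is a sketch of my own, assuming the generic article class rather than any particular journal's template:

    \documentclass{article}   % generic class; journals supply their own
    \usepackage{amsmath}      % standard mathematics support

    \title{A Hypothetical Working Paper}
    \author{A. Researcher}

    \begin{document}
    \maketitle

    \section{Introduction}
    Prose goes here, as plain text that is easy to grep and to keep under version control.

    \section{A Result}
    Displayed equations take one environment:
    \begin{equation}
      U(a) = \sum_{o} P(o \mid a) \, u(o)
    \end{equation}

    \end{document}

Running pdflatex on that file yields a typeset PDF, while the .tex source stays an ordinary text file, which is exactly what makes the grep and version-control benefits above work.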

Comment author: lukeprog 19 January 2012 01:28:56AM 3 points [-]

I'm not at all confident that writing (or collaborating on) academic papers is the most x-risk-reducing way for Eliezer to spend his time.

Comment author: Bugmaster 21 January 2012 03:44:12AM 7 points [-]

Speaking of arrogance and communication skills: your comment sounds very similar to, "Since Eliezer is always right about everything, there's no need for him to waste time on seeking validation from the unwashed academic masses, who likely won't comprehend his profound ideas anyway". Yes, I am fully aware that this is not what you meant, but this is what it sounds like to me.

Comment author: lukeprog 21 January 2012 03:48:48AM 1 point [-]

Interesting. That is a long way from what I meant. I just meant that there are many, many ways to reduce x-risk, and it's not at all clear that writing papers is the optimal way to do so, and it's even less clear that having Eliezer write papers is so.

Comment author: Bugmaster 21 January 2012 03:58:25AM 4 points [-]

Yes, I understood what you meant; my comment was about style, not substance.

Most people (myself included, to some non-trivial degree) view publication in academic journals as a very strong test of one's ideas. Once you publish your paper (or so the belief goes), the best scholars in the field will do their best to pick it apart, looking for weaknesses that you might have missed. Until that happens, you can't really be sure whether your ideas are correct.

Thus, by saying "it would be a waste of Eliezer's time to publish papers", what you appear to be saying is, "we already know that Eliezer is right about everything". And by combining this statement with saying that Eliezer's time is very valuable because he's reducing x-risk, you appear to be saying that either the other academics don't care about x-risk (in which case they're clearly ignorant or stupid), or that they would be unable to recognize Eliezer's x-risk-reducing ideas as being correct. Hence, my comment above.

Again, I am merely commenting on the appearance of your post, as it could be perceived by someone with an "outside view". I realize that you did not mean to imply these things.

Comment author: wedrifid 21 January 2012 04:34:27AM *  2 points [-]

Thus, by saying "it would be a waste of Eliezer's time to publish papers", what you appear to be saying is, "we already know that Eliezer is right about everything".

That really isn't what Luke appears to be saying. It would be fairer to say "a particularly aggressive reader could twist this so that it means..."

It may sometimes be worth optimising speech so that it is hard to even willfully misinterpret what you say (or to interpret it based on an already high prior for "this statement will be arrogant"), but that is a different consideration from trying not to (unintentionally) appear arrogant to a neutral audience.

Comment author: JoshuaZ 27 January 2012 08:27:49PM 3 points [-]

That really isn't what Luke appears to be saying. It would be fairer to say "a particularly aggressive reader could twist this so that it means..."

For what it is worth, I had an almost identical reaction when reading the statement.

Comment author: mwengler 19 January 2012 04:11:45PM 7 points [-]

I think the evolution is towards a democratization of the academic process. One could say the cost of academia was so high in the Middle Ages that the smart move was filtering the heck out of participants, to at least have a chance of maximizing the utility of those scarce resources. Now those costs have been driven to nearly zero, and the largest remaining cost is the signal-to-noise problem: how does a smart person choose what to look at?

I think putting your signal into the places where the kind of person you would like to attract already gathers is the best bet. Web publication of papers is one. Scientific meetings are another. I don't think you can find an existing institution more chock-full of people you would like to be involved with than the math-science-engineering academic institutions. Market yourselves there.

If there is no one who can write an academic math paper and is interested enough in EY's work to translate it into something his peers would recognize as valuable, then the emperor is wearing no clothes.

As a Caltech applied-physics PhD who has worked with optical interferometers both in real life and in QM calculations (published in journals), I find EY's material on interferometers incomprehensible. I would venture to say "wrong", but I wouldn't go that far without discussing it in person with someone.

Robin Hanson's endorsement of EY is the best credential he has for me. I am a Caltech grad and I love Hanson's "freakonomics of the future" approach, but his success at being associated with great institutions is not a trivial factor in my thinking that I am right to respect him.

Get EY or lukeprog or Anna or someone else from SIAI on Russ Roberts' podcast. Robin has done it.

Overall, SIAI serves my purposes pretty well as is. But I tend to view SIAI as pushing a radical position about a particular existential risk and about AI, where the real value is probably not quite as radical as what they push. An example from history would be B.F. Skinner and behaviorism. No doubt behavioral concepts and findings have been very valuable, but Skinner's extreme "behaviorism is the only thing, there are no internal states" position is far less valuable than an eclectic theory that includes behaviorism as one piece.

This is a core dump, since you ask. I don't claim to be the best person to evaluate EY's interferometry claims, as my work was all single-photon (or at least linear) stuff and I have worked only a little with two-photon formalisms. And I am unsophisticated enough to think MWI doesn't pass the smell test, no matter how much Less Wrong I've read.

Comment author: Adele_L 03 June 2012 08:05:17AM 3 points [-]

Robin Hanson's endorsement of EY is the best credential he has for me.

Similarly, the fact that Scott Aaronson and John Baez seem to take him seriously is a significant credential for me.

Comment author: paper-machine 19 January 2012 01:30:45AM 6 points [-]

I thought we were talking about the view from outside the SIAI?

Comment author: lukeprog 19 January 2012 01:45:22AM 5 points [-]

Clearly, Eliezer publishing technical papers would improve SI's credibility. I'm just pointing out that this doesn't mean that publishing papers is the best use of Eliezer's time. I wasn't disagreeing with you; just making a different point.

Comment author: shminux 19 January 2012 02:55:40AM 12 points [-]

Publishing technical papers would be one of the better uses of his time; editing and formatting them probably is not. If you have no volunteers, you can easily find a starving grad student who would do it for peanuts.

Comment author: paper-machine 20 January 2012 03:50:40PM 2 points [-]

Well, they've got me for free.

Comment author: shminux 19 January 2012 02:51:18AM *  1 point [-]

I would see what the formatting standards are in the relevant journals and find a matching document class or a LyX template. Someone other than Eliezer can certainly do that.

Comment author: shminux 19 January 2012 12:06:49AM *  18 points [-]

What should SI do about this?

I think that separating instrumental rationality from the Singularity/FAI ideas will help. Hopefully this project is coming along nicely.

Comment author: lukeprog 19 January 2012 02:06:50AM 6 points [-]

Hopefully this project is coming along nicely.

Yes, we're full steam ahead on this one.

Comment author: Incorrect 19 January 2012 01:11:10AM 14 points [-]

Why don't SIAI researchers decide to definitively solve some difficult unsolved mathematics, programming, or engineering problem as proof of their abilities?

Yes, it would take time that could otherwise be spent on AI-related philosophy, but it would unambiguously demonstrate the competence of SIAI.

Comment author: WrongBot 19 January 2012 08:33:49AM 5 points [-]

You mean, like decision theory? Both Timeless Decision Theory (which Eliezer developed) and Updateless Decision Theory (developed mostly by folks who are now SI Research Associates) are groundbreaking work in the field, and both are currently being written up for publication, I believe.

Comment author: Raemon 18 January 2012 11:07:04PM 14 points [-]

I don't know how to address your particular signalling problem. But a question I need answered for myself: I wouldn't be able to tell the difference between the SIAI folks being "reasonably good at math and science" and "actually being really good - the kind of good they'd need to be for me to give them my money."

ARE there straightforward tests you could hypothetically take (or which some of you may have taken) which probably wouldn't actually satisfy academics, but which are perfectly reasonable benchmarks we should expect you to be able to complete to demonstrate your equivalent education?

Comment author: abramdemski 19 January 2012 12:57:11AM 1 point [-]

Why shouldn't the tests satisfy academics?

Why not use something like the GRE with subject tests, plus an IQ test and other relevant tests?

Comment author: Nick_Tarleton 19 January 2012 03:47:35PM *  10 points [-]

Crackpot Index:

10 points for pointing out that you have gone to school, as if this were evidence of sanity.

I'm not sure, but I think this is roughly how "look, I did great on the GRE!" would sound to someone already skeptical. It's the sort of accomplishment that sounds childish to point out outside of a very limited context.

Comment author: asr 19 January 2012 02:29:13AM *  5 points [-]

There are two big problems with standardized tests.

First, the standard tests are badly calibrated for measuring the high-performing tail of the distribution. Something like 6% of all GRE takers get a perfect score on the math portion. So GREs won't separate good from very good.

Second, aptitude for GRE-style or IQ-style math problems isn't known to be a close correlate of real ability. Universities are full of people with stellar test scores who never amount to anything. On the other hand, Richard Feynman, who was very smart and very hard-working, had a measured IQ of something like 125, which is not all that impressive as a test score.
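To put a rough number on that ceiling effect (my own back-of-the-envelope illustration, assuming scores in the test-taking population are approximately normal): if about 6% of takers max out the math section, a perfect score only tells you someone is above roughly the 94th percentile of GRE takers, about 1.5 standard deviations above that population's mean, and everyone beyond that point gets lumped together.

    from scipy.stats import norm

    # If ~6% of GRE takers max out the math section, the ceiling sits at
    # roughly the 94th percentile of the test-taking population.
    ceiling_fraction = 0.06
    ceiling_z = norm.ppf(1 - ceiling_fraction)   # ~1.55 standard deviations

    print(f"Ceiling is about {ceiling_z:.2f} SD above the GRE-taker mean")
    # Everyone from ~1.55 SD upward receives the same score, so the test
    # gives no information in exactly the range that matters here.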

Comment author: Dr_Manhattan 19 January 2012 11:57:51AM 1 point [-]

125???! Sh*t, I've got to start working harder. (source?)

Comment author: billswift 19 January 2012 02:47:29PM 1 point [-]

I don't know a source for the number, but in one of his popular books he mentioned that Mensa contacted him and he responded that his IQ wasn't high enough, which means it was less than 130.

Comment author: Dr_Manhattan 19 January 2012 03:52:10PM 2 points [-]

Knowing Feynman, this might well have been a joke at their expense.

Comment author: arundelo 19 January 2012 04:42:18PM 10 points [-]

According to Feynman, he tested at 125 when he was a schoolboy. (Search for "IQ" in the Gleick biography.)

Gwern says:

There are a couple reasons to not care about this factoid:

  • Feynman was younger than 15 when he took it [....]
  • [I]t was one of the 'ratio' based IQ tests - utterly outdated and incorrect by modern standards.
  • Finally, it's well known that IQ tests are very unreliable in childhood; kids can easily bounce around compared to their stable adult scores.

Steve Hsu says:

I suspect that this test emphasized verbal, as opposed to mathematical, ability. Feynman received the highest score in the country by a large margin on the notoriously difficult Putnam mathematics competition exam, although he joined the MIT team on short notice and did not prepare for the test. [...] It seems quite possible to me that Feynman's cognitive abilities might have been a bit lopsided -- his vocabulary and verbal ability were well above average, but perhaps not as great as his mathematical abilities. I recall looking at excerpts from a notebook Feynman kept as an undergraduate. While the notes covered very advanced topics -- including general relativity and the Dirac equation -- they also contained a number of misspellings and grammatical errors. I doubt Feynman cared very much about such things.

Comment author: wedrifid 19 January 2012 04:22:23PM 4 points [-]

Knowing Feynman, this might well have been a joke at their expense.

It is a joke at their expense. The question is whether he based it on a true premise.

Comment author: Raemon 19 January 2012 01:58:23AM 2 points [-]

Why shouldn't the tests satisfy academics?

Because people aren't rational and it's silly to pretend otherwise?

Comment author: moridinamael 19 January 2012 01:14:54AM 8 points [-]

There are two obvious options:

The first, boring option is to make fewer bold claims. I personally would not prefer that you take this tack; it would be akin to shooting yourselves in the foot. If all of your claims vis-a-vis saving the world are couched in extremely humble signaling packages, no one will ever want to give you any money.

The second, much better option is to start doing amazing, high-visibility things worthy of that arrogance. Muflax points out that you don't have a Tim Ferriss. Tim Ferriss is an interesting case specifically because he is a huge self-promoter whom people actually like, despite the fact that he makes his living largely by boasting entertainingly. The reason Tim Ferriss can do this is that he delivers. He has accomplished the things he makes claims about - or at least he convinces you that he is qualified to talk about them.

I really want a Rationality Tim Ferriss who I can use as a model for my own development. You could nominate yourself or Eliezer for this role, but if you did so, you would have to sell that role.

Comment author: lukeprog 19 January 2012 01:40:36AM 9 points [-]

I like the second option better, too.

I'm certainly going to try to be a Rationality Tim Ferriss, but I have a ways to go.

Eliezer is still hampered by the cognitive exhaustion problem that he described way back in 2000. He's tried dozens of things and still tries new diets, sleeping patterns, etc., but we haven't kicked it yet. That said, he's pretty damn productive each day before cognitive exhaustion sets in.

Comment author: jswan 21 January 2012 02:49:18AM *  5 points [-]

I'm certainly going to try to be a Rationality Tim Ferriss, but I have a ways to go.

Please no. Here's an example. When you say stuff like:

"As an autodidact who now consumes whole fields of knowledge in mere weeks, I've developed efficient habits that allow me to research topics quickly."

http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/

You sound like Tim Ferriss and you make me want to ignore you in the same way I ignore him. I don't want to do this because you seem like a good person with a genuine ability to help others. Don't lose that.

Comment author: wedrifid 21 January 2012 03:39:21AM 1 point [-]

When you say stuff like:

"As an autodidact who now consumes whole fields of knowledge in mere weeks, I've developed efficient habits that allow me to research topics quickly."

http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/

You sound like Tim Ferriss and you make me want to ignore you in the same way I ignore him.

It sounds like you place high importance on public image; in particular, on maintaining a public image that is self-effacing or humble. I wonder whether, overall, it is more effective for Luke to convey confidence and be up front about his achievements and capabilities, and so gain influence with a wide range of people, or whether it is better to optimize his image for the group of people who place high importance on humble decorum.

I don't want to do this because you seem like a good person with a genuine ability to help others. Don't lose that.

Tim Ferriss is a good person (as far as people go), and he has been able to positively influence far more people by mastering self-promotion than he ever would have if he had restrained himself. Is this about "being a good person and helping others", or about keeping your approval? The two seem to be conflated here.

Fortunately for you, when Luke says "try to be a Rationality Tim Ferriss" he does not mean anything at all along the lines of "talk like Tim Ferriss". He is talking about being as productive, efficient, and resourceful as Tim Ferriss. He's talking about Tim's strong capability for instrumental rationality, not his even stronger capability for self-promotion.

(Incidentally, I don't think Tim would make the kind of boast that Luke made there, simply because it is an awkward and poorly implemented boast. Tim boasts by giving a specific example of the awesome thing he has done rather than just making abstract assertions. At least give Tim the credit of knowing how to implement arrogance and boasting somewhat effectively!)

Comment author: WrongBot 19 January 2012 08:46:51AM 8 points [-]

For what it's worth, that sounds virtually identical to a problem psychologists have told me is ADHD. (I also had a catastrophic school attendance failure in seventh grade, funnily enough.) Adderall has unpleasant side-effects but actually allows me to sit down and work for eight or ten consecutive hours, whenever I want to. Not perfectly, but the effect is remarkable.

Comment author: CronoDAS 21 January 2012 12:33:52AM 1 point [-]

I think prescription antidepressants also tend to have a similar energy-boosting effect.

Comment author: thomblake 19 January 2012 08:37:32PM 1 point [-]

I've observed the same problem and solution as well.

Comment author: Caspian 19 January 2012 10:02:49AM 4 points [-]

I had the impression of Tim Ferriss as being no more trustworthy than anyone else who is trying to sell you something: I would expect him to exaggerate how easy something is, exaggerate how likely something is to help, and so on. Now, not having read his stuff, that impression is secondhand and not well informed, but you are asking about how you come across, so it's relevant. The doing-amazing-things part is great if you can manage it.

Comment author: NihilCredo 20 January 2012 09:46:18PM *  1 point [-]

I have read about half of his book and skimmed the rest, and I pretty much share that impression. To put it succinctly, that man works a 4-hour workweek only if you adopt a very restrictive definition of what counts as "work".

Comment author: Solvent 19 January 2012 11:49:26AM 2 points [-]

That was fascinating to read. Eliezer certainly has toned down the arrogance a bit recently.

I'm certainly going to try to be a Rationality Tim Ferriss, but I have a ways to go.

I look forward to watching this.

Comment author: Kaj_Sotala 19 January 2012 10:42:54AM 1 point [-]

Wow, that link is really interesting. Especially this bit:

I was, once again, pondering the question of why I didn't have any mental energy, and I tried thinking about the occasions when I did find mental energy. It occurred to me that when I started a new project, my energy level went up briefly before crashing. Maybe, I thought, energy was produced by new ideas. And that's when the light went on. "Maybe both the genius and the energy deficit were produced by overloading a single force, the force that resists thoughts moving repeatedly in the same channel." (24). And then I thought: "Maybe that's why my genius isn't an evolutionary advantage."

I don't know if that hypothesis is true, but if it is, I probably have a mild version of it. It would explain a lot about my akrasia issues.

Comment author: NancyLebovitz 19 January 2012 07:52:40AM 2 points [-]

There's the signalling problem that boasting creates in this culture, but should we also be taking a look at whether there are rational reasons for encouraging or dropping boasting as a custom?

Comment author: RobertLumley 18 January 2012 10:39:20PM *  9 points [-]

I unfortunately don't have much to offer that can actually be helpful. I (and I feel like this probably applies to many LWers) am not at all turned off by arrogance, and actually find it somewhat refreshing. But this reminds me of something that a friend of mine said after I got her to read HPMOR:

"after finishing chapter 5 of hpmor I have deduced that harry is a complete smarmy shit that I want to punch in the face. no kid is that disrespectful. also he reminds me of a young voldemort....please don't tell me he actually tries taking over the world/embezzling funds/whatever"

ETA: she goes on in another comment (on Facebook), after I told her to give it to chapter 10, like EY suggests: "yeah I'm at chapter 17 and still don't really like harry (he seems a bit too much of a projection of the author perhaps? or the fact that he siriusly thinks he's the greatest thing evarrr/is a timelord) but I'm still reading for some reason?"

Seems to be the same general sentiment to me. Not specifically about SI, of course, but tangentially related. For what it's worth, I disagree. Harry's awesome. ;-)

Comment author: linkhyrule5 15 July 2013 06:31:29AM 0 points [-]

Since it's been seven months, I'm curious - how much of this, if any, has been implemented? TDT has been published, but it doesn't get too many hits outside of LessWrong/MIRI, for example.