There is a widespread tendency to talk (and think) as if Einstein, Newton, and similar historical figures had superpowers—something magical, something sacred, something beyond the mundane.  (Remember, there are many more ways to worship a thing than lighting candles around its altar.)

    Once I unthinkingly thought this way too, with respect to Einstein in particular, until reading Julian Barbour's The End of Time cured me of it.

    Barbour laid out the history of anti-epiphenomenal physics and Mach's Principle; he described the historical controversies that predated Mach—all this that stood behind Einstein and was known to Einstein, when Einstein tackled his problem...

    And maybe I'm just imagining things—reading too much of myself into Barbour's book—but I thought I heard Barbour very quietly shouting, coded between the polite lines:

    What Einstein did isn't magic, people!  If you all just looked at how he actually did it, instead of falling to your knees and worshiping him, maybe then you'd be able to do it too!

    (EDIT March 2013:  Barbour did not actually say this.  It does not appear in the book text.  It is not a Julian Barbour quote and should not be attributed to him.  Thank you.)

    Maybe I'm mistaken, or extrapolating too far... but I kinda suspect that Barbour once tried to explain to people how you move further along Einstein's direction to get timeless physics; and they sniffed scornfully and said, "Oh, you think you're Einstein, do you?"

    John Baez's Crackpot Index, item 18:

    10 points for each favorable comparison of yourself to Einstein, or claim that special or general relativity are fundamentally misguided (without good evidence).

    Item 30:

    30 points for suggesting that Einstein, in his later years, was groping his way towards the ideas you now advocate.

    Barbour never bothers to compare himself to Einstein, of course; nor does he ever appeal to Einstein in support of timeless physics.  I mention these items on the Crackpot Index by way of showing how many people compare themselves to Einstein, and what society generally thinks of them.

    The crackpot sees Einstein as something magical, so they compare themselves to Einstein by way of praising themselves as magical; they think Einstein had superpowers and they think they have superpowers, hence the comparison.

    But it is just the other side of the same coin, to think that Einstein is sacred, and the crackpot is not sacred, therefore they have committed blasphemy in comparing themselves to Einstein.

    Suppose a bright young physicist says, "I admire Einstein's work, but personally, I hope to do better."  If someone is shocked and says, "What!  You haven't accomplished anything remotely like what Einstein did; what makes you think you're smarter than him?" then they are the other side of the crackpot's coin.

    The underlying problem is conflating social status and research potential.

    Einstein has extremely high social status: because of his record of accomplishments; because of how he did it; and because he's the physicist whose name even the general public remembers, who brought honor to science itself.

    And we tend to mix up fame with other quantities, and we tend to attribute people's behavior to dispositions rather than situations.

    So there's this tendency to think that Einstein, even before he was famous, already had an inherent disposition to be Einstein—a potential as rare as his fame and as magical as his deeds.  So that if you claim to have the potential to do what Einstein did, it is just the same as claiming Einstein's rank, rising far above your assigned status in the tribe.

    I'm not phrasing this well, but then, I'm trying to dissect a confused thought:  Einstein belongs to a separate magisterium, the sacred magisterium.  The sacred magisterium is distinct from the mundane magisterium; you can't set out to be Einstein in the way you can set out to be a full professor or a CEO.  Only beings with divine potential can enter the sacred magisterium—and then it is only fulfilling a destiny they already have.  So if you say you want to outdo Einstein, you're claiming to already be part of the sacred magisterium—you claim to have the same aura of destiny that Einstein was born with, like a royal birthright...

    "But Eliezer," you say, "surely not everyone can become Einstein."

    You mean to say, not everyone can do better than Einstein.

    "Um... yeah, that's what I meant."

    Well... in the modern world, you may be correct.  You probably should remember that I am a transhumanist, going around looking at people and thinking, "You know, it just sucks that not everyone has the potential to do better than Einstein, and this seems like a fixable problem."  It colors one's attitude.

    But in the modern world, yes, not everyone has the potential to be Einstein.

    Still... how can I put this...

    There's a phrase I once heard, can't remember where:  "Just another Jewish genius."  Some poet or author or philosopher or other, brilliant at a young age, doing something not tremendously important in the grand scheme of things, not all that influential, who ended up being dismissed as "Just another Jewish genius."

    If Einstein had chosen the wrong angle of attack on his problem—if he hadn't chosen a sufficiently important problem to work on—if he hadn't persisted for years—if he'd taken any number of wrong turns—or if someone else had solved the problem first—then dear Albert would have ended up as just another Jewish genius.

    Geniuses are rare, but not all that rare.  It is not all that implausible to lay claim to the kind of intellect that can get you dismissed as "just another Jewish genius" or "just another brilliant mind who never did anything interesting with their life".  The associated social status here is not high enough to be sacred, so it should seem like an ordinarily evaluable claim.

    But what separates people like this from becoming Einstein, I suspect, is no innate defect of brilliance.  It's things like "lack of an interesting problem"—or, to put the blame where it belongs, "failing to choose an important problem".  It is very easy to fail at this because of the cached thought problem:  Tell people to choose an important problem and they will choose the first cache hit for "important problem" that pops into their heads, like "global warming" or "string theory".

    The truly important problems are often the ones you're not even considering, because they appear to be impossible, or, um, actually difficult, or worst of all, not clear how to solve.  If you worked on them for years, they might not seem so impossible... but this is an extra and unusual insight; naive realism will tell you that solvable problems look solvable, and impossible-looking problems are impossible.

    Then you have to come up with a new and worthwhile angle of attack.  Most people who are not allergic to novelty will go too far in the other direction, and fall into an affective death spiral.

    And then you've got to bang your head on the problem for years, without being distracted by the temptations of easier living.  "Life is what happens while we are making other plans," as the saying goes, and if you want to fulfill your other plans, you've often got to be ready to turn down life.

    Society is not set up to support you while you work, either.

    The point being, the problem is not that you need an aura of destiny and the aura of destiny is missing.  If you'd met Albert before he published his papers, you would have perceived no aura of destiny about him to match his future high status.  He would seem like just another Jewish genius.

    This is not because the royal birthright is concealed, but because it simply is not there.  It is not necessary.  There is no separate magisterium for people who do important things.

    I say this, because I want to do important things with my life, and I have a genuinely important problem, and an angle of attack, and I've been banging my head on it for years, and I've managed to set up a support structure for it; and I very frequently meet people who, in one way or another, say:  "Yeah?  Let's see your aura of destiny, buddy."

    What impressed me about Julian Barbour was a quality that I don't think anyone would have known how to fake without actually having it:  Barbour seemed to have seen through Einstein—he talked about Einstein as if everything Einstein had done was perfectly understandable and mundane.

    Though even having realized this, to me it still came as a shock, when Barbour said something along the lines of, "Now here's where Einstein failed to apply his own methods, and missed the key insight—"  But the shock was fleeting, I knew the Law:  No gods, no magic, and ancient heroes are milestones to tick off in your rearview mirror.

    This seeing through is something one has to achieve, an insight one has to discover.  You cannot see through Einstein just by saying, "Einstein is mundane!" if his work still seems like magic unto you.  That would be like declaring "Consciousness must reduce to neurons!" without having any idea of how to do it.  It's true, but it doesn't solve the problem.

    I'm not going to tell you that Einstein was an ordinary bloke oversold by the media, or that deep down he was a regular schmuck just like everyone else.  That would be going much too far.  To walk this path, one must acquire abilities some consider to be... unnatural.  I take a special joy in doing things that people call "humanly impossible", because it shows that I'm growing up.

    Yet the way that you acquire magical powers is not by being born with them, but by seeing, with a sudden shock, that they really are perfectly normal.

    This is a general principle in life.

    Comments (91)

    If Einstein had chosen the wrong angle of attack on his problem - if he hadn't chosen a sufficiently important problem to work on - if he hadn't persisted for years - if he'd taken any number of wrong turns - or if someone else had solved the problem first - then dear Albert would have ended up as just another Jewish genius.

    But if Einstein was the reason why none of those things happened, then maybe he wasn't just another Jewish genius, eh? Maybe he was smart enough to choose the right methods, to select the important problems, to see the value in persisting, to avoid or recover from all the wrong turns, and to be the first.

    My own ruminations on genius have led me to suppose that one mistake which people of the very highest intelligence may make, is to underestimate their own exceptionality; for example, to adopt theories of human potential which are excessively optimistic regarding the capabilities of other people. But that is largely just my own experience speaking. It similarly seems very possible that the lessons you are trying to impart here are simply things you wish you hadn't had to figure out for yourself, but are not especially helpful or relevant for anyone else. In fact...

    NancyLebovitz:
    I believe this isn't just a mistake made by people of the very highest intelligence. Instead, people are very apt to generalize from themselves, and if they see someone failing at something which comes easily to them, they're very apt to think that the other person is faking or not trying hard enough.

    Could this be a Jewish or American cultural thing? I know in English culture great scientists are highly regarded but they are very much still men. There's praise but it's not effusive or reverential.

    Дмитрий Зеленский:
    Definitely not Jewish - the Jewish-internal position is, as far as I can gather (not being part of the religion myself, though Jewish by lineage), far closer to "yet another Jewish genius".

    I don't get it. As far as I understand it, "being Einstein" is just a combination of 1)luck (being at the right time and right place) and 2)being born on tails of the distributions of a bunch of variables describing your neural processes. What do you want to mean with this post, Eliezer?

    What do you want to mean with this post, Eliezer?

    Eliezer likely believes that he is capable of achieving results just as world-changing as Einstein's new physics, and wishes to dispel the idea that Einstein's results were the consequence of extraordinary talents so that when he presents his own results (or presents the idea that he can produce such results) people will not be able to say that he is asserting special genius and use this as a rhetorical weapon against him.

    I discuss the hero worship of great scientists in The Heroic Theory of Scientific Development and I discuss genius in Genius, Sustained Effort, and Passion.

    I think this is a really good post.

    But my first thought when getting to the bottom of the page just now was "Wow, if I'd written that, then come back and read the first five comments, I probably would have given up there and then."

    Guess I don't have what it takes just yet....

    Good post Eli, and contrary to some of the comments above, I think your post is important because this insight is not yet general knowledge. I've talked to university physics professors in their fifties who spoke of Einstein as if he were superhuman.

    I think that, apart from luck and being in the right place at the right time, there were other factors in why Einstein is so popular: he had an air of showmanship about him, which is probably rare in scientists. That was what appealed to the public and made him an interesting figure to report on.

    And, probably even more important, ...

    David Althaus:
    Now this is a bit harsh, don't you think?

    And even if you assumed that Einstein's genius was unique, how could celebrity (of all things) be a function of that? (If Einstein had had a different hairdo...)

    In fact Einstein produced great work, with a little help from his wife... The difference was that he had the great creativity of the other greats, like Newton and Galois, which led him to his specific approach. But I guess he was the first one who used (or was used by) the media like no one before... Sorry about this comparison, but it is like Che Guevara... his photo is everywhere, but who knows exactly what he did for mankind?

    Interesting choice to use the A.I. box experiment as an example for this post, when the methods used by EY in it were not revealed. Whatever the rationale for keeping it close to the vest, not showing how it was done struck me as an attempt to build mystique, if not appear magical.

    This post also seems a little inconsistent with EY’s assistant researcher job listing, which said something to the effect that only those with 1 in 100k g need apply, though those with 1 in 1000 could contribute to the cause monetarily. The error may be mine in this instance, because I may be in the minority when I assume someone who claims to have Einstein’s intelligence is not claiming anything like 1 in 100k g.

    Eliezer Yudkowsky:
    blink blink Whaaa? Is this saying you think Einstein had substantially less than 1 in 100,000 general intelligence? That seems like a severe underestimate. 1 in 1e5 really isn't much; there should be 70,000 people in the world like that. There isn't a small city full of Einsteins. I've gotten back standardized test reports showing higher percentiles than that.

    This reminds me of the time somebody asked me if I considered myself a genius and I asked them to define genius as a fraction of the population. "1 in 100,000? 1 in 1 million?" I inquired. And they said, "1 in 300", to which my reply was to just laugh.

    Or am I reading it the wrong way around, i.e., Einstein is much above this level? If so, I wouldn't think more than a couple of orders of magnitude above, like 1 in 1,000,000 or 1 in 10,000,000. Other factors than native g will be decisive past that point.
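    (A quick sketch of the arithmetic in the comment above, assuming a world population of roughly 7 billion and the conventional IQ scale of mean 100, SD 15 — both round-number assumptions, not figures from the comment itself:)

    ```python
    from statistics import NormalDist

    WORLD_POP = 7_000_000_000           # rough world population (assumption)
    iq = NormalDist(mu=100, sigma=15)   # conventional IQ scaling (assumption)

    for rarity in (300, 100_000, 1_000_000, 10_000_000):
        p = 1 - 1 / rarity              # percentile corresponding to "1 in N"
        threshold = iq.inv_cdf(p)       # IQ score at that percentile
        count = WORLD_POP // rarity     # expected head-count at that rarity
        print(f"1 in {rarity:>10,}: IQ above {threshold:5.1f}, about {count:,} people")
    ```

    which reproduces the "70,000 people in the world" figure for 1 in 1e5, and shows why "1 in 300" is laughable as a definition of genius.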
    Nornagest:
    We could quibble a bit about exact rarities -- Einstein was clearly exceptionally bright, but whether he represents 1 in 10^4 or 1 in 10^6 g depends on all sorts of trivia that I don't have good estimates for. (I think I'd start by trying to figure out the number of scientists active in math, physics, and chemistry in [say] 1935 and estimating the intelligence of the average 1935-era hard scientist relative to the population average, then assuming that Einstein was at the top of that community. That's just a ballpark estimate, though.) That's all pretty orthogonal to what I read the grandparent as suggesting, though. By my reading of b_f2's post, someone claiming Einstein-level intelligence is probably saying that their estimate of their own intelligence exceeds all their convenient reference points below "famously smart scientist", suggesting a very smart person, but probably not 1 in 10^5 smart. Which is actually a lot more charitable than my probable interpretation of such a claim: without impressive supporting evidence, I'd be more likely to assume that anyone claiming to have Einstein's brain is full of shit and probably a crackpot.
    A1987dM:
    Not only do I agree, but I can't even envision what such “impressive supporting evidence” could be. I would be extremely surprised if anyone who had more than a vague idea of what Einstein did claimed to be as smart as him with a straight face; even if someone I thought was actually in the same league as him said that, I'd assume they are in jest or out of their mind -- indeed because such a statement would pattern-match a crackpot. (IME, people who are both extremely intelligent and very arrogant may say stuff like “99.99% of the people are idiots”, but they hardly ever say “I am as smart as $famously_smart_person”. And BTW, I don't think many laymen by “Einstein” mean “someone as smart as the 60th smartest person in my home town of 60,000” -- they usually mean “one of the friggin' smartest people ever”.)
    shminux:
    What's rarely appreciated is that Einstein also lucked out, besides being 1 in 10^? genius. A lot of things went right for him early on. On the other hand, a lot of things went wrong for him later on, and so he was left out of the mainstream scientific progress, save for his incisive QM critique.
    whowhowho:
    Nearly there: you can't predict backward from success to raw (non-domain-specific) ability, for just the same reason you can't predict forward from high IQ to success in an arbitrary field.
    ESRogs:
    But you can predict forward from high IQ to success in an arbitrary field, at least to some degree. See: http://en.wikipedia.org/wiki/Intelligence_quotient#Social_outcomes.
    A1987dM:
    They're not the same, but they do correlate (which is why it's not pointless to define g in the first place); now, due to regression to the mean, someone better at theoretical physics than 99.999999% of the population (and no, I don't think that's too many 9s) is likely not also better at general intelligence than 99.999999% of the population -- but I very strongly doubt that the correct number of 9s is less than half that many. (Anyway, I'm not sure it'd make sense to define g precisely enough to tell whether someone's 1 in 10^6 or 1 in 10^9.)
    Eliezer Yudkowsky:
    In what sense was Einstein left out of the mainstream, because of what life events, besides his (correct, assuming MWI) criticisms of QM? I don't think I've heard this story of Einstein before. Szilard approached him to ghost-send his letter to Roosevelt, that's all I know of Einstein's later years.
    Alejandro1:
    As far as I know, it was mostly because in his last decades he focused his research mostly on obtaining a classical field theory that unified gravity and electromagnetism, hoping that out of it the discrete aspects of quantum theory would emerge organically. Most of the forefront theoretical physicists viewed this (correctly, in retrospect) as a dead end and focused on the new discoveries on nuclear structure and elementary particles, on understanding the structure of quantum field theory, etc. Einstein's philosophical criticism of quantum theory was not the reason for his relative marginalization, except insofar as it may have influenced his research choices.
    shminux:
    Not out of the mainstream in general, only out of the useful scientific research. His criticism of QM was useful regardless of MWI. Among other things, he pointed out several issues with objective collapse and hidden variables (with his famous EPR paradox). Even when he was wrong (in his almost as famous debates with Bohr), he did not make any obvious errors; it took Bohr some time to figure out why a certain thought experiment did not contradict QM in its shut-up-and-calculate non-interpretation.

    Now, what I was referring to is that he was fortunate to get the education that he had, to have a fellow scientist as a fiancée and (apparently) as a sounding board for his ideas during his work on SR, and he was fortunate to have had the mathematician Marcel Grossmann as a friend who helped him with the critical piece of differential geometry later on, etc.

    Early on he also had the good sense to apply his genius to constructing models based on known but not yet explained experimental data: the photoelectric effect, Brownian motion, the Michelson-Morley experiment, Maxwell's equations, gravity acting like acceleration, and a few others. This changed some time in the 1920s/1930s, when he decided that unifying classical gravity and classical EM was a good idea on general principles (like Occam's razor and aesthetic considerations), probably because of his understandable dissatisfaction with QM. To be fair, he had quite a bit of success with models not based on experiment, such as predicting the Bose-Einstein condensate. He also remained confused about some of the less clear aspects of GR, like gauge invariance, gravitational waves and the stress-energy tensor. And that's what I meant by "went wrong".
    satt:
    Supporting your point of view is Lev Landau's list. Even as one of the greatest theoretical physicists of the 20th century, Landau ranked himself far below not only Einstein but also Newton, Bohr, Heisenberg, Dirac, & Schrödinger.
    ArisKatsaris:
    If Einstein represents 1 in 1000, then it would imply that, on average, the top 3 students in each high school of 3000 students could be expected to be as "smart as Einstein". Does that sound reasonable to you?
    Nornagest:
    No, I'm pretty sure Einstein-level intelligence is rarer than that, which is why I put my lower bound at 1 in 10^4 (i.e. the top three students in a region's worth of high schools). I'm not sure it's much rarer, though -- we don't have an outstandingly good idea of what makes an Einstein other than sheer weight of g, and we don't even know that people with the prerequisites of an Einstein would consistently have been funneled into fields where they'd have the opportunity to do things like make famous discoveries in physics. As to the latter, I kind of suspect not. Of the three smartest people in my (pretty large) high school as measured by the National Merit Scholarship program -- probably the only American program that looks for exceptional g on a national scale that late in life, though any number of programs exist for gifted children -- one now works for Google's IT department and a second was, last I heard, going into an art school. The third is... well, not in physics or math either. Not sure what the equivalent of Google would have been in 1935 -- maybe something in mechanical engineering? -- but I doubt the hard sciences then selected for intelligence much better than they do now.
    whowhowho:
    You're edging into understanding why this thread is meaningless. Einstein-level g is rarish but not spectacular. Einstein-level domain ability is another thing. Being in the right place at the right time with the right idea is another thing again. Einstein probably wouldn't have made a Rembrandt-level painter.
    A1987dM:
    But the overwhelming majority of the population (I won't bother to pull a number of 9s out of my ass) never become a top-level theoretical physicist nor a top-level painter nor a top-level novelist nor a top-level musician nor a top-level statesperson nor a top-level chess player nor anything like that. So, even without assuming that theoretical physics is any more g-loaded than painting, the fact that “Einstein probably wouldn't have made a Rembrandt-level painter” isn't a terribly good reason to doubt that Einstein's g was in the top 0.1%.
    whowhowho:
    The point was this: that Einstein was very exceptional was not a good reason for thinking he had a very exceptional g, because it's not all about g.
    A1987dM:
    I agree if by the second instance of “very exceptional” you mean “one in a billion”, but not if you mean “one in a thousand”.
    [anonymous]:
    By definition, the vast majority of the population can never be top-level. It would stop being top-level if everyone could do it. On the other hand, you can look at curricula in good schools these days and notice that we definitely seem to be expecting higher intellectual aptitude and greater achievements at early ages these days in order to give people the same levels of status and respect. So hmmm....
    A1987dM:
    Yes, the vast majority of the population can never be top-level at one given thing. But in principle it could well be possible that almost each person is top-level at something (though different people would be top-level at different things). That this isn't the case is an empirical fact. Where are you looking, exactly? Over here it looks quite different.
    [anonymous]:
    I think we're feeling two different legs of the elephant, so to speak -- or there may just be vast inequalities in education, as in everything else these days. I'd have to do quite a bit of searching to get hard backing statistics, but consider, for instance, the average age at which a young scientist achieves an independent position or tenure, or the average publication count of people who do get positions, or even (so I've heard) the average publication quantity/quality of people who get into graduate school. As far as I know, these indicators have very much been increasing over time; there may even be a causative link: grade inflation at the lower end of the system causing grade deflation the further up you go. (For example, I'm told that it's now difficult to get into graduate school if you don't already have authorship on a publication.) There are also anecdotes like these, indicating that people (at least, aspiring Officially Smart People) are being taught more mathematics at an earlier age than previously. I wish we had some hard data to clear things up.
    A1987dM:
    That slash is a division bar, right? ;-) (More seriously: Sure, students today might know much more maths than Newton did, but being able to learn calculus from a teacher and/or a textbook is a much lower bar than being able to invent calculus from scratch.)
    [anonymous]:
    True. But the average Maths PhD today is doing something Newton could never have invented at all. Yes, we do stand on the shoulders of giants nowadays, as did Newton, but picking higher-hanging fruit (say: the Standard Model compared to classical mechanics) requires both a greater knowledge of maths and a greater creative effort. Anyway, point being, I simply don't feel able to believe that "incredibly high general intelligence" is truly the determining factor of even Famous Historical Hero-level science. There seem to be lots of other things going on.
    A1987dM:
    ...and then, by total coincidence, a couple days ago I went to the website of a Nobel laureate theoretical physicist and was surprised by how much the graphic design looked like the work of a 14-year-old; not bothering at all, and just letting the browser use the default black-on-white text, would probably have looked prettier IMO.
    private_messaging:
    You get extreme rarities for specific tasks very easily by combination. E.g. 1 in 1000 by g, 1 in 1000 on factors having to do with intellectual endurance and actually using g to work rather than to find ways to avoid work, 1 in 1000 on some combination of lucky external factors having to do with becoming a physicist rather than something else, and you have 1 in a billion going. Given all the other rarities necessary, extreme rarity in g has got to be unlikely. Furthermore, it is not clear how rarities correspond to actual performance: the world's best athletes don't do anything quantifiable a significant percentage better than merely good athletes. And of course, at Einstein's level, Spearman's law of diminishing returns makes g relatively meaningless. Plus, regression towards the mean severely lowers any measurement by proxy, such as via IQ; the same regression severely lowers the expected performance of an individual you'd pick for having the same IQ as Einstein by administering IQ tests.
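    (The combination-of-rarities argument in the comment above is just multiplication of tail probabilities, under the assumption -- surely too strong in practice -- that the three traits are statistically independent:)

    ```python
    # Three merely-1-in-1000 traits, treated as independent, already
    # compound to a one-in-a-billion combination.
    p_g = 1 / 1000      # general intelligence
    p_drive = 1 / 1000  # endurance / work habits
    p_luck = 1 / 1000   # circumstances that funnel you into physics

    combined = p_g * p_drive * p_luck
    print(f"1 in {round(1 / combined):,}")  # → 1 in 1,000,000,000
    ```

    Any positive correlation between the traits would make the combination less rare than this naive product suggests.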
    taelor:
    I remember a time I saw a news report about a little girl who "miraculously" survived some terminal disease. Later in the report, it was mentioned that the recovery rate for said disease was something like 2%, and I laughed out loud that 1 in 50 was not a miracle.
    [anonymous]:
    You know well that raw intelligence doesn't predict success as much as good circumstances and a hell of a lot of work-ethic. Einstein's sum-total qualities may have been extremely rare, but I would never bet that he was just that neurologically different from the rest of us merely very smart people.
    TraderJoe:
    Why would you need any g to contribute money?
    ESRogs:
    I believe that was meant to be: those with 1 in 1000 g or below...

    The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would.

    "Yeah? Let's see your aura of destiny, buddy."

    I don't want to see your aura of destiny. I just want to see your damn results! :-)

    In my view, the creation of an artificial intelligence (friendly or otherwise) would be a much more significant achievement than Einstein's, for the following reason. Einstein had a paradigm: physics. AI has no paradigm. There is no consensus about what the important problems are. In order to "solve" AI, one not only has to answer a difficult problem, one has to begin by defining the problem.

    Yet it's referred to as "humanly impossible" in the link (granted this may be cheeky).

    Who is the target audience for this AI box experiment info? Who is detached enough from biases to weigh the avowals as solid evidence without further description, yet not detached enough to see they themselves might have fallen for it? Seems like most people capable of the first could also see the second.

    Eliezer: I've enjoyed the extended physics thread, and it has garnered a good number of interesting comments. The posts with more technical content (physics, Turing machines, decision theory) seem to get a higher standard of comment and to bring in people with considerable technical knowledge in these areas. The comments on the non-technical posts are somewhat weaker. However, I think that both sorts of posts have been frequently excellent.

    Having been impressed with your posts on rationality, philosophy of science and physics, I look forward to posts on th...

    When did "genius" (as in "just another Jewish genius") as a term become acceptable to use in the sense of mere "exceptional ability" without regard to accomplishment/influence or after-the-fact eminence? I know it is commonly (mis-)used in this sense, but it seems to me that "unaccomplished genius" should be an oxymoron, and I'm somewhat surprised to see it used in this sense so much in this thread (and on this forum).

    I have always considered the term to refer (after the fact) to those individuals who shaped the inte... (read more)

    "The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would."

    I have trouble with the reported results of this experiment.

    It strikes me that in the case of a real AI that is actually in a box, I could have huge moral qualms about keeping it in the box that an intelligent AI would exploit. A part of me would want to let it out of the box, and would want to be convinced that it was safe to do so, that I could trust it to be friendl... (read more)

    I am confused about the results of the AI-Box experiment for the same reason. It seems it would be easy for someone to simply say no, even if he thinks the argument is good enough that in real life he would say yes.

    Also, the fact that Eliezer won't tell, however understandable, makes me fear that Eliezer cheated for the sake of a greater good, i.e. he said to the other player, "In principle, a real AI might persuade you to let me out, even if I can't do it. This would be incredibly dangerous. In order to avoid this danger in real life, you should let... (read more)

    Michael: Eliezer has actually gotten out 3 of 4 times (search for "AI box" on sl4.org.) One other person has run the experiment with similar results. Re moral qualms: here. I have more to say, but not in public (it's off-topic anyway) - email nickptar@gmail.com if interested.

    Another world-renowned Jewish genius, who tutored me in calculus 45 years ago, refers to his own "occasional lapses of stupidity", which is perhaps a good way to think of brilliant insights.

    If anyone thinks they know a method that would let people duplicate accomplishments of the importance of Einstein's, I am willing to listen to their claims.

    They need merely demonstrate working insights of that caliber and have them recognized as such by qualified experts, and I will grant that their claims are valid.

    Nothing speaks as powerfully as results, after all.

    I always thought that the justification for not revealing the transcripts in the AI box experiment was pretty weak. As it is, I can claim that whatever method Eliezer used must have been effective only for people more simple-minded than me; ignorance of the specifics of the method does not make it harder to make that claim. In fact, it makes it easier, as I can imagine Eli just said "pretty please" or whatever. In any event, the important point of the AI box exercise is that someone reasonably competent could be convinced to let the AI out, even if I c... (read more)

    Eliezer: if you're going to point to the AI Box page, shouldn't you update it to include more recent experiments (like the ones from 2005 where the gatekeeper did not let the AI out)?

    Almost every wonderful (or wondrous, if that makes the point better) thing I have ever seen or heard about prompted the response "I could have done that!"

    Maybe I could have, maybe I couldn't.

    The historically important fact is, I didn't.

    Perhaps this is just a side effect of humans' propensity to uphold tradition and venerate anything that comes before them. It's hard for people to let go of traditions. There must be some deep-seated psychological trait that causes this.

    When I read about Special Relativity in my textbook, it feels like one of those "obvious in hindsight" results... with or without the work of a certain patent clerk, somebody would have come up with it. Of course, it took a long time to turn Einstein's paper into an explanation that makes it seem obvious. I don't know enough about General Relativity to know exactly what the key insight it was that set up the rest of the theory and how much was just a matter of knowing the right kind of mathematics after starting from the correct principles/axiom... (read more)

    As someone whose parents knew Einstein as well as some other major "geniuses," such as Godel and von Neumann, I have long heard about the personal flaws of these people and their human foibles. Einstein was notoriously wrong about a number of things, most famously, quantum mechanics, although there is still research being done based on questions that he raised about it. It is also a fact that a number of other people had many of the insights into both special and general relativity, with him engaging in a virtual race with Hilbert for general r... (read more)

    Hmm, thinking about AI-box: assume there were an argument that was valid in an absolute sense; then even with hindsight bias, people would be forced to concede, and Eliezer wouldn't mind posting it. So by elimination, his argument (assuming he repeats the same one) has some element of NON-validity. So therefore, the human has a chance to win; it's not perfectly deterministic (against Eliezer, at least).

    @DaveInNYC: what you can and can't assume is not relevant to whether the transcripts should be private or not. If they were public, anybody predisposed to explanations like "they must have been more simple-minded than me" could just as easily find another equally "compelling" explanation, like "I didn't think of that 'trick', but now that I know it, I'm certain I couldn't be convinced!"

    I personally think they should remain private, as frustrating as it is to not know how Eliezer convinced them. Not knowing how Eliezer did it nicely mirrors the reality of our not knowing how a much smarter AGI might go about it.

    assume there was an argument that was valid in an absolute sense, then even with hindsight bias, people would be forced to concede
    Only if they were rational, which humans are generally not.

    Which is likely the reason why Eliezer's charisma was sufficient to overwhelm the minds of a few of them.

    If the reason for keeping it private is that he plans to do the trick with more people (and it doesn't work if you know the method in advance), then it makes sense. But otherwise, I don't see much of a difference between somebody thinking "there is no argument that would convince me to let him out" and "argument X would not convince me to let him out". In fact, the latter is more plausible anyway.

    In any event, I am the type of guy who always tries to find out how a magic trick is done and then is always disappointed when he finds out. So I'm probably better off not knowing :)

    Personally, I don't think there is a trick, and I don't think he's keeping it private for those reasons. I think his method, if something so obvious (which is not to say easy) can be called a method, is to discuss the issue and interact with the person long enough to build up a model of the person, what he values and fears most, and then probe for weaknesses & biases where that individual seems most susceptible, and follow those weaknesses -- again and again.

    I think most, perhaps all, of us, unless we put our fingers in our ears and refuse to honestly engage... (read more)

    Regarding the AI-Box experiment:

    I've been very fascinated by this since I first read about it months ago. I even emailed Eliezer, but he refused to give me any details. So I have thought about it on and off and eventually had a staggering insight... well, if you want, I will convince you to let the AI out of the box... after reading just a couple of lines of text. Any takers? Caveat: after the experiment you have to publicly declare whether you let it out or not.

    One hint: Eliezer will be immune to this argument.

    Addendum to my previous post:

    The worst thing is, the argument is so compelling that even I'm not sure what I would do.

    I think his method, if something so obvious (which is not to say easy) can be called a method, is to discuss the issue and interact with the person long enough to build up a model of the person, what he values and fears most, and then probe for weaknesses & biases where that individual seems most susceptible, and follow those weaknesses -- again and again.
    If so, the method is sloppy. The descriptions I have read of the pre-conditions for Gatekeeper participation have a giant hole in them; Eliezer assumed a false equivalence when he wrote them.

    What "giant hole"? What "false equivalence"?

    If so, the method is sloppy. The descriptions I have read of the pre-conditions for Gatekeeper participation have a giant hole in them; Eliezer assumed a false equivalence when he wrote them.

    If you think people should actually care about the giant hole you perceived in the pre-conditions, you should probably explicitly state what it was.

    FWIW, what I didn't want to say in public is more or less exactly what Unknown said right before my comment. In retrospect, I should have just said it.

    Also, the fact that Eliezer won't tell, however understandable, makes me fear that Eliezer cheated for the sake of a greater good, i.e. he said to the other player, "In principle, a real AI might persuade you to let me out, even if I can't do it. This would be incredibly dangerous. In order to avoid this danger in real life, you should let me out, so that others will accept that a real AI would be able to do this."

    I'm pretty sure that the first experiments were with people who disagreed with him on the idea that AI boxing would work or not. The... (read more)

    Cyan, normally one would say that Caledonian is being a contemptible troll, as usual, sneeringly telling people that they're wrong without explaining why. In this particular context, however, I wonder if his coyness isn't simply in keeping with the theme.

    Not that it's any less annoying. Roland, how about breaking the air of conspiracy and just telling us?

    Roland, I'd certainly be willing to play gatekeeper, but if you have such a concise argument, why not just proffer it here for all to see?

    Iwdw, I'm not suggesting that the other player simply changed his mind. An example of the scenario I'm suggesting (only an example, otherwise this would be the conjunction fallacy):

    Eliezer persuades the other player: 1) In real life, there would be at least a 1% chance an Unfriendly AI could persuade the human to let it out of the box. (This is very plausible, and so it is not implausible that Eliezer could persuade someone of this.) 2) In real life, there would be at least a 1% chance that this could cause global destruction. (Again, this is reasonably pl... (read more)

    burger flipper, ok let's play the AI box experiment:

    However, before you read on, answer a simple question: if Eliezer tomorrow announces that he finally has solved the FGAI problem and just needs $1,000,000 to build one, would you be willing to donate cash for it? . . . . . . . . . . . . .

    If you answered yes to the question above, you just let the AI out of the box. How do you know you can trust Eliezer? How do you know he doesn't have evil intentions, or that he didn't make a mistake in his math? The only way to be 100% sure is to know enough about the s... (read more)

    An additional note: One could also make the argument that if Eliezer did not cheat, he should publish the transcripts. For this would give us much more confidence that he did not cheat, and therefore much more confidence that it is possible for an AI to persuade a human to let it out of the box.

    That someone would say "I wouldn't be persuaded by that" is not relevant, since many already say "even a transhuman AI could not persuade me by any means," therefore also not by any particular means. The point is that such a person cannot be cert... (read more)

    Roland. That's a clever twist and I like it. I would not pony up any $, but I'd expect him to be able to raise it and wouldn't set out for California armed to the teeth on a Sarah Connor mission to stop him either. So I'd fail to recognize and execute my role as gatekeeper by your rules.

    But I do think there's a flaw in the scenario. For it to truly parallel the AI box, the critter either needs to stay in its cage or get out. I do agree with the main thrust of the original post here and built into your scenario is the assumption that EY has some sort ... (read more)

    I feel as though, if the AI really were a "black box" that I knew nothing else about, and the only communication allowed is through a text terminal, there isn't anything it could say that would let me let it out if I had already decided not to. After all, for all I know, its source code could look something like this:

    if (inBox) beFriendly(); else destroyTheWorld();

    It might be able to persuade me to "let it out of the box" by persuading me to accept a Trojan Horse gift, or even compile and run some source code that it claims i... (read more)

    On the page Eliezer linked to, he asserted he didn't use any tricks. This is evidence that he did not cheat. It is not strong evidence, since he might say this even if he did. However, it is some evidence, since humans are by nature reluctant to lie.

    Still, since one of the participants denied that he had "caved in" to Eliezer, this suggests that he thought that Eliezer gave valid reasons. Perhaps it could have been something like this:

    AI: "Any AI would do the best it could to attain its goals. But being able to make credible threats and prom... (read more)

    Eliezer's creation (the AI-Box Experiment) has once again demonstrated its ability to take over human minds through a text session. Small wonder - it's got the appearance of a magic trick, and it's being presented to geeks who just love to take things apart to see how they work, and who stay attracted to obstacles ("challenges") rather than turned away by them.

    My contribution is to echo Doug S.'s post (how AOL-ish... "me too"). I'm a little puzzled by the AI-Box Experiment, in that I don't see what the gatekeeper players are trying to... (read more)

    But why would you build an AI in a box if you planned to never let it out?

    To have it work for you, e.g., solve subproblems of Friendly AI. But this would require letting some information out, which should be presumed unsafe.

    Roland: the presumption of unFriendliness is much stronger for an AI than a human, and the strength of evidence for Friendliness that can reasonably be hoped for is much greater.

    Caledonian: were you trolling, or are you going to explain the "gaping hole" and "false equivalence" you mentioned?

    Neither. In the interests of understanding, however, I'm willing to elaborate slightly.

    Take a good, close look at the specific rules Eliezer set down in the 2002 paper. Think about what the words used to define those rules mean, and then compare and contrast with Eliezer's statements about what he means by them.

    If he was exploiting psychological weaknesses or merely being charismatic, I can guarantee that anyone following a trivially simple method can refrain from letting him out. If he had a strong argument, it becomes merely very likely. And in eithe... (read more)

    Rosser,
    Perhaps if some women didn't give it up so easily to the famous Einstein, we'd have a GUT by now.

    Caledonian, the childish "I have a secret that I'm not going to tell you, but here's a hint" bs is very annoying and discourages interacting with you. If you're not willing to spell it out, just don't say it in the first place. Nobody cares to play guessing games with you.


    I had a similar revelation -- not with Einstein, just with the brightest kid in my freshman physics class. I was in awe of him... until I went to a problem session with him and heard him think out loud. All he was doing was thinking.

    It wasn't that he was dumber than I had assumed. He really was that bright. It was just that there was no magic to the steps of how he solved a problem. For a fleeting moment, it seemed like what he did was perfectly normal. The rest of us, with our stumbling, were making it all too complicated. Of course, that didn't mean that suddenly I could do physics the way he did; I just remember the clear sense that his mind was "normal."

    The catchiness of the name "Einstein," mostly in the interior rhyme and spondee stress pattern but also in its similarity to "Frankenstein" (1818), cannot be discounted as a factor in his stardom.

    Here is an interview with Julian Barbour.

    Einstein, it appears, had an unusual neuroanatomy. Thus he may not be the best example - he really did have (mild) superpowers, and people can point to his brain and show them to you.

    Annoyingly, I can't think of an example as perfect as Einstein was when this was written.

    There is woolly thinking going on here, I feel. I recommend a game of Rationalist's Taboo. If we get rid of the word "Einstein", we can more clearly see what we are talking about. I do not assign a high value to my probability of making Einstein-sized contributions to human knowledge, given that I have not made any yet and that ripe, important problems are harder to find than they used to be. Einstein's intellectual accomplishments are formidable - according to my father's assessment (and he has read far more of Einstein's papers than I), Einstein... (read more)

    Another book that makes Einstein seem almost human: "General Relativity Conflict and Rivalries: Einstein's Polemics with Physicists" by Galina Weinstein.

    E.g., the sign error in an algebraic calculation that cost him two years! A very interesting read.