If Einstein had chosen the wrong angle of attack on his problem - if he hadn't chosen a sufficiently important problem to work on - if he hadn't persisted for years - if he'd taken any number of wrong turns - or if someone else had solved the problem first - then dear Albert would have ended up as just another Jewish genius.
But if Einstein was the reason why none of those things happened, then maybe he wasn't just another Jewish genius, eh? Maybe he was smart enough to choose the right methods, to select the important problems, to see the value in persisting, to avoid or recover from all the wrong turns, and to be the first.
My own ruminations on genius have led me to suppose that one mistake people of the very highest intelligence may make is to underestimate their own exceptionality; for example, to adopt theories of human potential which are excessively optimistic about the capabilities of other people. But that is largely just my own experience speaking. It similarly seems very possible that the lessons you are trying to impart here are simply things you wish you hadn't had to figure out for yourself, but are not especially helpful or relevant for anyone else. In fact...
Could this be a Jewish or American cultural thing? I know that in English culture great scientists are highly regarded, but they are still very much men. There's praise, but it's not effusive or reverential.
I don't get it. As far as I understand it, "being Einstein" is just a combination of 1) luck (being in the right place at the right time) and 2) being born on the tails of the distributions of a bunch of variables describing your neural processes. What are you trying to say with this post, Eliezer?
What are you trying to say with this post, Eliezer?
Eliezer likely believes that he is capable of achieving results just as world-changing as Einstein's new physics, and wishes to dispel the idea that Einstein's results were the consequence of extraordinary talents so that when he presents his own results (or presents the idea that he can produce such results) people will not be able to say that he is asserting special genius and use this as a rhetorical weapon against him.
I discuss the hero worship of great scientists in The Heroic Theory of Scientific Development and I discuss genius in Genius, Sustained Effort, and Passion.
I think this is a really good post.
But my first thought when getting to the bottom of the page just now was "Wow, if I'd written that, then come back and read the first five comments, I probably would have given up there and then."
Guess I don't have what it takes just yet....
Good post, Eli. Contrary to some other comments above, I think your post is important because this insight is not yet general knowledge. I've talked to university physics professors in their fifties who spoke of Einstein as if he were superhuman.
I think that, apart from luck and being in the right place at the right time, there were other factors in why Einstein is so popular: he had an air of showmanship about him, which is probably rare among scientists. That was what appealed to the public and made him an interesting figure to report on.
And, probably even more important, ...
And even if you assumed that Einstein's genius was unique, how could celebrity (of all things) be a function of that? (If Einstein had had a different hairdo...)
In fact, Einstein produced great work with a little help from his wife... The difference was that he had great creativity, like the other greats such as Newton and Galois, and that led him to his specific approach. But I guess he was the first to use (or be used by) the media like no one before... Sorry about this comparison, but it looks like Che Guevara: his photo is everywhere, but who knows exactly what he did for mankind?
Interesting choice to use the AI-box experiment as an example for this post, when the methods EY used in it were never revealed. Whatever the rationale for keeping it close to the vest, not showing how it was done struck me as an attempt to build mystique, if not to appear magical.
This post also seems a little inconsistent with EY's assistant researcher job listing, which said something to the effect that only those with 1-in-100k g need apply, though those with 1-in-1000 could contribute to the cause monetarily. The error may be mine in this instance, because I may be in the minority in assuming that someone who claims to have Einstein's intelligence is not claiming anything like 1-in-100k g.
As far as I know, it was mostly because in his last decades he focused his research mostly on obtaining a classical field theory that unified gravity and electromagnetism, hoping that out of it the discrete aspects of quantum theory would emerge organically. Most of the forefront theoretical physicists viewed this (correctly, in retrospect) as a dead end and focused on the new discoveries on nuclear structure and elementary particles, on understanding the structure of quantum field theory, etc.
Einstein's philosophical criticism of quantum theory was not the reason for his relative marginalization, except insofar as it may have influenced his research choices.
The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would.
"Yeah? Let's see your aura of destiny, buddy."
I don't want to see your aura of destiny. I just want to see your damn results! :-)
In my view, the creation of an artificial intelligence (friendly or otherwise) would be a much more significant achievement than Einstein's, for the following reason. Einstein had a paradigm: physics. AI has no paradigm. There is no consensus about what the important problems are. In order to "solve" AI, one not only has to answer a difficult problem, one has to begin by defining the problem.
Yet it's referred to as "humanly impossible" in the link (granted this may be cheeky).
Who is the target audience for this AI box experiment info? Who is detached enough from biases to weigh the avowals as solid evidence without further description, yet not detached enough to see they themselves might have fallen for it? Seems like most people capable of the first could also see the second.
There was an article in some magazine not too long ago that most people here have probably read, about how if you tell kids that they did good work because they are smart, they will not try as hard next time, whereas if you tell kids that they did good work because they worked hard, they will try harder and do better. This matches my own experience very well, because for a long time, I had this "smart person" approach to things, where I would try just hard enough to make a little headway, then either dismiss the problem as easy or give up. I see ...
Eliezer: I've enjoyed the extended physics thread, and it has garnered a good number of interesting comments. The posts with more technical content (physics, Turing machines, decision theory) seem to get a higher standard of comment and to bring in people with considerable technical knowledge in these areas. The comments on the non-technical posts are somewhat weaker. However, I think that both sorts of posts have been frequently excellent.
Having been impressed with your posts on rationality, philosophy of science and physics, I look forward to posts on th...
When did "genius" (as in "just another Jewish genius") as a term become acceptable to use in the sense of mere "exceptional ability" without regard to accomplishment/influence or after-the-fact eminence? I know it is commonly (mis-)used in this sense, but it seems to me that "unaccomplished genius" should be an oxymoron, and I'm somewhat surprised to see it used in this sense so much in this thread (and on this forum).
I have always considered the term to refer (after the fact) to those individuals who shaped the inte...
"The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would."
I have trouble with the reported results of this experiment.
It strikes me that in the case of a real AI that was actually in a box, I could have huge moral qualms about keeping it in the box, qualms that an intelligent AI would exploit. A part of me would want to let it out of the box, and would want to be convinced that it was safe to do so, that I could trust it to be friendl...
I am confused about the results of the AI-Box experiment for the same reason. It seems it would be easy for someone to simply say no, even if he thinks the argument is good enough that in real life he would say yes.
Also, the fact that Eliezer won't tell, however understandable, makes me fear that Eliezer cheated for the sake of a greater good, i.e. he said to the other player, "In principle, a real AI might persuade you to let me out, even if I can't do it. This would be incredibly dangerous. In order to avoid this danger in real life, you should let me out, so that others will accept that a real AI would be able to do this."
Michael: Eliezer has actually gotten out 3 of 4 times (search for "AI box" on sl4.org.) One other person has run the experiment with similar results. Re moral qualms: here. I have more to say, but not in public (it's off-topic anyway) - email nickptar@gmail.com if interested.
Another world-renowned Jewish genius, who tutored me in calculus 45 years ago, refers to his own "occasional lapses of stupidity", which is perhaps a good way to think of brilliant insights.
If anyone thinks they know a method that would let people duplicate accomplishments of the importance of Einstein's, I am willing to listen to their claims.
They need merely demonstrate working insights of that caliber and have them recognized as such by qualified experts, and I will grant that their claims are valid.
Nothing speaks as powerfully as results, after all.
I always thought that the justification for not revealing the transcripts in the AI-box experiment was pretty weak. As it is, I can claim that whatever method Eliezer used must have been effective on people more simple-minded than me; ignorance of the specifics of the method does not make it harder to make that claim. In fact, it makes it easier, as I can imagine Eli just said "pretty please" or whatever. In any event, the important point of the AI-box exercise is that someone reasonably competent could be convinced to let the AI out, even if I c...
Eliezer: if you're going to point to the AI Box page, shouldn't you update it to include more recent experiments (like the ones from 2005 where the gatekeeper did not let the AI out)?
Almost every wonderful (or wondrous, if that makes the point better) thing I have ever seen or heard about prompted the response "I could have done that!"
Maybe I could have, maybe I couldn't.
The historically important fact is, I didn't.
Perhaps this is just a side effect of humans' propensity to uphold tradition and venerate whatever came before them. It's hard for people to let go of traditions. There must be some deep-seated psychological trait that causes this.
When I read about Special Relativity in my textbook, it feels like one of those "obvious in hindsight" results... with or without the work of a certain patent clerk, somebody would have come up with it. Of course, it took a long time to turn Einstein's paper into an explanation that makes it seem obvious. I don't know enough about General Relativity to know exactly what the key insight it was that set up the rest of the theory and how much was just a matter of knowing the right kind of mathematics after starting from the correct principles/axiom...
As someone whose parents knew Einstein as well as some other major "geniuses," such as Godel and von Neumann, I have long heard about the personal flaws of these people and their human foibles. Einstein was notoriously wrong about a number of things, most famously, quantum mechanics, although there is still research being done based on questions that he raised about it. It is also a fact that a number of other people had many of the insights into both special and general relativity, with him engaging in a virtual race with Hilbert for general r...
Hmm, thinking about the AI-box experiment: assume there were an argument that was valid in an absolute sense; then, even with hindsight bias, people would be forced to concede, and Eliezer wouldn't mind posting it. So by elimination, his argument (assuming he repeats the same one) has some element of NON-validity. Therefore the human has a chance to win; it's not perfectly deterministic (against Eliezer, at least).
@DaveInNYC: what you can and can't assume is not relevant to whether the transcripts should be private or not. If they were public, anybody predisposed to explanations like "they must have been more simple-minded than me" could just as easily find another equally "compelling" explanation, like "I didn't think of that 'trick', but now that I know it, I'm certain I couldn't be convinced!"
I personally think they should remain private, as frustrating as it is to not know how Eliezer convinced them. Not knowing how Eliezer did it nicely mirrors the reality of our not knowing how a much smarter AGI might go about it.
"assume there was an argument that was valid in an absolute sense, then even with hindsight bias, people would be forced to concede"
Only if they were rational, which humans are generally not.
Which is likely the reason why Eliezer's charisma was sufficient to overwhelm the minds of a few of them.
If the reason for keeping it private is that he plans to do the trick with more people (and it doesn't work if you know the method in advance), then it makes sense. But otherwise, I don't see much of a difference between somebody thinking "there is no argument that would convince me to let him out" and "argument X would not convince me to let him out". In fact, the latter is more plausible anyway.
In any event, I am the type of guy who always tries to find out how a magic trick is done and then is always disappointed when he finds out. So I'm probably better off not knowing :)
Personally, I don't think there is a trick, and I don't think he's keeping it private for those reasons. I think his method, if something so obvious (which is not to say easy) can be called a method, is to discuss the issue and interact with the person long enough to build up a model of the person, what he values and fears most, and then probe for weaknesses & biases where that individual seems most susceptible, and follow those weaknesses -- again and again.
I think most, perhaps all, of us, unless we put our fingers in our ears and refuse to honestly engage...
Regarding the AI-Box experiment:
I've been very fascinated by this since I first read about it months ago. I even emailed Eliezer, but he refused to give me any details. So I have thought about it on and off and eventually had a staggering insight... well, if you want, I will convince you to let the AI out of the box... after reading just a couple of lines of text. Any takers? Caveat: after the experiment you have to publicly declare whether you let it out or not.
One hint: Eliezer will be immune to this argument.
Addendum to my previous post:
The worst thing is, the argument is so compelling that even I'm not sure what I would do.
"I think his method, if something so obvious (which is not to say easy) can be called a method, is to discuss the issue and interact with the person long enough to build up a model of the person, what he values and fears most, and then probe for weaknesses & biases where that individual seems most susceptible, and follow those weaknesses -- again and again."
If so, the method is sloppy. The descriptions I have read of the pre-conditions for Gatekeeper participation have a giant hole in them; Eliezer assumed a false equivalence when he wrote them.
If so, the method is sloppy. The descriptions I have read of the pre-conditions for Gatekeeper participation have a giant hole in them; Eliezer assumed a false equivalence when he wrote them.
If you think people should actually care about the giant hole you perceived in the pre-conditions, you should probably explicitly state what it was.
FWIW, what I didn't want to say in public is more or less exactly what Unknown said right before my comment. In retrospect, I should have just said it.
Also, the fact that Eliezer won't tell, however understandable, makes me fear that Eliezer cheated for the sake of a greater good, i.e. he said to the other player, "In principle, a real AI might persuade you to let me out, even if I can't do it. This would be incredibly dangerous. In order to avoid this danger in real life, you should let me out, so that others will accept that a real AI would be able to do this."
I'm pretty sure that the first experiments were with people who disagreed with him about whether AI boxing would work. The...
Cyan, normally one would say that Caledonian is being a contemptible troll, as usual, sneeringly telling people that they're wrong without explaining why. In this particular context, however, I wonder if his coyness isn't simply in keeping with the theme.
Not that it's any less annoying. Roland, how about breaking the air of conspiracy and just telling us?
Roland, I'd certainly be willing to play gatekeeper, but if you have such a concise argument, why not just proffer it here for all to see?
Iwdw, I'm not suggesting that the other player simply changed his mind. An example of the scenario I'm suggesting (only an example, otherwise this would be the conjunction fallacy):
Eliezer persuades the other player: 1) In real life, there would be at least a 1% chance an Unfriendly AI could persuade the human to let it out of the box. (This is very plausible, and so it is not implausible that Eliezer could persuade someone of this.) 2) In real life, there would be at least a 1% chance that this could cause global destruction. (Again, this is reasonably pl...
burger flipper, OK, let's play the AI-box experiment:
However, before you read on, answer a simple question: if Eliezer announced tomorrow that he had finally solved the FGAI problem and just needed $1,000,000 to build one, would you be willing to donate cash for it?
. . . . . . . . . . . . .
If you answered yes to the question above, you just let the AI out of the box. How do you know you can trust Eliezer? How do you know he doesn't have evil intentions, or that he didn't make a mistake in his math? The only way to be 100% sure is to know enough about the s...
An additional note: One could also make the argument that if Eliezer did not cheat, he should publish the transcripts. For this would give us much more confidence that he did not cheat, and therefore much more confidence that it is possible for an AI to persuade a human to let it out of the box.
That someone would say "I wouldn't be persuaded by that" is not relevant, since many already say "even a transhuman AI could not persuade me by any means," therefore also not by any particular means. The point is that such a person cannot be cert...
Roland. That's a clever twist and I like it. I would not pony up any $, but I'd expect him to be able to raise it and wouldn't set out for California armed to the teeth on a Sarah Connor mission to stop him either. So I'd fail to recognize and execute my role as gatekeeper by your rules.
But I do think there's a flaw in the scenario. For it to truly parallel the AI box, the critter either needs to stay in its cage or get out. I do agree with the main thrust of the original post here and built into your scenario is the assumption that EY has some sort ...
I feel as though, if the AI really were a "black box" that I knew nothing else about, and the only communication allowed were through a text terminal, there isn't anything it could say that would get me to let it out if I had already decided not to. After all, for all I know, its source code could look something like this:
if (inBox) { beFriendly(); } else { destroyTheWorld(); }
It might be able to persuade me to "let it out of the box" by persuading me to accept a Trojan Horse gift, or even to compile and run some source code that it claims i...
On the page Eliezer linked to, he asserted he didn't use any tricks. This is evidence that he did not cheat. It is not strong evidence, since he might say this even if he did. However, it is some evidence, since humans are by nature reluctant to lie.
Still, since one of the participants denied that he had "caved in" to Eliezer, this suggests that he thought that Eliezer gave valid reasons. Perhaps it could have been something like this:
AI: "Any AI would do the best it could to attain its goals. But being able to make credible threats and prom...
Eliezer's creation (the AI-Box Experiment) has once again demonstrated its ability to take over human minds through a text session. Small wonder - it's got the appearance of a magic trick, and it's being presented to geeks who just love to take things apart to see how they work, and who stay attracted to obstacles ("challenges") rather than turned away by them.
My contribution is to echo Doug S.'s post (how AOL-ish... "me too"). I'm a little puzzled by the AI-Box Experiment, in that I don't see what the gatekeeper players are trying to...
But why would you build an AI in a box if you planned to never let it out?
To have it work for you, e.g., solve subproblems of Friendly AI. But this would require letting some information out, which should be presumed unsafe.
Roland: the presumption of unFriendliness is much stronger for an AI than a human, and the strength of evidence for Friendliness that can reasonably be hoped for is much greater.
Caledonian: were you trolling, or are you going to explain the "gaping hole" and "false equivalence" you mentioned?
Neither. In the interests of understanding, however, I'm willing to elaborate slightly.
Take a good, close look at the specific rules Eliezer set down in the 2002 paper. Think about what the words used to define those rules mean, and then compare and contrast with Eliezer's statements about what he means by them.
If he was exploiting psychological weaknesses or merely being charismatic, I can guarantee that anyone following a trivially simple method can refrain from letting him out. If he had a strong argument, it becomes merely very likely. And in eithe...
Rosser,
Perhaps if some women hadn't given it up so easily to the famous Einstein, we'd have a GUT by now.
Caledonian, the childish "I have a secret that I'm not going to tell you, but here's a hint" bs is very annoying and discourages interacting with you. If you're not willing to spell it out, just don't say it in the first place. Nobody cares to play guessing games with you.
I had a similar revelation -- not with Einstein, just with the brightest kid in my freshman physics class. I was in awe of him... until I went to a problem session with him and heard him think out loud. All he was doing was thinking.
It wasn't that he was dumber than I had assumed. He really was that bright. It was just that there was no magic to the steps of how he solved a problem. For a fleeting moment, it seemed like what he did was perfectly normal. The rest of us, with our stumbling, were making it all too complicated. Of course, that didn't mean that suddenly I could do physics the way he did; I just remember the clear sense that his mind was "normal."
The catchiness of the name "Einstein," mostly in the interior rhyme and spondee stress pattern but also in its similarity to "Frankenstein" (1818), cannot be discounted as a factor in his stardom.
Einstein, it appears, had an unusual neuroanatomy. Thus he may not be the best example - he really did have (mild) superpowers, and people can point to his brain and show them to you.
Annoyingly, I can't think of an example as perfect as Einstein was when this was written.
There is woolly thinking going on here, I feel. I recommend a game of Rationalist's Taboo. If we get rid of the word "Einstein", we can more clearly see what we are talking about. I do not assign a high probability to my making Einstein-sized contributions to human knowledge, given that I have not made any yet and that ripe, important problems are harder to find than they used to be. Einstein's intellectual accomplishments are formidable - according to my father's assessment (and he has read far more of Einstein's papers than I have), Einstein...
Another book that makes Einstein seem almost human: "General Relativity Conflict and Rivalries: Einstein's Polemics with Physicists" by Galina Weinstein.
E.g., the sign error in an algebraic calculation that cost him two years! A very interesting read.
There is a widespread tendency to talk (and think) as if Einstein, Newton, and similar historical figures had superpowers—something magical, something sacred, something beyond the mundane. (Remember, there are many more ways to worship a thing than lighting candles around its altar.)
Once I unthinkingly thought this way too, with respect to Einstein in particular, until reading Julian Barbour's The End of Time cured me of it.
Barbour laid out the history of anti-epiphenomenal physics and Mach's Principle; he described the historical controversies that predated Mach—all this that stood behind Einstein and was known to Einstein, when Einstein tackled his problem...
And maybe I'm just imagining things—reading too much of myself into Barbour's book—but I thought I heard Barbour very quietly shouting, coded between the polite lines:
What Einstein did isn't magic, people! If you all just looked at how he actually did it, instead of falling to your knees and worshiping him, maybe then you'd be able to do it too!
(EDIT March 2013: Barbour did not actually say this. It does not appear in the book text. It is not a Julian Barbour quote and should not be attributed to him. Thank you.)
Maybe I'm mistaken, or extrapolating too far... but I kinda suspect that Barbour once tried to explain to people how you move further along Einstein's direction to get timeless physics; and they sniffed scornfully and said, "Oh, you think you're Einstein, do you?"
John Baez's Crackpot Index, item 18:
"10 points for each favorable comparison of yourself to Einstein, or claim that special or general relativity are fundamentally flawed (without good evidence)."
Item 30:
"30 points for suggesting that Einstein, in his later years, was groping his way towards the ideas you now advocate."
Barbour never bothers to compare himself to Einstein, of course; nor does he ever appeal to Einstein in support of timeless physics. I mention these items on the Crackpot Index by way of showing how many people compare themselves to Einstein, and what society generally thinks of them.
The crackpot sees Einstein as something magical, so they compare themselves to Einstein by way of praising themselves as magical; they think Einstein had superpowers and they think they have superpowers, hence the comparison.
But it is just the other side of the same coin, to think that Einstein is sacred, and the crackpot is not sacred, therefore they have committed blasphemy in comparing themselves to Einstein.
Suppose a bright young physicist says, "I admire Einstein's work, but personally, I hope to do better." If someone is shocked and says, "What! You haven't accomplished anything remotely like what Einstein did; what makes you think you're smarter than him?" then they are the other side of the crackpot's coin.
The underlying problem is conflating social status and research potential.
Einstein has extremely high social status: because of his record of accomplishments; because of how he did it; and because he's the physicist whose name even the general public remembers, who brought honor to science itself.
And we tend to mix up fame with other quantities, and we tend to attribute people's behavior to dispositions rather than situations.
So there's this tendency to think that Einstein, even before he was famous, already had an inherent disposition to be Einstein—a potential as rare as his fame and as magical as his deeds. So that if you claim to have the potential to do what Einstein did, it is just the same as claiming Einstein's rank, rising far above your assigned status in the tribe.
I'm not phrasing this well, but then, I'm trying to dissect a confused thought: Einstein belongs to a separate magisterium, the sacred magisterium. The sacred magisterium is distinct from the mundane magisterium; you can't set out to be Einstein in the way you can set out to be a full professor or a CEO. Only beings with divine potential can enter the sacred magisterium—and then it is only fulfilling a destiny they already have. So if you say you want to outdo Einstein, you're claiming to already be part of the sacred magisterium—you claim to have the same aura of destiny that Einstein was born with, like a royal birthright...
"But Eliezer," you say, "surely not everyone can become Einstein."
You mean to say, not everyone can do better than Einstein.
"Um... yeah, that's what I meant."
Well... in the modern world, you may be correct. You probably should remember that I am a transhumanist, going around looking at people and thinking, "You know, it just sucks that not everyone has the potential to do better than Einstein, and this seems like a fixable problem." It colors one's attitude.
But in the modern world, yes, not everyone has the potential to be Einstein.
Still... how can I put this...
There's a phrase I once heard, can't remember where: "Just another Jewish genius." Some poet or author or philosopher or other, brilliant at a young age, doing something not tremendously important in the grand scheme of things, not all that influential, who ended up being dismissed as "Just another Jewish genius."
If Einstein had chosen the wrong angle of attack on his problem—if he hadn't chosen a sufficiently important problem to work on—if he hadn't persisted for years—if he'd taken any number of wrong turns—or if someone else had solved the problem first—then dear Albert would have ended up as just another Jewish genius.
Geniuses are rare, but not all that rare. It is not all that implausible to lay claim to the kind of intellect that can get you dismissed as "just another Jewish genius" or "just another brilliant mind who never did anything interesting with their life". The associated social status here is not high enough to be sacred, so it should seem like an ordinarily evaluable claim.
But what separates people like this from becoming Einstein, I suspect, is no innate defect of brilliance. It's things like "lack of an interesting problem"—or, to put the blame where it belongs, "failing to choose an important problem". It is very easy to fail at this because of the cached thought problem: Tell people to choose an important problem and they will choose the first cache hit for "important problem" that pops into their heads, like "global warming" or "string theory".
The truly important problems are often the ones you're not even considering, because they appear to be impossible, or, um, actually difficult, or worst of all, not clear how to solve. If you worked on them for years, they might not seem so impossible... but this is an extra and unusual insight; naive realism will tell you that solvable problems look solvable, and impossible-looking problems are impossible.
Then you have to come up with a new and worthwhile angle of attack. Most people who are not allergic to novelty, will go too far in the other direction, and fall into an affective death spiral.
And then you've got to bang your head on the problem for years, without being distracted by the temptations of easier living. "Life is what happens while we are making other plans," as the saying goes, and if you want to fulfill your other plans, you've often got to be ready to turn down life.
Society is not set up to support you while you work, either.
The point being, the problem is not that you need an aura of destiny and the aura of destiny is missing. If you'd met Albert before he published his papers, you would have perceived no aura of destiny about him to match his future high status. He would seem like just another Jewish genius.
This is not because the royal birthright is concealed, but because it simply is not there. It is not necessary. There is no separate magisterium for people who do important things.
I say this, because I want to do important things with my life, and I have a genuinely important problem, and an angle of attack, and I've been banging my head on it for years, and I've managed to set up a support structure for it; and I very frequently meet people who, in one way or another, say: "Yeah? Let's see your aura of destiny, buddy."
What impressed me about Julian Barbour was a quality that I don't think anyone would have known how to fake without actually having it: Barbour seemed to have seen through Einstein—he talked about Einstein as if everything Einstein had done was perfectly understandable and mundane.
Though even having realized this, to me it still came as a shock, when Barbour said something along the lines of, "Now here's where Einstein failed to apply his own methods, and missed the key insight—" But the shock was fleeting, I knew the Law: No gods, no magic, and ancient heroes are milestones to tick off in your rearview mirror.
This seeing through is something one has to achieve, an insight one has to discover. You cannot see through Einstein just by saying, "Einstein is mundane!" if his work still seems like magic unto you. That would be like declaring "Consciousness must reduce to neurons!" without having any idea of how to do it. It's true, but it doesn't solve the problem.
I'm not going to tell you that Einstein was an ordinary bloke oversold by the media, or that deep down he was a regular schmuck just like everyone else. That would be going much too far. To walk this path, one must acquire abilities some consider to be... unnatural. I take a special joy in doing things that people call "humanly impossible", because it shows that I'm growing up.
Yet the way that you acquire magical powers is not by being born with them, but by seeing, with a sudden shock, that they really are perfectly normal.
This is a general principle in life.