"Persevere."  It's a piece of advice you'll get from a whole lot of high achievers in a whole lot of disciplines.  I didn't understand it at all, at first.

At first, I thought "perseverance" meant working 14-hour days.  Apparently, there are people out there who can work for 10 hours at a technical job, and then, in their moments between eating and sleeping and going to the bathroom, seize that unfilled spare time to work on a book.  I am not one of those people—it still hurts my pride even now to confess that.  I'm working on something important; shouldn't my brain be willing to put in 14 hours a day?  But it's not.  When it gets too hard to keep working, I stop and go read or watch something.  Because of that, I thought for years that I entirely lacked the virtue of "perseverance".

In accordance with human nature, Eliezer1998 would think things like: "What counts is output, not input."  Or, "Laziness is also a virtue—it leads us to back off from failing methods and think of better ways."  Or, "I'm doing better than other people who are working more hours.  Maybe, for creative work, your momentary peak output is more important than working 16 hours a day."  Perhaps the famous scientists were seduced by the Deep Wisdom of saying that "hard work is a virtue", because it would be too awful if that counted for less than intelligence?

I didn't understand the virtue of perseverance until I looked back on my journey through AI, and realized that I had overestimated the difficulty of almost every single important problem.

Sounds crazy, right?  But bear with me here.

When I first took up the challenge of AI, I thought in terms of 40-year timescales, Manhattan Projects, planetary computing networks, millions of programmers, and possibly augmented humans.

This is a common failure mode in AI-futurism which I may write about later; it consists of the leap from "I don't know how to solve this" to "I'll imagine throwing something really big at it".  Something huge enough that, when you imagine it, that imagination creates a feeling of impressiveness strong enough to be commensurable with the problem.  (There's a fellow currently on the AI list who goes around saying that AI will cost a quadrillion dollars—we can't get AI without spending a quadrillion dollars, but we could get AI at any time by spending a quadrillion dollars.)  This, in turn, lets you imagine that you know how to solve AI, without trying to fill the obviously-impossible demand that you understand intelligence.

So, in the beginning, I made the same mistake:  I didn't understand intelligence, so I imagined throwing a Manhattan Project at the problem.

But, having calculated the planetary death rate at 55 million per year or 150,000 per day, I did not turn around and run away from the big scary problem like a frightened rabbit.  Instead, I started trying to figure out what kind of AI project could get there fastest.  If I could make the Singularity happen one hour earlier, that was a reasonable return on investment for a pre-Singularity career.  (I wasn't thinking in terms of existential risks or Friendly AI at this point.)

So I didn't run away from the big scary problem like a frightened rabbit, but stayed to see if there was anything I could do.

Fun historical fact:  In 1998, I'd written this long treatise proposing how to go about creating a self-improving or "seed" AI (a term I had the honor of coining).  Brian Atkins, who would later become the founding funder of the Singularity Institute, had just sold Hypermart to Go2Net.  Brian emailed me to ask whether this AI project I was describing was something that a reasonable-sized team could go out and actually do.  "No," I said, "it would take a Manhattan Project and thirty years," so for a while we were considering a new dot-com startup instead, to create the funding to get real work done on AI...

A year or two later, after I'd heard about this newfangled "open source" thing, it seemed to me that there was some preliminary development work—new computer languages and so on—that a small organization could do; and that was how the Singularity Institute started.

This strategy was, of course, entirely wrong.

But even so, I went from "There's nothing I can do about it now" to "Hm... maybe there's an incremental path through open-source development, if the initial versions are useful to enough people."

This is back at the dawn of time, so I'm not saying any of this was a good idea.  But in terms of what I thought I was trying to do, a year of creative thinking had shortened the apparent pathway:  The problem looked slightly less impossible than it did the very first time I approached it.

The more interesting pattern is my entry into Friendly AI.  Initially, Friendly AI hadn't been something that I had considered at all—because it was obviously impossible and useless to deceive a superintelligence about what was the right course of action.

So, historically, I went from completely ignoring a problem that was "impossible", to taking on a problem that was merely extremely difficult.

Naturally this increased my total workload.

Same thing with trying to understand intelligence on a precise level.  Originally, I'd written off this problem as impossible, thus removing it from my workload.  (This logic seems pretty deranged in retrospect—Nature doesn't care what you can't do when It's writing your project requirements—but I still see AIfolk trying it all the time.)  To hold myself to a precise standard meant putting in more work than I'd previously imagined I needed.  But it also meant tackling a problem that I would have dismissed as entirely impossible not too much earlier.

Even though individual problems in AI have seemed to become less intimidating over time, the total mountain-to-be-climbed has increased in height—just like conventional wisdom says is supposed to happen—as problems got taken off the "impossible" list and put on the "to do" list.

I started to understand what was happening—and what "Persevere!" really meant—at the point where I noticed other AIfolk doing the same thing: saying "Impossible!" about problems that now seemed to me eminently solvable, relatively straightforward as such things go.  But those same problems would have seemed vastly more intimidating at the point when I first approached the problem.

And I realized that the word "impossible" had two usages:

1)  Mathematical proof of impossibility conditional on specified axioms;

2)  "I can't see any way to do that."

Needless to say, all my own uses of the word "impossible" had been of the second type.

Any time you don't understand a domain, many problems in that domain will seem impossible because when you query your brain for a solution pathway, it will return null.  But there are only mysterious questions, never mysterious answers.  If you spend a year or two working on the domain, then, if you don't get stuck in any blind alleys, and if you have the native ability level required to make progress, you will understand it better.  The apparent difficulty of problems may go way down.  It won't be as scary as it was to your novice-self.

And this is especially likely on the confusing problems that seem most intimidating.

Since we have some notion of the processes by which a star burns, we know that it's not easy to build a star from scratch.  Because we understand gears, we can prove that no collection of gears obeying known physics can form a perpetual motion machine.  These are not good problems on which to practice doing the impossible.

When you're confused about a domain, problems in it will feel very intimidating and mysterious, and a query to your brain will produce a count of zero solutions.  But you don't know how much work will be left when the confusion clears.  Dissolving the confusion may itself be a very difficult challenge, of course.  But the word "impossible" should hardly be used in that connection.  Confusion exists in the map, not in the territory.

So if you spend a few years working on an impossible problem, and you manage to avoid or climb out of blind alleys, and your native ability is high enough to make progress, then, by golly, after a few years it may not seem so impossible after all.

But if something seems impossible, you won't try.

Now that's a vicious cycle.

If I hadn't been in a sufficiently driven frame of mind that "forty years and a Manhattan Project" just meant we should get started earlier, I wouldn't have tried.  I wouldn't have stuck to the problem.  And I wouldn't have gotten a chance to become less intimidated.

I'm not ordinarily a fan of the theory that opposing biases can cancel each other out, but sometimes it happens by luck.  If I'd seen that whole mountain at the start—if I'd realized at the start that the problem was not to build a seed capable of improving itself, but to produce a provably correct Friendly AI—then I probably would have burst into flames.

Even so, part of understanding those above-average scientists who constitute the bulk of AGI researchers is realizing that they are not driven to take on a nearly impossible problem even if it takes them 40 years.  By and large, they are there because they have found the Key to AI that will let them solve the problem without such tremendous difficulty, in just five years.

Richard Hamming used to go around asking his fellow scientists two questions:  "What are the important problems in your field?", and, "Why aren't you working on them?"

Often the important problems look Big, Scary, and Intimidating.  They don't promise 10 publications a year.  They don't promise any progress at all.  You might not get any reward after working on them for a year, or five years, or ten years.

And not uncommonly, the most important problems in your field are impossible.  That's why you don't see more philosophers working on reductionist decompositions of consciousness.

Trying to do the impossible is definitely not for everyone.  Exceptional talent is only the ante to sit down at the table.  The chips are the years of your life.  If wagering those chips and losing seems like an unbearable possibility to you, then go do something else.  Seriously.  Because you can lose.

I'm not going to say anything like, "Everyone should do something impossible at least once in their lifetimes, because it teaches an important lesson."  Most of the people all of the time, and all of the people most of the time, should stick to the possible.

Never give up?  Don't be ridiculous.  Doing the impossible should be reserved for very special occasions.  Learning when to lose hope is an important skill in life.

But if there's something you can imagine that's even worse than wasting your life, if there's something you want that's more important than thirty chips, or if there are scarier things than a life of inconvenience, then you may have cause to attempt the impossible.

There's a good deal to be said for persevering through difficulties; but one of the things that must be said of it, is that it does keep things difficult. If you can't handle that, stay away!  There are easier ways to obtain glamor and respect.  I don't want anyone to read this and needlessly plunge headlong into a life of permanent difficulty.

But to conclude:  The "perseverance" that is required to work on important problems has a component beyond working 14 hours a day.

It's strange, the pattern of what we notice and don't notice about ourselves.  This selectivity isn't always about inflating your self-image.  Sometimes it's just about ordinary salience.

To keep working was a constant struggle for me, so it was salient:  I noticed that I couldn't work for 14 solid hours a day.  It didn't occur to me that "perseverance" might also apply at a timescale of seconds or years.  Not until I saw people who instantly declared "impossible" anything they didn't want to try, or saw how reluctant they were to take on work that looked like it might take a couple of decades instead of "five years".

That was when I realized that "perseverance" applied at multiple time scales.  On the timescale of seconds, perseverance is to "not give up instantly at the very first sign of difficulty".  On the timescale of years, perseverance is to "keep working on an insanely difficult problem even though it's inconvenient and you could be getting higher personal rewards elsewhere".

To do things that are very difficult or "impossible",

First you have to not run away.  That takes seconds.

Then you have to work.  That takes hours.

Then you have to stick at it.  That takes years.

Of these, I had to learn to do the first reliably instead of sporadically; the second is still a constant struggle for me; and the third comes naturally.


I think if history remembers you, I'd bet that it will be for the journey more than its end. If the interesting introspective bits get published in a form that gets read, then I'd bet it will be memorable in the way that Lao zi or Sun zi is memorable. In case the Singularity / Friendly AI stuff doesn't work out, please keep up the good work anyway.

If history remembers him, it will be because the first superhuman intelligence didn't destroy the world and with it all history. I'd say the Friendly AI stuff is pretty relevant to his legacy.

I'd say "thanks" but that is so completely not what I am trying to do.

Wow, one of those quiet "aha" moments. Just by explaining something I'd misunderstood, you've totally changed my direction. Seriously, thanks.

Thanks! And: In cases like this, it helps me to know which "Aha" you got from what - if you can say. I'm never quite sure whether my writing is going to work, and so I'm always trying to figure out what did work. Email me if you prefer not to comment.

What a fantastic motivational post.

Some things that definitely worked for me:

1) To know you personally experienced a feeling of "No way, that is too hard" yet kept going.

2) That you said: "the total mountain-to-be-climbed has increased in height", which was getting me increasingly anxious recently.

3) This: "When you're confused about a domain, problems in it will feel very intimidating and mysterious, and a query to your brain will produce a count of zero solutions. But you don't know how much work will be left when the confusion clears."

Basically, the fact that you have described situations which I have experienced myself, that were tipping the balance of my emotions towards the "Forget about it, too hard" side, and said that in those situations you just kept walking, untarnished.

Which clearly shows that this kind of reasoning would only seduce someone who already thinks you are a good fellow, and a smart hard worker. I'm unsure which bits would be persuasive for someone who started Less Wrong with this post.

"has increased in height"

That is usually where I end up after estimating the amount of time required to do a software project (small or large). But, as EY pointed out, once you do the work to figure out what must be done, then the problem is just 'really hard'.

Trying to do the impossible is definitely not for everyone. Natural talent is only the ante to sit down at the table. The chips are the years of your life. If wagering those chips and losing seems like an unbearable possibility to you, then go do something else. Seriously. Because you can lose.

This is an awesome quote that is going into my collection, but could you please restate this for posterity as something like the following, making clear that you mean "impossible" and not impossible:

Trying to do the "impossible" is definitely not for everyone. Natural talent is only the ante to sit down at the table. The chips are the years of your life. If wagering those chips and losing seems like an unbearable possibility to you, then go do something else. Seriously. Because you can lose.

I'm like you in that I can't stand the grimace and slog model of "perseverance" (and the way some people elevate "mortification of the flesh" as a virtue makes me flinch in horror), unlike in that every time I've hit even a medium-hard problem I've tended to bounce off and re-script on the assumption I can't.

So the "aha" was the idea that pushing into a problem can convert a sheer cliff into a hill climb, but that the danger comes each time something looks like another cliff. The proper response is not to bounce but to push. There is no cliff until you can prove it (and don't trust a facile proof).

Also, now I get to look back at my many "I can't"s and re-examine them for opportunities to push.

"""A superintelligence will more-likely be interested in conservation. Nature contains a synopsis of the results of quadrillions of successful experiments in molecular nanotechnology, performed over billions of years - and quite a bit of information about the history of the world. That's valuable stuff, no matter what your goals are."""

My guess is that an AI could re-do all those experiments from scratch within three days. Or maybe nanoseconds. Depending on whether it starts the moment it leaves the lab or as a Jupiter brain.

Attempting the "impossible": like chewing, chewing, and chewing, unable to swallow; it's not soft and small enough yet. When you do get to swallow a small bit, you will often regurgitate it. But some of it may remain in your system, enough to subsist on, just barely, and you may know not to take another bite of the same part of "impossible".

Thousands of years ago, philosophers began working on "impossible" problems. Science began when some of them gave up working on the "impossible" problems, and decided to work on problems that they had some chance of solving. And it turned out that this approach eventually led to the solution of most of the "impossible" problems.


Eliezer, I remember an earlier post of yours, when you said something like: "If I would never do impossible things, how could I ever become stronger?" That was a very inspirational message for me, much more than any other similar sayings I heard, and this post is full of such insights.

Anyway, on the subject of human augmentation, well, what about them? If you are talking about a timescale of decades, then intelligence augmentation does seem like a worthy avenue of investment (it doesn't have to be full-scale neural rewiring, it could be just smarter nootropics).


Eliezer said:

'On the timescale of years, perseverance is to "keep working on an insanely difficult problem even though it's inconvenient and you could be getting higher personal rewards elsewhere".'

This is inconsistent with utility maximization assumptions. Elsewhere on O.B. it has been discussed that the pursuit of dreams can be a payoff in itself. We must notice that it is the expectation of highest personal rewards, not the rewards themselves, that drives our decision-making. Working on the seemingly very hard problems is rewarding because it is among the set of most rewarding activities available.

Perseverance, like everything, is good in moderation.

The funny thing is that the recent popularization of economics, all the Freakonomics-style books (Dan Ariely, Tyler Cowen, Tim Harford, Robert Frank, Steve Landsburg, Barry Nalebuff), is summed up by Steve Levitt when he said he likes solving little problems rather than not solving big problems. Thus, economists still don't understand business cycles, growth, inequality--but they are big on why prostitutes don't use condoms, why sumo wrestlers cheat in tournaments, or why it is optimal to peel bananas from the 'other' end. It's better than banging your head against the wall, but I don't think anyone spends the first two years in econ grad school to solve these problems.

@eric falkenstein

"solving little problems"

But you know eric, solving the seemingly little problem often illuminates a great natural principle. One of my favorite examples of this is Huygens, when down with the flu, suddenly noticing how the pendulums of his clocks always ended up swinging against each other. Such a tiny thing, how important could it be? Yet in the end so-called coupled oscillation is everywhere from lasers to fireflies. Never underestimate the power of the small insight.

This from the 5th:

But if you can't do that which seems like a good idea - if you can't do what you don't imagine failing - then what can you do?

And this from today:

To do things that are very difficult or "impossible",

First you have to not run away. That takes seconds.

Then you have to work. That takes hours.

Then you have to stick at it. That takes years.

are nice little nuggets of wisdom. If I were more cynical, I might suggest they are somewhat commonsense, at least to those attracted to seemingly intractable dilemmas and difficult work of the mind, but I won't. It's good to have it summed up.

I find this rah-rah stuff very encouraging, Eliezer. Zettai daijyobu da yo ("it's definitely going to be okay") and all that. Good to bear in mind in my own work. But I think it is important to remember that not only is it possible you will fail, it is in fact the most likely outcome. Another very likely outcome: you will die.

You need a memento mori.

Phil Goetz: I would still really LOVE to see you write up the book-length version of the above comment as you once suggested.

"If you are talking about a timescale of decades, than intelligence augmentation does seems like a worthy avenue of investment"

This seems like the way to go to me. It's like "generation ships" in sci-fi. Should we launch ships to distant star systems today, knowing that ships launched 10 years from now will overtake them on the way?

Of course in the case of AI, we don't know what the rate of human enhancement will be, and maybe the star isn't so distant after all.

The things you have had to say about "impossible" research problems are among your most insightful. They fly right in the face of the more devilishly erroneous human intuitions, especially group intuitions, about what are and are not good ways to spend time and resources.

"Trying to do the impossible is definitely not for everyone. Natural talent is only the ante to sit down at the table"

It's a bit of a technical thing, but as a 20-year professional poker player, I would suggest that you change "ante" to "buy-in." It's what you mean.

An "ante" is the chips you put in to play before being dealt cards on any given single hand (and most poker games now, being Texas Hold'em, don't have antes at all).

A "buy-in" is everything you're putting at risk.

Reminded me immediately of Philippe Petit — the French artist who gained fame for his high-wire walk between the Twin Towers (WTC) in 1974. Commenting on the vision of his yet-to-be-accomplished project, he said:

"It's impossible, that's sure. So let's start working."

The inspiring documentary on his feat, Man on Wire (2008), won the Academy Award for Best Documentary Feature. His kind of perseverance seems to rely on a rather unbreakable spirit of self-confidence.

There are two issues entangled in not trying to do the impossible, but doing it.

The first issue is trying. To try is to focus on your success or failure, instead of focusing on the problem and a solution. That brings your ego into the equation. That's one of the best ways to fail. The majority of problems people have are trivial to solve once they are not our personal problems, embedded in our hopes, fears, pride, and personal bits of crazy.

I try to get myself and my ego out of my problems. One of my "thinking hats" is a mythical older brother Jonathan that I channel to think about my problems for me. I've tried the trick with other people: "What would your big sister say you should do?" Presto! The solution is obvious.

The second issue is the evidence for impossible - how do you claim to know it is impossible? To believe that something is impossible is to mistake "No one sees how it is possible" for "It is impossible". As the years have gone by, I've been increasingly struck by just how stupid humans are, and how most of our intelligence is just the painfully slow accumulation of the cultural store of better concepts. Once you keep in mind how humans have been wrong forever about most things, the fact that everyone claims that something is impossible loses its misperceived predictive force.

I liked the link to Kernighan:

http://www.reddit.com/r/programming/comments/8yzkq/rubber_duck_debugging_wikipedia/

Those were all helpful for problem solving, but I was trying to get at something a little different.

When your family or friends share real issues they have in their lives, haven't you noticed that the solutions are obvious, but they don't do them? When you bring up the solution, they see it, but change the subject as fast as they can. The next thing out of their mouths is "But ..." The problem isn't confusion or lack of knowledge. It's denial. It's evasion. It's twisted motivation. They don't need more rational techniques, they need an emotional technique to set their crazy aside and let the rationality they have out.

That's what I was trying to get at on "the first issue" above. When you're stuck on your problem, distancing yourself from the problem often helps distance unhelpful motivations from the problem.

In case it isn't clear, I know my friends see the same thing in me, and I see it in myself as well.

Agree with observation, disagree with interpretation. I've tried the obvious solutions to various problems for a long time. It's not "I know this would work but I don't want to admit it and go do it.". It's "I know this obviously looks like it would work. I have no idea why this wouldn't work. I expect everyone including myself to come up with that within five seconds of hearing the problem, and I can't come up with an objection. Yet, when I try to do it, things that should happen just don't. I have no clue why, it just fails silently.". I still don't understand the concept of a glass window, but at least I can learn that flying through it doesn't work.

That happens sometimes too - everyone, including you, thinks they have the solution, and they're wrong. "It's so easy, just do blah blah blah." Turns out blah blah blah didn't work.

But are you sure you "did it"? Did you confirm that you actually did blah blah blah according to someone besides yourself? Did they witness you do it, or just get your report? Surely you could search the web and either find people confirming the same failure, or claiming that the solution did in fact work for them.

Another possible reason is that "everyone" is engaged in some kind of denial, disinterest, or dishonesty. Maybe they think they're being "encouraging" with their solution. Maybe they don't really care, and are tossing out the first thing that comes to mind. Maybe they have some of their own motivational perversity which makes them want to believe that the problem is easy.

You say you have no clue, but is it really impossible to find one? Have you tried all these avenues?

It's always possible that everyone is just wrong - though in some cases that's a tremendous opportunity to profit by what is right. But it sucks more to not even try the "known" solution out of some motivational perversity.

But are you sure you "did it"? Did you confirm that you actually did blah blah blah

No! That's the problem. The failure is slippery. Often it happens somewhere in the deep dark recesses of my mind where grues do lurk. Sometimes it looks like it did and a year later it turns out to be due to an external circumstance. Sometimes it has a clear external cause, but shouldn't I have planned for it and found a way to apply the solution anyway? It always fails in a way that leaves "You're lazy and stupid. Try harder." quite likely, until after several rounds of trying harder I'm just left saying "Okay, I may be lazy and stupid. But there's no amount of trying that allows me to beat the laziness and stupidity, so I'm going to route around them.".

Surely you could search the web and either find people confirming the same failure, or claiming that the solution did in fact work for them.

Another possible reason is that "everyone" is engaged in some kind of denial, disinterest, or dishonesty.

Their solutions are confirmed to work so often they're obvious. The reasoning seems to be "Well, that would obviously work if there were no additional constraints, and I can't see any additional constraints.". But additional constraints exist, invisibly.

So the solution is confirmed, and it works for other people, but you're failing to execute the solution as others do, and attribute the failure to laziness and stupidity.

First, I doubt that either is the issue. I looked at a bunch of your posts, and you're what most people would call smart. You may be lazy, but I doubt it. I would guess that with a well-defined problem that you care about and a clear path to completion, if you have no emotional angst associated with the problem, or otherwise consuming your attention, you can happily crank away at the problem all the livelong day. Am I right?

I can call that lazy and stupid too, but I wouldn't be using the terms in the same way most people do.

Your "additional constraints" and "laziness and stupidity" are sounding a lot like what I was calling "motivational perversity".

Looking back, I guess I wasn't clear. I didn't mean to imply doing absolutely nothing, or even very little, even when I was talking about evasion. Because evasion is an activity in itself, and if you didn't have the issue to deal with, you probably wouldn't be engaging in the activity you use for evasion so much. The ways of not doing what you need to do can become compulsive activities in themselves. But even when you directly attack the problem, you do it ineffectively, often in ways that you know are ineffective. That looks a lot like lazy and stupid, but I don't think that really gets at the heart of the problem.

Well... either you're laughably wrong, or you're only right in the sense that your "if you have no emotional angst" conditional very rarely applies.

Your "additional constraints" and "laziness and stupidity" are sounding a lot like what I was calling "motivational perversity".

Yes. I'm trying to explain motivational perversity. Your hypothesis seems to be "People evade stuff. If they said 'okay, no more evading' and tried really hard to do stuff that should work, stuff would work". I call that "lazy and stupid". My other hypothesis is "People try to do stuff, but invisible obstacles deflect them; it looks like they're evading solutions, but actually it's more like sliding off them".

If I misunderstood and you don't predict "no more evading" will help, what is "motivational perversity" doing?

I certainly don't expect that saying "no more evading" will generally work. The real perversity is that you know that you're evading, but you're still doing it anyway. Trying really hard to do stuff won't necessarily work, even when actually doing the known right stuff will. You're deviating from the known solution in some way that isn't apparent to you.

Sliding off isn't a bad way to put it. You start down the path, but find yourself diverted again and again off the path. Are you hitting obstacles, and taking the wrong path? Just not persevering? Did the will to execute wane? Did the will to evade wax?

I'll give you some examples. Maybe you start bemoaning some injustice in the situation. Or obsessing over what might happen. Or going into speculative analysis cycles. Or optimization cycles. Or you're leaving some step out because it "shouldn't" matter, whether the shouldn't is epistemic or moral.

Okay, but then what to do about it? "Motivational perversity" seems to be doing useful predictive work - namely, that "try harder" is a good solution. If our best description is "people fail to do stuff despite trying really really hard, and we don't really know why" - what are we even doing placing the source of the failure in "motivation" rather than "executive function" or "modelling and planning" or "sharing control between conscious and subconscious modules" or even "executing motor actions"?

Hmmm, it's been a while, but someone pointed me here again, so here goes.

I placed it on motivation because the modeling and planning is easy, if the problem isn't yours.

What to do about it? Talk to someone else. I went to counseling for a year. One of the mistakes I made, and the counselor made, was never just having him suggest a solution. He's all busy "not directing" me, when in fact what I needed was perspective. Going round and round in my head wasn't getting anywhere.

Eventually I think I found some of my motivational perversion, some of my unacknowledged beliefs and choices that explained my bad behavior.


If you spend a year or two working on the domain, then, if you don't get stuck in any blind alleys, and if you have the native ability level required to make progress, you will understand it better. The apparent difficulty of problems may go way down. It won't be as scary as it was to your novice-self.

Actually, one notices this effect after 3 courses in college. If you practice, related problems become easier to solve. And I guess that's as close to a universal rule as I'll get any time soon.

How do you even get to this level of introspection? Incredible stuff dude.

It would be interesting to see a list of solutions to problems that were previously thought, e.g. by almost all experts in the field, to be clearly impossible, i.e. insoluble.

One that occurs to me is public key encryption. I.e. the very notion that you could send a message in code where anyone can see the encoded message and know how you're encrypting it, yet can't decode it.
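To make the notion concrete, here is a minimal toy sketch in Python (textbook RSA with tiny primes; my own illustration rather than anything from the comment, and far too small to be secure): the encryption recipe (n, e) can be completely public, yet decryption requires a private number derived from the secret factorization of n.

    p, q = 61, 53                  # secret primes
    n = p * q                      # public modulus: 3233
    e = 17                         # public exponent
    phi = (p - 1) * (q - 1)        # 3120
    d = pow(e, -1, phi)            # private exponent (modular inverse; needs Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)     # anyone who knows (n, e) can encrypt
    recovered = pow(ciphertext, d, n)   # only the holder of d can decrypt

    assert recovered == message

Everyone can see n, e, and the ciphertext and still not recover the message in practice, because finding d requires factoring n, which is infeasible for realistically large primes.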

Relativity may be another case - specifically weird things about e.g. simultaneity and time dilation, which seemed to be more or less logical impossibilities. While that was a discovery as much as a solution (i.e. theory), the solution was extremely unobvious.