All of alkjash's Comments + Replies

alkjash

I appreciate the effort but am hoping to solve this problem in an afternoon (if not five minutes) and forget about it, instead of acquiring the correct language to think about things or a full theory of diet and nutrition.

alkjash

My thought process goes like: on most weekdays I sure wish I could skip breakfast and/or lunch and only have one sit-down meal with my family in the evening. Time savings and convenience are the main concerns I suppose.

The first solution that came to mind was to try Soylent/Mealsquares/Huel for a month and cross my fingers, 50/50 it just goes well and solves the problem. I posted to see if there were any obvious considerations I was missing, or clear standout options to try first.

Pre-made frozen meals and protein bars are also plausibly acceptable meal rep... (read more)

nim
FWIW, this is likely to be a worse problem with a meal replacement than a protein bar, and a worse problem with a protein bar than a frozen option. That adds complexity. Are there social norms at work which necessitate eating with others? If so, having a shake or similar every day may not meet those needs.

Are you aware of the concept of OMAD (one meal a day)? I don't think it's super likely that this is the right solution for you, but it seems like you'd learn useful things about the best solution for your food-is-inconvenient problem by considering it as an option and determining why you would rule it out. Basically, unless you're diabetic or attempting to gain weight, you can just have all your day's calories in a single meal instead of spread across multiple. Again, there are many reasons why this might not be a good fit, but it seems worth making sure that it's in your Overton window as an option that works for some people.

(edit to add) A "full meal" for someone who's smaller, sedentary, or pursuing weight loss can be a protein bar. A "full meal" for someone who's larger, more active, or pursuing weight gain can be 10x that amount, at the extreme. We sort of have a standard daily intake of 2,000 kcal from nutrition facts, but not even food packaging attempts to prescribe how many meals an individual eats in a day or how they distribute their intake across those meals, so asking whether an item is packaged in a size suitable for a "full meal" is like asking whether a piece of software will run on "a computer".
alkjash

I don't disagree with what you're saying about theoretically rational agents. I think the content of my post was [there are a bunch of circumstances in which humans are systematically irrational, and the sunk cost fallacy is on net a useful corrective heuristic in those circumstances. Attempting to make rational decisions via explicit legible calculations will in practice underperform just following the heuristic.]

To spell out a bit more, imagine my mood swings cause a large random error term to be added to all explicit calculations. If the decision process is to drop a project altogether at any point where my calculations say the project is doomed, then I will drop a lot of projects that are not actually doomed.
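To make that concrete, here's a throwaway simulation (toy numbers of my own choosing, not a claim about anyone's actual psychology): the project is genuinely worth finishing, each period a noisy explicit calculation is run, and the project is abandoned the first time the calculation comes out negative.

```python
import random

def fraction_abandoned(noise, true_value=1.0, periods=20, trials=10_000):
    """Fraction of genuinely worthwhile projects dropped under the rule
    'abandon the moment an explicit calculation says the project is doomed',
    when each calculation has a mood-driven random error term added to it."""
    dropped = 0
    for _ in range(trials):
        for _ in range(periods):
            estimate = true_value + random.gauss(0, noise)  # noisy explicit calculation
            if estimate < 0:  # calculation says "doomed" -> drop the project
                dropped += 1
                break
    return dropped / trials

for noise in (0.5, 1.0, 2.0):
    print(f"noise={noise}: {fraction_abandoned(noise):.0%} of worthwhile projects dropped")
```

Even with modest noise, re-running the calculation every period gives the error term many chances to cross the "doomed" threshold, which is the sense in which the sunk-cost heuristic acts as a corrective.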

Peter Berggren
I agree with you on this, but I also don't think "sunk cost fallacy" is the right word to describe what you're saying. The rational behavior here is to factor the existence of a random error term resulting from mood swings into these calculations, and if you can't fully factor it in, then generally err on the side of keeping projects going. I understand "sunk cost fallacy" to mean "factoring the amount of effort already spent into these decisions," which does seem like a pure fallacy to me.

It's reasonable e.g. when about to watch a movie to say "I'm in a bad mood, I don't know how bad a mood I'm in, so even though I think the movie's not worth watching, I'll watch it anyway because I don't trust my assessment and I decided to watch it when in a calmer state of mind." Sunk cost fallacy is where you treat it differently if you bought yourself the tickets versus if they were given to you as a gift, which does seem, even in your apology for "sunk cost fallacy," to remain a fallacy.
alkjash

I still don't understand. Your valuation of the project will still change over time as information actually gets revealed though. The probability the project will turn out worthwhile can fluctuate.

Peter Berggren
At any given point, you have some probability distribution over how worthwhile the project will be. The distribution can change over time, but it can change either for better or for worse. Therefore, at any point, if a rational agent expects it not to be worthwhile to expend the remaining effort to get the result, they should stop. Of course, if you are irrational and intentionally fail to account for evidence as a way of getting out of work, this does not apply, but that's the problem then, not your lack of sunk costs.
alkjash

I don't follow. As a project progresses it seems common to acquire new information and continuously update your valuation of the project.

Peter Berggren
Sorry if this is confusing. What I'm saying is, you have some estimate of the project's valuation, and this factors in the information that you expect to get in the future about the project's valuation (cf. Conservation of Expected Evidence). If there's some chance the project will turn out worthwhile, you know that chance already. But there must also be some counterbalancing chance that the project will turn out even less worthwhile than you think.
alkjash

I taught game theory at Princeton and wish I'd seen this explanation beforehand, excellent framing.

alkjash

In the territory, bad event happens [husband hits wife, missile hits child, car hits pedestrian]. There is no confusion about the territory: everyone understands the trajectories of particles that led to the catastrophe. But somehow there is a long and tortuous debate about who is responsible/to blame ["She was wearing a dark hoodie that night," "He should have come to a complete stop at the stop sign", "Why did she jaywalk when the crosswalk was just 10 feet away!"].

The problem is that we mean a bunch of different things simultaneously by blame/responsibi... (read more)

alkjash

I don't have a complete reply to this yet, but wanted to clarify if it was not clear that the position in this dialogue was written with the audience (a particularly circumspect broad-map-building audience) in mind. I certainly think that the vast majority of young people outside this community would benefit from spending more time building broad maps of reality before committing to career/identity/community choices. So I certainly don't prescribe giving up entirely.

ETA: Maybe a useful analogy is that for Amazon shopping I have found doing serious research... (read more)

Seeing patterns where there are none is also part of my writing process.

This paper of mine answers exactly this question (nonconstructively, using the minimax theorem).

I feel there is an important thing here, but [setting the zero point] is either not the right frame or a special case of the real thing, [blame and responsibility are often part of the map and not part of the territory], which is closely related to asymmetric justice and the Copenhagen interpretation of ethics.

Raemon
I'm interested in hearing more about what you meant here, if you're up for digging into it.

Afaict, the first simple game is not the prisoner's dilemma, nor is it zero-sum, nor is the prisoner's dilemma zero-sum.

Stephen Zhao
The first game is the prisoner's dilemma if you read the payoffs as player A/B, which is a bit different from how it's normally presented. And yes, prisoner's dilemma is not zero sum.

This is not intended as a criticism in any way, but this post seems to overlap substantially with https://www.lesswrong.com/posts/k9dsbn8LZ6tTesDS3/sazen. 

[Edit: After looking at the timestamps, it looks like that post actually came out after; anyway, it might be a helpful alternative perspective on the same phenomenon.]

Is it just me or are alignment-related post titles getting longer and longer?

This post has a lot of particular charms, but also touches on a generally under-represented subject in LessWrong: the simple power of deliberate practice and competence. The community seems saturated with the kind of thinking that goes [let's reason about this endeavor from all angles and meta-angles and find the exact cheat code to game reality] at the expense of the simple [git gud scrub]. Of course, gitting gud at reason is one very important aspect of gitting gud in general, but only one aspect.

The fixation on calibration and correctness in this commun... (read more)

It seems important to notice that we don't have control over when these "shimmying" strategies work, or how. I don't know the implication of that yet. But it seems awfully important.

A related move is when applying force to sort of push the adaptive entropy out of a certain subsystem so that that subsystem can untangle some of the entropy. Some kinds of meditation are like this: intentionally clearing the mind and settling the body so that there's a pocket of calmness in defiance of everything relying on non-calmness, precisely because that creates clarity

... (read more)
Valentine
Yep. I'm receiving that. Thank you. That update is still propagating and will do so for a while.

Ah, interesting. I can't reliably let go of any given outcome, but there are some places where I can tell I'm "gripping" an outcome and can loosen my "grip". (…and then notice what was using that gripping, and do a kind of inner dialogue so as to learn what it's caring for, and then pass its trust tests, and then the gripping on that particular outcome fully leaves without my adding "trying to let go" to the entropic stack.)

Aiming for indirect efforts still feels a bit to me like "That outcome over there is the important one, but I don't know how to get there, so I'm gonna try indirect stuff." It's still gripping the outcome a little when I imagine doing it. It sounds like here there's a combo of (a) inferential gap and (b) something about these indirect strategies I haven't integrated into my explicit model.

Yep.
alkjash

Very strongly agree with the part of this post outlining the problem, your definition of "addiction" captures how most people I know spend time (including myself). But I think you're missing an important piece of the picture. One path (and the path most likely to succeed in my experience) out of these traps is to shimmy towards addictive avoidance behaviors which optimize you out of the hole in a roundabout way. E.g. addictively work out to avoid dealing with relationship issues => accidentally improve energy levels, confidence, and mood, creating slack to solve relationship issues. E.g. obsessively work on proving theorems to procrastinate on grant applications => accidentally solve famous problem that renders gra... (read more)

Sure, no big deal.

You can't fight fire with fire: getting out of a tightly wound x-risk trauma spiral involves grounding and building trust in yourself, not being scared into applying the same rigidity in the opposite direction.

The comment is generally illuminating but this particular sentence seems too snappy and fake-wisdomy to be convincing. Would you mind elaborating?

There's a class of things that could be described as losing trust in yourself and in your ability to reason.

For a mild example, a friend of mine who tutors people in math recounts that many people have low trust in their ability to do mathematical reasoning. He often asks his students to speak out loud while solving a problem, to find out how they are approaching it. And some of them will say something along the lines of, "well, at this point it would make the most sense to me to [apply some simple technique], but I remember that when our teacher was demonstr... (read more)

TekhneMakre
IDK if helpful, but my comment on this post here is maybe related to fighting fire with fire (though Elizabeth might have been more thinking of strictly internal motions, or something else): https://www.lesswrong.com/posts/kcoqwHscvQTx4xgwa/?commentId=bTe9HbdxNgph7pEL4#comments And gjm's comment on this post points at some of the relevant quotes: https://www.lesswrong.com/posts/kcoqwHscvQTx4xgwa/?commentId=NQdCG27BpLCTuKSZG
Elizabeth
That's a super reasonable request that I wish I was able to fulfill. Engaging with Val on this is extremely costly for me, and it's not reasonable to ask him to step out of a conversation on his own post, so I can't do it here. I thought about doing a short form post but couldn't: I feature-creeped myself to the point it was infeasible.

Thanks for sharing this, it puts into relief a problem I've noticed about academic research: the real research happens behind closed doors and in private communications that the young people don't have access to. Young people end up only learning about the finished theorems much later on in polished form.

That's great to hear, I've been slowly working on this myself in recent years. E.g., it's greatly improved my gaming experience - from being a total lurker to engaging with Discords, posting bugs and suggestions, occasionally writing Steam guides - it's enriching for sure.

I don't know about skills plural, but the game definitely drilled in that particular skill of aiming to falsify one's hypotheses instead of just confirming them. That's a skill well worth a dozen hours of deliberate practice in my opinion.

MondSemmel
Glad to hear you enjoyed it! Seeing how you do math research, do you share my sense that this game requires / benefits from some basic skills that are also required in rationality and research, albeit at a necessarily much shallower level?

I reimplemented the game in vanilla Python and managed to simulate it several hundred times with ~10k random species for a total of hundreds of thousands of generations.

Unfortunately, I didn't read Hylang documentation carefully and thought foragers could simultaneously eat one of every food available, instead of just the most nutritious one...

Only my throwaway locust clone survived under the real rules. :'(
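For the curious, the rule I misread versus the real one looks roughly like this (made-up nutrition values, not the game's actual code):

```python
foods_here = [3, 1, 7]  # made-up nutrition values of the foods at one location

def what_i_implemented(foods):
    # my misreading: a forager eats one of every available food
    return sum(foods)

def actual_rule(foods):
    # the real rule: a forager eats only the single most nutritious food
    return max(foods) if foods else 0

print(what_i_implemented(foods_here))  # 11 -- wildly inflated energy income
print(actual_rule(foods_here))         # 7
```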

Haven't played Osu! for many years now unfortunately. I only got into it briefly to practice mouse accuracy for FPS games, but that motivation has dried up. I suspect Osu! would still be damn good fun without it, so I'll let you know if it gets to the top of my gaming queue. :)

Here are two recentish papers I really enjoyed reading, which I think are fairly reasonable to approach. Some of the serious technical details might be out of reach.

https://arxiv.org/abs/1909.03562

http://www.cs.tau.ac.il/~nogaa/PDFS/induniv1.pdf

I tried Touhou Perfect Cherry Blossom at one point and never got past any difficulty, so I defer to your expertise here. There's a general skill of getting better at focusing one's attention in tandem with getting better at execution and this post is only a first approximation.

Yea, I think there's some general pattern of the form:

  1. Research is weird and mysterious.
  2. Instead of studying research, why don't we study the minds that do research?
  3. But minds are equally weird and mysterious!
  4. Ah yes, but you are yourself possessed of a mind, which, weirdly enough, can imitate other minds at a mysteriously deep level without consciously understanding what they're doing.
  5. Profit.

I love the film study post, thanks for linking! This all reminds me of a "fishbowl exercise" they used to run at the MIRI Summer Fellows program, where everyone crowded around for half an hour and watched two researchers do research. I suppose the main worry about transporting such exercises to research is that you end up watching something like this.

alkjash

But then he encounters the rigamarole of the whole process you describe in your post and it stops him from doing what he originally dreamed. He needs to get published. He needs to do original research. He needs to help his advisor and other professors do their research. He needs to do all of that because otherwise he won't be respected enough to actually have a career in physics research. But doing that kind of work isn't why he got into physics in the first place!

I'm confused about the claim that the academic process is at all misaligned with his original... (read more)

frontier64
No, I don't think the academic process is aligned with making paradigm-shifting breakthroughs. Scott Alexander wrote a good piece that addresses this question. His purpose was to rebut the notion that modern scientists are way less efficient than their historical counterparts. I generally agree with his conclusion that the modern academic research apparatus isn't hampering scientific advancement in any way that would affect the trendlines. Yet I think he also cites a lot of good evidence which rebuts the opposite notion: that academic research has done anything positive for scientific advancement. Although Scott himself doesn't come to that conclusion.

Most of the examples of paradigm-shifting work I can think of came from giving people who were very smart a large stipend of money to live off of and allowing them to research what they wanted (Newton, Leibniz, even Einstein counts, as working as a patent examiner essentially gave him a stipend and an office where he got to do thought experiments). The other similarly effective method is getting a lot of smart people working together, giving them a bunch of money of course, and also giving them a goal to accomplish within a few years (e.g. the Manhattan Project, cryptography protocols).

Money and smart people seem to be a good baseline for what's required for scientific advancement. Academic research has a lot of money and smart people, that's for sure! But it also has a lot of other features, the features you describe in your post, and it's not clear to me that they actually do anything. Based on historical evidence, it seems that if we gave research grants to smart and personable university graduates and gave them carte blanche to do with the money what they wished, that would work just as well as the current system.
JBlack
They may or may not be instrumental in achieving the original goal, but they're not the goal and certainly not the envisioned process. That's regardless of whether the original process was ever realistic. In particular the process toward "having a whole community and field at your back" is about 95% politics, not research, and requires a very different mindset and skill set than actually doing research.

We are using the word "coast" differently - what I meant by coasting is that many of the professors I know would have to actively sabotage their own research groups and collaborators to not produce ~five nice papers a year (genuine though perhaps not newsworthy contributions to the state of knowledge). 

Of course, the state of affairs seriously varies with the quality of the institution.

Right, the structure is quite simple. The only thing that came to mind about finite factored sets as combinatorial objects was studying the L-function of the number of them, which surely has some nice Euler product. Maybe you can write it as a product of standard zeta functions or something? 
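If anyone wants to poke at the counting question numerically, here's a brute-force enumerator I'd start from. It encodes my own reading of the definition (factors are partitions with at least two parts, and the "coordinates" map must be a bijection), so treat the outputs as provisional.

```python
from itertools import combinations

def set_partitions(xs):
    """All set partitions of the list xs."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):           # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part               # or into a block of its own

def count_factorizations(n):
    """Number of factorizations of a labeled n-element set: sets B of partitions,
    each with >= 2 parts, such that s -> (block of s in each factor) is a bijection."""
    S = list(range(n))
    factors = [p for p in set_partitions(S) if len(p) >= 2]
    total = 1 if n == 1 else 0               # only the empty factorization works for |S| = 1
    for k in range(1, len(factors) + 1):
        for B in combinations(factors, k):
            sizes = 1
            for b in B:
                sizes *= len(b)
            if sizes != n:
                continue                     # block counts must multiply to |S|
            coords = {tuple(next(i for i, blk in enumerate(b) if s in blk) for b in B)
                      for s in S}
            if len(coords) == n:             # injective (hence bijective) coordinates
                total += 1
    return total

for n in range(1, 5):
    print(n, count_factorizations(n))        # 1, 1, 1, 4 if I've matched the definition
```

In particular the count is 1 whenever n is prime (the only option is the single discrete partition), which is at least suggestive that an Euler-product structure isn't crazy.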

alkjash

Are there any interesting pure combinatorics problems about finite factored sets that you're interested in?

Scott Garrabrant
  1. Given a finite set Ω of cardinality n, find a computable upper bound on the largest finite factored set model that is combinatorially different from all smaller finite factored set models. (We say that two FFS models are combinatorially equivalent if they say the same thing about the emptiness of all boolean combinations of histories and conditional histories of partitions of Ω.) (Such an upper bound must exist because there are only finitely many combinatorially distinct FFS models, but a computable upper bound would tell us that temporal inference is computable.)
  2. Prove the fundamental theorem for finite dimensional factored sets. (Seems likely combinatorial-ish, but I am not sure.)
  3. Figure out how to write a program to do efficient temporal inference on small examples. (I suspect this requires a bunch of combinatorial insights. By default this is very intractable, but we might be able to use math to make it easier.)
  4. Axiomatize complete consistent orthogonality databases (consistent means consistent with some model, complete means has an opinion on every possible conditional orthogonality). (To start, is it the case that compositional semigraphoid axioms already work?)

If by "pure" you mean "not related to history/orthogonality/time," then no, the structure is simple, and I don't have much to ask about it.

This is great!

I'm interested in the educational side of this, particularly how to do one-on-one mentorship well. I've had effective mentors in the past who did anything from [blast me with charisma and then leave me to my own devices] to [put me under constant surveillance until I passed the next test, rinse, repeat.] Can you say something about your educational philosophy/methods?

johnswentworth
There's a lot of different kinds-of-value which mentorship can provide, but I'll break it into two main classes:

  • Things which can-in-principle be provided by other channels, but can be accelerated by 1-on-1 mentorship.
  • Things for which 1-on-1 mentorship is basically the only channel.

The first class includes situations where mentorship is a direct substitute for a textbook, in the same way that a lecture is a direct substitute for a textbook. But it also includes situations where mentorship adds value, especially via feedback. A lecture or textbook only has space to warn against the most common failure-modes and explain "how to steer", and learning to recognize failure-modes or steer "in the wild" takes practice. Similar principles apply to things which must be learned-by-doing: many mistakes will be made, many wrong turns, and without a guide, it may take a lot of time and effort to figure out the mistakes and which turns to take. A mentor can spot failure-modes as they come up, point them out (which potentially helps build recognition), point out the right direction when needed, and generally save a lot of time/effort which would otherwise be spent being stuck. A mentor still isn't strictly necessary in these situations - one can still gain the relevant skills from a textbook or a project - but it may take longer that way.

For these use-cases, there's a delicate balance. On the one hand, the mentee needs to explore and learn to recognize failure-cases and steer on their own, not become reliant on the mentor's guidance. On the other hand, the mentor does need to make sure the mentee doesn't spend too much time stuck. The socratic method is often useful here, as are the techniques of research conversation support role. Also, once a mistake has been made and then pointed out, or once the mentor has provided some steering, it's usually worth explicitly explaining the more general pattern and how this instance fits it. (This also includes things like pointing
alkjash

This is fascinating and I'd love to hear more depth on whatever you'd be willing to share.

Regarding the suggestion to start with something small, I think in hindsight it was kind of a manipulation on my part to make the tool seem safer and to try to get more people to try it. In my limited experience, internal conflicts that seem small rarely turn out to be. 

When I first tried IDC at CFAR, the initial "small starting point" of "Should I floss?" dredged up a whole complex about distrust of doctors in particular and authority in general. A typical experience with watching myself and others IDC is that regardless of the starting point, one ends up in a grand dramatic battle of angels and demons over one's soul.

Emiya
Alright, I don't think I have any problem talking a bit about it in private with you; for the time being I'd rather avoid sharing more in public though. If anyone else thinks information on this could be helpful they can contact me, but please only do so if you think it's really relevant that you know.
alkjash

Thanks for reminding me about this talk! I read it one more time just now and was struck by passages that I completely missed the first couple times:

Ed David was concerned about the general loss of nerve in our society. It does seem to me that we've gone through various periods. Coming out of the war, coming out of Los Alamos where we built the bomb, coming out of building the radars and so on, there came into the mathematics department, and the research area, a group of people with a lot of guts. They've just seen things done; they've just won a war which

... (read more)
Ben Pace
You're welcome. And wow, they're both great paragraphs. And I didn't remember either of those paragraphs either.
alkjash

Is the following interpretation equivalent to the point?

It can be systematically incorrect to "update on evidence." What my brain experiences as "evidence" is actually "an approximation of the posterior." Thus, the actual dog is [1% scary], but my prior says dogs are [99% scary], so I experience the dog as [98% scary], which my brain rounds back to [99% scary]. And so I get more evidence that I am right.
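One way to see the failure mode numerically. The numbers are toy, and the "percept = prior-weighted blend of prior and evidence" model (with a made-up weight w) is my own simplification of the point:

```python
import math

def logit(p): return math.log(p / (1 - p))
def sigmoid(x): return 1 / (1 + math.exp(-x))

prior = 0.99     # prior: dogs are [99% scary]
evidence = 0.01  # what this particular dog's behavior actually supports: [1% scary]

# The percept is not raw evidence: it is mostly prior with a little evidence mixed in.
w = 0.10  # hypothetical weight the percept gives to the actual sensory data
percept = sigmoid((1 - w) * logit(prior) + w * logit(evidence))  # ~[98% scary]

correct = sigmoid(logit(prior) + logit(evidence))  # proper update on raw evidence: ~50%
broken = sigmoid(logit(prior) + logit(percept))    # "updating" on the percept: ~99.97%

print(f"percept={percept:.2f}  correct={correct:.2f}  broken={broken:.4f}")
```

Updating on the percept as if it were independent evidence double-counts the prior, so the calm dog ends up making me more confident that dogs are scary.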

I'm not totally convinced this is the right way to think about it: any given useful mutation will depend on some constant number of coordinates flipping, so in this high-dimensional space you're talking about, useful mutations would look like affine subspaces of low codimension. When you project down to the relevant few dimensions, there are probably more copies of the virus than points to fit in, and it takes a long time for them to spread out.

I guess it depends on the geometry of the problem, whether there are a small number of relevant mutations that make a difference, each with a reasonable chance of being reached, or a huge number of relevant mutations each of which is hard to reach.

Adding onto this a little, here's a toy model of viral genetic diversity based on my high-school level biology. 

Suppose the virus' DNA starts out as 000 (instead of ACTG for simplicity), and it needs to mutate into 111 to become stronger. Each individual reproduction event has some small probability p of flipping one of these bits. Some bit flips cause the virus to fail to function altogether, while others have no or negligible effect on the virus. As time goes on, the number of reproduction events starting from a given bitstring grows exponentially, ... (read more)

Douglas_Knight
Given the high dimension of the search space, I think (b) is negligible and the linear model (a) of your first comment is better. In low dimension the boundary of the unit sphere is small and you can have a lot of copies on the inside, having to pass through the sphere to reach new terrain. Whereas, in high dimensions, the population will quickly thin out and all be unique, so what matters is the total volume of space explored, not how long it takes to get anywhere.

re: why are there more scary new strains now: 

Have people already accounted for the fact that the more virus there is in the world, the more likely it is for one of these viruses to mutate? If there are 5x as many cases of covid floating around right now as in September, a strain as bad as the UK strain will emerge 5x as quickly in expectation.

alkjash
Adding onto this a little, here's a toy model of viral genetic diversity based on my high-school level biology.

Suppose the virus' DNA starts out as 000 (instead of ACTG for simplicity), and it needs to mutate into 111 to become stronger. Each individual reproduction event has some small probability p of flipping one of these bits. Some bit flips cause the virus to fail to function altogether, while others have no or negligible effect on the virus. As time goes on, the number of reproduction events starting from a given bitstring grows exponentially, so the likelihood of getting one more 1 grows exponentially as well. However, each time you jump from 000 to 100, it's not as if all other copies of 000 turn into 100, so making the next jump takes a while of waiting on lots of copies of 100 to happen. And then some 101 appears, and there's no jump for a while again as that strain populates.

The upshot is that you imagine the viral population to be "filling out the Hamming cube" one bitflip at a time, where the weight of each bitstring is the total number of viruses with that code, and a genuinely new strain only appears when all 3 bits get flipped in some copy. But:

(a) The more total copies of the virus there are, the faster a bad mutation happens (speed scaling linearly).

(b) Assuming that some mutations require multiple independent errors to occur (which seems likely?), the virus population is "making incremental research progress" over time by spreading out across the genetic landscape towards different strains, even when no visibly different strains occur.
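A crude simulation of the toy model, if anyone wants to play with it. I use a fixed population size rather than exponential growth, so the scaling is only qualitative, and the per-bit flip rate is made up:

```python
import random

P_FLIP = 1e-3  # made-up probability that a given bit flips per reproduction event

def generations_until_111(pop_size, rng):
    """Each generation, every copy reproduces once; each bit of the child's genome
    flips independently with probability P_FLIP. Return the generation at which
    the fully-mutated strain 111 first appears."""
    population = [(0, 0, 0)] * pop_size
    gen = 0
    while (1, 1, 1) not in population:
        gen += 1
        population = [tuple(b ^ (rng.random() < P_FLIP) for b in genome)
                      for genome in population]
    return gen

rng = random.Random(0)
for pop_size in (1_000, 8_000):
    times = [generations_until_111(pop_size, rng) for _ in range(5)]
    print(pop_size, sum(times) / len(times))
```

With these numbers the larger population finds 111 noticeably sooner, which is point (a); tracking intermediate Hamming weights over time would show the incremental spread in (b).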

This feels like an extremely important point. A huge number of arguments devolve into exactly this dynamic because each side only feels one of (the Rock|the Hard Place) as a viscerally real threat, while agreeing that the other is intellectually possible. 

Figuring out that many, if not most, life decisions are "damned if you do, damned if you don't" was an extremely important tool for me to let go of big, arbitrary psychological attachments which I initially developed out of fear of one nasty outcome.

I agree, but I was more asking about how you think your insight about the "distance to safety" can help with that.

Well, after a bounded number of initially difficult "far-out explorations" that cover the research landscape efficiently, the hope is that almost everything is reasonably close to safety henceforth.

Interesting. My own approach is usually to collaborate/ask someone who knows the subject you want to learn. But that does require being okay with asking stupid questions.

Yes, I think your approach is ideal for the efficiency of learning if anxiety wa... (read more)

alkjash

Very nice post! I would add that it is a useful and nontrivial skill to notice what you're paying attention to. It may not be helpful to try getting curious unless you know concretely what this means about how you move your eyes and attention.

To give a video game example, players new to a genre have no idea where to put their eyes on the screen. When I told a friend playing Hades to put their eyes on their own character, instead of on the enemies, they instantly started taking half as much damage. I got a lot better at Dark Souls, on the other hand, by sta... (read more)

To be clear, the papers would almost certainly have gone through anyway, the helpful thing was being very comfortable with Bayes rule and immediately noticing, for example, that conditioning on an event with probability 1-o(1) doesn't influence anything by very much. 

Another trick I derived from this comfort is to almost never actually condition on small-probability events. Instead, the better thing to do is to modify the random variables you care about to fail catastrophically in the small probability scenario. 
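To spell out the kind of estimate I have in mind (my phrasing, for a random variable with 0 ≤ X ≤ M and an event E with Pr[E] ≥ 1 − ε):

```latex
\mathbb{E}[X] = \mathbb{E}[X \mid E]\,\Pr[E] + \mathbb{E}[X \mid E^c]\,\Pr[E^c]
\quad\Longrightarrow\quad
\bigl|\mathbb{E}[X \mid E] - \mathbb{E}[X]\bigr|
  = \Pr[E^c]\,\bigl|\mathbb{E}[X \mid E] - \mathbb{E}[X \mid E^c]\bigr|
  \le \varepsilon M.
```

And the "fail catastrophically" modification amounts (in my reading) to replacing X by X·1_E + M·1_{E^c}: it agrees with X on E, dominates X everywhere, and its expectation exceeds E[X] by at most εM, so one never has to condition at all.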

For example, in graph theory I mig... (read more)

How do you think this applies to intellectual pursuits? I have in mind research advising: in my experience, some people that I think could be great researchers are terrified of exploring some part of knowledge where there is no answer yet. And even we established researchers can easily be afraid of learning a new subject or a new technique that would help us tremendously. Maybe the comfort flags should be linked with stuff that the graduate student/researcher knows well? Anecdotally, people seem more open to learning about what you want to say if you link i

... (read more)
adamShimi
I agree, but I was more asking about how you think your insight about the "distance to safety" can help with that.

Interesting. My own approach is usually to collaborate/ask someone who knows the subject you want to learn. But that does require being okay with asking stupid questions.
alkjash

I wonder if the following are also examples of motive ambiguity:

  • Mothers choosing to stay at home.
  • Researchers choosing to be bad at teaching.
  • Mathematicians choosing to work on problems with no applications.
agc
I wonder if these things happen more in cultures with a tradition of religious sacrifice.
alkjash

Let me share some more gears/evidence. I believe something a little more interesting happens than what you're saying (which is definitely one piece of the puzzle).

(1) It's fun to look at how the audience organizes itself during math talks. The faculty almost always sit in the front row, point out mistakes more directly ("You mean this" instead of "Is this correct?"), ask questions more often (and with less hand-raising), and sometimes even feel comfortable to answer questions in the speaker's stead. I suspect this is a social role that everyone learns thro... (read more)

Fascinating! Definitely plan to check this out, thanks for the recommendations and detailed introduction.

alkjash

Thank you for writing this, it led me to reconsider this phenomenon from a different perspective and revisit Lsusr's post as well as competent elites, which seemed to really string things together for me.

  • Lsusr is primarily talking about success "outside of the usual system", which generally frees someone up even more from the usual system. Start-ups are the primary example of this.
  • Alkjash is primarily talking about success within the existing system. The stereotypical successful career is an example of this.

This definitely feels like part of the thing, but... (read more)

In academia, the standard solution to all of the ennui and anxious underconfidence a grad student or postdoc feels is ... wait for it ... tenure. Your inhibitions magically disappear when you become faculty, and mathematicians often become confident to explore, gregarious, and willing to state beliefs even in dimensions orthogonal to their expertise (e.g. Terry Tao on Trump). 

Is this actually true?

My model of anxious underconfidence is something more akin to pjeby's - that if you have some moderate successes under your belt but still feel underconfident, ... (read more)
