It seems to me that narratives are skewed and highly simplified abstractions of (empirical) reality that are then subject to selection pressure, such that the most viral ones (within any subculture) dominate, where virality is often negatively correlated with accuracy. Yet, when hearing narratives from people we like and trust, we humans seem to have a deeply ingrained urge to quickly believe them. This becomes most apparent when you hear the narratives other subcultures are spreading that affect you or your beliefs negatively. Like, hearing the narratives of AI skeptics & ethicists (say about AI water usage, about AI not being "actually intelligent", or about all AI doomers secretly trying to inflate stock prices) really drove home a Gell-Mann-Amnesia-style realization for me of how deeply flawed narratives tend to be, and that this is very likely true for the narratives I'm affected by (without even realizing these are narratives!).
Narratives are usually an overly simplistic conclusion about some part of the world paired with radically filtered evidence. (And I guess this claim is itself a bit of a narrative about narratives.)
I agree with you, though, that narratives may be required to actually do things in the world, and that pure empiricism will be insufficient.
I read your title and thought "exactly!". I then read your post and it was pretty much exactly what I expected after reading the title. So, ironically, it seems like you perfectly compressed the state of your mind into a few words. :) But to be fair, that's probably mostly because we've had very similar experiences, and it doesn't translate to human<->LLM communication.
When vibe-coding, many things work really fast, but I often end up in cases where the thing I want changed is very nuanced, and I can see that just blurting it out would cause the LLM to do something different from what I have in mind. So I sometimes have to write like 5 paragraphs to describe one relatively small change. Then the LLM comes up with a plan, which I have to read, which again takes time, and sometimes there are 1-2 more details to clear up, so it's a whole process - and all of this would happen naturally, without me even noticing, if I were writing the code myself.
A year ago I wrote a post in a somewhat similar direction, but the recent months of vibe coding with Opus 4.5 really gave me a new appreciation for all the different bottlenecks that remain. Once "writing code" is automated - which it basically is now - it's not like programmers are instantly replaced (evidently); we just hop to the next bottleneck below. So the average programmer will maybe be sped up by some percentage, with only extreme outliers getting a multiple-fold increase in output, while the rest merely shift their focus to different parts of the work. It's still kind of mindblowing to me that that's how it is. Perhaps it gets "solved" once the entire stack, from CEO to PM to testers to programmers, is AIs - but then I guess they would also have to communicate via not-flawlessly-efficient means with each other (and sometimes with themselves, until continual learning is solved), and would still run into these coordination-overhead issues? But I guess all that overhead is less noticeable when the systems themselves run at 100x our speed and work 24h/day.
Even with a car, there are cases where traffic and/or finding a parking spot can cause huge variance. It really depends on the type of meeting / circumstances of the other people whether it's worth completely minimizing the risk of being late at the expense of potentially wasting a lot of your own time.
E.g., when I visit somebody at their home, it will likely be bearable for them if I arrive 10 minutes late. Whereas if we meet in some public space, it may be very annoying for the person to stand around on their own (particularly if they have social anxiety and get serious disutility from the experience).
That all being said, probably the majority of minutes that people are late to things are self-inflicted, and I agree with OP that it makes sense in general to reduce that part (and, more generally, to strive to be a reliable person).
I can relate to a lot of this. But I think in my case the motivation for reinventing the wheel also comes down to fundamentally not enjoying activities like "reading documentation" or, more generally, "understanding what another person has done". But implementing my own library is usually fun. And I can often justify it to myself (and sometimes to others) because the result will then match the given use case perfectly and be exactly as big/complex as needed, rather than being some huge, highly general solution full of bells and whistles we won't even need. Which can be a real advantage - but it's also just one side of a trade-off, and I tend to weigh that side more highly than others, for probably rather self-serving reasons.
I once heard from a developer friend that he sometimes just reads things like the Docker documentation for fun in his spare time. It gave me a great appreciation for how different people can be and how difficult it really is to overcome the typical mind fallacy... :) I never would have thought people could enjoy that. And now I'm interested in somehow finding that same enjoyment in myself, because I think it would make many things much easier if I could overcome that aversion that keeps pushing me in the direction of reinventing all the wheels.
I'm not sure what you're hinting at, but in 99.9% of cases when I'm out of the house, I do carry a smartphone around. If you mean that it's annoying when the display gets confused by water, then I agree that's a real disadvantage (but I doubt people's attitude towards being exposed to rain changed that much between 2006 and today, so there certainly is some severe general dislike of rain independent of smartphones). If this is not what you mean, then please elaborate. :)
Agreed, that's one of the exceptions I was thinking of - if you're getting soaked and have no way to get into dry clothes anytime soon, there's little way around finding that rather unpleasant. But I'd say 95% of my rain encounters are way less severe than that, and in these cases, my (previous) attitude towards the rain really was the main issue about the whole situation.
People compare things that are close together in some way. You compare yourself to your neighbors or family, or to your colleagues at work, or to people that do similar work as you do in other companies.
Isn't one pervasive problem today that many people compare themselves to those they see on social media, often including influencers with a very different lifestyle? So it seems to me that not-so-local comparisons are in fact often made; it primarily depends on what you're exposed to - which to some degree is indeed the people around you, but nowadays more and more also includes the skewed images that people on the internet, who often don't even know you exist, broadcast to the world.
But maybe this is also partially your point. Maybe it would theoretically help to expose people a lot to "the reality of the 90s" or something, but I guess it's a bit of an anti-meme and hence hard to do.
I agree that telling people how well off they are on certain scales is probably not super effective, but I'm still sometimes glad these perspectives exist and I can take them into consideration during tough times.
Relatedly, at some point as a teenager I realized that being exposed to rain is actually usually not that terrible, and I had just kind of been accidentally conditioned to dislike it because it's a normal thing to dislike and I had never met anyone who appeared to enjoy the experience. But as it turns out, once you stop actively maintaining that resistance and welcome the rain, it can be pretty nice to walk around in the rain while everyone around you tries to escape it. (Some exceptions apply, of course.)
Yeah, fair enough. My impression has been that some people feel guilty about caring about themselves more than about others, or that it's seen as not very virtuous. But maybe such views are less common (or less pronounced) than the vibes I've often picked up imply. :)
After using Claude Code for a while, I can't help but conclude that today's frontier LLMs mostly meet the bar for what I'd consider AGI - with the exception of two things that, I think, explain most of their shortcomings:
Most frontier models are marketed as multimodal, but this is often limited to text + some way to encode images. And while LLM vision is OK for many practical purposes, it's far from perfect, and even if they had perfect sight, being limited to individual static images is still a huge limitation[1].
Imagine you, with your human general intelligence, were sitting in a dark room, conversing with someone who has a complex, difficult problem to solve, and doing your best to help them. But you can only communicate through a mostly text-based interface that allows this person to send you occasional screenshots or photos. Further imagine that every hour or so you lose your entire memory & mental model of the problem and find yourself with nothing but a high-level, very lossy summary of what has been discussed before.
I think it's very likely that under such restrictive circumstances, it's just very hard not to run into all kinds of failure modes and capability limitations, even for the undoubtedly general intelligence that is you.
So, in some sense, I'd say there's an "intelligence overhang", where the raw intelligence in these LLMs can't fully unfold due to modality & context window limitations. These limitations mean that Claude Code et al. don't yet show the effects on the economy and the world as a whole that many would have expected from AGI. But I'd argue it makes sense to decouple the actual "intelligence" from the limiting way in which it's currently bound to interact with the world - even if, as some might correctly argue, modality & context window are just inherent properties of LLMs. Because this is an important detail about the state of things that, I suppose, is neither part of most of the definitions people gave for AGI in the past, nor of the vague intuitions they had about what the term means.
as opposed to, say, understanding video, including sound and a sense of time. (This is not to say that vision is necessary for general intelligence, of course; but that's kind of my whole point: the general intelligence is already there, it's just that the modality + context restrictions mean AI is still much less effective at influencing the world than a "naively" imagined AGI would be.)