Your comment makes it sound a bit like there is no need to care about performance, but take servers or REST services as an example: most programmers care about throughput, and almost all care about latency, both of which are measured with e.g. Prometheus. When your website takes one more second to load you lose clients, and if your code is slow it shows up on the cloud provider's bill. Even if you are IO-bound, you can batch requests, go async, or do less IO.
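To make the batching point concrete, here is a minimal sketch in C (with made-up sizes and record format) of trading many small writes for one buffered write, which cuts the per-call IO overhead:

```c
#include <stdio.h>
#include <unistd.h>

#define N_ITEMS 1000               /* hypothetical number of small records */

int main(void) {
    char buf[N_ITEMS * 8];         /* enough room for every record */
    size_t len = 0;

    /* Instead of issuing one write() per record (N_ITEMS syscalls),
     * accumulate everything into a single buffer... */
    for (int i = 0; i < N_ITEMS; i++)
        len += (size_t)snprintf(buf + len, sizeof buf - len, "%d\n", i);

    /* ...and pay the IO cost once. */
    return write(STDOUT_FILENO, buf, len) == (ssize_t)len ? 0 : 1;
}
```

The same idea applies one level up: one batched database query or one multi-item HTTP request instead of N round trips.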
The reason people don't bother hand-optimizing code is that the hardware is really fast, and that a handful of programmers put a lot of effort into writing optimizing compilers and optimized frameworks, so the average output is good enough for the average workload.
Nitpick on the hashmap example: while I agree that the compiler does not produce optimal code in that case (which may be your main point), there is no need to write assembly to get the speed-up you describe; you can iterate the backing array in C. The compiler may or may not generate SIMD code, however, so you may want to use SIMD intrinsics, which are very close to assembly.
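For example, with an open-addressing map whose entries live in one flat backing array, the scan is plain C (the slot layout below is a hypothetical sketch, not any particular library's):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical slot layout: all entries of the map live contiguously
 * in one backing array. */
typedef struct {
    uint64_t key;
    uint64_t value;
    uint8_t  occupied;   /* 1 if the slot holds a live entry */
} slot_t;

/* Sum every value by scanning the backing array linearly: cache-friendly,
 * no per-key lookup, no assembly. The compiler may or may not auto-vectorize
 * this loop; SIMD intrinsics are the next step if it doesn't. */
static uint64_t sum_values(const slot_t *slots, size_t capacity) {
    uint64_t sum = 0;
    for (size_t i = 0; i < capacity; i++) {
        if (slots[i].occupied)
            sum += slots[i].value;
    }
    return sum;
}

int main(void) {
    slot_t slots[8] = {0};
    slots[2] = (slot_t){.key = 42, .value = 10, .occupied = 1};
    slots[5] = (slot_t){.key = 7,  .value = 32, .occupied = 1};
    printf("%llu\n", (unsigned long long)sum_values(slots, 8));
    return 0;
}
```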
Musings on human actions, chemical reactions and threshold potentials:
Chemical reactions don't occur unless a specific threshold of energy is reached; that threshold is called the activation energy. Would it be fruitful to model human actions in the same way, as in: they don't occur unless a specific activation energy is reached?
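For reference, the chemistry side of the analogy is quantitative: the Arrhenius equation gives the rate constant $k$ in terms of the activation energy $E_a$, so lowering $E_a$ (which is what a catalyst does) increases the reaction rate exponentially:

$$k = A \, e^{-E_a / (R T)}$$

where $A$ is the pre-exponential factor, $R$ the gas constant and $T$ the temperature.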
Chemistry has the concept of a catalyst: a substance that lowers the activation energy required for a reaction. Is there an equivalent for human action? Off the top of my head I can think of a few:
These are all catalysts: they make it easier to get started on an action.
If we go up one level on the ladder of abstraction, from chemistry to neurons, triggering actions involves threshold potentials, for example to make neurons spike and tell the body to move. If we could measure these threshold potentials, could we look at our brain and go "yep, these neurons have a higher threshold potential, that's an ugh field"? Could we then decide to lower that threshold by using a catalyst?
Get your shit together and go play the winners’ bracket.
No; if I want to play I do, if I don't I don't.
That's success.
This whole framing in terms of games is misleading. It doesn't matter what bracket you're playing in: if you feel you have to play, you've already lost.
Alas, memetic pressures and credential issuance and incentives are not particularly well aligned with truth or discovery, so this strategy fails predictably in a whole slew of places.
Can you provide specific examples of places where this fails predictably, to illustrate? Better: can you make a few predictions of future failures?
If I understand correctly, your position is that we lose status points when we say weird (as in a few standard deviations outside the normal range) but likely true things, and it's useful to get the points back by being cool (=dressing well).
It seems true that there are only so many weird things you can say before people write you off as crazy.
Do you think a strategy where you try not to lose points in the first place would work? For example, by letting your interlocutor come to the conclusion on their own, using the Socratic method?
Wow. We are literally witnessing the birth of a new replicator. This is scary.
High-level actions don’t screen off intent
, consequences do.
Chesterton's Missing Fence
Reading the title, I first thought of a situation related to the one you describe, where someone ponders the pros and cons of fencing an open path, and after giving it thoughtful consideration, decides not to, for good reason.
So it's not a question of removing the fence; it was never even built, it is "missing". Yet the next person who comes upon the path would be ill-advised to fence it without thoroughly weighing the pros and cons, given that someone else already decided not to fence it.
You may think this all sounds abstract, but if you program often this is a situation you actually come across: programmer P1 spends a lot of time considering the design of a data structure, a codebase, and so on, rejects every possibility they considered except the one they implement, and perhaps documents that one if they have time. But they will usually not document why they rejected, and did not implement, the N other possibilities they considered.
P2 then comes in thinking "Gee, it sure would be convenient if the code had feature F, I can't believe P1 didn't think of that! How silly of them!", not realizing that feature F was carefully considered and rejected, because implementing it makes bad thing B happen. There's your missing fence: it was never built in the first place, and for good reasons.
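A sketch of what putting up a sign on that missing fence can look like in code (the names and the scenario are made up): a short design note recorded next to the data structure it concerns, so that P2 finds the rejection reasoning before re-adding F:

```c
#include <stddef.h>

/* Design note (hypothetical example): feature F, an append() that grows the
 * buffer automatically, was considered and rejected. Growing means
 * reallocating, and this buffer is shared with the interrupt handler, so a
 * reallocation can move memory out from under it (bad thing B). If you need
 * auto-growth, copy into a private buffer first. */
typedef struct {
    char  *data;      /* fixed-size storage, never reallocated */
    size_t capacity;  /* set once at creation */
    size_t length;    /* bytes currently in use */
} fixed_buffer_t;
```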
Restricting "comment space" to what a prompted LLM approves slightly worries me: I imagine a user tweaking its comment (that may have been flagged as a false positive) so that it fits in the mold of the LLM, and then commenters internalize what the LLM likes and doesn't like, and the comment section ends up filtered through the lens of whatever LLM is doing moderation. The thought of such a comment section does not bring joy.
Is there a post that reviews prior art on the topic of LLM moderation and its impacts? I think that would be useful before making a decision.
I agree (at least in the short term, as you point out), but it seems hard to predict what these places will be (and thus hard to prepare for them), and it still seems likely that the market will be tough for the 90% of programmers who are not experts in the specific niche things AIs are not good at.