My approach to the margin note/marking conundrum:
This way, the initial reading is much faster and pain-free. On the second pass, you already know the gist of the book and can more easily compress the marked information and sort it into nice-to-know, have-to-know, and irrelevant (that's why I use two colors; anything unmarked is probably irrelevant). Remember: marking is just for finding things, it's not note-taking on its own. So re-read as quickly as possible. The interval for re-reading (whether after each chapter or after finishing the book) depends on how dense the book is.
This technique can of course be combined with other techniques Adler talks about in his book, like pre-reading and skimming. For skimming, I mark interesting passages to re-read with a vertical line down the margin, usually whole paragraphs. For e-books, I mark the first few words of a paragraph. Then I proceed as usual, or skip the unmarked passages entirely.
I have read How to Take Smart Notes by Sönke Ahrens, in the original German. A few observations I've made:
I'm personally an avid fan of the Zettelkasten and use it extensively, together with Progressive Summarization (Tiago Forte) for the preliminary work on short-form written sources (and the Cornell method for lectures/video sources). This book serves as a neat primer for getting started, or for thinking about your second brain from another perspective. However, it's not world-class literature, and the English version reads a little awkwardly in my opinion.
I have personally used this book to build a skeleton of information before filling it in with blog posts from https://zettelkasten.de/ . However, reading https://zettelkasten.de/posts/overview/ is a good alternative.
I believe what you describe arises from internet dialogue and from how generations that grew up 'digitally native' use cultural techniques learned on the internet to shape discussion.
The internet and social media also make dialogue and monologue that would once have been fringe positions more visible, and lead to electronic screaming matches between opposing partisan opinions. In the real world, positions and party lines are drawn in part by physical separation - bars frequented by a certain clientele, neighborhoods that draw specific types and occupations, and so on. Those are the Facebook groups and subreddits of today. Banning and disallowing counter-arguments are the internet equivalent of not being welcome and of social pressure to conform to group standards. Cancel culture is the modern equivalent of booing someone off the stage or kicking them out of the social group. The only things that have changed are visibility and scale.
'Epistemic conditions', as you call them, have always been bad in informal settings between people who weren't experts in their field. Classical print/TV journalism set some standards for what the broad public saw as legitimate arguments and opinions - in the form of what got covered and which experts were invited. That information monopoly disappeared with the rise of the internet as well.
Argumentation between opposing partisan groups has almost always consisted of name-calling and simplistic trains of argument, even in the past, and even with journalism as a filter. Argumentation between members of a group has been a kind of self-affirmation and mutual agreement. German (my native language) has an enlightening (and quite old) word for that: "Stammtischgelaber", meaning the conversations of people who regularly meet in a pub and talk drunken bullshit about things they really don't know about.
In my opinion, what you see in Facebook groups is modern Stammtischgelaber that is highly visible and far-reaching. Because information isn't curated anymore, everyone can add their two cents to the debate, which heats up more and more because opposing groups meet each other in the open. The heated, angry debates on the internet require containment strategies, and those spill over into real-life debate culture.
Scott Alexander wrote a piece about internet conversations a while ago, for further reading: https://slatestarcodex.com/2018/05/08/varieties-of-argumentative-experience/
Progressive Summarization by Tiago Forte is a note-taking technique that treats compression as the primary knowledge work you do on information (books/articles/lectures). In this technique, the loss of information from summarizing further and further is a feature of knowledge work. It's called "progressive" summarization because you don't compress every source as much as possible all at once. Instead, you begin by marking up your source; then, if the information turns out to be useful, you summarize it further in a separate document, and so on.
This embraces information loss as something useful. I would think filtering is another way of losing information intentionally - for example, when you curate information.
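To make the layered compression concrete, here is a toy Python sketch. The passages and the keyword filter are my own invented stand-ins for the human judgment each pass really involves; the point is only that every layer keeps less than the one before, and the loss is deliberate:

```python
# Toy model of progressive summarization: each pass is lossy on purpose.
# Keyword matching below is a stand-in for "does this passage matter to me?"

passages = [
    "Compression is the primary knowledge work.",
    "The author's biography spans three chapters.",
    "Information loss is a feature, not a bug.",
    "A long anecdote about the author's dog.",
]

def mark(passages, keywords):
    """One summarization pass: keep only passages matching a keyword."""
    return [p for p in passages if any(k in p.lower() for k in keywords)]

layer1 = mark(passages, ["compression", "information"])  # first markup pass
layer2 = mark(layer1, ["information"])                   # tighter second pass
summary = " ".join(layer2)                               # own-words layer (stubbed)

print(layer1)
print(summary)
```

Each layer is a separate artifact, so you can always drop back to a less-compressed version when a note turns out to matter more than you first thought.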
What you describe matches my understanding of how pattern-recognition theories of mind and categories/concepts in neurological prediction models work. I first read about this in How Emotions Are Made by Lisa Feldman Barrett. Look into Google Scholar or into that book's reference section to go down that rabbit hole if you please.
Link to an overview article: https://praxis.fortelabs.co/progressive-summarization-a-practical-technique-for-designing-discoverable-notes-3459b257d3eb/ He also has something about prediction models on his site.
EDIT: also look into conceptual hierarchies; I don't know if that's the direction you're looking for, though.
Neither the book nor the Zettelkasten method itself directly solves the problems you stated in your article. It isn't a system that tells you what to extract from the books you read. There's a lot of discussion in forums about what to extract and how deeply, and the technique itself doesn't prescribe anything.
The main problem the ZK tries to solve isn't curating what to extract from your sources. Instead, it tries to solve the information siloing that happens when you take classic notes about books that are separate from each other. Later, the ZK becomes an ideation tool: with enough notes in the system, you can work out new knowledge and ideas just by connecting things that weren't connected before.
It's not about mechanical steps, either. It's a change in how you record and organize notes. Instead of one book > one long note about the book, you 'atomize' knowledge into many smaller notes. Each of these notes is like a mini-Wikipedia article about one specific thing. Then you re-connect the notes, like pages in the world wide web. One book leads to many notes, and one note can reference many books.
Examples of note titles, just to give you an idea: 'Reading as forgetting'; 'The brain isn't for retention'; 'Information bottleneck of the brain is an advantage'; 'GTD: Getting it out of the head as central paradigm'; 'Deep learning in AI'. Those are all closely interconnected but have totally different sources. Each of those notes is between 100 and 300 words long.
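As a sketch of that structure (my own illustration, not any particular ZK software), the note titles above can be modeled as nodes in a small graph, each with outgoing links and its own source list - the 'one note, many books' shape:

```python
# Atomized notes as a linked graph. Titles are from the examples above;
# the links and source lists here are invented for illustration.

notes = {
    "Reading as forgetting": {
        "links": ["The brain isn't for retention"],
        "sources": ["How to Take Smart Notes"],
    },
    "The brain isn't for retention": {
        "links": ["Information bottleneck of the brain is an advantage",
                  "GTD: Getting it out of the head as central paradigm"],
        "sources": ["How to Take Smart Notes", "Getting Things Done"],
    },
    "Information bottleneck of the brain is an advantage": {
        "links": ["Deep learning in AI"],
        "sources": ["How Emotions Are Made"],
    },
    "GTD: Getting it out of the head as central paradigm": {
        "links": [], "sources": ["Getting Things Done"],
    },
    "Deep learning in AI": {"links": [], "sources": []},
}

def trail(start):
    """Walk link trails outward from one note - the ideation step of
    surfacing connections between notes with totally different sources."""
    seen, stack = [], [start]
    while stack:
        title = stack.pop()
        if title not in seen:
            seen.append(title)
            stack.extend(notes[title]["links"])
    return seen

print(trail("Reading as forgetting"))
```

Following a trail like this crosses several books in a few hops, which is exactly the siloing-breaker the method aims for.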
A few observations of mine on what to take notes on:
Sometimes I have 4-5 new ZK notes for a 300-page book. Sometimes I make 5 new ZK notes for a single page. The more valuable the source, the more time I spend with it.
One interesting thing about the ZK principle is that it's additive. If I read a few books on a subject, I don't need to note down the basics again and again. Instead, I can focus on adding the nuances and individual details that each book contributes on top of the basics. This way, note trails emerge that read almost like discussions: 'Author A says this is so-and-so', 'Author B says this is this-and-that', 'comparison of Author A and Author B', and so on. Very satisfying, and a huge boon of the technique.