(some of these are things I've already thought and read about a lot, others are more of a "I should do more here" thing… talk to me to find out which ones are which)
DMs here should work, you can also reach me on Discord as nobody2342 or on Telegram as @locally_cartesian. (Or ask for Matrix or Signal.)
I should also learn the current de-facto standard way of writing software (Docker, JS/TypeScript, …) at some point…
Technology Connections viewers already know this somewhat related bit: consider switching to loose powder instead of tabs, or having both. The dishwasher runs three cleaning cycles (pre-wash, main wash, rinse), and the tab is only dispensed for the second one. The first phase tries to get all the food and grease off using just water… which isn't ideal. Adding about half a teaspoon of loose powder directly onto the door / into the tub at the bottom greatly supports the pre-wash phase and should deal with most things.
Since I started doing that, I don't bother scraping at all (though obviously I still discard loose food scraps into the bin first) and basically never get stuck-on bits. (Every couple of months something like strongly baked-on cheese from e.g. a gratin may stick to a dish, but that's it.)
The way I approach situations like that is to write code in Lua and only push stuff that really has to be fast down to C. (Even C + liblua, using a Lua state just as a calling convention, is IMHO often nicer than "plain" C. I can't claim the same for Python…) The end result is that most of the code is readable, and usually (i.e. unless I stopped keeping them in sync) the "fast" functions still have a Lua version that permits differential testing.
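That readable-version-plus-fast-version setup is basically differential testing. A minimal sketch of the idea (in Python here purely for illustration; the function names and the popcount example are made up, not from the original):

```python
import random

def popcount_readable(x: int) -> int:
    """Readable reference version: count set bits one at a time."""
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

def popcount_fast(x: int) -> int:
    """'Fast' version using Kernighan's trick: x & (x - 1) clears
    the lowest set bit, so we loop once per set bit."""
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count

def differential_test(trials: int = 10_000) -> None:
    """Feed both versions the same random inputs, demand identical results."""
    rng = random.Random(0)  # seeded for reproducibility
    for _ in range(trials):
        x = rng.getrandbits(32)
        assert popcount_readable(x) == popcount_fast(x), x

differential_test()
```

The readable version doubles as documentation and as the oracle; any mismatch on random inputs points straight at a bug in the "fast" path.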
I fundamentally agree with the "C, not C++/Rust/…" theme though; C is great for this because it doesn't have tons of checks. (And that's coming from someone who uses Coq / dependent types regularly, including at work.) Compilers generally want to see that the code is safe under all possible uses, whereas when prototyping you only care about the specific configurations you actually use. Convincing the compiler, disabling checks, and/or adding extra boilerplate adds overhead that seriously slows down exploration, which is not something you want to deal with in that mode.
Sabine Hossenfelder's assessment, quickly summarized (and possibly somewhat distorted by the summarizing):
This seems to be another case of "reverse advice" for me. I seem to be too formal rather than too lax with these spatial metaphors. I immediately read the birds example as talking about relative positions and distances along branches of the phylogenetic tree, and your orthogonality description as referring to actual logical independence / verifiable orthogonality. And since it's my job to notice hidden interactions and things like weird machines, I'm usually very aware of those too, just by habit kicking in.
Your post made me realize that instead of people's models being hard to understand, there simply may not be a model that would admit talking in distances or directions, so I shouldn't infer too much from what they say. Same for picking out one or more vectors, for me that doesn't imply that you can move along them (they're just convenient for describing the space), but others might automatically assume that's possible.
As others already brought up: once you've gotten rid of the "false" metaphors, try deliberately using the words precisely. With practice it becomes pretty easy and automatic over time. Only talk about distances if you actually have a metric space (it doesn't have to be Euclidean, sphere surfaces are fine). Only talk about directions that actually make sense (a tree has "up" and "down", but there's no inherent order to the branches that would get you something like "left" or "right" until you impose extra structure). And so on… (Also: spatial thinking is incredibly efficient. If you don't need time, you can use it as a separate dimension that changes the "landscape" as you move forward/backward, and you might even manage 2-3 separate "time dimensions" that do different things, giving you fairly intuitive navigation of a 5- or 6-dimensional space. Don't lightly give up on that.)
Nitpick: "It makes sense to use 'continuum' language" - bad word choice. You're not talking about the continuum (as in the real numbers) but about something like linearity, or the ability to repeatedly take small steps and get predictable results. With quantized lengths and energy levels, color isn't actually a continuous thing, so that's not the important property. (The continuum is a really, really strange thing that I think a lot of people don't really understand yet casually bring up. Almost all "real numbers" are entirely inaccessible! Because all descriptions of numbers that we can use are finite, you can only ever refer to a countable subset of them; the others are "dark" and for almost all purposes might as well not exist. So usually the rational numbers (plus a handful of named constants) are sufficient, especially for practical / real-world purposes.)
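For the curious, the counting argument behind "almost all reals are inaccessible" fits in one line:

```latex
% Descriptions are finite strings over some finite alphabet \Sigma,
% so there are at most countably many describable numbers:
\[
  |\{\text{describable numbers}\}|
  \;\le\; |\Sigma^{*}| \;=\; \aleph_0
  \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|
\]
```

Everything outside that countable subset can never be singled out by any notation, program, or proof.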
Main constraint you're not modeling is how increasing margin size increases total pages and thus cost.
That's why I'm saying it probably won't need that for the footers. There's ~10mm between running footer and text block, if that's reduced to ~8 or 9mm and those 1-2mm go below the footer instead, that's still plenty of space to clearly separate the two, while greatly reducing the "falling off the page" feeling. (And the colored bars that mark chapters are fine, no need to touch those.)
Design feedback: Alignment is hard, even when it's just printing. Consider bumping up the running footer by 1-2mm next time, it ended up uncomfortably close to the bottom edge at times. (Also the chapter end note / references pages were a mess.) More details:
variance: For reference, in the books that I have, the width of the colored bars along the page edge at each chapter (they're easy to measure) varies between ~4.25mm and ~0.75mm, and sometimes there's a ~2mm width difference between top and bottom. (No complaints here. The thin / rotated ones look a bit awkward if you really look at them, but you'll likely be distracted by the nice art on the facing page anyway. So who cares, and they do their job.)
footers: Technically, the footer was always at least 2mm away from the edge (so it didn't really run the risk of getting cut off), but occasionally it felt so close that it was hard not to notice. That distracted from reading, and made those pages feel uncomfortable… giving it just 1 or 2mm more should take out the tension. (While I didn't experiment with it, my gut feeling says the text block probably won't have to move to make more space.)
end notes/references: These just looked weird to me. Rambling train of thought style notes:
Apart from that, I loved the design! Thanks to everyone involved for making the books, they're lovely! <3
Sounds great so far, some questions:
And (different category)
Re solanine poisoning, just based on what's written in Wikipedia:
Solanine Poisoning / Symptoms
[...] One study suggests that doses of 2 to 5 mg/kg of body weight can cause toxic symptoms, and doses of 3 to 6 mg/kg of body weight can be fatal.[5][...]
Safety / Suggested limits on consumption of solanine
The average consumption of potatoes in the U.S. is estimated to be about 167 g of potatoes per day per person.[11] There is variation in glycoalkaloid levels in different types of potatoes, but potato farmers aim to keep solanine levels below 0.2 mg/g.[18] Signs of solanine poisoning have been linked to eating potatoes with solanine concentrations of between 0.1 and 0.4 mg per gram of potato.[18] The average potato has 0.075 mg solanine/g potato, which is equal to about 0.18 mg/kg based on average daily potato consumption.[19]
Calculations have shown that 2 to 5 mg/kg of body weight is the likely toxic dose of glycoalkaloids like solanine in humans, with 3 to 6 mg/kg constituting the fatal dose.[20] Other studies have shown that symptoms of toxicity were observed with consumption of even 1 mg/kg.[11]
If 0.18 mg/kg corresponds to 167 g of potatoes, then 1 mg/kg is reached at about 927 g of potatoes, which is roughly 800 kcal. So if you "eat as much as you want", I'm not at all surprised if people show solanine poisoning symptoms.
(And that's still ignoring probable accumulation over prolonged time of high consumption.)
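Spelling out the back-of-the-envelope numbers from the quoted passages (the 70 kg body weight and ~86 kcal per 100 g of potato are my own assumptions, not from the quotes):

```python
avg_solanine = 0.075   # mg solanine per g potato (average potato, per the quote)
daily_potatoes = 167   # g potatoes per day (average US consumption, per the quote)
body_weight = 70       # kg -- assumed reference adult
kcal_per_gram = 0.86   # kcal per g of potato -- my assumption

# Average daily dose per kg of body weight:
daily_dose = avg_solanine * daily_potatoes / body_weight
print(f"average daily dose: {daily_dose:.2f} mg/kg")  # -> 0.18 mg/kg

# Potatoes needed to reach the 1 mg/kg symptom threshold from [11]:
grams = body_weight / avg_solanine
print(f"potatoes for 1 mg/kg: {grams:.0f} g, ~{grams * kcal_per_gram:.0f} kcal")
# -> 933 g, ~803 kcal
```

(Rounding the daily dose to 0.18 mg/kg first gives ~928 g instead of ~933 g; same ballpark either way.)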
My gut feeling (no pun intended) says the mythical "super-donor" is a very good excuse to keep looking / trying without having to present better results, and may never be found. Doing the search directly in the "microbiome composition space" instead of doing it on people (thereby indirectly sampling the space) feels way more efficient, assuming it is tractable at all.
If some people are already looking into synthesis, is there anything happening in the direction of "extrapolating" towards better samples? (I.e. take several good-but-not-great donors that fall short in different ways, look at what's the same / different between their microbiomes, then experiment with compositions that ought to be better according to the current understanding, and repeat.)
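A toy sketch of that extrapolate-and-iterate loop, just to make the idea concrete — every name and number here is a made-up placeholder (real compositions are vastly higher-dimensional, and `score` would be some experimental or model-based evaluation, not a function you can simply call):

```python
import random

def propose_candidates(donors, rng, n=8, step=0.5):
    """Given several good-but-not-great compositions (vectors of relative
    abundances), extrapolate from each donor *through* the group average
    and a bit beyond, then clamp to non-negative and renormalize."""
    dim = len(donors[0])
    avg = [sum(d[i] for d in donors) / len(donors) for i in range(dim)]
    candidates = []
    for _ in range(n):
        base = rng.choice(donors)
        cand = [max(0.0, b + (1 + step) * (a - b)) for b, a in zip(base, avg)]
        total = sum(cand) or 1.0
        candidates.append([c / total for c in cand])
    return candidates

def search(donors, score, rounds=20, rng=None):
    """Keep the best-scoring compositions, extrapolate from them, repeat."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    pool = list(donors)
    for _ in range(rounds):
        pool += propose_candidates(pool, rng)
        pool.sort(key=score, reverse=True)
        pool = pool[:len(donors)]  # keep only the best few
    return pool[0]
```

The interesting design question is the proposal step: moving through the average and beyond is the crudest form of "extrapolating past good-but-not-great donors"; anything smarter (per-taxon constraints, known interactions) would slot in there.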
Unfortunately that requires Facebook =/ and most of my friends avoid / don't have Facebook for privacy reasons.
Alternatives: