While the scientific method developed in pieces across many centuries and places, Joseph Ben-David argues that 17th-century Europe saw a uniquely rapid accumulation of knowledge, one that remained confined to that small region for about 200 years. Ruby explores whether this is true and why it might be, aiming to understand "what causes intellectual progress, generally?"
Written in an attempt to fulfill @Raemon's request.
AI is fascinating stuff, and modern chatbots are nothing short of miraculous. If you've been exposed to them and have a curious mind, it's likely you've tried all sorts of things with them. Writing fiction, soliciting Pokemon opinions, getting life advice, counting up the rs in "strawberry". You may have also tried talking to AIs about themselves. And then, maybe, it got weird.
I'll get into the details later, but if you've experienced the following, this post is probably for you:
Few people know this, but boiling is a cooling effect: the escaping vapor carries away latent heat. If you somehow lower the boiling point of water below ambient temperature, you will get boiling water that quickly cools down toward its new, lower boiling point. The easiest way to do this is to create a partial vacuum with a vacuum pump. A glass of water inside the bell jar will start boiling at room temperature as the pressure drops.
This is a fun demonstration I have shown students. I always ask “Is there anyone brave enough to get boiling hot water poured in their hand?” There is always someone. The shock is universal, each time newly boiled water is poured into a tense hand:
“It is cold!?”
Yes, the temperature has dropped significantly below room temperature....
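To put rough numbers on the pressure involved, here is a quick back-of-the-envelope calculation (mine, not the original author's) using the Antoine equation for water's vapor pressure. The constants are the commonly tabulated ones for water between roughly 1 and 100 °C; treat the exact figures as approximate.

```python
# Antoine equation for water (roughly valid 1-100 C), pressure in mmHg:
#   log10(P) = A - B / (C + T)
# Commonly tabulated constants for water; exact values are approximate.
A, B, C = 8.07131, 1730.63, 233.426

def vapor_pressure_mmhg(temp_c: float) -> float:
    """Saturation (vapor) pressure of water at temp_c, in mmHg."""
    return 10 ** (A - B / (C + temp_c))

p_room = vapor_pressure_mmhg(20.0)
print(f"Vapor pressure at 20 C: {p_room:.1f} mmHg ({p_room * 0.1333:.2f} kPa)")
# ~17.5 mmHg (~2.3 kPa), about 2% of atmospheric pressure. Pump the bell jar
# below that and room-temperature water boils, and it keeps cooling as long as
# the pump holds the pressure below the (falling) vapor pressure.
```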
The beauty of “hot” is that it is a relative term. Hot for whom?
It's implicitly relative to the speaker and/or the listener. Claiming that, because you didn't specify one of those, it's from the perspective of an ice cube is just another example of the same thing: being "clever" by deliberately pretending that there's no such thing as conversational implicature.
Having your words be literally accurate is not the spark of genius you think it is.
When a claim is shown to be incorrect, defenders may say that the author was just being “sloppy” and actually meant something else entirely. I argue that this move is not harmless, charitable, or healthy. At best, this attempt at charity reduces an author’s incentive to express themselves clearly – they can clarify later![1] – while burdening the reader with finding the “right” interpretation of the author’s words. At worst, this move is a dishonest defensive tactic which shields the author with the unfalsifiable question of what the author “really” meant.
...⚠️ Preemptive clarification
The context for this essay is serious, high-stakes communication: papers, technical blog posts, and tweet threads. In that context, communication is a partnership. A reader has a responsibility to engage in good faith, and an author...
Bob’s statement 2: “All I really meant was that I had blue pens at my house” is not literally true. For what proposition is that statement being used as evidence?
It's not being used as evidence for anything.
"All I really meant" is a colloquial way of saying "the part relevant to the proposition in question was..." As such, it was in fact truthful.
"The rules say we must use consequentialism, but good people are deontologists, and virtue ethics is what actually works."
"Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god."
LLMs can be deeply confusing. Thanks to a commission, today we go back to basics.
How did we get such a wide array of confusingly named and labeled models and modes in ChatGPT? What are they, and when and why would you use each of them for what purposes, and how does this relate to what is available elsewhere? How does this relate to hallucinations, sycophancy and other basic issues, and what are the basic ways of mitigating those issues?
If you already know these basics, you can and should skip this post.
This is a reference, and a guide for the new and the perplexed, until the time comes that they change everything again, presumably with GPT-5.
Tech companies are notorious for...
Ethan Mollick's Using AI Right Now: A Quick Guide from 2025-06 is in the same genre and pretty much says the same thing, but the presentation is a bit different and it may suit you better, so check it out. Naturally it doesn't discuss Grok 4, but it does cover some things missing here.
The following is a nitpick on an 18-year-old blog post.
This fable is retold a lot. The progenitor of it as a rationalist mashal (parable) is probably Yudkowsky's classic sequence article. To adversarially summarize:
Point 2 is incorrect. https://en.m.wikipedia.org/wiki/Niihau_incident
The Niihau incident sparked the popular hysteria that led to internment.
Imagine, if you will, one of the 9/11 hijackers parachuting from the plane before it crashed, asking a random Muslim for help, then having that Muslim be willing to immediately get himself into shootouts and commit arson, kidnappings, and miscellaneous mayhem.
Then imagine that it was covered in a media environment where the executive branch had been advocating for war for over a decade, and voices which spoke against it we...
The concept of a weird machine is the closest to being useful here, and an important question is "how do we check that our system doesn't form any weird machines?"
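For readers unfamiliar with the term, here is a minimal, contrived sketch (mine, not the commenter's) of the idea: two individually benign features compose into an unintended "machine" that attacker-controlled data can program. All names in it are hypothetical.

```python
# Toy illustration of a "weird machine": a permissive config parser plus a
# templating feature, each harmless on its own, compose so that untrusted
# *data* ends up driving computation the designer never intended.

def parse_config(text: str) -> dict:
    # Intended use: "name=Alice" -> {"name": "Alice"}
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def render(template: str, values: dict) -> str:
    # Intended use: substitute {key} placeholders with trusted values.
    return template.format(**values)

# Unintended composition: an attacker-supplied config *value* is later treated
# as a template, turning stored data into a lookup gadget over internal state.
untrusted_input = "greeting={secret}"
state = {**parse_config(untrusted_input), "secret": "hunter2"}
print(render(state["greeting"], state))  # prints "hunter2": data leaked

# "Checking that the system doesn't form a weird machine" means ruling out
# such unintended compositions, not just auditing each feature in isolation.
```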
A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience
This time, we’re diving into a groundbreaking paper:
"Binary and analog variation of synapses between cortical pyramidal neurons"
Authors: Sven Dorkenwald, Nicholas L Turner, Thomas Macrina, Kisuk Lee, Ran Lu, Jingpeng Wu, Agnes L Bodor, Adam A Bleckert, Derrick Brittain, Nico Kemnitz, William M Silversmith, Dodam Ih, Jonathan Zung, Aleksandar Zlateski, Ignacio Tartavull, Szi-Chieh Yu, Sergiy Popovych, William Wong, Manuel Castro, Chris S Jordan, Alyssa M Wilson, Emmanouil Froudarakis, JoAnn Buchanan, Marc M Takeno, Russel Torres, Gayathri Mahalingam, Forrest Collman, Casey M Schneider-Mizell, Daniel J Bumbarger, Yang Li, Lynne Becker, Shelby Suckow, Jacob Reimer, Andreas S Tolias, Nuno Macarico da Costa, R Clay Reid, H Sebastian
Institutions: Princeton Neuroscience Institute, Princeton University, United States; Computer Science Department,...
I wouldn’t worry too much about these. It’s not at all clear that all the alignment researchers moving to Anthropic is net-negative, and for AI 2027, the people who are actually inspired by it won’t care too much if you’re being dunked on.
Plus, I expect basically every prediction about the near future to be wrong in some major way, so it’s very hard to determine what actions are net negative vs. positive. It seems like your best bet is to do whatever has the most direct positive impact.
Thought this would help, since these worries aren’t productive, and anything you do in the future is likely to lower p(doom). I’m looking forward to whatever you’ll do next.
There’s a battle in the field of ethics between three approaches (Consequentialism, Virtue Ethics, and Deontology), but this framing is all wrong, because they’re all on the same side. By treating ethics as an adversarial, all-or-nothing (zero-sum) debate, we are throwing out a great deal of baby for the sake of very little bathwater.
First of all, some (very basic) definitions.