Notes:
- There are a lot of awkward (but compelling) phrasings here, which make this exhausting and confusing (though still intriguingly novel) to read through. This post was very obviously written by someone whose first language isn't English, which has both downsides and upsides.
- Giving new names to S1 and S2 is a good decision. "Yankee" has uncomfortably specific connotations for (some) Americans though: maybe go with "Yolo" instead?
- X and Y dialogue about how they see each other, how they need to listen to each other, and how much energy they each think they need. They don't dialogue about any kind of external reality, or show off their different approaches to a real problem: the one place they mention the object level is Y 'helping' X avoid "avocado coffee", a problem which neither he nor anyone else has ever had. (Contrast the Appendix, which is more interesting and meaningful because it involves actual things which actually happened.)
> But it’s still really hard for me, which is why these dialogues are the best cost-benefit I’ve found to stimulate my probabilistic thinking. Do you know of any better ones?
Play-money prediction markets (like Metaculus)?
Thanks for the feedback, abstractapplic. You’re right—adding real-world examples could make the dialogue feel more grounded, so I'll focus on that in the revision.
The "Yolo" suggestion makes sense to capture the spirit of System 1 without unintended associations, so I’ll go with that.
Regarding Metaculus: it’s a good platform for practicing probabilistic thinking, but I think there might be value in a more structured self-evaluation to narrow down specific behaviors. Do you know of any frameworks that could help with that—maybe something inspired by Superforecasting?
As a non-native English speaker, I realize the phrasing might come across as a bit unusual. I’ve tried refining it with tools like Claude and GPT many times, but it can get complex and occasionally leads to “hallucinations.” Let me know if you have any tips for keeping it clearer. Which part of the text seems most confusing to you?
Interesting! I'm curious how the conversation between "X" and "Y" will continue.
How to Synchronize My Thinking Systems
A dear friend sent me a video that I loved: a talk by Yann LeCun, Meta’s Chief AI Scientist and a professor at NYU, about artificial intelligence, its challenges, and its famous “hallucinations.” Interestingly, I noticed that AIs have something in common with me.
An AI can reduce the concepts of “red car” and “red apple” to simply “red” and then collapse. Sometimes I try to “work for fun” and “watch Netflix for fun,” and in the end, it all collapses into just “fun.”
From what I understood, AIs are good at what we call fast thinking: automatic, quick responses without much reflection, resembling what some psychologists define as "System 1." However, they lack their own memory, self-reflection, deliberate and slow thinking, adaptation to new contexts, and the ability to revise their understanding. Those missing capacities are closer to what we call "System 2." LeCun suggests that synchronizing these two systems is essential to solve certain problems.
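Here is a minimal toy sketch of that synchronization idea (my own illustration, not LeCun's proposal; every name and number in it is invented): a fast System 1 lookup answers when it is confident, and a slow System 2 computation takes over when it is not.

```python
import time

# Toy sketch: a fast, cached "System 1" answers when it is confident,
# and a slow, deliberate "System 2" takes over when it is not.
# All names, answers, and confidence numbers are invented for illustration.

SYSTEM1_CACHE = {
    "2 + 2": ("4", 0.99),            # (answer, confidence)
    "capital of France": ("Paris", 0.97),
    "17 * 24": ("408", 0.30),        # a shaky intuition
}

def system1(question):
    """Fast and automatic: a cheap lookup with a confidence score."""
    return SYSTEM1_CACHE.get(question, (None, 0.0))

def system2(question):
    """Slow and deliberate: actually works the problem out (arithmetic only here)."""
    time.sleep(0.1)  # deliberation costs time and energy
    try:
        return str(eval(question, {"__builtins__": {}}, {}))
    except Exception:
        return "I need to think about this differently."

def answer(question, threshold=0.9):
    """Synchronization: trust System 1 when confident, escalate otherwise."""
    guess, confidence = system1(question)
    if confidence >= threshold:
        return f"System 1 says: {guess}"
    return f"System 2 says: {system2(question)}"

print(answer("2 + 2"))    # fast path: System 1 says: 4
print(answer("17 * 24"))  # low confidence, slow path: System 2 says: 408
```

The interesting design question, which the story below keeps circling, is where to set the threshold: too low and you never reflect; too high and you waste energy deliberating over trivia.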
And look! I also struggle to synchronize my thinking systems (on some level). For years, my automatic thinking (System 1) dominated, while my reflective capacity (System 2) weakened from lack of use. When I tried to correct this, I ended up on the opposite extreme: I focused so much on reflection that I neglected my intuition. Neither of these extremes works well, but at least reflective thinking might help me find balance.
My reflective side suggests giving these systems more memorable names than simple numbers: the fast, intuitive System 1 becomes "Yankee," and the slow, deliberate System 2 becomes "Xray."
After several suggestions from my Xray on how to integrate my Yankee, I chose a few: reflective writing, advice from trusted people, and reading different perspectives. And honestly, I don't know of any community more reliable than LessWrong, which is why I'm sharing these reflections with you.
To avoid falling into the trap of excessive reflection, I’m telling this as a story. There’s preliminary evidence that analogies can help with learning — a randomized study with college students showed a 20-25% improvement in understanding new concepts (Richland & McDonough, 2010). Inspired by this, though aware of the limited evidence, I propose a grand analogy… I seem to be highly motivated by stories, which may explain my Netflix marathons.
With all this, I present to you the first in a series of dialogues and exercises designed to synchronize probabilistic reasoning within myself, and perhaps within my teenage students as well (a minimal example of such an exercise appears after the list below). I'd like to know if this isn't just another of my "hallucinations" as I begin a story that covers the following concepts:
i. elemental,
ii. emotional,
iii. social, and
iv. intellectual
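And here is the minimal exercise promised above, before the story starts (a sketch of my own; the questions, probabilities, and outcomes are made up): attach a probability to each yes/no prediction, record what actually happened, and compute a Brier score, the calibration measure popularized by the Superforecasting literature.

```python
# Tiny self-scoring exercise: probabilities for yes/no predictions,
# recorded outcomes, and a Brier score (lower is better; constant
# 50/50 guessing earns 0.25). All entries are made up for illustration.

predictions = [
    # (question, probability assigned to "yes", what actually happened)
    ("Will I finish this dialogue draft by Friday?", 0.8, True),
    ("Will my students ask about the Yankee/Xray names?", 0.6, False),
    ("Will I watch Netflix instead of writing tonight?", 0.9, True),
]

def brier_score(entries):
    """Mean squared distance between stated probabilities and outcomes."""
    return sum((p - float(happened)) ** 2 for _, p, happened in entries) / len(entries)

print(f"Brier score: {brier_score(predictions):.3f}")  # 0.137 for these entries
```

Tracked over weeks, the score gives my Xray something concrete to reflect on, instead of a vague feeling of being "usually right."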
1 - BLACK AND WHITE
Which computer science principles can be applied to self-knowledge?
I - ALMOST FAST AND SEMI-SLOW
In the near future, you find yourself among a group of computer scientists. AI systems have reached unprecedented sophistication, splitting into two distinct entities that mirror our own thinking systems:
'Y', known as Yankee, has evolved by prioritizing learning through human interactions. It has become faster and more intuitive, though it consumes ever more energy in the process.
Meanwhile, 'X', called Xray, is less developed; it focuses on diving deep into complex problems and programming itself to solve them. Its capacity is limited by Yankee's high energy demands.
Your group notices that Xray struggles to function effectively, and consequently so does Yankee. To understand and correct their complex code, the group has implemented a program that personifies both systems, bringing to life two distinct characters.
Now you see the need to propose a new energy-distribution ratio for 'X' and 'Y', one not based solely on popularity, as in the image below.
Following your proposals for energy redistribution, your group of scientists refuses to increase Xray’s resources; they don’t see it as beneficial for business, and you understand why: it doesn’t generate short-term profit. So, you observe the interaction between the intelligences, rooting for Xray to somehow negotiate more energy from Yankee.
In a vast ocean of Cartesian data, Xray struggles in isolation, trying to build small islands of order where it can function. To progress, it needs to harness the energy and inherited knowledge of Yankee, who prefers to focus on immediate problems and simplification rather than questioning or analyzing. This fundamental difference creates tension, especially for Xray, which is forced to negotiate for every unit of energy.
How do you negotiate with an intelligence that has inherited millennia of human evolution? This is Xray's real challenge, and your task is to analyze how to distribute energy between both intelligences.
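If you want to play along, here is a deliberately crude sketch of that task (every number and assumption below is invented): give each system diminishing returns on energy and search for the split of a fixed budget that maximizes their joint performance.

```python
import math

# Toy energy-negotiation model (all assumptions invented for illustration):
# a fixed budget is split between Yankee and Xray, each with diminishing
# (logarithmic) returns, and we search for the best-performing split.

BUDGET = 100.0    # total energy units available
W_YANKEE = 1.0    # weight on fast, intuitive work
W_XRAY = 0.7      # weight on slow, deliberate work

def joint_performance(xray_share):
    """Concave (log) returns mean neither system should get everything."""
    yankee_energy = BUDGET * (1 - xray_share)
    xray_energy = BUDGET * xray_share
    return (W_YANKEE * math.log1p(yankee_energy)
            + W_XRAY * math.log1p(xray_energy))

# Grid search over possible splits for Xray, from 1% to 99%.
best_share = max((s / 100 for s in range(1, 100)), key=joint_performance)
print(f"Best share for Xray: {best_share:.0%}")  # about 41% under these weights
```

Even this crude model makes Xray's point: under diminishing returns, giving everything to the popular system is never optimal.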
After countless integration attempts, Xray adapts its strategy: it learns to use analogies and practical examples to demonstrate its value and win more resources from Yankee. Though Yankee maintains its characteristic sarcasm, a subtle evolution emerges beneath their exchanges, the fruit of countless negotiations between deep analysis and intuition. And they talk:
(To be continued...)
APPENDIX 1: Personal Story
For years, humor was my quick (S1) method for avoiding conflict and gaining acceptance. It was an automatic mechanism that allowed me to escape problems without overthinking. I can see now that it was driven by a certain phase in my life, which I’ll share with you.
When I was a kid, I went to school in a tough neighborhood. I didn't have many friends, and my interest in philosophy only earned me punches to the ear from the older boys. There I was, 11 years old, while they were 14 or older, practically lining up to take a shot at my ear. Back then, my ear was massively out of proportion; these days, not so much, since the rest of me grew to catch up with it. At the time, though, it was an easy, soft target.
I remember a friend of my mom’s, upon hearing me complain, saying something like, "They don’t have the same luck as you; they’re envious." So, one day, in the midst of one of those episodes, I randomly responded,
— You're envious of my ear…
— What? Why would I be envious of that thing?
Confused and not really understanding, I tried to improvise:
— You can put a pencil behind yours; I can fit a pencil, a notebook, and even my backpack behind mine.
They laughed. I was surprised and thought, "What just happened? Incredible! If I say silly things, the situation improves." It took me 11 years to realize what chimpanzees already know: laughing can prevent fights. So, I started acting funny as a form of protection. Thus, being silly made my life much easier.
Over time, humor became an excuse. I abandoned what I loved about philosophy and pursued what I found fun, living in a "carpe diem" mentality to escape responsibilities. I even became a military firefighter and was punished for saying that the basis of first aid was "What’s a fart to someone who’s already shit themselves?"
And with that behavior, I spent almost 19 years running from problems (Hakuna Matata!). But then I faced something I couldn't solve with jokes, and two great rationalist friends (I still don't quite understand why they valued my presence while they discussed society's problems and I talked about my big ear) encouraged me to start reasoning again and to try to balance seriousness with humor. Thank you so much, Fernando and Zé.
But it’s still really hard for me, which is why these dialogues are the best cost-benefit I’ve found to stimulate my probabilistic thinking. Do you know of any better ones?
Many thanks, Nathaly Quiste, for the artwork you designed and for the care you put into it.