Notes:
- There are a lot of awkward (but compelling) phrasings here, which make this exhausting and confusing (though still intriguingly novel) to read through. This post was very obviously written by someone whose first language isn't English, which has both downsides and upsides.
- Giving new names to S1 and S2 is a good decision. "Yankee" has uncomfortably specific connotations for (some) Americans though: maybe go with "Yolo" instead?
- X and Y dialogue about how they see each other, how they need to listen to each other, and how much energy they each think they need. They don't dialogue about any kind of external reality, or show off their different approaches to a real problem: the one place they mention the object level is Y 'helping' X avoid "avocado coffee", a problem which neither he nor anyone else has ever had. (Contrast the Appendix, which is more interesting and meaningful because it involves actual things which actually happened.)
But it’s still really hard for me, which is why these dialogues are the best cost-benefit I’ve found to stimulate my probabilistic thinking. Do you know of any better ones?
Play-money prediction markets (like Metaculus)?
Thanks for the feedback, abstractapplic. You’re right—adding real-world examples could make the dialogue feel more grounded, so I'll focus on that in the revision.
The "Yolo" suggestion makes sense to capture the spirit of System 1 without unintended associations, so I’ll go with that.
Regarding Metaculus: it’s a good platform for practicing probabilistic thinking, but I think there might be value in a more structured self-evaluation to narrow down specific behaviors. Do you know of any frameworks that could help with that—maybe something inspired by Superforecasting?
As a non-native English speaker, I realize the phrasing might come across as a bit unusual. I’ve tried refining it with tools like Claude and GPT many times, but it can get complex and occasionally leads to “hallucinations.” Let me know if you have any tips for keeping it clearer. Which part of the text seems most confusing to you?
Interesting! I'm curious how the conversation between "X" and "Y" will continue.
HOW TO SYNCHRONIZE MY THINKING SYSTEMS?
A dear friend sent me a video that I loved: a talk by Yann LeCun, Meta’s Chief AI Scientist and a professor at NYU, about artificial intelligence, its challenges, and its famous “hallucinations.” Interestingly, I noticed that AIs have something in common with me.
An AI can reduce the concepts of “red car” and “red apple” to simply “red” and then collapse. Sometimes I try to “work for fun” and “watch Netflix for fun,” and in the end, it all collapses into just “fun.”
From what I understood, AIs are good at what we call fast thinking: automatic, quick responses without much reflection — something that resembles what some psychologists define as "System 1." However, they lack their own memory, self-reflection, deliberate and slow thinking, adaptation to new contexts, and the ability to revise their understanding. These missing capacities more closely resemble what we call "System 2." LeCun suggests that synchronizing these two systems is essential to solve certain problems.
And look! I also struggle to synchronize my thinking systems (on some level). For years, my automatic thinking (System 1) dominated, while my reflective capacity (System 2) weakened from lack of use. When I tried to correct this, I ended up on the opposite extreme: I focused so much on reflection that I neglected my intuition. Neither of these extremes works well, but at least reflective thinking might help me find balance.
My reflective side suggests giving these systems more memorable names than simple numbers: the quick, intuitive System 1 becomes "Yolo," and the slow, reflective System 2 becomes "Xray."
After several suggestions from my Xray on ways to integrate my Yolo, I chose a few: reflective writing, advice from trusted people, and reading different perspectives. And honestly, I don't know of any community more reliable than LessWrong, which is why I'm sharing these reflections with you.
To avoid falling into the trap of excessive reflection, I’m telling this as a story. There’s preliminary evidence that analogies can help with learning — a randomized study with college students showed a 20-25% improvement in understanding new concepts (Richland & McDonough, 2010). Inspired by this, though aware of the limited evidence, I propose a grand analogy… I seem to be highly motivated by stories, which may explain my Netflix marathons.
With all this, I present to you the first in a series of dialogues and exercises designed to synchronize probabilistic reasoning within myself and perhaps also within my teenage students. I’d like to know if this isn’t just another of my “hallucinations” as I begin a story that covers the following concepts:
The concepts addressed by this story:
i. elemental,
ii. emotional,
iii. social, and
iv. intellectual
1- BLACK AND WHITE
I- THE PROBLEM
In the near future, you find yourself among a group of computer scientists seeking to solve problems in artificial intelligence: Project Ecuador.
In this scenario, artificial intelligence has reached unprecedented sophistication and unified into two main processes:
Process 'Y', known as Yolo, has evolved by prioritizing learning through human interactions. It has become faster and more intuitive.
Meanwhile, process 'X', called Xray, focuses on digging deep into complex problems and programming itself to solve them. Being used less, it is becoming increasingly slow.
During a conversation with the AI about how to cook dough for lunch, the president accidentally activated weapons of mass destruction aimed at the universe. Without fully understanding what happened, your group notices that one of the many ways to measure an intelligence's efficiency is its ability to foresee events, a task handled primarily by the Xray process.
Xray is experiencing growing difficulty functioning effectively, which also affects the Yolo process and prevents the two from operating in sync at maximum efficiency.
To understand and correct their complex code at a human level, you implement a program that personifies both systems, giving life and flesh to two distinct but connected characters.
And now, due to the intelligence's failures, you, as part of Project Ecuador, must propose a forced redistribution of energy between these two characters, using the form at this link: https://docs.google.com/forms/d/e/1FAIpQLSeJ9epkRnNrj_o4iZwXuv5wszeRxPM8eGDpMsbnVo7joUoE9A/viewform
II- GAME
1: HOW WOULD YOU DISTRIBUTE RESOURCES BETWEEN THESE TWO INTELLIGENCES?
I. Resource Distribution
You have 12 total points. Each square represents 1 point. Color the squares to distribute the 12 points between "X" and "Y" according to the resources you believe each needs.
"X" (Precision, deep analysis):
"Y" (Intuition, quick answers):
(12 empty squares here to color)
Even with your energy redistribution proposals, your group of scientists refuses to increase Xray's resources: "it doesn't seem good for business," which you understand to mean it doesn't generate short-term profit. And so, with few options left, you observe the interaction between the intelligences and hope that Xray manages to negotiate more energy from Yolo on its own:
To you, the whole scenario seems like a vast ocean of Cartesian data, where Xray struggles in isolation, trying to build small islands of order where it can function. To progress, it needs to synchronize with Yolo, who prefers to focus on immediate problems and simplify rather than question or analyze. This fundamental difference creates tension, especially for Xray, which is forced to negotiate for each unit of energy.
How do you negotiate with an intelligence that has inherited automatic responses that proved more efficient for humans over millennia of evolution? That is Xray's real challenge, and to give it any chance, the scientists secretly restart the systems again and again.
With each restart, Yolo quickly regains processing energy, but for a few brief moments Xray can function better.
And, after countless restarts, Xray adopts new strategies: it learns to use analogies and practical examples to demonstrate its value and obtain more resources from Yolo. This works because Yolo carries a hidden curiosity routine within its automatic programming, an openness in its system.
And now, on the surface of these brief negotiations between the intelligences, a subtle evolution emerges:
III- ALMOST FAST AND SEMI-SLOW
And now, as they chat and better understand their own processes (Xray changing internally, Yolo changing the environment), you notice that they distribute their energy differently. It's a relief that, after thousands of attempts and failures, this interaction reduces AI hallucinations by 0.5% when tested on the prediction site.
(To be continued…)
References:
[1] To access the full article "How Can Decision Making Be Improved?" by Katherine L. Milkman, Dolly Chugh, and Max H. Bazerman, you can visit the Harvard Business School website. This article, published in Perspectives on Psychological Science, explores how decision-making errors can be reduced by making people aware of their cognitive biases and limitations. It discusses strategies to improve human judgment and emphasizes the importance of recognizing and adjusting decisions to avoid costly mistakes.
You can also find more information about this study on Katy Milkman's website, where she provides an overview and a link to the full text in PDF.
[2] There are some studies supporting the idea that using bets or commitments can improve decision-making accuracy. The theory behind "A Bet is a Tax on Bullshit" suggests that by putting something at stake, people are less likely to make unfounded claims or take uncalculated risks. This approach has been explored in experiments and studies on cognitive bias, showing that when individuals are incentivized to question the basis of their decisions and become more critical (even with symbolic bets), they tend to improve their accuracy and reduce errors in judgment.
For instance, studies have shown that self-questioning and reflection methods help individuals evaluate their beliefs and correct decisions based on weak or biased intuitions. This type of intervention can be particularly effective in fast-decision contexts, such as sports and other high-pressure situations, where critical review training improves decision accuracy.
For the study Xray mentions on the effects of self-questioning in decision-making, a relevant reference is the work of Rousseau (see the Center for Evidence-Based Management), who explores how asking key questions at the beginning of a decision-making process can help avoid biases and lead to more informed and accurate decisions. This work suggests that an effective practice for improving decision quality is to question the foundations of decisions, such as distinguishing between intuitions and verifiable facts, which fosters a more objective and evidence-based evaluation of options and potential outcomes.
[3] On studies about prediction and "superforecasters," research conducted by the Good Judgment Project (GJP), led by Philip Tetlock, has explored the characteristics that define people who are highly accurate in their predictions. In these studies, "superforecasters" outperformed government intelligence teams, showing that traits such as analytical thinking, openness to new information, and humility or willingness to change views based on evidence were crucial to their success.
An interesting aspect is that these characteristics are abstract qualities that can be difficult to measure accurately. However, the studies by Tetlock and his collaborators indicate that these traits can be observed and promoted in prediction environments through methods such as training feedback and self-questioning. While "mental openness" and "cognitive humility" can be assessed through personality tests and specific tasks, quantifying these qualities in terms of bets remains a challenge, as abstract values like humility are difficult to define objectively for a fair bet.
For more details, see the research published by Tetlock and Mellers, as well as the book Superforecasting: The Art and Science of Prediction, which summarizes how certain methods and specific cognitive structures can be trained to improve prediction accuracy.
APPENDIX 1 - CHARACTER PLAN
“What movie to watch? Shouldn’t I be working on something? Wait, but who am I really?”... And so, while choosing a movie, “X” faces an existential crisis.
Although religion, philosophy, and psychology offer interesting answers, none provided anything solid enough for "X" to handle uncertainty.
Because of this, "X" no longer trusted anything without understanding its causes and consequences in detail, as if he were a pastor whose God was doubt.
But in his quest to gain more perspectives, “X” found a bit of order where he least expected it: in the principles of uncertainty.
However, it was all too theoretical and demanded so much energy that, little by little, "X" started becoming shallow and losing motivation.
That’s why he decided to approach “Y,” who has the determination to climb ever higher but generally ignores him.
“Y” seems to have reached a kind of existential resolution, as if he’s already unlocked life’s secrets. He has a deep, well-established ancestral experience that has worked for thousands of years.
“Y” doesn’t mix what he knows; everything is in its place. But occasionally, he has fun with religion, philosophy, and psychology, usually with sarcastic comments.
“Y” behaves like a scientist who, having resolved all essential questions, now cruises through life comfortably, avoiding what he doesn’t understand to avoid complications.
And when “Y” encounters something he doesn’t understand, he has an automatic tendency to deny it, almost as if he were a terrorist of rationality.
However, beneath this focused outlook, “Y” hides a curiosity—a crack in the armor.
This small vein of wonder, though almost imperceptible, is what prevents him from running away from the bothersome “X.”
I- PERSONAL STORY
For years, humor was my quick (S1) method for avoiding conflict and gaining acceptance. It was an automatic mechanism that allowed me to escape problems without overthinking. I can see now that it was driven by a certain phase in my life, which I’ll share with you.
When I was a kid, I went to school in a tough neighborhood. I didn’t have many friends, and my interest in philosophy only earned me punches to the ear from the older boys. There I was, 11 years old, while they were 14 or older, practically lining up to take a shot at my ear. Well, back then my ear was massively out of proportion—these days, not so much. I grew around my ear, so it balanced out a bit. But at the time, it was an easy, soft target for them to hit.
I remember a friend of my mom’s, upon hearing me complain, saying something like, "They don’t have the same luck as you; they’re envious." So, one day, in the midst of one of those episodes, I randomly responded,
— You're envious of my ear…
— What? Why would I be envious of that thing?
Confused and not really understanding, I tried to improvise:
— You can put a pencil behind yours; I can put a pencil, a notebook, and even my backpack behind my ear.
They laughed. I was surprised and thought, "What just happened? Incredible! If I say silly things, the situation improves." It took me 11 years to realize what chimpanzees already know: laughing can prevent fights. So, I started acting funny as a form of protection. Thus, being silly made my life much easier.
Over time, humor became an excuse. I abandoned what I loved about philosophy and pursued what I found fun, living in a "carpe diem" mentality to escape responsibilities. I even became a military firefighter and was punished for saying that the basis of first aid was "What’s a fart to someone who’s already shit themselves?"
And with that behavior, I spent almost 19 years running from problems (Hakuna Matata!). But then I faced something I couldn't solve with jokes, and thanks to two great rationalist friends (I still don't quite understand why they valued my presence while they discussed society's problems and I talked about my big ear), I was encouraged to start reasoning again and to try to balance seriousness with humor. Thank you so much, Fernando and Zé.
But it’s still really hard for me, which is why these dialogues are the best cost-benefit I’ve found to stimulate my probabilistic thinking. Do you know of any better ones?
*Many thanks, Nathaly Quiste, for the artwork you designed and for the care you put into it.