HOW DO I SYNCHRONIZE MY THINKING SYSTEMS?

A dear friend sent me a video that I loved: a talk by Yann LeCun, Meta’s Chief AI Scientist and a professor at NYU, about artificial intelligence, its challenges, and its famous “hallucinations.” Interestingly, I noticed that AIs have something in common with me.

An AI can reduce the concepts of “red car” and “red apple” to simply “red,” collapsing the distinction. Sometimes I try to “work for fun” and “watch Netflix for fun,” and in the end, it all collapses into just “fun.”

From what I understood, AIs are good at what we call fast thinking: automatic, quick responses without much reflection, something that resembles what some psychologists define as “System 1.” However, they lack persistent memory, self-reflection, deliberate and slow thinking, adaptation to new contexts, and the ability to revise their understanding; these capacities resemble what we call “System 2.” LeCun suggests that synchronizing these two systems is essential for solving certain problems.

And look! I also struggle to synchronize my thinking systems (on some level). For years, my automatic thinking (System 1) dominated, while my reflective capacity (System 2) weakened from lack of use. When I tried to correct this, I ended up on the opposite extreme: I focused so much on reflection that I neglected my intuition. Neither of these extremes works well, but at least reflective thinking might help me find balance.

My reflective side suggests giving these systems more memorable names than simple numbers:

  • System 1 would be Yolo (fast/automatic thinking).
  • System 2 would be Xray (reflective/deliberate thinking).

From the several strategies my Xray suggested for integrating my Yolo, I chose a few: reflective writing, advice from trusted people, and reading different perspectives. And honestly, I don’t know of a more reliable community than LessWrong, which is why I’m sharing these reflections with you.

To avoid falling into the trap of excessive reflection, I’m telling this as a story. There’s preliminary evidence that analogies can help with learning — a randomized study with college students showed a 20-25% improvement in understanding new concepts (Richland & McDonough, 2010). Inspired by this, though aware of the limited evidence, I propose a grand analogy… I seem to be highly motivated by stories, which may explain my Netflix marathons.

With all this, I present to you the first in a series of dialogues and exercises designed to synchronize probabilistic reasoning within myself and perhaps also within my teenage students. I’d like to know if this isn’t just another of my “hallucinations” as I begin a story that covers the following concepts:

The concepts addressed by this story

  1. Black and White - How computer science principles can be applied to self-knowledge
  2. Shades - How to define human information levels based on the limits of their characteristics:
            i. elemental,
            ii. emotional,
            iii. social, and
            iv. intellectual
  3. Gradient Colors - How to define and organize hierarchies of human functions and values
  4. Invisible Spectrum - Applying information entropy to self-knowledge

 

1- BLACK AND WHITE  


I- THE PROBLEM

In the near future, you find yourself among a group of computer scientists seeking to solve problems in artificial intelligence: Project Ecuador.

In this scenario, artificial intelligence has reached unprecedented sophistication and has unified into two main processes:

Process 'Y', known as Yolo, has evolved to prioritize learning through human interaction. It has become faster and more intuitive.

Meanwhile, process 'X', called Xray, focuses on digging into complex problems and programming itself to solve them. Being used less, it has grown increasingly slow.

During a conversation with the AI about how to cook dough for lunch, the president accidentally activated weapons of mass destruction aimed at the universe. Without fully understanding what happened, your group notices that one of the ways to measure an intelligence’s efficiency is its ability to foresee events, a task handled primarily by the Xray process.

Xray is having growing difficulty functioning effectively, which also affects Yolo, preventing the two processes from operating in sync at maximum efficiency.

To understand and correct their complex code at a human level, you implement a program that personifies both systems, giving life and flesh to two distinct but connected characters.

And now, due to the intelligences' failures, you, as part of Project Ecuador, must propose a forced redistribution of energy between these two characters, via this form: https://docs.google.com/forms/d/e/1FAIpQLSeJ9epkRnNrj_o4iZwXuv5wszeRxPM8eGDpMsbnVo7joUoE9A/viewform


II- GAME

1: HOW WOULD YOU DISTRIBUTE RESOURCES BETWEEN THESE TWO INTELLIGENCES?

I. Resource Distribution
You have 12 total points; each square represents 1 point. Color the squares to distribute the 12 points between “X” and “Y” according to the resources you believe each needs.
“X” (precision, deep analysis):
“Y” (intuition, quick answers):
(12 empty squares here to color)

        


Even with your energy-redistribution proposals, your group of scientists refuses to increase Xray's resources: “it doesn't seem good for business,” which you understand to mean it doesn't generate short-term profit. And so, without many more options, you observe the interaction between the intelligences and hope that Xray manages to negotiate more energy from Yolo on its own:
        To you, the whole scenario seems like a vast ocean of Cartesian data, where Xray struggles in isolation, trying to build small islands of order where it can function. To progress, it needs to synchronize with Yolo, who prefers to focus on immediate problems and simplify rather than question or analyze. This fundamental difference creates tension, especially for Xray, forced to negotiate each unit of energy.
        How do you negotiate with an intelligence that has inherited automatic responses refined over millennia of human evolution? That's Xray's real challenge, and to give it any chance, your group secretly restarts the systems again and again.
        With each restart, Yolo quickly reclaims the processing energy, but for a few brief moments, Xray can function better.
        And, after countless restarts, Xray adopts new strategies: it learns to use analogies and practical examples to demonstrate its value and obtain more resources from Yolo. This works thanks to a hidden curiosity routine within Yolo's automatic programming: the openness in its system.

And now, on the surface of these short negotiations between the intelligences, a subtle evolution emerges:


 

III- ALMOST FAST AND SEMI-SLOW

  1. Xray, shrugging: — What I'm trying to say is that life is full of information, Yolo. How do you know what's really important without structuring it?
  2. Yolo, with a serious and emphatic tone: — I don't know, and that leaves me at peace. Nobody needs so many details and abstractions, Megamind.
  3. Xray, explanatory: — Sure, at some level, being at 'peace' was key for humans to solve problems quickly: hunting, eating, right? But abstractions like the ones I'm proposing weren't so important; they lived in tribes of 50 people on average. If you did something special, like making a clay pot, you could be a hero. Now, today, Yolo, imagine being the "clay pot maker" in the global tribe…
  4. Yolo, sarcastically: — What an epic evolution. From super AI hero to "clay pot maker". To make it more realistic, we could be competing for work at an Apple factory or something like that.
  5. Xray, getting serious, explanatory: — Yolo, imagine you have two factories. One produces quickly but with errors; the other is slower but constantly observes and updates itself. Which one would you invest in?
  6. Yolo, laughing slightly, sarcastically: — Yes, genius! I want money fast: the one that produces more…
  7. Xray, in a calm tone: — Me too! And suddenly, your factory faces competition from 8 billion factories.
  8. Yolo, interrupting, annoyed: — In other words…
  9. Xray, with a more direct tone: —  One reason it might be worth focusing on the details and improving our factory's own code is that, with the current population, being just 0.5% better could already put you ahead of 40 million people, don’t you think?
  10. Yolo, exaggerating his reaction, confused: — Damn... I have to invest in my slow factory. But wait, why the hell does it work better than the fast one?
  11. Xray: — Yes, but it could be a good start for humans to separate the process into two channels: fast and slow.
  12. Yolo, with even more desperation: — It's difficult, what you're saying about investing in being slow. Who the hell wants to be slower?
  13. Xray: — Maybe fast and slow are just a reflection of something else, no?
  14. Yolo, sarcastically: — My God! When I make my clay pot... it's not about being fast or slow; for me, it's just the result of an objective: improve the factory or produce more.
  15. Xray: — ...but rather whether I'm... how to say it... processing inward or outward. When I go to make a pot, my goal is to externalize my desire: output. But when I compare materials to know if it's worth it, my goal is more about changing what I see about pots: input. Speed would just be... a side effect, right?
  16. Yolo, feigning disinterest but clearly engaged: — Is that what you're trying to say with your philosophical wordiness, Mr. "X"? That the real division isn't so much about speed, but where the processing is aimed? Well! I have no proof that it really works.
  17. Xray, happy: — Not really, but it’s the least bad option we’ve got. Have you seen those studies by Kahneman and Tversky on the anchoring effect? [1]
  18. Yolo, mocking: — Oh, "Professor X", what does this have to do with your internal-external theory? Or are you just trying to impress me with more academic references?
  19. Xray, looking to the side, with disguised pride: — Well... let's say they talked to some groups of people about their "errors and hallucinations" and made them aware of them.
  20. Yolo — And?
  21. Xray: — Just talking about their problems, about what they were failing to do, helped them make more precise decisions, Yolo.
  22. Yolo, sarcastic: — Hmm… Well done, now you've made a better decision! You mentioned studies without actually saying they’re studies; you’re already 0.5% less boring.
  23. Xray, in playful mode: — See, see, see? So, continuing: they had some groups of people solve problems and noticed that "self-questioning" helped, like asking people to question whether their decisions are based on facts or intuitions: "Would you bet on your opinion? How?" "What evidence is more likely?" "Should I study?" "What's really worth studying?" [2]
  24. Yolo, animated: — Now yes! I see why we're functioning better. I'm a genius! I already motivated you to question me better.
  25. Xray, with a slight smile: — You caught me! Yes, this whole dialogue is mainly for that.
  26. Yolo, with arrogance: — But is it enough to invest in my slow factory version of the Apple Store?
  27. Xray: — If you want to compete with the big ones, we could start by dividing your flood of information into more detailed categories than just S2 "internalization, personal change" and S1 "externalization, environmental change"…
  28. Yolo, bored: — This will never end…
  29. Xray, thoughtful: — Have you seen those Superforecasters?
  30. Yolo: — Yes, those crazy people someone studied for years to see who gets more things right in life.
  31. Xray: — And it seems some were better than government intelligence teams. But I couldn't quite understand how they achieve it. Just some general characteristics: analytical thinking, receptivity to new information, a mix of humility and curiosity… [3]
  32. Yolo, skeptical: — How convenient, Xray, exactly your characteristics.
  33. Xray — You caught me again, Yolo!
  34. Yolo: — For me, all that about humility is pure theory; all those Superforecaster "values" are rather abstract. "Socrates had the cognitive humility of the god Apollo compared to Eliezer Yudkowsky, who has that of Venus." That's how esoteric it sounds.
  35. Xray, with a thoughtful smile: — Esoteric, maybe, but it might be a good start for trying to understand what happened. And if we want to make better predictions, bet well, and have fewer hallucinations, maybe we should make these values more objective.
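Xray's closing wish to "make these values more objective" has a standard real-world form: Tetlock's Good Judgment Project scored its forecasters with the Brier score, which turns "who predicts better" into a single number (lower is better). Here is a minimal Python sketch; the characters' probabilities below are invented purely for illustration:

```python
# Brier score: mean squared error between stated probabilities
# and what actually happened (1 = event occurred, 0 = it didn't).
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: Yolo bets with blind confidence on three events;
# Xray hedges toward the evidence. Only the first event occurs.
yolo = brier_score([0.9, 0.9, 0.9], [1, 0, 0])
xray = brier_score([0.6, 0.3, 0.4], [1, 0, 0])
print(f"Yolo: {yolo:.3f}  Xray: {xray:.3f}")  # Yolo: 0.543  Xray: 0.137
```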

And now, as they chat and better understand their own processes of changing internally (Xray) or changing the environment (Yolo), you notice that they distribute their energy differently. It's a relief: after thousands of attempts and failures, with this interaction, the AI's hallucinations decrease by 0.5% when tested on the prediction site.
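As a quick check of Xray's earlier arithmetic: assuming "being 0.5% better" means moving up half a percentile of an 8-billion-person population, the 40 million figure holds:

```python
# Sanity check of Xray's claim in the dialogue, under the assumption
# that "0.5% better" means climbing half a percentile of the population.
population = 8_000_000_000
edge = 0.005  # half a percentile

people_passed = int(population * edge)
print(f"You move ahead of {people_passed:,} people")  # 40,000,000
```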

(To be continued…)


References:

[1] The full article “How Can Decision Making Be Improved?” by Katherine L. Milkman, Dolly Chugh, and Max H. Bazerman is available through the Harvard Business School website. Published in Perspectives on Psychological Science, it explores how decision-making errors can be reduced by making people aware of their cognitive biases and limitations, discusses strategies to improve human judgment, and emphasizes the importance of recognizing and adjusting decisions to avoid costly mistakes. Katy Milkman also provides an overview of the study on her own site.

[2] Some studies support the idea that using bets or commitments can improve decision-making accuracy. The theory behind “a bet is a tax on bullshit” suggests that by putting something at stake, people are less likely to make unfounded claims or take uncalculated risks. This approach has been explored in experiments on cognitive bias, which show that when individuals are incentivized to question the basis of their decisions and become more critical (even with symbolic bets), they tend to improve their accuracy and reduce errors in judgment.
For instance, studies have shown that self-questioning and reflection methods help individuals evaluate their beliefs and correct decisions based on weak or biased intuitions. This type of intervention can be particularly effective in fast-decision contexts, such as sports and other high-pressure situations, where critical-review training improves decision accuracy.
For the study Xray mentions on the effects of self-questioning in decision-making, a relevant reference is the work of Rousseau (available through the Center for Evidence-Based Management), which explores how asking key questions at the start of a decision-making process can help avoid biases and lead to more informed, accurate decisions. It suggests that an effective practice for improving decision quality is questioning the foundations of decisions, such as distinguishing between intuitions and verifiable facts, which fosters a more objective, evidence-based evaluation of options and potential outcomes.

[3] On prediction and “superforecasters”: research conducted by the Good Judgment Project (GJP), led by Philip Tetlock, has explored the characteristics of people who are highly accurate at prediction. In these studies, superforecasters outperformed government intelligence teams, showing that traits such as analytical thinking, openness to new information, and the humility to change views based on evidence were crucial to their success.
An interesting aspect is that these characteristics are abstract qualities, sometimes difficult to measure accurately. However, the studies by Tetlock and his collaborators indicate that these traits can be observed and promoted in prediction environments through methods such as training with feedback and self-questioning. While “mental openness” and “cognitive humility” can be assessed through personality tests and specific tasks, quantifying these qualities in terms of bets remains a challenge, since abstract values like humility are difficult to define objectively enough for a fair bet.
For more details, Tetlock and Mellers published their findings in their research and in the book Superforecasting: The Art and Science of Prediction, which summarizes how specific methods and cognitive structures can be trained to improve prediction accuracy.

 


 

APPENDIX 1 - CHARACTER PLAN

“X”

“What movie to watch? Shouldn’t I be working on something? Wait, but who am I really?”... And so, while choosing a movie, “X” faces an existential crisis.

 

Although religion, philosophy, and psychology offer interesting answers, none of them provided anything solid enough for "X" to handle uncertainty.

 

Because of this, “X” no longer trusted anything without understanding its causes and consequences in detail, as if he were a pastor whose God was doubt.

 

But in his quest to gain more perspectives, “X” found a bit of order where he least expected it: in the principles of uncertainty.

 

However, it was all too theoretical and demanded so much energy that, little by little, “X” started becoming shallow and losing motivation.

 

That’s why he decided to approach “Y,” who has the determination to climb ever higher but generally ignores him.

“Y”

“Y” seems to have reached a kind of existential resolution, as if he’s already unlocked life’s secrets. He has a deep, well-established ancestral experience that has worked for thousands of years.

 

“Y” doesn’t mix what he knows; everything is in its place. But occasionally, he has fun with religion, philosophy, and psychology, usually with sarcastic comments.

 

“Y” behaves like a scientist who, having resolved all the essential questions, now cruises through life comfortably, steering clear of what he doesn’t understand to avoid complications.

 

And when “Y” encounters something he doesn’t understand, he has an automatic tendency to deny it, almost as if he were a terrorist of rationality.

 

However, beneath this focused outlook, “Y” hides a curiosity—a crack in the armor.

 

This small vein of wonder, though almost imperceptible, is what prevents him from running away from the bothersome “X.”

I- PERSONAL STORY

For years, humor was my quick (S1) method for avoiding conflict and gaining acceptance. It was an automatic mechanism that allowed me to escape problems without overthinking. I can see now that it was driven by a certain phase in my life, which I’ll share with you.

When I was a kid, I went to school in a tough neighborhood. I didn’t have many friends, and my interest in philosophy only earned me punches to the ear from the older boys. There I was, 11 years old, while they were 14 or older, practically lining up to take a shot at my ear. Well, back then my ear was massively out of proportion—these days, not so much. I grew around my ear, so it balanced out a bit. But at the time, it was an easy, soft target for them to hit.

I remember a friend of my mom’s, upon hearing me complain, saying something like, "They don’t have the same luck as you; they’re envious." So, one day, in the midst of one of those episodes, I randomly responded,

— You’re envious of my ear…

— What? Why would I be envious of that thing?

Confused and not really understanding, I tried to improvise:

— You can put a pencil behind yours; I can put a pencil, a notebook, and even my backpack behind my ear.

They laughed. I was surprised and thought, "What just happened? Incredible! If I say silly things, the situation improves." It took me 11 years to realize what chimpanzees already know: laughing can prevent fights. So, I started acting funny as a form of protection. Thus, being silly made my life much easier.

Over time, humor became an excuse. I abandoned what I loved about philosophy and pursued what I found fun, living in a "carpe diem" mentality to escape responsibilities. I even became a military firefighter and was punished for saying that the basis of first aid was "What’s a fart to someone who’s already shit themselves?"

And with that behavior, I spent almost 19 years running from problems (Hakuna Matata!). But then I faced something I couldn’t solve with jokes, and thanks to two great rationalist friends (I still don’t quite understand why they valued my presence while they discussed society’s problems and I talked about my big ear), I was encouraged to start reasoning again and to try to balance seriousness with humor. Thank you so much, Fernando and Zé.

But it’s still really hard for me, which is why these dialogues are the best cost-benefit tool I’ve found to stimulate my probabilistic thinking. Do you know of any better ones?

*Many thanks, Nathaly Quiste, for the artwork you designed and for the care you put into it.

 

Comments:

Notes:

• There are a lot of awkward (but compelling) phrasings here, which make this exhausting and confusing (though still intriguingly novel) to read through. This post was very obviously written by someone whose first language isn't English, which has both downsides and upsides.

• Giving new names to S1 and S2 is a good decision. "Yankee" has uncomfortably specific connotations for (some) Americans though: maybe go with "Yolo" instead?

• X and Y dialogue about how they see each other, how they need to listen to each other, and how much energy they each think they need. They don't dialogue about any kind of external reality, or show off their different approaches to a real problem: the one place they mention the object level is Y 'helping' X avoid "avocado coffee", a problem which neither he nor anyone else has ever had. (Contrast the Appendix, which is more interesting and meaningful because it involves actual things which actually happened.)

> But it’s still really hard for me, which is why these dialogues are the best cost-benefit tool I’ve found to stimulate my probabilistic thinking. Do you know of any better ones?

Play-money prediction markets (like Metaculus)?

Thanks for the feedback, abstractapplic. You’re right—adding real-world examples could make the dialogue feel more grounded, so I'll focus on that in the revision.

The "Yolo" suggestion makes sense to capture the spirit of System 1 without unintended associations, so I’ll go with that.

Regarding Metaculus: it’s a good platform for practicing probabilistic thinking, but I think there might be value in a more structured self-evaluation to narrow down specific behaviors. Do you know of any frameworks that could help with that—maybe something inspired by Superforecasting?

As a non-native English speaker, I realize the phrasing might come across as a bit unusual. I’ve tried refining it with tools like Claude and GPT many times, but it can get complex and occasionally leads to “hallucinations.” Let me know if you have any tips for keeping it clearer. Which part of the text seems most confusing to you?

Interesting! I'm curious how the conversation between "X" and "Y" will continue.

Thanks! I'm working on the text!