How can I balance and improve my thinking systems?

 

A dear friend sent me a video that I loved: a talk by Yann LeCun, Chief AI Scientist at Meta and NYU professor, about artificial intelligence, its problems, and its hallucinations. And I noticed: I also have problems and hallucinations, so it seems this applies to me.

From what I understand, AIs are good at System 1 thinking: fast, automatic, programmed responses without deep reflection. What they lack is System 2 thinking: slow, deliberate reasoning, the capacity for self-reflection, adaptation to new contexts, and reformulation of understandings. And look at that! It seems I lack System 2 too. How can I balance this?

Among various proposals for improving System 2, I selected a few: reflective writing, counsel and feedback from trusted people, and reading different perspectives. And well, I don't know a more reliable community than LessWrong, which is why I'm writing to you. Additionally, analogies seem to help a lot in understanding complex systems more deeply, so I'm writing a big analogy.

This series of dialogues and games is designed to stimulate probabilistic reasoning in my teenage students and in myself. For me, these dialogues represent a personal challenge: overcoming those 'elegant and rebellious excuses' that sometimes make it difficult for me to dedicate small moments to making better decisions.

So, to better invest in my life – and ideally in the market – I write to achieve a better balance between my thinking systems. I present to you the start of a story that progresses from the simplest duality:

The concepts addressed by the story:

  1. Black and White - How computer science principles can be applied to self-knowledge
  2. Shades - How to define human information levels based on their characteristic limits
            i. elemental,
            ii. emotional,
            iii. social, and
            iv. intellectual
  3. Gradient Colors - How to define and organize hierarchies of human functions and values
  4. Invisible Spectrum - A prequel on applying information entropy to self-knowledge

 

1- BLACK AND WHITE  

Which computer science principles can be applied to self-knowledge?

I-ALMOST FAST AND SEMI-SLOW

In the near future, you find yourself among a group of computer scientists. AI systems have reached unprecedented sophistication, splitting into two distinct entities that mirror our own thinking systems:

  • Y (System 1): fast, intuitive, automatic
  • X (System 2): slow, analytical, reflective

'Y', known as Yankee, has evolved by prioritizing learning through human interactions. It has become faster and more intuitive, though consuming increasingly more energy in the process.

Meanwhile, 'X', called Xray, is less developed; it focuses on diving deep into complex problems and programming itself to solve them. Its capacity is limited by Yankee's high energy demands.

Your group notices that Xray struggles to function effectively, and consequently so does Yankee. To understand and correct its complex code, they have implemented a program that personifies both systems, bringing to life two distinct characters:

“X”

“What movie to watch? Shouldn’t I be working on something? Wait, but who am I really?”... And so, while choosing a movie, “X” faces an existential crisis.

 

Although religion, philosophy, and psychology offer interesting answers, none provided anything solid enough for "X" to handle uncertainty.

 

Because of this, “X” no longer trusted anything without understanding its causes and consequences in detail, as if he were a pastor whose God was doubt.

 

But in his quest to gain more perspectives, “X” found a bit of order where he least expected it: in the principles of uncertainty.

 

However, it was all too theoretical and demanded so much energy that, little by little, “X” started becoming shallow and losing motivation.

 

That’s why he decided to approach “Y,” who has the determination to climb ever higher but generally ignores him.

“Y”

“Y” seems to have reached a kind of existential resolution, as if he’s already unlocked life’s secrets. He has a deep, well-established ancestral experience that has worked for thousands of years.

 

“Y” doesn’t mix what he knows; everything is in its place. But occasionally, he has fun with religion, philosophy, and psychology, usually with sarcastic comments.

 

“Y” behaves like a scientist who, having resolved all essential questions, now cruises through life comfortably, avoiding what he doesn’t understand to avoid complications.

 

And when “Y” encounters something he doesn’t understand, he has an automatic tendency to deny it, almost as if he were a terrorist of rationality.

 

However, beneath this focused outlook, “Y” hides a curiosity—a crack in the armor.

 

This small vein of wonder, though almost imperceptible, is what prevents him from running away from the bothersome “X.”

 

Now you, reader, can see why it’s important to propose a new energy distribution ratio for 'X' and 'Y' that isn’t based solely on popularity. Take a look at the game below.

GAME 1: How Would You Allocate Resources Between These Two Intelligences?
I. Resource Distribution: You have 12 points in total. Each square represents 1 point. Color the squares to allocate the 12 points between "X" and "Y," according to the resources you believe each one needs.
"X" (Precision, deep analysis):
"Y" (Intuition, quick responses):

Despite your proposals for energy redistribution, your group of scientists refuses to increase Xray’s resources; they don’t see it as beneficial for the business, and you understand why: it doesn’t generate short-term profit. So you observe the interaction between the two intelligences, rooting for Xray to somehow negotiate more energy from Yankee.

In a vast ocean of Cartesian data, Xray struggles in isolation, trying to build small islands of order where it can function. To progress, it needs to harness the energy and inherited knowledge of Yankee, who prefers to focus on immediate problems and simplification rather than questioning and analysis. This fundamental difference creates tension, especially for Xray, which is forced to negotiate for every unit of energy.

How do you negotiate with an intelligence that has inherited millennia of human evolution? This is Xray's real challenge, and your task is to analyze how to distribute energy between both intelligences.

After countless integration attempts, Xray adapts its strategy: it learns to use analogies and practical examples to demonstrate its value and gain more resources from Yankee. Though Yankee maintains its characteristic sarcasm, a subtle evolution emerges beneath their exchanges - the fruit of countless negotiations between deep analysis and intuition. And so they talk:

  1. Yankee, with a deep, irritated look:
    — Oh, sure... you want to organize your thoughts as if they were codes.
  2. Xray, with his habitual probabilistic care:
    — Exactly - in some proportion - Yankee.
  3. Yankee, raising an eyebrow, with a deep, annoyed look:
    — How exactly do you plan to "split" this flood of codes? Do you think you’re Moses or something?
  4. Xray, smiling softly, in a contemplative tone:
    — Loved the Moses reference, Yankee. Though I'm not sure Moses knew how to write code.
  5. Yankee, mockingly:
    — And?
  6. Xray, still smiling softly, in a contemplative tone:
    — About organizing processes, Yankee. Maybe we’d have a starting point if I split – in some proportion – that “flood” into two categories. Like separating the waters: S1, the automatic information; and S2, the slower, more deliberate kind.
  7. Yankee, with sarcasm and a hint of disdain:
    — Nothing new... the scientists already pulled that one out of their hat.
  8. Xray, nodding, with a calm, reflective tone:
    — So it seems, Yankee. But that research did get the Oscar of science.
  9. Yankee, looking at him in disbelief:
    — The Oscar? Oh, then we're saved. I assume you used some formula to avoid thinking—what you meant was the Nobel.
  10. Xray, shrugging:
    — So it seems, Yankee, but I thought you'd resonate more if I referenced an award you see on TV...
  11. Yankee, with a dry laugh:
    — Ha... Ha... Ha... Very funny, Xsitcom. Now you’re a scientific comedian... and what’s more, the one who parted the Red Sea.
  12. Xray, with a mischievous, deep look:
    — Fast and slow... like two ways to laugh at a joke.
  13. Yankee, challenging:
    — Sure, either genuinely or with sarcasm, like me now.
  14. Xray, a bit unsure but smiling:
    — Honestly, I love your sarcasm; it gives me energy to think about your message.
  15. Yankee, mockingly:
    — Wow, you really settled for the bare minimum, Xray.
  16. Xray, suddenly serious:
    — On the other hand, I have the ambition to better organize millions of processes with you, Yankee.
  17. Yankee, incredulously:
    — You go from an attention beggar to a process millionaire... it really busts my nuts…
  18. Xray, focusing with a gentle imagination:
    — Yes, but I get on your nuts with order. Each nut has its turn: fast or slow. And maybe a more proportional division could be a starting point to better organize our functions, identities, and values, and to further improve our goals, routines, and tasks…
  19. Yankee, standing up with feigned resignation:
    — To me, it just sounds like you’re afraid of becoming obsolete, and here you are trying to impress me with Moses' analogies. Why would I even give you my energy? What are you really good for, Mister “X”?
  20. Xray, chuckling:
    — Imagine you want to make money by selling coffee. You could just open a random coffee shop, or you could process more information before deciding: study 300 coffee shops in detail and then open one that sells avocado coffee because, even though they seem incompatible, in this city they’re both equally popular.
  21. Yankee, feeling relieved:
    — Finally, something interesting to snack on…
  22. Xray, in a casual tone:
    — When you’ve got thousands of people doing everything, details might be crucial. Or am I just talking nonsense? Think about it—if you’re even just 0.5% better, you could be ahead of 40 million people, based on my calculations. Not bad, right? (A quick sketch of that arithmetic follows the dialogue.)
  23. Yankee, with a mischievous smile:
    — Okay, fine, Herbalife salesman, if there’s free avocado coffee, I can see how to help you.

(To be continued...)



APPENDIX 1: Personal Story

For years, humor was my quick (S1) method for avoiding conflict and gaining acceptance. It was an automatic mechanism that allowed me to escape problems without overthinking. I can see now that it was driven by a certain phase in my life, which I’ll share with you.

When I was a kid, I went to school in a tough neighborhood. I didn’t have many friends, and my interest in philosophy only earned me punches to the ear from the older boys. There I was, 11 years old, while they were 14 or older, practically lining up to take a shot at my ear. Well, back then my ear was massively out of proportion—these days, not so much. I grew around my ear, so it balanced out a bit. But at the time, it was an easy, soft target for them to hit.

I remember a friend of my mom’s, upon hearing me complain, saying something like, "They don’t have the same luck as you; they’re envious." So, one day, in the midst of one of those episodes, I randomly responded,

— You’re envious of my ear…

— What? Why would I be envious of that thing?

Confused and not really understanding, I tried to improvise:

— You can put a pencil behind yours; I can put a pencil, a notebook, and even my backpack behind my ear.

They laughed. I was surprised and thought, "What just happened? Incredible! If I say silly things, the situation improves." It took me 11 years to realize what chimpanzees already know: laughing can prevent fights. So, I started acting funny as a form of protection. Thus, being silly made my life much easier.

Over time, humor became an excuse. I abandoned what I loved about philosophy and pursued what I found fun, living in a "carpe diem" mentality to escape responsibilities. I even became a military firefighter and was punished for saying that the basis of first aid was "What’s a fart to someone who’s already shit themselves?"

And with that behavior, I spent almost 19 years running from problems (Hakuna Matata!). But then I faced something I couldn’t solve with jokes, and two great rationalist friends—I still don’t quite understand why they valued my presence, talking about my big ear while they discussed society’s problems—encouraged me to start reasoning again and to try to balance seriousness with humor. Thank you so much, Fernando and Zé.

But it’s still really hard for me, which is why these dialogues are the best cost-benefit tool I’ve found for stimulating my probabilistic thinking. Do you know of any better ones?

*Many thanks, Nathaly Quiste, for the artwork you designed and for the care you put into it.

Comments

Notes:

  • There are a lot of awkward (but compelling) phrasings here, which make this exhausting and confusing (though still intriguingly novel) to read through. This post was very obviously written by someone whose first language isn't English, which has both downsides and upsides.

  • Giving new names to S1 and S2 is a good decision. "Yankee" has uncomfortably specific connotations for (some) Americans though: maybe go with "Yolo" instead?

  • X and Y dialogue about how they see each other, how they need to listen to each other, and how much energy they each think they need. They don't dialogue about any kind of external reality, or show off their different approaches to a real problem: the one place they mention the object level is Y 'helping' X avoid "avocado coffee", a problem which neither he nor anyone else has ever had. (Contrast the Appendix, which is more interesting and meaningful because it involves actual things which actually happened.)

“But it’s still really hard for me, which is why these dialogues are the best cost-benefit tool I’ve found for stimulating my probabilistic thinking. Do you know of any better ones?”

Play-money prediction markets (like Metaculus)?

Thanks for the feedback, abstractapplic. You’re right—adding real-world examples could make the dialogue feel more grounded, so I'll focus on that in the revision.

The "Yolo" suggestion makes sense to capture the spirit of System 1 without unintended associations, so I’ll go with that.

Regarding Metaculus: it’s a good platform for practicing probabilistic thinking, but I think there might be value in a more structured self-evaluation to narrow down specific behaviors. Do you know of any frameworks that could help with that—maybe something inspired by Superforecasting?

As a non-native English speaker, I realize the phrasing might come across as a bit unusual. I’ve tried refining it with tools like Claude and GPT many times, but it can get complex and occasionally leads to “hallucinations.” Let me know if you have any tips for keeping it clearer. Which part of the text seems most confusing to you?

interesting! I'm curious how the conversation between "X" and "Y" will continue

Thanks! I'm working on the text!