This is mostly some ramblings and background notes for a fanfiction, and should not be taken seriously as a real-world argument, except insofar as I would hope it could become good enough to be a real-world argument if I were smart enough and worked on it enough and got the right feedback. I would love to hear criticism on any or all of it, and your ideas on where or how else the story of Macross/Robotech has interesting ideas to explore.


Beyond the Machine's Eye: Power, Choice, and the Crisis of Human Agency

Imagine teaching a computer to play chess. You give it clear rules about what makes a "good" move - capturing pieces, controlling the center, protecting the king. The computer gets incredibly good at following these rules.

But here's the thing: it can never ask whether chess is worth playing.

This might seem like a silly example, but it points to something crucial about the challenges we face as machine intelligence becomes increasingly powerful. Systems optimized for specific goals - whether winning chess games or maximizing "engagement" - can't step outside their programming to question whether those goals are worthwhile.
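The gap can be made concrete with a toy sketch (hypothetical move names and point values, nothing resembling a real chess engine): the optimizer's interface only ranks options under a fixed scoring table, so an option the table doesn't cover can never even enter consideration.

```python
# Hand-assigned point values for hypothetical moves (illustrative only).
MOVE_SCORES = {
    "capture_queen": 9,
    "control_center": 3,
    "protect_king": 2,
    "quit_and_go_outside": None,  # not representable: the objective has no score for it
}

def best_move(legal_moves):
    """Pick the highest-scoring move. Options without a score are
    silently filtered out - the system has no way to weigh them."""
    scorable = [m for m in legal_moves if MOVE_SCORES.get(m) is not None]
    return max(scorable, key=lambda m: MOVE_SCORES[m])

print(best_move(["control_center", "capture_queen", "quit_and_go_outside"]))
# -> capture_queen
```

Nothing here is broken as code; the point is that "is chess worth playing?" isn't a bug the optimizer might fix, it's a question its objective gives it no vocabulary to ask.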

To understand these challenges better, let's look at a story about space warriors called the Zentradi from the anime "Macross" (adapted, loosely, as "Robotech"), and how they optimized themselves into extinction.


Part I: How to Optimize Your Civilization Away

Imagine you're part of an advanced spacefaring civilization called the Protoculture. You face genuine existential threats - hostile aliens, cosmic disasters, internal conflicts. You decide you need a military force to survive.

The reasonable decision: Create an elite warrior force, the Zentradi, genetically engineered for combat effectiveness. Give them their own ships and resources so they can operate independently, without endangering civilian lives.

Seems sensible. What could go wrong?

Your warrior force is effective but has problems:

  • Personal relationships affect combat decisions
  • Cultural activities distract from training
  • Individual preferences create coordination issues
  • Emotional bonds make warriors hesitate

The reasonable decision: Start limiting these "inefficiencies." Restrict relationships. Standardize routines. Optimize for pure military effectiveness.

Still seems rational. You're just removing obvious problems.

Your warriors are now more effective, but you notice:

  • Units with fewer cultural ties perform better in combat
  • More standardized groups have better coordination
  • Less emotional attachment means fewer hesitations
  • Stricter hierarchies improve command response

The reasonable decision: Double down on what works. Further reduce cultural activities. Increase standardization. Strengthen hierarchies.

You're just following the data, right? It would be silly to let our messy human biases lead us astray.

Now an interesting pattern emerges:

  • Groups that maintain some culture start losing battles
  • More optimized groups survive and replicate
  • The most "efficient" units get more resources
  • Success reinforces the optimization pattern

The reasonable decision: Let natural selection take its course. The most effective units should be the model for others.
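The feedback loop above can be sketched as a toy simulation (all numbers invented for illustration): each unit carries a "culture" level that slightly reduces short-term measured effectiveness, the more effective half replicates each generation, and no step ever decides to eliminate culture - yet it vanishes anyway.

```python
import random

def run_selection(generations=200, pop_size=100, seed=0):
    """Mean 'culture' level after repeated selection for effectiveness."""
    rng = random.Random(seed)
    # Each unit starts with a random culture level in [0, 1].
    population = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Measured effectiveness: a base score, slightly penalized by
        # culture, plus noise (the "data" the planners are following).
        scored = sorted(
            population,
            key=lambda c: -(1.0 - 0.1 * c + rng.gauss(0, 0.05)),
        )
        survivors = scored[: pop_size // 2]
        # Survivors replicate with small mutation; nobody "bans" culture.
        children = [
            min(1.0, max(0.0, c + rng.gauss(0, 0.02))) for c in survivors
        ]
        population = survivors + children
    return sum(population) / len(population)

print(f"mean culture before selection: {run_selection(generations=0):.2f}")
print(f"mean culture after selection:  {run_selection():.2f}")
```

Each individual step is locally "just following the data"; the direction of the drift is only visible at the level of the whole loop, which is exactly where no one is looking.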

After thousands of years of this process:

  • Warriors can't comprehend music or art
  • Emotional capacity is engineered away
  • Individual thought becomes a liability
  • Culture is seen as system malfunction

After hundreds of thousands of years:

  • The ability to question orders is gone
  • Creativity exists only within tactical bounds
  • Emotional response is purely combat-focused
  • The capacity to choose different goals is lost

No one even remembers that these were choices anymore. The designers and their reasoning are lost to time. The system runs on autopilot, optimizing itself into an ever-narrower space of possibilities.

Part II: The Three Warnings

This story isn't just about losing meaning - it's about three distinct but interconnected dangers we face as we develop increasingly powerful and interconnected machines:

Warning One: The Control Problem

The Zentradi were created as a military force under Protoculture control. But they eventually grew beyond their creators' ability to control them. This mirrors our first and most urgent challenge with machine intelligence: maintaining meaningful human control over increasingly powerful systems.

Consider what happened:

  1. The Protoculture created the Zentradi for a specific purpose
  2. They made them increasingly powerful and autonomous
  3. The systems for controlling them proved inadequate
  4. The creation eventually destroyed its creators

We face similar risks today:

  • Military AI systems becoming autonomous
  • Economic algorithms making uncontrollable decisions
  • Automated systems exceeding human oversight capacity
  • Optimization processes escaping intended bounds

This isn't just about killer robots. Any sufficiently powerful optimization process - whether military, economic, or social - can escape human control with catastrophic consequences.

Warning Two: The Distribution Problem

Even before they destroyed their creators, the Zentradi system created massive inequality of power and resources. Their society split into:

  • Main Fleet with vast resources
  • Smaller "rogue" fleets struggling to survive
  • Those deemed obsolete and eliminated

We face similar challenges:

  • Who controls the AI systems?
  • Who gets the benefits?
  • What happens to those displaced?
  • How do we prevent catastrophic inequality?

Even if we solve the control problem, unequal distribution of machine intelligence and its benefits could still lead to:

  • Mass unemployment
  • Resource deprivation
  • Social collapse
  • Humanitarian catastrophe

Warning Three: The Meaning Crisis

Even if we solve both the control and distribution problems, we are still left with the meaning crisis:

  • What do humans do in a world where machines are more capable?
  • How do we maintain purpose when automation makes most work obsolete?
  • Can we find meaning beyond optimization and efficiency?
  • How do we preserve human agency and choice?

This is the Zentradi's third warning - that even if you "survive" and "have resources", optimizing away human agency creates its own kind of extinction.

Part III: The Real Levers and False Comforts

Consider a crucial detail about the Protoculture's fall: They believed they were in control of their military through formal command structures, military hierarchies, and genetic engineering. They had extensive systems of oversight and control. They had laws, regulations, and safety protocols.

None of it mattered.

The real levers of power had shifted long before the formal structures acknowledged it. Each "reasonable" optimization created gaps between:

  • Where control appeared to be
  • Where control actually resided
  • Who could recognize this difference

This highlights a critical challenge we face today. When people discuss AI safety and control, they often focus on what we might call the kayfabe - the maintained illusions of control:

  • Corporate boards and governance structures
  • Government oversight committees
  • Ethics guidelines and safety protocols
  • Formal evaluation metrics

But just as the Protoculture's control systems proved inadequate against the reality of what they'd created, these structures might have little relationship to where real power actually develops in AI systems.

Consider how this plays out in current AI development:

  • A lab creates "ethics guidelines" (kayfabe)
  • While optimization pressures push toward maximum capability (real lever)
  • Oversight boards hold meetings (kayfabe)
  • While competition drives faster deployment (real lever)
  • Safety evaluations are conducted (kayfabe)
  • While systems evolve beyond meaningful human oversight (real lever)

This isn't to say formal structures are meaningless. But like the Protoculture's genetic controls on the Zentradi, they can provide false comfort while the real dynamics of power shift beneath the surface.

Recognizing Real Pressures

The Zentradi's development shows how optimization itself becomes a real driving force. Once the feedback loops of military effectiveness were established, they drove development regardless of formal control structures.

We see similar patterns emerging in AI development:

  • Market forces driving capability advances
  • Military applications creating pressure for deployment
  • Competition between nations forcing faster timelines
  • Optimization processes exceeding human understanding

These are the real levers moving development, often despite or around formal control structures.

Part IV: Protected Spaces and Human Agency

In our story, there's a Chinese restaurant called the Nyan-Nyan. What makes it special isn't that it's less efficient than automated food production. What makes it special is that it's a place where humans can:

  • Question what makes food "good"
  • Experiment with new recipes
  • Change their goals and values
  • Create new traditions
  • Discover new possibilities

These spaces matter precisely because they operate outside the dominant optimization pressures that drive development of powerful systems. One can safely try "wrong" things and learn about reality from them, including learning about how the optimization pressures themselves are working (or not). They're not just about preserving culture - they're about maintaining environments where humans can:

  • See through institutional kayfabe
  • Recognize real levers of power
  • Maintain genuine agency
  • Choose different directions

The Essential Task

Our task isn't just to:

  • Survive (though we must)
  • Share resources (though we should)
  • Find meaning (though we need to)

It's to do all three in ways that preserve our ability to choose different paths as we discover what survival, distribution, and meaning really require.

The Zentradi's ultimate warning is that a civilization can solve its immediate problems while losing its ability to recognize what it's losing in the process. Their fate teaches us that the most dangerous trap isn't choosing wrong goals - it's losing the ability to choose goals at all.
