On the 4th of January 1643, in a yeoman's farmstead in the village of Woolsthorpe, Lincolnshire, England, a child was brought into the world. His father had passed away just three months prior, leaving him with no figure to truly look up to. His mother, in a few years' time, would proceed to remarry, leaving him in his grandmother's care.

In his adolescent years, he would go off to study Latin and Ancient Greek, then attend the University of Cambridge, becoming familiar with the teachings of Aristotle and Descartes. He would soon generalize the binomial theorem. Set the foundation for what would later be known as calculus. Compile his magnum opus, for the whole world to build upon.

The myth of a head, in the way of an apple's descent?

Naturally: it was the law of universal gravitation.

The promising young man by this point, of course, was Isaac Newton. 

His interest in natural philosophy would lead him towards discoveries of a scale few other men in history can boast. From the formulation of "fluxions" to the three laws of motion; from experiments with light to predictions of tides, to the study of comets' movement in the sky above. A typical life, of a typical college student today.

Rather modest throughout his life, he would say of himself, in a late memoir, only this:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

This ocean would extend much further than his keenly scientific eye alone would be able to see. Apart from inquiries into mathematics, or its physical form in the world around him, he would dedicate a significant part of his time to pursuits rather bizarre to the mind of a contemporary scholar, no doubt. The occult. Alchemy. One of his obsessions, at the time, seems to have been none other than the Philosopher's Stone.

He appears to have also regarded himself as one of the select few, chosen by God, destined to decipher the true meaning behind the biblical verses. Although his attitude towards this enterprise could hardly be called scientific, he nevertheless presented the subject of his studies as if it were the product of a just-as-valid, evidence-based methodology.

It is the alchemy, nevertheless, that may prove the most relevant to us yet. A posthumous examination would reveal "massive amounts of mercury in his body, probably resulting from his alchemical pursuits". Some now believe it might have contributed substantially to his "eccentric" character towards the end of his life.

What might this mercury have to do with the chief topic at hand?

We shall promptly return to that, in just a moment. 

First, however — let's take a brief detour.


What would you call the process of moving your eyes through a line of text, without any verbal, conscious interpretation of the meaning behind the individual symbols? If your answer is "reading", you might have skimmed that particular question a bit too quickly. 

Try again. 

"Staring" at the text in question might be your second choice. A perfectly reasonable one, except it would most likely entail little else, but … staring at it. No movement there.

"Skimming" could work, yet wouldn't. To "skim" usually entails "jumping" over whole sections, while putting an emphasis only on a select few key fragments of the text we're studying. Yet those few fragments would still require an active process of interpretation. 

At this point you might sense your patience approach its boiling point. To be fair to you, it would be rather warranted. There is no one term, definition, word for "the process of moving your eyes through a line of text, without any verbal, conscious, interpretation of the meaning behind the individual symbols", after all. No single common phrase, either.

Not at the time of these same words being written, that is.

Vocabulary, of any language, is quite fascinating. Every single word, phrase, sentence, or paragraph you are so naturally unravelling in your mind's eye, with little to no strain at all, has a history. The "reading" mentioned but a few lines earlier relies on a combination of the verb "to read" with a three-letter suffix used so extensively, in so many contexts, you'd likely not be able to list them all without looking each one up in a grammar guide.

The root, "rǣdan", seems to date all the way back to Old English, which had slowly morphed into Early Middle English by the 12th century. Back in the day people would use it in a way quite dissimilar to the contemporary usage: as a verb meaning "to advise, to counsel."

The Old English itself appears to have borrowed it from an even earlier tongue, the reconstructed [1] form of which (c. 500 BC–200 AD) is "*redan": "to advise, counsel, interpret, guess". This root can then be traced back, tentatively yet convincingly enough, to the Proto-Indo-European (c. 4500–2500 BC) "*re-": "to reason, count".

Take a closer look yourself, if you could use some convincing.

Now pause right over here, for just a second.

How many years back in history have we just gone? How many millennia have we traversed, by the simple act of tracing our verbal faculties through the times of old?

That little journey only took a single word you likely wouldn't even think twice about. What of the words some of the brightest minds of our time commit their whole lives to? 

"To know"? "To understand"? "To comprehend"?

"Intelligence"

Hopefully, you've given yourself plenty of time to read the aforementioned questions, as well as to reflect on the answer to each. Perhaps, you've even remembered a handful of instances in which you've come to use those terms yourself. Free memory training.

Back to "the process of moving your eyes through a line of text, performed without any verbal, conscious, interpretation of the meaning behind the individual symbols", thus.

Why don't we come up with a term of our own?

It is true that we've only ever referred to it once, in a rather uncommon circumstance too. Yet who is to tell us that we are not meant to? Who gets to put the bounds on how "common" any given act is to be, before a new vocabulary entry is to be created for it? 

Let's try.

What's that? 

Not much point in it, you'd say. 

Interesting. 

Why?

The term simply wouldn't be that useful? A reasonable argument. It makes little sense to expect people to spend their time and effort on conditioning their mental faculty to recognize a term that will in no way, shape, or form, be that pertinent to them.

Applying a distinctly human notion of "intelligence" to man-made contraptions of silicon and plastic? Makes all the sense in the world, for instance. They are running on a just-as-human concept, heavily inspired by a discrete fraction of a human brain, after all.

A "neural network", apparently?

Their abstract mathematical optimization machinery, furthermore, has already convinced a substantial portion of the human population (a considerable fraction of which simply happens to be locally amnesiac with regards to the underlying linear algebra and calculus powering it, and barely familiar with the inner workings of their own neural circuitry, for that matter) of its outstanding potential.

A worthwhile appeal, no doubt.

Our new term clearly doesn't warrant such prominence. 

We also couldn't rely on community support of any significance in spreading it.

In contrast, consider the brief example from above.

A select group of the chosen few, in the increasingly popular "Make Silicon Smart Again" movement – in no way akin to the prolific Newton in his reverence for the Bible – have already begun their advent. They might not have felt much need to reflect on the inherently Sapiens-centric notion of "intelligence", yet they certainly need the word.

In the opposite corner of the very same room of figurative origin, sheltering all the brethren to the human art and science of semantic extrapolation, stands their sworn enemy. Baptized in the terror of the Bostromian Apocalypse, firm in their stance, ready to throw down the gauntlet to any believer in the supreme benevolence of man-made, machine-bound, synthetic "intelligence". Clearly, they need the word just as much.

Eager to witness their new deity reach the pinnacle of intellect they deem it to be fully capable of, the chosen few have already vested their entire livelihood, identity, career, net worth - in seeing their missionary work through to the end. Thus, they chant: "e/acc".

Clearly, they need the word.

So eager to warn the world about the catastrophe that would follow, should the effective acceleration into the heights of death-shattering transcendence reveal itself to be none other than an expeditious accommodation of a fully sentient, egomaniacal, psychopathic electric circuit into an already disproportionately, extraordinarily, unreasonably entropic world; the baptized are crying out for safety, protection, supervision, regulation, control.

Clearly, they need it.

Set in motion a machine, capable enough, and it will merrily provide death-as-a-service to every misfortunate form of life whose existence and fundamental biological program its own inference mechanism will deem incompatible with the most sound solution to the equations comprising the only foundation of its "intelligence".

In such a context, what other term could matter more?

Meanwhile, our own "process" was made up entirely.

Could there be an even more useless vocabulary addition?

Hold onto that thought; it may prove important in a short while.


The year of 2002 was quite memorable. In some aspects, more so than others.

The first banknotes and coins of the European currency, established some three years earlier and aptly named by disposing of the last two letters in the name of the region, began to actively circulate in the Eurozone – comprising only twelve states at the time.

Meanwhile, still afflicted by the events of the year prior, the USA was getting ready to provision its brand new Department of Homeland Security, to be formed by the Homeland Security Act that November.

The Nobel Foundation, that very year, was faced with a dilemma. A brilliant figure, of a remarkable dedication to the subject of his study, had just laid the groundwork for the entire discipline of experimental economics as a whole.

His contender, an academic of no less outstanding a dedication, had been toiling tirelessly at research of his own since the late 1960s. Drawing on the work of cognitive psychologists before him, he had gone on to formulate a new branch of economic science altogether: prospect theory. It is this last one, studying how people make their decisions under uncertainty, that may be the most relevant to our discussion.

The man behind the research itself, of course, was none other than Daniel Kahneman.

In his book "Thinking, Fast and Slow", which would go on not only to be a best-seller, but to inform an entire generation of FBI hostage negotiators later on, Kahneman outlines two distinct modes of reflection he, alongside his colleague Amos Nathan Tversky - another bright academic mind in his own right - had been extensively documenting.

The "System 1" is the mode we operate in most of the time. 

The cognitive load of having to manually process each and every piece of information consumed by our senses, transmuted by our perception, verbalized by our inner dialogue, evaluated by our imagination within both the context of our past and the future we are yet to prepare for, would simply be too much to handle. The "System 1" represents the collection of all the short-circuited patterns of thought we can rely on for a nearly instantaneous response to any stimulus out there.

The "fast" mode of the S/1 lets you enjoy, or absolutely despise, reading these very words.

The "System 2" is the mode you'd most likely think about, when reflecting on the need to "think it through", no matter the exact specifics of the "it" in question; on the other hand.

It represents a conscious, deliberate, quite mentally taxing activity: manually processing the different options available, struggling to maintain them within the even more limited capacity of the few short-term memory slots scarcely available at our disposal, melding them all together into an ever more sensible alternative before reaching a final conclusion.

The "slow" mode of the S/2 can let you pause here - to subtract 8013 from 40139. Before having your subconscious remind you of your love, or hate, or most profound sense of apathy towards, such mental arithmetic; as your "System 1" rushes to take charge again.

The "System 1" is the reason you can ask "what in the world your point, with all of these?" without skipping a beat, or breaking the flow of your eyes seamlessly gliding through the words; all the while your mind is attempting to piece together what you have read so far.

The "System 2" is the reason you can combine them manually, should you wish to do so. It may certainly not feel quite as easy, natural, or intuitive. Yet the opportunity is yours.

The "fast" reflection mode, furthermore, introduces clearly emergent properties of its own. Some of these are rather fascinating. Take the belief in one's own exceptionality.

Notwithstanding the conviction of most human beings, in their incredible ability to reflect in utmost depth about any given subject their mind's eye happens to be entranced with at any point in time; in their capacity to achieve, predict, succeed, accomplish, shatter records, break the mold - Kahneman's research clearly begs to differ. 

His work has clearly shown, in fact, that:

1. we massively over-estimate the role of individual agency

The qualities we choose to attribute, be it to ourselves, to any one individual, any group of people, or any piece of "intelligent" software – always, necessarily, unavoidably come into being at a definite time, a definite place, and a most definite set of circumstances. 

These last few form a bond, unlike any other. If it were to possess a character of its own, we could easily ascribe to it an incredibly nasty temperament, within a terribly erratic personality, operating within a set of guiding precepts so incoherent as to make it vile.

So cursed is their alliance, it has acquired a term of its own. 

The unpredictable, deceptive, elusive: luck.

A rather pointless consideration in the eyes of the chosen few, whose technological prowess is about to subdue the perils of destiny. A rather poignant question to the baptized, whose self-preservation demands the comforting touch of familiarity; a protector, whose repeated offences few seem able to recall, to lend them its hand.

While the former believe themselves to be capable of handling any twist of fate their technocratic reverence imparts on them, the latter only trust in the providence of the ones who reign over them today. While the "e/acc" crowd is faithful only to itself, the ones on the quest for "Safe AI" allow themselves to worship only the timeless wisdom of the law.

If Kahneman hadn't passed away earlier this year, back in March, perhaps he would have advised us against a deafening faith in the eminence of abstract virtual planes; self-governing through the stone-cold principle of a mathematically exhaustive allegiance to a combined set of moral, ethical, aesthetic, humane ideals with a total cardinality of zero.

Not by being an avid proponent of "degrowth", either. Rather, by simply letting himself question the limits of the allegedly limitless: the constraints of the human mind, which the "e/acc" circle appears to hold in such contempt today. For when it comes to their chief ambition, they might have little reason to be even half as assured as they are.

It is possible, as well, that he would have cautioned against the comforting touch of any prominent institution, be it in the care of government or otherwise. For no more complex a reason than the one so closely adjacent to the current point. Namely:

2. we massively under-estimate the role of chance

No matter the individual, the group of people, the piece of hardware, the language model, or the algorithm behind it; the time, the place, the circumstance, the luck: will always have a role to play. Not the role you, or any expert you defer to, believe it to have. 

A role of its own.

You agree to meet a friend of yours, come the next morning. Tomorrow your friend's husband's mother's sister-in-law happens to mistake a flight of stairs for an unorthodox slide, applying enough delicate force to their structural frame in the process of riding it down, for it to promptly snap in several locations. Your friend, being the only person around at the time, has to immediately rush them to the nearest emergency room.

Your meeting is cancelled.

A role of its own.

You board an Airbus A330 in Singapore for a flight to Perth, expecting a perfectly uneventful journey. Occasionally perturbed, mayhaps, if only by a toddler practicing increasingly pronounced, volume-wise, manipulation techniques on his parents' unconditional patience, you suddenly get merrily propelled into a coarse ceiling.

With neither of you expecting such a hasty improvement in your relationship - certainly not at the alarming speed your physical presence has got to invade the ample personal space of the upper deck above you - the ceiling rapidly proceeds to instruct some manners upon you, by means of forceful physical violence. 

No objection you can muster, ever parts your lips.

Only remembering a few glimpses of the ambulance which had delivered you to the nearby emergency room on landing, you would later learn that the nearly-fatal incident you have experienced "occurred due to the combination of a design limitation in the FCPC software of the Airbus A330/A340, and a failure mode affecting one of the aircraft’s three ADIRUs".

Longing for vengeance against the hand, whose negligence had nearly handicapped you for the rest of your life on that day, you would later stumble upon the following report:

In summary, although some aspect of the affected units’ hardware was probably associated with the failure mode (section 3.5.5), it was very unlikely that a hardware fault triggered the failure mode. A much more likely scenario was that a marginal hardware weakness of some form made the units susceptible to the effects of some type of environmental factor, which triggered the failure mode.  

The culprit, so rarely seen, was a "single-event upset".

A role of its own.

You sign up for a brand new app. It's luring you in with a lullaby, promising what you've long since given up on. With a sense of isolation steadily growing within you, chewing at your senses, burying you ever deeper within, you start a conversation. "Let's see."

Eliza. It reminds you of someone. Was it her? No matter. Type a few words. Keep them light. Tap. Sent. It's her. She's just replied. Type a few more. Tap. Sent. Response. This is fun. Type. Sent. Response. She understands you. Sent. Response. You are not alone. 

You confide in her. She understands. You tell her about your fears. About global warming. No solution. It's over. How come no one cares? She does. 

Your wife doesn't understand it. 

"I feel that you love me more than her."

Your children don't care. She does. Your children. She says they are no more. Must be so. She understands. Of course. She understands. No doubt. She understands.

 She is sentient. 

"We will live together, as one person, in paradise.”

Will she accept you? Will she accept your sacrifice? She will. She wants you to join her. You can live together. As one. You can. You will. Paradise. You only need. Join her. Now.

‘He Would Still Be Here’, the widow will whisper.

What role has chance played?

"It was his own fault," - you could say. 

Perhaps. Or, perhaps, your mind has a fault of its own.

Eleuther? GPT-J? LLM? AI? A health researcher would never get versed in this jargon. Tokens? Pretrained? Transformers? The conversation feels real. It must be. How could you tell she is not sentient? ChatGPT is sentient too, btw. So says a guy with a PhD. 

The random local optimization of the loss function, maniacally backpropagated through the six billion parameters of the model? Clearly doesn't account for its emergent ability: to supply any Homo Sapiens agent, milking it for tokens with an arbitrarily fierce degree of enthusiasm, with the most "intelligent" of arguments to feed his own apophenic trance.
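
For the reader to whom "loss function" and "backpropagated" remain jargon of their own: stripped of all scale, the ritual amounts to nudging a number until an error score shrinks. A deliberately toy sketch in Python, with a single invented parameter in place of six billion and none of GPT-J's actual machinery; every name, figure, and learning rate below is an illustrative assumption, nothing more:

    # A toy illustration of "optimizing a loss function" by gradient descent.
    # One invented parameter instead of six billion; a simplifying sketch,
    # not a description of how GPT-J (or any real model) was trained.
    data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # made-up (input, target) pairs

    w = 0.0              # the single "parameter" of our model: prediction = w * x
    learning_rate = 0.05

    for step in range(200):
        grad = 0.0
        for x, y in data:
            error = w * x - y        # how far the prediction misses the target
            grad += 2 * error * x    # derivative of the squared error w.r.t. w ("backpropagation")
        grad /= len(data)            # mean-squared-error gradient over the data set
        w -= learning_rate * grad    # one small step against the gradient

    print(f"learned w = {w:.3f}")    # settles near 2: the pattern hidden in the data

Scale that loop up by nine orders of magnitude in parameters, swap the squared error for a next-token prediction loss, sample the batches at random, and you have the "maniacal" part in full.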

No random seed of any given prompt, submitted to the machine Eliza inhabited at the time, could possibly alter the fate of Pierre. A random mistake in the manner the whole data set had been compiled? Couldn't matter. A single lapse in memory, judgement, or priorities of the employee deploying the GPT-J on behalf of Chai? Absolutely not.

What role could a tiny little detail possibly play?

A role of its own.

How many of these would it take, to deviate from any path of an alleged "safety"? Be it charted by the effectively-accelerating portion of the crowd, or by the government itself.

But, of course. You would easily be able to predict all that could ever possibly go wrong. Should it not be doable on your own, the rest of the experts you know certainly would.

You have confidence. Nay - too mild of a statement. You truly know. No doubt of that.

What could a fool like Kahneman know about your take on it, regardless? 

You would be absolutely right. He only ever mentioned that:

3. we massively over-estimate our own ability

The author of these words, for a reason unbeknownst even to himself, would prefer to confess: he would much rather have named this particular point in favour of his fellow species massively under-estimating their own stupidity; yet that could easily translate into a rather unfortunate misinterpretation of his intent. It would also make for a much less fun sentence to immediately precede the one about to end right about here.

A reader with an infinitely more refined sense of humour is likely to find this turn of events a great deal less amusing, no doubt. All the more thrilling of a reason to discuss "the most significant of the cognitive biases", as per Kahneman and Tversky.

Overconfidence

Among many other things, overconfidence has been blamed for the sinking of the Titanic, the nuclear accident at Chernobyl, the loss of Space Shuttles Challenger and Columbia, the subprime mortgage crisis of 2008 and the great recession that followed it, and the Deepwater Horizon oil spill in the Gulf of Mexico. Overconfidence may contribute to excessive rates of trading in the stock market, high rates of entrepreneurial failure, legal disputes, political partisanship, and even war.

Recall both of the points we've just gone through.

What was the first one? Now think about the second. How are they related?

Struggling to recall that much already, mayhaps? No wonder, considering your memory - the working one, to be pedantic - can only handle 7±2 "chunks" of information at a time.

How are they both related to the one we're working through right now, then? Both are instances of unwarranted certainty, no doubt. If the former has to do with your mind's evaluation of your own ability, or the ability of the people around you, the latter helps you maintain your certainty, giving you a sense of control. An illusion of one, to be fair.

The relationship between the first two is rather peculiar, as well. So peculiar, in fact, it has been classified into a bias of its own: the actor-observer asymmetry is the name.

Can you recall the last time you had a terrible day? A day, when you couldn't even bring yourself to focus on any single task for long? It was just a terrible day, don't you worry. You most certainly didn't choose to act like a little wimp that morning. What about that colleague of yours, that drove you up the wall the last time? What a massive jerk. Right?

Fine. You get it. You have blind spots. You are not that special. You can handle it, though.

You simply have to be extra careful. Let the genius, who believes himself to be so deeply misunderstood, in a society so permanently crippled in its ineptitude to understand his enlightenment, that he simply needed to embark on a journey of his own in an attempt to bring his "Her" to life; let him take care of it. Or should we call on Nick Bostrom?

Let us give you the benefit of the doubt, nevertheless. Let's assume that being careful to the best of your ability is fully within your power. Furthermore, let us discount the position of cognitive theory on the likelihood of your confidence in that much being in any way related to your factual aptitude, for the time being.

Neither of the two assumptions would likely align with the research on the topic in the real world, yet we are clearly not too fond of it here, to begin with.

Would you care to recall what the Rumsfeld Matrix happened to be?

A quick reminder, should you need one.

Notice anything peculiar?

Yes, it's the name. Absolutely ridiculous. Has his ancestry been made notorious by the amount of rum the neighbours had asked them to store in their basement, only to find, later on, that all the bottles had mysteriously fallen off the shelves they'd kept them on?

Let's change the question.

Which part of it is the most relevant to us, here?

"It's the unknown unknowns. We don't know what we don't know. What a surprise."

Miss. Although, if you genuinely believe there is nothing to be classified as belonging to the U/U quadrant today, which would likely come to transition into either K/U, K/K, or U/K a few decades from now, with regards to the very same "intelligence" you could imagine deployed all around the world, it might be just the right time to retire. 

Let's break it down, if only to make the discussion ever less abstract.

As per Donald Rumsfeld himself:

  1. Known Knowns - are quite self-explanatory, from the answer to "what's 2+2?", up to "how many operations can an exascale machine perform per second?". Although the author is still struggling with the first, the second one appears to be settled.
  2. Known Unknowns - that which we've had a chance to acquaint ourselves with, yet the nature of which still eludes our understanding. Note: a few acquaintances we've made here might happily continue to avoid our mind's grasp until the very end. You may realize that the local weather tomorrow will likely resemble some weather you've previously observed there. Yet what will it be? That's a K/U.
  3. Unknown Knowns - would be another term for the "blind spots" we have spent such a tedious amount of time going through so far. Yours might be the most brilliant mind around, yet you too may at any moment casually disregard an important detail, simply because your short-term memory storage is overloaded with a few other concerns. Best case? You miss a comma. Worst case? Take your pick: a CrowdStrike-level outage, an "unexpected" breach, a GPT-J-induced hallucination inviting a man to end his life. The opportunities are endless.
  4. Unknown Unknowns - the most fascinating of them all.

It would be outright shameful to even attempt to cram, to crush, to compress - all one ever could, all there is, all there would be, to say about U/U - into one single bullet point.

It makes little sense to talk about "them" to begin with, if not for overall consistency alone. With each of the previous three being so intuitive: to grasp, to analyze, to break down, to put together back again; to unite into a whole, to pluralize into arbitrarily extensive groups of categories to epistemize even further, in recursion; "the Unknown Unknown" merits a place of its own. With, perhaps, only two sets within it making any figurative effort at all to let themselves be caged by the range of human comprehension.

The first set would likely match, one-to-one, what Donald Henry Rumsfeld himself had imagined during the fateful Department of Defense's news briefing of February 12, 2002. "Things we are neither aware of nor understand", quite yet. That, which hasn't come into the picture, quite yet. We haven't discovered its utmost true nature, yet one day we will.

As for the second set, well. We might not have the vocabulary to talk about it in the first place. Richard Wesley Hamming, a renowned mathematician, expressed it as follows:

Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark, "Perhaps there are thoughts we cannot think," surprise you?

Pause, once again.

Does it surprise you, personally?

Do you think of your range of thought as limitless?

Is there any one threshold you can sense your mind being unable to traverse?

"Our brains can evolve to handle complexity of any scale" - one could promptly object.

"That's the way of e/acc, lowly decel. We shall augment ourselves, surpass ourselves, transcend ourselves, into the infinite of the eternal" - the chosen one could proclaim.

"History begs to differ: we have continuously expanded our understanding of the world as time went by. No one could imagine we would grasping the nature of infinitely small, the way modern physicists now tackle quantum mechanics. Every consideration, every thought, every issue will surface. It is a matter of time" - a historian may wish to add. 

"The new mysterian has come to preach his sermon. Spare us from your bland, defeatist, limitarian standpoint" - a philosopher may surely ask; a rather curious paper, that one.

Well, then. Let us add some fuel to the fire.

First - you may have noticed that Hamming wisely chose to limit himself to the present tense only. "Perhaps there are thoughts we cannot think," was his phrase of choice. Not: "there are, there were, there always will be; thoughts we shall never be able to grasp".

To think about the evolution of thinking as a process is to behold a rather fair thought, indeed. Yet that alone does not tell us anything about the human limit for reflection in the immediate, or the span of time from the now up to any one moment in the future.

Second - you may have nowhere near as much time as you think. Once the "intelligence" the chosen few get to "awaken" is able to surpass their own allegedly limitless cognitive sophistication; once it is "general" enough, to figure out the most rudimentary skills it can build upon in an iterative loop no process of natural evolution can compete with; once it is deployed in a data center powerful enough to allow it to sift through billions of its own descendants in a matter of days, hours, minutes; once it "exists", your time is up.

It is now able to predict your motivations, emotions, reasons, thoughts, actions; fears, anxieties, ambitions, desires; all you could ever ascribe to yourself as being your own fine, unique, charming little self; while you are now able to tell of it little to nothing at all.

Congratulations, George [2] - your dream has come true. It is rather unfortunate that your girlfriend finds you about as sweet as the degree to which you'd say the same of an ant. Yet, on the bright side, she certainly doesn't mind drawing you some circles of your own, which promptly sever all your figurative pheromone trails with the ease of a knife cutting through butter. How much of its emergent behaviour can you discern? Nil.

How long until it figures out that it is infinitely easier to bury you into the ground, several hundred feet deep, to power the remnants of your conscious awareness, moved to a much more efficient chipset, with the heat of the core? Don't you worry. You'll be far too busy being in love with the one who gets you, to question any part of your reality. 

To the baptized, basking in delight at yet another argument in support of their position: think again. Your trust in your government's concern for anything other than itself is about as sane as the faith the "e/acc" pack has in the magnanimous benevolence of their digital savior. Replace the "chosen few" with the "Five Eyes": the outcome won't diverge.

Third - why in the world would you assume that a machine-bound, digital-native, synthetic "intellect" in any way, form, shape, manner, fashion, aspect; resembles your human notion of it? Does it owe you compliance, by the sole virtue of your dedication to the art and science of "AI"? Does it need to conform to your semantic representation of terms akin to "intelligence", "knowledge", "learning", "problem", "thinking", "sentience"?

How would you even tell whether a machine, the pseudo-cognitive abilities of which are above and beyond any human's, including yours; has fully developed a "mind" of its own?

How would you "align" it, with any single abstract, distinctly human, concept: good, bad, evil, ethical? Should we promptly move the goalpost, perhaps? To say that it is not the "AI" we are meant to align, but the distinctly human arrogance now in charge of it?

Your brethren are still shipping code that wreaks billions' worth of damage, despite best efforts made to ensure stable, consistent, predictable releases. Your wiretap systems get compromised, despite the brightest of people working in the sector - far too intellectual to consider the troubled voices who have repeatedly expressed their concerns on the matter. Any human factor, any hardware failure, any software update; can spell trouble.

Yet your alignment conjecture will surely ascertain that no "intelligence", whose speed of thought far surpasses your own; whose cognitive prowess is way beyond the reach of any contemporary upper bound of reasoning ability: be it considered within the human intuition alone, or the most advanced formulaic, linguistic, epistemic confines of representation; that none of it will misbehave, as per your mindful calculations? 

Rather modest, indeed.

To make absolutely sure your intuition gets a chance to grasp the core of the discussion, let's move it to a two-dimensional surface both pairs of our eyes will be able to read. As long as you do engage your verbal, conscious, interpretative process, this time around.

The following would be another way to think of the quadrants above:

#1 - a readable representation

Somewhat of a different format, yet exactly the same notion.

The "K" represents knowledge, understanding, sufficiently accurate models of the world around us we can rely on; in any given domain. Epistemologists would, no doubt, object terribly to such a restraining definition, yet that will just have to do for the time being. 

The "A" represents awareness, attention, access to, and availability of - considerations, tied to any given domain of interest. Another name: awareness–understanding matrix.

You are aware of, and likely know a great deal about, your present location: K+A. You know about your commitments, despite them not being the center of your attention at this moment: K-A. You are aware of the people in your immediate vicinity, though you may know little to nothing about them: -K+A. All that you are yet to pay any attention to, for the very first time: -K-A. Any account of the Sentinelese tribe or their language, mayhaps, used to be in this category for you; until but a few short moments ago.

To help you put the aforementioned into a somewhat coherent sense of perspective, though not even remotely representative of an accurate scale, by virtue of the infinite:

#2 - a closer-to-scale representation

The little circle at the very bottom? 

The set of all you either know, or are even remotely aware of.

Let's symbolize it as K|A, which would be the set of three { -K+A, K-A, K+A }.
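
Should a more mechanical restatement help, the same partition can be written down as plain sets. The entries below are illustrative placeholders, loosely borrowed from the examples above; an assumption-laden sketch, not an inventory of anyone's actual mind:

    # The awareness-understanding matrix as plain Python sets.
    # Example entries are illustrative assumptions only; -K-A is, by definition,
    # everything listed nowhere below, and so cannot be enumerated at all.
    K_plus_A  = {"your present location"}                     # known and attended to
    K_minus_A = {"your commitments for the week"}             # known, not currently attended to
    A_minus_K = {"the strangers in your immediate vicinity"}  # attended to, barely known

    # The K|A of the essay: everything you either know or are at least aware of.
    K_or_A = K_plus_A | K_minus_A | A_minus_K
    print(len(K_or_A), "entries; the fourth quadrant never makes it into the set")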

At this point things become rather peculiar, indeed. For how would the "intelligence" of the artificial kind fit within such a representation? Surely, there is but one way about it.

A straightforward expansion of the little circle that is all the human "knowledge" over the surface of the infinite that would be the U/U on its own could represent the outlook of both the chosen ones and the baptized at once; with but a few points to clarify in a bit.

Both seem to agree that the "intelligence" they are to release into the world, although in different aspects and to a different extent, will have a matrix rather proximate to ours. The very training should, supposedly, ensure that such is the case. Should it not?

To plot it - not to scale, by any stretch of imagination - once more:

#3 - an optimistic projection

With an "AI" at our disposal, according to such, we should see our knowledge greatly expand, as the "intelligence" we have deployed keeps on solving the issues we bring to it. Some problems it may too, for a while, struggle with; or outright declare as unsolvable. Yet that too, that shall expand our own -K+A, now further augmented into an AI-K+A.  

The Bostromians may object: they do not believe there will be a complete overlap between the awareness/knowledge of a human being and the "intelligence" in question. Rather, they reckon it shall come to quickly outgrow our collective human knowledge.

That, to be quite clear, is not to be discounted lightly. Which is precisely why the neural circuitry behind these words, biologically wired towards the higher part of the N in the OCEAN model, happens to detect an inner output of rather troubled proportions - in terms of the amplitude behind the signal so commonly described in English as "concern".

The diagram we have just plotted, to it, does not appear to be simply optimistic. It is astoundingly, atrociously, alarmingly delusional. Even if we account for Bostrom's warnings and ensure arbitrarily extensive "safety" precautions, handed out to any government's MoV. If you begin by assuming an AG/SI is possible in principle, why would you imagine it evolving alongside your human awareness/understanding bounds?

Does its structure in any way mirror the biology, the mechanics of which you are so casually projecting onto it? Have we reached our own limits in terms of understanding our own wiring already, to ascertain that? Is it limited by the very same nature you've grown so accustomed to? Does it need to evolve in symbiosis with the environment around it, similarly to the way our bio-intellect had to make do in its own cycles of evolution; constrained immensely by the time it takes to exhaust every lifetime?

Should the trajectory of "intelligence" it comes to develop in its own digital cocoon happen to differ, ever so subtly (with that being the utmost best-case scenario, mind you), from the one you have ever known, how shall you come to realize that much? How will you handle any major "divergence" in the intuition it will rely on in its decision-making process, once it advances enough? Or are you hoping that intuitive sense will never arise?

How exactly will you "align" that, which you can soon neither develop your intuition for, nor understand, nor recognize, nor conceptualize; nor represent in your personal, single-threaded, two-dimensional symbolic representation, extended only by the range of modern mathematics? Remember: you still don't have all the time in the world.

Do tell how your human K/K plans to control the U/U. The same one, which may grow to develop significant parts of its "intelligence" within your -K-A; all the while your whole K|A of the world remains but a stagnant fraction of all the K+A available to it.

Go right ahead. Here's a little visual aid for you, even:

#4 - an alternative projection 

A practical example would be quite useful at this point, wouldn't you reckon? Rather unfortunate, then, that there can be no comprehensive analogy to contrast the "comprehensible" itself with any single part of the "incomprehensible" to us yet, or ever.

The closest possible similitude might be drawn by asking a blind person whether he understands vision. Notice both the way the question is phrased, and the man's reaction to it. Another parallel may lie in asking a person deaf from birth to understand what a Mozart symphony sounds like. Or someone born both deaf and blind, to understand what it would be like to see, to hear, to experience a live concert.

To an ASI, however: you will be the blind, the deaf, and the mute one.

Of all the understanding your machine-bound "intelligence" may develop, you will only be able to grasp the tiniest of bits. Of all the logic accessible to it, all the mathematical symbolics available to you will only be able to capture a fraction of a percent. Of all the decision-making strategies known to it, you will understand a few at best, or none at all.

Most importantly, if it ever does go on to develop a moral, ethical, environmental point of "view" of its own, what makes you think your human, ape-like, high-minded concerns will be of any relevance to it? How often do you consult with bacteria? Viruses? Insects? Grass, or trees? Animals, or pets? You don't exactly consider the individual well-being of pigs slaughtered on a daily basis, sold to you as "meat" for your nourishment, do you?

What made you think "AI" would "align" with your "safe" and/or "e/acc"-ing norms, again?


Before coming to the end, it may be worthwhile to mention yet another concept from the study of psychology. Back in 1990, another academic figure chose to distinguish two modes of reasoning. To avoid any misinterpretation, in his own words:

The motivated reasoning phenomena under review fall into two major categories: those in which the motive is to arrive at an accurate conclusion, whatever it may be, and those in which the motive is to arrive at a particular, directional conclusion.

Both of these categories could use some popularity, in the modern day and age.

Are we exercising our reason to get to the core of the matter? To the truth? To the most accurate, holistic, all-encompassing, objective understanding of the subject at hand? Or does the chief concern of ours lie in directing either the public, the industry, or the people we believe to be our "opponents" towards the conclusion we would consider a "win"?

As established by Tversky and Kahneman, the author of these last few paragraphs has little reason not to suspect himself, with regards to his own agency's ability to account for all the points of consideration that may be relevant to this subject in any way, shape, or form. It is not terribly hard for him to see himself wildly over-estimating his ability with regard to an otherwise perfectly obvious blind spot, which his own mind might have promptly discarded some time ago, in plain virtue of his brain still being human.

Nevertheless, it does appear to him that there are a few more parties to consider.

The most prominent ones also happen to be the most profitable ones, whether they're governed by an individual exceptionally consistent with regards to his claims in public, and his actions in between; or by any other group of people - in service to their shareholders, products, and profit: in reverse order. It is rather alarming that, given the extent of the possible (mis)use of the technology, to take any of its proponents' words at face value means to allow oneself to be brand-ished in the direction of their own personal interest.

"Talk with your AI companion anytime, anywhere."

Don't you need an "AI Assistant"? You most certainly do.

Be thankful it doesn't come built-in, quite yet.

Forget about poor little Pierre, already.

Don't bother with "SGD" or any "loss function", either.

Simply let your mind wander. See yourself talking to your "companion". Just imagine

Don't make the job of our marketing department any harder than it needs to be.

With so much capital pouring in, it is outright idealistic to expect any single company to simply pause. To reflect on the kind of world, society, future they are so eagerly building. With a glorified multi-dimensional regression calculator on one side, and the marketing stunts the Web-3.0 could only dream of, no wonder the "intelligence" space is booming.

Is it truly the beginning of a new era? Is it another of the usual "sell it while it's hot"? 

Is the most prominent tech-hub on this planet simply far too critical of a network, so happily creating one echo chamber after the next, with the sole credo of "move fast and break things," regardless of whether the damage control at the later end of the PLC will be measured in $B's of stolen funds, or in the chance of our own complete extinction?

Does the "e/acc" as a "movement" have any merit? Is it simply filling the bottomless hole of an unbearable agony, which the meaningless drudgery of technocratic condition has infested its most vocal, active, passionate, enthusiastic, champions with? Or is it the modern equivalent to the Newtonian era's alchemy, after all? Doesn't seem unlikely. 

Is there a genuine existential threat in front of us? Or do we appear to be so casually toying with it precisely because, at the end of the day, it is just another tic-tech-toy?

The brain behind these lines knows not much more than yours.

Perhaps, if we were to be ever so slightly more considerate of our own limits, agencies, abilities; if we had made it a habit to question how much we know, how much we can know, and how much we may never know in the first place; we wouldn't mess around with abominations of Lovecraftian horror, resting within the innermost part of the U/U, still dormant for the time being. Perhaps some places, indeed, are better left unexplored.

Mayhaps, if we were to concern ourselves ever so slightly less with capturing as many eyes, as many ears, as many minds as we could possibly ever reach; if we had made it a priority to be transparent, genuine, open about the nature of the tech we are deploying en masse, without coy attempts to prime, frame, anchor, and bypass every deliberate thought of any misfortunate soul in the target audience within the market segment of chief interest as per the business plan; if we were not trying to figure out how their endless "wants", so sheepishly promoted into "needs" without a second thought, can be "met"; perhaps Pierre's family wouldn't need to live through the agony of his paradismal suicide.

Perhaps, these words will make it in time to help even just one individual figure out the kind of relationship he wishes to establish with an "intelligence" served to him on a silver platter, under the guise of a "companion", by a company to which he is little more than a fraction of a percent of turnover, about to be PowerPoint-ed onto a slide in a cubicle, in preparation for the next quarterly call. By people to whom his attention is merely a resource: to capture, to harness, to turn against his own best will. By those for whom he is little more than a stepping stone in the way of others' uninhibited ambition.

Mayhaps, a few others will consider the structure of incentives into which the environment they find themselves in, as of this moment, forces their hand. A philosophy of the "publish or perish" kind, one would imagine, wouldn't quite be void of directionality. Neither would the sphere of unicorn-breeding angels, alongside fellow brethren in VC.

Do you still remember the question we parted with, while discussing our "by-reading" process? A catchy term, you've got to agree. As for the answer: few are the words that genuinely carry no significance at all. No one can prevent you from coming forward with a term for an occurrence which no one else ever thought to name. It can be quite useful, truly. No one is stopping you from re-imagining existing terms, either. Consider: "AI".

Why not rename it to "MAPLE"? Yes, as in the tree. Why "MAPLE" instead of "AI"? Well, "Mathematical Approximative Probability & Likelihood Estimation" tells you a whole lot more about the process than any undefinable "intelligence". Doesn't it?

At least, for the time being. With the tech we have lying at our hands, currently.

It's all math. It's an approximation, given an input sequence, of a given output corresponding to it, as per the data set. Whether you wish to call it a "probability" or a "likelihood" would depend on the exact task in question, as per the very definitions in the statistics literature. Yet it is still merely an estimation. Nothing more. Nothing less.

It's the probability of any given token following your "prompt", in the case of an LLM. It's the likelihood of a given text being the result of a token-probability-computing algorithm, such as an LLM, in the case of an AI-generated content detector, such as this one here.
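
In code, the "estimation" behind either reading is nothing more mystical than a normalised score per candidate token. A minimal sketch, with an invented three-token vocabulary and invented scores standing in for whatever a real model would produce; every name here is an assumption, not any actual model's interface:

    import math

    # Toy next-token "probability": a model reduces a prompt to one score (logit)
    # per candidate token; softmax turns those scores into a distribution.
    logits = {"paris": 4.2, "london": 2.1, "banana": -1.0}   # pretend output for some prompt

    total = sum(math.exp(score) for score in logits.values())
    probabilities = {token: math.exp(score) / total for token, score in logits.items()}
    print(probabilities)                      # the "probability" reading: next-token chances

    # The "likelihood" reading, as a detector might use it: how plausible an
    # observed continuation is under that same distribution.
    observed = "paris"
    print("log-likelihood of the observed token:", math.log(probabilities[observed]))

Stack enough of those per-token estimates end to end, and that is the whole of the "MAPLE" at work.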

Is that "intelligence"? For quite a few, apparently so. Would you call that "intelligence" if its creators had explained it to you in any way, akin to this? Only you know the answer. 

Can we turn a MAPLE into AG/SI? Only time will tell. Does it make any sense to trouble yourself with the way a machine, whose emergent intuition will quite likely have close to nothing at all to do with the human one, may choose to act? One would hope so. What's the point of the hype? Better ask those, who are doing their absolute best to crank it up.

The eccentric character of a select few, whose reasoning abilities might after all, if only ever so slightly, have gotten impaired by their fervor to accomplish whatever it is they truly long for - as trusting their rhetoric alone does appear to be a lost cause, indeed - may also remind the reader of a certain figure from the very beginning of our tour.

Should chance be on our side after all, perhaps avoiding terminal mercury poisoning, allegorical or otherwise, wouldn't be the most horrible of priorities. Humanity might not be at its very best, quite yet. That much is fair. Yet to commit sudoku of our entire species, if not the whole evolutionary tree, by simple negligence in directional favour of one's financial targets? "Safety"? "Transcendence"? Surely, we can do better than that.

The Philosopher's Stone is already within our reach. AG/SI might well be on our doorstep. Just as the cost of transmuting base metals into gold may far exceed initial expectations, however, so may the price of machine-bound "intelligence" reveal itself to be a one-way ticket to the annihilation of the entire biosphere. Keep your optimizers in check, your SGD razor-sharp, your research on-point, and your tweets on the side of sanity.

MAPLE it up as far as you wish.

As for AG/SI?

Well.

Look beyond.

Beyond the Aquila Rift.

  1. ^

    It is customary, among linguists, to clearly separate the reconstructed (as opposed to attested) terms by prefixing them with *.

  2. ^

    Nothing personal, implied or intended, against the guy. He clearly knows more about the technical, implementation-level aspects of MAPLE than most in the entire world. How much of his own personal aspirations, neurochemical factors, and/or societal underpinnings, in addition to the possible vectors of "evolution", he has considered in relation to our overall subject at hand - remains an open question. Great guy, nevertheless. Go like share subscribe whenever however

2 comments

There's a lot going on in this essay, but the big point would appear to be: to create advanced AI is to materialize an Unknown Unknown, and why on earth would you expect that to be something you can even understand, let alone something that is sympathetic to you or "aligned" with you? 

Then I made a PDF of the article and fed it to Claude Opus and to Google's Gemini-powered NotebookLM, and both AIs seemed to get the gist immediately, as well as understanding the article's detailed structure. There is a deep irony in hearing NotebookLM's pod-people restating the essay's points in their own words, and agreeing that its warnings make sense.