Epistemic status: Musings about questioning assumptions and purpose. I am not sure whether I correctly extract wisdom from fiction, and other fallacies may be present; bringing these ideas to the light of commentary should improve or dispel them.

 

I distinguish a wise algorithm from a merely intelligent one by the wise algorithm's additional ability to pursue goals that would not lead to the algorithm's instability. I consider the following properties desirable in a wise algorithm:

  • Questioning assumptions, which could be aided by:
    • Possessing multiple representations and modes of thinking to avoid getting stuck 
    • Integrating new information on the fly
  • Having a stable sense of purpose that does not interfere with the algorithm's stability, which I refer to as a good sense of purpose

 

Questioning assumptions

Paradoxes are dissolved by questioning assumptions. For example, let's consider how solving the blackbody radiation paradox gave birth to quantum physics.

 

Initial assumption: energy is shared equally by electrons vibrating at different frequencies -> implies an ultraviolet catastrophe, where heated objects emit infinite energy (imagine an oven turning into a supernova)

Revised assumption: in order to vibrate at a certain frequency, an electron must have a quantum of energy proportional to that frequency -> predictions in agreement with experimental data, which show no ultraviolet catastrophe

Now for the details.

According to classical physics, light is an electromagnetic wave that is produced by the vibration of an electron. Heat is the kinetic energy of random motion, so in a hot object the electrons vibrate in random directions, generating light as a result. Thus a hotter object emits more light. 

Classical mechanics assumes that there is no limit to the range of frequencies at which the electrons in a body can vibrate. This implies that an object can emit infinite energy - the so-called ultraviolet catastrophe. However, experiments showed that the radiation of a blackbody (an object that does not reflect any light) becomes small at short wavelengths (high frequencies) - the left-hand side of the graph below.

[Graph of blackbody radiation. Source: https://physics.weber.edu/carroll/honors/failures.htm]

Max Planck questioned the assumption that each frequency must have the same energy. Instead, he proposed that energy is not shared equally by electrons vibrating at different frequencies. Planck also said that energy comes in clumps, calling a clump of energy a quantum. The size of such a clump of energy depends on the frequency of the vibration as follows:

 

                    Energy of a quantum = (calibration constant) x frequency of vibration

                                                                     E = h x f

where the calibration constant h ≈ 6.626 x 10⁻³⁴, measured in [Joules x seconds], is Planck's constant.

At high frequencies, the quantum of energy required to vibrate is so large that the available thermal energy cannot supply it, so those vibrations effectively can't happen.
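To make this concrete, here is a quick back-of-the-envelope check in Python. The constants are the standard values of Planck's and Boltzmann's constants; the three sample frequencies are merely illustrative:

```python
# Back-of-the-envelope check: compare the quantum of energy E = h * f with
# the typical thermal energy k * T available at room temperature. When the
# quantum is much larger than the thermal energy, that vibration is
# effectively frozen out.

h = 6.626e-34  # Planck's constant [J*s]
k = 1.381e-23  # Boltzmann's constant [J/K]
T = 300.0      # room temperature [K]

thermal_energy = k * T  # ~4.1e-21 J

for label, f in [("infrared", 1e13), ("visible", 5e14), ("ultraviolet", 1e16)]:
    quantum = h * f
    print(f"{label:12s} f = {f:.0e} Hz -> E = {quantum:.2e} J "
          f"({quantum / thermal_energy:.0f}x thermal energy)")
```

The ultraviolet quantum comes out orders of magnitude larger than the thermal energy available, so such vibrations are strongly suppressed and the catastrophe never materializes.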

 

To reiterate:

Initial assumption: energy is shared equally by electrons vibrating at different frequencies

Revised assumption: in order to vibrate at a certain frequency, an electron must have a quantum of energy proportional to that frequency.

 

So how could an AI identify assumptions and falsify them? Most of the time, assumptions are not spelled out, so a form of semantic decompression would be required: expanding a statement while keeping the meaning the same - something analogous to image upscaling. Causal understanding would help guide the semantic decompression and would also make the process more efficient, as the most significant assumptions would be questioned first.
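For concreteness, here is a toy sketch of what such a pipeline might look like. Everything in it is hypothetical: the assumption list and the causal-significance scores are hand-written placeholders for what a real system would have to infer from a learned causal model.

```python
# Hypothetical sketch of "semantic decompression": expand a statement into
# its implicit assumptions, then question the most causally significant
# ones first. The knowledge base and scores below are invented placeholders.

from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    causal_significance: float  # how strongly the conclusion leans on it (0-1)

# Toy "decompression" of one statement into assumptions it silently relies on.
IMPLICIT_ASSUMPTIONS = {
    "heated objects emit light at every frequency": [
        Assumption("electrons can vibrate at any frequency", 0.9),
        Assumption("energy is shared equally across frequencies", 0.8),
        Assumption("light is produced by vibrating electrons", 0.4),
    ],
}

def questioning_order(statement: str) -> list[Assumption]:
    """Return the statement's implicit assumptions, most significant first."""
    found = IMPLICIT_ASSUMPTIONS.get(statement, [])
    return sorted(found, key=lambda a: a.causal_significance, reverse=True)

for a in questioning_order("heated objects emit light at every frequency"):
    print(f"{a.causal_significance:.1f}  {a.text}")
```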

 

These ideas, if they make sense beyond my musings, might bloom someday. In the meantime, let's attend to two important ideas introduced by the Numenta research group with Hierarchical Temporal Memory (HTM): multiple representations of an input based on context and, as a bonus, the processing of data streams in an online manner.

 

First, having multiple representations for the same input would allow for analyzing the input with a different set of assumptions. Mashing together the work of Daniel Kahneman and Amos Tversky with that of Gerd Gigerenzer, we could say that heuristics turn into fallacies when used in the wrong context.

 

Steering mechanisms can be used to select among the modes of information processing that an AI model already has, but it is also important to know which assumptions work best with which kinds of problems. Moreover, steering does not address how one would shape the development of these processing modes in the first place.

 

To give an AI model multiple perspectives, Numenta's HTM maintains several representations for a given input. Inputs to the HTM are part of a data stream, with a given input being encoded differently in different contexts.

 

Consider the following sentences:

 

  • I ate a pear
  • I have eight pears

 

The words “ate” and “eight” sound identical, and it is very likely that at some point in the brain there are neurons that respond identically to both. But further down the processing chain there will be neurons that encode different representations, based on the other words in the sentence.

Source: https://www.numenta.com/assets/pdf/whitepapers/hierarchical-temporal-memory-cortical-learning-algorithm-0.2.1-en.pdf
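As a toy illustration - my own simplification in Python, not Numenta's actual algorithm, with the encoding functions and sizes invented for the example - the same sound can map to the same "columns" while the preceding word selects which cell within each column becomes active:

```python
# Toy sketch of context-dependent representations, loosely inspired by HTM's
# columns-and-cells scheme (a simplification, not Numenta's implementation).
# Identical inputs activate identical columns; the preceding word selects
# which cell fires within each column, giving a context-specific encoding.

NUM_COLUMNS = 2048
CELLS_PER_COLUMN = 4
ACTIVE_COLUMNS = 40

def columns_for(sound: str) -> frozenset[int]:
    """Context-free encoding: the same sound always maps to the same columns."""
    return frozenset(hash((sound, i)) % NUM_COLUMNS for i in range(ACTIVE_COLUMNS))

def contextual_cells(sound: str, previous_word: str) -> frozenset[tuple[int, int]]:
    """Context-dependent encoding: context picks the active cell per column."""
    return frozenset(
        (col, hash((previous_word, col)) % CELLS_PER_COLUMN)
        for col in columns_for(sound)
    )

ate = contextual_cells("eyt", previous_word="I")       # "I ate a pear"
eight = contextual_cells("eyt", previous_word="have")  # "I have eight pears"

print(columns_for("eyt") == columns_for("eyt"))  # True: same sound, same columns
print(ate == eight)  # False (with high probability): context changed the cells
```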

 

Second, HTM is adapted to processing data streams in an online manner. Current learning algorithms have benefited from improved processing power that allows them to train in reasonable time on the huge amounts of data available. But data is not just big, it's getting bigger, fast: in 2013, some claimed that 90% of the world's data had been created in the previous two years. With more and more devices coming online, this trend most likely continues today.

 

Therefore, an algorithm like HTM could provide the next leap in performance by exhibiting life-long learning and adapting to new data more quickly. There would be no need to set aside separate computing machines for training and deployment, and AI algorithms would no longer be slaves to a distant, quickly receding past.
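For contrast with batch training, here is a minimal sketch of online learning in Python - plain streaming gradient descent rather than HTM itself - where the model keeps adapting after "deployment" as the world drifts underneath it:

```python
# Minimal sketch of online learning (plain streaming SGD, not HTM itself):
# the model is updated one sample at a time, so there is no separate
# training phase -- it keeps adapting after deployment.

import random

w, b = 0.0, 0.0  # linear model y ~ w*x + b, updated forever
LR = 0.01        # learning rate

random.seed(0)
for t in range(10_000):
    true_slope = 2.0 if t < 5_000 else -1.0      # the world changes at t = 5000
    x = random.uniform(-1.0, 1.0)
    y = true_slope * x + random.gauss(0.0, 0.1)  # noisy observation

    error = (w * x + b) - y                      # one gradient step on
    w -= LR * error * x                          # this single sample
    b -= LR * error

print(f"learned slope: {w:.2f}")  # ~ -1.0: the model tracked the recent change
```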

 

A sense of purpose

 

Generalizing from fiction becomes a fallacy when one borrows assumptions that can't be substantiated in the real world. Fiction may not provide accurate factual information, but good fiction often presents important psychological truths: accurate psychology is needed to provoke the intended feeling in the reader.

 

With that in mind, let's mine the play "Iona" (Jonah) by Marin Sorescu for wisdom regarding purpose. The play follows the eponymous biblical character, who is having an existential crisis while in the stomach of a whale. He eventually musters the courage to slice open the whale's belly, only to find himself in the belly of a different whale - think Matryoshka dolls. When he finally reaches shore, he gazes at the horizon, only to realize it is another kind of whale belly: "an endless sequence of whale bellies". The character is facing infinite regress: once breached, the horizon returns in a different form. Faced with this, he feels a loss of purpose, but recovers by deciding to continue his exploration of boundaries inward, as one might when pondering the Buddhist saying: "When you get what you desire, you become a different person. Try to become that person and you might not need to get the object of your desires."

How might one describe purpose? One way is to say that purpose is given by internal representations intrinsic to a cognitive algorithm (symbol grounding). Stephen Larson borrows a definition from Bickhard and Terveen:

 

As the state of a system changes, the meaning of a system’s semantics is the effect to the system of propagating its constraints in response to those changes.

 

Larson exemplifies this with an amoeba. When exposed to a sugar gradient, the amoeba will travel up the gradient to take advantage of a better food source. Enabling this movement to occur are molecular processes capable of detecting sugar levels and translating that detection into motion. The propagation of constraints that the sugar molecules enable is the meaning of the sugar molecule to the amoeba.
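A toy simulation can make this concrete. The functions and numbers below are my own arbitrary illustration: sugar "means" something to this simulated amoeba only through the constraint chain it propagates from detection to motion.

```python
# Toy illustration of meaning as constraint propagation (an arbitrary sketch):
# sugar "means" something to this simulated amoeba only through the chain it
# triggers -- detection constrains motion.

import math

def sugar_concentration(position: float) -> float:
    """A smooth 1-D sugar gradient, richest toward the right."""
    return 1.0 / (1.0 + math.exp(-position))

def step(position: float) -> float:
    """Sense the local gradient and move up it. This sensing-to-motion
    coupling is all that sugar 'means' to the amoeba."""
    here = sugar_concentration(position)
    ahead = sugar_concentration(position + 0.1)
    return position + 0.1 if ahead > here else position - 0.1

pos = -1.0
for _ in range(30):
    pos = step(pos)
print(f"final position: {pos:.1f}")  # drifted up the gradient
```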

What meaning could a cognitive algorithm capable of self-reflection attribute to itself? To avoid the infinite regress faced by Jonah, I propose that the meaning or purpose a wise cognitive algorithm attributes to itself only makes sense in relation to a different cognitive algorithm instantiated as a separate entity.

The life of a person is meaningful insofar as it has an impact on the lives of others. Isolation and solitary confinement are considered punishments across cultures. If we are not needed, our genes will not be passed on to the next generation. So wisdom and the desire for self-preservation would compel a person to be of service to society.

As an important practical application and a significant milestone in the development of AI, Rodney Brooks proposes creating caretaking robots; he calls his hypothetical elder-care worker robot ECW. Such robots would have their purpose defined relative to the wellbeing of the vulnerable people in need of their care.

 

There is a strong economic incentive for such an application. Aging populations across many countries require caregiving, while birth rates generally decline: as women around the world become more integrated into the workforce and gain access to contraception, they tend to have fewer children. As populations plateau and age, robotic caretakers will become a necessity.

 

To address the population problem at the other end of life, the ECW could also be adapted for child rearing.

Childcare has some important differences compared to eldercare:

 

  • children must learn to express themselves as part of their development, which poses both an inference problem (how to understand what they need) and a didactic problem (how to teach them to express their desires clearly)
  • children perform experiments as part of their development, so they require closer supervision to keep them from coming to harm
  • as they develop, children's behaviors, beliefs and preferences change, so it's important to be able to adapt to these changes
  • children are much more fragile than the elderly, so they would need to be handled with greater finesse

Wide-eyed optimists dream of visions such as the one described in the short story "A Sun Will Always Sing", visions that one cannot help but wish would come true.

Suppose robots do end up taking care of us. Then we should concern ourselves with avoiding scenarios in which we atrophy in their care, as described in "The Machine Stops" by E. M. Forster and popularized by the movie WALL-E, with the added twist of a happy ending. One philosophical argument against a life spent in a blissful quasi-vegetative state is that the purpose of one's life is to fulfill one's potential, bringing the most benefit one can to the world. But our desires are strong, and wireheading is very tempting.

Conclusion

There is no single path to wisdom, but it seems to me that one of the few ways to lead meaningful lives is to provide as much benefit to each other as we can.

Comments

Please go further towards maximizing clarity. Let's start with this example:
> Epistemic status: Musings about questioning assumptions and purpose.
Are those your musings about agents questioning their assumptions and world-views?

And like, do you wish to improve your fallacies?

> ability to pursue goals that would not lead to the algorithm’s instability.
higher threshold than ability, like inherent desire/optimisation? 
What kind of stability? Any from https://en.wikipedia.org/wiki/Stable_algorithm? I'd focus more on sort of non-fatal influence. Should the property be more about the alg being careful/cautious?

>Are those your musings about agents questioning their assumptions and world-views?
- Yes, these are my musings about agents questioning their assumptions and world-views.

>And like, do you wish to improve your fallacies?
- I want to get better at avoiding fallacies. What I desire for myself I also desire for AI. As Marvin Minsky put it: "Will robots inherit the Earth? Yes, but they will be our children."

>higher threshold than ability, like inherent desire/optimisation?
>What kind of stability? Any from https://en.wikipedia.org/wiki/Stable_algorithm? I'd focus more on a sort of non-fatal influence. Should the property be more about the alg being careful/cautious?
- I was thinking of stability in terms of avoiding infinite regress, as illustrated by Jonah noticing the endless sequence of metaphorical whale bellies.

The philosopher Gabriel Liiceanu, in his book "Despre limită" (English: Concerning Limit - unfortunately, no English translation seems to be available), argues that we feel lost when we lose our landmark-limit, e.g. in the desert or in the middle of the ocean on a cloudy night with no navigational tools. I would say that we can also get lost in our mental landscape and thus be unable to decide which goal to pursue.

Consider the paperclip-maximizing algorithm: once it has turned all available matter in the Universe into paperclips, what will it do? And if the algorithm can predict that it will reach this confusing state, does it decide to continue the paperclip optimization? As a Buddhist saying goes: "When you get what you desire, you become a different person. Consider becoming that version of yourself first and you might find that you no longer need the object of your desires."