Tech writers and other assorted oracles seem to be on the verge of a new era of hand-wringing and teeth-gnashing over AI’s failure to meet expectations (“An Understanding of AI’s Limitations Is Starting to Sink in,” 2020). Self-driving cars seem to be in a holding pattern, forever waiting for one more safety test; bots have the charm of a glorified IVR phone tree; and even the autocorrection features on most word processors remain suspect.

We see this angst over AI growing because digital machines continue to require the user to accommodate the machine, not vice versa. Siri may sound vaguely understanding of who I am, but my Siri does not yet know my name. This inability to detect and engage with human individuality fuels much of the disappointment surrounding AI. Of the four functional types of AI (reactive machines, limited memory, theory of mind (ToM), and self-aware; Understanding the Four Types of Artificial Intelligence, 2016), the early expectation was that reactive machines would be a stepping stone to aware machines. Deep Blue and AlphaGo have exceeded expectations in defeating—nay, crushing—experts at very complex pattern-based games. We don’t fully understand how they do it, but they clearly do not use information about their human opponents in the process. Contrast this with the sensitivity to human emotions seen in diverse mammals. Such a contrast highlights just how far AI has to go, and it provides an insight into our approach to AI.

ToM AI is the current cutting edge in the digital understanding of humans. ToM is defined by the ability to understand that others have their own mindsets. The best-known work in ToM AI (Rabinowitz et al., 2018) uses a meta-learning approach to model the behavior of existing AI systems; as such, it is grounded in neither neuroscience nor psychology. This Machine ToM (MToM) observes the actions of other AI agents built with different rewards, parameterizations, and policies, including inefficient and biased ones, and is trained to predict each agent’s behavior. MToM is not only able to predict other agents’ behavior; it can also act on their false beliefs. These findings were interpreted (Hutson et al., 2018) as indicating that digital cooperation with—and the deception of—humans is now possible.
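To make the core idea concrete, here is a deliberately minimal caricature of behavior prediction from observation. This is not the Rabinowitz et al. meta-learning architecture (which uses neural networks over gridworld trajectories); it is an invented toy in which an "observer" builds a model of another agent purely from that agent's past actions, including a biased, suboptimal one:

```python
import random
from collections import Counter

class BiasedAgent:
    """A toy agent with a fixed, suboptimal policy: it favors one
    action with a given bias, otherwise acts uniformly at random."""
    def __init__(self, preferred, bias=0.8, actions=("N", "S", "E", "W")):
        self.preferred = preferred
        self.bias = bias
        self.actions = actions

    def act(self, rng):
        if rng.random() < self.bias:
            return self.preferred
        return rng.choice(self.actions)

class FrequencyObserver:
    """A caricature of a 'machine theory of mind': it models another
    agent solely from observed behavior and predicts the modal action."""
    def __init__(self):
        self.counts = Counter()

    def observe(self, action):
        self.counts[action] += 1

    def predict(self):
        return self.counts.most_common(1)[0][0]

rng = random.Random(0)
agent = BiasedAgent(preferred="E", bias=0.8)
observer = FrequencyObserver()
for _ in range(200):
    observer.observe(agent.act(rng))

# The observer recovers the agent's behavioral bias without any access
# to the agent's internal rewards or policy parameters.
print(observer.predict())
```

The point of the toy is the asymmetry it illustrates: the observer knows nothing about the agent's internals, yet its purely behavioral model becomes predictive, which is exactly the capability that makes both cooperation and deception possible.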

A preview: our work seeks to train AIs using psychometric data modeled on a neuroscience-based approach to human growth. This refers to systems developed by psyML in conjunction with the technologist Kai Mildenberger and the social neuroscientist Laura Harrison. We develop AI for the explicit purpose of improving the psychological, social, and cognitive functioning of humans. This effort is, in part, a response to ToM AI approaches that train AI for deception, either explicitly or implicitly. Such purely technological approaches to AI, which also make up the bulk of work toward artificial general intelligence, fail to consider their moral and ethical implications. Nor are they tied to any theoretical model of human processing, which precludes application of the scientific method.


At the highest level, our model is based on the work of Karl Friston. In his free energy principle (FEP), he writes “…any biological system, from a single-cell organism to a social structure should have, encoded in its internal (macroscopic) states, a representation of causal structure in its external milieu; and should act to fulfil predictions based on that representation” (Friston, 2012). The homeostatic model we use is allostasis, specifically the concept of allostatic load (AL), which explains how extreme or chronic stress degrades biological systems and their emergent properties, including cognition, emotion, and well-being. This fits naturally within the FEP. Drawing on the evidence that people can improve brain functioning over time through top-down control mechanisms, we have proposed an extension of allostatic load called allostatic growth. This suggests that the conscious direction of thoughts, behaviors, emotions, and social relationships can neuro-remodel our brains to alter the setpoints of our allostatic system, thus improving human functioning. This model for human growth provides us, in whole or in part, with a framework amenable to causal testing.
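For readers who want the mathematical core behind Friston's claim, the standard textbook formulation (our notation, not specific to this work) defines variational free energy over hidden states $s$ and observations $o$ as:

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\text{inaccuracy of internal model}}
  \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Here $q(s)$ is the organism's internal (approximate) belief about the causes of its sensations and $p(o, s)$ its generative model of the world. Because the KL divergence is non-negative, minimizing $F$ simultaneously drives the internal representation toward the true posterior and maximizes the evidence for the organism's model, which is the formal sense in which a system "encodes the causal structure of its external milieu and acts to fulfil its predictions."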

The centrality of human growth and adaptability as the motivating force for our AI requires that AI developers not only model AI on systems consistent with human growth, but also develop AI with the direct ability to help individuals become more adaptive. In effect, we see human systems both as the model for AI development and as a target for AI’s improvement. As such, we refer to this entire approach as Digital Humanity (DH).

Neuroscience and psychology tell us much about the mindset associated with allostatic growth. On a more granular level, our approach relies on psychometrics, the process of validly quantifying latent constructs, to provide much of the raw data we need to train DH systems. Valid assessments of psychological constructs, from personality to emotion to compassion to ToM, have long been available, in addition to sizable data troves that can be found with some digging. The proliferation of written language across social media platforms has produced one of the largest and richest collections of embodied human expression ever assembled. As such, we believe much of the data critical to DH are contained in online text and, with advanced psychometrics, can be validly quantified. We continue to expand the range of psychological constructs we quantify beyond personality and emotion, and to include physiological, behavioral, social, and temporal markers, so that allostatic growth can be modeled using the mathematics of the FEP. We ultimately envision DH as an AI system optimized to have the adaptability of a person, in terms of their social, biological, and psychological development and status.
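As a sketch of the general idea behind quantifying a latent construct from text, here is a minimal lexicon-based scorer. The lexicon, weights, and construct below are invented purely for illustration; real psychometric instruments (and psyML's actual models) are validated, far larger, and multidimensional:

```python
import re

# Hypothetical mini-lexicon mapping words to weights on a single
# illustrative construct ("positive affect"). Invented for this sketch.
LEXICON = {
    "grateful": 1.0,
    "hopeful": 0.8,
    "calm": 0.6,
    "anxious": -0.7,
    "hopeless": -1.0,
}

def score_text(text):
    """Crude construct estimate: mean lexicon weight over matched tokens.
    Returns 0.0 when no lexicon words appear."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# ("grateful" + "calm") / 2 = (1.0 + 0.6) / 2
print(round(score_text("I feel grateful and calm today"), 2))
```

Even this crude mean-over-matches scheme shows why online text is attractive as raw material: a single scalar per post, aggregated over time, already yields a temporal signal, and validated multidimensional instruments extend the same pipeline to the personality, emotion, and social markers described above.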

Much of the discussion around AI, notably when it focuses on self-aware AI, takes on a tone of doomed inevitability. Given the limitless nature of human curiosity and creativity, we agree self-aware AI is coming. But AI is a human creation. How we develop and train AI will affect its ultimate nature. If we seed its earliest incarnations with the ability to deceive us, with no ties to our well-being and values, AI will be very different than it would be if it is trained for empathy, compassion, neuro-remodeling, and the collective good. For this to happen, the AI community must develop a degree of trust, transparency, and collaboration. To this end, we are grateful to Towards AI for publishing much of the discussion needed to develop such a community.