Some 2,500 years ago, Heraclitus famously said that “everything flows and nothing stays fixed”. While his insight can offer some solace when dealing with the slings and arrows of outrageous fortune, scientists are usually not so easily satisfied. Is it true that nothing is fixed? And given that things flow, how precisely do they flow?
Since the scientific revolution took off in the 16th century, scientists have made significant progress in pinning down what is fixed. Newton discovered that planetary orbits adhere to the immutable laws of gravity, and captured their flow in the beautiful language of calculus that plagues high school students to this day. Einstein realized that the speed of light remains constant, even as space and time contort around it.
Beyond understanding what is fixed, scientists have also made progress in understanding how things change: from the climate to the stock market to the incredible complexity of some 80 billion interconnected spiking neurons constituting a human brain, dynamics come in many shapes and forms and pose a formidable challenge for those trying to build models of them.
Our human psyches, being intertwined with the dynamics of the brain, and embedded in a complex world full of uncertainty, are likewise always in flux. And so, it should come as no surprise that mental illnesses can likewise feature complex dynamical patterns. Bipolar disorder serves as a prime example: it is characterized by fluctuating moods that oscillate between depressive and manic phases, and its unpredictability can be hard to grapple with for those afflicted by it.
When Johannes Kepler set out to distil observed planetary motions into equations, it took him nearly a decade, working from the data he received from Tycho Brahe, to formulate his renowned laws. Twenty-first-century scientists, however, do not have to rely solely on pen and paper when figuring out equations of motion: they are supported by computers and learning algorithms. Machine learning and AI techniques have become increasingly powerful at automatically extracting patterns from diverse data. They have already found their way into many medical applications, and promise to find their way into many more.
My research group at the Central Institute for Mental Health focuses on a subfield of AI called dynamical systems reconstruction. We use neural networks to extract models from the flows we observe around us. Those flows can stem from measurements of the heart or brain, or, as in the example of IMMERSE, from smartphone data collected in the everyday lives of those affected by mental health challenges.
Once we have extracted a series of models, we can use them to predict the immediate future, or to uncover hidden dynamical patterns and trends, much as one does when forecasting short-term weather or long-term climate change.
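To make the idea concrete, here is a deliberately simplified sketch of the reconstruct-then-forecast loop. It is not the method our group uses: instead of a neural network it fits a linear transition matrix to an observed trajectory by least squares, then rolls the fitted model forward. All variable names and the simulated "mood cycle" data are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for dynamical systems reconstruction: fit a linear model
# x_{t+1} ≈ A @ x_t to an observed trajectory, then forecast with it.
rng = np.random.default_rng(0)

# Simulate a noisy 2-D oscillation (a crude "mood cycle") as ground truth.
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
x = np.zeros((200, 2))
x[0] = [1.0, 0.0]
for t in range(199):
    x[t + 1] = A_true @ x[t] + 0.01 * rng.standard_normal(2)

# Least-squares fit of the transition matrix from consecutive time points.
X_now, X_next = x[:-1], x[1:]
M, *_ = np.linalg.lstsq(X_now, X_next, rcond=None)  # solves X_now @ M = X_next
A_hat = M.T

# Roll the fitted model forward 20 steps from the last observation.
forecast = [x[-1]]
for _ in range(20):
    forecast.append(A_hat @ forecast[-1])
forecast = np.array(forecast)
```

A real reconstruction model replaces the matrix `A_hat` with a trained recurrent neural network, which can capture the nonlinear dynamics that real physiological and behavioral data exhibit.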
More specifically, in my project we are looking at integrating information across several jointly observed modalities. The data in smartphone-based studies like IMMERSE is often collected using the Experience Sampling Method (ESM), which presents subjects with questionnaires about their current mood several times a day (e.g. how happy are you on a scale from 1 to 10?). However, smartphones can also collect passive data, such as step counts, activity patterns, or GPS data, which does not depend on the time-intensive and sometimes unreliable survey participation of the study subjects. At the same time, this passively collected data can look very different from the ESM ratings (numbers of steps taken per hour, or abstract GPS coordinates that look something like “41.40338, 2.17403”), and integrating across them requires quite some ingenuity.
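A first, mundane part of that ingenuity is simply getting the modalities onto a common time grid. The sketch below, with entirely made-up variable names and a daily grid chosen for illustration, averages sparse ESM mood ratings per day (leaving gaps where none were answered), aggregates hourly step counts per day, and standardizes the steps so both modalities live on comparable scales:

```python
import numpy as np

# Illustrative alignment of two modalities onto a shared daily time grid.
rng = np.random.default_rng(1)
n_days = 14

# ESM: zero to three mood ratings (1-10) per day; some days are missed.
esm = [list(rng.integers(1, 11, size=rng.integers(0, 4)))
       for _ in range(n_days)]

# Passive data: one step count per hour, never missing.
steps_hourly = rng.integers(0, 800, size=(n_days, 24))

# Daily mood: mean of that day's ratings, NaN when none were answered.
mood_daily = np.array([np.mean(day) if day else np.nan for day in esm])

# Daily steps: sum over the 24 hourly counts, then z-scored so the two
# modalities are on comparable scales before joint modelling.
steps_daily = steps_hourly.sum(axis=1)
steps_z = (steps_daily - steps_daily.mean()) / steps_daily.std()

# One (days x features) array feeding into a shared dynamics model.
joint = np.column_stack([mood_daily, steps_z])
```

The NaN gaps are the crux: a good multimodal model must cope with missing active data while still exploiting the uninterrupted passive stream.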
As part of my project, we have developed a novel training paradigm based on several recent advances in how to extract dynamical models. The framework, called the Multimodal Variational Autoencoder-Teacher Forcing (yes, it is a mouthful), can take several time series observed at the same time and use them to extract a shared dynamics model.
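The two ingredients can be caricatured in a few lines. The sketch below is not the actual framework but a minimal illustration of the structure: one shared latent state evolves under a single dynamics function and is decoded separately into each modality, and during training the latent state is periodically reset from the encoded observations ("teacher forcing") so that prediction errors cannot compound over the whole sequence. Dimensions, the `tanh` dynamics, and random weights are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
d_z, d_mood, d_steps, T, tau = 3, 1, 2, 50, 10

# Shared latent dynamics plus one decoder per modality and a joint encoder.
W = 0.9 * np.linalg.qr(rng.standard_normal((d_z, d_z)))[0]  # latent dynamics
D_mood = rng.standard_normal((d_mood, d_z))     # decoder: latent -> mood
D_steps = rng.standard_normal((d_steps, d_z))   # decoder: latent -> steps
E = rng.standard_normal((d_z, d_mood + d_steps))  # encoder: data -> latent

def rollout(obs, force_every=None):
    """Evolve the shared latent state; optionally teacher-force from data."""
    z = np.zeros((T, d_z))
    z[0] = E @ obs[0]
    for t in range(1, T):
        if force_every and t % force_every == 0:
            z[t] = E @ obs[t]           # reset latent from the observation
        else:
            z[t] = np.tanh(W @ z[t - 1])  # free-running dynamics step
    # Decode the single latent trajectory into both modalities.
    return z, z @ D_mood.T, z @ D_steps.T

obs = rng.standard_normal((T, d_mood + d_steps))
z_forced, mood_hat, steps_hat = rollout(obs, force_every=tau)
```

In training, the decoded trajectories would be compared to the real mood and step series, and gradients would update the dynamics, encoder, and decoders jointly; because both modalities are decoded from the same latent state, information from each constrains the shared model.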
More data usually means more expressive and powerful models, and integrating across passive and active sensor data promises a more thorough understanding of the dynamical processes underlying mental illness. Imagine, for example, an upcoming depressive episode: while individuals might exhibit reduced motivation to complete surveys, there could also be a noticeable decline in physical activity or more time spent indoors, as indicated by step or GPS data. Combining this information in a joint model can provide clinicians with early warning signs to effectively counter the onset of these episodes, leading to improved therapy.
While employing AI models in clinical settings requires care and a thorough understanding of the specific technical and ethical challenges of the field, it also promises improved person-centered care: an AI model can provide a perspective much more tailored to the individual, since it can extract complex relationships from the data unique to that individual. These relationships might be harder for a clinician to see under time pressure, or based solely on the subjects' self-reports during consultation hours, with little data to back them up.
Thus, the hope is that integration of these models into clinical practice and smartphone-based therapy approaches can offer clinicians an improved understanding of individual challenges, providing a synergy between the human element of therapy and the AI’s ability to discern hidden patterns from many forms of data.