Fusing the power of data-driven techniques and model-based approaches

We work with data and signals, researching and developing techniques in artificial intelligence, machine learning, and statistical signal processing to generate, transform, extract, and interpret information.

We enhance the power of modern data-driven techniques, including deep learning, by incorporating a priori information in the form of models and expert domain knowledge. This allows us to deal with complex problems even if only very limited training data is available.

Our focus is on theory and methods, but our research is motivated by real-life questions. Applications of our research range from mobile communications to neuroscience and medicine. Continue reading to learn how our research results may be applied to some featured problems.

Multi-modal data analysis

We develop techniques for the joint analysis of data acquired from multiple modalities. One application is the analysis of the complex interactions between subsystems of the autonomic nervous system.

A multimodal analysis of the autonomic nervous system provides insights into its internal organization

The complex functionality of the autonomic nervous system (ANS) is achieved by task-specific modulation of the organization of several organ-specific subnetworks as well as their interaction. Organ-specific regulative mechanisms of the ANS, like heart-rate control or electrodermal activity, are integrated via a central autonomic network. Typical analyses of ANS activity are unimodal and consider organ-specific subsystems individually, whose measures are subject to high day-to-day variation with little systematic structure.

We develop techniques for a multimodal analysis of the ANS, which provide insights into the internal organization of the ANS and how its subsystems exchange information. Particularly interesting is the question of how specific stressors, such as physical exercise, may perturb organ functions and thus alter ANS activity and internal interactions. The more severe the external stressor, the stronger this internal reorganization, as was confirmed in a study looking at the effects induced by running an ultramarathon.
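
A minimal sketch of one way such interactions can be quantified, using synthetic stand-ins for the ANS modalities (the signal names and the common-drive model are illustrative assumptions, not our actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, synthetic stand-ins for three ANS modalities sampled at a
# common rate: heart rate and electrodermal activity share a latent
# "autonomic drive"; respiration is generated independently here.
n = 600
drive = rng.standard_normal(n)
heart_rate = drive + 0.5 * rng.standard_normal(n)
eda = 0.8 * drive + 0.6 * rng.standard_normal(n)
respiration = rng.standard_normal(n)

# Pairwise correlations quantify the strength of the interactions between
# subsystems; comparing this matrix before and after a stressor would
# reveal an internal reorganization of the network.
signals = np.vstack([heart_rate, eda, respiration])
interaction = np.corrcoef(signals)
print(np.round(interaction, 2))
```

In this toy model the heart-rate/EDA entry of the matrix is large because both signals share the latent drive, while the respiration entries stay near zero.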

A potential application may be the prediction of epileptic seizures

Another extremely strong stressor is an epileptic seizure. Epilepsy is one of the most common neurological disorders, affecting around 50 million people worldwide. Epileptic seizures are caused by sudden excessive electrical discharges in the brain. Beyond physical injuries, seizures may also lead to anxiety and depression. Being able to predict epileptic seizures would go a long way in improving the quality of life for people suffering from epilepsy.

While EEG is the gold standard for the detection and diagnosis of epilepsy, continuous long-term EEG recording is not feasible in everyday life. Wearable devices, on the other hand, can easily record several modalities characterizing ANS activity (such as heart rate, blood volume pulse, or electrodermal activity). A joint analysis of these modalities may identify a group of biomarkers that together are more powerful for predicting epileptic seizures than any of them individually.

This is joint work with Prof. Claus Reinsberger at the Institute of Sports Medicine, Paderborn.

Selected publications
S. Vieluf, V. Scheer, T. Hasija, P. J. Schreier, and C. Reinsberger, “Multimodal approach towards understanding the changes in the autonomic nervous system induced by an ultramarathon,” Research in Sports Medicine, doi: 10.1080/15438627.2019.1665522, 2019
S. Vieluf, T. Hasija, R. Jakobsmeyer, P. J. Schreier, and C. Reinsberger, “Exercise-induced changes of multimodal interactions within the autonomic nervous network,” Frontiers in Physiology, vol. 10, doi: 10.3389/fphys.2019.00240, Mar. 2019

Structure-revealing data fusion in neuroscience

In biomedical imaging for the study of brain function, an increasing number of studies are collecting multiple measurements from different modalities, e.g., functional MRI, structural MRI, and EEG. These modalities provide complementary information. For instance, fMRI has very good spatial resolution but poor temporal resolution, whereas EEG has high temporal resolution but poor spatial localization. It is thus of interest to fuse the measurements obtained from these different techniques to combine their respective advantages.

On the other hand, data fusion is also of interest for multiple datasets that are all of the same type but acquired from different samples, at different time points, or under different conditions. We investigate data-driven techniques for the joint analysis of such multiple datasets, so that all available observations can fully interact and inform each other.

Matrix and tensor factorizations can explain the relationships among the observations by extracting features that admit a physical interpretation. These features may then be used for classification, detection, change analysis, and prediction.
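
As a minimal illustration of the idea (a sketch with made-up data, not one of our published methods), a truncated SVD factors a data matrix into a small number of components whose factors can be read as time courses and spatial patterns:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data matrix: 50 observations (e.g., time points) of
# 20 variables (e.g., channels), generated from 2 latent components
# plus a small amount of noise.
time_courses = rng.standard_normal((50, 2))
spatial_maps = rng.standard_normal((2, 20))
X = time_courses @ spatial_maps + 0.1 * rng.standard_normal((50, 20))

# Truncated SVD: X ~ U_r S_r V_r^T. The scaled columns of U_r act as
# extracted temporal features; the rows of V_r^T are the corresponding
# spatial patterns.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2
features = U[:, :r] * s[:r]   # component time courses
patterns = Vt[:r, :]          # component spatial patterns

# The rank-2 reconstruction captures almost all of the variance.
X_hat = features @ patterns
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

The extracted features could then feed a downstream classifier or change-detection step, as described above.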

A critical step in these methods is discovering the relationship between multiple datasets by identifying common (correlated or dependent) components as well as information distinct to each of them. The identification of coupled components between datasets becomes particularly challenging when sample sizes are small. Past approaches have often relied on ad hoc rules of thumb to deal with such scenarios, whereas we have proposed methods that address these problems systematically using sound statistical techniques.
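
The classical tool for identifying correlated components between two datasets is canonical correlation analysis (CCA). The following sketch (synthetic data and a textbook QR-based implementation, not our published estimators) shows how a single shared component surfaces as one large canonical correlation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical datasets that share exactly one latent component.
n = 500
common = rng.standard_normal(n)
X = np.column_stack([common,
                     rng.standard_normal(n),
                     rng.standard_normal(n)])
Y = np.column_stack([0.9 * common + 0.4 * rng.standard_normal(n),
                     rng.standard_normal(n)])

def canonical_correlations(X, Y):
    """Sample canonical correlations between datasets X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormalize each dataset via its economy QR decomposition; the
    # singular values of Qx^T Qy are then the canonical correlations.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

rho = canonical_correlations(X, Y)
print(np.round(rho, 2))
```

One canonical correlation is large (the shared component); the rest reflect only sampling noise. With small sample sizes, however, even independent datasets produce deceptively large sample canonical correlations, which is precisely why deciding how many components are genuinely correlated requires the kind of principled model-order selection our work addresses.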

This is joint work with the MLSP lab of Prof. Tülay Adali, University of Maryland, USA, who is supported by a Humboldt Research Award.

Selected publications
T. Hasija, T. Marrinan, C. Lameiro, and P. J. Schreier, “Determining the dimension and structure of the subspace correlated across multiple data sets,” Signal Processing, vol. 176, Nov. 2020
Y. Levin-Schwartz, Y. Song, P. J. Schreier, V. D. Calhoun, and T. Adali, “Sample-poor estimation of order and common signal subspace with application to fusion of medical imaging data,” NeuroImage, vol. 134, pp. 486-493, July 2016

Processing intraoperative X-rays by incorporating prior information

Selected publication
A. Pries, P. J. Schreier, A. Lamm, S. Pede, and J. Schmidt, “Deep Morphing: Detecting bone structures in fluoroscopic X-rays with prior knowledge”

X-rays taken intraoperatively are typically low-dose X-rays in order to limit the radiation exposure of surgeons and operating room staff. Such low-dose X-rays (called fluoroscopic X-rays) have a much lower SNR and poorer contrast than diagnostic X-rays. Moreover, fluoroscopic X-rays are usually acquired with so-called C-arms, which can be easily repositioned and rotated during surgery. This means the images do not have a standardized appearance. Further complicating matters for data-driven approaches is the fact that often only a few training samples are available because intraoperative X-rays are seldom saved. All this makes automatic processing of fluoroscopic X-rays challenging.

We develop approaches based on deep learning to process such fluoroscopic X-rays when only a small training dataset is available and the images have low quality. We attack the problem by incorporating high-level information about the objects, which could be a simple geometrical model, like a circular outline, or a more complex statistical model. A simple geometrical representation can sufficiently describe some objects and only requires minimal labeling. Statistical shape models can be used to represent more complex objects.

For the automated labeling of anatomical points on bones, we have introduced a computationally efficient two-stage approach called deep morphing, which fits a high-level geometrical or statistical description to the output of a deep segmentation network.
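
The second stage of such a pipeline can be sketched in a few lines. The example below is an illustrative toy (synthetic boundary points standing in for a segmentation network's output, and a standard algebraic circle fit, not the actual deep-morphing implementation): a simple geometric model, here a circular outline, is fitted to noisy network output.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for the output of a segmentation network: noisy
# points on the outline of a roughly circular bone structure, with true
# center (60, 45) and radius 20 (in pixels).
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([60 + 20 * np.cos(theta),
                       45 + 20 * np.sin(theta)])
pts += rng.standard_normal(pts.shape)  # segmentation noise

# Second stage: fit the high-level geometric model (a circle) to the
# network output with an algebraic least-squares (Kasa) fit. A circle
# satisfies x^2 + y^2 = a*x + b*y + c, with center (a/2, b/2) and
# radius sqrt(c + a^2/4 + b^2/4).
A = np.column_stack([pts, np.ones(len(pts))])
rhs = (pts ** 2).sum(axis=1)
a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
cx, cy = a / 2, b / 2
radius = np.sqrt(c + cx ** 2 + cy ** 2)
print(f"center=({cx:.1f}, {cy:.1f}), radius={radius:.1f}")
```

Because the fit pools information from all boundary points, the recovered circle is far more stable than any individual noisy point, which is how the geometric prior compensates for low image quality and scarce training data.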