
2021 | Book

Bildverarbeitung für die Medizin 2021

Proceedings, German Workshop on Medical Image Computing, Regensburg, March 7-9, 2021

Edited by: Prof. Dr. Christoph Palm, Prof. Dr. Thomas M. Deserno, Prof. Dr. Heinz Handels, Prof. Dr. Andreas Maier, Prof. Dr. Klaus Maier-Hein, Prof. Dr. Thomas Tolxdorff

Publisher: Springer Fachmedien Wiesbaden

Book series: Informatik aktuell


About this book

In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through a series of successful events. The goal for 2021 is again to present current research results and to deepen the exchange between scientists, industry, and users. The contributions in this volume, some of them in English, cover all areas of medical image processing, in particular imaging and image acquisition, machine learning, image segmentation and image analysis, visualization and animation, time-series analysis, computer-aided diagnosis, biomechanical modeling, validation and quality assurance, image processing in telemedicine, and much more.

Table of Contents

Frontmatter
Learning from Imperfect Data: Weak Labels, Shifting Domains, and Small Datasets in Medical Imaging

Machine learning approaches, and especially deep neural networks, have had tremendous success in medical imaging in the past few years. Machine learning-based image reconstruction techniques are used to acquire high-resolution images at a much faster pace than before. Automated, quantitative image analysis with convolutional neural networks is as accurate as the assessment of an expert observer.

Marleen de Bruijne
Interactive Design of Convolutional Neural Networks for Medical Image Analysis

Convolutional neural networks (CNNs) have played an important role in image analysis, with several successful applications involving object detection, segmentation, and identification. The design of a CNN model traditionally relies on the pre-annotation of a large dataset, the choice of the model's architecture, and the tuning of the training hyperparameters. These models are often regarded as "black boxes", implying that one cannot explain their decisions.

Alexandre Xavier Falcão
Artificial Intelligence in Endoscopy

Artificial intelligence (AI) will revolutionize our daily life and will have a tremendous impact on health care. The influence is expected to be especially substantial in disciplines where imaging plays an important role. Radiology, pathology and endoscopy will benefit from these developments.

Helmut Messmann
Learning-based Patch-wise Metal Segmentation with Consistency Check

Metal implants that are inserted into the patient's body during trauma interventions cause heavy artifacts in 3D X-ray acquisitions. Metal Artifact Reduction (MAR) methods, whose first step is always a segmentation of the present metal objects, try to remove these artifacts. The segmentation is thus a crucial task that strongly influences the MAR outcome. This study proposes and evaluates a learning-based patch-wise segmentation network together with a Consistency Check as a post-processing step. The combination of the learned segmentation and the Consistency Check reaches a high segmentation performance with an average IoU score of 0.924 on the test set. Furthermore, the Consistency Check proves able to significantly reduce false positive segmentations while simultaneously ensuring consistent segmentations.
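
The reported IoU (intersection over union) compares a predicted metal mask with its ground-truth mask. A minimal sketch of this metric for binary masks (an illustration, not the authors' code or their Consistency Check):

```python
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return 1.0 if union == 0 else intersection / union

# Toy example with two 4x4 masks
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(iou_score(a, b))  # 4 / 6 ≈ 0.667
```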

Tristan M. Gottschalk, Andreas Maier, Florian Kordon, Björn W. Kreher
Localization of the Locus Coeruleus in MRI via Coordinate Regression

The locus coeruleus (LC) is a small nucleus in the brain stem. It is attracting increasing interest from the neuroscientific community due to its potentially important role in the pathogenesis of several neurodegenerative diseases such as Alzheimer's disease. In this study, an existing LC segmentation approach has been improved by adding a preceding LC localization step to reduce false positive segments. For the localization, we propose a network that can be trained using coordinate regression and allows insights into its function via attention maps.

Max Dünnwald, Matthew J. Betts, Emrah Düzel, Steffen Oeltze-Jafra
Semantically Guided 3D Abdominal Image Registration with Deep Pyramid Feature Learning

Deformable registration of images with large deformations is still a challenging task. Currently available deep learning methods exceed classical non-learning-based methods primarily in terms of lower computational time. However, these convolutional networks face difficulties when applied to scans with large deformations. We present a semantically guided registration network with deep pyramid feature learning that enables large deformations by transferring features from the images to be registered to the registration networks. Both network parts have U-Net architectures. The networks are trained end-to-end and evaluated on two datasets, both containing contrast-enhanced liver CT images and ground-truth liver segmentations. We compared our method against one classical and two deep learning methods. Our experimental validation shows that the proposed method handles large deformations and achieves the highest Dice score and the smallest liver surface distance compared to the other deep learning methods.

Mona Schumacher, Daniela Frey, In Young Ha, Ragnar Bade, Andreas Genz, Mattias Heinrich
Heatmap-based 2D Landmark Detection with a Varying Number of Landmarks

Mitral valve repair is a surgery to restore the function of the mitral valve. To achieve this, a prosthetic ring is sewn onto the mitral annulus. Analyzing the sutures, which are punctured through the annulus for ring implantation, can be useful for surgical skill assessment, for quantitative surgery and for positioning a virtual prosthetic ring model in the scene via augmented reality. This work presents a neural network approach which detects the sutures in endoscopic images of mitral valve repair and therefore solves a landmark detection problem with a varying number of landmarks, as opposed to most other existing deep learning-based landmark detection approaches. The neural network is trained separately on two data collections from different domains with the same architecture and hyperparameter settings. The datasets consist of more than 1,300 stereo frame pairs each, with a total of over 60,000 annotated landmarks. The proposed heatmap-based neural network achieves a mean positive predictive value (PPV) of 66.68 ± 4.67% and a mean true positive rate (TPR) of 24.45 ± 5.06% on the intraoperative test dataset and a mean PPV of 81.50 ± 5.77% and a mean TPR of 61.60 ± 6.11% on a dataset recorded during surgical simulation. The best detection results are achieved when the camera is positioned above the mitral valve with good illumination. A detection from a sideward view is also possible if the mitral valve is well perceptible.
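
Because the number of sutures varies per image, landmarks are typically read off a predicted heatmap by local peak detection rather than by a fixed-size regression output. A minimal peak-extraction sketch (illustrative only; function name, window size and threshold are assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def heatmap_to_landmarks(heatmap: np.ndarray, threshold: float = 0.5, window: int = 5):
    """Return (row, col) coordinates of local maxima above a confidence threshold."""
    is_local_max = maximum_filter(heatmap, size=window) == heatmap
    peaks = np.argwhere(is_local_max & (heatmap > threshold))
    return [tuple(p) for p in peaks]
```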

Antonia Stern, Lalith Sharan, Gabriele Romano, Sven Koehler, Matthias Karck, Raffaele De Simone, Ivo Wolf, Sandy Engelhardt
Ultrasound-based Navigation of Scaphoid Fracture Surgery

For minimally invasive surgery of the scaphoid, navigation based on ultrasound images instead of fluoroscopy reduces costs and prevents exposure to ionizing radiation. We present a machine learning-based two-stage approach that tackles the tasks of image segmentation and point cloud registration individually. For this, DeepLabv3+ as well as the PRNet architecture were trained on two newly generated datasets. An evaluation on in-vitro data results in an average surface distance error of 1.1 mm and a mean rotational deviation of 6.2° with a processing time of 9 seconds. We conclude that near real-time navigation is feasible.
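
The reported average surface distance can be computed from the registered and reference surfaces with a nearest-neighbour query; a small sketch under the assumption that both surfaces are given as point sets (a generic metric, not the authors' evaluation code):

```python
import numpy as np
from scipy.spatial import cKDTree

def average_surface_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric mean nearest-neighbour distance between two (N, 3) point sets, in input units (e.g. mm)."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # each point in A to its closest point in B
    d_ba, _ = cKDTree(points_a).query(points_b)  # and vice versa
    return 0.5 * (d_ab.mean() + d_ba.mean())
```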

Peter Broessner, Benjamin Hohlmann, Klaus Radermacher
Abstract: 3D Guidance Including Shape Sensing of a Stentgraft System

During endovascular aneurysm repair (EVAR) procedures, medical instruments are guided with two-dimensional (2D) fluoroscopy and conventional digital subtraction angiography. However, this guidance requires X-ray exposure and contrast agent administration, and the depth information is missing. To overcome these drawbacks, a three-dimensional (3D) guidance approach based on tracking systems is introduced and evaluated [1].

Sonja Jäckle, Verónica García-Vázquez, Tim Eixmann, Florian Matysiak, Felix von Haxthausen, Malte Sieren, Hinnerk Schulz-Hildebrandt, Gereon Hüttmann, Floris Ernst, Markus Kleemann, Torben Pätz
Abstract: Move Over There: One-click Deformation Correction for Image Fusion during Endovascular Aortic Repair

Endovascular aortic repair (EVAR) is an X-ray guided procedure for treating aortic aneurysms with the goal to prevent rupture. During this minimally invasive intervention, stent grafts are inserted into the vasculature to support the diseased vessel wall. By overlaying information from preoperative 3-D imaging onto the intraoperative images, radiation exposure, contrast agent volume, and procedure time can be reduced.

Katharina Breininger, Marcus Pfister, Markus Kowarschik, Andreas Maier
Interactive Visualization of Cerebral Blood Flow for Arteriovenous Malformation Embolisation

Arteriovenous malformations in the brain are abnormal connections between cerebral arteries and veins without the capillary system. They might rupture with fatal consequences. Their treatment is highly patient-specific and includes careful analysis of the vessels' configuration. We present an application that visualizes the blood flow after different combinations of blockages of feeder arteries. In order to convey a detailed representation of flow in all regions of the vascular structure, we utilized the visual effect graph of the Unity game engine, which allows displaying several million particles simultaneously. We conducted an informal evaluation with a clinical expert. He rated our application as a beneficial addition to the tools used in clinical practice, since the interactive blockage of arteries provides valuable feedback regarding its influence on the blood flow in the remaining arteries.

Ulrike Sprengel, Patrick Saalfeld, Sarah Mittenentzwei, Moritz Drittel, Belal Neyazi, Philipp Berg, Bernhard Preim, Sylvia Saalfeld
Rotation Invariance for Unsupervised Cell Representation Learning
Analysis of The Impact of Enforcing Rotation Invariance or Equivariance on Representation for Cell Classification

While providing powerful solutions for many problems, deep neural networks require large amounts of training data. In medical image computing, this is a severe limitation, as the required expertise often makes annotation efforts infeasible. This also applies to the automated analysis of hematopoietic cells in bone marrow whole slide images. In this work, we propose approaches to constrain a neural network to learn rotation-invariant or rotation-equivariant representations. Even though the proposed methods achieve this goal, this does not increase classification scores on the representations learned in an unsupervised manner.

Philipp Gräbel, Ina Laube, Martina Crysandt, Reinhild Herwartz, Melanie Baumann, Barbara M. Klinkhammer, Peter Boor, Tim H. Brümmendorf, Dorit Merhof
Abstract: Deep Learning-based Quantification of Pulmonary Hemosiderophages in Cytology Slides

Exercise-induced pulmonary hemorrhage (EIPH) is a common condition in sport horses with negative impact on performance. Cytology of bronchoalveolar lavage fluid by use of a scoring system is considered the most sensitive diagnostic method. Manual grading of macrophages, depending on the degree of cytoplasmic hemosiderin content, on whole slide images (WSI) is however monotonous and time-consuming.

Christian Marzahl, Marc Aubreville, Christof A. Bertram, Jason Stayt, Anne Katherine Jasensky, Florian Bartenschlager, Marco Fragoso, Ann K. Barton, Svenja Elsemann, Samir Jabari, Jens Krauth, Prathmesh Madhu, Jörn Voigt, Jenny Hill, Robert Klopfleisch, Andreas Maier
Learning the Inverse Weighted Radon Transform

X-ray phase-contrast imaging enhances soft-tissue contrast. The measured differential phase signal strength in a Talbot-Lau interferometer is dependent on the object's position within the setup. For large objects, this affects the tomographic reconstruction and leads to artifacts and perturbed phase values. In this paper, we propose a pipeline to learn a filter and additional weights to invert the weighted forward projection. We train and validate the method with a synthetic dataset. We tested our pipeline on the Shepp-Logan phantom, and found that our method suppresses the artifacts and the reconstructed image slices are close to the actual phase values quantitatively and qualitatively. In an ablation study we showed the superiority of our fully optimized pipeline.

Philipp Roser, Lina Felsner, Andreas Maier, Christian Riess
Table Motion Detection in Interventional Coronary Angiography

The most common method for detecting coronary artery stenosis is interventional coronary angiography (ICA). However, 2D angiography has limitations because it displays complex 3D structures of arteries as 2D X-ray projections. To overcome these limitations, 3D models or tomographic images of the arterial tree can be reconstructed from 2D projections. The 3D modeling process of the arterial tree requires accurate acquisition geometry. Since in many ICA acquisitions the patient table is translated to cover the entire area of interest, the original calibrated geometry is no longer valid for the 3D reconstruction process. This study presents methods for identifying the frames acquired during table translation in an angiographic scene. Spatio-temporal methods based on deep learning were used to identify translated frames. Three different architectures were trained and tested: a 3D convolutional neural network (CNN), a bidirectional convolutional long short-term memory (ConvLSTM), and a fusion of bidirectional ConvLSTM and 3D CNN. The combination of ConvLSTM and 3D CNN surpasses the other two methods and achieves a macro F1 score (mean of the F1 scores of the two classes) of 93%.
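
The macro F1 score used here is the unweighted mean of the per-class F1 scores for the "table moving" and "table static" frames; a generic sketch of that evaluation (toy labels, not the study's data or code):

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]  # 1 = frame acquired during table translation (toy labels)
y_pred = [0, 0, 1, 0, 1, 0, 1, 0]
print(f1_score(y_true, y_pred, average="macro"))  # mean of the two per-class F1 scores
```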

Junaid R. Rajput, Karthik Shetty, Andreas Maier, Martin Berger
Semi-permeable Filters for Interior Region of Interest Dose Reduction in X-ray Microscopy

In osteoporosis research, the number and size of lacunae in cortical bone tissue are important characteristics of osteoporosis development. In order to reconstruct lacunae well in X-ray microscopy while protecting bone marrow from high-dose damage in in-vivo experiments, semi-permeable X-ray filters are proposed for dose reduction. Compared with an opaque filter, image quality with a semi-permeable filter is improved remarkably. For image reconstruction, both iterative reconstruction with reweighted total variation (wTV) and FDK reconstruction from penalized weighted least-square (PWLS) processed projections can reconstruct lacunae when the transmission rate of the filter is as small as 5%. However, PWLS is superior in computation efficiency.

Yixing Huang, Leonid Mill, Robert Stoll, Lasse Kling, Oliver Aust, Fabian Wagner, Anika Grüneboom, Georg Schett, Silke Christiansen, Andreas Maier
An Optical Colon Contour Tracking System for Robot-aided Colonoscopy
Localization of a Balloon in an Image using the Hough-transform

During colonoscopy there is a risk that the intestinal wall may be injured or pain may occur due to the insertion of an endoscope. Surgery through endoscopes must be learned by physicians through extensive training. To simplify the insertion of endoscopes, research is being carried out on robot-aided systems. Here, a sensor is needed to detect the contour of the intestine in order to enable an injury-free and painless insertion of the endoscope. In this paper a tube balloon is designed for gentle contour tracking of the intestinal anatomy. It is inserted through the working channel of the endoscope and placed in the intestinal lumen in front of the endoscope's head, in the field of view of the camera. A Matlab algorithm is used to detect the balloon in each image. The balloon appears as a two-dimensional circle, which can be detected using a Hough transform. The displacement of the balloon after touching the intestinal wall is calculated as a vector between the circle's center and the image center. This ensures that the robot-aided endoscope can follow the intestinal contour.
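
The described balloon localization (circle detection plus a displacement vector to the image centre) can be prototyped in a few lines, for example with OpenCV's Hough circle transform; the file name and parameters below are placeholders, not the values or Matlab implementation used in the paper:

```python
import cv2
import numpy as np

frame = cv2.imread("endoscope_frame.png")                      # hypothetical input image
gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                           param1=100, param2=30, minRadius=10, maxRadius=80)
if circles is not None:
    cx, cy, r = circles[0, 0]                                  # strongest circle = balloon
    h, w = gray.shape
    displacement = np.array([cx - w / 2, cy - h / 2])          # vector from image centre to balloon centre
    print("steering vector (px):", displacement)
```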

Giuliano Giacoppo, Anna Tzellou, Joonhwan Kim, Hansoul Kim, Dong-Soo Kwon, Kent W. Stewart, Peter P. Pott
Externe Ventrikeldrainage mittels Augmented Reality und Peer-to-Peer-Navigation

The system presented here combines the new concept of peer-to-peer navigation with the use of augmented reality to support external ventricular drainage performed at the bedside. The very compact and accurate overall system comprises a patient tracker with an integrated camera, augmented reality glasses with a camera, and a puncture needle or pointer with two trackers, which is used to record the patient's anatomy. The exact position and orientation of the puncture needle are computed with the help of the recorded landmarks and displayed to the surgeon on the patient via the augmented reality glasses. The methods for calibrating the static transformations between the patient tracker and its attached camera, and between the trackers of the puncture needle, are crucial for the accuracy and are presented here. The overall system was successfully tested in vitro and confirms the benefit of a peer-to-peer navigation system.

Simon Strzeletz, José Moctezuma, Mukesch Shah, Ulrich Hubbe, Harald Hoppe
Abstract: Contour-based Bone Axis Detection for X-ray-guided Surgery on the Knee

The anatomical axis of long bones is an important reference line for guiding fracture reduction and assisting in the correct placement of guide pins, screws, and implants in orthopedics and trauma surgery. While planning such axes can be easily done on pre-operative static data, doing so consistently on live images during surgery is inherently more complex due to motion and a limited field of view. In addition, non-sterile interaction with a planning software is unwanted.

Florian Kordon, Andreas Maier, Benedict Swartman, Maxim Privalov, Jan Siad El Barbari, Holger Kunze
Novel Evaluation Metrics for Vascular Structure Segmentation

For the diagnosis of eye-related diseases, segmentation of the retinal vessels and analysis of their tortuosity, completeness, and thickness are fundamental steps. The assessment of the quality of the retinal vessel segmentation therefore plays a crucial role. Conventionally, different evaluation metrics for retinal vessel segmentation have been proposed, most of them based on pixel matching. Recently, a novel non-global measure has been introduced. It focuses on the skeletal similarity between vessel segments rather than the pixel-wise overlap and redefines the terms of the confusion matrix. In our work, we re-implement this evaluation algorithm and discover design flaws in it. We therefore propose modifications to the metric. The basic structure of the algorithm, which combines thickness and curve similarity, is preserved, while the calculation of the curve similarity is modified and extended. Furthermore, our modifications enable us to apply the evaluation metric to three-dimensional data. We show that, compared to conventional pixel matching-based metrics, our proposed metric is more representative for cases where vessels are missing, disoriented, or inconsistent in their thickness.
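
A simplified illustration of the general idea of skeleton-based vessel evaluation (a generic sketch, not the modified metric proposed in the paper): skeletonize one segmentation and count how many of its skeleton points lie within a small tolerance of the other segmentation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def skeleton_recall(pred: np.ndarray, gt: np.ndarray, tol: float = 2.0) -> float:
    """Fraction of ground-truth skeleton points lying within `tol` pixels of the predicted vessels."""
    gt_skel = skeletonize(gt.astype(bool))
    dist_to_pred = distance_transform_edt(~pred.astype(bool))  # distance of every pixel to the prediction
    hits = dist_to_pred[gt_skel] <= tol
    return hits.mean() if gt_skel.any() else 1.0
```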

Marcel Reimann, Weilin Fu, Andreas Maier
A Machine Learning Approach Towards Fatty Liver Disease Detection in Liver Ultrasound Images

Fatty liver disease (FLD) is one of the prominent diseases that affect the normal functionality of the liver by building vacuoles of fat in the liver cells. FLD is an indicator of an imbalance in the metabolic system and can cause cardiovascular diseases, liver inflammation, cirrhosis and, furthermore, neoplasms. Detection and specification of FLD are therefore beneficial.

Adarsh Kuzhipathalil, Anto Thomas, Keerthana Chand, Elmer Jeto Gomes Ataide, Alexander Link, Annika Niemann, Sylvia Saalfeld, Michael Friebe, Jens Ziegle
Automated Deep Learning-based Segmentation of Brain, SEEG and DBS Electrodes on CT Images

Stereoelectroencephalography (sEEG) and deep brain stimulation (DBS) are effective surgical diagnostic and therapeutic procedures that involve the implantation of depth electrodes in the brain. The benefit and outcome of these procedures directly depend on the electrode placement. Our goal was to accurately segment and visualize the electrode positions after sEEG and DBS procedures. We trained a deep learning network to automatically segment electrode trajectories and brain tissue from postsurgical CT images. We used 90 head CT scans that include intracerebral electrodes and their corresponding segmentation masks to train, validate and test the model. Mean accuracy and Dice score in 5-fold cross-validation for the 3D-cascade U-Net model were 0.99 and 0.92, respectively. When the network was tested on an unseen test set, the Dice overlap with the manual segmentations was 0.89. In this paper, we present a deep learning approach for automatic patient-specific delineation of the brain and the sEEG and DBS electrodes from CT images of varying quality. This robust method can report the postsurgical electrode positions quickly and accurately. Moreover, it is useful as an input for neurosurgical and neuroscientific toolboxes and frameworks.

Vanja Vlasov, Marie Bofferding, Loïc Marx, Chencheng Zhang, Jorge Goncalves, Andreas Husch, Frank Hertel
Segmentation of the Fascia Lata in Magnetic Resonance Images of the Thigh
Comparison of an Unsupervised Technique with a U-Net in 2D and Patch-wise 3D

To quantify muscle properties in the thigh, segmentation of the fascia lata is crucial. For this purpose, the U-Net architecture was implemented and compared for 2D images and patched 3D image stacks of magnetic resonance images (MRI). The training data consisted of T1 MRI data sets from elderly men. To test the performance of the models, they were applied to other data sets of different age groups and gender. The U-Net approaches were superior to an unsupervised semi-automatic method and reduced post-processing time.

Lis J. Louise P, Klaus Engelke, Oliver Chaudry
Abstract: Automatic CAD-RADS Scoring using Deep Learning

Coronary CT angiography (CCTA) has established its role as a noninvasive modality for the diagnosis of coronary artery disease (CAD). The CAD-Reporting and Data System (CAD-RADS) has been developed to standardize communication and aid in decision making based on CCTA findings. The CAD-RADS score is determined by manual assessment of all coronary vessels and the grading of lesions within the coronary artery tree.

Felix Denzinger, Michael Wels, Katharina Breininger, Mehmet A. Gülsün, Max Schöbinger, Florian André, Sebastian Buß, Johannes Görich, Michael Sühling, Andreas Maier
Towards Deep Learning-based Wall Shear Stress Prediction for Intracranial Aneurysms

This work aims at a deep learning-based prediction of wall shear stresses (WSS) for intracranial aneurysms. Based on real patient cases, we created artificial surface models of bifurcation aneurysms. After simulation and WSS extraction, these models were used for training a deep neural network. The trained neural network for 3D mesh segmentation was able to predict areas of high wall shear stress.

Annika Niemann, Lisa Schneider, Bernhard Preim, Samuel Voß, Philipp Berg, Sylvia Saalfeld
Evaluating Design Choices for Deep Learning Registration Networks
Architecture Matters

The variety of recently proposed deep learning models for deformable pairwise image registration raises the question of how beneficial certain architectural design considerations are for registration performance. This paper takes a closer look at the impact of some basic network design choices, i.e. the number of feature channels, the number of convolutions per resolution level, and the difference between partially independent processing streams for fixed and moving images versus direct concatenation of the input scans. Starting from a simple single-stream U-Net architecture, we investigate extensions and modifications and propose a model for 3D abdominal CT registration, evaluated on data from the Learn2Reg challenge, that outperforms the baseline network VoxelMorph used for comparison.

Hanna Siebert, Lasse Hansen, Mattias P. Heinrich
Learning the Update Operator for 2D/3D Image Registration

Image guidance in minimally invasive interventions is usually provided using live 2D X-ray imaging. To enhance the information available during the intervention, the preoperative volume can be overlaid over the 2D images using 2D/3D image registration. Recently, deep learning-based 2D/3D registration methods have shown promising results by improving computational efficiency and robustness. However, there is still a gap in terms of registration accuracy compared to traditional optimization-based methods. We aim to address this gap by incorporating traditional methods in deep neural networks using known operator learning. As an initial step in this direction, we propose to learn the update step of an iterative 2D/3D registration framework based on the Point-to-Plane Correspondence model. We embed the Point-to-Plane Correspondence model as a known operator in our deep neural network and learn the update step for the iterative registration. We show an improvement of 1.8 times in terms of registration accuracy for the update step prediction compared to learning without the known operator.

Srikrishna Jaganathan, Jian Wang, Anja Borsdorf, Andreas Maier
Abstract: Generation of Annotated Brain Tumor MRIs with Tumor-induced Tissue Deformations for Training and Assessment of Neural Networks

Machine learning methods, especially neural networks, have proven to excel at many image processing and analysis methods in the medical image domain. Yet, their success strongly relies on the availability of large training data sets with high quality ground truth annotations, e.g. expert segmentation of anatomical/pathological structures. Therefore, generating realistic synthetic data with ground truth labels has become crucial to boost the performance of neural networks.

Hristina Uzunova, Jan Ehrhardt, Heinz Handels
Abstract: Multi-camera, Multi-person, and Real-time Fall Detection using Long Short Term Memory

Falls occurring at home are a high risk for elderly people living alone. Several sensor-based methods for detecting falls exist, most of which use wearables or ambient sensors. Video-based fall detection is emerging. However, the restricted view of a single camera, distinguishing and tracking of persons, as well as high false-positive rates pose limitations.

Christian Heinrich, Samad Koita, Mohammad Taufeeque, Nicolai Spicher, Thomas M. Deserno
Abstract: Probabilistic Dense Displacement Networks for Medical Image Registration
Contributions to the Learn2Reg Challenge

Medical image registration plays a vital role in various clinical workflows, diagnosis, research studies and computer-assisted interventions. Currently, deep learning based registration methods are starting to show promising improvements that could advance the accuracy, robustness and computation speed of conventional algorithms. However, until recently there was no commonly used benchmark dataset available to compare learning approaches with each other and their conventional (not trained) counterparts.

Lasse Hansen, Mattias P. Heinrich
Abstract: Joint Imaging Platform for Federated Clinical Data Analytics

Image analysis is one of the most promising applications of artificial intelligence (AI) in healthcare, potentially improving prediction, diagnosis and treatment of diseases. While scientific advances in this area critically depend on the accessibility of large-volume and high-quality data, sharing data between institutions faces various ethical and legal constraints as well as organizational and technical obstacles. The Joint Imaging Platform (JIP) of the German Cancer Consortium (DKTK) addresses these issues by providing federated data analysis technology in a secure and compliant way [1].

Jonas Scherer, Marco Nolden, Jens Kleesiek, Jasmin Metzger, Klaus Kades, Verena Schneider, Hanno Gao, Peter Neher, Ralf Floca, Heinz-Peter Schlemmer, Klaus Maier-Hein
Towards Mouse Bone X-ray Microscopy Scan Simulation

Osteoporosis occurs when the body loses too much bone mass and the bones become brittle and fragile. In the aging society of Europe, the number of people with osteoporosis is continuously growing. The disease not only severely impairs the quality of life of the patients, but also places a great burden on the healthcare system. To investigate the disease mechanism and the metabolism of the bones, X-ray microscopy scans of the mouse tibia are taken. As a fundamental step, the microstructures, such as the lacunae and vessels of the bones, need to be segmented and analyzed. With recent advances in deep learning, segmentation networks with good performance have been proposed. However, these supervised deep nets are not directly applicable to the segmentation of these micro-structures, since manual annotations are not feasible due to the enormous data size. In this work, we propose a pipeline to model the mouse bone micro-structures. Our workflow integrates conventional algorithms with 3D modeling using Blender, and focuses on the anatomical micro-structures rather than the intensity distributions of the mouse bone scans. It provides the basis for generating simulated mouse bone X-ray microscopy images, which could be used as ground truth for training segmentation neural networks.

Weilin Fu, Leonid Mill, Stephan Seitz, Tobias Geimer, Lasse Kling, Dennis Possart, Silke Christiansen, Andreas Maier
Dataset on Bi- and Multi-nucleated Tumor Cells in Canine Cutaneous Mast Cell Tumors

Tumor cells with two nuclei (binucleated cells, BiNC) or more nuclei (multinucleated cells, MuNC) indicate an increased amount of cellular genetic material, which is thought to facilitate oncogenesis, tumor progression and treatment resistance. In canine cutaneous mast cell tumors (ccMCT), binucleation and multinucleation are parameters used in cytologic and histologic grading schemes (respectively) which correlate with poor patient outcome. For this study, we created the first open-source dataset with 19,983 annotations of BiNC and 1,416 annotations of MuNC in 32 histological whole slide images of ccMCT. Labels were created by a pathologist and an algorithm-aided labeling approach with expert review of each generated candidate. A state-of-the-art deep learning-based model yielded an F1 score of 0.675 for BiNC and 0.623 for MuNC on 11 test whole slide images. In regions of interest (2.37 mm²) extracted from these test images, 6 pathologists had an object detection performance between 0.270 and 0.526 for BiNC and between 0.316 and 0.622 for MuNC, while our model achieved an F1 score of 0.667 for BiNC and 0.685 for MuNC. This open dataset can facilitate the development of automated image analysis for this task and may thereby help to promote standardization of this facet of histologic tumor prognostication.

Christof A. Bertram, Taryn A. Donovan, Marco Tecilla, Florian Bartenschlager, Marco Fragoso, Frauke Wilm, Christian Marzahl, Katharina Breininger, Andreas Maier, Robert Klopfleisch, Marc Aubreville
Abstract: Data Augmentation for Information Transfer
Why Controlling for Confounding Effects in Radiomic Studies is Important and How to do it

The major goal of radiomics studies is the identification of predictive and reliable markers. It is therefore crucial to account for unwanted confounding effects that affect the radiomic features, such as scanning noise, annotator bias, or the imaging device and parameters used. Usually, these confounding effects are not sufficiently represented in the main cohort of radiomics studies and are consequently investigated in smaller side studies.

Michael Götz, Klaus Maier-Hein
Reduction of Stain Variability in Bone Marrow Microscopy Images
Influence of Augmentation and Normalization Methods on Detection and Classification of Hematopoietic Cells

The analysis of cells in bone marrow microscopy images is essential for the diagnosis of many hematopoietic diseases such as leukemia. Automating detection, classification and quantification of different types of leukocytes in whole slide images could improve throughput and reliability. However, variations in the staining agent used to highlight cell features can reduce the accuracy of these methods. In histopathology, data augmentation and normalization techniques are used to make neural networks more robust but their application to hematological image data needs to be investigated. In this paper, we compare six promising approaches on three image sets with different staining characteristics in terms of detection and classification.

Philipp Gräbel, Martina Crysandt, Reinhild Herwartz, Melanie Baumann, Barbara M. Klinkhammer, Peter Boor, Tim H. Brümmendorf, Dorit Merhof
Cell Detection for Asthma on Partially Annotated Whole Slide Images
Learning to be EXACT

Asthma is a chronic inflammatory disorder of the lower respiratory tract and naturally occurs in humans and animals including horses. The annotation of an asthma microscopy whole slide image (WSI) is an extremely labour-intensive task due to the hundreds of thousands of cells per WSI. To overcome the limitation of incompletely annotated WSIs, we developed a training pipeline which can train a deep learning-based object detection model with partially annotated WSIs and compensate class imbalances on the fly. With this approach we can freely sample areas from annotated WSIs and are not restricted to fully annotated extracted sub-images of the WSI as with classical approaches. We evaluated our pipeline in a cross-validation setup with a fixed training set using a dataset of six equine WSIs, of which four are partially annotated and used for training, while two fully annotated WSIs are used for validation and testing. Our WSI-based training approach outperformed classical sub-image-based training methods by up to 15% mAP and yielded human-like performance when compared to the annotations of ten trained pathologists.

Christian Marzahl, Christof A. Bertram, Frauke Wilm, Jörn Voigt, Ann K. Barton, Robert Klopfleisch, Katharina Breininger, Andreas Maier, Marc Aubreville
Combining Reconstruction and Edge Detection in Computed Tomography

We present two methods that combine image reconstruction and edge detection in computed tomography (CT) scans. Our first method is an extension of the prominent filtered backprojection algorithm. In our second method we employ $$\ell^1$$-regularization for stable calculation of the gradient. As opposed to the first method, we show that this approach is able to compensate for undersampled CT data.
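
The ℓ1-regularized gradient computation can be thought of as a variational problem of the following generic form, with $R$ the Radon transform, $g$ the measured data and $\lambda$ a regularization weight (a sketch of a standard ℓ1/TV-type formulation; the exact functional used in the paper may differ):

$$ \min_{f} \; \tfrac{1}{2}\,\| R f - g \|_2^2 \; + \; \lambda \, \| \nabla f \|_1 $$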

Jürgen Frikel, Simon Göppel, Markus Haltmeier
2D Respiration Navigation Framework for 3D Continuous Cardiac Magnetic Resonance Imaging

Continuous protocols for cardiac magnetic resonance imaging enable sampling of the cardiac anatomy simultaneously resolved into cardiac phases. To avoid respiration artifacts, associated motion during the scan has to be compensated for during reconstruction. In this paper, we propose a sampling adaption to acquire 2D respiration information during a continuous scan. Further, we develop a pipeline to extract the different respiration states from the acquired signals, which are used to reconstruct data from one respiration phase. Our results show the benefit of the proposed workflow on the image quality compared to no respiration compensation, as well as a previous 1D respiration navigation approach.

Elisabeth Hoppe, Jens Wetzl, Philipp Roser, Lina Felsner, Alexander Preuhs, Andreas Maier
Residual Neural Network for Filter Kernel Design in Filtered Back-projection for CT Image Reconstruction

Filtered back-projection (FBP) has been widely applied for computed tomography (CT) image reconstruction as a fundamental algorithm. Most of the filter kernels used in FBP are designed by analytic methods. Recently, the precision learning-based ramp filter (PL-Ramp) has been proposed to reformulate FBP so that the reconstruction filter is learned directly. However, it is difficult to introduce regularization terms in this method, which essentially provides a massive solution space. Therefore, in this paper, we propose a neural network based on residual learning for filter kernel design in FBP, named resFBP. With such a neural network, it is possible to limit the solution space by introducing various regularization terms or methods to achieve better reconstruction quality on the test set. The experimental results demonstrate that the proposed method is clearly superior to FBP in terms of both quality and reconstruction error, and also outperforms PL-Ramp when projection data are polluted by Poisson or Gaussian noise.
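
Conceptually, a residual filter design keeps the analytic ramp filter fixed and learns only a correction on top of it. A minimal PyTorch-style sketch of such a Fourier-domain filter layer (an illustration of the idea, not the architecture described in the paper):

```python
import torch
import torch.nn as nn

class ResidualFilter(nn.Module):
    """Apply (ramp + learnable residual) filtering to each detector row of a sinogram."""
    def __init__(self, n_detectors: int):
        super().__init__()
        freqs = torch.fft.fftfreq(n_detectors)
        self.register_buffer("ramp", torch.abs(freqs))           # fixed analytic ramp filter
        self.residual = nn.Parameter(torch.zeros(n_detectors))   # learned correction, starts at zero

    def forward(self, sinogram: torch.Tensor) -> torch.Tensor:
        # sinogram: (batch, n_angles, n_detectors)
        spectrum = torch.fft.fft(sinogram, dim=-1)
        filtered = spectrum * (self.ramp + self.residual)
        return torch.fft.ifft(filtered, dim=-1).real
```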

Jintian Xu, Chengjin Sun, Yixing Huang, Xiaolin Huang
Abstract: Automatic Plane Adjustment in Surgical Cone Beam CT-volumes

Cone beam computed tomography (CBCT) is used intra-operatively to assess the result of surgery. Due to limitations of patient positioning and the operating theater in general, the acquisition usually cannot be performed such that the axis-aligned multiplanar reconstructions (MPR) of the volume match the anatomically oriented MPRs. This needs to be corrected manually, which is a time-consuming and complex task and requires the surgeon to interact with non-sterile equipment.

Celia Martín Vicario, Florian Kordon, Felix Denzinger, Markus Weiten, Sarina Thomas, Lisa Kausch, Jochen Franke, Holger Keil, Andreas Maier, Holger Kunze
Abstract: Towards Automatic C-arm Positioning for Standard Projections in Orthopedic Surgery

Guidance and quality control in orthopedic surgery increasingly relies on intra-operative fluoroscopy using a mobile C-arm. The accurate acquisition of standardized and anatomy-specific projections is essential in this process. The corresponding iterative positioning of the C-arm is error-prone and involves repeated manual acquisitions or even continuous fluoroscopy.

Lisa Kausch, Sarina Thomas, Holger Kunze, Maxim Privalov, Sven Vetter, Jochen Franke, Andreas H. Mahnken, Lena Maier-Hein, Klaus Maier-Hein
Open-Science Gefäßphantom für neurovaskuläre Interventionen

Computer assistance systems could support physicians during neurovascular interventions with existing difficulties, such as the positioning of the inserted instruments, by displaying their location within the patient's vascular tree. To evaluate such systems, we present a vascular phantom of the brain-supplying arteries for the simulation of neurovascular interventions. The phantom was created by segmenting the vascular tree from a computed tomography angiography (CTA) scan of a patient, post-processed, and subsequently manufactured as a flexible 3D print. The methodology for creating the phantom is described so that, in the spirit of open science, individual phantoms can be manufactured independently. Based on a CTA of the manufactured phantom, a close agreement with the patient's vascular tree was demonstrated. The usability of the phantom was further examined by inserting neurovascular instruments. In addition, the phantom was successfully used in an example application of computer assistance systems. All files relevant to the manufacture of the phantom, together with the experimental results, have been made available at https://osf.io/yg95d/.

Lena Stevanovic, Benjamin J. Mittmann, Florian Pfiz, Michael Braun, Bernd Schmitz, Alfred M. Franz
Abstract: Semi-supervised Segmentation Based on Error-correcting Supervision

Pixel-level classification is an essential part of computer vision. For learning from labeled data, many powerful deep learning models have been developed recently. In this work, we augment such supervised segmentation models by allowing them to learn from unlabeled data. Our semi-supervised approach, termed Error-Correcting Supervision, leverages a collaborative strategy. Apart from the supervised training on the labeled data, the segmentation network is judged by an additional network.

Robert Mendel, Luis Antonio de Souza Jr, David Rauber, João Paulo Papa, Christoph Palm
Abstract: Efficient Biomedical Image Segmentation on EdgeTPUs

The U-Net architecture [1] is a state-of-the-art neural network for semantic image segmentation that is widely used in biomedical research. It is based on an encoder-decoder framework and its vanilla version already shows high performance in terms of segmentation quality. Due to its large parameter space, however, it has high computational costs on both CPUs and GPUs. In a research setting, inference time is relevant, but not crucial for the results.

Andreas M. Kist, Michael Döllinger
Human Axon Radii Estimation at MRI Scale
Deep Learning Combined with Large-scale Light Microscopy

Non-invasive assessment of axon radii via MRI is of increasing interest in human brain research. Its validation requires representative reference data that covers the spatial extent of an MRI voxel (e.g., 1 mm²). Due to its small field of view, the commonly used manually labeled electron microscopy (mlEM) cannot representatively capture sparsely occurring, large axons, which are the main contributors to the effective mean axon radius (reff) measured with MRI. To overcome this limitation, we investigated the feasibility of generating representative reference data from large-scale light microscopy (lsLM) using automated segmentation methods including a convolutional neural network (CNN). We determined large, mis-/undetected axons as the main error source for the estimation of reff (≈ 10%). Our results suggest that the proposed pipeline can be used to generate reference data for the MRI-visible reff and even bears the potential to map spatial, anatomical variation of reff.

Laurin Mordhorst, Maria Morozova, Sebastian Papazoglou, Björn Fricke, Jan M. Oeschger, Henriette Rusch, Carsten Jäger, Markus Morawski, Nikolaus Weiskopf, Siawoosh Mohammadi
Age Estimation on Panoramic Dental X-ray Images using Deep Learning

Dental panoramic X-ray images provide important information about an adolescent's age because the sequential development process of teeth is one of the longest in the human body. Such dental panoramic projections can be used to assess the age of a person. However, the existing manual methods for age estimation suffer from a low accuracy rate. In this study, we propose a supervised regression-based deep learning method for automatic age estimation of adolescents aged 11 to 20 years to reduce this estimation error. To evaluate the model performance, we used a new dental panoramic X-ray data set with 14,000 images of patients in the considered age range. In an early investigation, our proposed method achieved a mean absolute error (MAE) of 1.08 years and error-rate (ER) of 17.52% on the test data set, which clearly outperformed the dental experts' estimation.

Sarah Wallraff, Sulaiman Vesal, Christopher Syben, Rainer Lutz, Andreas Maier
Multi-modal Unsupervised Domain Adaptation for Deformable Registration Based on Maximum Classifier Discrepancy

The scarce availability of labeled data makes multi-modal domain adaptation an interesting approach in medical image analysis. Deep learning-based registration methods, however, still struggle to outperform their non-trained counterparts. Supervised domain adaptation also requires labeled or other ground-truth data. Hence, unsupervised domain adaptation is a valuable goal, which has so far mainly shown success in classification tasks. We are the first to report unsupervised domain adaptation for discrete displacement registration using classifier discrepancy in medical imaging. We train our model with mono-modal registration supervision. For cross-modal registration no supervision is required; instead we use the discrepancy between two classifiers as training loss. We also present a new projected Earth Mover's distance (EMD) for measuring classifier discrepancy. By projecting the 2D distributions to 1D histograms, the EMD L1 distance can be computed using their cumulative sums.
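
For 1D histograms of equal mass, the Earth Mover's distance with an L1 ground metric reduces to the L1 distance between the cumulative sums, which is the reduction referred to above; a small numpy sketch of that computation (illustrative, not the authors' code):

```python
import numpy as np

def emd_1d(hist_p: np.ndarray, hist_q: np.ndarray) -> float:
    """EMD with L1 ground metric between two 1D histograms of equal total mass."""
    return np.abs(np.cumsum(hist_p) - np.cumsum(hist_q)).sum()

p = np.array([0.2, 0.5, 0.3, 0.0])
q = np.array([0.0, 0.3, 0.4, 0.3])
print(emd_1d(p, q))  # 0.2 + 0.4 + 0.3 + 0.0 = 0.9
```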

Christian N. Kruse, Lasse Hansen, Mattias P. Heinrich
Abstract: A Completely Annotated Whole Slide Image Dataset of Canine Breast Cancer to Aid Human Breast Cancer Research

Canine mammary carcinoma (CMC) has been used as a model to investigate the tumorigenesis of human breast cancer and the same histological grading scheme is commonly used to estimate patient outcome for both. One key component of this grading scheme is the density of cells undergoing cell division (mitotic figures, MF). Current publicly available datasets on human breast cancer only provide annotations for small subsets of whole slide images (WSIs).

Marc Aubreville, Christof A. Bertram, Taryn A. Donovan, Christian Marzahl, Andreas Maier, Robert Klopfleisch
Acquisition Parameter-conditioned Magnetic Resonance Image-to-image Translation

A Magnetic Resonance Imaging (MRI) exam typically consists of several sequences that yield different image contrasts. Each sequence is parameterized through a multitude of acquisition parameters that influence image contrast, signal-to-noise ratio, scan time and/or resolution. Depending on the clinical indication, different contrasts are required by the radiologist to make a diagnosis. As the acquisition of MR sequences is time consuming, and acquired images may be corrupted due to motion, a method to synthesize MR images with fine-tuned contrast settings is required. We therefore trained an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition and echo time. Our approach is able to synthesize missing MR images with adjustable MR image contrast and yields a mean absolute error of 0.05, a peak signal-to-noise ratio of 23.23 dB and structural similarity of 0.78.

Jonas Denck, Jens Guehring, Andreas Maier, Eva Rothgang
Fine-tuning Generative Adversarial Networks using Metaheuristics
A Case Study on Barrett's Esophagus Identification

Barrett's esophagus denotes a disorder of the digestive system that affects the esophagus' mucosal cells, causing reflux, and potentially progressing to esophageal adenocarcinoma if not treated in its initial stages. Thus, fast and reliable computer-aided diagnosis is highly desirable. Nevertheless, such approaches usually suffer from imbalanced datasets, which can be addressed through Generative Adversarial Networks (GANs). Such techniques generate realistic images based on observed samples, although at the cost of a proper selection of their hyperparameters. Many works have employed a class of nature-inspired algorithms called metaheuristics to tackle this problem considering distinct deep learning approaches. Therefore, this paper's main contribution is to introduce metaheuristic techniques to fine-tune GANs in the context of Barrett's esophagus identification, as well as to investigate the feasibility of generating high-quality synthetic images for early-cancer assisted identification.

Luis A. Souza, Leandro A. Passos, Robert Mendel, Alanna Ebigbo, Andreas Probst, Helmut Messmann, Christoph Palm, João Paulo Papa
Neural Networks with Fixed Binary Random Projections Improve Accuracy in Classifying Noisy Data

The trend of Artificial Neural Networks becoming "bigger" and "deeper" persists. Training these networks using back-propagation is considered biologically implausible and a time-consuming task. Hence, we investigate how far we can go with fixed binary random projections (BRPs), an approach which reduces the number of trainable parameters using localized receptive fields and binary weights. Evaluating this approach on the MNIST dataset, we discovered that, contrary to models with fully trained dense weights, models using fixed localized sparse BRPs yield equally good performance in terms of accuracy, saving 98% of the computations when generating the hidden representation for the input. Furthermore, we discovered that using BRPs leads to a more robust performance, up to 56% better compared to dense models, in terms of classifying noisy inputs.
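
A fixed binary random projection can be sketched as follows (a toy illustration of the general idea, not the localized receptive-field variant studied in the paper): the projection matrix is drawn once with binary entries and never trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_brp(n_in: int, n_hidden: int, density: float = 0.1) -> np.ndarray:
    """Fixed binary (0/1) random projection with sparse connectivity."""
    return (rng.random((n_in, n_hidden)) < density).astype(np.float32)

W = make_brp(784, 2000)              # e.g. MNIST-sized input; W stays frozen after creation
x = rng.random((32, 784)).astype(np.float32)
hidden = np.maximum(x @ W, 0.0)      # 0/1 weights: the projection reduces to summing selected inputs
# Only a classifier on top of `hidden` would be trained; W itself has no trainable parameters.
```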

Zijin Yang, Achim Schilling, Andreas Maier, Patrick Krauss
M3d-CAM
A PyTorch Library to Generate 3D Attention Maps for Medical Deep Learning

Deep learning models achieve state-of-the-art results in a wide array of medical imaging problems. Yet the lack of interpretability of deep neural networks is a primary concern for medical practitioners and poses a considerable barrier to the deployment of such models in clinical practice. Several techniques have been developed for visualizing the decision process of DNNs. However, few implementations are openly available for the popular PyTorch library, and existing implementations are often limited to two-dimensional data and classification models. We present M3d-CAM, an easy-to-use library for generating attention maps of CNN-based PyTorch models for both 2D and 3D data, applicable to both classification and segmentation models. The attention maps can be generated with multiple methods: Guided Backpropagation, Grad-CAM, Guided Grad-CAM and Grad-CAM++. The maps visualize the regions in the input data that most heavily influence the model prediction at a certain layer. Only a single line of code is sufficient for generating attention maps for a model, making M3d-CAM a plug-and-play solution that requires minimal previous knowledge.

Karol Gotkowski, Camila Gonzalez, Andreas Bucher, Anirban Mukhopadhyay
Coronary Plaque Analysis for CT Angiography Clinical Research

The analysis of plaque deposits in the coronary vasculature is an important topic in current clinical research. From the technical side, mostly new algorithms for different sub-tasks, e.g. centerline extraction or vessel/plaque segmentation, are proposed. However, to enable clinical research with the help of these algorithms, a software solution is needed which enables manual correction, comprehensive visual feedback and tissue analysis capabilities. Therefore, we present such an integrated software solution. A MeVisLab-based implementation of our solution is available as part of the Siemens Healthineers syngo.via Frontier and OpenApps research extension. It is able to perform robust automatic centerline extraction and inner and outer vessel wall segmentation, while providing easy-to-use manual correction tools. It also allows for annotation of lesions along the centerlines, which can be further analyzed regarding their tissue composition. Furthermore, it enables research on upcoming technologies and research directions: it supports dual-energy CT scans with dedicated plaque analysis and the quantification of the fatty tissue surrounding the vasculature, also in automated set-ups.

Felix Denzinger, Michael Wels, Christian Hopfgartner, Jing Lu, Max Schöbinger, Andreas Maier, Michael Sühling
Robust Slide Cartography in Colon Cancer Histology
Evaluation on a Multi-scanner Database

Robustness against variations in color and resolution of digitized whole-slide images (WSIs) is an essential requirement for any computer-aided analysis in digital pathology. One common approach to counter a lack of heterogeneity in the training data is data augmentation. We investigate the impact of different augmentation techniques for whole-slide cartography in colon cancer histology using a newly created multi-scanner database of 39 slides, each digitized with six different scanners. A state-of-the-art convolutional neural network (CNN) is trained to differentiate seven tissue classes. Applying a model trained on one scanner to WSIs acquired with a different scanner results in a significant decrease in classification accuracy. Our results show that the impact of resolution variations is smaller than that of color variations: the accuracy of the baseline model trained without any augmentation is 73% for WSIs with similar color but different resolution, compared to 35% for WSIs with similar resolution but color deviations. The grayscale model shows comparatively robust results and evades the problem of color variation. A combination of multiple color augmentation methods leads to a significant overall improvement (between 33 and 54 percentage points). Moreover, fine-tuning a pre-trained network using a small amount of annotated data from new scanners benefits the performance for these particular scanners, but this effect does not generalize to other unseen scanners.
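
Color augmentation of histology patches of the kind compared here can be set up with standard tooling, for example torchvision transforms; the sketch below is generic, and the specific augmentation methods and parameter ranges used in the study differ:

```python
from torchvision import transforms

color_augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05),
    transforms.RandomGrayscale(p=0.1),   # occasionally drop color information entirely
    transforms.ToTensor(),
])
# patch_tensor = color_augment(pil_patch)  # applied on the fly during training
```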

Petr Kuritcyn, Carol I. Geppert, Markus Eckstein, Arndt Hartmann, Thomas Wittenberg, Jakob Dexl, Serop Baghdadlian, David Hartmann, Dominik Perrin, Volker Bruns, Michaela Benz
Digital Staining of Mitochondria in Label-free Live-cell Microscopy

Examining specific sub-cellular structures while minimizing cell perturbation is important in the life sciences. Fluorescence labeling and imaging is widely used. With the advancement of deep learning, digital staining routines for label-free analysis have emerged to replace fluorescence imaging. Nonetheless, digital staining of sub-cellular structures such as mitochondria is sub-optimal. This is because models designed for computer vision are applied directly instead of being optimized for microscopy data. We propose a new loss function with multiple thresholding steps to promote more effective learning for microscopy data. We demonstrate a deep learning approach to translate label-free brightfield images of living cells into equivalent fluorescence images of mitochondria with an average structural similarity of 0.77, thus surpassing the state of the art of 0.7 obtained with an L1 loss. We provide insightful examples of unique opportunities offered by data-driven deep learning-enabled image translations.

Ayush Somani, Arif Ahmed Sekh, Ida S. Opstad, Åsa Birna Birgisdottir, Truls Myrmel, Balpreet Singh Ahluwalia, Krishna Agarwal, Dilip K. Prasad, Alexander Horsch
Influence of Inter-Annotator Variability on Automatic Mitotic Figure Assessment

Density of mitotic figures in histologic sections is a prognostically relevant characteristic for many tumours. Due to high interpathologist variability, deep learning-based algorithms are a promising solution to improve tumour prognostication. Pathologists are the gold standard for database development, however, labelling errors may hamper development of accurate algorithms. In the present work we evaluated the benefit of multi-expert consensus (n = 3, 5, 7, 9, 11) on algorithmic performance. While training with individual databases resulted in highly variable F1 scores, performance was notably increased and more consistent when using the consensus of three annotators. Adding more annotators only resulted in minor improvements. We conclude that databases by few pathologists and high label precision may be the best compromise between high algorithmic performance and time investment.

Frauke Wilm, Christof A. Bertram, Christian Marzahl, Alexander Bartel, Taryn A. Donovan, Charles-Antoine Assenmacher, Kathrin Becker, Mark Bennett, Sarah Corner, Brieuc Cossic, Daniela Denk, Martina Dettwiler, Beatriz Garcia Gonzalez, Corinne Gurtner, Annabelle Heier, Annika Lehmbecker, Sophie Merz, Stephanie Plog, Anja Schmidt, Franziska Sebastian, Rebecca C. Smedley, Marco Tecilla, Tuddow Thaiwong, Katharina Breininger, Matti Kiupel, Andreas Maier, Robert Klopfleisch, Marc Aubreville
Automatic Vessel Segmentation and Aneurysm Detection Pipeline for Numerical Fluid Analysis

Computational fluid dynamics calculations are a great assistance for rupture prediction of cerebral aneurysms. This procedure requires a consistent surface, as well as a separation of the blood vessel and aneurysm on this surface, to calculate rupture-relevant scores. For this purpose we present an automatic pipeline which generates a surface model of the vascular tree from angiographies using marker-based watershed segmentation and label post-processing. Aneurysms on the surface model are then detected and segmented using shape-based graph cuts along with anisotropic diffusion and an iterative Support Vector Machine based classification. Aneurysms are correctly detected and segmented in 33 out of 35 test cases. Simulation-relevant vessels are successfully segmented without vessel merging in 131 out of 144 test cases, achieving an average Dice coefficient of 0.901.

Johannes Felde, Thomas Wagner, Hans Lamecker, Christian Doenitz, Lina Gundelwein
Abstract: Widening the Focus
Biomedical Image Segmentation Challenges and the Underestimated Role of Patch Sampling and Inference Strategies

The field of biomedical computer vision has been considerably influenced by image analysis challenges, which are mainly dominated by deep learning-based approaches. Much effort is put into challenge-specific optimization of model design, training schemes and data augmentation techniques. The paper [1] aims to widen the focus beyond model architecture and training pipeline design by shedding light on inference efficiency and the role of patch sampling strategies for large images that cannot be processed at once.
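
A minimal sketch of overlapping patch-based inference for images too large to process at once is shown below; `predict_fn`, the patch size, and the stride are placeholders, and the simple averaging of overlapping predictions is one common strategy rather than the one analyzed in [1].

```python
import numpy as np

def sliding_window_inference(image, predict_fn, patch_size=256, stride=192):
    """Average overlapping patch predictions over a large 2D image.

    predict_fn takes a (patch_size, patch_size) array and returns a
    probability map of the same shape.
    """
    h, w = image.shape[:2]
    # ensure the last row/column of patches touches the image border
    ys = sorted(set(list(range(0, max(h - patch_size, 0) + 1, stride)) + [max(h - patch_size, 0)]))
    xs = sorted(set(list(range(0, max(w - patch_size, 0) + 1, stride)) + [max(w - patch_size, 0)]))
    accum = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in ys:
        for x in xs:
            patch = image[y:y + patch_size, x:x + patch_size]
            accum[y:y + patch_size, x:x + patch_size] += predict_fn(patch)
            counts[y:y + patch_size, x:x + patch_size] += 1.0
    return accum / np.maximum(counts, 1.0)
```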

Frederic Madesta, Rüdiger Schmitz, Thomas Rösch, René Werner
End-to-end Learning of Body Weight Prediction from Point Clouds with Basis Point Sets

The body weight of a patient is an important parameter in many clinical settings, e.g. when it comes to drug dosing or anesthesia. However, assessing the weight through direct interaction with the patient (anamnesis, weighing) is often infeasible. Therefore, there is a need for the weight to be estimated in a contactless way from visual inputs. This work addresses weight prediction of patients lying in bed from 3D point cloud data by means of deep learning techniques. Contrary to prior work in this field, we propose to learn the task in an end-to-end fashion without relying on hand-crafted features. For this purpose, we adopt the concept of basis point sets to encode the input point cloud into a low-dimensional feature vector. This vector is passed to a neural network, which is trained for weight regression. As the originally proposed construction of the basis point set is not ideal for our problem, we develop a novel sampling scheme, which exploits prior knowledge about the distribution of input points. We evaluate our approach on a lying pose dataset (SLP) and achieve weight estimates with a mean absolute error of 4.2 kg and a mean relative error of 6.4% compared to 4.8 kg and 7.0% obtained with a basic PointNet.
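
The basis point set encoding itself is simple to state: each feature is the distance from a fixed basis point to its nearest point in the cloud. The NumPy sketch below illustrates this encoding with uniformly sampled basis points; the novel sampling scheme proposed in the paper, which exploits prior knowledge about the input point distribution, is not reproduced here.

```python
import numpy as np

def bps_encode(point_cloud, basis_points):
    """Distance from each fixed basis point to its nearest cloud point.

    point_cloud: (N, 3) array, basis_points: (B, 3) array.
    Returns a (B,) feature vector suitable for a small regression network.
    """
    diff = basis_points[:, None, :] - point_cloud[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)

# illustrative usage with uniformly sampled basis points
rng = np.random.default_rng(0)
basis = rng.uniform(-1.0, 1.0, size=(512, 3))
features = bps_encode(rng.normal(size=(2048, 3)), basis)
```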

Alexander Bigalke, Lasse Hansen, Mattias P. Heinrich
Abstract: Deep Learning Algorithms Out-perform Veterinary Pathologists in Detecting the Mitotically Most Active Tumor Region

The manual count of mitotic figures, which is determined in the tumor region with the highest mitotic activity, is a key parameter of most tumor grading schemes. The mitotic count has a known high inter-rater disagreement and is strongly dependent on the area selection due to the uneven distribution of mitotic figures. In our work [1], we assessed how strongly the area selection can impact the mitotic count.

Marc Aubreville, Christof A. Bertram, Christian Marzahl, Corinne Gurtner, Martina Dettwiler, Anja Schmidt, Florian Bartenschlager, Sophie Merz, Marco Fragoso, Olivia Kershaw, Robert Klopfleisch, Andreas Maier
Abstract: Maximum A-posteriori Signal Recovery for OCT Angiography Image Generation

Optical coherence tomography angiography (OCTA) is a clinically promising modality to image retinal vasculature. To this end, optical coherence tomography (OCT) volumes are scanned repeatedly and intensity changes over time are used to compute OCTA images. Because of patient movement and variations in blood flow, OCTA data are prone to noise.
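
The abstract summarizes the general OCTA principle rather than the maximum a-posteriori method of the paper; purely as an illustration of how inter-scan changes can be turned into an angiography image, a simple speckle-variance computation over co-registered repeated B-scans looks like this.

```python
import numpy as np

def speckle_variance_octa(repeated_bscans):
    """Temporal variance over N co-registered OCT B-scans of shape (N, H, W);
    moving blood causes high variance, static tissue stays dark."""
    return np.var(np.asarray(repeated_bscans, dtype=np.float32), axis=0)
```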

Lennart Husvogt, Stefan B. Ploner, Siyu Chen, Daniel Stromer, Julia Schottenhamml, Yasin Alibhai, Eric Moult, Nadia K. Waheed, James G. Fujimoto, Andreas Maier
Abstract: Simultaneous Estimation of X-ray Back-scatter and Forward-scatter using Multi-task Learning

Scattered radiation is a major concern that affects X-ray image-guided procedures in two ways. First, in complicated procedures, backscatter significantly contributes to the patient's (skin) dose. Second, forward scatter reduces contrast in projection images and introduces artifacts in 3-D reconstructions.

Philipp Roser, Xia Zhong, Annette Birkhold, Alexander Preuhs, Christopher Syben, Elisabeth Hoppe, Norbert Strobel, Markus Kowarschik, Rebecca Fahrig, Andreas Maier
Deep Learning-based Spine Centerline Extraction in Fetal Ultrasound

Ultrasound is widely used for fetal screening. It allows for detecting abnormalities at an early gestational age, while being time and cost effective with no known adverse effects. Searching for optimal ultrasound planes for these investigations is a demanding and time-consuming task. Here we describe a method for automatically detecting the spine centerline in 3D fetal ultrasound images. We propose a two-stage approach combining deep learning and classic image processing techniques. First, we segment the spine using a deep learning approach. The resulting probability map is used as input for a tracing algorithm. The result is a sequence of points describing the spine centerline. This line can be used for measuring the spinal length and for generating view planes for the investigation of anomalies.

Astrid Franz, Alexander Schmidt-Richberg, Eliza Orasanu, Cristian Lorenz
Abstract: Studying Robustness of Semantic Segmentation under Domain Shift in Cardiac MRI

Cardiac magnetic resonance imaging (cMRI) is an integral part of diagnosis in many heart related diseases. Recently, deep neural networks (DNN) have demonstrated successful automatic segmentation, thus alleviating the burden of time-consuming manual contouring of cardiac structures. Moreover, frameworks such as nnU-Net provide entirely automatic model configuration to unseen datasets enabling out-of-the-box application even by non-experts.

Peter M. Full, Fabian Isensee, Paul F. Jäger, Klaus Maier-Hein
On Efficient Extraction of Pelvis Region from CT Data

The first step in automated analysis of medical volumetric data is to detect the slices in which specific body parts are located. In our project, we aimed to extract the pelvis region from whole-body CT scans. Two deep learning approaches, namely an unsupervised slice score regressor and a supervised slice classification method, were evaluated on a relatively small dataset. The comparison showed that both methods detect the region of interest with an accuracy above 93%. Although the straightforward classification method delivered more accurate results (accuracy of 99%), it sometimes produced discontinuous regions, which can be resolved by a combination of both approaches.
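
One simple way to repair discontinuous regions produced by a slice classifier is to keep only the longest contiguous run of positively classified slices; the sketch below is a generic post-processing step, not necessarily the combination strategy described in the paper.

```python
def largest_contiguous_region(slice_labels):
    """Return (start, end) indices of the longest run of positive slice labels
    (inclusive); returns (0, 0) if no slice is positive."""
    best_len, best_range, run, run_start = 0, (0, 0), 0, 0
    for i, label in enumerate(list(slice_labels) + [0]):  # sentinel closes a trailing run
        if label:
            if run == 0:
                run_start = i
            run += 1
            if run > best_len:
                best_len, best_range = run, (run_start, i)
        else:
            run = 0
    return best_range
```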

Tatyana Ivanovska, Andrian O. Paulus, Robert Martin, Babak Panahi, Arndt Schilling
CT Normalization by Paired Image-to-image Translation for Lung Emphysema Quantification

In this work, a UNet-based normalization method for chest CT images by paired image-to-image translation was developed. Due to different noise levels, emphysema quantification depends strongly on the choice of the reconstruction filter kernel. Images of 71 patients were available for training and testing, reconstructed with the smooth Siemens B20f filter kernel and the sharp B80f filter kernel. Results were evaluated with regard to image quality, including a visual assessment by two imaging experts, the L1 distance, and the emphysema quantification (emphysema index and Dice overlap of emphysema segmentations). Emphysema quantification was compared to classical normalization methods. Our approach led to very good image quality: the mean L1 distance between the B20f reference and the B80f images could be reduced by about 88.5%, and the mean Dice was raised by 189% after normalization. Classical methods were outperformed. Even though small differences between B20f and normalized B80f images were noticed, the normalized images were found to be of overall diagnostic quality.
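
For context, the two evaluation quantities mentioned above can be written down compactly. The sketch uses the common LAA-950 definition of the emphysema index (fraction of lung voxels below -950 HU), which is an assumption about the exact threshold used in the study.

```python
import numpy as np

def emphysema_index(ct_volume_hu, lung_mask, threshold_hu=-950):
    """Fraction of lung voxels below the threshold (low-attenuation area score)."""
    lung_voxels = ct_volume_hu[lung_mask > 0]
    return float((lung_voxels < threshold_hu).mean())

def mean_l1_distance(image_a, image_b):
    """Mean absolute intensity difference between two co-registered volumes."""
    return float(np.abs(image_a.astype(np.float32) - image_b.astype(np.float32)).mean())
```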

Insa Lange, Fabian Jacob, Alex Frydrychowicz, Heinz Handels, Jan Ehrhardt
Ultrasound Breast Lesion Detection using Extracted Attention Maps from a Weakly Supervised Convolutional Neural Network

In order to detect lesions on medical images, deep learning models commonly require information about the size of the lesion, either through a bounding box or through pixel-/voxel-wise annotation, which is extremely expensive to produce in most cases. In this paper, we demonstrate that with only a single central point per lesion as ground truth for 3D ultrasound, accurate deep learning models for lesion detection can be trained, leading to precise visualizations using Grad-CAM. From a set of breast ultrasound volumes, healthy and diseased patches were used to train a deep convolutional neural network. Each diseased patch contained in its central area a lesion center annotated by experts, while healthy patches were extracted from random regions of ultrasounds from healthy patients. An AUC of 0.92 and an accuracy of 0.87 were achieved on test patches.
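
Grad-CAM itself is a published, model-agnostic technique; the PyTorch sketch below shows how such attention maps can be extracted from a trained classifier via forward/backward hooks. The layer choice and normalization are generic and not specific to the network trained in this work.

```python
import torch

def grad_cam(model, target_layer, image, class_idx):
    """Minimal Grad-CAM: weight the target layer's activations by the spatially
    pooled gradients of the class score, then apply ReLU and normalise."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output.detach()

    def bwd_hook(_, __, grad_output):
        gradients["value"] = grad_output[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        score = model(image)[0, class_idx]   # assumes output shape (1, num_classes)
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)    # (1, C, 1, 1)
    cam = torch.relu((weights * activations["value"]).sum(dim=1))  # (1, H, W)
    return cam / (cam.max() + 1e-8)
```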

Dalia Rodríguez-Salas, Mathias Seuret, Sulaiman Vesal, Andreas Maier
Abstract: Extracting and Leveraging Nodule Features with Lung Inpainting for Local Feature Augmentation

Chest X-ray (CXR) is the most common examination for fast detection of pulmonary abnormalities. Recently, automated algorithms have been developed to classify multiple diseases and abnormalities in CXR scans. However, because of the limited availability of scans containing nodules and the subtle properties of nodules in CXRs, state-of-the-art methods do not perform well on nodule classification.

Sebastian Gündel, Arnaud A. A. Setio, Sasa Grbic, Andreas Maier, Dorin Comaniciu
Abstract: Automatic Dementia Screening and Scoring by Applying Deep Learning on Clock-drawing Tests

Dementia is one of the most common neurological syndromes in the world. Usually, diagnoses are made based on paper-and-pencil tests and scored by personal judgments of experts. This technique can introduce errors and has high inter-rater variability.

Shuqing Chen, Daniel Stromer, Harb Alnasser Alabdalrahim, Stefan Schwab, Markus Weih, Andreas Maier
Deep Learning Compatible Differentiable X-ray Projections for Inverse Rendering

Many minimally invasive interventional procedures still rely on 2D fluoroscopic imaging. Generating a patient-specific 3D model from these X-ray data would improve the procedural workflow, e.g., by providing assistance functions such as automatic positioning. To accomplish this, two things are required: first, a statistical shape model of the human anatomy, and second, a differentiable X-ray renderer. We propose a differentiable renderer that derives the distance travelled by a ray inside mesh structures to generate a distance map. To demonstrate its functioning, we use it to simulate X-ray images from human shape models. We then show its application by solving the inverse problem, namely reconstructing 3D models from real 2D fluoroscopy images of the pelvis, which is an ideal anatomical structure for patient registration. This is accomplished by an iterative optimization strategy using gradient descent. With the majority of the pelvis being in the fluoroscopic field of view, we achieve a mean Hausdorff distance of 30 mm between the reconstructed model and the ground truth segmentation.
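
The renderer itself is the paper's contribution and is not reproduced here, but the surrounding inverse-rendering loop is a standard gradient-descent fit. The sketch below assumes a differentiable `render_fn` (standing in for the proposed distance-map renderer); the L1 image loss and Adam optimizer are illustrative choices, not necessarily those of the paper.

```python
import torch

def fit_shape(render_fn, target_image, init_params, steps=200, lr=0.01):
    """Generic inverse-rendering loop: render, compare to the observed image,
    and update the shape parameters by gradient descent."""
    params = init_params.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        rendered = render_fn(params)
        loss = torch.nn.functional.l1_loss(rendered, target_image)
        loss.backward()
        optimizer.step()
    return params.detach()
```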

Karthik Shetty, Annette Birkhold, Norbert Strobel, Bernhard Egger, Srikrishna Jaganathan, Markus Kowarschik, Andreas Maier
Abstract: Are Fast Labeling Methods Reliable?
A Case Study of Computer-aided Expert Annotations on Microscopy Slides

Deep-learning-based pipelines have shown the potential to revolutionize microscopy image diagnostics by providing visual augmentations and evaluations to a pathologist. However, to match human performance, the methods rely on the availability of vast amounts of high-quality labeled data, which poses a significant challenge. To circumvent this, augmented labeling methods, also known as expert-algorithm collaboration, have recently become popular.

Christian Marzahl, Christof A. Bertram, Marc Aubreville, Anne Petrick, Kristina Weiler, Agnes C. Gläsel, Marco Fragoso, Sophie Merz, Florian Bartenschlager, Judith Hoppe, Alina Langenhagen, Anne Katherine Jasensky, Jörn Voigt, Robert Klopfleisch, Andreas Maier
Abstract: Time Matters
Handling Spatio-temporal Perfusion Information for Automated Treatment in Cerebral Ischemia Scoring

Although video classification is a well-addressed task in computer vision (CV), corresponding CV methods have so far only rarely been translated to the automatic assessment of X-ray digital subtraction angiography (DSA) imaging. We demonstrate the feasibility of such a method translation by making a first attempt at automatic treatment in cerebral ischemia (TICI) scoring [1]. In a clinical setting, the TICI score is used to evaluate the initial perfusion state as well as the perfusion state after thrombectomy, i.e. the success of the intervention.

Maximilian Nielsen, Moritz Waldmann, Thilo Sentker, Andreas Frölich, Jens Fiehler, René Werner
A Geometric and Textural Model of the Colon as Ground Truth for Deep Learning-based 3D-reconstruction

For endoscopic examinations of the large intestine, the limited field of vision related to the keyhole view of the endoscope can be a problem. A panoramic view of the video images acquired during a colonoscopy can potentially enlarge the field of view in real-time and may ensure that the performing physician has examined the entire organ. To train and test such a panorama-generation system, endoscopic video sequences with information about the geometry are necessary, but rarely exist. Therefore, we created a virtual phantom of the colon with a 3D-modelling software and propose different methods for realistic-looking textures. This allows us to perform a “virtual colonoscopy” and provide a well-defined test environment as well as supplement our training data for deep learning.

Ralf Hackner, Sina Walluscheck, Edgar Lehmann, Thomas Eixelberger, Volker Bruns, Thomas Wittenberg
Deep Learning-basierte Oberflächenrekonstruktion aus Binärmasken

The depiction of anatomical structures based on segmentation results in the form of binary masks is a fundamental task in medical visualization. Polygonal surfaces are typically used for this purpose. Binary masks, however, lack information about the actual surface, so the generated representations suffer from artifacts. Smoothing the mask or the polygonal representation can reduce these artifacts, but also introduces inaccuracies. This work describes a neural network-based approach for computing a signed distance function for a given binary mask. The distance function can subsequently be converted into a surface with classical methods, whose visualization is smooth and free of artifacts while introducing only minimal deviations.

Carina Tschigor, Grzegorz Chlebus, Christian Schumann
A Novel Trilateral Filter for Digital Subtraction Angiography

In this paper, we formulate a novel Trilateral Filter (TF) for denoising digital subtraction angiography (DSA) without losing vessel information. The harmful effects of X-rays limit the dose, resulting in a degraded signal-to-noise ratio (SNR). A bilateral filter (BF) is often applied for edge-preserving denoising. However, for a low-SNR image, the filter needs to be iterated with a smaller spatial window to avoid over-smoothing low-contrast vessels. The proposed TF combines a BF with a wider spatial window and the Frangi vessel enhancement filter to denoise the DSA and improve vessel visibility without the need for iteration. The experimental results show that our method provides better vessel preservation and greater noise reduction than the BF.
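
The exact trilateral formulation is not reproduced here. As a loose illustration of the underlying idea of combining a wide bilateral filter with Frangi vesselness to protect low-contrast vessels, one could blend the two as below; OpenCV and scikit-image provide the building blocks, but the blending rule and parameters are assumptions, not the paper's filter.

```python
import cv2
import numpy as np
from skimage.filters import frangi

def vesselness_guided_denoise(dsa_frame, d=9, sigma_color=50, sigma_space=9):
    """Illustrative only: blend a wide bilateral filter with the original frame
    using a Frangi vesselness map, so vessels stay sharp while the background
    is smoothed. This is NOT the trilateral filter proposed in the paper."""
    img = dsa_frame.astype(np.float32)
    smoothed = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
    v = frangi(img)                                  # vesselness response
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)   # normalise to [0, 1]
    return v * img + (1.0 - v) * smoothed
```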

Purvi Tripathi, Richard Obler, Andreas Maier, Hendrik Janssen
Abstract: JBFnet
Low Dose CT-denoising by Trainable Joint Bilateral Filtering

Deep neural networks have shown great success in low-dose CT denoising. However, most of these networks have several hundred thousand trainable parameters. This, combined with the inherent non-linearity of neural networks, makes them difficult to understand and limits their accountability.

Mayank Patwari, Ralf Gutjahr, Rainer Raupach, Andreas Maier
Interactive Visualization of 3D CNN Relevance Maps to Aid Model Comprehensibility
Application to the Detection of Alzheimer's Disease in MRI Images

Relevance maps derived from convolutional neural networks (CNNs) indicate the influence of a particular image region on the decision of the CNN model. Individual maps are obtained for each single input 3D MRI image, and various visualization options need to be adjusted to improve the information content. In the use case of model prototyping and comparison, the common approach of saving the 3D relevance maps to disk is impractical given the large number of combinations. Therefore, we developed a web application to aid the interactive inspection of CNN relevance maps. For the requirements analysis, we interviewed several people from different stakeholder groups (model/visualization developers, radiology/neurology staff) following a participatory design approach. The visualization software was conceptually designed in a Model-View-Controller paradigm and implemented using the Python visualization library Bokeh. This framework allows a Python server back-end that directly executes the CNN model and related code, and an HTML/JavaScript front-end that runs in any web browser. Slice-based 2D views were realized for each axis, accompanied by several visual guides to improve usability and enable quick navigation to image areas with high relevance. The interactive visualization tool greatly improved model inspection and comparison for developers. Owing to the well-structured implementation, it can easily be adapted to other CNN models and types of input data.

Martin Dyrba, Moritz Hanzig
Abstract: VirtualDSA++
Automated Segmentation, Vessel Labeling, Occlusion Detection, and Graph Search on CT Angiography Data

Computed tomography angiography (CTA) is one of the most commonly used modalities in the diagnosis of cerebrovascular diseases like ischemic strokes. Usually, the anatomy of interest in ischemic stroke cases is the Circle of Willis and its peripherals, the cerebral arteries, as these vessels are the most prominent candidates for occlusions. The diagnosis of occlusions in these vessels remains challenging, not only because of the large amount of surrounding vessels but also due to the large number of anatomical variants.

Florian Thamm, Markus Jürgens, Hendrik Ditt, Andreas Maier
Interval Neural Networks as Instability Detectors for Image Reconstructions

This work investigates the detection of instabilities that may occur when utilizing deep learning models for image reconstruction tasks. Although neural networks often empirically outperform traditional reconstruction methods, their usage for sensitive medical applications remains controversial. Indeed, in a recent series of works, it has been demonstrated that deep learning approaches are susceptible to various types of instabilities, caused for instance by adversarial noise or out-of-distribution features. It is argued that this phenomenon can be observed regardless of the underlying architecture and that there is no easy remedy. Based on this insight, the present work demonstrates how uncertainty quantification methods can be employed as instability detectors. In particular, it is shown that the recently proposed Interval Neural Networks are highly effective in revealing instabilities of reconstructions. Such an ability is crucial to ensure the safe use of deep learning-based methods for medical image reconstruction.

Jan Macdonald, Maximilian März, Luis Oala, Wojciech Samek
Invertible Neural Networks for Uncertainty Quantification in Photoacoustic Imaging

Multispectral photoacoustic imaging (PAI) is an emerging imaging modality that enables the recovery of functional tissue parameters such as blood oxygenation. However, the underlying inverse reconstruction problems are potentially ill-posed, meaning that radically different tissue properties may, in theory, yield comparable measurements. In this work, we present a new approach for handling this specific type of uncertainty using conditional invertible neural networks. We propose going beyond the commonly used point estimates for tissue oxygenation and convert single-pixel initial pressure spectra to the full posterior probability density. This way, the inherent ambiguity of a problem can be encoded with multiple modes in the output. Based on the presented architecture, we demonstrate two use cases that leverage this information not only to detect and quantify but also to compensate for uncertainties: (1) photoacoustic device design and (2) optimization of photoacoustic image acquisition. Our in silico studies demonstrate the potential of the proposed methodology to become an important building block for uncertainty-aware reconstruction of physiological parameters with PAI.

Jan-Hinrich Nölke, Tim Adler, Janek Gröhl, Thomas Kirchner, Lynton Ardizzone, Carsten Rother, Ullrich Köthe, Lena Maier-Hein
Abstract: Inertial Measurements for Motion Compensation in Weight-bearing Cone-beam CT of the Knee

The main cause of artifacts in weight-bearing cone-beam computed tomography (CT) scans of the knee is involuntary subject motion. Clinical diagnosis on the resulting images is only possible if the motion is corrected during reconstruction. Existing image-based or marker-based methods are time consuming in preparation or execution.

Jennifer Maier, Marlies Nitschke, Jang-Hwan Choi, Garry Gold, Rebecca Fahrig, Bjoern M. Eskofier, Andreas Maier
Abstract: Reduktion der Kalibrierungszeit für die Magnetpartikelbildgebung mittels Deep Learning

Magnetic particle imaging (MPI) is a young tomographic imaging technique that quantitatively maps magnetic nanoparticles with high spatial and temporal resolution. A common method for reconstructing MPI data is the system matrix (SM)-based reconstruction. The complex-valued SM is determined in a time-consuming calibration measurement.

Ivo M. Baltruschat, Patryk Szwargulski, Florian Griese, Mirco Grosser, Rene Werner, Tobias Knopp
Autoencoder-based Quality Assessment for Synthetic Diffusion-MRI Data

Diffusion MRI makes it possible to assess brain microstructure in vivo. Recently, a variety of deep learning methods have been proposed that enhance the quality and utility of these acquisitions. Deep learning methods require a large amount of training data, which is difficult to obtain. As a solution, different approaches to synthetic data creation have been published, but it is unclear which approach produces data that best matches the in-vivo characteristics. Here, a methodology based on denoising autoencoders is proposed to assess the quality of synthetic diffusion data. For this, the reconstruction errors of autoencoders trained only on synthetic data were evaluated: the more the synthetic data resemble the real data, the lower the reconstruction error. Using this method, we evaluated which of four different synthetic data simulation techniques produced data that best resembled the in-vivo data. We find that modeling diffusion MRI data with patient- and scanner-specific values leads to significantly better reconstruction results than using default diffusivity values, suggesting possible benefits of precision medicine approaches in diffusion MRI analysis.
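
The core quality check reduces to comparing autoencoder reconstruction errors on real data; a minimal PyTorch sketch is given below, where `autoencoder` is any model trained exclusively on one synthetic data variant and `real_signals` are measured diffusion signals of matching dimensionality.

```python
import numpy as np
import torch

def reconstruction_error(autoencoder, real_signals, device="cpu"):
    """Mean squared reconstruction error of an autoencoder on real signals.

    Assumes autoencoder(x) returns a reconstruction with the same shape as x.
    Lower error suggests the synthetic training distribution matches the
    in-vivo data better.
    """
    autoencoder.eval()
    x = torch.as_tensor(np.asarray(real_signals), dtype=torch.float32, device=device)
    with torch.no_grad():
        recon = autoencoder(x)
    return float(((recon - x) ** 2).mean())
```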

Leon Weninger, Maxim Drobjazko, Chuh-Hyoun Na, Kerstin Jütten, Dorit Merhof
Analysis of Generative Shape Modeling Approaches
Latent Space Properties and Interpretability

Generative shape models are crucial for many medical image analysis tasks. Previous studies have shown that conventional methods like PCA-based statistical shape models (SSMs) and their extensions are robust in terms of generalization ability but have rather poor specificity. In contrast, deep learning approaches like autoencoders require large training set sizes but are comparably specific. In this work, we comprehensively compare different classical and deep learning-based generative shape modeling approaches and demonstrate their limitations and advantages. Experiments on a publicly available 2D chest X-ray data set show that the deep learning methods achieve better specificity and similar generalization abilities for large training set sizes. Furthermore, an extensive analysis of the different methods gives insight into their latent space representations.
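
Specificity, one of the two properties compared here, measures how close randomly generated shapes stay to the training population. The generic sketch below (not the paper's exact protocol) illustrates this, with `sample_fn` standing in for a draw from any of the compared generative models and a mean landmark distance as the shape distance.

```python
import numpy as np

def specificity(sample_fn, training_shapes, n_samples=1000):
    """Mean distance of generated shapes to their nearest training shape.

    sample_fn() returns one generated shape as an (N, 2) landmark array;
    training_shapes is a list of arrays of the same shape. Lower is better.
    """
    distances = []
    for _ in range(n_samples):
        generated = sample_fn()
        nearest = min(np.linalg.norm(generated - real, axis=1).mean()
                      for real in training_shapes)
        distances.append(nearest)
    return float(np.mean(distances))
```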

Hristina Uzunova, Jesse Kruse, Paul Kaftan, Matthias Wilms, Nils D. Forkert, Heinz Handels, Jan Ehrhardt
Latent Shape Constraint for Anatomical Landmark Detection on Spine Radiographs

Vertebral corner points are frequently used landmarks for a vast variety of orthopedic and trauma surgical applications. Algorithmic approaches designed to automatically detect them on 2D radiographs have to cope with varying image contrast, high noise levels, and superimposed soft tissue. To enforce an anatomically correct landmark configuration in the presence of these limitations, this study investigates a shape constraint technique based on data-driven encodings of the spine geometry. A contractive PointNet autoencoder is used to map numerical landmark coordinate representations onto a low-dimensional shape manifold. A distance norm between prediction and ground truth encodings then serves as an additional loss term during optimization. The method is compared and evaluated on the SpineWeb16 dataset. Small improvements can be observed, recommending further analysis of the encoding design and composite cost function.
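
The composite cost function can be sketched as a coordinate regression term plus a penalty on the distance between latent encodings. In the PyTorch snippet below, `encoder` stands in for the frozen encoder of the contractive PointNet autoencoder; the L1/MSE choices and the weighting factor are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def shape_constrained_loss(pred_landmarks, gt_landmarks, encoder, lam=0.1):
    """Coordinate regression loss plus a latent-space shape constraint."""
    coord_loss = F.l1_loss(pred_landmarks, gt_landmarks)
    with torch.no_grad():
        z_gt = encoder(gt_landmarks)       # encoding of the ground truth configuration
    z_pred = encoder(pred_landmarks)       # encoding of the predicted configuration
    latent_loss = F.mse_loss(z_pred, z_gt)
    return coord_loss + lam * latent_loss
```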

Florian Kordon, Andreas Maier, Holger Kunze
Backmatter
Metadata
Title: Bildverarbeitung für die Medizin 2021
Edited by: Prof. Dr. Christoph Palm, Prof. Dr. Thomas M. Deserno, Prof. Dr. Heinz Handels, Prof. Dr. Andreas Maier, Prof. Dr. Klaus Maier-Hein, Prof. Dr. Thomas Tolxdorff
Copyright year: 2021
Electronic ISBN: 978-3-658-33198-6
Print ISBN: 978-3-658-33197-9
DOI: https://doi.org/10.1007/978-3-658-33198-6
