Geometric alignment appears in a variety of applications: domain adaptation, optimal transport, and normalizing flows in machine learning; optical flow and learned augmentation in computer vision; and deformable registration in biomedical imaging. A recurring challenge is the alignment of domains whose topologies differ, a problem that is routinely ignored and can introduce bias into downstream analysis. As a first step towards solving such alignment problems, we propose an unsupervised algorithm for detecting changes in image topology. The model is based on a conditional variational auto-encoder and detects topological changes between two images during registration. We account both for topological changes in the image under spatial variation and for unexpected transformations. Our approach is validated on two tasks and datasets: detection of topological changes in microscopy images of cells, and unsupervised anomaly detection in brain imaging.
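To make the detection task concrete, a naive residual baseline (not the conditional VAE model the abstract describes, and with `change_map` and its threshold being illustrative names and values) would flag pixels where the warped moving image still disagrees strongly with the fixed image after registration:

```python
import numpy as np

def change_map(fixed: np.ndarray, warped: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Binary map of candidate topological changes: pixels where the
    warped moving image still disagrees with the fixed image after
    registration. A naive residual baseline for illustration only."""
    residual = np.abs(fixed.astype(float) - warped.astype(float))
    return residual > thresh
```

Such a baseline cannot distinguish residual misalignment from genuine topological change, which is precisely the ambiguity a learned, probabilistic model is meant to resolve.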
MIDL (Award)
Semantic similarity metrics for learned image registration
We propose a semantic similarity metric for image registration. Existing metrics such as Euclidean distance or normalized cross-correlation focus on aligning intensity values, which causes difficulties with low intensity contrast or noise. Our approach learns dataset-specific features that drive the optimization of a learning-based registration model. We train both an unsupervised variant using an auto-encoder and a semi-supervised variant using supplemental segmentation data to extract semantic features for image registration. Compared to existing methods across multiple image modalities and applications, we achieve consistently high registration accuracy. A learned invariance to noise yields smoother transformations on low-quality images.
Probabilistic image segmentation encodes varying prediction confidence and
inherent ambiguity in the segmentation problem. While different probabilistic
segmentation models are designed to capture different aspects of segmentation
uncertainty and ambiguity, these modelling differences are rarely discussed in
the context of applications of uncertainty. We consider two common use cases of
segmentation uncertainty, namely assessment of segmentation quality and active
learning. We evaluate four established strategies for probabilistic
segmentation, discuss their modelling capabilities, and investigate their
performance in these two tasks. We find that for all models and both tasks,
returned uncertainty correlates positively with segmentation error, but does
not prove to be useful for active learning.
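One common way to turn a probabilistic segmentation output into a pixelwise uncertainty map (an assumption for illustration; the paper compares four distinct strategies) is the predictive entropy of the per-pixel class probabilities:

```python
import numpy as np

def pixelwise_entropy(probs: np.ndarray) -> np.ndarray:
    """Per-pixel predictive entropy of class probabilities.

    probs: array of shape (C, H, W) with probabilities summing to 1
    over the class axis. Returns an (H, W) uncertainty map; higher
    entropy means the model is less certain at that pixel.
    """
    eps = 1e-12  # avoid log(0) for one-hot predictions
    return -np.sum(probs * np.log(probs + eps), axis=0)
```

Aggregating such a map over an image (e.g. by its mean) gives a scalar score that can be correlated with segmentation error for quality assessment, or used to rank images for active learning.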
NeurIPS 2020
A Loss Function for Generative Neural Networks Based on Watson’s Perceptual Model