Ground-truth cell body segmentation used for Starfinity training
Accurate segmentation of volumetric fluorescence image data is a long-standing challenge, and segmentation errors can considerably degrade the accuracy of multiplexed fluorescence in situ hybridization (FISH) analysis. To overcome this challenge, we developed Starfinity, a deep learning-based automatic 3D segmentation algorithm. For each pixel, it first predicts a cell-center probability and the radial distances to the nearest cell borders. It then aggregates pixel affinity maps from the densely predicted distances and applies watershed segmentation to the affinity maps, using the thresholded center probabilities as seeds.
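The final seeded-watershed step can be sketched as follows. This is a minimal illustration, not Starfinity's actual implementation: the inputs `center_prob` (per-pixel cell-center probability) and `affinity` (aggregated affinity map), the helper name, and the threshold value are all hypothetical.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def watershed_from_predictions(center_prob, affinity, prob_thresh=0.5):
    # Threshold the center-probability map and label connected
    # components to obtain one integer-labeled seed per putative cell.
    seeds, _ = ndimage.label(center_prob > prob_thresh)
    # Flood from the seeds over the negated affinity map so that
    # basins stop at predicted cell borders (low affinity).
    return watershed(-affinity, markers=seeds)
```

Seeding the watershed with thresholded center probabilities, rather than local minima, is what prevents over-segmentation of a single nucleus into multiple fragments.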
This repository contains (1) the 'ground-truth' segmentation annotations used to train the Starfinity model and (2) the trained Starfinity model used to predict segmentation masks for EASI-FISH data from the lateral hypothalamus (LHA). DAPI-stained RNA images collected on a Zeiss Z.1 lightsheet microscope after expansion (ExM) were used. Manual segmentation was performed with Paintera on full-resolution (0.23µm x 0.23µm x 0.42µm) images. The raw and annotated images were then downsampled 4x4x2 (to 0.92µm x 0.92µm x 0.84µm) for training and prediction.
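The 4x4x2 downsampling step could be sketched as below. This is an assumption-laden illustration, not the repository's actual preprocessing code: the axis order (z, y, x), the function name, and the interpolation choices are hypothetical.

```python
import numpy as np
from scipy.ndimage import zoom

def downsample(volume, factors=(2, 4, 4)):
    # Zoom factors below 1 shrink each axis; with factors=(2, 4, 4) on a
    # (z, y, x) volume this matches the 4x4x2 (x, y, z) reduction above.
    # order=1 gives trilinear interpolation, appropriate for raw images;
    # label/annotation volumes would need order=0 to keep integer IDs intact.
    return zoom(volume, [1.0 / f for f in factors], order=1)
```

Downsampling raw images and annotations by the same factors keeps the training pairs pixel-aligned at the new 0.92µm x 0.92µm x 0.84µm resolution.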
Manual inspection of the predicted segmentations for ~5% of cells in 4 LHA samples (a total of ~4,000 out of 80,000 cells) suggests that 93% of cells were properly segmented by this model.