
UniSPAC: A Unified Segmentation Framework for Proofreading and Annotation in Connectomics

Deng, J.; Wu, J.; Chen, C.; Zheng, Q.; Zhang, Z.; Wu, J.; Ouyang, W.; Song, C.

2024-11-27 neuroscience
10.1101/2024.11.27.625336 bioRxiv

Reconstructing dense neuronal connections from volume electron microscopy (vEM) images is a critical challenge in neuroscience, driving the development of various automatic neuron segmentation methods. Although current state-of-the-art automated segmentation methods achieve high accuracy, they still require substantial manual proofreading and rely heavily on labeled datasets, which are often scarce, particularly for non-model organisms. Here, we introduce a Unified Segmentation framework for Proofreading and Annotation in Connectomics (UniSPAC), which pairs a 2D interactive segmentation model with a 3D neuron tracing model. UniSPAC-2D allows users to correct segmentation errors through point-based prompts, combining segmentation and proofreading in a single framework. UniSPAC-3D automatically traces neurons segmented by UniSPAC-2D across image slices, significantly reducing human involvement. Together, the two models also enable semi-automatic generation of labeled data for new species, eliminating the need for external annotation tools. The freshly annotated data generated during proofreading in turn optimizes the interactive model through an online learning strategy, reducing the labeling effort for novel species over time. UniSPAC outperforms the state-of-the-art Segment Anything Model (SAM) in Drosophila segmentation with 47x higher efficiency, and surpasses ACRLSD in cross-species segmentation on zebra finch data.
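The workflow described above (point-prompted 2D correction, 3D tracing across slices, and corrections feeding an online-learning buffer) can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: the function names (`segment_2d`, `trace_3d`, `proofread`) and the thresholding "model" are placeholders invented here to show the control flow only.

```python
import numpy as np

def segment_2d(image, prompts):
    """Toy stand-in for UniSPAC-2D: threshold the image, then force
    user-clicked points to foreground/background. The real model is a
    trained interactive network; this only mimics the prompt interface."""
    mask = image > 0.5
    for (y, x), is_foreground in prompts:
        mask[y, x] = is_foreground  # point-based correction
    return mask

def trace_3d(volume, prompts_per_slice):
    """Toy stand-in for UniSPAC-3D: apply the 2D model slice by slice
    and stack the results (the real model propagates identity across
    slices; here we just illustrate the per-slice loop)."""
    return np.stack([segment_2d(sl, prompts_per_slice.get(z, []))
                     for z, sl in enumerate(volume)])

# Online-learning sketch: corrections made during proofreading become
# fresh labels; fine-tuning on them is left as a placeholder.
replay_buffer = []

def proofread(image, prompts):
    mask = segment_2d(image, prompts)
    if prompts:  # the user corrected something -> keep as new label
        replay_buffer.append((image, mask))
    return mask

rng = np.random.default_rng(0)
img = rng.random((4, 4))
mask = proofread(img, [((0, 0), True), ((3, 3), False)])
stack = trace_3d(rng.random((2, 4, 4)), {0: [((1, 1), True)]})
```

Under this sketch, each proofreading pass both fixes the current mask and grows `replay_buffer`, which is what would drive the online fine-tuning step in the framework described.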
