(2024.06.09) News: The training set has been released. Please visit the Dataset page for more information.
(2024.07.01) News: The validation set has been released. Please visit the Dataset page for more information.
(2024.07.07) News: The link to the validation set has been updated. Please download the validation set using the new link.
(2024.07.07) News: The submission system and leaderboard for the validation set will open later than planned. For more details, please visit the Timeline page. We apologize for any inconvenience.
Self-supervised learning for 3D light-sheet microscopy image segmentation
Background
In modern biological research, the ability to visualize and understand complex structures within tissues and organisms is crucial. Light-sheet microscopy (LSM), applied after tissue clearing and specific structure staining, offers an efficient, high-contrast, and ultra-high-resolution way to visualize a wide array of biological structures in diverse samples, such as cellular and subcellular structures, organelles, and processes.
In the tissue clearing step, inherently opaque biological samples are rendered transparent while preserving sample integrity and the fluorescence of labeled structures, allowing light to penetrate deeper into the tissue[1]. In the structure staining step, various dyes, fluorophores, or antibodies selectively label specific biological structures within samples and enhance their contrast under the microscope[2]. Combined with structure staining and tissue clearing, LSM gives researchers unprecedented capabilities to visualize intricate biological structures at high spatial resolution, offering new insights into biomedical research fields such as neuroscience[3], immunology[4], oncology[5], and cardiology[6].
For analyzing LSM images across these biomedical research fields, segmentation plays a pivotal role in identifying and distinguishing different biological structures[7]. For very small LSM images, segmentation can be performed manually. However, for whole-organ or whole-body LSM, where a single image can contain on the order of 10000^3 voxels, manual segmentation is prohibitively time-intensive, and automatic segmentation methods are therefore in high demand. Recent strides in deep learning-based segmentation offer promising solutions for automated segmentation of LSM images[8-9]. Although these methods have achieved segmentation performance comparable to that of expert human annotators, their success relies largely on supervised learning from extensive training sets of manually annotated images that are specific to a single kind of structure staining, and such large-scale annotation for diverse LSM image segmentation tasks poses a great challenge.
Self-supervised learning is advantageous in this context, as it allows deep learning models to be pretrained on large-scale, unannotated datasets, learning useful and general representations of LSM image data; the model can then be fine-tuned on a much smaller labeled dataset for a specific segmentation task[10]. Notably, self-supervised learning has not been extensively explored in the LSM field, despite the availability of vast LSM datasets covering different biological structures. Some properties of LSM images, such as the high signal-to-noise ratio, make the data particularly well suited for self-supervised learning.
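To make the pretrain-then-fine-tune idea concrete, below is a minimal sketch of one common self-supervised pretext task, masked volume inpainting, in PyTorch: random blocks of an unannotated 3D patch are zeroed out and the network is trained to reconstruct them. This is only an illustration of the general paradigm; the architecture, patch sizes, and all names are assumptions, not part of the challenge.

```python
# Minimal sketch of masked volume inpainting, a common self-supervised
# pretext task for 3D images. All names and sizes are illustrative
# assumptions, not part of the challenge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder3D(nn.Module):
    """Tiny 3D convolutional encoder-decoder, for illustration only."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),  # reconstruct voxel intensities
        )

    def forward(self, x):
        return self.net(x)

def mask_random_blocks(volume, block=8, mask_ratio=0.5):
    """Zero out a random subset of non-overlapping blocks in each volume."""
    b, _, d, h, w = volume.shape
    coarse = (torch.rand(b, 1, d // block, h // block, w // block,
                         device=volume.device) < mask_ratio).float()
    mask = F.interpolate(coarse, scale_factor=block, mode="nearest")
    return volume * (1.0 - mask), mask

model = Encoder3D()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# In practice, patches would be cropped from large unannotated LSM volumes.
for step in range(100):
    patch = torch.rand(2, 1, 32, 32, 32)  # stand-in for real LSM patches
    masked, mask = mask_random_blocks(patch)
    recon = model(masked)
    # Reconstruction loss is computed only on the masked voxels.
    loss = ((recon - patch) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Masking-and-reconstruction is just one possible pretext task; contrastive or denoising objectives follow the same pattern of training on raw volumes without any labels.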
Objective
With this challenge on self-supervised learning for 3D LSM image segmentation, we encourage the community to develop self-supervised learning methods for general segmentation of various structures in 3D LSM images. With an effective self-supervised learning method, extensive unannotated 3D LSM images can be leveraged to pretrain segmentation models, encouraging them to capture high-level representations that generalize across different biological structures. The pretrained models can subsequently be fine-tuned on substantially smaller annotated datasets, significantly reducing the annotation effort in a variety of 3D LSM segmentation applications.
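As a complement to the pretraining sketch above, the following hypothetical snippet illustrates the fine-tuning stage: a segmentation model whose backbone is initialized from self-supervised pretrained weights is trained with a per-voxel loss on a small annotated set. The checkpoint path, architecture, and data shapes are placeholder assumptions, not a prescribed baseline.

```python
# Hypothetical fine-tuning sketch: a segmentation model initialized from
# self-supervised pretrained weights, trained on a small annotated set.
# Checkpoint path, architecture, and shapes are assumptions.
import torch
import torch.nn as nn

class SegModel3D(nn.Module):
    def __init__(self, channels=16, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(  # mirrors the pretrained encoder layers
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv3d(channels, num_classes, 1)  # per-voxel class logits

    def forward(self, x):
        return self.head(self.backbone(x))

model = SegModel3D()
# Loading pretrained backbone weights; "pretrained_backbone.pt" is a placeholder.
# model.backbone.load_state_dict(torch.load("pretrained_backbone.pt"), strict=False)

opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()  # per-voxel cross-entropy

# Random tensors stand in for the small annotated fine-tuning set.
for step in range(50):
    patch = torch.rand(2, 1, 32, 32, 32)           # labeled LSM patches
    labels = torch.randint(0, 2, (2, 32, 32, 32))  # per-voxel class indices
    logits = model(patch)
    loss = loss_fn(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```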