CellSAM: A Foundation Model for Cell Segmentation
Cells are the basic building blocks of life, and accurately identifying and segmenting them in imaging data is crucial for many cellular imaging studies. Although deep learning has advanced this field significantly, most existing models are specialized: they struggle to generalize across domains or to scale to large datasets. In this study, we introduce CellSAM, a universal model for cell segmentation that adapts to diverse cellular imaging data. CellSAM extends the Segment Anything Model (SAM) with a prompt engineering approach to mask generation: an object detector, CellFinder, is trained to locate cells automatically, and its detections guide SAM in generating segmentations. With this approach, CellSAM achieves human-level performance in segmenting images of mammalian cells, yeast, and bacteria acquired with a range of imaging techniques. Notably, CellSAM exhibits strong zero-shot performance and can be further improved with minimal examples via few-shot learning. We also demonstrate the versatility of CellSAM in several bioimage analysis workflows. A deployed version of CellSAM is available at https://cellsam.deepcell.org/.
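The detect-then-prompt design described above can be sketched as a two-stage pipeline: a detector proposes one bounding box per cell, and a promptable segmenter turns each box into a mask. The sketch below is purely illustrative and is not the CellSAM implementation: simple connected-component analysis stands in for CellFinder, and in-box thresholding stands in for SAM's mask decoder, so only the control flow (boxes in, per-cell masks out) mirrors the real system. All function names here are hypothetical.

```python
import numpy as np

def find_cells(image, threshold=0.5):
    """Toy stand-in for CellFinder: return bounding boxes (y0, x0, y1, x1)
    of connected bright regions, found via flood fill."""
    fg = image > threshold
    boxes, visited = [], np.zeros_like(fg, dtype=bool)
    for y, x in zip(*np.nonzero(fg)):
        if visited[y, x]:
            continue
        # Flood fill to collect one connected component (4-connectivity).
        stack, component = [(y, x)], []
        visited[y, x] = True
        while stack:
            cy, cx = stack.pop()
            component.append((cy, cx))
            for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                if (0 <= ny < fg.shape[0] and 0 <= nx < fg.shape[1]
                        and fg[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*component)
        boxes.append((min(ys), min(xs), max(ys) + 1, max(xs) + 1))
    return boxes

def segment_box(image, box, threshold=0.5):
    """Toy stand-in for SAM's box-prompted mask decoder: given one box
    prompt, return a full-size binary mask for the object inside it."""
    y0, x0, y1, x1 = box
    mask = np.zeros(image.shape, dtype=bool)
    mask[y0:y1, x0:x1] = image[y0:y1, x0:x1] > threshold
    return mask

def detect_then_segment(image):
    """Run the detector once, then prompt the segmenter once per box."""
    return [segment_box(image, box) for box in find_cells(image)]

if __name__ == "__main__":
    img = np.zeros((8, 8))
    img[1:3, 1:3] = 1.0   # synthetic "cell" 1
    img[5:7, 4:7] = 1.0   # synthetic "cell" 2
    masks = detect_then_segment(img)
    print(len(masks))  # one mask per detected cell: 2
```

In the actual model, the thresholding steps are replaced by learned components, but the key design choice survives the simplification: because each mask is generated from a single box prompt, the detector alone decides how many cells exist, which is what lets the same promptable segmenter generalize across cell types and imaging modalities.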
Key Points:
– CellSAM is a versatile model for cell segmentation that demonstrates high performance across diverse cellular imaging datasets.
– The model leverages prompt engineering techniques and an object detector, CellFinder, to enhance segmentation accuracy and generalizability.
– CellSAM exhibits strong zero-shot performance and can be fine-tuned with a small number of examples through few-shot learning.
– The model’s applicability extends to various bioimage analysis workflows, showcasing its flexibility and utility in different research contexts.
Read more on pubmed.ncbi.nlm.nih.gov
