Recent advances in spatial transcriptomics (ST) technologies have enhanced our ability to investigate organ and tissue architecture at subcellular resolution. Sequencing-based methods achieve spatial resolutions of 0.5–2 µm, while imaging-based methods can reach 0.1–0.2 µm. Extracting single-cell information from these high-resolution measurements is essential but challenging because the subcellular data are imperfect: sequencing technologies often suffer from low transcriptional abundance per spot, and imaging methods typically measure only a few hundred genes, limiting accurate delineation of cell boundaries and annotation of cell types. In addition, the integration of multimodal information, such as staining images and transcript measurements, remains underexplored, and analyzing millions of cells demands substantial computational resources. To address these challenges, we propose CellART, a unified framework for extracting single-cell information from high-resolution ST data. By integrating deep neural networks with probabilistic models, CellART effectively learns high-resolution representations from multimodal data, yielding accurate cell segmentation and cell type annotation. The framework is scalable, handling millions of cells efficiently. We further extend CellART to characterize cell niches in both condition-agnostic and disease-relevant studies, unlocking the potential of ST technologies to deepen our understanding of biological processes and disease mechanisms.