Fluorescence microscopy, a key driver of progress in the life sciences, faces limitations imposed by the microscope's optics, fluorophore chemistry, and photon exposure limits, necessitating trade-offs between imaging speed, resolution, and depth. In my talk, I will discuss the two deep-learning-based computational multiplexing (image decomposition) techniques I developed during my PhD, which enable the imaging of multiple cellular structures within a single fluorescent channel, allowing faster imaging and reduced photon exposure. Technically speaking, given a superimposed image (say, of Nucleus and Tubulin), the task is to predict its constituent images separately. Early in my PhD, we found that regular deep architectures achieve their best results when trained on large image patches, making GPU-memory consumption the limiting factor for further performance gains. We therefore developed µSplit [AKDS+23], a novel meta-architecture that enables the memory-efficient incorporation of large image context. We used a Hierarchical VAE (HVAE) and two U-Net variants as underlying architectures for µSplit, and we modified the HVAE's ELBO by adopting a KL-loss formulation that enables the extraction of high-frequency details. In µSplit, we worked with noise-free data. To handle noise, we then developed denoiSplit [AJ24], which performs unsupervised denoising alongside supervised image splitting. For this, we incorporated noise models, which capture the noise distribution present in the noisy images, thereby making our method specific to the employed microscope's configuration; additionally, we reverted to the HVAE's standard KL-divergence formulation. denoiSplit can sample diverse predictions from the trained posterior, and this diversity scales with the aleatoric uncertainty in a given input, allowing us to estimate the true prediction errors by computing the variability between samples.
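The sampling-based error estimation mentioned above can be illustrated with a minimal sketch: draw several posterior samples for the same input and use their pixel-wise variability as an uncertainty proxy. The `sample_prediction` function below is a hypothetical stand-in for a trained denoiSplit posterior, not the actual model; its input-dependent noise merely mimics diversity that scales with aleatoric uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prediction(superimposed, rng):
    """Hypothetical stand-in for one posterior sample of a trained splitting
    network: returns one sampled decomposition (channel_1, channel_2)."""
    # Toy "split": halve the input and perturb with input-dependent noise,
    # so that brighter (more ambiguous) pixels receive more sample diversity.
    noise_scale = 0.05 * np.abs(superimposed)
    c1 = superimposed / 2 + rng.normal(0.0, noise_scale)
    return c1, superimposed - c1

superimposed = rng.random((64, 64))

# Draw several posterior samples for the same input and compute the
# pixel-wise standard deviation as a proxy for the expected prediction error.
samples = np.stack([sample_prediction(superimposed, rng)[0] for _ in range(20)])
error_map = samples.std(axis=0)  # per-pixel uncertainty estimate
print(error_map.shape)           # (64, 64)
```

In this sketch the error map is largest where the injected noise scale is largest, which is the behaviour one would want from a calibrated posterior: more variability exactly where the prediction is least certain.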
During this research, we also observed limitations of SSIM (the Structural Similarity Index Measure) in assessing model performance on microscopy data. More specifically, we introduced and quantified the notion of saturation, wherein SSIM values become unreasonably high, and we observed a further issue arising from the intensity differences present in microscopy data. To address this, we developed MicroSSIM [ADJ24], an SSIM variant tailored to microscopy data. Finally, I will show the utility of image translation/decomposition methods for biological data with a brief mention of our ongoing collaboration with researchers from Google on a related problem: bleed-through removal using InDI [DM24], a diffusion-like iterative model.
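The intensity-mismatch issue can be made concrete with a small sketch. The snippet below is illustrative only, not the published MicroSSIM algorithm: it shows how a structurally perfect prediction whose intensities are off by a global gain and offset (a common situation across microscopy acquisitions) looks poor under a raw pixel-wise comparison, and how fitting a least-squares gain/offset before comparing removes that artefact.

```python
import numpy as np

rng = np.random.default_rng(1)

gt = rng.random((32, 32))
# A structurally faithful prediction whose intensities are globally off by a
# gain and an offset, plus a little noise.
pred = 3.0 * gt + 10.0 + rng.normal(0.0, 0.01, gt.shape)

# Illustrative intensity normalisation: undo the global mismatch with a
# least-squares gain/offset fit before comparing to the ground truth.
A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
(gain, offset), *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
pred_rescaled = gain * pred + offset

mse_raw = np.mean((pred - gt) ** 2)            # dominated by the gain/offset
mse_rescaled = np.mean((pred_rescaled - gt) ** 2)  # reflects actual structure
print(mse_rescaled < mse_raw)  # True
```

The same reasoning motivates comparing images in an intensity-normalised space before computing a structural similarity score, rather than penalising a model for global brightness differences it cannot control.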
Short Bio: Ashesh is a final-year PhD student in the Computer Science department at TU Dresden, Germany. He has carried out all of his PhD research in Florian Jug's lab at the Computational Biology Center, Human Technopole (HT), Milan, Italy. His PhD project addresses the image decomposition task of splitting a superimposed (fluorescence microscopy) image into its constituent channels, and this work has been published at reputed international Computer Vision conferences (ICCV 2023 and ECCV 2024). Moreover, among the many projects conducted by Masters students, PhD students, and postdoctoral fellows at Human Technopole, Ashesh's PhD project was first selected for an oral presentation and later received the best-oral-presentation award at the HT PhD–Postdoc Symposium 2024. Previously, Ashesh received a Dual (B.Tech + M.Tech) degree in Computer Science from IIT Delhi in 2015. He has more than three years of industry experience, mostly as a Data Scientist. Ashesh also worked as a Research Assistant at National Taiwan University, Taipei, Taiwan, for about a year under Prof. Hsuan-Tien Lin, where he initiated research on multiple Computer Vision problems in the lab and published two papers.