Advancing a Computational Miniature Mesoscope

Figure: Artist’s interpretation of an updated version of the CM2 computational miniature mesoscope, which integrates new miniature optics and deep learning.

Fluorescence microscopy is essential for studying biological structures and dynamics. Existing systems, however, suffer from a trade-off between field of view (FOV), resolution and system complexity, and cannot fulfill the emerging need for miniaturized platforms providing micron-scale resolution across centimeter-scale FOVs.

To overcome this challenge, two years ago, we developed a computational miniature mesoscope (CM2) that exploits a computational imaging strategy to enable single-shot, 3D high-resolution imaging across a wide FOV in a miniaturized platform.1 In work published this year, we further advanced CM2 technology by integrating novel miniature optics and deep learning.2

The CM2 achieves its single-shot 3D imaging capability via a microlens array (MLA). To achieve high image contrast in version two of the system, we designed a hybrid emission filter to suppress undesired spectral leakage. In addition, we designed and 3D-printed a compact, lightweight miniature freeform LED collimator that provides greater than 80% excitation efficiency. Built around a back-side-illuminated CMOS sensor, CM2 version two achieves a fivefold improvement in image contrast over the version-one system and captures high-SNR measurements under various experimental conditions.
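
To see how the MLA encodes depth in a single exposure, the sketch below applies standard shift-and-add light-field refocusing to a set of sub-aperture views, such as the 3×3 views behind the CM2. It is a generic illustration rather than the CM2 processing pipeline, and the calibration parameter shift_per_unit_depth is a hypothetical stand-in.

import numpy as np

def refocus(views, baselines, shift_per_unit_depth, depth):
    # views: list of 2D sub-aperture images (e.g., the 3x3 views), all the same shape
    # baselines: matching list of (dy, dx) lens offsets from the central microlens
    # shift_per_unit_depth: parallax in pixels per unit depth per unit baseline (assumed calibration)
    # depth: the depth plane to bring into focus
    stack = []
    for img, (dy, dx) in zip(views, baselines):
        # a point at this depth is shifted in each view in proportion to the lens offset;
        # undoing that shift aligns its copies across all views
        sy = int(round(dy * shift_per_unit_depth * depth))
        sx = int(round(dx * shift_per_unit_depth * depth))
        stack.append(np.roll(img, shift=(sy, sx), axis=(0, 1)))
    # averaging reinforces emitters at the chosen depth and blurs out-of-focus ones
    return np.mean(stack, axis=0)

Sweeping depth over a range of planes yields a coarse focal stack from a single measurement; recovering a sharp, high-resolution volume from such multiplexed data is the reconstruction task addressed next.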

Our deep-learning model, CM2Net, achieves high-quality 3D recovery across a wide FOV, with high 3D resolution and fast reconstruction speed. CM2Net is designed around the multi-view geometry of the CM2 and solves the single-shot 3D reconstruction problem with three functional modules. The “view-demixing” module de-multiplexes the 3×3 views formed by the MLA; the “view-synthesis” and “light-field refocusing enhancement” modules then jointly perform high-resolution 3D reconstruction. In addition, to incorporate 3D linear shift-variant (3D-LSV) information into CM2Net, we developed a low-rank 3D-LSV model that efficiently generates realistic CM2 measurements for training the network.
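
As a rough illustration of the low-rank shift-variant idea, and not the published simulator, the sketch below approximates the depth- and position-dependent PSF with a few basis PSFs and spatial weight maps, so each simulated measurement reduces to a small sum of convolutions; the array names and shapes are assumptions made for the example.

import numpy as np
from scipy.signal import fftconvolve

def lsv_forward(volume, basis_psfs, weight_maps):
    # volume:      (Z, H, W) fluorescence volume
    # basis_psfs:  (Z, K, h, w) rank-K basis PSFs for each depth plane (assumed precomputed)
    # weight_maps: (Z, K, H, W) spatial weights expanding the shift-variant PSF in that basis
    Z, K = basis_psfs.shape[:2]
    measurement = np.zeros(volume.shape[1:])
    for z in range(Z):
        for k in range(K):
            # weight the plane by where the k-th basis PSF applies, then convolve with that PSF
            measurement += fftconvolve(volume[z] * weight_maps[z, k], basis_psfs[z, k], mode="same")
    return measurement

Pairing measurements generated this way with their ground-truth volumes provides physically realistic training pairs for the reconstruction network.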

We showed that CM2Net, trained using the 3D-LSV simulator, generalized well to experiments and was robust to variations in the emitter’s local contrast and SNR. CM2Net enhanced the axial resolution to around 25 µm, eight times better than the model-based reconstruction, and the 3D reconstructions were validated against tabletop widefield measurements. In addition, CM2Net reduced the reconstruction time to less than 4 s for a volume spanning a 7-mm FOV and a 0.8-mm depth range, roughly 1,400 times faster, with around 19 times lower memory cost, than the model-based algorithm.

Overall, our contribution is a novel deep-learning-augmented computational miniaturized microscope that achieves single-shot, high-resolution (roughly 6 µm lateral and 25 µm axial resolution) 3D fluorescence imaging across a mesoscale FOV. We expect that this simple, low-cost miniature system—built using off-the-shelf and 3D-printed components—will be useful in a wide range of large-scale 3D fluorescence-imaging and neural-recording applications.


Researchers

Qianwan Yang, Yujia Xue, Guorong Hu and Lei Tian, Boston University, Boston, MA, USA


References

1. Y. Xue et al. Sci. Adv. 6, eabb7508 (2020).

2. Y. Xue et al. Optica 9, 1009 (2022).

Publish Date: 01 December 2022
