Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction
Nature Methods (2019)
Subjects: Wide-field fluorescence microscopy
Abstract
Deep learning is becoming an increasingly important tool for image reconstruction in fluorescence microscopy. We review state-of-the-art applications such as image restoration and super-resolution imaging, and discuss how the latest deep learning research could be applied to other image reconstruction tasks. Despite its successes, deep learning also poses substantial challenges and has limits. We discuss key questions, including how to obtain training data, whether discovery of unknown structures is possible, and the danger of inferring unsubstantiated image details.
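To make the supervised setting discussed above concrete, the sketch below trains a small convolutional network on simulated pairs of noisy and clean image patches, the basic recipe behind deep-learning-based image restoration. It is a minimal illustration only, not the authors' method: the tiny architecture (standing in for the U-Net-style encoder-decoders typically used), the Poisson-Gaussian noise simulation, the pixel-wise loss, and all parameter values are placeholder assumptions.

```python
# Minimal sketch (assumptions, not from the paper): supervised image restoration
# with a small convolutional network in PyTorch. Training pairs (noisy input,
# clean target) are simulated here; in practice they would come from paired
# low- and high-signal-to-noise acquisitions of the same sample.
import torch
import torch.nn as nn

# Deliberately small stand-in for the encoder-decoder architectures
# used in content-aware image restoration.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # pixel-wise loss; the choice of loss influences how
                        # prone the network is to inventing plausible detail

def add_noise(clean, gain=0.1, sigma=0.05):
    """Simulate shot (Poisson) plus read (Gaussian) noise on a clean image."""
    shot = torch.poisson(clean / gain) * gain
    return shot + sigma * torch.randn_like(clean)

for step in range(200):
    clean = torch.rand(8, 1, 64, 64)   # placeholder "ground truth" patches
    noisy = add_noise(clean)           # corresponding low-SNR inputs
    pred = model(noisy)
    loss = loss_fn(pred, clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, the network is applied to new low-quality images; how well its output can be trusted depends on how representative the training pairs were, which is exactly the kind of pitfall the review examines.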
Source code for the experiment described in Box 5 can be found at http://github.com/royerlab/DLDiscovery.
