Loïc Denis
Professor

Image Processing and Artificial Intelligence

Laboratoire Hubert Curien
UMR 5516 CNRS / Université de Saint-Etienne
Bât. F, 18 rue B. Lauras
42 000 Saint-Etienne
France

(+33) 4 77 91 57 66

loic dot denis at univ-st-etienne dot fr

and

TELECOM Saint-Etienne
25 rue R. Annino
42 000 Saint-Etienne
France

Google Scholar

Latest news

March 2023: A post-doc position on "Generative models and representation learning for synthetic aperture radar imaging" is open at the University of Saint-Etienne!

September 2022: I now hold a full professor position at the Université Jean Monnet Saint-Etienne. I will develop artificial intelligence and image processing techniques with a special focus on applications in physics: Earth observation (synthetic aperture radar), astrophysics (especially exoplanet detection & characterization), and microscopy for surface and biological sample imaging. I am thrilled to conduct these projects at the Laboratoire Hubert Curien, where cutting-edge research is developed both in optics/photonics and in computer science!

July 2022: Emanuele Dalsasso presented a paper at the EUSAR conference on synthetic aperture radar imaging, on strategies for self-supervised training to suppress speckle in images. He was awarded 2nd place in the Student Paper Award competition for his presentation.
Congratulations Emanuele!

July 2022: We were deeply honored to receive the 2021 IEEE GRSS Symposium Prize Paper Award during the IEEE IGARSS conference held this month in Malaysia for our paper "Exploiting multi-temporal information for improved speckle reduction of Sentinel-1 SAR images by deep learning" (IEEE Xplore link, arXiv version). This paper is the joint work of Emanuele Dalsasso and Inès Meraoumia, who work with Florence Tupin and me on deep neural networks for SAR image restoration.

January 2022: I am serving as an Associate Editor for the IEEE Transactions on Image Processing. More information on the journal can be found here.

December 2021: I am giving a tutorial presentation on speckle reduction for a workshop on High-precision BRDF measurement. The slides of my talk are available here.

November 2021: Are deep neural networks performing magic? Some results achieved by neural nets are truly amazing, but anyone who has trained neural networks knows that making these networks work can be a very tedious task, starting with the building of training sets. In that respect, self-supervised training techniques are a relief since they offer a way to train the networks on raw data, without ground truth. In our recent work "As if by magic: self-supervised training of deep despeckling networks with MERLIN", we show that, somewhat surprisingly, single-look complex SAR images contain all we need to train despeckling networks in a self-supervised fashion. We believe that this may have huge implications for the way networks can be trained to process SAR data.
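To make the idea concrete, here is a minimal sketch (in PyTorch) of a MERLIN-style self-supervised loss. It relies on the fact that, under Goodman's speckle model, the real and imaginary parts of a single-look complex image are decorrelated realizations of the same underlying reflectivity: one part serves as the input, the other as the target. The network `net` and the exact parameterization of its output are assumptions of this sketch, not the precise loss of the paper.

```python
import torch

def merlin_style_loss(net, slc):
    """Self-supervised despeckling loss in the spirit of MERLIN (sketch).

    slc: complex-valued single-look complex image, shape (B, 1, H, W).
    The real and imaginary parts are decorrelated speckle realizations
    of the same reflectivity: the network estimates it from one part and
    the estimate is scored on the other, so no ground truth is needed.
    """
    a, b = slc.real, slc.imag
    v = net(a)  # estimated variance of each component; net must output
                # positive values (e.g., through a softplus activation)
    # Gaussian negative log-likelihood of the held-out component
    return torch.mean(0.5 * torch.log(v) + b ** 2 / (2 * v))
```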

November 2021: Image restoration is an important task in remote sensing. We describe many different approaches that were developed for synthetic aperture radar imaging or hyperspectral imaging in this recent review paper.

March 2020: Olivier Flasseur received an award for his PhD on the detection and reconstruction of weak signals in images. The award acknowledges the excellent research Olivier conducted during his PhD at the University of Saint-Etienne. Olivier developed extremely sensitive detection methods used to search for exoplanets in astronomy or for microscopic objects in biomedical imaging. Congratulations Olivier!

January 2020: Directly imaging exoplanets is an incredible scientific challenge that requires combining a very large telescope, an extremely efficient adaptive optics system (in order to correctly focus most of the light from the host star on a coronagraphic mask), multi-temporal and/or multi-spectral acquisitions, and elaborate data processing techniques. We have been working on the data-processing end lately and are proud to have made significant progress thanks to the PhD work of Olivier Flasseur, the post-doctoral work of Anthony Berdeu, and our ongoing collaboration with the Observatory of Lyon. Our most recent works cover an accurate modeling of integral-field spectrometers for instrumental calibration and multi-spectral reconstruction (the PIC method) and algorithms to detect and characterize exoplanets (the PACO algorithm, its robust version and a multispectral extension).

October 2018: I defended my Habilitation thesis (HDR) on October 26, 2018. More information about the defense is available here (in French), along with the slides of the presentation (in English).

March 2018: Congratulations to Sylvain Lobry! He was awarded the Best PhD Thesis prize by the "Futur et Ruptures" research program of the Fondation Mines-Télécom and the Institut Carnot Télécom et Société numérique. I had the pleasure of co-supervising Sylvain during his master's thesis and his PhD thesis. More information here.

March 2018: We proposed a new method called "PACO" for exoplanet detection by direct imaging. It is described in a paper published in Astronomy and Astrophysics.

Jan. 2018: SAR images in urban areas have many bright points with a characteristic cross-shaped signature. Rémy Abergel developed a very effective method during his postdoc to suppress these crosses and extract meaningful targets: see the preprint (pdf) and the source code. The paper is published in JSTARS.

May 2016: Our paper "NL-SAR: A Unified Nonlocal Framework for Resolution-Preserving (Pol)(In)SAR Denoising" has been awarded the IEEE Geoscience and Remote Sensing Society 2016 Transactions Prize Paper Award. We are very honored to receive this award, which each year distinguishes one paper published in the IEEE Transactions on Geoscience and Remote Sensing.

Sept. 2015: Rahul Mourya received the Best Student Paper Award at EUSIPCO in Nice! Rahul will defend his PhD thesis on shift-variant deblurring in the coming months.

Sept. 2015: Rahul Mourya will present our work on non-smooth optimization at ICIP in Quebec at the end of the month. Our paper was ranked among the top 10% by the ICIP board. Rahul will present both a poster and a "show and tell" demo. Have a look at the poster here.

April 2015: Our paper on shift-variant blur models is now online in the International Journal of Computer Vision: Fast Approximations of Shift-Variant Blur. You can find our preprint here.

March 2014: A review paper on patch-based methods for SAR imaging will appear this summer in the special issue of IEEE Signal Processing Magazine "Recent Advances in Synthetic Aperture Radar Imaging". Read our preprint here.

June 2012: The PhD position on image deblurring is now closed.

May 2012: Our recent paper How to compare noisy patches? Patch similarity beyond Gaussian noise is featured among the most downloaded articles of the International Journal of Computer Vision, with about 900 downloads over the last three months. It can be downloaded for free.

May 2012: Charles Deledalle will receive an award for the best PhD thesis in signal and image processing at the 52nd meeting of the French association for electrical and information engineering (club EEA). This prestigious award is given each year to an outstanding PhD in the field of signal and image processing defended in France. Congratulations Charles!!!
You can find Charles' PhD thesis on his webpage, and our papers in the publications section.

Apr. 2012: We have an open (now closed) PhD position: "Image deblurring under space-variant blur" at the Hubert Curien laboratory and the Observatory of Lyon. Interested students can contact me by e-mail.

Feb. 2012: How to compare noisy patches? This is the question we try to answer in our recent paper published in the International Journal of Computer Vision (doi). A preprint version is available on HAL in pdf.

Sept. 2011: The poster I presented at IEEE ICIP in Brussels on shift-variant deblurring is available here (pdf).

Sept. 2011: I now have an associate professor position at the University of Saint-Etienne. I teach at TELECOM Saint-Etienne, an engineering school in electrical engineering. My research focuses on image restoration and reconstruction, with applications ranging from microscopy and optical metrology to SAR imaging and astronomy.

Apr. 2011: We will present two papers at the next ICIP conference, in September 2011 in Brussels.

Mar. 2011: Florence Tupin has written a nice article in the latest issue of the IEEE Geoscience and Remote Sensing Newsletter describing recent progress in image processing of SAR data (link, pdf).

Oct. 2010: Charles Deledalle received the best student paper award at ICIP 2010 in Hong Kong for his fully automatic method to denoise images corrupted by Poisson noise. The method compares noisy patches and pre-filtered patches to define adaptive weights that preserve edges and structures. Charles' denoising method improves on state-of-the-art denoising techniques based on total variation minimization or wavelets. The paper is available here: "Poisson NL-Means: Unsupervised non-local means for Poisson noise," C. Deledalle, F. Tupin, L. Denis, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract:
An extension of the non local (NL) means is proposed for images damaged by Poisson noise. The proposed method is guided by the noisy image and a pre-filtered image and is adapted to the statistics of Poisson noise. The influence of both images can be tuned using two filtering parameters. We propose an automatic setting to select these parameters based on the minimization of the estimated risk (mean square error). This selection uses an estimator of the MSE for NL means with Poisson noise and Newton's method to find the optimal parameters in few iterations.
), and the slides of his presentation are here.

Oct. 2010: At ICIP 2010 in Hong Kong, I presented a paper on image denoising using an image decomposition approach (bounded variations + sparse component). We showed that exact discrete minimization can be obtained with graph cuts for TV+L0 decomposition models. The paper is available here: "Exact discrete minimization for TV+L0 image decomposition models," L. Denis, F. Tupin, X. Rondeau, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract:
Penalized maximum likelihood denoising approaches seek a solution that fulfills a compromise between data fidelity and agreement with a prior model. Penalization terms are generally chosen to enforce smoothness of the solution and to reject noise. The design of a proper penalization term is a difficult task as it has to capture image variability. Image decomposition into two components of different nature, each given a different penalty, is a way to enrich the modeling. We consider the decomposition of an image into a component with bounded variations and a sparse component. The corresponding penalization is the sum of the total variation of the first component and the L0 pseudo-norm of the second component. The minimization problem is highly non-convex, but can still be globally minimized by a minimum s-t-cut computation on a graph. The decomposition model is applied to synthetic aperture radar image denoising.
), and the slides here.

Aug. 2010: Our study on resolution in holography based on Cramér-Rao lower bounds ("On the single point resolution of on-axis digital holography," C. Fournier, L. Denis, and T. Fournel, J. Opt. Soc. Am. A, 27 (8), 1856-1862, 2010: pdf, doi, abstract:
On-axis digital holography (DH) is becoming widely used for its time-resolved three-dimensional (3D) imaging capabilities. A 3D volume can be reconstructed from a single hologram. DH is applied as a metrological tool in experimental mechanics, biology, and fluid dynamics, and therefore the estimation and the improvement of the resolution are current challenges. However, the resolution depends on experimental parameters such as the recording distance, the sensor definition, the pixel size, and also on the location of the object in the field of view. This paper derives resolution bounds in DH by using estimation theory. The single point resolution expresses the standard deviations on the estimation of the spatial coordinates of a point source from its hologram. Cramér Rao lower bounds give a lower limit for the resolution. The closed-form expressions of the Cramér Rao lower bounds are obtained for a point source located on and out of the optical axis. The influences of the 3D location of the source, the numerical aperture, and the signal-to-noise ratio are studied.
) is featured in Spotlight on Optics.

General information

I now have a full professor position at the University of Saint-Etienne. I teach at TELECOM Saint-Etienne, a graduate school in electrical engineering and computer science. I conduct my research activity at the Laboratoire Hubert Curien, a joint lab between the University and the CNRS. My main interest is in image restoration and reconstruction, with applications ranging from microscopy and optical metrology to SAR imaging and astronomy.

Previous positions:

From 2011 to 2022, I was Associate Professor at the University of Saint-Etienne.

In 2010-2011, I spent a year and a half as a research scientist at the Observatory of Lyon, working on inverse problems in astronomy and biomedical imaging. My position was funded by the French Research Agency (research project "MITIV", led by Eric Thiébaut).

From 2007 to the end of 2009, I was Assistant Professor at the Electrical Engineering Department of the engineering school CPE Lyon. I taught image processing, computer graphics, and computer science. My research focused on image reconstruction/restoration problems that occur in imaging, especially in synthetic aperture radar and digital holography.

In 2006-2007, I worked for one year at Télécom Paristech as a postdoctoral scholar in the Image Processing Team of the Signal and Image Processing Department. My work combined synthetic aperture radar (SAR) and optical images to design algorithms for the automatic extraction of elevation information in urban areas. I focused on radar image denoising with graph cuts. SAR image denoising is a research subject on which I am still working.

Digital holography was the main subject of my PhD thesis, defended in autumn 2006 at the University of Saint-Etienne (France).

Research

Research projects

Detection and characterization of weak signals

2015 - 2020


Context

[Figure: detection map]
The detection of a signal buried under strong noise is an essential task in several applications of image processing, in particular in microscopy and in astronomy.

Key idea(s)

We developed robust estimation methods and approaches that account for the local correlations and the non-stationarities of the noise.

Results

Our algorithms for exoplanet detection by direct imaging achieve a higher sensitivity and are statistically better grounded (control of false alarms, confidence intervals).
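For illustration, the core of such a detection scheme can be written as a generalized matched filter: background statistics (mean and covariance of the pixel values in a patch) are learned from the data and used to whiten the observations before testing for a source. The sketch below is a generic correlated-Gaussian version, not the full PACO algorithm; all names are placeholders.

```python
import numpy as np

def detection_snr(patch, template, mean_bg, cov_bg, eps=1e-6):
    """Matched-filter detection score under a correlated Gaussian background.

    patch:    observed pixel values in a local patch, shape (K,)
    template: expected source signature within that patch, shape (K,)
    mean_bg, cov_bg: background statistics learned from the data
                     (e.g., by robust estimation over many patches).
    Thresholding the returned SNR at a fixed value yields a constant
    false-alarm rate wherever the statistical model holds.
    """
    c = cov_bg + eps * np.eye(len(patch))         # regularized covariance
    w = np.linalg.solve(c, template)              # whitened template C^{-1} t
    amp = w @ (patch - mean_bg) / (w @ template)  # ML estimate of the flux
    sigma = 1.0 / np.sqrt(w @ template)           # std of that estimate
    return amp / sigma
```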

Related publications:

"Exoplanet detection in angular differential imaging by statistical learning of the non-stationary patch covariances, The PACO algorithm", O. Flasseur, L. Denis, E. Thiébaut, M. Langlois, Astronomy and Astrophysics, 2018 ( doi, abstract Abstract
Context. The detection of exoplanets by direct imaging is an active research topic in astronomy. Even with the coupling of an extreme adaptive-optics system with a coronagraph, it remains challenging due to the very high contrast between the host star and the exoplanets.
Aims. The purpose of this paper is to describe a method, named PACO, dedicated to source detection from angular differential imaging data. Given the complexity of the fluctuations of the background in the datasets, involving spatially- variant correlations, we aim to show the potential of a processing method that learns the statistical model of the background from the data.
Methods. In contrast to existing approaches, the proposed method accounts for spatial correlations in the data. Those correlations and the average stellar speckles are learned locally and jointly to estimate the flux of the (potential) exoplanets. By preventing from subtracting images including the stellar speckles residuals, the photometry is intrinsically preserved. A non-stationary multi-variate Gaussian model of the background is learned. The decision in favor of the presence or the absence of an exoplanet is performed by a binary hypothesis test.
Results. The statistical accuracy of the model is assessed using VLT/SPHERE-IRDIS datasets. It is shown to capture the non-stationarity in the data so that a unique threshold can be applied to the detection maps to obtain consistent detection performance at all angular separations. This statistical model makes it possible to directly assess the false alarm rate, probability of detection, photometric and astrometric accuracies without resorting to Monte-Carlo methods.
Conclusions. PACO offers appealing characteristics: it is parameter-free and photometrically unbiased. The statistical performance in terms of detection capability, photometric and astrometric accuracies can be straightforwardly assessed. A fast approximate version of the method is also described to process large amounts of data from exoplanets search surveys.
).
 
"Robustness to bad frames in angular differential imaging: a local weighting approach", O Flasseur, L Denis, E Thiébaut, M Langlois, Astronomy and Astrophysics, 2020.
 
"ExPACO: detection of an extended pattern under nonstationary correlated noise by patch covariance modeling," O Flasseur, L Denis, E Thiébaut, T Olivier, C Fournier, EUSIPCO, 2019.
 
"Exoplanet detection in angular differential imaging by statistical learning of the non-stationary patch covariances, The PACO algorithm", O. Flasseur, L. Denis, E. Thiébaut, M. Langlois, Astronomy and Astrophysics, 2018.
 
"Robust Object Characterization from Lensless Microscopy Videos," O Flasseur, L Denis, C Fournier, E Thiébaut, EUSIPCO, 2017.
 
"Fast and robust detection of a known pattern in an image," L Denis, A Ferrari, D Mary, L Mugnier, E Thiébaut, EUSIPCO, 2016.
 

Algorithms for 3D reconstruction in SAR tomography

2017 - 2020


Context

SAR tomography is a technique that combines several synthetic aperture radar (SAR) images acquired from slightly different view angles. By analyzing the phase of the echoes received along each trajectory, it is possible to unmix scatterers that are located within the same radar resolution cell (i.e., scatterers that are seen at the same pixel in the SAR images).

Key idea(s)

We proposed to include spatial information in the tomographic inversion, which was typically performed independently at each location of the scene. Jointly performing the inversion with spatial regularization helps to obtain smoother and better-resolved reconstructions. Urban surface segmentation has been addressed using graph-cut techniques from the field of computer vision.
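To give a flavor of the inversion, the sketch below reconstructs the reflectivity profile of one resolution cell with a sparsity prior, using a plain proximal-gradient (ISTA) loop. The published method goes further: it couples neighboring cells through spatial regularization and solves the resulting large-scale problem by variable splitting and an augmented Lagrangian. The steering matrix `A` and all parameter values here are placeholders.

```python
import numpy as np

def tomo_invert(y, A, lam=0.1, n_iter=200):
    """Sparse SAR tomographic inversion for one resolution cell (sketch).

    y: complex measurements across the image stack, shape (M,).
    A: steering matrix mapping reflectivity along height to the
       measurements, shape (M, N).
    Solves min_x ||y - A x||^2 + lam ||x||_1 by proximal gradient (ISTA).
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = x - step * (A.conj().T @ (A @ x - y))   # gradient step
        mag = np.abs(z)                             # complex soft-threshold
        x = np.where(mag > lam * step,
                     (1 - lam * step / np.maximum(mag, 1e-12)) * z, 0)
    return x
```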

Related publications:

"From InSAR to TomoSAR: scatterers unmixing in urban areas. A review of SAR tomography processing techniques", C Rambour, A Budillon, A Johnsy, L Denis, F Tupin, G Schirinzi, IEEE Geoscience and Remote Sensing Magazine.
 
"Introducing Spatial Regularization in SAR Tomography Reconstruction", C Rambour, L Denis, F Tupin, H Oriot, IEEE transactions on Geoscience and Remote Sensing, 2019 ( pdf, doi,abstract Abstract
The resolution achieved by current synthetic aperture radar (SAR) sensors provides a detailed visualization of urban areas. Spaceborne sensors such as TerraSAR-X can be used to analyze large areas at a very high resolution. In addition, repeated passes of the satellite give access to temporal and interferometric information on the scene. Because of the complex 3-D structure of urban surfaces, scatterers located at different heights (ground, building facade, and roof) produce radar echoes that often get mixed within the same radar cells. These echoes must be numerically unmixed in order to get a fine understanding of the radar images. This unmixing is at the core of SAR tomography. SAR tomography reconstruction is generally performed in two steps: 1) reconstruction of the so-called tomogram by vertical focusing, at each radar resolution cell, to extract the complex amplitudes (a 1-D processing) and 2) transformation from radar geometry to ground geometry and extraction of significant scatterers. We propose to perform the tomographic inversion directly in ground geometry in order to enforce spatial regularity in 3-D space. This inversion requires solving a large-scale nonconvex optimization problem. We describe an iterative method based on variable splitting and the augmented Lagrangian technique. Spatial regularizations can easily be included in this generic scheme. We illustrate, on simulated data and a TerraSAR-X tomographic data set, the potential of this approach to produce 3-D reconstructions of urban surfaces.
).
 
"Urban surface reconstruction in SAR tomography by graph-cuts", C Rambour, L Denis, F Tupin, H Oriot, Y Huang, L Ferro-Famil, Computer Vision and Image Understanding, 2019 ( doi,abstract Abstract
SAR (Synthetic Aperture Radar) tomography reconstructs 3-D volumes from stacks of SAR images. High resolution satellites such as TerraSAR-X provide images that can be combined to produce 3-D models. In urban areas, sparsity priors are generally enforced during the tomographic inversion process in order to retrieve the location of scatterers seen within a given radar resolution cell. However, such priors often miss parts of the urban surfaces. Those missing parts are typically regions of flat areas such as ground or rooftops. This paper introduces a surface segmentation algorithm based on the computation of the optimal cut in a flow network. This segmentation process can be included within the 3-D reconstruction framework in order to improve the recovery of urban surfaces. Illustrations on a TerraSAR-X tomographic dataset demonstrate the potential of the approach to produce a 3-D model of urban surfaces such as ground, façades and rooftops.
).
 
"Urban surface recovery through graph-cuts over SAR tomographic reconstruction," C Rambour, L Denis, F Tupin, IEEE JURSE, 2019.
 
"SAR tomography of urban areas: 3D regularized inversion in the scene geometry," C Rambour, L Denis, F Tupin, H Oriot, JM Nicolas, IEEE IGARSS, 2018.
 
"Similarity criterion for SAR tomography over dense urban area," C Rambour, L Denis, F Tupin, JM Nicolas, H Oriot, L Ferro-Famil, C Deledalle, IEEE IGARSS, 2017.
 

Space-variant deblurring in astronomy and biomedical imaging

2010 - 2015


Context

[Figure: deconvolution of Hubble Space Telescope simulations]
Image deblurring is essential to high resolution imaging and is therefore widely used in astronomy, microscopy or computational photography. While shift-invariant blur is modeled by convolution and leads to fast FFT-based algorithms, shift-variant blurring requires models both accurate and fast. When the point spread function (PSF) varies smoothly across the field, these two opposite objectives can be reached by interpolating from a grid of PSF samples.

Key idea(s)

We developed a physically grounded PSF model that leads to a fast and accurate shift-variant blurring operator.

Results

We applied our model to simulations of Hubble Space Telescope data and showed good reconstruction performance.
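The kind of fast approximation studied in this project can be sketched as follows: the field of view is covered by a grid of PSF samples, and the shift-variant blur is approximated by a sum of shift-invariant (FFT-based) convolutions applied to interpolation-weighted copies of the image. This is a generic illustration of the decomposition, not the exact operator derived in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def shift_variant_blur(img, psfs, weights):
    """Approximate shift-variant blur by interpolating a grid of PSFs.

    img:     2-D image.
    psfs:    list of K point-spread functions sampled across the field.
    weights: list of K interpolation maps (same shape as img) summing to
             one at every pixel, e.g. bilinear weights of the PSF grid.
    """
    out = np.zeros(img.shape)
    for psf, w in zip(psfs, weights):
        # each term is a cheap shift-invariant (FFT-based) convolution
        out += fftconvolve(w * img, psf, mode="same")
    return out
```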

Related publications:

"Fast model of space-variant blurring and its application to deconvolution in astronomy," L. Denis, E. Thiébaut, and F. Soulez, IEEE International Conference on Image Processing (ICIP), Brussels, September 2011. (pdf,poster)
 
"Fast approximations of shift-variant blur," L Denis, E Thiébaut, F Soulez, JM Becker, Rahul Mourya, International Journal of Computer Vision, 2015 (pdf, doi, abstract Abstract
Image deblurring is essential in high resolution imaging, e.g., astronomy, microscopy or computational photography. Shift-invariant blur is fully characterized by a single point-spread-function (PSF). Blurring is then modeled by a convolution, leading to efficient algorithms for blur simulation and removal that rely on fast Fourier transforms. However, in many different contexts, blur cannot be considered constant throughout the field-of-view, and thus necessitates to model variations of the PSF with the location. These models must achieve a trade-off between the accuracy that can be reached with their flexibility, and their computational efficiency. Several fast approximations of blur have been proposed in the literature. We give a unified presentation of these methods in the light of matrix decompositions of the blurring operator. We establish the connection between different computational tricks that can be found in the litterature and the physical sense of corresponding approximations in terms of equivalent PSFs, physically-based approximations being preferable. We derive an improved approximation that preserves the same desirable low complexity as other fast algorithms while reaching a minimal approximation error. Comparison of theoretical properties and empirical performances of each blur approximation suggests that the proposed general model is preferable for approximation and inversion of a known shift-variant blur.
).
 

Non-local denoising of synthetic aperture radar images

2008 - present


Context

[Figure: denoising result]
Image denoising is a fundamental low-level task in many applications. Numerous denoising techniques have been proposed, but only a few of them provide a general methodology that applies to different noise models (e.g., additive, multiplicative). This project is concerned with the development of non-local denoising techniques adapted to given noise distributions.

Key idea(s)

Simple denoising techniques replace the noisy values with the Maximum Likelihood (ML) estimate computed over a small neighboring window. They lead to a loss of resolution, i.e., edges are blurred, because the window shape is kept unchanged even over boundaries between homogeneous regions, where it overlaps pixels with very different values. A straightforward improvement is to spatially adapt the window shape based on the homogeneity inside the window. A more powerful approach, derived from the NL-means introduced by Buades et al., is to consider weighted maximum likelihood, with weights set in a data-driven fashion based on the similarity between image patches.
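A minimal sketch of the weight computation for the Gaussian case is shown below; for speckle or Poisson noise, the squared Euclidean patch distance is replaced by a likelihood-based similarity criterion, which is precisely the topic of the publications listed below. Shapes and the filtering parameter `h` are illustrative.

```python
import numpy as np

def nlmeans_weights(candidate_patches, center_patch, h):
    """Data-driven weights from patch similarity (NL-means principle).

    candidate_patches: array of patches around candidate pixels, (N, K).
    center_patch:      patch around the pixel being estimated, (K,).
    For additive Gaussian noise, averaging the candidate pixel values
    with these weights gives the weighted ML estimate.
    """
    d = np.sum((candidate_patches - center_patch) ** 2, axis=1)
    w = np.exp(-d / h ** 2)
    return w / w.sum()
```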

Results

We suggest a general methodology to define the similarity between noisy patches as well as between restored patches. This leads to an iterative algorithm which gives good results on images corrupted with additive Gaussian noise, and outperforms state-of-the-art denoising techniques for images with speckle noise, such as synthetic aperture radar (SAR) images.

More information on Charles Deledalle's webpage.

Main publications:

"Exploiting patch similarity for SAR image processing: the nonlocal paradigm," C Deledalle, L Denis, G Poggi, F Tupin, L Verdoliva, IEEE Signal Processing Magazine, 2014 (pdf, abstract Abstract
Most current SAR systems offer high-resolution images featuring polarimetric, interferometric, multi-frequency, multi-angle, or multi-date information. SAR images however suffer from strong fluctuations due to the speckle phenomenon inherent to coherent imagery. Hence, all derived parameters display strong signal-dependent variance, preventing the full exploitation of such a wealth of information. Even with the abundance of despeckling techniques proposed these last three decades, there is still a pressing need for new methods that can handle this variety of SAR products and efficiently eliminate speckle without sacrificing the spatial resolution. Recently, patch-based filtering has emerged as a highly successful concept in image processing. By exploiting the redundancy between similar patches, it succeeds in suppressing most of the noise with good preservation of texture and thin structures. Extensions of patch-based methods to speckle reduction and joint exploitation of multi-channel SAR images (interferometric, polarimetric, or PolInSAR data) have led to the best denoising performance in radar imaging to date. We give a comprehensive survey of patch-based nonlocal filtering of SAR images, focusing on the two main ingredients of the methods: measuring patch similarity, and estimating the parameters of interest from a collection of similar patches.
).
 
"NL-SAR : a unified Non-Local framework for resolution-preserving (Pol)(In)SAR denoising," C Deledalle, L Denis, F Tupin, A Reigber, M Jäger, IEEE trans Geoscience and Remote Sensing, 53(4): 2021-2038, 2015 (pdf, doi, abstract Abstract
Speckle noise is an inherent problem in coherent imaging systems like synthetic aperture radar. It creates strong intensity fluctuations and hampers the analysis of images and the estimation of local radiometric, polarimetric or interferometric properties. SAR processing chains thus often include a multi-looking (i.e., averaging) filter for speckle reduction, at the expense of a strong resolution loss. Preservation of point-like and fine structures and textures requires to locally adapt the estimation. Non-local means successfully adapt smoothing by deriving data-driven weights from the similarity between small image patches. The generalization of non-local approaches offers a flexible framework for resolution-preserving speckle reduction. We describe a general method, NL-SAR, that builds extended non-local neighborhoods for denoising amplitude, polarimetric and/or interferometric SAR images. These neighborhoods are defined on the basis of pixel similarity as evaluated by multi-channel comparison of patches. Several non-local estimations are performed and the best one is locally selected to form a single restored image with good preservation of radar structures and discontinuities. The proposed method is fully automatic and handles single and multi-look images, with or without interferometric or polarimetric channels. Efficient speckle reduction with very good resolution preservation is demonstrated both on numerical experiments using simulated data and airborne radar images. The source code of a parallel implementation of NL-SAR is released with the paper.
).
 
"NL-InSAR: Non-local interferogram estimation," C. Deledalle, L. Denis, and F. Tupin, IEEE trans. geoscience and remote sensing, 49, 4, 2011. (pdf, doi)
 
"Iterative weighted maximum likelihood denoising with probabilistic patch-based weights," C. Deledalle, L. Denis, and F. Tupin, IEEE trans. image processing, 18, 12, 2009. (pdf, doi)
 

Joint regularization with graph-cuts

2007 - 2010


Context

[Figure: graph]
Graph-cuts are powerful techniques that can be used to solve combinatorial minimization problems in image processing. In the context of image regularization, they can find the global minimum of first-order Markov Random Field energies (i.e., energies involving only single and pair-wise terms) with convex regularization. This discrete minimization is performed by computing a minimum-cost cut over a huge graph. The size of the graph prevents the direct use of such techniques on large (million-pixel) images, and joint regularization cannot be handled with such graph constructions. Combinatorial approaches are nevertheless desirable to minimize energies with a non-convex negative log-likelihood, as in SAR imaging.

Key idea(s)

Approximate minimization can be performed by considering a sequence of sub-problems, each of which can be solved exactly using two-level graphs.

Results

A fast algorithm for approximate discrete minimization is proposed. It is suitable for the minimization of scalar or vector-valued fields with a convex prior. The technique is applied to the joint regularization of amplitude and phase of interferometric SAR images, and to 3D reconstruction from an interferometric SAR pair and an optical image.
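For a flavor of the graph construction, the sketch below computes an exact MAP estimate of a binary image under an Ising/TV prior using the PyMaxflow library (an assumption of this sketch; the multi-level and TV+L0 constructions used in the papers below build on the same minimum s-t cut principle).

```python
import numpy as np
import maxflow  # PyMaxflow, assumed installed (pip install PyMaxflow)

def binary_tv_denoise(noisy, lam=2.0):
    """Global minimizer of sum_i (x_i - y_i)^2 + lam * sum_{i~j} |x_i - x_j|
    over binary images x, computed as a minimum s-t cut."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(noisy.shape)
    g.add_grid_edges(nodes, lam)            # pairwise terms (4-connectivity)
    cost0 = noisy ** 2                      # data cost of label 0
    cost1 = (noisy - 1.0) ** 2              # data cost of label 1
    g.add_grid_tedges(nodes, cost0, cost1)  # terminal edges
    g.maxflow()
    # nodes on the source side of the cut get label 1
    return np.logical_not(g.get_grid_segments(nodes)).astype(float)
```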

Related publications:

"Exact discrete minimization for TV+L0 image decomposition models," L. Denis, F. Tupin, X. Rondeau, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract Abstract
Penalized maximum likelihood denoising approaches seek a solution that fulfills a compromise between data fidelity and agreement with a prior model. Penalization terms are generally chosen to enforce smoothness of the solution and to reject noise. The design of a proper penalization term is a difficult task as it has to capture image variability. Image decomposition into two components of different nature, each given a different penalty, is a way to enrich the modeling. We consider the decomposition of an image into a component with bounded variations and a sparse component. The corresponding penalization is the sum of the total variation of the first component and the L0 pseudo-norm of the second component. The minimization problem is highly non-convex, but can still be globally minimized by a minimum s-t-cut computation on a graph. The decomposition model is applied to synthetic aperture radar image denoising.
, slides)
 
"Joint regularization of phase and amplitude of InSAR data: application to 3D reconstruction," L. Denis, F. Tupin, J. Darbon, and M. Sigelle, IEEE trans. geoscience and remote sensing, 47, 11, 2009. (pdf, doi)
 
"SAR Image Regularization with Fast Approximate Discrete Minimization," L. Denis, F. Tupin, J. Darbon, and M. Sigelle, IEEE trans. image processing, 18, 7, 2009. (pdf, doi)
 

Sparse reconstruction in digital holography

2008 - 2018


Context

[Figure: sparse reconstruction]
Inline digital holograms are classically reconstructed using linear operators to model diffraction. It has long been recognized that such reconstruction operators do not invert the hologram formation operator. Classical linear reconstructions yield images with artifacts such as distortions near the field-of-view boundaries or twin images. When objects located at different depths are reconstructed from a hologram, in-focus and out-of-focus images of all objects superimpose upon each other. Additional processing, such as maximum-of-focus detection, is thus unavoidable for any successful use of the reconstructed volume.

Key idea(s)

We consider inverting the hologram formation model in a Bayesian framework. We suggest the use of a sparsity-promoting prior, which is intrinsically satisfied by the requirements of inline holography, and present a simple iterative algorithm for 3D object reconstruction under sparsity and positivity constraints.

Results

Out-of-focus images of objects are strongly attenuated or absent in reconstructed 3D images. The sparse reconstruction technique makes it possible to reconstruct outside of the field of view.
It is also possible to extend the recent theoretical results on conditions for exact recovery of sparse signals with orthogonal matching pursuit to the case of noisy data. In the case of digital holography of particles, this gives upper bounds on the achievable resolution.

More recently, we extended these approaches to the reconstruction of translucent objects that both absorb light and introduce a phase shift. The hologram formation model is no longer linear, and we consider several regularization terms to promote sparsity and spatial smoothness with sharp edges, and to enforce strict bound constraints.
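A minimal sketch of such an iterative reconstruction, with a sparsity prior and a positivity constraint enforced by proximal-gradient steps, is given below; `propagate` and `backpropagate` stand for a linear diffraction model and its adjoint, and the step size is a placeholder to be set from the norm of the model.

```python
import numpy as np

def holo_reconstruct(hologram, propagate, backpropagate, lam=0.05, n_iter=100):
    """Sparse, nonnegative inversion of hologram formation (sketch).

    Solves min_{x >= 0} ||hologram - propagate(x)||^2 + lam ||x||_1
    by projected/proximal gradient descent.
    """
    x = np.zeros_like(backpropagate(hologram))
    step = 0.5  # must be below 1 / Lipschitz constant of the model
    for _ in range(n_iter):
        residual = propagate(x) - hologram
        x = x - step * backpropagate(residual)  # gradient step
        x = np.maximum(x - lam * step, 0.0)     # soft-threshold, keep x >= 0
    return x
```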

Related publications:

"Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology," F Jolivet, F Momey, L Denis, L Méès, N Faure, N Grosjean, F Pinston, J-L Marié, C Fournier, Optics Express, 2018 (pdf, doi, abstract Abstract
Reconstruction of phase objects is a central problem in digital holography, whose various applications include microscopy, biomedical imaging, and fluid mechanics. Starting from a single in-line hologram, there is no direct way to recover the phase of the diffracted wave in the hologram plane. The reconstruction of absorbing and phase objects therefore requires the inversion of the non-linear hologram formation model. We propose a regularized reconstruction method that includes several physically-grounded constraints such as bounds on transmittance values, maximum/minimum phase, spatial smoothness or the absence of any object in parts of the field of view. To solve the non-convex and non-smooth optimization problem induced by our modeling, a variable splitting strategy is applied and the closed-form solution of the sub-problem (the so-called proximal operator) is derived. The resulting algorithm is efficient and is shown to lead to quantitative phase estimation on reconstructions of accurate simulations of in-line holograms based on the Mie theory. As our approach is adaptable to several in-line digital holography configurations, we present and discuss the promising results of reconstructions from experimental in-line holograms obtained in two different applications: the tracking of an evaporating droplet (size about 100 micrometers) and the microscopic imaging of bacteria (size about 1 micrometer).
).
 
"Inline hologram reconstruction with sparsity constraints," L. Denis, D. Lorenz, E. Thiébaut, C. Fournier, D. Trede, Optics Letters, 34(22), 3475-3477, 2009. (pdf, doi)
 
"Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion," L. Denis, D. Lorenz and D. Trede, Inverse Problems, 25 115017, 2009. (pdf,doi) -- Note that the authors of this paper are ordered alphabetically, the main author is D. Trede.
 

Particle detection in digital holography

2006 - 2008


Context

[Figure: particle detection algorithm]
Digital holography is the method of choice for time-resolved 3D measurement of the location of particles in a flow. These measurements are crucial to validate numerical simulations of turbulence. The 3D location of several particles can be recovered from a single hologram by analyzing their diffraction patterns. Classically, this is performed in two steps: first, a 3D volume is reconstructed by simulating optical diffraction from the hologram; then, the location of maximum focus of each particle image is detected. These approaches suffer from severe bias close to the hologram boundaries, and false detections occur due to multiple focusing or speckle noise.

Key idea(s)

Such drawbacks can be circumvented by following an inverse-problem approach. Instead of reconstructing a 3D volume image from the hologram, particles are directly detected by matching their diffraction patterns on the hologram. An approach similar to the matching pursuit algorithm is proposed, with sub-pixel refinement by local optimization.

Results

The accuracy of the detection is improved by a factor of 5 compared to classical techniques in a standard experimental configuration. Out-of-field detection is demonstrated, even far from the hologram boundaries.
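The greedy detection loop can be sketched as follows. This is illustrative only: `model` stands for a simulator of the diffraction pattern of a particle with given position and radius, and in practice the coarse scan is followed by a local optimization of the particle parameters to reach sub-pixel accuracy.

```python
import numpy as np

def detect_particles(hologram, model, candidates, n_particles):
    """Matching-pursuit-like particle detection on a hologram (sketch).

    model(p):   simulated diffraction pattern for parameters
                p = (x, y, z, radius), same shape as the hologram.
    candidates: list of coarse parameter values to scan.
    """
    residual = hologram.astype(float)
    found = []
    for _ in range(n_particles):
        # coarse scan: candidate whose pattern best matches the residual
        best = max(candidates, key=lambda p: abs(np.vdot(model(p), residual)))
        pattern = model(best)
        # least-squares amplitude of the matched pattern
        alpha = np.vdot(pattern, residual) / np.vdot(pattern, pattern)
        residual = residual - float(alpha) * pattern  # subtract fitted pattern
        found.append(best)
    return found
```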

Related publications:

"Inverse problem approach for particle digital holography: out-of-field particle detection made possible," F. Soulez, L. Denis, E. Thiébaut, C. Fournier, and C. Goepfert, J. Opt. Soc. Am. A, 24 (12), 3708-3716, 2007. (pdf, doi)
 
"Inverse problem approach for particle digital holography: accurate location based on local optimisation," F. Soulez, L. Denis, C. Fournier, E. Thiébaut, and C. Goepfert, J. Opt. Soc. Am. A, 24 (4), 1164-1171, 2007. (pdf, doi)
 
"Digital holography of particles: experimental parameters setting and benefits of the inverse problems approach," J. Gire, L. Denis, C. Fournier, C. Ducottet, E. Thiebaut, and F. Soulez, Meas. Sci. Tech., 19, 2008. (pdf, doi)
 
"Numerical suppression of the twin-image in in-line holography of a volume of micro-objects," L. Denis, C. Fournier, T. Fournel, and C. Ducottet, Meas. Sci. Tech., 19, 2008. (pdf, doi)
 

Extraction of size/orientation information from a hologram

2005 - 2007


Context

[Figure: autocovariance]
Digital holograms of a collection of small objects encode the shape, orientation, and 3D location of all the objects. Recovering the average size or orientation distribution, however, requires 3D reconstruction and analysis, which makes it hardly usable in online applications.

Key idea(s)

We show that the auto-covariance of a hologram can be inverted to directly recover the size and/or orientation information of the objects. Since inversion of the auto-covariance is difficult, only approximate sizes are available. This technique can be useful to get a first guess of the particle size when using the particle detection algorithm described above.

Results

The average size of particles can be recovered from a hologram of several particles spread in a volume. Short fibers have also been studied: it has been shown that information about the distribution of the projected fiber orientations can be recovered by inversion of the auto-covariance of the hologram.

Related publications:

"Direct extraction of mean particle size from a digital hologram," L. Denis, C. Fournier, T. Fournel, C. Ducottet, and D. Jeulin, Applied Optics, 45 (5), 944-952, 2006. (pdf, doi)
 
"Reconstruction of the rose of directions from a digital micro-hologram of fibers," L. Denis, T. Fournel, C. Fournier, and D. Jeulin, J. Microsc., 225 (3), 282-291, 2007. (pdf, doi)
 

Grants

Auvergne-Rhône-Alpes Region "DIAGHOLO"

2018 - 2023


Project leader

Loïc Denis & Corinne Fournier

Partners

Objectives

This project develops methods for the automated processing of images in holographic microscopy and in astronomy.

ANR "ALYS"

2016 - 2020


Project leader

Florence Tupin

Partners

Objectives

This project focuses on SAR tomography, an original and new approach to analyze urban areas. By combining multiple images, this technique makes it possible to discriminate different scatterers inside a vertical resolution cell. The project aims at developing new tomographic methods dedicated to urban areas, with innovation at all steps of the processing chain: data acquisition, tomographic processing, and information extraction. This project is funded by an ANR-ASTRID grant provided by the DGA (Direction Générale de l'Armement).

CNRS "RESSOURCES"

2017 - 2018


Project leader

Loïc Denis

Partners

Objectives

Source detection and reconstruction are essential tasks, in particular in astronomy to study stellar environments, in radio-astronomy to study the large hydrogen clouds that gave birth to the first stars and galaxies, and in holographic microscopy for the analysis and tracking of biological samples (cells, bacteria). This project aims at unifying signal processing methods developed for optical detection and robust processing, joint analysis of multi-variate data, self-calibration and inversion of instrumental degradations. Signal processing progress is key to advancing the performance of the instruments and reaching new sensitivities, improved resolutions, and extended fields of view.
 
More information here

CNRS "DETECTION"

2015 - 2018


Project leader

Loïc Denis

Partners

Objectives

Source detection is a critical task, especially in astronomy for the search and characterization of exoplanets, and in lensless microscopy to track and analyze micrometer-sized objects. This project aims to improve existing methods through the joint processing of multi-variate data (multi-spectral and/or multi-temporal) and to develop optimal processing techniques and their characterization, both in the case of a few sources and in the case of crowded fields (many overlapping sources).
 
More information here

DGA "SAR image regularization by minimization techniques"

2009 - 2011


Project leader

Florence Tupin, Télécom Paristech

Partners

Objectives

This project aims at comparing and applying the most recent image denoising techniques to the domain of synthetic aperture radar imaging.
Many different problems can be mapped to an energy minimization problem. The energy is generally composed of two terms: a data fidelity term (negative log-likelihood) and a regularization term that imposes a prior on the solution, often expressed as local interactions. Several recent developments in image processing are devoted to solving this minimization problem. On the one hand, discrete approaches based on minimum-cut computations on graphs are very efficient techniques; in several cases, they provide a deterministic way to solve non-convex minimization problems exactly. On the other hand, recent progress on variational approaches devoted to convex but non-smooth energies can be applied to handle some edge-preserving regularization models.
One of our goals is to define statistical models adapted to SAR images. We will also focus on the numerical techniques to efficiently apply these models.
 

ANR MITIV "Biomedical Image Reconstruction by Inverse Methods"

2009 - 2014


Project leader

Eric Thiébaut, Observatoire de Lyon

Partners

Objectives

This project aims at developing new models and reconstruction techniques for microscopy, angiography and dynamic tomography. Both theoretical aspects and practical issues, such as automation and medical certification, will be covered thanks to a multidisciplinary team of signal and image reconstruction experts, software developers, microscopists, and cardiologists.
 

Co-workers

I have the pleasure of working with the following colleagues (non-exhaustive list!):

Publications


Papers in refereed journals

2023

[57] "Characterization of stellar companion from high-contrast long-slit spectroscopy data. The EXtraction Of SPEctrum of COmpanion (Exospeco) algorithm", S Thé, E Thiébaut, L Denis, T Wanner, R Thiébaut, M Langlois, and F Soulez, Astronomy and Astrophysics, 2023 (preprint, doi, abstract Abstract
High-contrast long-slit spectrographs can be used to characterize exoplanets. High-contrast long-slit spectroscopic data are however corrupted by stellar leakages which largely dominate other signals and make the process of extracting the companion spectrum very challenging. This paper presents a complete method to calibrate the spectrograph and extract the signal of interest. The proposed method is based on a flexible direct model of the high-contrast long-slit spectroscopic data. This model explicitly accounts for the instrumental response and for the contributions of both the star and the companion. The contributions of these two components and the calibration parameters are jointly estimated by solving a regularized inverse problem. This problem having no closed-form solution, we propose an alternating minimization strategy to effectively find the solution. We have tested our method on empirical long-slit spectroscopic data and by injecting synthetic companion signals in these data. The proposed initialization and the alternating strategy effectively avoid the self-subtraction bias, even for companions observed very close to the coronagraphic mask. Careful modeling and calibration of the angular and spectral dispersion laws of the instrument clearly reduce the contamination by the stellar leakages. In practice, the outputs of the method are mostly driven by a single hyper-parameter which tunes the level of regularization of the companion SED.
).
 
[56] "A Deep-Learning Approach for SAR Tomographic Imaging of Forested Areas", Z Berenger, L Denis, F Tupin, L Ferro-Famil, and Y Huang, IEEE Geoscience and Remote Sensing Letters, 2023 (preprint, doi, abstract Abstract
Synthetic aperture radar tomographic imaging reconstructs the 3-D reflectivity of a scene from a set of coherent acquisitions performed in an interferometric configuration. In forest areas, a high number of elements backscatter the radar signal within each resolution cell. To reconstruct the vertical reflectivity profile, state-of-the-art techniques perform a regularized inversion implemented in the form of iterative minimization algorithms. We show that lightweight neural networks can be trained to perform this inversion with a single feed-forward pass, leading to fast reconstructions that could better scale to the amount of data provided by the future BIOMASS mission. We train our encoder–decoder network using simulated data and validate our technique on real L-band and P-band data.
).
 
[55] "Multitemporal Speckle Reduction With Self-Supervised Deep Neural Networks", I Meraoumia, E Dalsasso, L Denis, R Abergel and F Tupin, IEEE transactions on Geoscience and Remote Sensing, 2023 (preprint, doi, abstract Abstract
Speckle filtering is generally a prerequisite to the analysis of synthetic aperture radar (SAR) images. Tremendous progress has been achieved in the domain of single-image despeckling. Latest techniques rely on deep neural networks to restore the various structures and textures peculiar to SAR images. The availability of time series of SAR images offers the possibility of improving speckle filtering by combining different speckle realizations over the same area. The supervised training of deep neural networks requires ground-truth speckle-free images. Such images can only be obtained indirectly through some form of averaging, by spatial or temporal integration, and are imperfect. Given the potential of very high-quality restoration reachable by multitemporal speckle filtering, the limitations of ground-truth images need to be circumvented. We extend a recent self-supervised training strategy for single-look complex (SLC) SAR images, called MERLIN, to the case of multitemporal filtering. This requires modeling the sources of statistical dependencies in the spatial and temporal dimensions as well as between the real and imaginary components of the complex amplitudes. Quantitative analysis on datasets with simulated speckle indicates a clear improvement of speckle reduction when additional SAR images are included. Our method is then applied to stacks of TerraSAR-X images and shown to outperform competing multitemporal speckle filtering approaches.
).
The code, trained models, and supplementary results are available here.
 

2022

[54] "Speckle reduction in matrix-log domain for synthetic aperture radar imaging", C A Deledalle, L Denis, and F Tupin, Journal of Mathematical Imaging and Vision, 2022 (preprint).
 
[53] "As if by magic: self-supervised training of deep despeckling networks with MERLIN", E Dalsasso, L Denis, and F Tupin, IEEE transactions on Geoscience and Remote Sensing, 2022 (preprint).
 
[52] "Image Restoration for Remote Sensing: Overview and Toolbox", B Rasti, Y Chang, E Dalsasso, L Denis, and P Ghamisi, IEEE magazine on Geoscience and Remote Sensing, 2022 (preprint).
 
[51] "Automatic numerical focus plane estimation in digital holographic microscopy using calibration beads", D Brault, C Fournier, T Olivier, N Faure, S Dixneuf, L Thibon, L Mees, and L Denis, Applied Optics, 2022 (doi).
 

2021

[50] "REXPACO: an algorithm for high contrast reconstruction of the circumstellar environment by angular differential imaging", O Flasseur, S Thé, L Denis, E Thiébaut, M Langlois, Astronomy and Astrophysics, 2021 (pdf, abstract Abstract
Direct imaging is a method of choice to probe the close environment of young stars. Even with the coupling of adaptive optics and coronagraphy, the direct detection of off-axis sources like circumstellar disks and exoplanets remains challenging due to the required high contrast and small angular resolution. Angular differential imaging (ADI) is an observational technique that introduces an angular diversity to help disentangle the signal of off-axis sources from the residual signal of the star in a post-processing step.
While various detection algorithms have been proposed in the last decade to process ADI sequences and reach high contrast for the detection of point-like sources, very few methods are available to reconstruct meaningfull images of extended features such as circumstellar disks. The purpose of this paper is to describe a new post-processing algorithm dedicated to the reconstruction of the spatial distribution of light (total intensity) received from off-axis sources, in particular from circumstellar disks.
Built on the recent PACO algorithm dedicated to the detection of point-like sources, the proposed method is based on the local learning of patch covariances capturing the spatial fluctuations of the stellar leakages. From this statistical modeling, we develop a regularized image reconstruction algorithm (REXPACO) following an inverse problem approach based on a forward image formation model of the off-axis sources in the ADI sequences.
Injections of fake circumstellar disks in ADI sequences from the VLT/SPHERE-IRDIS instrument show that both the morphology and the photometry of the disks are better preserved by REXPACO compared to standard post-processing methods like cADI. In particular, the modeling of the spatial covariances proves usefull in reducing typical ADI artifacts and in better disentangling the signal of these sources from the residual stellar contamination. The application to stars hosting circumstellar disks with various morphologies confirms the ability of REXPACO to produce images of the light distribution with reduced artifacts. Finally, we show how REXPACO can be combined with PACO to disentangle the signal of circumstellar disks from the signal of candidate point-like sources.
REXPACO is a novel post-processing algorithm for reconstructing images of the circumstellar environment from high contrast ADI sequences. It produces numerically deblurred images and exploits the spatial covariances of the stellar leakages and of the noise to efficiently eliminate this nuisance term. The processing is fully unsupervised, all tuning parameters being directly estimated from the data themselves.
).
 
[49] "Narrow River Extraction from SAR Images Using Exogenous Information", N Gasnier, L Denis, R Fjortoft, F Liège, F Tupin, IEEE JSTARS, vol 14, 5720-5734, 2021 ( doi, abstract Abstract
Monitoring of rivers is of major scientific and societal importance, due to the crucial resource they provide to human activities and the threats caused by flood events. Rapid revisit Synthetic Aperture Radar (SAR) sensors such as Sentinel-1 or the future Surface Water and Ocean Topography (SWOT) mission are indispensable tools to achieve all-weather monitoring of water bodies at the global scale. Unfortunately, at the spatial resolution of these sensors, the extraction of narrow rivers is extremely difficult without resorting to exogenous knowledge. This paper introduces an innovative river segmentation method from SAR images using a priori databases such as the Global River Widths from Landsat (GRWL). First, a recently proposed linear structure detector is used to produce a map of likely line structures. Then, a limited number of nodes along the prior river centerline are extracted from the exogenous database, and used to reconstruct the full river centerline from the detection map. Finally, an innovative conditional random field approach is used to delineate accurately the river extent around its centerline. The proposed method has been tested on several Sentinel-1 images and on simulated SWOT data. Both visual and qualitative evaluations demonstrate its efficiency.
).
 
[48] "Joint reconstruction of an in-focus image and of the background signal in in-line holographic microscopy", A Berdeu, T Olivier, F Momey, L Denis, F Pinston, N Faure, C Fournier, Optics and Lasers in Engineering, 2021 ( abstract Abstract
In-line digital holography is a simple yet powerful tool to image absorbing and/or phase objects. However, the holograms of interest are perturbed by the background signal due to unwanted scattering elements located in the optical path.
Using only two holograms of the same object, shifted to different locations, an inverse problems approach is applied to jointly estimate the complex transmittance of the sample and the contribution of the interferent background signal at the sensor plane. Experimental results with stained bacteria are presented and show improved reconstructions of the sample while also accounting for the background contribution.
).
 
[47] "SAR2SAR: A Semi-Supervised Despeckling Algorithm for SAR Images", E Dalsasso, L Denis, F Tupin, IEEE JSTARS, 2021 (pdf, doi,abstract Abstract
Speckle reduction is a key step in many remote sensing applications. By strongly affecting synthetic aperture radar (SAR) images, it makes them difficult to analyze. Due to the difficulty to model the spatial correlation of speckle, a deep learning algorithm with semi-supervision is proposed in this article: SAR2SAR. Multitemporal time series are leveraged and the neural network learns to restore SAR images by only looking at noisy acquisitions. To this purpose, the recently proposed noise2noise framework [1] has been employed. The strategy to adapt it to SAR despeckling is presented, based on a compensation of temporal changes and a loss function adapted to the statistics of speckle. A study with synthetic speckle noise is presented to compare the performances of the proposed method with other state-of-the-art filters. Then, results on real images are discussed, to show the potential of the proposed algorithm. The code is made available to allow testing and reproducible research in this field.
).
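
To make the loss design concrete, here is a minimal Python/numpy sketch (my own illustration, not the paper's released code) of a negative log-likelihood matched to log-transformed single-look speckle under Goodman's model; the function name and interface are hypothetical:

    import numpy as np

    def log_speckle_nll(pred_log_reflectivity, noisy_log_intensity):
        """If I = R * S with S ~ Exp(1) (single-look Goodman speckle), then
        y = log I given x = log R follows a Fisher-Tippett density
        p(y|x) = exp(y - x - exp(y - x)), so, up to an additive constant,
        -log p(y|x) = x - y + exp(y - x)."""
        d = noisy_log_intensity - pred_log_reflectivity
        return np.mean(-d + np.exp(d))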
 
[46] "The SPHERE infrared survey for exoplanets (SHINE). II. Observations, data reduction and analysis, detection performances and early-results ", M. Langlois, R. Gratton, A.-M. Lagrange, P. Delorme, et al. , Astronomy and Astrophysics, 2021 (pdf, doi,abstract Abstract
Over the past decades, direct imaging has confirmed the existence of substellar companions (exoplanets or brown dwarfs) on wide orbits (>10 au) from their host stars. To understand their formation and evolution mechanisms, we have initiated in 2015 the SPHERE infrared survey for exoplanets (SHINE), a systematic direct imaging survey of young, nearby stars to explore their demographics.
We aim to detect and characterize the population of giant planets and brown dwarfs beyond the snow line around young, nearby stars. Combined with the survey completeness, our observations offer the opportunity to constrain the statistical properties (occurrence, mass and orbital distributions, dependency on the stellar mass) of these young giant planets.
In this study, we present the observing and data analysis strategy, the ranking process of the detected candidates, and the survey performance for a subsample of 150 stars, which are representative of the full SHINE sample. The observations were conducted in a homogeneous way from February 2015 to February 2017 with the dedicated ground-based VLT/SPHERE instrument, equipped with the IFS integral field spectrograph and the IRDIS dual-band imager covering a spectral range between 0.9 and 2.3 μm. We used coronagraphic, angular and spectral differential imaging techniques to reach the best detection performance for this study, down to the planetary mass regime.
).
 
[45] "On the Use and Denoising of the Temporal Geometric Mean for SAR Time Series", N Gasnier, L Denis, F Tupin, IEEE GRSL, 2021 (pdf, doi,abstract Abstract
The increasing availability of synthetic aperture radar (SAR) time series creates many opportunities for remote sensing applications, but it can be challenging in terms of amount of data to process. This letter discusses the interest of the geometric mean to average SAR time series. First, the properties of the geometric mean and the arithmetic mean are compared. Then, a speckle-reduction method specifically designed to improve images obtained with the geometric mean is presented. This method is based on an adaptation of the MuLoG framework to take into account the specific distribution of the geometric mean. Finally, applications of this denoised geometric-mean image are presented.
).
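
As a quick illustration of the two averages compared in the letter, here is a numpy sketch (mine, not the paper's code); the comment notes why a dedicated debiasing/denoising step is needed for the geometric mean:

    import numpy as np

    def temporal_means(stack):
        """stack: (T, H, W) array of positive SAR intensities.
        For single-look speckle, E[log S] = -0.5772... (Euler-Mascheroni
        constant), so the geometric mean underestimates the reflectivity
        by a constant factor exp(-0.5772) ~ 0.56 that must be compensated."""
        arithmetic = stack.mean(axis=0)
        geometric = np.exp(np.log(stack).mean(axis=0))
        return arithmetic, geometric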
 

2020

[44] "SAR Image Despeckling by Deep Neural Networks: from a Pre-Trained Model to an End-to-End Training Strategy", E Dalsasso, X Yang, L Denis, F Tupin, W Yang, Remote Sensing, 2020 ( pdf, doi,abstract Abstract
Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) images. Many different schemes have been proposed for the restoration of intensity SAR images. Among the different possible approaches, methods based on convolutional neural networks (CNNs) have recently been shown to reach state-of-the-art performance for SAR image restoration. CNN training requires good training data: many pairs of speckle-free/speckle-corrupted images. This is an issue in SAR applications, given the inherent scarcity of speckle-free images. To handle this problem, this paper analyzes different strategies one can adopt, depending on the speckle removal task one wishes to perform and the availability of multitemporal stacks of SAR data. The first strategy applies a CNN model, trained to remove additive white Gaussian noise from natural images, to a recently proposed SAR speckle removal framework: MuLoG (MUlti-channel LOgarithm with Gaussian denoising). No training on SAR images is performed: the network is readily applied to speckle reduction tasks. The second strategy considers a novel approach to construct a reliable dataset of speckle-free SAR images necessary to train a CNN model. Finally, a hybrid approach is also analyzed: the CNN used to remove additive white Gaussian noise is trained on speckle-free SAR images. The proposed methods are compared to other state-of-the-art speckle removal filters, to evaluate the quality of denoising and to discuss the pros and cons of the different strategies. Along with the paper, we make available the weights of the trained network to allow its usage by other researchers.
).
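
For the dataset-construction strategies, the standard way to obtain speckle-free/speckle-corrupted training pairs is to corrupt clean (e.g., multitemporally averaged) images with synthetic speckle. A hedged numpy sketch under Goodman's fully developed, spatially uncorrelated speckle model (real sensors add spatial correlation, which the paper discusses; names are mine):

    import numpy as np

    rng = np.random.default_rng(0)

    def corrupt_with_speckle(reflectivity, looks=1):
        """Multiplicative speckle: for L looks, the normalized intensity
        follows a Gamma(L, 1/L) distribution (unit mean), reducing to an
        exponential distribution for L = 1."""
        speckle = rng.gamma(shape=looks, scale=1.0 / looks,
                            size=reflectivity.shape)
        return reflectivity * speckle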
 
[43] "PACO ASDI: an algorithm for exoplanet detection and characterization in direct imaging with integral field spectrographs", O Flasseur, L Denis, E Thiébaut, M Langlois, Astronomy and Astrophysics, 637, 2020 ( pdf, doi,abstract Abstract
Context. Exoplanet detection and characterization by direct imaging both rely on sophisticated instruments (adaptive optics and coronagraph) and adequate data processing methods. Angular and spectral differential imaging (ASDI) combines observations at different times and a range of wavelengths in order to separate the residual signal from the host star and the signal of interest corresponding to off-axis sources.
Aims. Very high contrast detection is only possible with an accurate modeling of those two components, in particular of the background due to stellar leakages of the host star masked out by the coronagraph. Beyond the detection of point-like sources in the field of view, it is also essential to characterize the detection in terms of statistical significance and astrometry and to estimate the source spectrum.
Methods. We extend our recent method PACO, based on local learning of patch covariances, in order to capture the spectral and temporal fluctuations of background structures. From this statistical modeling, we build a detection algorithm and a spectrum estimation method: PACO ASDI. The modeling of spectral correlations proves useful both in reducing detection artifacts and obtaining accurate statistical guarantees (detection thresholds and photometry confidence intervals).
Results. An analysis of several ASDI datasets from the VLT/SPHERE-IFS instrument shows that PACO ASDI produces very clean detection maps, for which setting a detection threshold is statistically reliable. Compared to other algorithms routinely used to process SPHERE-IFS data, sensitivity is improved and many false detections can be avoided. PACO ASDI also produces spectrally-smoothed estimates of the source spectra. The analysis of datasets with injected fake planets validates the recovered spectra and the computed confidence intervals.
Conclusions. PACO ASDI is a high-contrast processing algorithm accounting for the spatio-spectral correlations of the data to produce statistically-grounded detection maps and reliable spectral estimations. Point source detections, photometric and astrometric characterizations are fully automatized.
).
 
[42] "PIC: a data reduction algorithm for integral field spectrographs", A Berdeu, F Soulez, L Denis, M Langlois, E Thiébaut, Astronomy and Astrophysics, 635, 2020 ( pdf, doi,abstract Abstract
Context. Improvements in large-format detectors have permitted the development of integral field spectrographs (IFSs) in astronomy. Spectral information for each spatial element of a two-dimensional field of view is obtained thanks to integral field units that spread the spectra on the 2D grid of the sensor.
Aims. Here we aim to solve the inherent issues raised by standard data-reduction algorithms based on direct mapping of the 2D + λ data cube: the spectral cross-talk due to the overlap of neighbouring spectra, the spatial correlations of the noise due to the re-interpolation of the cube on a Cartesian grid, and the artefacts due to the influence of defective pixels.
Methods. The proposed method, Projection, Interpolation, and Convolution (PIC), is based on an “inverse-problems” approach. By accounting for the overlap of neighbouring spectra as well as the spatial extension in a spectrum of a given wavelength, the model inversion reduces the spectral cross-talk while deconvolving the spectral dispersion. Considered as missing data, defective pixels undetected during the calibration are discarded on-the-fly via a robust penalisation of the data fidelity term.
Results. The calibration of the proposed model is presented for the Spectro-Polarimetric High-contrast Exoplanet REsearch instrument (SPHERE). This calibration was applied to extended objects as well as coronagraphic acquisitions dedicated to exoplanet detection or disc imaging. Artefacts due to badly corrected defective pixels or artificial background structures observed in the cube reduced by the SPHERE data reduction pipeline are suppressed while the reconstructed spectra are sharper. This reduces the false detections by the standard exoplanet detection algorithms.
Conclusions. These results show the pertinence of the inverse-problems approach to reduce the raw data produced by IFSs and to compensate for some of their imperfections. Our modelling forms an initial building block necessary to develop methods that can reconstruct and/or detect sources directly from the raw data.
).
 
[41] "From InSAR to TomoSAR: scatterers unmixing in urban areas. A review of SAR tomography processing techniques", C Rambour, A Budillon, A Johnsy, L Denis, F Tupin, G Schirinzi, IEEE Geoscience and Remote Sensing Magazine, 8(2), 2020 ( doi,abstract Abstract
Cross-track synthetic aperture radar (SAR) interferometry is a powerful technique that analyzes the phase shift each pixel undergoes between acquisitions of the same scene with just a slight change of viewpoint. These phase shifts provide information about the topography and, when more than two acquisitions are available at different dates, about possible slow motions along the line of sight, related to subsidence and/or thermal dilation, of the dominant scatterer in the resolution cell. However, interferometric processing exploits phase-only data, does not provide scatterer distribution in the vertical direction, and is not able to separate multiple scatterers lying in the same range/azimuth resolution cell.
).
 
[40] "Robustness to bad frames in angular differential imaging: a local weighting approach", O Flasseur, L Denis, E Thiébaut, M Langlois, Astronomy and Astrophysics, Volume 634, 2020 ( pdf, doi,abstract Abstract
Context. The detection of exoplanets by direct imaging is very challenging. It requires an extreme adaptive-optics (AO) system and a coronagraph as well as suitable observing strategies. In angular differential imaging, the signal-to-noise ratio is improved by combining several observations.
Aims. Due to the evolution of the observation conditions and of the AO correction, the quality of the observations may vary significantly during the observing sequence. It is common practice to reject images of comparatively poor quality. We aim to determine when this selection should be performed and what its impact on detection performance is.
Methods. Rather than discarding a full image, we study the local fluctuations of the signal in each frame and derive per-frame weighting maps. These fluctuations are modeled locally, directly from the data, through the spatio-temporal covariance of small image patches. The weights derived from the temporal variances can be used to improve the robustness of the detection step and reduce estimation errors of both the astrometry and photometry. The impact of bad frames can be analyzed by statistically characterizing the detection and estimation performance.
Results. When used together with a modeling of the spatial covariances (PACO algorithm), these weights improve the robustness of the detection method.
Conclusions. The spatio-temporal modeling of the background fluctuations provides a way to exploit all acquired frames. In the case of bad frames, areas with larger fluctuations are discarded by a weighting strategy and do not corrupt the detection map or the astrometric and photometric estimations. Other areas of better quality are preserved and are included to detect and characterize sources.
).
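
The weighting idea can be caricatured in a few lines of numpy. This is only the inverse-local-variance intuition, not the paper's actual spatio-temporal patch-covariance machinery; the box filter and patch size are my choices:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def frame_weights(stack, patch=7, eps=1e-8):
        """stack: (T, H, W) ADI frames. Estimate, for each frame, the local
        fluctuation level as the patch-averaged squared deviation from the
        temporal mean, and down-weight (rather than discard) areas with
        larger fluctuations."""
        mean = stack.mean(axis=0)
        local_var = np.stack([uniform_filter((f - mean) ** 2, size=patch)
                              for f in stack])
        w = 1.0 / (local_var + eps)
        combined = (w * stack).sum(axis=0) / w.sum(axis=0)
        return w, combined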
 

2019

[39] "Water Detection in SWOT HR Images Based on Multiple Markov Random Fields", S Lobry, L Denis, B Williams, R Fjørtoft, F Tupin, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(11), 2019 ( doi, abstract Abstract
One of the main objectives of the surface water and ocean topography (SWOT) mission, scheduled for launch in 2021, is to measure inland water levels using synthetic aperture radar (SAR) interferometry. A key step toward this objective is to precisely detect water areas. In this article, we present a method to detect water in SWOT images. Water is detected based on the relative brightness of the water and nonwater surfaces. Water brightness varies throughout the swath because of system parameters (i.e., the antenna pattern) as well as phenomenology such as wind speed and surface roughness. To handle the effects of brightness variability, we propose to model the problem with one Markov random field (MRF) on the binary classification map, and two other MRFs to regularize the estimation of the class parameters (i.e., the land and water background power images). Our experiments show that the proposed method is more robust to the expected variations in SWOT images than traditional approaches.
).
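
For readers unfamiliar with MRF-based classification, a toy version of the idea (Gaussian class likelihoods plus a Potts smoothness prior, optimized with iterated conditional modes) is sketched below. The paper's model is richer: three coupled MRFs, spatially varying class parameters, and a different optimizer; all names here are mine:

    import numpy as np
    from scipy.ndimage import convolve

    def icm_binary(image, mu_land, mu_water, sigma, beta=1.0, n_iter=10):
        """Two-class MRF segmentation by iterated conditional modes: at
        each pixel, pick the label minimizing a Gaussian data term plus
        beta times the number of disagreeing 4-neighbors (Potts prior)."""
        labels = (np.abs(image - mu_water) < np.abs(image - mu_land)).astype(float)
        kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
        d_land = (image - mu_land) ** 2 / (2 * sigma ** 2)
        d_water = (image - mu_water) ** 2 / (2 * sigma ** 2)
        for _ in range(n_iter):
            n_water = convolve(labels, kernel, mode='nearest')
            n_land = 4.0 - n_water
            labels = (d_water + beta * n_land < d_land + beta * n_water).astype(float)
        return labels.astype(int)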
 
[38] "From Fienup’s phase retrieval techniques to regularized inversion for in-line holography: tutorial", F Momey, L Denis, T Olivier, C Fournier, JOSA A, 36(12), 2019 ( pdf, doi,abstract Abstract
This paper includes a tutorial on how to reconstruct in-line holograms using an inverse problems approach, starting with modeling the observations, selecting regularizations and constraints, and ending with the design of a reconstruction algorithm. A special focus is placed on the connections between the numerous alternating projections strategies derived from Fienup’s phase retrieval technique and the inverse problems framework. In particular, an interpretation of Fienup’s algorithm as iterates of a proximal gradient descent for a particular cost function is given. Reconstructions from simulated and experimental holograms of micrometric beads illustrate the theoretical developments. The results show that the transition from alternating projections techniques to the inverse problems formulation is straightforward and advantageous.
).
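
As a companion to the tutorial's message, here is a self-contained numpy sketch of the alternating-projections side: an error-reduction loop with an angular-spectrum propagator and a purely absorbing object constraint. It is my illustration of the family of algorithms discussed, not the paper's code; the paper's point is that such iterates can be read as a proximal gradient descent for a particular cost:

    import numpy as np

    def angular_spectrum(field, wavelength, dz, pixel):
        """Free-space propagation of a complex field over a distance dz,
        computed with FFTs (evanescent waves are simply cut off)."""
        ny, nx = field.shape
        fx, fy = np.meshgrid(np.fft.fftfreq(nx, d=pixel),
                             np.fft.fftfreq(ny, d=pixel))
        arg = np.maximum(1.0 - wavelength ** 2 * (fx ** 2 + fy ** 2), 0.0)
        kz = 2 * np.pi / wavelength * np.sqrt(arg)
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * dz * kz))

    def error_reduction(hologram, wavelength, dz, pixel, n_iter=50):
        """Fienup/Gerchberg-Saxton-style loop: enforce the measured modulus
        in the sensor plane, and a real transmittance in [0, 1] in the
        object plane (a purely absorbing object, as a simple example)."""
        amp = np.sqrt(hologram)
        t = np.ones_like(amp, dtype=complex)
        for _ in range(n_iter):
            u = angular_spectrum(t, wavelength, dz, pixel)    # object -> sensor
            u = amp * np.exp(1j * np.angle(u))                # modulus constraint
            t = angular_spectrum(u, wavelength, -dz, pixel)   # back-propagation
            t = np.clip(t.real, 0.0, 1.0).astype(complex)     # object constraint
        return t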
 
[37] "Introducing Spatial Regularization in SAR Tomography Reconstruction", C Rambour, L Denis, F Tupin, H Oriot, IEEE transactions on Geoscience and Remote Sensing, 2019 ( pdf, doi,abstract Abstract
The resolution achieved by current synthetic aperture radar (SAR) sensors provides a detailed visualization of urban areas. Spaceborne sensors such as TerraSAR-X can be used to analyze large areas at a very high resolution. In addition, repeated passes of the satellite give access to temporal and interferometric information on the scene. Because of the complex 3-D structure of urban surfaces, scatterers located at different heights (ground, building facade, and roof) produce radar echoes that often get mixed within the same radar cells. These echoes must be numerically unmixed in order to get a fine understanding of the radar images. This unmixing is at the core of SAR tomography. SAR tomography reconstruction is generally performed in two steps: 1) reconstruction of the so-called tomogram by vertical focusing, at each radar resolution cell, to extract the complex amplitudes (a 1-D processing) and 2) transformation from radar geometry to ground geometry and extraction of significant scatterers. We propose to perform the tomographic inversion directly in ground geometry in order to enforce spatial regularity in 3-D space. This inversion requires solving a large-scale nonconvex optimization problem. We describe an iterative method based on variable splitting and the augmented Lagrangian technique. Spatial regularizations can easily be included in this generic scheme. We illustrate, on simulated data and a TerraSAR-X tomographic data set, the potential of this approach to produce 3-D reconstructions of urban surfaces.
).
 
[36] "Urban surface reconstruction in SAR tomography by graph-cuts", C Rambour, L Denis, F Tupin, H Oriot, Y Huang, L Ferro-Famil, Computer Vision and Image Understanding, 2019 ( doi,abstract Abstract
SAR (Synthetic Aperture Radar) tomography reconstructs 3-D volumes from stacks of SAR images. High resolution satellites such as TerraSAR-X provide images that can be combined to produce 3-D models. In urban areas, sparsity priors are generally enforced during the tomographic inversion process in order to retrieve the location of scatterers seen within a given radar resolution cell. However, such priors often miss parts of the urban surfaces. Those missing parts are typically regions of flat areas such as ground or rooftops. This paper introduces a surface segmentation algorithm based on the computation of the optimal cut in a flow network. This segmentation process can be included within the 3-D reconstruction framework in order to improve the recovery of urban surfaces. Illustrations on a TerraSAR-X tomographic dataset demonstrate the potential of the approach to produce a 3-D model of urban surfaces such as ground, façades and rooftops.
).
 
[35] "Ratio-Based Multitemporal SAR Images Denoising: RABASAR", W Zhao, C Deledalle, L Denis, H Maître, JM Nicolas, F Tupin, IEEE transactions on Geoscience and Remote Sensing, 2019 ( pdf, doi,abstract Abstract
In this paper, we propose a fast and efficient multitemporal despeckling method. The key idea of the proposed approach is the use of the ratio image, provided by the ratio between an image and the temporal mean of the stack. This ratio image is easier to denoise than a single image thanks to its improved stationarity. Besides, temporally stable thin structures are well preserved thanks to the multitemporal mean. The proposed approach can be divided into three steps: 1) estimation of a "superimage" by temporal averaging and possibly spatial denoising; 2) denoising of the ratio between the noisy image of interest and the "superimage"; and 3) computation of the denoised image by remultiplying the denoised ratio by the "superimage". Because of the improved spatial stationarity of the ratio images, denoising these ratio images with a speckle-reduction method is more effective than denoising images from the original multitemporal stack. The amount of data that is jointly processed is also reduced compared to other methods through the use of the "superimage" that sums up the temporal stack. The comparison with several state-of-the-art reference methods shows better results numerically (peak signal-to-noise ratio and structural similarity index) as well as visually on simulated and synthetic aperture radar (SAR) time series. The proposed ratio-based denoising framework successfully extends single-image SAR denoising methods to time series by exploiting the persistence of many geometrical structures.
).
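
The three steps are simple enough to sketch in numpy. The generic spatial filter below is only a stand-in for the speckle-aware denoiser actually used (e.g., a MuLoG-based one); the function names are mine:

    import numpy as np
    from scipy.ndimage import median_filter

    def rabasar_like(stack, t, denoise=lambda r: median_filter(r, size=5)):
        """Ratio-based denoising of frame t of a (T, H, W) intensity stack:
        1) 'superimage' = temporal average; 2) denoise the more stationary
        ratio image frame / superimage; 3) remultiply."""
        super_image = stack.mean(axis=0)
        ratio = stack[t] / (super_image + 1e-12)
        return denoise(ratio) * super_image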
 
[34] "Reconstruction of in-line holograms: combining model-based and regularized inversion", A Berdeu, O Flasseur, L Méès, L Denis, F Momey, T Olivier, N Grosjean, and C Fournier, Optics Express, vol 27 (10), 2019 ( pdf, doi,abstract Abstract
In-line digital holography is a simple yet powerful tool to image absorbing and/or phase objects. Nevertheless, the loss of the phase of the complex wavefront on the sensor can be critical in the reconstruction process. The simplicity of the setup must thus be counterbalanced by dedicated reconstruction algorithms, such as inverse approaches, in order to retrieve the object from its hologram. In the case of simple objects for which the diffraction pattern produced in the hologram plane can be modeled using few parameters, a model fitting algorithm is very effective. However, such an approach fails to reconstruct objects with more complex shapes, and an image reconstruction technique is then needed. The improved flexibility of these methods comes at the cost of a possible loss of reconstruction accuracy. In this work, we combine the two approaches (model fitting and regularized reconstruction) to benefit from their respective advantages. The sample to be reconstructed is modeled as the sum of simple parameterized objects and a complex-valued pixelated transmittance plane. These two components jointly scatter the incident illumination, and the resulting interferences contribute to the intensity on the sensor. The proposed hologram reconstruction algorithm is based on alternating a model fitting step and a regularized inversion step. We apply this algorithm in the context of fluid mechanics, where holograms of evaporating droplets are analyzed. In these holograms, the high contrast fringes produced by each droplet tend to mask the diffraction pattern produced by the surrounding vapor wake. With our method, the droplet and the vapor wake can be jointly reconstructed.
).
 
[33] "Accelerating GMM-based patch priors for image restoration: Three ingredients for a 100x speed-up", S. Parameswaran, C. Deledalle, L. Denis, T. Nguyen, IEEE transactions on Image Processing, 28(2), 2019 ( pdf, doi,abstract Abstract
Image restoration methods aim to recover the underlying clean image from corrupted observations. The Expected Patch Log-likelihood (EPLL) algorithm is a powerful image restoration method that uses a Gaussian mixture model (GMM) prior on the patches of natural images. Although it is very effective for restoring images, its high runtime complexity makes EPLL ill-suited for most practical applications. In this paper, we propose three approximations to the original EPLL algorithm. The resulting algorithm, which we call the fast-EPLL (FEPLL), attains a dramatic speed-up of two orders of magnitude over EPLL while incurring a negligible drop in the restored image quality (less than 0.5 dB). We demonstrate the efficacy and versatility of our algorithm on a number of inverse problems such as denoising, deblurring, super-resolution, inpainting and devignetting. To the best of our knowledge, FEPLL is the first algorithm that can competitively restore a 512×512 pixel image in under 0.5s for all the degradations mentioned above without specialized code optimizations such as CPU parallelization or GPU implementation.
).
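
One of the three ingredients is a low-rank ("flat tail") approximation of the GMM covariances that makes the per-patch Wiener filtering cheap. A hedged numpy sketch of that single ingredient, with my own function name:

    import numpy as np

    def flat_tail_covariance(C, rank):
        """Keep the `rank` leading eigenvectors of covariance C and replace
        the remaining eigenvalues by their average, so that operations like
        inverting C + sigma^2 I only involve a small eigenbasis plus a
        scaled identity."""
        w, V = np.linalg.eigh(C)                      # ascending eigenvalues
        tail = w[:-rank].mean() if rank < len(w) else 0.0
        w_flat = np.full_like(w, tail)
        w_flat[-rank:] = w[-rank:]
        return (V * w_flat) @ V.T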
 

2018

[32] "Exoplanet detection in angular differential imaging by statistical learning of the non-stationary patch covariances, The PACO algorithm", O. Flasseur, L. Denis, E. Thiébaut, M. Langlois, Astronomy and Astrophysics, 2018 ( doi, abstract Abstract
Context. The detection of exoplanets by direct imaging is an active research topic in astronomy. Even with the coupling of an extreme adaptive-optics system with a coronagraph, it remains challenging due to the very high contrast between the host star and the exoplanets.
Aims. The purpose of this paper is to describe a method, named PACO, dedicated to source detection from angular differential imaging data. Given the complexity of the fluctuations of the background in the datasets, involving spatially-variant correlations, we aim to show the potential of a processing method that learns the statistical model of the background from the data.
Methods. In contrast to existing approaches, the proposed method accounts for spatial correlations in the data. Those correlations and the average stellar speckles are learned locally and jointly to estimate the flux of the (potential) exoplanets. Because no image containing residual stellar speckles is subtracted, the photometry is intrinsically preserved. A non-stationary multi-variate Gaussian model of the background is learned. The decision in favor of the presence or the absence of an exoplanet is performed by a binary hypothesis test.
Results. The statistical accuracy of the model is assessed using VLT/SPHERE-IRDIS datasets. It is shown to capture the non-stationarity in the data so that a unique threshold can be applied to the detection maps to obtain consistent detection performance at all angular separations. This statistical model makes it possible to directly assess the false alarm rate, probability of detection, photometric and astrometric accuracies without resorting to Monte-Carlo methods.
Conclusions. PACO offers appealing characteristics: it is parameter-free and photometrically unbiased. The statistical performance in terms of detection capability, photometric and astrometric accuracies can be straightforwardly assessed. A fast approximate version of the method is also described to process large amounts of data from exoplanets search surveys.
).
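
The core computation of PACO at a single location can be caricatured as follows (numpy, my own simplification): learn a local mean and patch covariance from the ADI sequence, then form a matched-filter flux estimate and its signal-to-noise ratio. The real algorithm uses a data-driven shrinkage for the covariance; here the shrinkage amount is a fixed heuristic:

    import numpy as np

    def paco_like_statistic(patches, target, rho=0.1):
        """patches: (T, d) background patches at one location across the
        sequence; target: (T, d) expected off-axis source signature.
        Returns the estimated flux and an SNR-like detection statistic."""
        T, d = patches.shape
        m = patches.mean(axis=0)
        R = patches - m
        C = R.T @ R / T
        C = (1 - rho) * C + rho * np.diag(np.diag(C))   # shrinkage (heuristic)
        Ci = np.linalg.inv(C)
        num = sum(h @ Ci @ r for h, r in zip(target, R))
        den = sum(h @ Ci @ h for h in target)
        return num / den, num / np.sqrt(den)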
 
[31] "Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology," F Jolivet, F Momey, L Denis, L Méès, N Faure, N Grosjean, F Pinston, J-L Marié, C Fournier, Optics Express, 2018 (pdf, doi, abstract Abstract
Reconstruction of phase objects is a central problem in digital holography, whose various applications include microscopy, biomedical imaging, and fluid mechanics. Starting from a single in-line hologram, there is no direct way to recover the phase of the diffracted wave in the hologram plane. The reconstruction of absorbing and phase objects therefore requires the inversion of the non-linear hologram formation model. We propose a regularized reconstruction method that includes several physically-grounded constraints such as bounds on transmittance values, maximum/minimum phase, spatial smoothness or the absence of any object in parts of the field of view. To solve the non-convex and non-smooth optimization problem induced by our modeling, a variable splitting strategy is applied and the closed-form solution of the sub-problem (the so-called proximal operator) is derived. The resulting algorithm is efficient and is shown to lead to quantitative phase estimation on reconstructions of accurate simulations of in-line holograms based on the Mie theory. As our approach is adaptable to several in-line digital holography configurations, we present and discuss the promising results of reconstructions from experimental in-line holograms obtained in two different applications: the tracking of an evaporating droplet (size about 100 micrometers) and the microscopic imaging of bacteria (size about 1 micrometer).
).
 
[30] "PARISAR: Patch-Based Estimation and Regularized Inversion for Multibaseline SAR Interferometry," G Ferraioli, C Deledalle, L Denis, F Tupin, IEEE transactions on Geoscience and Remote Sensing, 2018 (pdf, doi, abstract Abstract
Reconstruction of elevation maps from a collection of synthetic aperture radar (SAR) images obtained in interferometric configuration is a challenging task. Reconstruction methods must overcome two difficulties: the strong interferometric noise that contaminates the data and the 2π phase ambiguities. Interferometric noise requires some form of smoothing among pixels of identical height. Phase ambiguities can be solved, up to a point, by combining linkage to the neighbors and a global optimization strategy to avoid being trapped in local minima. This paper introduces a reconstruction method, PARISAR, that achieves both a resolution-preserving denoising and a robust phase unwrapping (PhU) by combining nonlocal denoising methods based on patch similarities and total-variation regularization. The optimization algorithm, based on graph cuts, identifies the global optimum. Combining patch-based speckle reduction methods and regularization-based PhU requires solving several issues: 1) computational complexity, the inclusion of nonlocal neighborhoods strongly increasing the number of terms involved during the regularization, and 2) adaptation to varying neighborhoods, patch comparison leading to large neighborhoods in homogeneous regions and much sparser neighborhoods in some geometrical structures. PARISAR solves both issues. We compare PARISAR with other reconstruction methods both on numerical simulations and satellite images and show a qualitative and quantitative improvement over state-of-the-art reconstruction methods for multibaseline SAR interferometry.
).
 
[29] "Subpixellic Methods for Sidelobes Suppression and Strong Targets Extraction in Single Look Complex SAR Images ," R Abergel, L Denis, S Ladjal, F Tupin, IEEE journal of selected topics in applied earth observations and remote sensing (JSTARS), 2018 (pdf, doi, source code, abstract Abstract
SAR images display very high dynamic ranges. Man-made structures (like buildings or power towers) produce echoes that are several orders of magnitude stronger than echoes from diffusing areas (vegetated areas) or from smooth surfaces (e.g., roads). The impulse response of the SAR imaging system is thus clearly visible around the strongest targets: sidelobes spread over several pixels, masking the much weaker echoes from the background. To reduce the sidelobes of the impulse response, images are generally spectrally apodized, trading resolution for a reduction of the sidelobes. This apodization procedure (global or shift-variant) introduces spatial correlations in the speckle-dominated areas which complicates the design of estimation methods. This paper describes strategies to cancel sidelobes around point-like targets while preserving the spatial resolution and the statistics of speckle-dominated areas. An irregular sampling grid is built to compensate for the sub-pixel shifts and turn cardinal sines into discrete Diracs. A statistically grounded approach for point-like target extraction is also introduced, thereby providing a decomposition of a single look complex image into two components: a speckle-dominated image and the point-like targets. This decomposition can be exploited to produce images with improved quality (full resolution and suppressed sidelobes) suitable both for visual inspection and further processing (multi-temporal analysis, despeckling, interferometry).
).
 

2017

[28] "MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?," C Deledalle, L Denis, S Tabti, F Tupin, IEEE transactions on Image Processing, 2017 (pdf, doi, source code, abstract Abstract
Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising, and offering several speckle reduction results; method-specific artifacts can then be identified and dismissed by comparing the results.
).
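
For intuition only, here is the crude homomorphic baseline that MuLoG improves upon: log-transform the intensity, compensate the mean bias of log-speckle, apply an off-the-shelf additive-noise denoiser, and map back. MuLoG itself embeds the Gaussian denoiser in an ADMM scheme with the exact likelihood; the Gaussian filter below is just a placeholder:

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.special import digamma

    def homomorphic_despeckle(intensity, looks=1,
                              denoiser=lambda y: gaussian_filter(y, 2.0)):
        """Naive log-domain despeckling: for L-look gamma speckle with unit
        mean, E[log S] = digamma(L) - log(L), which is subtracted to debias
        the log image before denoising."""
        y = np.log(intensity + 1e-12) - (digamma(looks) - np.log(looks))
        return np.exp(denoiser(y))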
 
[27] "Self-calibration for lensless color microscopy," O Flasseur, C Fournier, N Verrier, L Denis, F Jolivet, A Cazier, and T Lépine, Applied Optics, 2017 (doi, abstract Abstract
Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. Because such setups target cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of their dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.
).
 
[26] "Pixel super-resolution in digital holography by regularized reconstruction ," C. Fournier, F. Jolivet, L. Denis, N. Verrier, E. Thiebaut, C. Allier, and T. Fournel, Applied Optics, 2017 (doi, abstract Abstract
In-line digital holography (DH) and lensless microscopy are 3D imaging techniques used to reconstruct the volume of micro-objects in many fields. However, their performances are limited by the pixel size of the sensor. Recently, various pixel super-resolution algorithms for digital holography have been proposed. A hologram with improved resolution was produced from a stack of laterally shifted holograms, resulting in better resolved reconstruction than a single low-resolution hologram. Algorithms for super-resolved reconstructions based on inverse problems approaches have already been shown to improve the 3D reconstruction of opaque spheres. Maximum a posteriori approaches have also been shown capable of reconstructing the object field more accurately and more efficiently, and of extending the usual field of view. Here we propose an inverse problem formulation for DH pixel super-resolution and an algorithm that alternates registration and reconstruction steps. The method is described in detail and used to reconstruct synthetic and experimental holograms of sparse 2D objects. We show that our approach improves both the shift estimation and reconstruction quality. Moreover, the reconstructed field-of-view can be expanded by up to a factor of 3, thus making it possible to multiply the analyzed area ninefold.
).
 

2016

[25] "Multi-temporal SAR image decomposition into strong scatterers, background, and speckle," S Lobry, L Denis, F Tupin, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2016 (pdf, doi, abstract Abstract
Speckle phenomenon in synthetic aperture radar (SAR) images makes their visual and automatic interpretation a difficult task. To reduce strong fluctuations due to speckle, total variation (TV) regularization has been proposed by several authors to smooth out noise without blurring edges. A specificity of SAR images is the presence of strong scatterers having a radiometry several orders of magnitude larger than their surrounding region. These scatterers, especially present in urban areas, limit the effectiveness of TV regularization as they break the assumption of an image made of regions of constant radiometry. To overcome this limitation, we propose in this paper an image decomposition approach. There exist numerous methods to decompose an image into several components, notably to separate textural and geometrical information. These decomposition models are generally recast as energy minimization problems involving a different penalty term for each of the components. In this framework, we propose an energy suitable for the decomposition of SAR images into speckle, a smooth background and strong scatterers, and discuss its minimization using max-flow/min-cut algorithms. We make the connection between the minimization problem considered, involving the L0 pseudo-norm, and the generalized likelihood ratio test used in detection theory. The proposed decomposition jointly performs the detection of strong scatterers and the estimation of the background radiometry. Given the increasing availability of time series of SAR images, we consider the decomposition of a whole time series. New change detection methods can be based on the temporal analysis of the components obtained from our decomposition.
).
 

2015

[24] "Spline driven: high accuracy projectors for 3D tomographic reconstruction from few projections," F Momey, E Thiébaut, C Burnier, L Denis, JM Becker, L Desbat, IEEE transactions on Image Processing, 2015 (pdf, doi, abstract Abstract
Tomographic iterative reconstruction methods need a very thorough modeling of the data. The core of this issue is the projector's design: the numerical model of projection is mostly influenced by the representation of the object of interest, decomposed on a basis of functions, and by the approximations made when projecting onto the detector. Voxel-driven and ray-driven projection models, widely appreciated for their short execution time, are too coarse. The distance-driven model has better accuracy but also relies on strong approximations to project voxel basis functions. Cubic voxel basis functions are anisotropic; modeling their projection accurately is therefore computationally expensive. Smoother and more isotropic basis functions both better represent continuous functions and provide simpler projectors. This consideration has led to the development of spherically symmetric volume elements, called blobs. Their isotropy set apart, blobs are often considered too computationally expensive in practice. We propose to use 3D B-splines, which are smooth piecewise polynomials, as basis functions. When the degree of these polynomials increases, their isotropy improves and projections can be computed regardless of their orientation. Thanks to their separability, very efficient algorithms can be used to decompose an image on B-spline basis functions. We approximate the projection of B-spline basis functions with a 2D separable model. The degree and the sampling of the B-splines can be chosen according to a tradeoff between approximation quality and computational complexity. We show on numerical experiments that, with our accurate projector, the number of projections can be reduced while preserving a similar reconstruction quality. Used with cubic B-splines, our projector requires just twice as many operations as a model involving voxel basis functions. High accuracy projectors can enhance the resolution of existing systems, or can reduce the number of projections required to reach a given resolution, potentially reducing the dose absorbed by the patient.
).
 
[23] "Fast approximations of shift-variant blur," L Denis, E Thiébaut, F Soulez, JM Becker, Rahul Mourya, International Journal of Computer Vision, 2015 (pdf, doi, abstract Abstract
Image deblurring is essential in high resolution imaging, e.g., astronomy, microscopy or computational photography. Shift-invariant blur is fully characterized by a single point-spread-function (PSF). Blurring is then modeled by a convolution, leading to efficient algorithms for blur simulation and removal that rely on fast Fourier transforms. However, in many different contexts, blur cannot be considered constant throughout the field-of-view, which necessitates modeling variations of the PSF with the location. These models must achieve a trade-off between the accuracy allowed by their flexibility and their computational efficiency. Several fast approximations of blur have been proposed in the literature. We give a unified presentation of these methods in the light of matrix decompositions of the blurring operator. We establish the connection between different computational tricks that can be found in the literature and the physical sense of the corresponding approximations in terms of equivalent PSFs, physically-based approximations being preferable. We derive an improved approximation that preserves the same desirable low complexity as other fast algorithms while reaching a minimal approximation error. Comparison of theoretical properties and empirical performances of each blur approximation suggests that the proposed general model is preferable for approximation and inversion of a known shift-variant blur.
).
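
The family of fast approximations analyzed in the paper can be written generically as a weighted sum of shift-invariant convolutions with local PSFs. A short numpy/scipy sketch of that generic form (the paper's contribution lies in how the interpolation weights are chosen; here they must be supplied by the caller and should sum to one at every pixel):

    import numpy as np
    from scipy.signal import fftconvolve

    def shift_variant_blur(image, psfs, weights):
        """Approximate a shift-variant blur as y = sum_k w_k * (h_k ** x),
        with local PSFs h_k and smooth interpolation weight maps w_k.
        Placing the weights before or after the convolution yields the
        different variants compared in the paper."""
        out = np.zeros(image.shape, dtype=float)
        for h, w in zip(psfs, weights):
            out += w * fftconvolve(image, h, mode='same')
        return out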
 
[22] "NL-SAR : a unified Non-Local framework for resolution-preserving (Pol)(In)SAR denoising," C Deledalle, L Denis, F Tupin, A Reigber, M Jäger, IEEE trans Geoscience and Remote Sensing, 53(4): 2021-2038, 2015 (pdf, doi, abstract Abstract
Speckle noise is an inherent problem in coherent imaging systems like synthetic aperture radar. It creates strong intensity fluctuations and hampers the analysis of images and the estimation of local radiometric, polarimetric or interferometric properties. SAR processing chains thus often include a multi-looking (i.e., averaging) filter for speckle reduction, at the expense of a strong resolution loss. Preservation of point-like and fine structures and textures requires locally adapting the estimation. Non-local means successfully adapt smoothing by deriving data-driven weights from the similarity between small image patches. The generalization of non-local approaches offers a flexible framework for resolution-preserving speckle reduction. We describe a general method, NL-SAR, that builds extended non-local neighborhoods for denoising amplitude, polarimetric and/or interferometric SAR images. These neighborhoods are defined on the basis of pixel similarity as evaluated by multi-channel comparison of patches. Several non-local estimations are performed and the best one is locally selected to form a single restored image with good preservation of radar structures and discontinuities. The proposed method is fully automatic and handles single and multi-look images, with or without interferometric or polarimetric channels. Efficient speckle reduction with very good resolution preservation is demonstrated both on numerical experiments using simulated data and airborne radar images. The source code of a parallel implementation of NL-SAR is released with the paper.
).
 

2014

[21] "Exploiting patch similarity for SAR image processing: the nonlocal paradigm," C Deledalle, L Denis, G Poggi, F Tupin, L Verdoliva, IEEE Signal Processing Magazine, 2014 (pdf, abstract Abstract
Most current SAR systems offer high-resolution images featuring polarimetric, interferometric, multi-frequency, multi-angle, or multi-date information. SAR images however suffer from strong fluctuations due to the speckle phenomenon inherent to coherent imagery. Hence, all derived parameters display strong signal-dependent variance, preventing the full exploitation of such a wealth of information. Even with the abundance of despeckling techniques proposed over the last three decades, there is still a pressing need for new methods that can handle this variety of SAR products and efficiently eliminate speckle without sacrificing the spatial resolution. Recently, patch-based filtering has emerged as a highly successful concept in image processing. By exploiting the redundancy between similar patches, it succeeds in suppressing most of the noise with good preservation of texture and thin structures. Extensions of patch-based methods to speckle reduction and joint exploitation of multi-channel SAR images (interferometric, polarimetric, or PolInSAR data) have led to the best denoising performance in radar imaging to date. We give a comprehensive survey of patch-based nonlocal filtering of SAR images, focusing on the two main ingredients of the methods: measuring patch similarity, and estimating the parameters of interest from a collection of similar patches.
).
 
[20] "Inverse problem approach for the alignment of electron tomographic series," VD Tran, M Moreaud, E Thiebault, L Denis, and JM Becker, Oil and Gas Science and Technology, OGST, 69(2): 279-291, 2014 (pdf, doi, abstract Abstract
In the refining industry, morphological measurements of particles have become an essential part of the characterization of catalyst supports. Through these parameters, one can infer the specific physicochemical properties of the studied materials. One of the main acquisition techniques is electron tomography (or nanotomography). 3D volumes are reconstructed from sets of projections from different angles made by a Transmission Electron Microscope (TEM). This technique provides real three-dimensional information at the nanometric scale. A major issue in this method is the misalignment of the projections used in the reconstruction. The current alignment techniques usually employ fiducial markers such as gold particles for a correct alignment of the images. When the use of markers is not possible, the correlation between adjacent projections is used to align them. However, this method sometimes fails. In this paper, we propose a new method based on the inverse problem approach where a certain criterion is minimized using a variant of the Nelder and Mead simplex algorithm. The proposed approach is composed of two steps. The first step consists of an initial alignment process, which relies on the minimization of a cost function based on robust statistics measuring the similarity of a projection to its previous projections in the series. It reduces strong shifts resulting from the acquisition between successive projections. In the second step, the pre-registered projections are used to initialize an iterative alignment-refinement process which alternates between (i) volume reconstructions and (ii) registrations of measured projections onto simulated projections computed from the volume reconstructed in (i). At the end of this process, we have a correct reconstruction of the volume, the projections being correctly aligned. Our method is tested on simulated data and shown to accurately estimate the translation, rotation and scale of arbitrary transforms. We have successfully tested our method with real projections of different catalyst supports.
).
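
The first (pre-alignment) step can be mimicked with a few lines of scipy: minimize a robust dissimilarity over translation parameters with the Nelder-Mead simplex. This toy version handles translation only, whereas the paper also estimates rotation and scale and alternates with volume reconstructions:

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from scipy.optimize import minimize

    def register_translation(moving, reference):
        """Estimate the 2D shift aligning `moving` onto `reference` by
        Nelder-Mead minimization of a robust (L1) dissimilarity."""
        def cost(p):
            moved = nd_shift(moving, p, order=1, mode='nearest')
            return np.abs(moved - reference).mean()
        return minimize(cost, x0=np.zeros(2), method='Nelder-Mead').x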
 

2013

[19] "Accurate 3D tracking and size measurement of evaporating droplets using in-line digital holography and "inverse problems" reconstruction approach," M. Seifi, C. Fournier, N. Grosjean, L. Méès, JL. Marié, and L. Denis, Optics Express, 21(23), pp. 27964-27980, 2013 (pdf, doi, abstract Abstract
Digital in-line holography was used to study a fast dynamic 3D phenomenon: the evaporation of free-falling diethyl ether droplets. We describe an unsupervised reconstruction algorithm based on an "inverse problems" approach previously developed by our team to accurately reconstruct 3D trajectories and to estimate the droplets' size in a field of view of 7 × 11 × 20 mm³. A first experiment with non-evaporating droplets established that the radius estimates were accurate to better than 0.1 micrometer. With evaporating droplets, the vapor around the droplet distorts the diffraction patterns in the holograms. We showed that areas with the strongest distortions can be discarded using an exclusion mask. We achieved radius estimates accurate to better than 0.5 micrometer for evaporating droplets. Our estimates of the evaporation rate fell within the range predicted by theoretical models.
).
 
[18] "Fast and accurate 3D object recognition directly from digital holograms," M. Seifi, L. Denis, and C. Fournier, J. Opt. Soc. Am. A, 30(11), pp. 2216-2224, 2013 (doi, abstract Abstract
Pattern recognition methods can be used in the context of digital holography to perform the task of object detection, classification, and position extraction directly from the hologram rather than from the reconstructed optical field. These approaches may exploit the differences between the holographic signatures of objects coming from distinct object classes and/or different depth positions. Direct matching of diffraction patterns, however, becomes computationally intractable with increasing variability of objects due to the very high dimensionality of the dictionary of all reference diffraction patterns. We show that most of the diffraction pattern variability can be captured in a lower dimensional space. Good performance for object recognition and localization is demonstrated at a reduced computational cost using a low-dimensional dictionary. The principle of the method is illustrated on a digit recognition problem and on a video of experimental holograms of particles.
).
 
[17] "Exploiting spatial sparsity for multiwavelength imaging in optical interferometry," E Thiébaut, F Soulez, and L Denis, J. Opt. Soc. Am. A, 30(2), pp. 160-170, 2013 (pdf, doi, abstract Abstract
Optical interferometers provide multiple wavelength measurements. In order to fully exploit the spectral and spatial resolution of these instruments, new algorithms for image reconstruction have to be developed. Early attempts to deal with multichromatic interferometric data have consisted in recovering a gray image of the object or independent monochromatic images in some spectral bandwidths. The main challenge is now to recover the full three-dimensional (spatiospectral) brightness distribution of the astronomical target given all the available data. We describe an approach to implement multiwavelength image reconstruction in the case where the observed scene is a collection of point-like sources. We show the gain in image quality (both spatially and spectrally) achieved by globally taking into account all the data instead of dealing with independent spectral slices. This is achieved thanks to a regularization that favors spatial sparsity and spectral grouping of the sources. Since the objective function is not differentiable, we had to develop a specialized optimization algorithm that also accounts for non-negativity of the brightness distribution.
).
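
The regularization that "favors spatial sparsity and spectral grouping" is, in spirit, a joint (group) sparsity penalty across wavelengths. Its proximal operator is a one-liner and is the key building block of the kind of non-smooth optimizer the paper develops (sketch and names mine; the paper additionally enforces non-negativity of the brightness distribution):

    import numpy as np

    def prox_group_l2(x, tau):
        """Group soft-thresholding: proximal operator of tau * sum over
        pixels of ||x[:, pixel]||_2 for a (n_wavelengths, n_pixels) array.
        Each pixel's spectrum is shrunk jointly, so a point source is
        either kept at all wavelengths or removed everywhere."""
        norms = np.linalg.norm(x, axis=0, keepdims=True)
        return x * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)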
 

2012

[16] "Three-dimensional reconstruction of particle holograms: a fast and accurate multiscale approach," M. Seifi, C. Fournier, L. Denis, D. Chareyron, and J.-L. Marié, J. Opt. Soc. Am. A, 29(9), pp. 1808-1817, 2012 (doi, abstract Abstract
In-line digital holography is an imaging technique that is being increasingly used for studying three-dimensional flows. It has been previously shown that very accurate reconstructions of objects could be achieved with the use of an inverse problem framework. Such approaches, however, suffer from higher computational times compared to less accurate conventional reconstructions based on hologram backpropagation. To overcome this computational issue, we propose a coarse-to-fine multiscale approach to strongly reduce the algorithm complexity. We illustrate that an accuracy comparable to that of state-of-the-art methods can be reached while accelerating parameter-space scanning.
).
 
[15] "How to Compare Noisy Patches? Patch Similarity Beyond Gaussian Noise," C. Deledalle, L. Denis, and F. Tupin, International Journal of Computer Vision, 99(1), pp. 86-102, 2012 (pdf, doi, abstract Abstract
Many tasks in computer vision require matching image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level approaches (so-called image-based) compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient to many techniques for image registration, stereo-vision, change detection or denoising. Recent progress in natural image modeling also makes intensive use of patch comparison.
A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or intrinsic dissimilarity. Gaussian noise assumption leads to the classical definition of patch similarity based on the squared differences of intensities. For the case where noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature of image processing, detection theory and machine learning.
By expressing patch (dis)similarity as a detection test under a given noise model, we introduce these criteria along with a new one and discuss their properties. We then assess their performance for different tasks: patch discrimination, image denoising, stereo-matching and motion-tracking under gamma and Poisson noises. The proposed criterion based on the generalized likelihood ratio is shown to be both easy to derive and powerful in these diverse applications.
).
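
For gamma-distributed speckle (L looks), the generalized likelihood ratio criterion has a closed form; below is a small numpy sketch of the resulting patch dissimilarity (my own derivation of the per-pixel term, which vanishes for identical values and plays the role of the squared difference used under Gaussian noise):

    import numpy as np

    def glr_dissimilarity_gamma(patch1, patch2, looks=1):
        """Per pixel, -log GLR = 2L log( (i1 + i2) / (2 sqrt(i1 i2)) ) for
        two intensities i1, i2 under L-look gamma noise; the patch
        dissimilarity is the sum over pixels (intensities must be > 0)."""
        i1 = np.asarray(patch1, dtype=float)
        i2 = np.asarray(patch2, dtype=float)
        return np.sum(2 * looks * np.log((i1 + i2) / (2.0 * np.sqrt(i1 * i2))))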
 
[14] "Testing an in-line digital holography 'inverse method' for the Lagrangian tracking of evaporating droplets in homogeneous nearly isotropic turbulence," D. Chareyron, J.L. Marié, C. Fournier, J. Gire, N. Grosjean, L. Denis, M. Lance and L. Méès, New Journal of Physics, 14 043039, 2012 (doi, pdf, HAL, abstract Abstract
An in-line digital holography technique is tested, the objective being to measure Lagrangian three-dimensional (3D) trajectories and the size evolution of droplets evaporating in high-Re strong turbulence. The experiment is performed in homogeneous, nearly isotropic turbulence (50 × 50 × 50 mm³) created by the meeting of six synthetic jets. The holograms of droplets are recorded with a single high-speed camera at frame rates of 1-3 kHz. While hologram time series are generally processed using a classical approach based on the Fresnel transform, we follow an 'inverse problem' approach leading to improved size and 3D position accuracy and both in-field and out-of-field detection. The reconstruction method is validated with water droplets 60 μm in diameter, released 'on-demand' from a piezoelectric injector, which do not appreciably evaporate in the sample volume. Lagrangian statistics on 1000 reconstructed tracks are presented. Although improved, uncertainty on the depth positions remains higher, as expected in in-line digital holography. An additional filter is used to reduce the effect of this uncertainty when calculating the droplet velocities and accelerations along this direction. The diameters measured along the trajectories remain constant within ±1.6%, thus indicating that accuracy on size is high enough for evaporation studies. The method is then tested with R114 freon droplets at an early stage of evaporation. The striking feature is the presence on each hologram of a thermal wake image, aligned with the relative velocity fluctuations 'seen' by the droplets (visualization of the Lagrangian fluid motion about the droplet). Its orientation compares rather well with that calculated by using a dynamical equation for describing the droplet motion. A decrease of size due to evaporation is measured for the droplet that remains longest in the turbulence domain.
).
 

2011

[13] "NL-InSAR: Non-local interferogram estimation," C. Deledalle, L. Denis, and F. Tupin, IEEE trans. geoscience and remote sensing, 49, 4, 2011 (pdf, doi, abstract Abstract
Interferometric synthetic aperture radar (InSAR) data provides reflectivity, phase difference and coherence images, which are paramount to scene interpretation or low-level processing tasks such as segmentation and 3D reconstruction. These images are estimated in practice from the Hermitian product computed over local windows. These windows lead to biases and resolution losses due to local heterogeneity caused by edges and texture. This paper proposes a non-local approach for the joint estimation of the reflectivity, phase difference and coherence images from an interferometric pair of co-registered single-look complex (SLC) SAR images. Non-local techniques are known to efficiently reduce noise while preserving structures by performing a weighted averaging of similar pixels. Two pixels are considered similar if the surrounding image patches are 'resembling'. Patch-similarity is usually defined as the Euclidean distance between the vectors of graylevels. In this paper a statistically grounded patch-similarity criterion suitable to SLC images is derived. A weighted maximum likelihood estimation of the SAR interferogram is then computed with weights derived in a data-driven way. Weights are defined from intensity and phase difference, and are iteratively refined based both on the similarity between noisy patches and on the similarity of patches from the previous estimate. The efficiency of this new interferogram construction technique is illustrated both qualitatively and quantitatively on synthetic and real data.
).
 

2010

[12] "On the single point resolution of on-axis digital holography," C. Fournier, L. Denis, and T. Fournel, J. Opt. Soc. Am. A, 27 (8), 1856-1862, 2010. (pdf, doi, abstract Abstract
On-axis digital holography (DH) is becoming widely used for its time-resolved three-dimensional (3D) imaging capabilities. A 3D volume can be reconstructed from a single hologram. DH is applied as a metrological tool in experimental mechanics, biology, and fluid dynamics, and therefore the estimation and the improvement of the resolution are current challenges. However, the resolution depends on experimental parameters such as the recording distance, the sensor definition, the pixel size, and also on the location of the object in the field of view. This paper derives resolution bounds in DH by using estimation theory. The single point resolution expresses the standard deviations on the estimation of the spatial coordinates of a point source from its hologram. Cramér-Rao lower bounds give a lower limit for the resolution. The closed-form expressions of the Cramér-Rao lower bounds are obtained for a point source located on and out of the optical axis. The influences of the 3D location of the source, the numerical aperture, and the signal-to-noise ratio are studied.
).
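As a quick reminder for non-specialists (standard estimation theory, written here in my own notation rather than the paper's): for any unbiased estimator of the source coordinates \theta = (x, y, z) from hologram data d, the variance of each coordinate estimate is bounded by

Var(\hat{\theta}_j) \geq [ I(\theta)^{-1} ]_{jj},  with  [ I(\theta) ]_{jk} = E[ \partial_{\theta_j} \log p(d | \theta) \; \partial_{\theta_k} \log p(d | \theta) ],

where I(\theta) is the Fisher information matrix of the hologram likelihood p(d | \theta). The single point resolution studied in the paper corresponds to the square roots of the diagonal terms of this bound.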
 

2009

[11] "Inline hologram reconstruction with sparsity constraints," L. Denis, D. Lorenz, E. Thiébaut, C. Fournier, D. Trede, Optics Letters, 34(22), 3475-3477, 2009. (pdf, doi, abstract Abstract
Inline digital holograms are classically reconstructed using linear operators to model diffraction. It has long been recognized that such reconstruction operators do not invert the hologram formation operator. Classical linear reconstructions yield images with artifacts such as distortions near the field-of-view boundaries or twin images. When objects located at different depths are reconstructed from a hologram, in-focus and out-of-focus images of all objects superimpose upon each other. Additional processing, such as maximum-of-focus detection, is thus unavoidable for any successful use of the reconstructed volume. In this Letter, we consider inverting the hologram formation model in a Bayesian framework. We suggest the use of a sparsity-promoting prior, verified in many inline holography applications, and present a simple iterative algorithm for 3D object reconstruction under sparsity and positivity constraints. Preliminary results with both simulated and experimental holograms are highly promising.
).
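The spirit of the approach can be summarized by a generic sparse inversion problem (a sketch in my notation; the Letter gives the precise algorithm):

\hat{x} = \arg\min_{x \geq 0} \; (1/2) || d - H x ||_2^2 + \lambda || x ||_1,

where d is the recorded hologram, H models hologram formation (diffraction), the positivity constraint and the l1 penalty promote sparse 3D reconstructions, and \lambda balances data fidelity against sparsity.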
 
[10] "Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion," L. Denis, D. Lorenz and D. Trede, Inverse Problems, 25 115017, 2009. (pdf,doi, abstract Abstract
The orthogonal matching pursuit (OMP) is a greedy algorithm to solve sparse approximation problems. Sufficient conditions for exact recovery are known with and without noise. In this paper we investigate the applicability of the OMP for the solution of ill-posed inverse problems in general, and in particular for two deconvolution examples from mass spectrometry and digital holography, respectively. In sparse approximation problems one often has to deal with the redundancy of a dictionary, i.e. the atoms are not linearly independent. However, one expects them to be approximately orthogonal, and this is quantified by the so-called incoherence. This idea cannot be transferred to ill-posed inverse problems since here the atoms are typically far from orthogonal. The ill-posedness of the operator typically causes the correlation of two distinct atoms to become large, i.e. two atoms can look much alike. Therefore, one needs conditions which take the structure of the problem into account and work without the concept of coherence. In this paper we develop results for the exact recovery of the support of noisy signals. In the two examples, mass spectrometry and digital holography, we show that our results lead to practically relevant estimates such that one may check a priori whether the experimental setup guarantees exact deconvolution with OMP. Especially in the example from digital holography, our analysis may be regarded as a first step towards calculating the resolution power of droplet holography.
) -- Note that the authors of this paper are ordered alphabetically; the main author is D. Trede.
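For reference, each OMP iteration (the standard algorithm, independent of this paper's specific analysis) selects the atom most correlated with the current residual, then refits on the enlarged support:

k_t = \arg\max_k | <a_k, r_{t-1}> |,   S_t = S_{t-1} \cup {k_t},   x_t = \arg\min_{supp(x) \subseteq S_t} || d - A x ||_2^2,   r_t = d - A x_t.

The recovery conditions derived in the paper guarantee that these greedy selections pick only atoms of the true support, despite the strong correlations between atoms induced by the ill-posed operator.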
 
[9] "Iterative weighted maximum likelihood denoising with probabilistic patch-based weights," C. Deledalle, L. Denis, and F. Tupin, IEEE trans. image processing, 18, 12, 2009. (pdf, doi, abstract Abstract
Image denoising is an important problem in image processing since noise may interfere with visual or automatic interpretation. This paper presents a new approach for image denoising in the case of a known uncorrelated noise model. The proposed filter is an extension of the nonlocal means (NL means) algorithm introduced by Buades et al., which performs a weighted average of the values of similar pixels. Pixel similarity is defined in NL means as the Euclidean distance between patches (rectangular windows centered on the two pixels being compared). In this paper, a more general and statistically grounded similarity criterion is proposed which depends on the noise distribution model. The denoising process is expressed as a weighted maximum likelihood estimation problem where the weights are derived in a data-driven way. These weights can be iteratively refined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. We show that this iterative process noticeably improves the denoising performance, especially in the case of low signal-to-noise ratio images such as synthetic aperture radar (SAR) images. Numerical experiments illustrate that the technique can be successfully applied to the classical case of additive Gaussian noise but also to cases such as multiplicative speckle noise. The proposed denoising technique seems to improve on state-of-the-art performance in the latter case.
).
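The core estimator can be sketched as follows (a generic form in my notation; the paper derives the exact weights and their iterative refinement): the parameter at pixel x is estimated by weighted maximum likelihood,

\hat{\theta}(x) = \arg\max_\theta \sum_{x'} w(x, x') \log p( v(x') | \theta ),

where v denotes the noisy image and the data-driven weights w(x, x') measure the probabilistic similarity between the patches centered at x and x'. With Gaussian noise and Euclidean patch distances, this reduces to the classical NL means.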
 
[8] "Joint regularization of phase and amplitude of InSAR data: application to 3D reconstruction," L. Denis, F. Tupin, J. Darbon, and M. Sigelle, IEEE trans. geoscience and remote sensing, 47, 11, 2009. (pdf, doi, abstract Abstract
Interferometric synthetic aperture radar (SAR) images suffer from strong noise, and their regularization is often a prerequisite for successful use of their information. Independently of the unwrapping problem, interferometric phase denoising is a difficult task due to shadows and discontinuities. In this paper, we propose to jointly filter phase and amplitude data in a Markovian framework. The regularization term is expressed by the minimization of the total variation and may combine different information (phase, amplitude, optical data). First, a fast and approximate optimization algorithm for vectorial data is briefly presented. Then, two applications are described. The first one is a direct application of this algorithm to 3-D reconstruction in urban areas with very high resolution images. The second one is an adaptation of this framework to the fusion of SAR and optical data. Results on aerial SAR images are presented.
).
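In a generic form (my notation, not the paper's exact energy), the joint regularization amounts to minimizing

E(u) = \sum_x -\log p( v(x) | u(x) ) + \lambda TV(u),

where u gathers the phase and amplitude channels, v is the observed InSAR data, and the total variation TV(u) smooths homogeneous regions while preserving discontinuities; the graph-cut technique of paper [7] below makes this non-convex minimization tractable.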
 
[7] "SAR Image Regularization with Fast Approximate Discrete Minimization," L. Denis, F. Tupin, J. Darbon, and M. Sigelle, IEEE trans. image processing, 18, 7, 2009. (pdf, doi, abstract Abstract
Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the alpha-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to the joint regularization of the amplitude and interferometric phase in urban area SAR images.
).
 

2008

[6] "Digital holography of particles: benefits of the 'inverse problem' approach," J. Gire, L. Denis, C. Fournier, C. Ducottet, E. Thiebaut, and F. Soulez, Meas. Sci. Tech., 19, 2008. (pdf, doi, abstract Abstract
The potential of in-line digital holography to locate and measure the size of particles distributed throughout a volume (in one shot) has been established. These measurements are fundamental for the study of particle trajectories in fluid flow. The most important issues in digital holography today are poor depth positioning accuracy, transverse field-of-view limitations, border artifacts and computational burdens. We recently suggested an 'inverse problem' approach to address some of these issues for the processing of particle digital holograms. The described algorithm improves axial positioning accuracy, gives particle diameters with sub-micrometer accuracy, eliminates border effects and increases the size of the studied volume. This approach for processing particle holograms pushes back some classical constraints. For example, the Nyquist criterion is no longer a restriction for the recording step and the studied volume is no longer confined to the field of view delimited by the sensor borders. In this paper we present a review of the limitations commonly found in digital holography. We then discuss the benefits of the 'inverse problem' approach and the influence of some experimental parameters in this framework.
).
 
[5] "Numerical suppression of the twin-image in in-line holography of a volume of micro-objects," L. Denis, C. Fournier, T. Fournel, and C. Ducottet, Meas. Sci. Tech., 19, 2008. (pdf, doi, abstract Abstract
We address the twin-image problem that arises in holography due to the lack of phase information in intensity measurements. This problem is of great importance in in-line holography, where spatial elimination of the twin image cannot be carried out as in off-axis holography. A unifying description of existing digital suppression methods is given in the light of deconvolution techniques. Holograms of objects spread in 3D cannot be processed through available approaches. We suggest an iterative algorithm and demonstrate its efficacy on both simulated and real data. This method is suitable for enhancing the images reconstructed from a digital hologram of small objects.
).
 

2007

[4] "Inverse problem approach for particle digital holography: out-of-field particle detection made possible," F. Soulez, L. Denis, E. Thiébaut, C. Fournier, and C. Goepfert, J. Opt. Soc. Am. A, 24 (12), 3708-3716, 2007. (pdf, doi, abstract Abstract
We propose a microparticle detection scheme in digital holography. In our inverse problem approach, we estimate the optimal set of particles that best models the observed hologram image. Such a method can deal with data that have missing pixels. By considering the camera as a truncated version of a wider sensor, it becomes possible to detect particles even outside the camera field of view. We tested the performance of our algorithm against simulated and experimental data under dilute particle conditions. With real data, our algorithm can detect particles far from the detector edges, in a working area as large as 16 times the camera field of view. A study based on simulated data shows that, compared with classical methods, our algorithm greatly improves the precision of the estimated particle positions and radii. This precision does not depend on the particle's size or location (i.e., whether inside or outside the detector field of view).
).
 
[3] "Inverse problem approach for particle digital holography: accurate location based on local optimisation," F. Soulez, L. Denis, C. Fournier, E. Thiébaut, and C. Goepfert, J. Opt. Soc. Am. A, 24 (4), 1164-1171, 2007. (pdf, doi, abstract Abstract
We propose a microparticle localization scheme in digital holography. Most conventional digital holography methods are based on the Fresnel transform and suffer from several problems, such as twin-image noise and border effects. To avoid these difficulties, we propose an inverse-problem approach, which yields the optimal particle set that best models the observed hologram image. We solve this global optimization problem by conventional particle detection followed by a local refinement for each particle. Results for both simulated and real digital holograms show a strong improvement in the localization of the particles, particularly along the depth dimension. In our simulations, the position precision is >1 micron rms. Our results also show that the localization precision does not deteriorate for particles near the edge of the field of view.
).
 
[2] "Reconstruction of the rose of directions from a digital micro-hologram of fibers," L. Denis, T. Fournel, C. Fournier, and D. Jeulin, J. Microsc., 225 (3), 282-291, 2007. (pdf, doi, abstract Abstract
Digital holography makes it possible to quickly acquire the interference patterns of objects spread in a volume. The digital processing of the fringes is still too slow to achieve on-line analysis of the holograms. We describe a new approach to obtain information on the direction of illuminated objects. The key idea is to avoid reconstruction of the volume followed by classical three-dimensional image processing. The hologram is processed using a global analysis based on autocorrelation. A fundamental property of diffraction patterns leads to an estimate of the mean geometric covariogram of the object projections. The rose of directions is connected with the mean geometric covariogram through an inverse problem. In the general case, only the two-dimensional rose of the object projections can be reconstructed. The further assumption of unique-size objects gives access, with the knowledge of this size, to the three-dimensional direction information. An iterative scheme is suggested to reconstruct the three-dimensional rose in this special case. Results are provided on holograms of paper fibres.
).
 

2006

[1] "Direct extraction of mean particle size from a digital hologram," L. Denis, C. Fournier, T. Fournel, C. Ducottet, and D. Jeulin, Applied Optics, 45 (5), 944-952, 2006. (pdf, doi, abstract Abstract
Digital holography, which consists of both acquiring the hologram image in a digital camera and numerically reconstructing the information, offers new and faster ways to make the most of a hologram. We describe a new method to determine the rough size of particles in an in-line hologram. This method relies on a property that is specific to interference patterns in Fresnel holograms: Self-correlation of a hologram provides access to size information. The proposed method is both simple and fast and gives results with acceptable precision. It suppresses all the problems related to the numerical depth of focus when large depth volumes are analyzed.
).
 



Conference papers


2019

[58] "Urban surface recovery through graph-cuts over SAR tomographic reconstruction," C Rambour, L Denis, F Tupin, IEEE JURSE, 2019.
 
[57] "SAR Image Despeckling Using Pre-trained Convolutional Neural Network Models," X Yang, L Denis, F Tupin, W Yang, IEEE JURSE, 2019.
 
[56] "ExPACO: detection of an extended pattern under nonstationary correlated noise by patch covariance modeling," O Flasseur, L Denis, E Thiébaut, T Olivier, C Fournier, EUSIPCO, 2019.
 
[55] "The exploitation of the non-local paradigm for SAR 3D reconstruction," G Ferraioli, L Denis, C Deledalle, F Tupin, IEEE IGARSS, 2019.
 
[54] "Ten years of patch-based approaches for SAR imaging: a review," F Tupin, L Denis, C Deledalle, G Ferraioli, IEEE IGARSS, 2019.
 
[53] "From patches to deep learning: combining self-similarity and neural networks for SAR image despeckling," L Denis, C Deledalle, F Tupin, IEEE IGARSS, 2019.
 
[52] "Multi-temporal speckle reduction of polarimetric SAR images: a ratio-based approach," C Deledalle, L Denis, L Ferro-Famil, JM Nicolas, F Tupin, IEEE IGARSS, 2019.
 
[51] "Resolution-preserving speckle reduction of SAR images: the benefits of speckle decorrelation and targets extraction," R Abergel, L Denis, F Tupin, S Ladjal, C Deledalle, A Almansa, IEEE IGARSS, 2019.
 

2018

[50] "Speckle reduction in PolSAR by multi-channel variance stabilization and Gaussian denoising: MuLoG," C Deledalle, L Denis, F Tupin, S Lobry EUSAR, 2018.
 
[49] "RABASAR: A fast ratio based multi-temporal SAR despeckling," W Zhao, C Deledalle, L Denis, H Maître, JM Nicolas, F Tupin, IEEE IGARSS, 2018.
 
[48] "MuLoG : A generic variance-stabilization approach for speckle reduction in SAR interferometry and SAR polarimetry," C Deledalle, L Denis, F Tupin, IEEE IGARSS, 2018.
 
[47] "SAR tomography of urban areas: 3D regularized inversion in the scene geometry," C Rambour, L Denis, F Tupin, H Oriot, JM Nicolas, IEEE IGARSS, 2018.
 
[46] "An unsupervised patch-based approach for exoplanet detection by direct imaging," O Flasseur, L Denis, E Thiébaut, M Langlois, IEEE ICIP, 2018.
 
[45] "Exoplanet detection in angular and spectral differential imaging : local learning of background correlations for improved detections," O Flasseur, L Denis, E Thiébaut, M Langlois, SPIE Astronomical Telescopes+Instrumentation, 2018.
 

2017

[44] "Robust Object Characterization from Lensless Microscopy Videos," O Flasseur, L Denis, C Fournier, E Thiébaut, EUSIPCO, 2017.
 
[43] "Similarity criterion for SAR tomography over dense urban area," C Rambour, L Denis, F Tupin, JM Nicolas, H Oriot, L Ferro-Famil, C Deledalle, IEEE IGARSS, 2017.
 
[42] "Double MRF for water classification in SAR images by joint detection and reflectivity estimation," S Lobry, L Denis, F Tupin, R Fjortoft, IEEE IGARSS, 2017.
 

2016

[41] "Fast and robust exo-planet detection in multi-spectral, multi-temporal data, E Thiébaut, L Denis, L Mugnier, A Ferrari, D Mary, M Langlois, F Cantalloube, N Devaney, SPIE Adaptive Optics Systems, 2016 (abstract Abstract
Exo-planet detection is a signal processing problem that can be addressed by several detection approaches. This paper provides a review of methods from detection theory that can be applied to detect exo-planets in coronagraphic images such as those provided by SPHERE and GPI. In a first part, we recall the basics of signal detection and describe how to derive a fast and robust detection criterion based on a heavy-tail model that can account for outliers in the residuals. In a second part, we derive detectors that handle jointly several wavelengths and exposures and focus on an approach that avoids interpolating the data, thereby preserving the statistics of the original data.
).
 
[40] "Spatially variant PSF modeling and image deblurring," E Thiébaut, L Denis, F Soulez, R Mourya, SPIE Adaptive Optics Systems, 2016 (abstract Abstract
Most current imaging instruments have a spatially variant point spread function (PSF). An optimal exploitation of these instruments requires to account for this non-stationarity. We review existing models of spatially variant PSF with an emphasis on those which are not only accurate but also fast because getting rid of non-stationary blur can only be done by iterative methods.
).
 
[39] "A decomposition model for scatterers change detection in multi-temporal series of SAR images," S Lobry, L Denis, F Tupin, IEEE IGARSS, 2016 (doi, abstract Abstract
This paper presents a method for strong scatterers change detection in synthetic aperture radar (SAR) images based on a decomposition for multi-temporal series. The formulated decomposition model jointly estimates the background of the series and the scatterers. The decomposition model retrieves possible changes in scatterers and the date at which they occurred. An exact optimization method of the model is presented and applied to a TerraSAR-X time series.
).
 
[38] "Fast and robust detection of a known pattern in an image," L Denis, A Ferrari, D Mary, L Mugnier, E Thiébaut, EUSIPCO, 2016 (doi, abstract Abstract
Many image processing applications require detecting a known pattern buried under noise. While maximum correlation can be implemented efficiently using fast Fourier transforms, detection criteria that are robust to the presence of outliers are typically slower by several orders of magnitude. We derive the general expression of a robust detection criterion based on the theory of locally optimal detectors. The expression of the criterion is attractive because it offers a fast implementation based on correlations. Application of this criterion to the Cauchy likelihood gives good detection performance in the presence of outliers, as shown in our numerical experiments. Special attention is given to the proper normalization of the criterion in order to account for truncation at the image borders and noise with non-stationary dispersion.
).
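The construction follows the classical locally optimal detector recipe (generic form; the paper works out the normalization issues mentioned above): the matched-filter correlation \sum_i s_i y_i is replaced by

T(y) = \sum_i s_i \, g(y_i),   with the score function   g(t) = -f'(t) / f(t),

where s is the known pattern and f the noise density. For a Cauchy density f(t) proportional to 1 / (\gamma^2 + t^2), one gets g(t) = 2 t / (\gamma^2 + t^2), which saturates for large |t| and hence down-weights outliers, while T(y) remains computable with fast correlations.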
 

2015

[37] "Semi-blind joint super-resolution/segmentation of 3D trabecular bone images by a TV box approach," F Peyrin, A Toma, B Sixou, L Denis, A Burghardt, JB Pialat, EUSIPCO, 2015 (doi, abstract Abstract
The investigation of bone fragility diseases, such as osteoporosis, is based on the analysis of the trabecular bone microarchitecture. The aim of this paper is to improve in-vivo trabecular bone segmentation and quantification by increasing the resolution of bone micro-architecture images. We propose a semi-blind joint super-resolution/segmentation approach based on a Total Variation regularization with a convex constraint. A comparison with the bicubic interpolation method and the non-blind version of the proposed method is shown. The validation is performed on blurred, noisy and down-sampled 3D synchrotron micro-CT bone images. Good estimates of the blur and of the high resolution image are obtained with the semi-blind approach. Preliminary results are also obtained with the semi-blind approach on real HR-pQCT images.
).
 
[36] "A blind deblurring and image decomposition approach for astronomical image restoration," R Mourya, L Denis, JM Becker, E Thiébaut, EUSIPCO, 2015 (doi, abstract Abstract
With the progress of adaptive optics systems, ground-based telescopes acquire images with improved resolutions. However, compensation for atmospheric turbulence is still partial, which leaves good scope for digital restoration techniques to recover fine details in the images. A blind image deblurring algorithm for a single long-exposure image is proposed, which is an instance of maximum-a-posteriori estimation posed as a constrained non-convex optimization problem. A view of the sky contains mainly two types of sources: point-like sources and smooth extended sources. The algorithm takes this fact into account explicitly by imposing different priors on these components, and recovers two separate maps for them. Moreover, an appropriate prior on the blur kernel is also considered. The resulting optimization problem is solved by alternating minimization. The initial experimental results on synthetically corrupted images are promising: the algorithm is able to restore the fine details in the image and to recover the point spread function.
).
 
[35] "Augmented lagrangian without alternating directions: Practical algorithms for inverse problems in imaging," R Mourya, L Denis, JM Becker, E Thiébaut, IEEE ICIP, 2015 (doi, abstract Abstract
Several problems in signal processing and machine learning can be cast as optimization problems. In many cases, they are large-scale, nonlinear, constrained, and may be nonsmooth in the unknown parameters. There exists a plethora of fast algorithms for smooth convex optimization, but these algorithms are not readily applicable to nonsmooth problems, which has led to a considerable amount of research in this direction. In this paper, we propose a general algorithm for nonsmooth bound-constrained convex optimization problems. Our algorithm is an instance of the so-called augmented Lagrangian approach, for which theoretical convergence is well established for convex problems. The proposed algorithm is a blend of a superlinearly convergent limited-memory quasi-Newton method and a proximal projection operator. The initial promising numerical results for total-variation based image deblurring show that it is as fast as the best existing algorithms in the same class, but with fewer and less sensitive tuning parameters, which makes a huge difference in practice.
).
 
[34] "Combining patch-based estimation and total variation regularization for 3D InSAR reconstruction," C Deledalle, L Denis, G Ferraioli, F Tupin, IEEE IGARSS, 2015 (doi, abstract Abstract
In this paper we propose a new approach for height retrieval using multi-channel SAR interferometry. It combines patch-based estimation and total variation regularization to provide a regularized height estimate. The non-local likelihood term adaptation relies on the NL-SAR method, and the global optimization is realized through graph-cut minimization. The method is evaluated on both synthetic and real experiments.
).
 
[33] "Patch-based SAR image classification: The potential of modeling the statistical distribution of patches with Gaussian mixtures.," S Tabti, C Deledalle, L Denis, F Tupin, IEEE IGARSS, 2015 (doi, abstract Abstract
Due to their coherent nature, SAR (Synthetic Aperture Radar) images are very different from optical satellite images and more difficult to interpret, especially because of speckle noise. Given the increasing amount of available SAR data, efficient image processing techniques are needed to ease the analysis. Classifying this type of image, i.e., selecting an adequate label for each pixel, is a challenging task. This paper describes a supervised classification method based on local features derived from a Gaussian mixture model (GMM) of the distribution of patches. The first classification results are encouraging and suggest an interesting potential of the GMM model for SAR imaging.
).
 
[32] " Sparse + smooth decomposition models for multi-temporal SAR images," S Lobry, L Denis, F Tupin, Multi-Temp, 2015 (doi, abstract Abstract
SAR images have distinctive characteristics compared to optical images: speckle phenomenon produces strong fluctuations, and strong scatterers have radar signatures several orders of magnitude larger than others. We propose to use an image decomposition approach to account for these peculiarities. Several methods have been proposed in the field of image processing to decompose an image into components of different nature, such as a geometrical part and a textural part. They are generally stated as an energy minimization problem where specific penalty terms are applied to each component of the sought decomposition. We decompose temporal series of SAR images into three components: speckle, strong scatterers and background. Our decomposition method is based on a discrete optimization technique by graph-cut. We apply it to change detection tasks.
).
 

2014

[31] "Building invariance properties for dictionaries of SAR image patches," S Tabti, C Deledalle, L Denis, F Tupin, IEEE IGARSS, 2014 (doi, abstract Abstract
Adding invariance properties to a dictionary-based model is a convenient way to reach a high representation capacity while maintaining a compact structure. Compact dictionaries of patches are desirable because they ease semantic interpretation of their elements (atoms) and offer robust decompositions even under strong speckle fluctuations. This paper describes how patches of a dictionary can be matched to a speckled image by accounting for unknown shifts and affine radiometric changes. This procedure is used to build dictionaries of patches specific to SAR images. The dictionaries can then be used for denoising or classification purposes.
).
 
[30] "Total variation super-resolution for 3D trabecular bone micro-structure segmentation," A Toma, L Denis, B Sixou, JB Pialat, F Peyrin, EUSIPCO, 2014 (abstract Abstract
The analysis of the trabecular bone micro-structure plays an important role in studying bone fragility diseases such as osteoporosis. In this context, X-ray CT techniques are increasingly used to image bone micro-architecture. The aim of this paper is to improve the segmentation of the bone micro-structure for further bone quantification. We propose a joint super-resolution/segmentation method based on total variation with a convex constraint. The minimization is performed with the Alternating Direction Method of Multipliers (ADMM). The new method is compared with the bicubic interpolation method and the classical total variation regularization. All methods were tested on blurred, noisy and down-sampled 3D synchrotron micro-CT bone volumes. Improved segmentation is obtained with the proposed joint super-resolution/segmentation method.
).
 
[29] "Modeling the distribution of patches with shift-invariance : application to SAR image restoration," S Tabti, C Deledalle, L Denis, F Tupin, IEEE ICIP, 2014 (doi, abstract Abstract
Patches have proven to be very effective features to model natural images and to design image restoration methods. Given the huge diversity of patches found in images, modeling the distribution of patches is a difficult task. Rather than attempting to accurately model all patches of the image, we advocate that it is sufficient that all pixels of the image belong to at least one well-explained patch. An image is thus described as a tiling of patches that have large prior probability. In contrast to most patch-based approaches, we do not process the image in patch space, and consider instead that patches should match well everywhere they overlap. In order to apply this modeling to the restoration of SAR images, we define a suitable data-fitting term to account for the statistical distribution of speckle. Restoration results are competitive with state-of-the-art SAR despeckling methods.
).
 
[28] "Higher order total variation super-resolution from a single trabecular bone image," A Toma, B Sixou, L Denis, JB Pialat, F Peyrin, IEEE ISBI, 2014 (doi, abstract Abstract
Osteoporosis is characterized by a low bone mass density and deterioration of bone micro-architecture. Despite the considerable progress in Computed Tomography (CT), the investigation of 3D trabecular bone micro-architecture in-vivo remains limited due to a lack of spatial resolution compared to the trabeculae size. To improve the analysis of trabecular bone from in-vivo CT images, we investigate super-resolution methods to estimate a higher spatial resolution image from a single lower spatial resolution image. To solve this inverse problem, we considered two regularization strategies involving first or second order differential operators. The methods are tested on experimental micro-CT trabecular bone images at 20 micrometer which are used as reference images. The first tests suggest that both methods give similar results but total variation regularization implemented with the alternating direction method of multipliers algorithm is more efficient to recover correctly some structural parameters.
).
 

2013

[27] "Template Matching with Noisy Patches: A Contrast-Invariant GLR Test," C. Deledalle, L. Denis, and F. Tupin, EUSIPCO, Marrakech, September 2013 (pdf, abstract Abstract
Matching patches from a noisy image to atoms in a dictionary of patches is a key ingredient to many techniques in image processing and computer vision. By representing with a single atom all patches that are identical up to a radiometric transformation, dictionary size can be kept small, thereby retaining good computational efficiency. Identification of the atom in best match with a given noisy patch then requires a contrast-invariant criterion. In the light of detection theory, we propose a new criterion that ensures contrast invariance and robustness to noise. We discuss its theoretical grounding and assess its performance under Gaussian, gamma and Poisson noises.
).
 
[26] "Fast Diffraction-pattern matching for object detection and recognition in digital holograms," M. Seifi, L. Denis and C. Fournier, EUSIPCO, Marrakech, September 2013 ( abstract Abstract
A digital hologram is a 2-D recording of the diffraction fringes created by 3-D objects under coherent lighting. These fringes encode the shape and 3-D location information of the objects. By simulating re-lighting of the hologram, the 3-D wave field can be reconstructed and a volumetric image of the objects recovered. Rather than performing object detection and identification in this reconstructed volume, we consider direct recognition of diffraction patterns in in-line holograms and show that it leads to superior performance. The huge variability of diffraction patterns with object shape and 3-D location makes diffraction-pattern matching computationally expensive. We suggest the use of a dimensionality reduction technique to circumvent this limitation and show good detection and recognition performance both on simulated and experimental holograms.
).
 
[25] "Dictionary size reduction for a faster object recognition in digital holography," C. Fournier, L. Denis, M. Seifi, and T. Fournel, Workshop on Information Optics (WIO), Tenerife, July 2013 ( abstract Abstract
Pattern matching methods can be used in the context of digital holography to perform object recognition, classification and position extraction directly from the hologram rather than from the reconstructed optical field. These approaches exploit the differences between the objects' holographic signatures caused by the class and depth position of the objects. In this talk we will show that such inter-signature variabilities can be captured efficiently in a lower-dimensional vector space using dimensionality reduction methods.
).
 

2012

[24] "Blind deconvolution of 3D data in wide field fluorescence microscopy," F. Soulez, L. Denis, Y. Tourneur, and E. Thiébaut, IEEE International Symposium on Biomedical Imaging (ISBI), Barcelona, April 2012 (pdf, HAL, abstract Abstract
In this paper we propose a blind deconvolution algorithm for wide-field fluorescence microscopy. The 3D PSF is modeled after a parametrized pupil function. The PSF parameters are estimated jointly with the object in a maximum a posteriori framework. We illustrate the performance of our algorithm on experimental data and show significant resolution improvement, notably along the depth direction. Quantitative measurements on images of calibration beads demonstrate the benefits of blind deconvolution both in terms of contrast and resolution compared to non-blind deconvolution using a theoretical PSF.
).
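The joint estimation can be sketched as a generic MAP problem (my notation; the paper specifies the actual noise model and priors):

( \hat{x}, \hat{\theta} ) = \arg\min_{x \geq 0, \theta} \; -\log p( d | x, \theta ) + R(x),

where d is the observed stack, x the object, \theta the pupil parameters defining the PSF h_\theta, and R a regularization term; such problems are typically solved by alternating between a regularized deconvolution in x with the PSF fixed and a low-dimensional fit of \theta.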
 

2011

[23] "Fast model of space-variant blurring and its application to deconvolution in astronomy," L. Denis, E. Thiébaut, and F. Soulez, IEEE International Conference on Image Processing (ICIP), Brussels, September 2011 (pdf, abstract Abstract
Image deblurring is essential to high resolution imaging and is therefore widely used in astronomy, microscopy or computational photography. While shift-invariant blur is modeled by convolution and leads to fast FFT-based algorithms, shift-variant blurring requires models both accurate and fast. When the point spread function (PSF) varies smoothly across the field, these two opposite objectives can be reached by interpolating from a grid of PSF samples. Several models for smoothly varying PSF co-exist in the literature. We advocate that one of them is both physically-grounded and fast. Moreover, we show that the approximation can be largely improved by tuning the PSF samples and interpolation weights with respect to a given continuous model. This improvement comes without increasing the computational cost of the blurring operator. We illustrate the developed blurring model on a deconvolution application in astronomy. Regularized reconstruction with our model leads to large improvements over existing results.
, poster).
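The blurring model advocated in the paper interpolates between PSF samples; in a generic form (my notation), the shift-variant blur writes as a sum of FFT-friendly convolutions,

H x = \sum_k h_k * ( w_k . x ),

where the h_k are PSF samples on a grid, the w_k are interpolation weight maps summing to one at every pixel, and '.' denotes pixel-wise multiplication; tuning the h_k and w_k with respect to a continuous PSF model improves the approximation without increasing the cost of applying H.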
 
[22] "Patch similarity under non gaussian noise," C. Deledalle, F. Tupin, and L. Denis, IEEE International Conference on Image Processing (ICIP), Brussels, September 2011 (pdf, abstract Abstract
Many tasks in computer vision require to match image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level approaches compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient to many techniques for image registration, stereo-vision, change detection or denoising. A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or intrinsic dissimilarity. Gaussian noise assumption leads to the classical definition of patch similarity based on the squared intensity differences. When the noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature. We review seven of those criteria taken from the fields of image processing, detection theory and machine learning. We discuss their theoretical grounding and provide a numerical comparison of their performance under Gamma and Poisson noises.
).
 
[21] "Influence of speckle filtering of polarimetric SAR data on different classification methods," F. Cao, C. Deledalle, J.-M. Nicolas, F. Tupin, L. Denis, L. Ferro-Famil, E. Pottier, and C. Lopez-Martinez, IEEE International Geoscience and Remote Sensing Symposium, Vancouver, July 2011.
 
[20] "Inverse problem approach for digital hologram reconstruction," C. Fournier, L. Denis, E. Thiébaut, T. Fournel and M. Seifi, SPIE 3D Imaging, Visualization and Display, Orlando, April 2011 (pdf, abstract Abstract
Digital holography (DH) is being increasingly used for its time-resolved three-dimensional (3-D) imaging capabilities. A 3-D volume can be numerically reconstructed from a single 2-D hologram. Applications of DH include experimental mechanics, biology, and fluid dynamics. Improvement and characterization of the 3-D reconstruction algorithms is a current issue. Over the past decade, numerous algorithms for the analysis of holograms have been proposed. They are mostly based on a common approach to hologram processing: digital reconstruction based on the simulation of hologram diffraction. They suffer from artifacts intrinsic to holography: twin-image contamination of the reconstructed images, and image distortions for objects located close to the hologram borders. The analysis of the reconstructed planes is therefore limited by these defects. In contrast to this approach, the inverse problems perspective does not transform the hologram but performs object detection and location by matching a model of the hologram. Information is thus extracted from the hologram in an optimal way, leading to two essential results: an improvement of the axial accuracy and the capability to extend the reconstructed field beyond the physical limit of the sensor size (out-of-field reconstruction). These improvements come at the cost of an increase of the computational load compared to (typically non-iterative) classical approaches.
).
 

2010

[19] "Exact discrete minimization for TV+L0 image decomposition models," L. Denis, F. Tupin, X. Rondeau, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract Abstract
Penalized maximum likelihood denoising approaches seek a solution that fulfills a compromise between data fidelity and agreement with a prior model. Penalization terms are generally chosen to enforce smoothness of the solution and to reject noise. The design of a proper penalization term is a difficult task as it has to capture image variability. Image decomposition into two components of different nature, each given a different penalty, is a way to enrich the modeling. We consider the decomposition of an image into a component with bounded variations and a sparse component. The corresponding penalization is the sum of the total variation of the first component and the L0 pseudo-norm of the second component. The minimization problem is highly non-convex, but can still be globally minimized by a minimum s-t-cut computation on a graph. The decomposition model is applied to synthetic aperture radar image denoising.
, slides).
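The decomposition energy has the following generic form (a sketch; the paper gives the exact likelihood term):

\min_{u, s} \; -\log p( v | u + s ) + \lambda TV(u) + \mu || s ||_0,

where u is the bounded-variation component, s the sparse component, and || s ||_0 counts its non-zero pixels; despite the non-convexity of both the likelihood and the L0 term, a minimum s-t-cut on a suitably built graph yields the global minimizer.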
 
[18] "Poisson NL-Means: Unsupervised non-local means for Poisson noise," C. Deledalle, F. Tupin, L. Denis, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract Abstract
An extension of the non-local (NL) means is proposed for images damaged by Poisson noise. The proposed method is guided by the noisy image and a pre-filtered image and is adapted to the statistics of Poisson noise. The influence of both images can be tuned using two filtering parameters. We propose an automatic setting to select these parameters based on the minimization of the estimated risk (mean square error). This selection uses an estimator of the MSE for NL means with Poisson noise and Newton's method to find the optimal parameters in a few iterations.
, slides).
 
[17] "Polarimetric SAR estimation based on non-local means," C. Deledalle, F. Tupin, L. Denis, IEEE International Geoscience and Remote Sensing Symposium, Honolulu, July 2010 ( abstract Abstract
Recently, non-local approaches have proved relevant for image restoration. Unlike local filters, the non-local (NL) means decrease the noise while preserving the resolution well. In this paper, we suggest the use of a non-local approach to estimate single-look SAR reflectivity images or to construct SAR interferograms. SAR interferogram construction refers to the joint estimation of the reflectivity, phase difference and coherence images from a pair of co-registered single-look complex SAR images. This paper is composed of four sections. Section 2 recalls the non-local (NL) means. Weighted maximum likelihood is then introduced in Section 3 as a generalization of the weighted average performed in the NL means. In Section 4, we propose to set the weights according to the probability of similarity, which provides an extension of the Euclidean distance used in the NL means. Finally, experiments and results are presented in Section 5 to show the efficiency of the proposed approach.
, slides).
 
[16] "A non-local approach for SAR and interferometric SAR denoising," C. Deledalle, F. Tupin, L. Denis, IEEE International Geoscience and Remote Sensing Symposium, Honolulu, July 2010 ( abstract Abstract
During the past few years, the non-local (NL) means have proved their efficiency for image denoising. This approach assumes that there exist enough redundant patterns in images to be used for noise reduction. We suggest that the same assumption can be made for polarimetric synthetic aperture radar (PolSAR) images. In its original version, the NL means deal with additive white Gaussian noise, but several extensions have been proposed for non-Gaussian noise. This paper applies the methodology proposed in [9] to PolSAR data. The proposed filter seems to deal well with the statistical properties of speckle noise and the multi-dimensional nature of such data. Results are given on synthetic and L-Band E-SAR data to validate the proposed method.
, slides).
 
[15] "Glacier monitoring: correlation versus texture tracking," C. Deledalle, J.M. Nicolas, F. Tupin, L. Denis, R. Fallourd, E. Trouvé, IEEE International Geoscience and Remote Sensing Symposium, Honolulu, July 2010.
 
[14] "A Comparative Review of SAR Images Speckle Denoising Methods Based on Functional Minimization," J-F Aujol, E. Bratsolis, J. Darbon, L. Denis, J-M. Nicolas, X. Rondeau, M. Sigelle and F. Tupin, SIAM Conference on Imaging Science, Chicago, 12-14th april 2010 -- This work has been presented by Marc Sigelle.
 

2009

[13] "Resolution in in-line digital holography," C. Fournier, L. Denis, T. Fournel, Workshop on Information Optics, J. Phys.: Conf. Ser., 206, 012025, Paris, France, July 2009 (doi).
 
[12] "Lagrangian measurement of droplet in homogeneous isotropic turbulence by digital in-line holography", D Chareyron, J-L Marié, M. Lance, J. Gire, C. Fournier, L. Denis, 11th International Symposium on Gas-Liquid Two-Phase Flows, FEDSM2009, Vail (Colorado) 2-5 August 2009.
 
[11] "Digital holography measurements of Lagrangian trajectories and diameters of droplets in an isotropic turbulence," D. Chareyron, J.L. Marié, M. Lance, J. Gire, C. Fournier, L. Denis, 6th International Symposium on Multiphase Flow, Heat Mass Transfert and Energy Conversion, Xi'an (China) 11-15 July 2009.
 

2008

[10] "Joint filtering of SAR interferometric phase and amplitude data in urban areas by TV minimization," L. Denis, L, F Tupin, Darbon, et Sigelle, IEEE International Geoscience and Remote Sensing Symposium, Boston, 2008. (pdf, doi)
 
[9] "A regularization approach for InSAR and optical data fusion," L. Denis, Tupin, Darbon, et Sigelle, IEEE International Geoscience and Remote Sensing Symposium, Boston, 2008. (pdf, doi)
 
[8] "SAR amplitude filtering using TV prior and its application to building delineation," L. Denis, Tupin, Darbon, Sigelle, et Tison, 7th European Conference on Synthetic Aperture Radar, Friedrichshafen, Germany, 2008. (pdf, doi)
 
[7] "Signal to noise characterization of an inverse problem-based algorithm for digital inline holography," J. Gire, C. Ducottet, L. Denis, E. Thiebaut, F. Soulez, Proceedings of the International Symposium on Flow Visualization, (CDROM), S39:ID226. Nice: JP Prenel - Y Bailly, 2008. (pdf)
 

2007

[6] "Inverse problem approach for Digital Holographic Particle Tracking: Influence of the experimental parameters and benefits," C. Fournier, J. Gire, L. Denis, E. Thiebaut, F. Soulez, and C. Ducottet, Workshop on Digital Holographic Reconstruction and Tomography, Loughborough, England, April 2007.
 
[5] "Inverse Problem Approach for Particle Digital Holography: Field of View Extrapolation and Accurate Location," F. Soulez, E. Thiébaut, L. Denis, and C. Fournier, Adaptive Optics: Analysis and Methods / Computational Optical Sensing and Imaging / Information Photonics / Signal Recovery and Synthesis Topical Meetings, Vancouver, Canada, June 2007. (doi)
 
[4] "Inverse problem approach for particle digital holography: particle detection and accurate location," F. Soulez, L. Denis, C. Fournier, E. Thiébaut, and C. Goepfert, Proceedings of the Physics in Signal and Image Processing, Mulhouse, France, January 2007. (pdf)
 

2006

[3] "Digital Holography compared to Phase Doppler Anemometry: study of an experimental droplet flow," C. Fournier, C. Goepfert, J. L. Marié, L. Denis, F. Soulez, M. Lance, et J. P. Schon, Proceedings of the 12th International Symposium on Flow Visualization, (ed. Optimage Ltd), ISBN : 0-9533991-8-4,19.4, p228, Göttingen, Germany, September 2006.
 
[2] "Cleaning digital holograms to investigate 3D particle fields," L. Denis, T. Fournel, C. Fournier, et C. Ducottet, Proceedings of the 12th International Symposium on Flow Visualization, (ed. Optimage Ltd),ISBN : 0-9533991-8-4, 69.4, p215, Göttingen, Germany, September 2006.
 

2005

[1] "Twin-image noise reduction by phase retrieval in in-line digital holography," L. Denis, C. Fournier, T. Fournel, C. Ducottet, Wavelets XI, SPIE's Symposium on Optical Science and Technology, vol 5914, pp 59140J, San Diego, CA, USA, 2005. (pdf, doi)
 

Teaching

I used to teach approximately 200 hours per year when I was an assistant professor at CPE Lyon. My students were enrolled in the Electrical Engineering department of the engineering school CPE Lyon. Together with other colleagues, I covered the following topics:

Image processing (lectures + lab sessions)

Computer graphics (lectures + lab sessions)

Unix systems programming (lab sessions)

Signals and Linear Systems (tutorials + lab sessions)

Misc

How to create a bibliographic style for LaTeX/BibTeX

Scientific journals have strictly defined bibliographic conventions for typesetting references. Unfortunately for LaTeX users, these journals do not always provide a bibliographic style (i.e. a .bst file). This page describes how to create one yourself and provides one such file I created for use with the Journal of Microscopy.
 

Using makebst

A very useful tool for creating BibTeX styles is makebst. Its use is extremely simple: just run latex makebst and answer a series of questions. An output .bst file will be created, which can be used like any other bibliographic style.

Here is an extract of the questions you have to answer:

$ latex makebst
[...]
***********************************
* This is Make Bibliography Style *
***********************************
It makes up a docstrip batch job to produce
a customized .bst file for running with BibTeX
Do you want a description of the usage? (NO)

\yn=y
In the interactive dialogue that follows,
you will be presented with a series of menus.
In each case, one answer is the default, marked as (*),
and a mere carriage-return is sufficient to select it.
(If there is no * choice, then the default is the last choice.)
For the other choices, a letter is indicated
in brackets for selecting that option. If you select
a letter not in the list, default is taken.

The final output is a file containing a batch job
which may be (La)TeXed to produce the desired BibTeX
bibliography style file. The batch job may be edited
to make minor changes, rather than running this program
once again.

[...]
Name of the final OUTPUT .bst file? (default extension=bst)

\ofile=mystyle.bst

[...]
STYLE OF CITATIONS:
(*) Numerical as in standard LaTeX
(a) Author-year with some non-standard interface
(b) Alpha style, Jon90 or JWB90 for single or multiple authors
(o) Alpha style, Jon90 even for multiple authors
(f) Alpha style, Jones90 (full name of first author)
(c) Cite key (special for listing contents of bib file)
Select:

[...]
AUTHOR NAMES:
(*) Full, surname last (John Frederick Smith)
(f) Full, surname first (Smith, John Frederick)
(i) Initials + surname (J. F. Smith)
(r) Surname + initials (Smith, J. F.)
(s) Surname + dotless initials (Smith J F)
(x) Surname + pure initials (Smith JF)
(y) Surname + comma + pure initials (Smith, JF)
(z) Surname + spaceless initials (Smith J.F.)
(a) Only first name reversed, initials (AGU style: Smith, J. F., H. K. Jones)
(b) First name reversed, with full names (Smith, John Fred, Harry Kab Jones)
Select:
[...]
NUMBER OF AUTHORS:
(*) All authors included in listing
(l) Limited authors (et al replaces missing names)
Select:
[...]
TYPEFACE FOR AUTHORS IN LIST OF REFERENCES:
(*) Normal font for author names
(s) Small caps authors (\sc)
(i) Italic authors (\it or \em)
(b) Bold authors (\bf)
(u) User defined author font (\bibnamefont)
Select:
[...]

The new style mystyle.bst can then be used in your LaTeX file to typeset the bibliographic entries stored in your BibTeX file mybib.bib with the following two lines of code:

\bibliographystyle{mystyle}
\bibliography{mybib}
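
For completeness, here is a minimal document skeleton showing where these two lines go (mystyle.bst and mybib.bib are the example file names used above, and smith2005 is a placeholder entry key from mybib.bib):

\documentclass{article}
\begin{document}
Digital holography is widely used \cite{smith2005}. % smith2005: placeholder key from mybib.bib
\bibliographystyle{mystyle} % the .bst file generated by makebst
\bibliography{mybib}        % the .bib file containing your entries
\end{document}

Running latex, then bibtex, then latex twice on this file produces the formatted reference list.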

Unofficial bibliographic style for Journal of Microscopy

Using the procedure described above, I created a .bst file for a paper I wrote and published in the Journal of Microscopy. I tried to answer the questions of the makebst script as best I could, but I cannot guarantee that the generated file fulfills all the requirements of the journal. The file can be downloaded here (an improved version by Michael Kopp is available here; see also below). Please contact me if you see any improvements to be made to the file, or if you want me to display a link to your own .bst file.

In addition to using the provided .bst file, you will have to include the natbib package. This package provides variants of the LaTeX \cite command. The \citep command adheres to the Journal of Microscopy's citing convention.
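
A minimal example (again with a placeholder citation key): with an author-year style, \citep produces a parenthetical citation such as '(Smith, 2005)' while \citet produces a textual one such as 'Smith (2005)'.

\usepackage{natbib} % in the preamble

% in the body of the document:
Particle fields can be measured by holography \citep{smith2005}.
\citet{smith2005} showed that this measurement is accurate.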

Michael Kopp sent me a .dbj file that can be easily modified to tune the way citations are displayed. His .bst file adheres more strictly to Journal of Microscopy guidelines. To use .dbj files, just run latex on them: latex microscopy.dbj.


Last update: August 2022