Loïc Denis
Laboratoire Hubert Curien
UMR 5516 CNRS / Université de Saint-Etienne
Bât. F, 18 rue B. Lauras
42000 Saint-Etienne
France

(+33) 4 77 91 57 66

loic dot denis at univ-st-etienne dot fr

and

TELECOM Saint-Etienne
25 rue R. Annino
F-42000 Saint-Etienne
France

Latest news

March 2018: Congratulations to Sylvain Lobry! He was awarded the Best PhD Thesis prize by the "Futur et Ruptures" research program of the Mines-Télécom foundation and the Institut Carnot Télécom et Société numérique. I had the pleasure of co-supervising Sylvain during his master's thesis and his PhD thesis. More information here.

March 2018: We proposed a new method called "PACO" for exoplanet detection by direct imaging. It is described in a paper to appear in Astronomy and Astrophysics.

Jan. 2018: SAR images in urban areas have many bright points with a characteristic cross-shaped signature. Rémy Abergel developed a very effective method during his postdoc to suppress these crosses and extract meaningful targets: see the preprint pdf and the source code. The paper is to appear in JSTARS.

May 2016: Our paper "NL-SAR: A Unified Nonlocal Framework for Resolution-Preserving (Pol)(In)SAR Denoising" has been awarded the IEEE Geoscience and Remote Sensing Society 2016 Transactions Prize Paper Award. We are very honored to receive this award, which each year distinguishes one paper published in the IEEE Transactions on Geoscience and Remote Sensing.

Sept. 2015: Rahul Mourya received the Best student paper award at EUSIPCO in Nice! Rahul will defend his PhD thesis on shift-variant deblurring in the coming months.

Sept. 2015: Rahul Mourya will present our work on non-smooth optimization at ICIP in Quebec at the end of the month. Our paper was ranked among the top 10% by the ICIP board. Rahul will present both a poster and a "show and tell" demo. Have a look at the poster here.

April 2015: Our paper on shift-variant blur models is now online at the International Journal of Computer Vision: Fast Approximations of Shift-Variant Blur. You can find our preprint here.

March 2014: A review paper on patch-based methods for SAR imaging will appear this summer in the special issue of IEEE Signal Processing Magazine "Recent Advances in Synthetic Aperture Radar Imaging". Read our preprint here.

June 2012: The PhD position on image deblurring is now closed.

May 2012: Our recent paper How to compare noisy patches? Patch similarity beyond Gaussian noise is featured among the most downloaded articles of the International Journal of Computer Vision, with about 900 downloads over the last three months. It can be downloaded for free.

May 2012: Charles Deledalle will receive the award for the best PhD thesis in signal and image processing at the 52nd meeting of the French association for electrical and information engineering (club EEA). This prestigious award is given each year to an outstanding PhD thesis in the field of signal and image processing defended in France. Congratulations, Charles!
You can find Charles' PhD thesis on his webpage, and our papers in the publications section.

Apr. 2012: We have a PhD position (now closed): "Image deblurring under space-variant blur" at the Hubert Curien laboratory and the Observatory of Lyon. Interested students can contact me by e-mail.

Feb. 2012: How to compare noisy patches? This is the question we try to answer in our recent paper published in International Journal of Computer Vision (doi). A preprint version is available on HAL in pdf.

Sept. 2011: The poster I presented at IEEE ICIP in Brussels on shift-variant deblurring is available here pdf.

Sept. 2011: I now have an associate professor position at the University of Saint-Etienne. I teach at TELECOM Saint-Etienne, an engineering school in electrical engineering. My research focuses on image restoration and reconstruction, with applications ranging from microscopy and optical metrology to SAR imaging and astronomy.

Apr. 2011: We will present two papers at the next ICIP conference, in September 2011 in Brussels.

Mar. 2011: Florence Tupin has written a nice article in the latest issue of the IEEE Geoscience and Remote Sensing Newsletter describing recent progress in image processing of SAR data (link, pdf).

Oct. 2010: Charles Deledalle received the best student paper award at ICIP 2010 in Hong Kong for his fully automatic method to denoise images corrupted by Poisson noise. The method compares noisy patches and pre-filtered patches to define adaptive weights that preserve edges and structures. Charles' denoising method improves on state-of-the-art denoising techniques based on total variation minimization or wavelets. The paper is available here: "Poisson NL-Means: Unsupervised non-local means for Poisson noise," C. Deledalle, F. Tupin, L. Denis, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract:
An extension of the non local (NL) means is proposed for images damaged by Poisson noise. The proposed method is guided by the noisy image and a pre-filtered image and is adapted to the statistics of Poisson noise. The influence of both images can be tuned using two filtering parameters. We propose an automatic setting to select these parameters based on the minimization of the estimated risk (mean square error). This selection uses an estimator of the MSE for NL means with Poisson noise and Newton's method to find the optimal parameters in a few iterations.
), and the slides of his presentation are here.

Oct. 2010: At ICIP 2010 in Hong Kong, I presented a paper on image denoising using an image decomposition approach (bounded variations + sparse component). We have shown that exact discrete minimization can be obtained with graph-cuts for TV+L0 decomposition models. The paper is available here: "Exact discrete minimization for TV+L0 image decomposition models," L. Denis, F. Tupin, X. Rondeau, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract:
Penalized maximum likelihood denoising approaches seek a solution that fulfills a compromise between data fidelity and agreement with a prior model. Penalization terms are generally chosen to enforce smoothness of the solution and to reject noise. The design of a proper penalization term is a difficult task as it has to capture image variability. Image decomposition into two components of different nature, each given a different penalty, is a way to enrich the modeling. We consider the decomposition of an image into a component with bounded variations and a sparse component. The corresponding penalization is the sum of the total variation of the first component and the L0 pseudo-norm of the second component. The minimization problem is highly non-convex, but can still be globally minimized by a minimum s-t-cut computation on a graph. The decomposition model is applied to synthetic aperture radar image denoising.
), and the slides here.

Aug. 2010: Our study on resolution in holography based on Cramér-Rao lower bounds ("On the single point resolution of on-axis digital holography," C. Fournier, L. Denis, and T. Fournel, J. Opt. Soc. Am. A, 27 (8), 1856-1862, 2010: pdf, doi, abstract:
On-axis digital holography (DH) is becoming widely used for its time-resolved three-dimensional (3D) imaging capabilities. A 3D volume can be reconstructed from a single hologram. DH is applied as a metrological tool in experimental mechanics, biology, and fluid dynamics, and therefore the estimation and the improvement of the resolution are current challenges. However, the resolution depends on experimental parameters such as the recording distance, the sensor definition, the pixel size, and also on the location of the object in the field of view. This paper derives resolution bounds in DH by using estimation theory. The single point resolution expresses the standard deviations on the estimation of the spatial coordinates of a point source from its hologram. Cramér Rao lower bounds give a lower limit for the resolution. The closed-form expressions of the Cramér Rao lower bounds are obtained for a point source located on and out of the optical axis. The influences of the 3D location of the source, the numerical aperture, and the signal-to-noise ratio are studied.
) is featured in Spotlight on Optics.

General information

I have an associate professor position at the University of Saint-Etienne. I teach at TELECOM Saint-Etienne, an engineering school in electrical engineering. I conduct my research at the Laboratoire Hubert Curien. My main interest is image restoration and reconstruction, with applications ranging from microscopy and optical metrology to SAR imaging and astronomy.

Previous positions:

In 2010-2011, I spent a year and a half as a research scientist at the Observatory of Lyon, working on inverse problems in astronomy and biomedical imaging. My position was funded by the French Research Agency (research project "MITIV", led by Eric Thiébaut).

From 2007 to the end of 2009, I was an assistant professor at the Electrical Engineering Department of the engineering school CPE Lyon. I taught image processing, computer graphics and computer science. My research focused on image reconstruction/restoration problems, especially in synthetic aperture radar and digital holography.

In 2006-2007, I worked for one year at Télécom Paristech as a postdoctoral scholar in the Image Processing Team of the Signal and Image Processing Department. I worked on combining synthetic aperture radar (SAR) and optical images to design algorithms for the automatic extraction of elevation information in urban areas, and focused on radar image denoising with graph cuts. SAR image denoising is a research subject I am still working on.

Digital holography was the main subject of my PhD thesis, defended in autumn 2006 at the University of Saint-Etienne (France).

Research

Research projects

Space-variant deblurring in astronomy and biomedical imaging

2010 - present


Context

[Image: deconvolution of Hubble Space Telescope simulations]
Image deblurring is essential to high-resolution imaging and is therefore widely used in astronomy, microscopy and computational photography. While shift-invariant blur is modeled by convolution and leads to fast FFT-based algorithms, shift-variant blurring requires models that are both accurate and fast. When the point spread function (PSF) varies smoothly across the field, these two opposite objectives can be reached by interpolating from a grid of PSF samples.
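The PSF-interpolation idea can be sketched in a few lines. This is a toy illustration, not the exact operator of our papers: it assumes PSF samples `psfs` given on the full image grid and interpolation weight maps `weights` that sum to one at each pixel (all names are ours):

```python
import numpy as np

def shift_variant_blur(x, psfs, weights):
    """Approximate shift-variant blur as an interpolation between a few
    shift-invariant blurs: y = sum_k weights[k] * (psfs[k] convolved with x).
    Each convolution is computed with the FFT, so the cost stays low."""
    X = np.fft.rfft2(x)
    y = np.zeros_like(x, dtype=float)
    for h, w in zip(psfs, weights):
        # center the PSF at the origin so the convolution does not shift x
        H = np.fft.rfft2(np.fft.ifftshift(h))
        y += w * np.fft.irfft2(H * X, s=x.shape)
    return y
```

A variant applies the weights before the convolutions (interpolating the PSF rather than the blurred images); the structure of the code, a handful of FFTs, is the same.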

Key idea(s)

We developed a physically grounded model of the PSF that leads to a fast and accurate shift-variant blurring operator.

Results

We applied our model to simulations of Hubble Space Telescope data and showed good reconstruction performance.

Related publications:

"Fast model of space-variant blurring and its application to deconvolution in astronomy," L. Denis, E. Thiébaut, and F. Soulez, IEEE International Conference on Image Processing (ICIP), Brussels, September 2011. (pdf, poster)
 

Non-local denoising of synthetic aperture radar images

2008 - present


Context

[Image: denoising result]
Image denoising is a fundamental low-level task in many applications. Numerous denoising techniques have been proposed, but only a few of them provide a general methodology that applies to different noise models (e.g., additive, multiplicative). This project is concerned with the development of non-local denoising techniques adapted to given noise distributions.

Key idea(s)

Simple denoising techniques replace the noisy values with the maximum likelihood (ML) estimate computed over a small neighboring window. They lead to a loss of resolution, i.e., blurred edges, because the window shape is unchanged even over region boundaries, where the window overlaps pixels with very different values. A straightforward improvement is to adapt the window shape spatially based on the homogeneity inside the window. A more powerful approach, derived from the NL-means introduced by Buades et al., is to consider a weighted maximum likelihood. The weights are set in a data-driven fashion based on the similarity between image patches.
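For additive Gaussian noise, the patch-based weighted estimate of a single pixel can be sketched as follows (a toy, non-iterative version with hypothetical parameter values; the actual method adapts the similarity to the noise distribution and iterates):

```python
import numpy as np

def nlmeans_pixel(img, i, j, f=1, s=5, h=0.2):
    """Weighted average of pixels whose surrounding patch resembles the
    patch around (i, j); f is the patch half-size, s the search half-size."""
    H, W = img.shape
    ref = img[i - f:i + f + 1, j - f:j + f + 1]
    num = den = 0.0
    for k in range(max(f, i - s), min(H - f, i + s + 1)):
        for l in range(max(f, j - s), min(W - f, j + s + 1)):
            patch = img[k - f:k + f + 1, l - f:l + f + 1]
            d2 = np.mean((ref - patch) ** 2)   # patch dissimilarity
            w = np.exp(-d2 / h ** 2)           # data-driven weight
            num += w * img[k, l]
            den += w
    return num / den
```

Pixels in a dissimilar context get exponentially small weights, so edges are preserved while homogeneous areas are strongly averaged.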

Results

We suggest a general methodology to define the similarity between noisy patches as well as between restored patches. This leads to an iterative algorithm which gives good results on images corrupted with additive Gaussian noise, and outperforms state-of-the-art denoising techniques for images with speckle noise, such as synthetic aperture radar (SAR) images.

More information on Charles Deledalle's webpage.

Related publications:

"NL-InSAR: Non-local interferogram estimation," C. Deledalle, L. Denis, and F. Tupin, IEEE trans. geoscience and remote sensing, 49, 4, 2011. (pdf, doi)
 
"Iterative weighted maximum likelihood denoising with probabilistic patch-based weights," C. Deledalle, L. Denis, and F. Tupin, IEEE trans. image processing, 18, 12, 2009. (pdf, doi)
 

Joint regularization with graph-cuts

2007 - present


Context

Graph cuts are powerful techniques that can be used to solve combinatorial minimization problems in image processing. In the context of image regularization, they can find the global minimum of first-order Markov random field energies (i.e., energies involving only single and pairwise terms) with convex regularization. This discrete minimization is performed by computing a minimum-cost cut over a huge graph. The size of the graph prevents the direct use of such techniques on large (megapixel) images, and joint regularization cannot be handled with such graph constructions. Combinatorial approaches are however desirable to minimize energies with a non-convex negative log-likelihood, as in SAR imaging.

Key idea(s)

Approximate minimization can be performed by considering a sequence of sub-problems. Each of these sub-problems can be solved exactly using two-level graphs.
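To give the flavor of the discrete machinery, here is how a single binary sub-problem (unary data terms plus a pairwise label-disagreement penalty) can be minimized exactly with an s-t minimum cut. This sketch uses `networkx` for the max-flow computation instead of a dedicated solver, and all names are ours:

```python
import numpy as np
import networkx as nx

def binary_mrf_mincut(cost0, cost1, beta):
    """Exact minimizer of E(x) = sum_p cost_{x_p}(p) + beta * #{p~q: x_p != x_q}
    over binary labelings x, via the standard s-t graph construction."""
    H, W = cost0.shape
    G = nx.DiGraph()
    for i in range(H):
        for j in range(W):
            p = (i, j)
            G.add_edge("s", p, capacity=float(cost1[i, j]))  # cut if x_p = 1
            G.add_edge(p, "t", capacity=float(cost0[i, j]))  # cut if x_p = 0
            for q in ((i + 1, j), (i, j + 1)):               # 4-connectivity
                if q[0] < H and q[1] < W:
                    G.add_edge(p, q, capacity=float(beta))
                    G.add_edge(q, p, capacity=float(beta))
    _, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
    labels = np.zeros((H, W), dtype=int)
    for p in sink_side - {"t"}:   # nodes cut off from the source get label 1
        labels[p] = 1
    return labels
```

The cut severs exactly one terminal edge per pixel (paying the data cost of the chosen label) plus one pairwise edge per disagreeing neighbor pair, so the minimum cut is the minimum energy.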

Results

A fast algorithm for approximate discrete minimization is proposed. It is suitable for minimization of scalar or vectorial fields with convex prior. The technique is applied to joint regularization of amplitude and phase of interferometric SAR images, and to 3D reconstruction from an interferometric SAR pair and an optical image.

Related publications:

"Exact discrete minimization for TV+L0 image decomposition models," L. Denis, F. Tupin, X. Rondeau, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract:
Penalized maximum likelihood denoising approaches seek a solution that fulfills a compromise between data fidelity and agreement with a prior model. Penalization terms are generally chosen to enforce smoothness of the solution and to reject noise. The design of a proper penalization term is a difficult task as it has to capture image variability. Image decomposition into two components of different nature, each given a different penalty, is a way to enrich the modeling. We consider the decomposition of an image into a component with bounded variations and a sparse component. The corresponding penalization is the sum of the total variation of the first component and the L0 pseudo-norm of the second component. The minimization problem is highly non-convex, but can still be globally minimized by a minimum s-t-cut computation on a graph. The decomposition model is applied to synthetic aperture radar image denoising.
, slides)
 
"Joint regularization of phase and amplitude of InSAR data: application to 3D reconstruction," L. Denis, F. Tupin, J. Darbon, and M. Sigelle, IEEE trans. geoscience and remote sensing, 47, 11, 2009. (pdf, doi)
 
"SAR Image Regularization with Fast Approximate Discrete Minimization," L. Denis, F. Tupin, J. Darbon, and M. Sigelle, IEEE trans. image processing, 18, 7, 2009. (pdf, doi)
 

Sparse reconstruction in digital holography

2008 - present


Context

[Image: sparse reconstruction]
Inline digital holograms are classically reconstructed using linear operators to model diffraction. It has long been recognized that such reconstruction operators do not invert the hologram formation operator. Classical linear reconstructions yield images with artifacts such as distortions near the field-of-view boundaries or twin images. When objects located at different depths are reconstructed from a hologram, in-focus and out-of-focus images of all objects superimpose. Additional processing, such as maximum-of-focus detection, is thus unavoidable for any successful use of the reconstructed volume.

Key idea(s)

We consider inverting the hologram formation model in a Bayesian framework. We suggest the use of a sparsity-promoting prior, which is intrinsically satisfied given the requirements of inline holography, and present a simple iterative algorithm for 3D object reconstruction under sparsity and positivity constraints.
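The flavor of such an algorithm can be given by the classical iterative shrinkage-thresholding scheme with a positivity constraint, here written for a generic linear model `A` (a sketch with hypothetical parameters, not the exact algorithm of the paper):

```python
import numpy as np

def ista_nonneg(A, y, lam, step, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 subject to x >= 0 by
    iterative shrinkage-thresholding; step should be <= 1/||A^T A||."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                           # data-term gradient
        x = np.maximum(0.0, x - step * grad - step * lam)  # shrink + project
    return x
```

Each iteration is one gradient step on the data term followed by the proximal step of the penalty, which shrinks small coefficients to zero and clips negative ones.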

Results

Out-of-focus images of objects are strongly attenuated or absent in reconstructed 3D images. The sparse reconstruction technique makes it possible to reconstruct outside of the field of view.
It is also possible to extend the recent theoretical results on conditions for exact recovery of sparse signals with orthogonal matching pursuit to the case of noisy data. In the case of digital holography of particles, this gives upper bounds on the achievable resolution.

Related publications:

"Inline hologram reconstruction with sparsity constraints," L. Denis, D. Lorenz, E. Thiébaut, C. Fournier, D. Trede, Optics Letters, 34(22), 3475-3477, 2009. (pdf, doi)
 
"Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion," L. Denis, D. Lorenz and D. Trede, Inverse Problems, 25 115017, 2009. (pdf, doi) -- Note that the authors of this paper are ordered alphabetically; the main author is D. Trede.
 

Particle detection in digital holography

2006 - 2008


Context

[Image: particle detection algorithm]
Digital holography is the method of choice for time-resolved 3D measurement of the location of particles in a flow. These measurements are crucial to validate numerical simulations of turbulence. The 3D location of several particles can be recovered from a single hologram by analyzing their diffraction patterns. Classically, this is performed in two steps: first, a 3D volume is reconstructed by simulating optical diffraction from the hologram; then, the maximum-of-focus location of the image of each particle is detected. These approaches suffer from severe biases close to the hologram boundaries, and false detections occur due to multiple focusing or speckle noise.

Key idea(s)

Such drawbacks can be circumvented by following an inverse problems approach. Instead of reconstructing a 3D volume image from the hologram, particles are directly detected by matching their diffraction patterns on the hologram. An approach similar to the matching pursuit algorithm is proposed, with sub-pixel refinement by local optimization.
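The greedy loop itself is short. Here is a bare-bones sketch without the sub-pixel local optimization, assuming unit-norm diffraction patterns stored as the rows of a hypothetical `dictionary` matrix:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy detection: repeatedly select the atom most correlated with
    the residual, record its index and amplitude, and subtract it."""
    residual = np.asarray(signal, dtype=float).copy()
    detections = []
    for _ in range(n_atoms):
        scores = dictionary @ residual     # correlations with all atoms
        k = int(np.argmax(np.abs(scores)))
        detections.append((k, scores[k]))  # (pattern index, amplitude)
        residual = residual - scores[k] * dictionary[k]
    return detections, residual
```

In the particle application, each atom index encodes a candidate 3D position and size, and each selected atom is then refined by continuous local optimization of these parameters.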

Results

The accuracy of the detection is improved by a factor of 5 compared to classical techniques in a standard experimental configuration. Out-of-field detection is demonstrated, even far from the hologram boundaries.

Related publications:

"Inverse problem approach for particle digital holography: out-of-field particle detection made possible," F. Soulez, L. Denis, E. Thiébaut, C. Fournier, and C. Goepfert, J. Opt. Soc. Am. A, 24 (12), 3708-3716, 2007. (pdf, doi)
 
"Inverse problem approach for particle digital holography: accurate location based on local optimisation," F. Soulez, L. Denis, C. Fournier, E. Thiébaut, and C. Goepfert, J. Opt. Soc. Am. A, 24 (4), 1164-1171, 2007. (pdf, doi)
 
"Digital holography of particles: experimental parameters setting and benefits of the inverse problems approach," J. Gire, L. Denis, C. Fournier, C. Ducottet, E. Thiebaut, and F. Soulez, Meas. Sci. Tech., 19, 2008. (pdf, doi)
 
"Numerical suppression of the twin-image in in-line holography of a volume of micro-objects," L. Denis, C. Fournier, T. Fournel, and C. Ducottet, Meas. Sci. Tech., 19, 2008. (pdf, doi)
 

Extraction of size/orientation information from a hologram

2005 - 2007


Context

[Image: autocovariance]
Digital holograms of a collection of small objects encode the shape, orientation and 3D location of all the objects. Recovering the average size or the orientation distribution, however, requires 3D reconstruction and analysis, which makes it hardly usable in on-line applications.

Key idea(s)

We show that the auto-covariance of a hologram can be inverted to directly recover size and/or orientation information about the objects. Since inversion of the auto-covariance is difficult, only approximate sizes are available. This technique can be useful to get a first guess of the particle size when using the particle detection algorithm described above.
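The empirical auto-covariance itself is inexpensive to compute thanks to the Wiener-Khinchin theorem; the difficult part, its inversion, is not shown. A minimal sketch:

```python
import numpy as np

def autocovariance(img):
    """Circular empirical auto-covariance of an image: the inverse FFT of
    the power spectrum of the mean-subtracted image, normalized by size."""
    z = img - img.mean()
    power = np.abs(np.fft.fft2(z)) ** 2
    return np.real(np.fft.ifft2(power)) / z.size
```

The value at lag (0, 0) is the image variance; the decay of the surrounding lobe is what carries the average-size information.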

Results

The average size of the particles can be recovered from a hologram of several particles spread in a volume. Short fibers have also been studied: we have shown that information about the distribution of the projected fiber orientations can be recovered by inversion of the auto-covariance of the hologram.

Related publications:

"Direct extraction of mean particle size from a digital hologram," L. Denis, C. Fournier, T. Fournel, C. Ducottet, and D. Jeulin, Applied Optics, 45 (5), 944-952, 2006. (pdf, doi)
 
"Reconstruction of the rose of directions from a digital micro-hologram of fibers," L. Denis, T. Fournel, C. Fournier, and D. Jeulin, J. Microsc., 225 (3), 282-291, 2007. (pdf, doi)
 

Grants

CNRS "DETECTION"

2015 - 2018


Project leader

Loïc Denis

Partners

Objectives

Source detection is a critical task, especially in astronomy for the search for and characterization of exoplanets, and in lensless microscopy to track and analyze micrometer-sized objects. This project aims to improve existing methods through the joint processing of multi-variate data (multi-spectral and/or multi-temporal) and to develop optimal processing techniques and their characterization, both in the case of a few isolated sources and in the case of crowded fields (many overlapping sources).
 
More information here

DGA "SAR image regularization by minimization techniques"

2009 - 2011


Project leader

Florence Tupin, Télécom Paristech

Partners

Objectives

This project aims at comparing and applying the most recent image denoising techniques to the domain of synthetic aperture radar imaging.
Many different problems can be mapped to an energy minimization problem. The energy is generally composed of two terms: a data fidelity term (neg log-likelihood), and a regularization term that imposes a prior on the solution, often expressed as local interactions. Several recent developments in image processing are devoted to solving this minimization problem. On the one hand, discrete approaches based on minimum cut computation on graphs are very efficient techniques. They provide in several cases a deterministic way to solve exactly non-convex minimization problems. On the other hand, recent progress on variational approaches devoted to convex but non-smooth energies can be applied to handle some edge-preserving regularization models.
One of our goals is to define statistical models adapted to SAR images. We will also focus on the numerical techniques to efficiently apply these models.
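As a toy instance of the variational side, one can run plain gradient descent on a smoothed total-variation energy (the smoothing parameter `eps` avoids the non-differentiability at zero; all parameter values are hypothetical and the boundary handling is deliberately simplistic):

```python
import numpy as np

def tv_denoise(y, lam, eps=1e-2, step=0.2, n_iter=200):
    """Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x), where TV_eps
    replaces |grad x| by sqrt(|grad x|^2 + eps^2) to make it differentiable."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        gx = np.diff(x, axis=1, append=x[:, -1:])   # horizontal gradient
        gy = np.diff(x, axis=0, append=x[-1:, :])   # vertical gradient
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / norm, gy / norm
        # (negative) divergence of the normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * ((x - y) - lam * div)
    return x
```

The non-smooth case (eps = 0) and non-convex data terms are precisely where the graph-cut and modern proximal methods discussed in this project take over.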
 

ANR MITIV "Biomedical Image Reconstruction by Inverse Methods"

2009 - 2014


Project leader

Eric Thiébaut, Observatoire de Lyon

Partners

Objectives

This project aims at developing new models and reconstruction techniques for microscopy, angiography and dynamic tomography. Both theoretical aspects and practical issues, such as automation and medical certification, will be covered thanks to a multidisciplinary team of signal and image reconstruction experts, software developers, microscopists, and cardiologists.
 

Co-workers

I have the pleasure of working with the following colleagues (non-exhaustive list!):

Publications


Papers in refereed journals

2018

[32] "Exoplanet detection in angular differential imaging by statistical learning of the non-stationary patch covariances, The PACO algorithm," Astronomy and Astrophysics, 2018 (abstract:
Context. The detection of exoplanets by direct imaging is an active research topic in astronomy. Even with the coupling of an extreme adaptive-optics system with a coronagraph, it remains challenging due to the very high contrast between the host star and the exoplanets.
Aims. The purpose of this paper is to describe a method, named PACO, dedicated to source detection from angular differential imaging data. Given the complexity of the fluctuations of the background in the datasets, involving spatially- variant correlations, we aim to show the potential of a processing method that learns the statistical model of the background from the data.
Methods. In contrast to existing approaches, the proposed method accounts for spatial correlations in the data. Those correlations and the average stellar speckles are learned locally and jointly to estimate the flux of the (potential) exoplanets. By preventing from subtracting images including the stellar speckles residuals, the photometry is intrinsically preserved. A non-stationary multi-variate Gaussian model of the background is learned. The decision in favor of the presence or the absence of an exoplanet is performed by a binary hypothesis test.
Results. The statistical accuracy of the model is assessed using VLT/SPHERE-IRDIS datasets. It is shown to capture the non-stationarity in the data so that a unique threshold can be applied to the detection maps to obtain consistent detection performance at all angular separations. This statistical model makes it possible to directly assess the false alarm rate, probability of detection, photometric and astrometric accuracies without resorting to Monte-Carlo methods.
Conclusions. PACO offers appealing characteristics: it is parameter-free and photometrically unbiased. The statistical performance in terms of detection capability, photometric and astrometric accuracies can be straightforwardly assessed. A fast approximate version of the method is also described to process large amounts of data from exoplanets search surveys.
).
 
[31] "Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology," F Jolivet, F Momey, L Denis, L Méès, N Faure, N Grosjean, F Pinston, J-L Marié, C Fournier, Optics Express, 2018 (pdf, doi, abstract:
Reconstruction of phase objects is a central problem in digital holography, whose various applications include microscopy, biomedical imaging, and fluid mechanics. Starting from a single in-line hologram, there is no direct way to recover the phase of the diffracted wave in the hologram plane. The reconstruction of absorbing and phase objects therefore requires the inversion of the non-linear hologram formation model. We propose a regularized reconstruction method that includes several physically-grounded constraints such as bounds on transmittance values, maximum/minimum phase, spatial smoothness or the absence of any object in parts of the field of view. To solve the non-convex and non-smooth optimization problem induced by our modeling, a variable splitting strategy is applied and the closed-form solution of the sub-problem (the so-called proximal operator) is derived. The resulting algorithm is efficient and is shown to lead to quantitative phase estimation on reconstructions of accurate simulations of in-line holograms based on the Mie theory. As our approach is adaptable to several in-line digital holography configurations, we present and discuss the promising results of reconstructions from experimental in-line holograms obtained in two different applications: the tracking of an evaporating droplet (size about 100 micrometers) and the microscopic imaging of bacteria (size about 1 micrometer).
).
 
[30] "PARISAR: Patch-Based Estimation and Regularized Inversion for Multibaseline SAR Interferometry," G Ferraioli, C Deledalle, L Denis, F Tupin, IEEE transactions on Geoscience and Remote Sensing, 2018 (pdf, doi, abstract:
Reconstruction of elevation maps from a collection of synthetic aperture radar (SAR) images obtained in interferometric configuration is a challenging task. Reconstruction methods must overcome two difficulties: the strong interferometric noise that contaminates the data and the 2 pi phase ambiguities. Interferometric noise requires some form of smoothing among pixels of identical height. Phase ambiguities can be solved, up to a point, by combining linkage to the neighbors and a global optimization strategy to prevent from being trapped in local minima. This paper introduces a reconstruction method, PARISAR, that achieves both a resolution-preserving denoising and a robust phase unwrapping (PhU) by combining nonlocal denoising methods based on patch similarities and total-variation regularization. The optimization algorithm, based on graph cuts, identifies the global optimum. Combining patch-based speckle reduction methods and regularization-based PhU requires solving several issues: 1) computational complexity, the inclusion of nonlocal neighborhoods strongly increasing the number of terms involved during the regularization, and 2) adaptation to varying neighborhoods, patch comparison leading to large neighborhoods in homogeneous regions and much sparser neighborhoods in some geometrical structures. PARISAR solves both issues. We compare PARISAR with other reconstruction methods both on numerical simulations and satellite images and show a qualitative and quantitative improvement over state-of-the-art reconstruction methods for multibaseline SAR interferometry.
).
 
[29] "Subpixellic Methods for Sidelobes Suppression and Strong Targets Extraction in Single Look Complex SAR Images," R Abergel, L Denis, S Ladjal, F Tupin, IEEE journal of selected topics in applied earth observations and remote sensing (JSTARS), 2018 (pdf, doi, source code, abstract:
SAR images display very high dynamic ranges. Man-made structures (like buildings or power towers) produce echoes that are several orders of magnitude stronger than echoes from diffusing areas (vegetated areas) or from smooth surfaces (e.g., roads). The impulse response of the SAR imaging system is thus clearly visible around the strongest targets: sidelobes spread over several pixels, masking the much weaker echoes from the background. To reduce the sidelobes of the impulse response, images are generally spectrally apodized, trading resolution for a reduction of the sidelobes. This apodization procedure (global or shift-variant) introduces spatial correlations in the speckle-dominated areas which complicates the design of estimation methods. This paper describes strategies to cancel sidelobes around point-like targets while preserving the spatial resolution and the statistics of speckle-dominated areas. An irregular sampling grid is built to compensate the sub-pixel shifts and turn cardinal sines into discrete Diracs. A statistically grounded approach for point-like target extraction is also introduced, thereby providing a decomposition of a single look complex image into two components: a speckle-dominated image and the point-like targets. This decomposition can be exploited to produce images with improved quality (full resolution and suppressed sidelobes) suitable both for visual inspection and further processing (multi-temporal analysis, despeckling, interferometry).
).
 

2017

[28] "MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?," C Deledalle, L Denis, S Tabti, F Tupin, IEEE transactions on Image Processing, 2017 (pdf, doi, source code, abstract:
Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising, and offering several speckle reduction results often displaying method-specific artifacts that can be dismissed by comparison between results.
).
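The core idea behind MuLoG, stabilizing multiplicative speckle with a log transform so that an off-the-shelf Gaussian denoiser can be applied, can be sketched as follows. This is a much-simplified single-channel illustration, not the paper's algorithm: the actual MuLoG method embeds the denoiser in an ADMM iteration with the exact log-speckle likelihood and handles multi-channel covariance matrices. The function name and the plain Gaussian filter standing in for the denoiser are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_despeckle(intensity, sigma=2.0):
    """Simplified sketch: speckle is multiplicative, so log(intensity)
    carries (approximately) additive noise that a Gaussian denoiser
    can handle; exponentiate to return to the intensity domain."""
    log_img = np.log(np.maximum(intensity, 1e-12))
    # Any off-the-shelf Gaussian denoiser could be plugged in here;
    # a plain Gaussian filter stands in for it in this sketch.
    log_denoised = gaussian_filter(log_img, sigma)
    return np.exp(log_denoised)

rng = np.random.default_rng(0)
clean = np.ones((64, 64))
clean[16:48, 16:48] = 10.0
speckled = clean * rng.exponential(1.0, clean.shape)  # 1-look speckle
restored = homomorphic_despeckle(speckled)
```

Note that the log of single-look speckle is biased (its mean is offset by the Euler-Mascheroni constant); the real method accounts for this, while the sketch above does not.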
 
[27] "Self-calibration for lensless color microscopy," O Flasseur, C Fournier, N Verrier, L Denis, F Jolivet, A Cazier, and T Lépine, Applied Optics, 2017 (doi, abstract Abstract
Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. By targeting cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of their dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.
).
 
[26] "Pixel super-resolution in digital holography by regularized reconstruction," C. Fournier, F. Jolivet, L. Denis, N. Verrier, E. Thiebaut, C. Allier, and T. Fournel, Applied Optics, 2017 (doi, abstract Abstract
In-line digital holography (DH) and lensless microscopy are 3D imaging techniques used to reconstruct the volume of micro-objects in many fields. However, their performances are limited by the pixel size of the sensor. Recently, various pixel super-resolution algorithms for digital holography have been proposed. A hologram with improved resolution was produced from a stack of laterally shifted holograms, resulting in a better resolved reconstruction than from a single low-resolution hologram. Algorithms for super-resolved reconstructions based on inverse problems approaches have already been shown to improve the 3D reconstruction of opaque spheres. Maximum a posteriori approaches have also been shown capable of reconstructing the object field more accurately and more efficiently and to extend the usual field-of-view. Here we propose an inverse problem formulation for DH pixel super-resolution and an algorithm that alternates registration and reconstruction steps. The method is described in detail and used to reconstruct synthetic and experimental holograms of sparse 2D objects. We show that our approach improves both the shift estimation and reconstruction quality. Moreover, the reconstructed field-of-view can be expanded by up to a factor of 3, thus making it possible to multiply the analyzed area ninefold.
).
 

2016

[25] "Multi-temporal SAR image decomposition into strong scatterers, background, and speckle," S Lobry, L Denis, F Tupin, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2016 (pdf, doi, abstract Abstract
The speckle phenomenon in synthetic aperture radar (SAR) images makes their visual and automatic interpretation a difficult task. To reduce strong fluctuations due to speckle, total variation (TV) regularization has been proposed by several authors to smooth out noise without blurring edges. A specificity of SAR images is the presence of strong scatterers having a radiometry several orders of magnitude larger than their surrounding region. These scatterers, especially present in urban areas, limit the effectiveness of TV regularization as they break the assumption of an image made of regions of constant radiometry. To overcome this limitation, we propose in this paper an image decomposition approach. There exist numerous methods to decompose an image into several components, notably to separate textural and geometrical information. These decomposition models are generally recast as energy minimization problems involving a different penalty term for each of the components. In this framework, we propose an energy suitable for the decomposition of SAR images into speckle, a smooth background and strong scatterers, and discuss its minimization using max-flow/min-cut algorithms. We make the connection between the minimization problem considered, involving the L0 pseudo-norm, and the generalized likelihood ratio test used in detection theory. The proposed decomposition jointly performs the detection of strong scatterers and the estimation of the background radiometry. Given the increasing availability of time series of SAR images, we consider the decomposition of a whole time series. New change detection methods can be based on the temporal analysis of the components obtained from our decomposition.
).
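The detection side of this decomposition can be illustrated with a simple stand-in (not the paper's joint energy minimization): under fully developed speckle, intensity is exponentially distributed around the local background, so a pixel whose ratio to a robust background estimate exceeds the exponential quantile -log(pfa) is flagged as a strong scatterer. The function name and the median-filter background estimate are illustrative choices.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_scatterers(intensity, pfa=1e-3):
    """Flag pixels implausibly bright under an exponential (speckle)
    model: detect when intensity / background > -log(pfa), the
    exponential survival quantile at the chosen false-alarm rate."""
    # The median filter is robust to the scatterers themselves;
    # median = mean * log(2) for an exponential law, hence the rescaling.
    background = median_filter(intensity, size=7) / np.log(2)
    ratio = intensity / np.maximum(background, 1e-12)
    return ratio > -np.log(pfa)

rng = np.random.default_rng(2)
img = rng.exponential(1.0, (64, 64))   # pure speckle, unit background
img[32, 32] = 100.0                    # one strong scatterer
mask = detect_scatterers(img)
```

In the paper, detection and background estimation are performed jointly over the whole time series; the per-pixel threshold test above only conveys the statistical intuition behind the L0/GLR connection.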
 

2015

[24] "Spline driven: high accuracy projectors for 3D tomographic reconstruction from few projections," F Momey, E Thiébaut, C Burnier, L Denis, JM Becker, L Desbat, IEEE transactions on Image Processing, 2015 (pdf, doi, abstract Abstract
Tomographic iterative reconstruction methods need a very thorough modeling of the data. The core of this issue is the projector's design: the numerical model of projection is mostly influenced by the representation of the object of interest, decomposed on a basis of functions, and by the approximations made for the projection on the detector. Voxel-driven and ray-driven projection models, widely appreciated for their short execution time, are too coarse. The distance-driven model has better accuracy but also relies on strong approximations to project voxel basis functions. Cubic voxel basis functions are anisotropic; modeling their projection accurately is therefore computationally expensive. Smoother and more isotropic basis functions both better represent continuous functions and provide simpler projectors. This consideration has led to the development of spherically symmetric volume elements, called blobs. Isotropy aside, blobs are often considered too computationally expensive in practice. We propose to use 3D B-splines, which are smooth piecewise polynomials, as basis functions. When the degree of these polynomials increases, their isotropy improves and projections can be computed regardless of their orientation. Thanks to their separability, very efficient algorithms can be used to decompose an image on B-spline basis functions. We approximate the projection of B-spline basis functions with a 2D separable model. The degree and the sampling of the B-splines can be chosen according to a tradeoff between approximation quality and computational complexity. We show on numerical experiments that with our accurate projector, the number of projections can be reduced while preserving a similar reconstruction quality. Used with cubic B-splines, our projector requires just twice as many operations as a model involving voxel basis functions.
High accuracy projectors can enhance the resolution of existing systems, or can reduce the number of projections required to reach a given resolution, potentially reducing the dose absorbed by the patient.
).
 
[23] "Fast approximations of shift-variant blur," L Denis, E Thiébaut, F Soulez, JM Becker, R Mourya, International Journal of Computer Vision, 2015 (pdf, doi, abstract Abstract
Image deblurring is essential in high resolution imaging, e.g., astronomy, microscopy or computational photography. Shift-invariant blur is fully characterized by a single point-spread-function (PSF). Blurring is then modeled by a convolution, leading to efficient algorithms for blur simulation and removal that rely on fast Fourier transforms. However, in many different contexts, blur cannot be considered constant throughout the field-of-view, and thus necessitates modeling variations of the PSF with the location. These models must achieve a trade-off between the accuracy their flexibility can reach and their computational efficiency. Several fast approximations of blur have been proposed in the literature. We give a unified presentation of these methods in the light of matrix decompositions of the blurring operator. We establish the connection between different computational tricks that can be found in the literature and the physical sense of the corresponding approximations in terms of equivalent PSFs, physically-based approximations being preferable. We derive an improved approximation that preserves the same desirable low complexity as other fast algorithms while reaching a minimal approximation error. Comparison of theoretical properties and empirical performances of each blur approximation suggests that the proposed general model is preferable for approximation and inversion of a known shift-variant blur.
).
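The matrix-decomposition view discussed in the paper can be sketched in its most common form: a shift-variant blur is approximated as a weighted sum of a few shift-invariant convolutions, each with a local PSF, where interpolation weights vary across the field of view. The sketch below is the generic approximation, not the paper's improved variant, and the bilinear-style weights and Gaussian PSFs are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def shift_variant_blur(image, psfs, weights):
    """Approximate a shift-variant blur as a weighted sum of
    shift-invariant convolutions: H x ~ sum_k C(h_k) diag(w_k) x,
    where w_k interpolates between a few local PSFs h_k."""
    out = np.zeros_like(image, dtype=float)
    for psf, w in zip(psfs, weights):
        out += fftconvolve(w * image, psf, mode="same")
    return out

def gaussian_psf(sigma, size=15):
    """Normalized 2D Gaussian kernel (illustrative local PSF)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

n = 64
x = np.linspace(0.0, 1.0, n)
w_left = np.tile(1.0 - x, (n, 1))   # weights sum to 1 at every pixel
w_right = np.tile(x, (n, 1))
img = np.zeros((n, n))
img[n // 2, n // 4] = 1.0           # point source in the sharp zone
img[n // 2, 3 * n // 4] = 1.0       # point source in the blurry zone
blurred = shift_variant_blur(img, [gaussian_psf(1.0), gaussian_psf(3.0)],
                             [w_left, w_right])
```

Because each term is a plain convolution, the whole operator costs only a few FFTs, which is the efficiency argument the paper's analysis builds on.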
 
[22] "NL-SAR: a unified Non-Local framework for resolution-preserving (Pol)(In)SAR denoising," C Deledalle, L Denis, F Tupin, A Reigber, M Jäger, IEEE trans Geoscience and Remote Sensing, 53(4): 2021-2038, 2015 (pdf, doi, abstract Abstract
Speckle noise is an inherent problem in coherent imaging systems like synthetic aperture radar. It creates strong intensity fluctuations and hampers the analysis of images and the estimation of local radiometric, polarimetric or interferometric properties. SAR processing chains thus often include a multi-looking (i.e., averaging) filter for speckle reduction, at the expense of a strong resolution loss. Preservation of point-like and fine structures and textures requires locally adapting the estimation. Non-local means successfully adapt smoothing by deriving data-driven weights from the similarity between small image patches. The generalization of non-local approaches offers a flexible framework for resolution-preserving speckle reduction. We describe a general method, NL-SAR, that builds extended non-local neighborhoods for denoising amplitude, polarimetric and/or interferometric SAR images. These neighborhoods are defined on the basis of pixel similarity as evaluated by multi-channel comparison of patches. Several non-local estimations are performed and the best one is locally selected to form a single restored image with good preservation of radar structures and discontinuities. The proposed method is fully automatic and handles single and multi-look images, with or without interferometric or polarimetric channels. Efficient speckle reduction with very good resolution preservation is demonstrated on both numerical experiments using simulated data and on airborne radar images. The source code of a parallel implementation of NL-SAR is released with the paper.
).
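The underlying non-local means principle, estimating each pixel as a patch-similarity-weighted average over a search window, can be sketched for the classical Gaussian-noise case. NL-SAR itself replaces the Euclidean patch distance with a statistically grounded multi-channel criterion and adapts the neighborhoods; the sketch below is only the textbook building block, with illustrative parameter names.

```python
import numpy as np

def nlmeans_pixel(img, i, j, patch=3, search=7, h=0.5):
    """Estimate pixel (i, j) as a weighted average over a search window,
    with weights decaying with the squared distance between the patch
    around (i, j) and the patch around each candidate pixel."""
    p, s = patch // 2, search // 2
    ref = img[i - p:i + p + 1, j - p:j + p + 1]
    num = den = 0.0
    for k in range(i - s, i + s + 1):
        for l in range(j - s, j + s + 1):
            cand = img[k - p:k + p + 1, l - p:l + p + 1]
            w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
            num += w * img[k, l]
            den += w
    return num / den

flat = np.full((21, 21), 3.0)
est = nlmeans_pixel(flat, 10, 10)   # on a constant image, all weights agree
```

On edges, dissimilar patches receive near-zero weights, which is how the averaging preserves discontinuities instead of blurring them.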
 

2014

[21] "Exploiting patch similarity for SAR image processing: the nonlocal paradigm," C Deledalle, L Denis, G Poggi, F Tupin, L Verdoliva, IEEE Signal Processing Magazine, 2014 (pdf, abstract Abstract
Most current SAR systems offer high-resolution images featuring polarimetric, interferometric, multi-frequency, multi-angle, or multi-date information. SAR images however suffer from strong fluctuations due to the speckle phenomenon inherent to coherent imagery. Hence, all derived parameters display strong signal-dependent variance, preventing the full exploitation of such a wealth of information. Even with the abundance of despeckling techniques proposed over the last three decades, there is still a pressing need for new methods that can handle this variety of SAR products and efficiently eliminate speckle without sacrificing the spatial resolution. Recently, patch-based filtering has emerged as a highly successful concept in image processing. By exploiting the redundancy between similar patches, it succeeds in suppressing most of the noise with good preservation of texture and thin structures. Extensions of patch-based methods to speckle reduction and joint exploitation of multi-channel SAR images (interferometric, polarimetric, or PolInSAR data) have led to the best denoising performance in radar imaging to date. We give a comprehensive survey of patch-based nonlocal filtering of SAR images, focusing on the two main ingredients of the methods: measuring patch similarity, and estimating the parameters of interest from a collection of similar patches.
).
 
[20] "Inverse problem approach for the alignment of electron tomographic series," VD Tran, M Moreaud, E Thiebaut, L Denis, and JM Becker, Oil and Gas Science and Technology, OGST, 69(2): 279-291, 2014 (pdf, doi, abstract Abstract
In the refining industry, morphological measurements of particles have become an essential part of the characterization of catalyst supports. Through these parameters, one can infer the specific physicochemical properties of the studied materials. One of the main acquisition techniques is electron tomography (or nanotomography). 3D volumes are reconstructed from sets of projections from different angles made by a Transmission Electron Microscope (TEM). This technique provides real three-dimensional information at the nanometric scale. A major issue in this method is the misalignment of the projections that contribute to the reconstruction. The current alignment techniques usually employ fiducial markers such as gold particles for a correct alignment of the images. When the use of markers is not possible, the correlation between adjacent projections is used to align them. However, this method sometimes fails. In this paper, we propose a new method based on the inverse problem approach where a certain criterion is minimized using a variant of the Nelder and Mead simplex algorithm. The proposed approach is composed of two steps. The first step consists of an initial alignment process, which relies on the minimization of a cost function based on robust statistics measuring the similarity of a projection to its previous projections in the series. It reduces strong shifts resulting from the acquisition between successive projections. In the second step, the pre-registered projections are used to initialize an iterative alignment-refinement process which alternates between (i) volume reconstructions and (ii) registrations of measured projections onto simulated projections computed from the volume reconstructed in (i). At the end of this process, we have a correct reconstruction of the volume, the projections being correctly aligned. Our method is tested on simulated data and shown to estimate accurately the translation, rotation and scale of arbitrary transforms.
We have successfully tested our method with real projections of different catalyst supports.
).
 

2013

[19] "Accurate 3D tracking and size measurement of evaporating droplets using in-line digital holography and "inverse problems" reconstruction approach," M. Seifi, C. Fournier, N. Grosjean, L. Méès, JL. Marié, and L. Denis, Optics Express, 21(23), pp. 27964-27980, 2013 (pdf, doi, abstract Abstract
Digital in-line holography was used to study a fast dynamic 3D phenomenon: the evaporation of free-falling diethyl ether droplets. We describe an unsupervised reconstruction algorithm based on an "inverse problems" approach previously developed by our team to accurately reconstruct 3D trajectories and to estimate the droplets' size in a field of view of 7 × 11 × 20 mm3. A first experiment with non-evaporating droplets established that the radius estimates were accurate to better than 0.1 micrometer. With evaporating droplets, the vapor around the droplet distorts the diffraction patterns in the holograms. We showed that areas with the strongest distortions can be discarded using an exclusion mask. We achieved radius estimates with better than 0.5 micrometer accuracy for evaporating droplets. Our estimates of the evaporation rate fell within the range predicted by theoretical models.
).
 
[18] "Fast and accurate 3D object recognition directly from digital holograms," M. Seifi, L. Denis, and C. Fournier, J. Opt. Soc. Am. A, 30(11), pp. 2216-2224, 2013 (doi, abstract Abstract
Pattern recognition methods can be used in the context of digital holography to perform the task of object detection, classification, and position extraction directly from the hologram rather than from the reconstructed optical field. These approaches may exploit the differences between the holographic signatures of objects coming from distinct object classes and/or different depth positions. Direct matching of diffraction patterns, however, becomes computationally intractable with increasing variability of objects due to the very high dimensionality of the dictionary of all reference diffraction patterns. We show that most of the diffraction pattern variability can be captured in a lower dimensional space. Good performance for object recognition and localization is demonstrated at a reduced computational cost using a low-dimensional dictionary. The principle of the method is illustrated on a digit recognition problem and on a video of experimental holograms of particles.
).
 
[17] "Exploiting spatial sparsity for multiwavelength imaging in optical interferometry," E Thiébaut, F Soulez, and L Denis, J. Opt. Soc. Am. A, 30(2), pp. 160-170, 2013 (pdf, doi, abstract Abstract
Optical interferometers provide multiple wavelength measurements. In order to fully exploit the spectral and spatial resolution of these instruments, new algorithms for image reconstruction have to be developed. Early attempts to deal with multichromatic interferometric data have consisted in recovering a gray image of the object or independent monochromatic images in some spectral bandwidths. The main challenge is now to recover the full three-dimensional (spatiospectral) brightness distribution of the astronomical target given all the available data. We describe an approach to implement multiwavelength image reconstruction in the case where the observed scene is a collection of point-like sources. We show the gain in image quality (both spatially and spectrally) achieved by globally taking into account all the data instead of dealing with independent spectral slices. This is achieved thanks to a regularization that favors spatial sparsity and spectral grouping of the sources. Since the objective function is not differentiable, we had to develop a specialized optimization algorithm that also accounts for non-negativity of the brightness distribution.
).
 

2012

[16] "Three-dimensional reconstruction of particle holograms: a fast and accurate multiscale approach," M. Seifi, C. Fournier, L. Denis, D. Chareyron, and J.-L. Marié, J. Opt. Soc. Am. A, 29(9), pp. 1808-1817, 2012 (doi, abstract Abstract
In-line digital holography is an imaging technique that is being increasingly used for studying three-dimensional flows. It has been previously shown that very accurate reconstructions of objects could be achieved with the use of an inverse problem framework. Such approaches, however, suffer from higher computational times compared to less accurate conventional reconstructions based on hologram backpropagation. To overcome this computational issue, we propose a coarse-to-fine multiscale approach to strongly reduce the algorithm complexity. We illustrate that an accuracy comparable to that of state-of-the-art methods can be reached while accelerating parameter-space scanning.
).
 
[15] "How to Compare Noisy Patches? Patch Similarity Beyond Gaussian Noise," C. Deledalle, L. Denis, and F. Tupin, International Journal of Computer Vision, 99(1), pp. 86-102, 2012 (pdf, doi, abstract Abstract
Many tasks in computer vision require to match image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level approaches (so-called image-based) compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient to many techniques for image registration, stereo-vision, change detection or denoising. Recent progress in natural image modeling also makes intensive use of patch comparison.
A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or intrinsic dissimilarity. The Gaussian noise assumption leads to the classical definition of patch similarity based on the squared differences of intensities. For the case where noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature of image processing, detection theory and machine learning.
By expressing patch (dis)similarity as a detection test under a given noise model, we review these criteria alongside a new one and discuss their properties. We then assess their performance for different tasks: patch discrimination, image denoising, stereo-matching and motion-tracking under gamma and Poisson noises. The proposed criterion based on the generalized likelihood ratio is shown to be both easy to derive and powerful in these diverse applications.
).
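To make the GLR idea concrete: under a gamma (speckle) intensity model, the likelihood-ratio test of "same underlying reflectivity" versus "different reflectivities" admits a simple per-pixel closed form that vanishes for identical values and grows with their ratio. The sketch below sums this statistic over a patch; it is an illustration consistent with the gamma model, not code from the paper, and the function name is an assumption.

```python
import numpy as np

def glr_dissimilarity_gamma(patch_a, patch_b, looks=1):
    """Patch dissimilarity via the generalized likelihood ratio under
    L-look gamma noise: per pixel pair (x, y) the statistic is
    L * log((x + y)^2 / (4 x y)), zero iff x == y."""
    x = np.asarray(patch_a, dtype=float)
    y = np.asarray(patch_b, dtype=float)
    return looks * np.sum(np.log((x + y) ** 2 / (4 * x * y)))

a = np.array([[1.0, 2.0], [3.0, 4.0]])
```

Unlike the squared Euclidean distance, this criterion depends on the ratio of intensities, which matches the multiplicative nature of speckle: doubling both patches leaves the dissimilarity unchanged.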
 
[14] "Testing an in-line digital holography 'inverse method' for the Lagrangian tracking of evaporating droplets in homogeneous nearly isotropic turbulence," D. Chareyron, J.L. Marié, C. Fournier, J. Gire, N. Grosjean, L. Denis, M. Lance and L. Méès, New Journal of Physics, 14 043039, 2012 (doi, pdf, HAL, abstract Abstract
An in-line digital holography technique is tested, the objective being to measure Lagrangian three-dimensional (3D) trajectories and the size evolution of droplets evaporating in high-Re strong turbulence. The experiment is performed in homogeneous, nearly isotropic turbulence (50 × 50 × 50 mm3) created by the meeting of six synthetic jets. The holograms of droplets are recorded with a single high-speed camera at frame rates of 1-3 kHz. While hologram time series are generally processed using a classical approach based on the Fresnel transform, we follow an 'inverse problem' approach leading to improved size and 3D position accuracy and both in-field and out-of-field detection. The reconstruction method is validated with 60 micron diameter water droplets released 'on demand' from a piezoelectric injector, which do not appreciably evaporate in the sample volume. Lagrangian statistics on 1000 reconstructed tracks are presented. Although improved, uncertainty on the depth positions remains higher, as expected in in-line digital holography. An additional filter is used to reduce the effect of this uncertainty when calculating the droplet velocities and accelerations along this direction. The diameters measured along the trajectories remain constant within ±1.6%, thus indicating that accuracy on size is high enough for evaporation studies. The method is then tested with R114 freon droplets at an early stage of evaporation. The striking feature is the presence on each hologram of a thermal wake image, aligned with the relative velocity fluctuations 'seen' by the droplets (visualization of the Lagrangian fluid motion about the droplet). Its orientation compares rather well with that calculated by using a dynamical equation for describing the droplet motion. A decrease of size due to evaporation is measured for the droplet that remains longest in the turbulence domain.
).
 

2011

[13] "NL-InSAR: Non-local interferogram estimation," C. Deledalle, L. Denis, and F. Tupin, IEEE trans. geoscience and remote sensing, 49, 4, 2011 (pdf, doi, abstract Abstract
Interferometric synthetic aperture radar (InSAR) data provides reflectivity, phase difference and coherence images, which are paramount to scene interpretation or low-level processing tasks such as segmentation and 3D reconstruction. These images are estimated in practice from the Hermitian product over local windows. These windows lead to biases and resolution losses due to local heterogeneity caused by edges and texture. This paper proposes a non-local approach for joint estimation of the reflectivity, phase difference and coherence images from an interferometric pair of co-registered single-look complex (SLC) SAR images. Non-local techniques are known to efficiently reduce noise while preserving structures by performing a weighted averaging of similar pixels. Two pixels are considered similar if the surrounding image patches are 'resembling'. Patch-similarity is usually defined as the Euclidean distance between the vectors of graylevels. In this paper a statistically grounded patch-similarity criterion suitable to SLC images is derived. A weighted maximum likelihood estimation of the SAR interferogram is then computed with weights derived in a data-driven way. Weights are defined from intensity and phase difference, and are iteratively refined based both on the similarity between noisy patches and on the similarity of patches from the previous estimate. The efficiency of this new interferogram construction technique is illustrated both qualitatively and quantitatively on synthetic and true data.
).
 

2010

[12] "On the single point resolution of on-axis digital holography," C. Fournier, L. Denis, and T. Fournel, J. Opt. Soc. Am. A, 27 (8), 1856-1862, 2010. (pdf, doi, abstract Abstract
On-axis digital holography (DH) is becoming widely used for its time-resolved three-dimensional (3D) imaging capabilities. A 3D volume can be reconstructed from a single hologram. DH is applied as a metrological tool in experimental mechanics, biology, and fluid dynamics, and therefore the estimation and the improvement of the resolution are current challenges. However, the resolution depends on experimental parameters such as the recording distance, the sensor definition, the pixel size, and also on the location of the object in the field of view. This paper derives resolution bounds in DH by using estimation theory. The single point resolution expresses the standard deviations on the estimation of the spatial coordinates of a point source from its hologram. Cramér-Rao lower bounds give a lower limit for the resolution. The closed-form expressions of the Cramér-Rao lower bounds are obtained for a point source located on and out of the optical axis. The influences of the 3D location of the source, the numerical aperture, and the signal-to-noise ratio are studied.
).
 

2009

[11] "Inline hologram reconstruction with sparsity constraints," L. Denis, D. Lorenz, E. Thiébaut, C. Fournier, D. Trede, Optics Letters, 34(22), 3475-3477, 2009. (pdf, doi, abstract Abstract
Inline digital holograms are classically reconstructed using linear operators to model diffraction. It has long been recognized that such reconstruction operators do not invert the hologram formation operator. Classical linear reconstructions yield images with artifacts such as distortions near the field-of-view boundaries or twin images. When objects located at different depths are reconstructed from a hologram, in-focus and out-of-focus images of all objects superimpose upon each other. Additional processing, such as maximum-of-focus detection, is thus unavoidable for any successful use of the reconstructed volume. In this Letter, we consider inverting the hologram formation model in a Bayesian framework. We suggest the use of a sparsity-promoting prior, verified in many inline holography applications, and present a simple iterative algorithm for 3D object reconstruction under sparsity and positivity constraints. Preliminary results with both simulated and experimental holograms are highly promising.
).
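The sparsity-and-positivity-constrained inversion described in the Letter can be illustrated with a generic iterative soft-thresholding (ISTA) sketch. The matrix A below merely stands in for the hologram-formation operator, and the function name, regularization weight, and iteration count are illustrative assumptions, not the paper's algorithm or parameters.

```python
import numpy as np

def ista_nonneg(A, y, lam=0.05, n_iter=2000):
    """Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 subject to x >= 0
    by iterative soft thresholding with a non-negativity projection."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        # Gradient step, soft threshold by step*lam, project onto x >= 0.
        x = np.maximum(x - step * grad - step * lam, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 100))                 # stand-in forward operator
x_true = np.zeros(100)
x_true[[10, 40, 70]] = [2.0, 1.5, 3.0]         # sparse, positive "objects"
y = A @ x_true
x_hat = ista_nonneg(A, y)
```

The soft threshold drives most coefficients exactly to zero, which is how the sparsity prior suppresses the out-of-focus and twin-image energy that linear reconstructions spread over the volume.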
 
[10] "Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion," L. Denis, D. Lorenz and D. Trede, Inverse Problems, 25 115017, 2009. (pdf, doi, abstract Abstract
The orthogonal matching pursuit (OMP) is a greedy algorithm to solve sparse approximation problems. Sufficient conditions for exact recovery are known with and without noise. In this paper we investigate the applicability of the OMP for the solution of ill-posed inverse problems in general, and in particular for two deconvolution examples from mass spectrometry and digital holography, respectively. In sparse approximation problems one often has to deal with the problem of redundancy of a dictionary, i.e. the atoms are not linearly independent. However, one expects them to be approximately orthogonal and this is quantified by the so-called incoherence. This idea cannot be transferred to ill-posed inverse problems since here the atoms are typically far from orthogonal. The ill-posedness of the operator typically causes the correlation of two distinct atoms to become large, i.e. two atoms may look much alike. Therefore, one needs conditions which take the structure of the problem into account and work without the concept of coherence. In this paper we develop results for the exact recovery of the support of noisy signals. In the two examples, mass spectrometry and digital holography, we show that our results lead to practically relevant estimates such that one may check a priori if the experimental setup guarantees exact deconvolution with OMP. Especially in the example from digital holography, our analysis may be regarded as a first step to calculate the resolution power of droplet holography.
) -- Note that the authors of this paper are ordered alphabetically, the main author is D. Trede.
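For reference, the OMP algorithm analyzed in the paper can be written in a few lines in its standard textbook form (this is the generic algorithm, not the paper's recovery analysis; the dictionary below is a random stand-in):

```python
import numpy as np

def omp(A, y, n_atoms):
    """Orthogonal matching pursuit: greedily select the atom (column
    of A) most correlated with the residual, then re-fit y on all
    selected atoms by least squares and update the residual."""
    A = np.asarray(A, dtype=float)
    residual, support = y.astype(float), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 80))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
x_true = np.zeros(80)
x_true[[5, 20, 60]] = [1.0, -2.0, 1.5]  # 3-sparse signal
y = A @ x_true
x_hat = omp(A, y, 3)
```

The re-fitting step makes the residual orthogonal to every selected atom, which is what distinguishes OMP from plain matching pursuit and what the paper's exact-recovery conditions are stated for.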
 
[9] "Iterative weighted maximum likelihood denoising with probabilistic patch-based weights," C. Deledalle, L. Denis, and F. Tupin, IEEE trans. image processing, 18, 12, 2009. (pdf, doi, abstract Abstract
Image denoising is an important problem in image processing since noise may interfere with visual or automatic interpretation. This paper presents a new approach for image denoising in the case of a known uncorrelated noise model. The proposed filter is an extension of the nonlocal means (NL means) algorithm introduced by Buades et al., which performs a weighted average of the values of similar pixels. Pixel similarity is defined in NL means as the Euclidean distance between patches (rectangular windows centered on each two pixels). In this paper, a more general and statistically grounded similarity criterion is proposed which depends on the noise distribution model. The denoising process is expressed as a weighted maximum likelihood estimation problem where the weights are derived in a data-driven way. These weights can be iteratively refined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. We show that this iterative process noticeably improves the denoising performance, especially in the case of low signal-to-noise ratio images such as synthetic aperture radar (SAR) images. Numerical experiments illustrate that the technique can be successfully applied to the classical case of additive Gaussian noise but also to cases such as multiplicative speckle noise. The proposed denoising technique appears to improve on state-of-the-art performance in the latter case.
).
 
[8] "Joint regularization of phase and amplitude of InSAR data: application to 3D reconstruction," L. Denis, F. Tupin, J. Darbon, and M. Sigelle, IEEE trans. geoscience and remote sensing, 47, 11, 2009. (pdf, doi, abstract Abstract
Interferometric synthetic aperture radar (SAR) images suffer from strong noise, and their regularization is often a prerequisite for successful use of their information. Independently of the unwrapping problem, interferometric phase denoising is a difficult task due to shadows and discontinuities. In this paper, we propose to jointly filter phase and amplitude data in a Markovian framework. The regularization term is expressed by the minimization of the total variation and may combine different information (phase, amplitude, optical data). First, a fast and approximate optimization algorithm for vectorial data is briefly presented. Then, two applications are described. The first one is a direct application of this algorithm for 3-D reconstruction in urban areas with very high resolution images. The second one is an adaptation of this framework to the fusion of SAR and optical data. Results on aerial SAR images are presented.
).
 
[7] "SAR Image Regularization with Fast Approximate Discrete Minimization," L. Denis, F. Tupin, J. Darbon, and M. Sigelle, IEEE trans. image processing, 18, 7, 2009. (pdf, doi, abstract Abstract
Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the alpha-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.
).
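As an illustration of the kind of energy this paper minimizes, here is a toy numpy sketch that scores a candidate image (the scoring only, not the authors' graph-cut solver; the exponential single-look intensity model and the λ weighting are illustrative assumptions):

```python
import numpy as np

def tv(u):
    """Anisotropic total variation: sum of absolute forward differences."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def speckle_neg_log_likelihood(u, v):
    """Negative log-likelihood of single-look intensity speckle:
    v ~ Exp(mean u), i.e. -log p(v|u) = log(u) + v/u (up to constants)."""
    return np.sum(np.log(u) + v / u)

def mrf_energy(u, v, lam):
    """Data fidelity plus TV penalty: the form of energy minimized by graph cuts."""
    return speckle_neg_log_likelihood(u, v) + lam * tv(u)

# toy usage: a piecewise-constant reflectivity scored against noisy intensities
rng = np.random.default_rng(0)
u_true = np.ones((8, 8)); u_true[:, 4:] = 4.0
v = rng.exponential(u_true)            # single-look speckle
print(mrf_energy(u_true, v, lam=1.0))
```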
 

2008

[6] "Digital holography of particles: benefits of the 'inverse problem' approach," J. Gire, L. Denis, C. Fournier, C. Ducottet, E. Thiebaut, and F. Soulez, Meas. Sci. Tech., 19, 2008. (pdf, doi, abstract Abstract
The potential of in-line digital holography to locate and measure the size of particles distributed throughout a volume (in one shot) has been established. These measurements are fundamental for the study of particle trajectories in fluid flow. The most important issues in digital holography today are poor depth positioning accuracy, transverse field-of-view limitations, border artifacts and computational burdens. We recently suggested an 'inverse problem' approach to address some of these issues for the processing of particle digital holograms. The described algorithm improves axial positioning accuracy, gives particle diameters with sub-micrometer accuracy, eliminates border effects and increases the size of the studied volume. This approach for processing particle holograms pushes back some classical constraints. For example, the Nyquist criterion is no longer a restriction for the recording step and the studied volume is no longer confined to the field of view delimited by the sensor borders. In this paper we present a review of the limitations commonly found in digital holography. We then discuss the benefits of the 'inverse problem' approach and the influence of some experimental parameters in this framework.
).
 
[5] "Numerical suppression of the twin-image in in-line holography of a volume of micro-objects," L. Denis, C. Fournier, T. Fournel, and C. Ducottet, Meas. Sci. Tech., 19, 2008. (pdf, doi, abstract Abstract
We address the twin-image problem that arises in holography due to the lack of phase information in intensity measurements. This problem is of great importance in in-line holography where spatial elimination of the twin image cannot be carried out as in off-axis holography. A unifying description of existing digital suppression methods is given in the light of deconvolution techniques. Holograms of objects spread in 3D cannot be processed through available approaches. We suggest an iterative algorithm and demonstrate its efficacy on both simulated and real data. This method is suitable to enhance the reconstructed images from a digital hologram of small objects.
).
 

2007

[4] "Inverse problem approach for particle digital holography: out-of-field particle detection made possible," F. Soulez, L. Denis, E. Thiébaut, C. Fournier, and C. Goepfert, J. Opt. Soc. Am. A, 24 (12), 3708-3716, 2007. (pdf, doi, abstract Abstract
We propose a microparticle detection scheme in digital holography. In our inverse problem approach, we estimate the optimal set of particles that best models the observed hologram image. Such a method can deal with data that have missing pixels. By considering the camera as a truncated version of a wider sensor, it becomes possible to detect particles even out of the camera field of view. We tested the performance of our algorithm against simulated and experimental data for diluted particle conditions. With real data, our algorithm can detect particles far from the detector edges in a working area as large as 16 times the camera field of view. A study based on simulated data shows that, compared with classical methods, our algorithm greatly improves the precision of the estimated particle positions and radii. This precision does not depend on the particle's size or location (i.e., whether inside or outside the detector field of view).
).
 
[3] "Inverse problem approach for particle digital holography: accurate location based on local optimisation," F. Soulez, L. Denis, C. Fournier, E. Thiébaut, and C. Goepfert, J. Opt. Soc. Am. A, 24 (4), 1164-1171, 2007. (pdf, doi, abstract Abstract
We propose a microparticle localization scheme in digital holography. Most conventional digital holography methods are based on the Fresnel transform and present several problems such as twin-image noise and border effects. To avoid these difficulties, we propose an inverse-problem approach, which yields the optimal particle set that best models the observed hologram image. We solve this global optimization problem by conventional particle detection followed by a local refinement for each particle. Results for both simulated and real digital holograms show strong improvement in the localization of the particles, particularly along the depth dimension. In our simulations, the position precision is <1 micron rms. Our results also show that the localization precision does not deteriorate for particles near the edge of the field of view.
).
 
[2] "Reconstruction of the rose of directions from a digital micro-hologram of fibers," L. Denis, T. Fournel, C. Fournier, and D. Jeulin, J. Microsc., 225 (3), 282-291, 2007. (pdf, doi, abstract Abstract
Digital holography makes it possible to quickly acquire the interference patterns of objects spread in a volume. The digital processing of the fringes is still too slow to achieve on-line analysis of the holograms. We describe a new approach to obtain information on the direction of illuminated objects. The key idea is to avoid reconstruction of the volume followed by classical three-dimensional image processing. The hologram is processed using a global analysis based on autocorrelation. A fundamental property of diffraction patterns leads to an estimate of the mean geometric covariogram of the objects' projections. The rose of directions is connected with the mean geometric covariogram through an inverse problem. In the general case, only the two-dimensional rose of the object projections can be reconstructed. The further assumption of unique-size objects gives access, with the knowledge of this size, to the three-dimensional direction information. An iterative scheme is suggested to reconstruct the three-dimensional rose in this special case. Results are provided on holograms of paper fibres.
).
 

2006

[1] "Direct extraction of mean particle size from a digital hologram," L. Denis, C. Fournier, T. Fournel, C. Ducottet, and D. Jeulin, Applied Optics, 45 (5), 944-952, 2006. (pdf, doi, abstract Abstract
Digital holography, which consists of both acquiring the hologram image in a digital camera and numerically reconstructing the information, offers new and faster ways to make the most of a hologram. We describe a new method to determine the rough size of particles in an in-line hologram. This method relies on a property that is specific to interference patterns in Fresnel holograms: Self-correlation of a hologram provides access to size information. The proposed method is both simple and fast and gives results with acceptable precision. It suppresses all the problems related to the numerical depth of focus when large depth volumes are analyzed.
).
 



Conference papers


2017

[44] "Robust Object Characterization from Lensless Microscopy Videos," O Flasseur, L Denis, C Fournier, E Thiébaut, EUSIPCO, 2017.
 
[43] "Similarity criterion for SAR tomography over dense urban area," C Rambour, L Denis, F Tupin, JM Nicolas, H Oriot, L Ferro-Famil, C Deledalle, IEEE IGARSS, 2017.
 
[42] "Double MRF for water classification in SAR images by joint detection and reflectivity estimation," S Lobry, L Denis, F Tupin, R Fjortoft, IEEE IGARSS, 2017.
 

2016

[41] "Fast and robust exo-planet detection in multi-spectral, multi-temporal data, E Thiébaut, L Denis, L Mugnier, A Ferrari, D Mary, M Langlois, F Cantalloube, N Devaney, SPIE Adaptive Optics Systems, 2016 (abstract Abstract
Exo-planet detection is a signal processing problem that can be addressed by several detection approaches. This paper provides a review of methods from detection theory that can be applied to detect exo-planets in coronagraphic images such as those provided by SPHERE and GPI. In the first part, we recall the basics of signal detection and describe how to derive a fast and robust detection criterion based on a heavy-tailed model that can account for outliers in the residuals. In the second part, we derive detectors that handle several wavelengths and exposures jointly, and focus on an approach that avoids interpolating the data, thereby preserving the statistics of the original data.
).
 
[40] "Spatially variant PSF modeling and image deblurring," E Thiébaut, L Denis, F Soulez, R Mourya, SPIE Adaptive Optics Systems, 2016 (abstract Abstract
Most current imaging instruments have a spatially variant point spread function (PSF). An optimal exploitation of these instruments requires accounting for this non-stationarity. We review existing models of spatially variant PSF with an emphasis on those which are not only accurate but also fast, because getting rid of non-stationary blur can only be done by iterative methods.
).
 
[39] "A decomposition model for scatterers change detection in multi-temporal series of SAR images," S Lobry, L Denis, F Tupin, IEEE IGARSS, 2016 (doi, abstract Abstract
This paper presents a method for strong-scatterer change detection in synthetic aperture radar (SAR) images based on a decomposition of multi-temporal series. The formulated decomposition model jointly estimates the background of the series and the scatterers. The decomposition model retrieves possible changes in scatterers and the date at which they occurred. An exact optimization method of the model is presented and applied to a TerraSAR-X time series.
).
 
[38] "Fast and robust detection of a known pattern in an image," L Denis, A Ferrari, D Mary, L Mugnier, E Thiébaut, EUSIPCO, 2016 (doi, abstract Abstract
Many image processing applications require detecting a known pattern buried under noise. While maximum correlation can be implemented efficiently using fast Fourier transforms, detection criteria that are robust to the presence of outliers are typically slower by several orders of magnitude. We derive the general expression of a robust detection criterion based on the theory of locally optimal detectors. The expression of the criterion is attractive because it offers a fast implementation based on correlations. Application of this criterion to a Cauchy likelihood gives good detection performance in the presence of outliers, as shown in our numerical experiments. Special attention is given to proper normalization of the criterion in order to account for truncation at the image borders and noise with a non-stationary dispersion.
).
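The fast correlation this criterion builds on can be sketched in a few lines of numpy (a plain FFT cross-correlation with a zero-mean template; the robust Cauchy-based weighting of the paper is not reproduced here):

```python
import numpy as np

def correlation_map(image, pattern):
    """Cross-correlation of the image with a zero-mean pattern, computed in
    O(N log N) via FFTs (the fast building block the criterion relies on)."""
    p = np.zeros_like(image, dtype=float)
    ph, pw = pattern.shape
    p[:ph, :pw] = pattern - pattern.mean()
    # circular correlation = IFFT of image spectrum times conjugate pattern spectrum
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(p))))

# usage: plant the pattern in noise, check the correlation peaks at its location
rng = np.random.default_rng(1)
pattern = np.outer([1., 2., 1.], [1., 2., 1.])
image = 0.1 * rng.standard_normal((32, 32))
image[10:13, 20:23] += pattern
peak = np.unravel_index(np.argmax(correlation_map(image, pattern)), (32, 32))
print(peak)  # (10, 20): the planted position
```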
 

2015

[37] "Semi-blind joint super-resolution/segmentation of 3D trabecular bone images by a TV box approach," F Peyrin, A Toma, B Sixou, L Denis, A Burghardt, JB Pialat, EUSIPCO, 2015 (doi, abstract Abstract
The investigation of bone fragility diseases, such as osteoporosis, is based on the analysis of the trabecular bone microarchitecture. The aim of this paper is to improve the in-vivo trabecular bone segmentation and quantification by increasing the resolution of bone micro-architecture images. We propose a semi-blind joint super-resolution/segmentation approach based on a Total Variation regularization with a convex constraint. A comparison with the bicubic interpolation method and the non-blind version of the proposed method is shown. The validation is performed on blurred, noisy and down-sampled 3D synchrotron micro-CT bone images. Good estimates of the blur and of the high resolution image are obtained with the semi-blind approach. Preliminary results are obtained with the semi-blind approach on real HR-pQCT images.
).
 
[36] "A blind deblurring and image decomposition approach for astronomical image restoration," R Mourya, L Denis, JM Becker, E Thiébaut, EUSIPCO, 2015 (doi, abstract Abstract
With the progress of adaptive optics systems, ground-based telescopes acquire images with improved resolutions. However, compensation for atmospheric turbulence is still partial, which leaves good scope for digital restoration techniques to recover fine details in the images. A blind image deblurring algorithm for a single long-exposure image is proposed, which is an instance of maximum-a-posteriori estimation posed as a constrained non-convex optimization problem. A view of the sky contains mainly two types of sources: point-like and smooth extended sources. The algorithm takes this fact into account explicitly by imposing different priors on these components, and recovers two separate maps for them. Moreover, an appropriate prior on the blur kernel is also considered. The resulting optimization problem is solved by alternating minimization. The initial experimental results on synthetically corrupted images are promising: the algorithm is able to restore the fine details in the image and recover the point spread function.
).
 
[35] "Augmented lagrangian without alternating directions: Practical algorithms for inverse problems in imaging," R Mourya, L Denis, JM Becker, E Thiébaut, IEEE ICIP, 2015 (doi, abstract Abstract
Several problems in signal processing and machine learning can be cast as optimization problems. In many cases, they are large-scale, nonlinear, constrained, and may be nonsmooth in the unknown parameters. There exists a plethora of fast algorithms for smooth convex optimization, but these algorithms are not readily applicable to nonsmooth problems, which has led to a considerable amount of research in this direction. In this paper, we propose a general algorithm for nonsmooth bound-constrained convex optimization problems. Our algorithm is an instance of the so-called augmented Lagrangian approach, for which theoretical convergence is well established for convex problems. The proposed algorithm is a blend of a superlinearly convergent limited-memory quasi-Newton method and a proximal projection operator. Initial promising numerical results for total-variation-based image deblurring show that it is as fast as the best existing algorithms in the same class, but with fewer and less sensitive tuning parameters, which makes a huge difference in practice.
).
 
[34] "Combining patch-based estimation and total variation regularization for 3D InSAR reconstruction," C Deledalle, L Denis, G Ferraioli, F Tupin, IEEE IGARSS, 2015 (doi, abstract Abstract
In this paper we propose a new approach for height retrieval using multi-channel SAR interferometry. It combines patch-based estimation and total variation regularization to provide a regularized height estimate. The adaptation of the non-local likelihood term relies on the NL-SAR method, and the global optimization is realized through graph-cut minimization. The method is evaluated on both synthetic and real experiments.
).
 
[33] "Patch-based SAR image classification: The potential of modeling the statistical distribution of patches with Gaussian mixtures.," S Tabti, C Deledalle, L Denis, F Tupin, IEEE IGARSS, 2015 (doi, abstract Abstract
Due to their coherent nature, SAR (Synthetic Aperture Radar) images are very different from optical satellite images and more difficult to interpret, especially because of speckle noise. Given the increasing amount of available SAR data, efficient image processing techniques are needed to ease the analysis. Classifying this type of image, i.e., selecting an adequate label for each pixel, is a challenging task. This paper describes a supervised classification method based on local features derived from a Gaussian mixture model (GMM) of the distribution of patches. First classification results are encouraging and suggest an interesting potential of the GMM model for SAR imaging.
).
 
[32] " Sparse + smooth decomposition models for multi-temporal SAR images," S Lobry, L Denis, F Tupin, Multi-Temp, 2015 (doi, abstract Abstract
SAR images have distinctive characteristics compared to optical images: speckle phenomenon produces strong fluctuations, and strong scatterers have radar signatures several orders of magnitude larger than others. We propose to use an image decomposition approach to account for these peculiarities. Several methods have been proposed in the field of image processing to decompose an image into components of different nature, such as a geometrical part and a textural part. They are generally stated as an energy minimization problem where specific penalty terms are applied to each component of the sought decomposition. We decompose temporal series of SAR images into three components: speckle, strong scatterers and background. Our decomposition method is based on a discrete optimization technique by graph-cut. We apply it to change detection tasks.
).
 

2014

[31] "Building invariance properties for dictionaries of SAR image patches," S Tabti, C Deledalle, L Denis, F Tupin, IEEE IGARSS, 2014 (doi, abstract Abstract
Adding invariance properties to a dictionary-based model is a convenient way to reach a high representation capacity while maintaining a compact structure. Compact dictionaries of patches are desirable because they ease semantic interpretation of their elements (atoms) and offer robust decompositions even under strong speckle fluctuations. This paper describes how patches of a dictionary can be matched to a speckled image by accounting for unknown shifts and affine radiometric changes. This procedure is used to build dictionaries of patches specific to SAR images. The dictionaries can then be used for denoising or classification purposes.
).
 
[30] "Total variation super-resolution for 3D trabecular bone micro-structure segmentation," A Toma, L Denis, B Sixou, JB Pialat, F Peyrin, EUSIPCO, 2014 (abstract Abstract
The analysis of the trabecular bone micro-structure plays an important role in studying bone fragility diseases such as osteoporosis. In this context, X-ray CT techniques are increasingly used to image bone micro-architecture. The aim of this paper is to improve the segmentation of the bone micro-structure for further bone quantification. We propose a joint super-resolution/segmentation method based on total variation with a convex constraint. The minimization is performed with the Alternating Direction Method of Multipliers (ADMM). The new method is compared with the bicubic interpolation method and the classical total variation regularization. All methods were tested on blurred, noisy and down-sampled 3D synchrotron micro-CT bone volumes. Improved segmentation is obtained with the proposed joint super-resolution/segmentation method.
).
 
[29] "Modeling the distribution of patches with shift-invariance : application to SAR image restoration," S Tabti, C Deledalle, L Denis, F Tupin, IEEE ICIP, 2014 (doi, abstract Abstract
Patches have proven to be very effective features to model natural images and to design image restoration methods. Given the huge diversity of patches found in images, modeling the distribution of patches is a difficult task. Rather than attempting to accurately model all patches of the image, we advocate that it is sufficient that all pixels of the image belong to at least one well-explained patch. An image is thus described as a tiling of patches that have large prior probability. In contrast to most patch-based approaches, we do not process the image in patch space, and consider instead that patches should match well everywhere they overlap. In order to apply this modeling to the restoration of SAR images, we define a suitable data-fitting term to account for the statistical distribution of speckle. Restoration results are competitive with state-of-the-art SAR despeckling methods.
).
 
[28] "Higher order total variation super-resolution from a single trabecular bone image," A Toma, B Sixou, L Denis, JB Pialat, F Peyrin, IEEE ISBI, 2014 (doi, abstract Abstract
Osteoporosis is characterized by a low bone mass density and deterioration of bone micro-architecture. Despite the considerable progress in Computed Tomography (CT), the investigation of 3D trabecular bone micro-architecture in-vivo remains limited due to a lack of spatial resolution compared to the trabeculae size. To improve the analysis of trabecular bone from in-vivo CT images, we investigate super-resolution methods to estimate a higher spatial resolution image from a single lower spatial resolution image. To solve this inverse problem, we considered two regularization strategies involving first or second order differential operators. The methods are tested on experimental micro-CT trabecular bone images at 20 micrometers, which are used as reference images. The first tests suggest that both methods give similar results, but total variation regularization implemented with the alternating direction method of multipliers algorithm is more effective at correctly recovering some structural parameters.
).
 

2013

[27] "Template Matching with Noisy Patches: A Contrast-Invariant GLR Test," C. Deledalle, L. Denis, and F. Tupin, EUSIPCO, Marrakech, September 2013 (pdf, abstract Abstract
Matching patches from a noisy image to atoms in a dictionary of patches is a key ingredient to many techniques in image processing and computer vision. By representing with a single atom all patches that are identical up to a radiometric transformation, dictionary size can be kept small, thereby retaining good computational efficiency. Identification of the atom in best match with a given noisy patch then requires a contrast-invariant criterion. In the light of detection theory, we propose a new criterion that ensures contrast invariance and robustness to noise. We discuss its theoretical grounding and assess its performance under Gaussian, gamma and Poisson noises.
).
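The contrast-invariance idea above can be illustrated with a simple least-squares stand-in for the paper's GLR criterion: fit an affine radiometric change of the atom to the noisy patch and score the residual (illustrative only; the function name is ours):

```python
import numpy as np

def affine_invariant_residual(patch, atom):
    """Least-squares fit patch ≈ a*atom + b, return the residual energy.
    Invariant to affine radiometric changes of the atom (an illustrative
    stand-in for the contrast-invariant GLR criterion of the paper)."""
    q = atom.ravel()
    p = patch.ravel()
    A = np.stack([q, np.ones_like(q)], axis=1)       # design matrix [atom, 1]
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)     # optimal (a, b)
    return np.sum((p - A @ coef) ** 2)

atom = np.linspace(0., 1., 16).reshape(4, 4)
same = 3.0 * atom - 0.5            # same atom under a contrast/offset change
print(affine_invariant_residual(same, atom))  # ~0: a perfect match
```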
 
[26] "Fast Diffraction-pattern matching for object detection and recognition in digital holograms," M. Seifi, L. Denis and C. Fournier, EUSIPCO, Marrakech, September 2013 ( abstract Abstract
A digital hologram is a 2-D recording of the diffraction fringes created by 3-D objects under coherent lighting. These fringes encode the shape and 3-D location information of the objects. By simulating re-lighting of the hologram, the 3-D wave field can be reconstructed and a volumetric image of the objects recovered. Rather than performing object detection and identification in this reconstructed volume, we consider direct recognition of diffraction patterns in in-line holograms and show that it leads to superior performance. The huge variability of diffraction patterns with object shape and 3-D location makes diffraction-pattern matching computationally expensive. We suggest the use of a dimensionality reduction technique to circumvent this limitation and show good detection and recognition performance both on simulated and experimental holograms.
).
 
[25] "Dictionary size reduction for a faster object recognition in digital holography," C. Fournier, L. Denis, M. Seifi, and T. Fournel, Workshop on Information Optics (WIO), Tenerife, July 2013 ( abstract Abstract
Pattern matching methods can be used in the context of digital holography to perform object recognition, classification and position extraction directly from the hologram rather than from the reconstructed optical field. These approaches exploit the differences between the objects' holographic signatures caused by the class and depth position of the objects. In this talk we will show that such inter-signature variabilities can be captured efficiently in a lower-dimensional vector space using dimensionality reduction methods.
).
 

2012

[24] "Blind deconvolution of 3D data in wide field fluorescence microscopy," F. Soulez, L. Denis, Y. Tourneur, and E. Thiébaut, IEEE International Symposium on Biomedical Imaging (ISBI), Barcelona, April 2012 (pdf, HAL, abstract Abstract
In this paper we propose a blind deconvolution algorithm for wide field fluorescence microscopy. The 3D PSF is modeled after a parametrized pupil function. The PSF parameters are estimated jointly with the object in a maximum a posteriori framework. We illustrate the performance of our algorithm on experimental data and show significant resolution improvement, notably along the depth. Quantitative measurements on images of calibration beads demonstrate the benefits of blind deconvolution both in terms of contrast and resolution compared to non-blind deconvolution using a theoretical PSF.
).
 

2011

[23] "Fast model of space-variant blurring and its application to deconvolution in astronomy," L. Denis, E. Thiébaut, and F. Soulez, IEEE International Conference on Image Processing (ICIP), Brussels, September 2011 (pdf, abstract Abstract
Image deblurring is essential to high resolution imaging and is therefore widely used in astronomy, microscopy or computational photography. While shift-invariant blur is modeled by convolution and leads to fast FFT-based algorithms, shift-variant blurring requires models both accurate and fast. When the point spread function (PSF) varies smoothly across the field, these two opposite objectives can be reached by interpolating from a grid of PSF samples. Several models for smoothly varying PSF co-exist in the literature. We advocate that one of them is both physically-grounded and fast. Moreover, we show that the approximation can be largely improved by tuning the PSF samples and interpolation weights with respect to a given continuous model. This improvement comes without increasing the computational cost of the blurring operator. We illustrate the developed blurring model on a deconvolution application in astronomy. Regularized reconstruction with our model leads to large improvements over existing results.
,poster).
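A minimal numpy sketch of the interpolated-PSF blur model discussed above, y(x) = Σ_k w_k(x) (h_k ∗ u)(x): one FFT convolution per PSF sample, mixed by per-pixel interpolation weights (the circular boundary conditions and toy PSFs are assumptions for brevity):

```python
import numpy as np

def conv2_circ(u, h):
    """Circular 2-D convolution via FFTs."""
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(h, s=u.shape)))

def shift_variant_blur(u, psfs, weights):
    """Smoothly varying blur approximated by interpolating between PSF samples:
    y = sum_k weights[k] * (psfs[k] convolved with u), pointwise-weighted."""
    return sum(w * conv2_circ(u, h) for h, w in zip(psfs, weights))

# usage: blend a sharp and a blurred PSF from the left to the right of the field
n = 16
u = np.zeros((n, n)); u[8, 4] = 1.0; u[8, 12] = 1.0   # two point sources
h_sharp = np.zeros((n, n)); h_sharp[0, 0] = 1.0       # identity PSF
h_blur = np.zeros((n, n)); h_blur[:2, :2] = 0.25      # small box blur
w_right = np.tile(np.linspace(0., 1., n), (n, 1))     # 0 at left, 1 at right
y = shift_variant_blur(u, [h_sharp, h_blur], [1.0 - w_right, w_right])
```

The left source stays nearly sharp while the right one is spread out, even though only two convolutions were computed.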
 
[22] "Patch similarity under non gaussian noise," C. Deledalle, F. Tupin, and L. Denis, IEEE International Conference on Image Processing (ICIP), Brussels, September 2011 (pdf, abstract Abstract
Many tasks in computer vision require matching image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level approaches compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient to many techniques for image registration, stereo-vision, change detection or denoising. A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or to intrinsic dissimilarity. The Gaussian noise assumption leads to the classical definition of patch similarity based on the squared intensity differences. When the noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature. We review seven of those criteria taken from the fields of image processing, detection theory and machine learning. We discuss their theoretical grounding and provide a numerical comparison of their performance under Gamma and Poisson noises.
).
 
[21] "Influence of speckle filtering of polarimetric SAR data on different classification methods," F. Cao, C. Deledalle, J.-M. Nicolas, F. Tupin, L. Denis, L. Ferro-Famil, E. Pottier, and C. Lopez-Martinez, IEEE International Geoscience and Remote Sensing Symposium, Vancouver, July 2011.
 
[20] "Inverse problem approach for digital hologram reconstruction," C. Fournier, L. Denis, E. Thiébaut, T. Fournel and M. Seifi, SPIE 3D Imaging, Visualization and Display, Orlando, April 2011 (pdf, abstract Abstract
Digital holography (DH) is being increasingly used for its time-resolved three-dimensional (3-D) imaging capabilities. A 3-D volume can be numerically reconstructed from a single 2-D hologram. Applications of DH range from experimental mechanics to biology and fluid dynamics. Improvement and characterization of the 3-D reconstruction algorithms is a current issue. Over the past decade, numerous algorithms for the analysis of holograms have been proposed. They are mostly based on a common approach to hologram processing: digital reconstruction based on the simulation of hologram diffraction. They suffer from artifacts intrinsic to holography: twin-image contamination of the reconstructed images, and image distortions for objects located close to the hologram borders. The analysis of the reconstructed planes is therefore limited by these defects. In contrast to this approach, the inverse problems perspective does not transform the hologram but performs object detection and location by matching a model of the hologram. Information is thus extracted from the hologram in an optimal way, leading to two essential results: an improvement of the axial accuracy and the capability to extend the reconstructed field beyond the physical limit of the sensor size (out-of-field reconstruction). These improvements come at the cost of an increased computational load compared to (typically non-iterative) classical approaches.
).
 

2010

[19] "Exact discrete minimization for TV+L0 image decomposition models," L. Denis, F. Tupin, X. Rondeau, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract Abstract
Penalized maximum likelihood denoising approaches seek a solution that fulfills a compromise between data fidelity and agreement with a prior model. Penalization terms are generally chosen to enforce smoothness of the solution and to reject noise. The design of a proper penalization term is a difficult task as it has to capture image variability. Image decomposition into two components of different nature, each given a different penalty, is a way to enrich the modeling. We consider the decomposition of an image into a component with bounded variations and a sparse component. The corresponding penalization is the sum of the total variation of the first component and the L0 pseudo-norm of the second component. The minimization problem is highly non-convex, but can still be globally minimized by a minimum s-t-cut computation on a graph. The decomposition model is applied to synthetic aperture radar image denoising.
, slides).
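The decomposition energy of this paper can be sketched as follows (a quadratic data term stands in for the log-likelihood, and this only scores a candidate decomposition; it is not the graph-cut minimizer):

```python
import numpy as np

def tv(u):
    """Anisotropic total variation of a 2-D array."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def tv_l0_energy(u, s, v, lam, mu):
    """Score a decomposition of image v into a bounded-variation component u
    and a sparse component s: data fit + lam*TV(u) + mu*||s||_0."""
    data_fit = np.sum((v - u - s) ** 2)
    return data_fit + lam * tv(u) + mu * np.count_nonzero(s)

# usage: a piecewise-constant background plus one bright outlier
v = np.ones((5, 5)); v[2, 2] = 50.0
u = np.ones((5, 5))                     # smooth component
s = np.zeros((5, 5)); s[2, 2] = 49.0    # sparse component absorbs the outlier
print(tv_l0_energy(u, s, v, lam=1.0, mu=2.0))  # 0 + 0 + 2 = 2.0
```

Explaining the outlier with the sparse component costs only the L0 penalty, whereas keeping it in the smooth component would pay a large TV penalty, which is why the decomposition separates the two.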
 
[18] "Poisson NL-Means: Unsupervised non-local means for Poisson noise," C. Deledalle, F. Tupin, L. Denis, IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010 (pdf, abstract Abstract
An extension of the non-local (NL) means is proposed for images damaged by Poisson noise. The proposed method is guided by the noisy image and a pre-filtered image and is adapted to the statistics of Poisson noise. The influence of both images can be tuned using two filtering parameters. We propose an automatic setting to select these parameters based on the minimization of the estimated risk (mean square error). This selection uses an estimator of the MSE for NL means with Poisson noise and Newton's method to find the optimal parameters in a few iterations.
, slides).
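A toy 1-D sketch of NL-means on Poisson counts; here the patch distance is computed on Anscombe-stabilized values as a simple stand-in for the likelihood-based similarity and the risk-driven parameter selection of the paper:

```python
import numpy as np

def anscombe(v):
    """Variance-stabilizing transform for Poisson counts."""
    return 2.0 * np.sqrt(v + 3.0 / 8.0)

def nl_means_1d(v, half_patch=1, h=1.0):
    """Toy 1-D NL-means: each sample is replaced by a weighted average of all
    samples, with weights decaying with the squared distance between
    Anscombe-stabilized patches (illustrative adaptation to Poisson noise)."""
    a = anscombe(v.astype(float))
    n = len(v)
    out = np.zeros(n)
    for i in range(n):
        acc, wsum = 0.0, 0.0
        for j in range(n):
            d = 0.0
            for k in range(-half_patch, half_patch + 1):
                if 0 <= i + k < n and 0 <= j + k < n:
                    d += (a[i + k] - a[j + k]) ** 2
            w = np.exp(-d / (h * h))
            acc += w * v[j]
            wsum += w
        out[i] = acc / wsum
    return out

counts = np.array([3, 4, 3, 4, 20, 21, 20, 21])   # two flat zones
print(np.round(nl_means_1d(counts, h=2.0), 1))
```

Samples average almost exclusively with their own flat zone: the stabilized patch distance across the two zones is large, so the edge between them is preserved.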
 
[17] "Polarimetric SAR estimation based on non-local means," C. Deledalle, F. Tupin, L. Denis, IEEE International Geoscience and Remote Sensing Symposium, Honolulu, July 2010 ( abstract Abstract
Recently, non-local approaches have proved relevant for image restoration. Unlike local filters, the non-local (NL) means decrease the noise while preserving the resolution well. In the proposed paper, we suggest the use of a non-local approach to estimate single-look SAR reflectivity images or to construct SAR interferograms. SAR interferogram construction refers to the joint estimation of the reflectivity, phase difference and coherence image from a pair of co-registered single-look complex SAR images. This paper is composed of four sections. Section 2 recalls the non-local (NL) means. Weighted maximum likelihood is then introduced in Section 3 as a generalization of the weighted average performed in the NL means. In Section 4, we propose to set the weights according to the probability of similarity, which provides an extension of the Euclidean distance used in the NL means. Finally, experiments and results are presented in Section 5 to show the efficiency of the proposed approach.
, slides).
 
[16] "A non-local approach for SAR and interferometric SAR denoising," C. Deledalle, F. Tupin, L. Denis, IEEE International Geoscience and Remote Sensing Symposium, Honolulu, July 2010 ( abstract Abstract
During the past few years, the non-local (NL) means have proved their efficiency for image denoising. This approach assumes there exist enough redundant patterns in images to be used for noise reduction. We suggest that the same assumption can be made for polarimetric synthetic aperture radar (PolSAR) images. In its original version, the NL means deal with additive white Gaussian noise, but several extensions have been proposed for non-Gaussian noise. This paper applies the methodology proposed in [9] to PolSAR data. The proposed filter seems to deal well with the statistical properties of speckle noise and the multi-dimensional nature of such data. Results are given on synthetic and L-band E-SAR data to validate the proposed method.
, slides).
 
[15] "Glacier monitoring: correlation versus texture tracking," C. Deledalle, J.M. Nicolas, F. Tupin, L. Denis, R. Fallourd, E. Trouvé, IEEE International Geoscience and Remote Sensing Symposium, Honolulu, July 2010.
 
[14] "A Comparative Review of SAR Images Speckle Denoising Methods Based on Functional Minimization," J-F. Aujol, E. Bratsolis, J. Darbon, L. Denis, J-M. Nicolas, X. Rondeau, M. Sigelle and F. Tupin, SIAM Conference on Imaging Science, Chicago, 12-14 April 2010 -- This work was presented by Marc Sigelle.
 

2009

[13] "Resolution in in-line digital holography," C. Fournier, L. Denis, T. Fournel, Workshop on Information Optics, J. Phys.: Conf. Ser., 206, 012025, Paris, France, July 2009 (doi).
 
[12] "Lagrangian measurement of droplets in homogeneous isotropic turbulence by digital in-line holography," D. Chareyron, J-L. Marié, M. Lance, J. Gire, C. Fournier, L. Denis, 11th International Symposium on Gas-Liquid Two-Phase Flows, FEDSM2009, Vail (Colorado), 2-5 August 2009.
 
[11] "Digital holography measurements of Lagrangian trajectories and diameters of droplets in an isotropic turbulence," D. Chareyron, J.L. Marié, M. Lance, J. Gire, C. Fournier, L. Denis, 6th International Symposium on Multiphase Flow, Heat Mass Transfer and Energy Conversion, Xi'an (China), 11-15 July 2009.
 

2008

[10] "Joint filtering of SAR interferometric phase and amplitude data in urban areas by TV minimization," L. Denis, F. Tupin, J. Darbon and M. Sigelle, IEEE International Geoscience and Remote Sensing Symposium, Boston, 2008. (pdf, doi)
 
[9] "A regularization approach for InSAR and optical data fusion," L. Denis, F. Tupin, J. Darbon and M. Sigelle, IEEE International Geoscience and Remote Sensing Symposium, Boston, 2008. (pdf, doi)
 
[8] "SAR amplitude filtering using TV prior and its application to building delineation," L. Denis, F. Tupin, J. Darbon, M. Sigelle and C. Tison, 7th European Conference on Synthetic Aperture Radar, Friedrichshafen, Germany, 2008. (pdf, doi)
 
[7] "Signal to noise characterization of an inverse problem-based algorithm for digital inline holography," J. Gire, C. Ducottet, L. Denis, E. Thiebaut, F. Soulez, Proceedings of the International Symposium on Flow Visualization, (CDROM), S39:ID226. Nice: JP Prenel - Y Bailly, 2008. (pdf)
 

2007

[6] "Inverse problem approach for Digital Holographic Particle Tracking: Influence of the experimental parameters and benefits," C. Fournier, J. Gire, L. Denis, E. Thiebaut, F. Soulez, and C. Ducottet, Workshop on Digital Holographic Reconstruction and Tomography, Loughborough, England, April 2007.
 
[5] "Inverse Problem Approach for Particle Digital Holography: Field of View Extrapolation and Accurate Location," F. Soulez, E. Thiébaut, L. Denis, and C. Fournier, Adaptive Optics: Analysis and Methods / Computational Optical Sensing and Imaging / Information Photonics / Signal Recovery and Synthesis Topical Meetings, Vancouver, Canada, June 2007. (doi)
 
[4] "Inverse problem approach for particle digital holography: particle detection and accurate location," F. Soulez, L. Denis, C. Fournier, E. Thiébaut, and C. Goepfert, Proceedings of the Physics in Signal and Image Processing, Mulhouse, France, January 2007. (pdf)
 

2006

[3] "Digital Holography compared to Phase Doppler Anemometry: study of an experimental droplet flow," C. Fournier, C. Goepfert, J. L. Marié, L. Denis, F. Soulez, M. Lance and J. P. Schon, Proceedings of the 12th International Symposium on Flow Visualization, (ed. Optimage Ltd), ISBN: 0-9533991-8-4, 19.4, p228, Göttingen, Germany, September 2006.
 
[2] "Cleaning digital holograms to investigate 3D particle fields," L. Denis, T. Fournel, C. Fournier and C. Ducottet, Proceedings of the 12th International Symposium on Flow Visualization, (ed. Optimage Ltd), ISBN: 0-9533991-8-4, 69.4, p215, Göttingen, Germany, September 2006.
 

2005

[1] "Twin-image noise reduction by phase retrieval in in-line digital holography," L. Denis, C. Fournier, T. Fournel, C. Ducottet, Wavelets XI, SPIE's Symposium on Optical Science and Technology, vol 5914, pp 59140J, San Diego, CA, USA, 2005. (pdf, doi)
 

Teaching

When I was an assistant professor at CPE Lyon, I taught approximately 200 hours per year to students of the Electrical Engineering department of the CPE Lyon engineering school. Together with colleagues, I covered the following topics:

Image processing (lectures + lab sessions)

Computer graphics (lectures + lab sessions)

Unix systems programming (lab sessions)

Signals and Linear Systems (tutorials + lab sessions)

Misc

How to create a bibliographic style for LaTeX/BibTeX

Scientific journals have strictly defined bibliographic conventions for typesetting references. Unfortunately for LaTeX users, these journals do not always provide a bibliographic style (i.e., a .bst file). This page describes how to create one yourself and gives one such file I created for use with Journal of Microscopy.
 

Using makebst

A very useful tool for creating BibTeX styles is makebst. Its use is extremely simple: just run LaTeX on it (latex makebst) and answer a series of questions. The output is a .bst file that can then be used like any other bibliographic style.

Here is an extract of the questions you have to answer:

$ latex makebst
[...]
***********************************
* This is Make Bibliography Style *
***********************************
It makes up a docstrip batch job to produce
a customized .bst file for running with BibTeX
Do you want a description of the usage? (NO)

\yn=y
In the interactive dialogue that follows,
you will be presented with a series of menus.
In each case, one answer is the default, marked as (*),
and a mere carriage-return is sufficient to select it.
(If there is no * choice, then the default is the last choice.)
For the other choices, a letter is indicated
in brackets for selecting that option. If you select
a letter not in the list, default is taken.

The final output is a file containing a batch job
which may be (La)TeXed to produce the desired BibTeX
bibliography style file. The batch job may be edited
to make minor changes, rather than running this program
once again.

[...]
Name of the final OUTPUT .bst file? (default extension=bst)

\ofile=mystyle.bst

[...]
STYLE OF CITATIONS:
(*) Numerical as in standard LaTeX
(a) Author-year with some non-standard interface
(b) Alpha style, Jon90 or JWB90 for single or multiple authors
(o) Alpha style, Jon90 even for multiple authors
(f) Alpha style, Jones90 (full name of first author)
(c) Cite key (special for listing contents of bib file)
Select:

[...]
AUTHOR NAMES:
(*) Full, surname last (John Frederick Smith)
(f) Full, surname first (Smith, John Frederick)
(i) Initials + surname (J. F. Smith)
(r) Surname + initials (Smith, J. F.)
(s) Surname + dotless initials (Smith J F)
(x) Surname + pure initials (Smith JF)
(y) Surname + comma + pure initials (Smith, JF)
(z) Surname + spaceless initials (Smith J.F.)
(a) Only first name reversed, initials (AGU style: Smith, J. F., H. K. Jones)
(b) First name reversed, with full names (Smith, John Fred, Harry Kab Jones)
Select:
[...]
NUMBER OF AUTHORS:
(*) All authors included in listing
(l) Limited authors (et al replaces missing names)
Select:
[...]
TYPEFACE FOR AUTHORS IN LIST OF REFERENCES:
(*) Normal font for author names
(s) Small caps authors (\sc)
(i) Italic authors (\it or \em)
(b) Bold authors (\bf)
(u) User defined author font (\bibnamefont)
Select:
[...]

The new style mystyle.bst can then be used in your LaTeX file to typeset the bibliographic entries stored in your BibTeX file mybib.bib with the following two lines of code:

\bibliographystyle{mystyle}
\bibliography{mybib}
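
A complete minimal document using the generated style could look like this (the entry key gabor1948 is only a hypothetical example of a key that would be defined in mybib.bib):

\documentclass{article}
\begin{document}
Holography was introduced by Gabor \cite{gabor1948}. % gabor1948: hypothetical key

% mystyle.bst is the file produced by makebst;
% mybib.bib holds the bibliographic entries.
\bibliographystyle{mystyle}
\bibliography{mybib}
\end{document}

Remember to run latex, then bibtex, then latex twice more so that the citations and the reference list are fully resolved.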

Unofficial bibliographic style for Journal of Microscopy

Using the procedure described above, I created a .bst file for a paper I wrote and published in Journal of Microscopy. I answered the questions of the makebst script as best I could, but I cannot guarantee that the generated file fulfills all requirements of the journal. The file can be downloaded here (improved version by Michael Kopp here; see also below). Please contact me if you see any improvement to be made to the file, or if you want me to display a link to your own .bst file.

In addition to using the provided .bst file, you will have to load the natbib package, which provides variants of the LaTeX \cite command. The \citep command adheres to Journal of Microscopy's citation convention.
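
As a sketch of how this fits together (the entry key denis2005 is a hypothetical example, and this assumes the .bst file was generated with makebst's author-year citation option so that it is natbib-compatible):

\documentclass{article}
\usepackage{natbib}  % provides \citep, \citet and other \cite variants
\begin{document}
% \citep produces a parenthetical citation such as (Denis et al., 2005);
% \citet would instead produce a textual one: Denis et al. (2005).
Twin-image noise can be reduced by phase retrieval \citep{denis2005}.

\bibliographystyle{mystyle}
\bibliography{mybib}
\end{document}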

Michael Kopp sent me a .dbj file that can easily be modified to tune the way citations are displayed. His .bst file adheres more strictly to the Journal of Microscopy guidelines. To use a .dbj file, just run latex on it: latex microscopy.dbj.


Last update: Aug. 2017