Publications
Publications in reverse chronological order.
2024
- Black-Hole-to-Halo Mass Relation From UNIONS Weak Lensing. Qinxun Li, Martin Kilbinger, Wentao Luo, and 17 more authors. arXiv e-prints, Feb 2024
This letter presents, for the first time, direct constraints on the black-hole-to-halo-mass relation using weak gravitational lensing measurements. We construct type I and type II Active Galactic Nuclei (AGNs) samples from the Sloan Digital Sky Survey (SDSS), with a mean redshift of 0.4 (0.1) for type I (type II) AGNs. This sample is cross-correlated with weak lensing shear from the Ultraviolet Near Infrared Northern Survey (UNIONS). We compute the excess surface mass density of the halos associated with 36,181 AGNs from 94,308,561 lensed galaxies and fit the halo mass in bins of black-hole mass. We find that more massive AGNs reside in more massive halos. We see no evidence of dependence on AGN type or redshift in the black-hole-to-halo-mass relationship when systematic errors in the measured black-hole masses are included. Our results are consistent with previous measurements for non-AGN galaxies. At a fixed black-hole mass, our weak-lensing halo masses are consistent with galaxy rotation curves, but significantly lower than galaxy clustering measurements. Finally, our results are broadly consistent with state-of-the-art hydro-dynamical cosmological simulations, providing a new constraint for black-hole masses in simulations.
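The stacked excess surface mass density behind this measurement can be sketched as follows. This is an illustrative toy, not the paper's pipeline: the estimator form (an inverse-variance-style weighted stack of tangential shear times critical surface density in radial bins), the variable names, and the toy numbers are all assumptions for demonstration.

```python
import numpy as np

def stacked_delta_sigma(r, gamma_t, sigma_crit, weights, r_bins):
    """Stacked excess surface mass density estimator (illustrative):
    Delta-Sigma(R) ~ sum(w * Sigma_crit * gamma_t) / sum(w) per radial bin."""
    idx = np.digitize(r, r_bins) - 1
    n_bins = len(r_bins) - 1
    delta_sigma = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            w = weights[sel]
            delta_sigma[b] = np.sum(w * sigma_crit[sel] * gamma_t[sel]) / np.sum(w)
    return delta_sigma

# Toy lens-source pairs: a constant shear signal should be recovered exactly.
rng = np.random.default_rng(0)
r = rng.uniform(0.1, 10.0, 1000)        # projected radii [Mpc], toy values
sigma_crit = np.full(1000, 4000.0)      # critical surface density [M_sun / pc^2]
gamma_t = np.full(1000, 0.01)           # tangential shear
weights = np.ones(1000)
ds = stacked_delta_sigma(r, gamma_t, sigma_crit, weights, np.geomspace(0.1, 10.0, 6))
print(ds)  # every bin recovers 4000 * 0.01 = 40.0
```

Fitting a halo-mass model to such profiles in bins of black-hole mass, as the paper does, would then replace the constant-shear toy with a halo density profile prediction.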
2023
- Scalable Bayesian uncertainty quantification with data-driven priors for radio interferometric imaging. Tobías I. Liaudat, Matthijs Mars, Matthew A. Price, and 3 more authors. arXiv e-prints, Nov 2023
Next-generation radio interferometers like the Square Kilometre Array have the potential to unlock scientific discoveries thanks to their unprecedented angular resolution and sensitivity. One key to unlocking their potential resides in handling the deluge and complexity of incoming data. This challenge requires building radio interferometric imaging methods that can cope with the massive data sizes and provide high-quality image reconstructions with uncertainty quantification (UQ). This work proposes a method coined QuantifAI to address UQ in radio-interferometric imaging with data-driven (learned) priors for high-dimensional settings. Our model, rooted in the Bayesian framework, uses a physically motivated model for the likelihood. The model exploits a data-driven convex prior, which can encode complex information learned implicitly from simulations and guarantee the log-concavity of the posterior. We leverage probability concentration phenomena of high-dimensional log-concave posteriors that let us obtain information about the posterior, avoiding MCMC sampling techniques. We rely on convex optimisation methods to compute the MAP estimation, which is known to be faster and to scale better with dimension than MCMC sampling strategies. Our method allows us to compute local credible intervals, i.e., Bayesian error bars, and perform hypothesis testing of structure on the reconstructed image. In addition, we propose a novel blazing-fast method to compute pixel-wise uncertainties at different scales. We demonstrate our method by reconstructing radio-interferometric images in a simulated setting and carrying out fast and scalable UQ, which we validate with MCMC sampling. Our method shows an improved image quality and more meaningful uncertainties than the benchmark method based on a sparsity-promoting prior.
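The MAP-by-convex-optimisation idea at the core of this abstract can be sketched with a proximal gradient (ISTA) loop. This is a minimal stand-in, not QuantifAI: an l1 prior replaces the learned convex prior, and the operator, sizes, and parameters are toy assumptions; what carries over is that a log-concave posterior makes the MAP a convex problem solvable without MCMC.

```python
import numpy as np

def map_ista(y, H, lam, step, n_iter=500):
    """MAP estimation by proximal gradient (ISTA) for the convex objective
    0.5 * ||y - H x||^2 + lam * ||x||_1.
    The l1 prior stands in for a learned convex prior (assumption)."""
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)                 # gradient of the Gaussian likelihood
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
    return x

rng = np.random.default_rng(1)
x_true = np.zeros(50)
x_true[[5, 20, 40]] = [3.0, -2.0, 4.0]           # sparse toy "image"
H = rng.normal(size=(80, 50)) / np.sqrt(80)      # toy measurement operator
y = H @ x_true + 0.01 * rng.normal(size=80)
x_map = map_ista(y, H, lam=0.05, step=0.2)
```

The paper's local credible intervals are then derived by probing the convex objective around this MAP point, again without sampling.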
- Proximal nested sampling with data-driven priors for physical scientists. Jason D. McEwen, Tobías I. Liaudat, Matthew A. Price, and 2 more authors. arXiv e-prints, Jun 2023
Proximal nested sampling was introduced recently to open up Bayesian model selection for high-dimensional problems such as computational imaging. The framework is suitable for models with a log-convex likelihood, which are ubiquitous in the imaging sciences. The purpose of this article is two-fold. First, we review proximal nested sampling in a pedagogical manner in an attempt to elucidate the framework for physical scientists. Second, we show how proximal nested sampling can be extended in an empirical Bayes setting to support data-driven priors, such as deep neural networks learned from training data.
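For readers new to the framework, the textbook nested sampling loop that proximal nested sampling builds on can be sketched in one dimension. This toy uses rejection sampling for the constrained draw, which only works in trivial low-dimensional settings; the proximal variant replaces that step with a proximal MCMC move suited to log-convex likelihoods. All names and numbers here are illustrative assumptions.

```python
import numpy as np

def nested_sampling_logz(loglike, n_live=100, n_iter=800, rng=None):
    """Textbook nested sampling for a U(0, 1) prior (illustrative sketch)."""
    rng = rng or np.random.default_rng(2)
    live = rng.uniform(size=n_live)          # live points drawn from the prior
    live_logl = loglike(live)
    log_z, log_x = -np.inf, 0.0              # running evidence, log prior volume
    for _ in range(n_iter):
        i = np.argmin(live_logl)             # worst point sets the likelihood shell
        log_x_new = log_x - 1.0 / n_live     # expected volume shrinkage per step
        log_w = np.log(np.exp(log_x) - np.exp(log_x_new)) + live_logl[i]
        log_z = np.logaddexp(log_z, log_w)   # accumulate Z = sum_i L_i dX_i
        while True:                          # prior draw above the shell; proximal
            cand = rng.uniform()             # nested sampling swaps this rejection
            cand_logl = loglike(cand)        # step for a proximal MCMC step
            if cand_logl > live_logl[i]:
                break
        live[i], live_logl[i] = cand, cand_logl
        log_x = log_x_new
    return log_z

sigma = 0.2
loglike = lambda t: -0.5 * (t / sigma) ** 2
log_z = nested_sampling_logz(loglike)
# Analytic check: Z = sigma * sqrt(pi/2) * erf(1 / (sigma * sqrt(2))) ~ 0.2507
```

The evidence estimate carries a stochastic error of order sqrt(H / n_live) in log Z, which is why the analytic comparison above is only approximate.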
- Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies. Tobias Liaudat, Jean-Luc Starck, and Martin Kilbinger. Frontiers in Astronomy and Space Sciences, Jun 2023
The accurate modelling of the point spread function (PSF) is of paramount importance in astronomical observations, as it allows for the correction of distortions and blurring caused by the telescope and atmosphere. PSF modelling is crucial for accurately measuring celestial objects’ properties. The last decades have brought us a steady increase in the power and complexity of astronomical telescopes and instruments. Upcoming galaxy surveys like Euclid and the Legacy Survey of Space and Time (LSST) will observe an unprecedented amount and quality of data. Modelling the PSF for these new facilities and surveys requires novel modelling techniques that can cope with the ever-tightening error requirements. The purpose of this review is threefold. Firstly, we introduce the optical background required for a more physically motivated PSF modelling and propose an observational model that can be reused for future developments. Secondly, we provide an overview of the different physical contributors to the PSF, which include the optic- and detector-level contributors and the atmosphere. We expect that this overview will help the reader better understand the modelled effects. Thirdly, we discuss the different methods for PSF modelling from the parametric and non-parametric families for ground- and space-based telescopes, with their advantages and limitations. Validation methods for PSF models are then addressed, with several metrics related to weak-lensing studies discussed in detail. Finally, we explore current challenges and future directions in PSF modelling for astronomical telescopes.
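The observational model discussed in the review is, in essence, a degradation chain: a star samples the PSF field, the detector integrates it into coarser (undersampled) pixels, and noise is added. A minimal sketch of that chain, with toy sizes and a Gaussian PSF standing in for a real one:

```python
import numpy as np

def observe_star(psf_fine, upfactor=2, noise_sigma=0.01, rng=None):
    """Toy star-observation model: pixel integration (undersampling) + noise.
    Wavelength integration over the passband is omitted for brevity."""
    rng = rng or np.random.default_rng(3)
    n = psf_fine.shape[0] // upfactor
    # integrate the well-sampled PSF into coarse detector pixels
    coarse = psf_fine[:n * upfactor, :n * upfactor] \
        .reshape(n, upfactor, n, upfactor).sum(axis=(1, 3))
    return coarse + noise_sigma * rng.normal(size=coarse.shape)

# Well-sampled toy Gaussian PSF on a fine grid
yy, xx = np.mgrid[-16:16, -16:16]
psf_fine = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf_fine /= psf_fine.sum()
obs = observe_star(psf_fine, upfactor=2, noise_sigma=0.0)
print(obs.shape, obs.sum())  # (16, 16); flux is preserved by pixel integration
```

PSF modelling is the inverse problem: recovering the well-sampled `psf_fine` field across the field of view from many such degraded `obs` stamps.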
- COSMOS-Web: An Overview of the JWST Cosmic Origins Survey. Caitlin M. Casey, Jeyhan S. Kartaltepe, Nicole E. Drakos, and 83 more authors. The Astrophysical Journal, Aug 2023
We present the survey design, implementation, and outlook for COSMOS-Web, a 255 hr treasury program conducted by the James Webb Space Telescope in its first cycle of observations. COSMOS-Web is a contiguous 0.54 deg² NIRCam imaging survey in four filters (F115W, F150W, F277W, and F444W) that will reach 5σ point-source depths ranging from 27.5 to 28.2 mag. In parallel, we will obtain 0.19 deg² of MIRI imaging in one filter (F770W) reaching 5σ point-source depths of 25.3–26.0 mag. COSMOS-Web will build on the rich heritage of multiwavelength observations and data products available in the COSMOS field. The design of COSMOS-Web is motivated by three primary science goals: (1) to discover thousands of galaxies in the Epoch of Reionization (6 ≲ z ≲ 11) and map reionization’s spatial distribution, environments, and drivers on scales sufficiently large to mitigate cosmic variance, (2) to identify hundreds of rare quiescent galaxies at z > 4 and place constraints on the formation of the universe’s most massive galaxies (M⋆ > 10¹⁰ M⊙), and (3) to directly measure the evolution of the stellar-mass-to-halo-mass relation using weak gravitational lensing out to z ∼ 2.5 and measure its variance with galaxies’ star formation histories and morphologies. In addition, we anticipate COSMOS-Web’s legacy value to reach far beyond these scientific goals, touching many other areas of astrophysics, such as the identification of the first direct collapse black hole candidates, ultracool subdwarf stars in the Galactic halo, and possibly the identification of z > 10 pair-instability supernovae. In this paper we provide an overview of the survey’s key measurements, specifications, goals, and prospects for new discovery.
- Rethinking data-driven point spread function modeling with a differentiable optical model. Tobias Liaudat, Jean-Luc Starck, Martin Kilbinger, and 1 more author. Inverse Problems, Feb 2023
In astronomy, upcoming space telescopes with wide-field optical instruments have a spatially varying point spread function (PSF). Specific scientific goals require a high-fidelity estimation of the PSF at target positions where no direct measurement of the PSF is provided. Even though observations of the PSF are available at some positions of the field of view (FOV), they are undersampled, noisy, and integrated over wavelength in the instrument’s passband. PSF modeling represents a challenging ill-posed problem, as it requires building a model from these observations that can infer a super-resolved PSF at any wavelength and position in the FOV. Current data-driven PSF models can tackle spatial variations and super-resolution. However, they are not capable of capturing PSF chromatic variations. Our model, coined WaveDiff, proposes a paradigm shift in the data-driven modeling of the point spread function field of telescopes. We change the data-driven modeling space from the pixels to the wavefront by adding a differentiable optical forward model into the modeling framework. This change allows the transfer of a great deal of complexity from the instrumental response into the forward model. The proposed model relies on efficient automatic differentiation technology and modern stochastic first-order optimization techniques recently developed by the thriving machine-learning community. Our framework paves the way to building powerful, physically motivated models that do not require special calibration data. This paper demonstrates the WaveDiff model in a simplified setting of a space telescope. The proposed framework represents a performance breakthrough with respect to the existing state-of-the-art data-driven approach. The pixel reconstruction errors decrease six-fold at observation resolution and 44-fold for a 3x super-resolution. The ellipticity errors are reduced at least 20 times, and the size error is reduced more than 250 times.
By only using noisy broad-band in-focus observations, we successfully capture the PSF chromatic variations due to diffraction. WaveDiff source code and examples associated with this paper are available at this link.
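The wavefront-space idea can be illustrated by the diffraction step that maps a wavefront error map to a monochromatic PSF. This is a generic Fourier-optics sketch, not WaveDiff's implementation: the sampling, scaling conventions, and the simple tilt-like aberration are all assumptions for demonstration.

```python
import numpy as np

def monochromatic_psf(wfe, pupil, wavelength, ref_wavelength):
    """Diffraction PSF from a wavefront error map (Fourier-optics sketch).
    wfe: wavefront error in units of ref_wavelength."""
    phase = 2.0 * np.pi * wfe * ref_wavelength / wavelength
    field = pupil * np.exp(1j * phase)
    amp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    psf = np.abs(amp) ** 2
    return psf / psf.sum()

n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] / (n / 2)
pupil = ((xx**2 + yy**2) <= 1.0).astype(float)
wfe = 0.5 * xx * pupil                       # a tilt-like aberration, in waves at 550 nm
psf_550 = monochromatic_psf(wfe, pupil, 550e-9, 550e-9)
psf_900 = monochromatic_psf(wfe, pupil, 900e-9, 550e-9)
# The same physical wavefront yields different PSFs at different wavelengths:
# this diffraction-driven chromatic variation is what pixel-space models miss.
```

A model fitted in wavefront space gets this chromatic dependence for free, which is why broad-band in-focus observations suffice to constrain it.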
2022
- ShapePipe: A modular weak-lensing processing and analysis pipeline. Farrens, S., Guinot, A., Kilbinger, M., and 8 more authors. A&A, Feb 2022
We present the first public release of ShapePipe, an open-source and modular weak-lensing measurement, analysis, and validation pipeline written in Python. We describe the design of the software and justify the choices made. We provide a brief description of all the modules currently available and summarise how the pipeline has been applied to real Ultraviolet Near-Infrared Optical Northern Survey data. Finally, we mention plans for future applications and development. The code and accompanying documentation are publicly available on GitHub.
- ShapePipe: A new shape measurement pipeline and weak-lensing application to UNIONS/CFIS data. Guinot, Axel, Kilbinger, Martin, Farrens, Samuel, and 17 more authors. A&A, Feb 2022
UNIONS is an ongoing collaboration that will provide the largest deep photometric survey of the Northern sky in four optical bands to date. As part of this collaboration, CFIS is taking r-band data with an average seeing of 0.65 arcsec, which is complete to magnitude 24.5 and thus ideal for weak-lensing studies. We perform the first weak-lensing analysis of CFIS r-band data over an area spanning 1700 deg² of the sky. We create a catalogue with measured shapes for 40 million galaxies, corresponding to an effective density of 6.8 galaxies per square arcminute, and demonstrate a low level of systematic biases. This work serves as the basis for further cosmological studies using the full UNIONS survey of 4800 deg² when completed. Here we present ShapePipe, a newly developed weak-lensing pipeline. This pipeline makes use of state-of-the-art methods such as Ngmix for accurate galaxy shape measurement. Shear calibration is performed with metacalibration. We carry out extensive validation tests on the Point Spread Function (PSF), and on the galaxy shapes. In addition, we create realistic image simulations to validate the estimated shear. We quantify the PSF model accuracy and show that the level of systematics is low as measured by the PSF residuals. Their effect on the shear two-point correlation function is sub-dominant compared to the cosmological contribution on angular scales <100 arcmin. The additive shear bias is below 5 × 10⁻⁴, and the residual multiplicative shear bias is at most 10⁻³ as measured on image simulations. Using COSEBIs we show that there are no significant B-modes present in second-order shear statistics. We present convergence maps and see clear correlations of the E-mode with known cluster positions. We measure the stacked tangential shear profile around Planck clusters at a significance higher than 4σ.
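The metacalibration step mentioned above estimates each galaxy's shear response by re-measuring its shape on artificially sheared versions of the image. The toy below keeps only the arithmetic of that calibration: a linear response `e = e_int + R * g` stands in for the actual image-level re-measurement (an illustrative assumption, not the ShapePipe/Ngmix implementation).

```python
import numpy as np

rng = np.random.default_rng(4)
n_gal = 20000
g_true = 0.02                                  # true shear, one component (toy)
r_true = rng.normal(0.85, 0.05, n_gal)         # unknown per-galaxy response
e_int = rng.normal(0.0, 0.25, n_gal)           # intrinsic ellipticities

def measured_e(applied_shear):
    """Toy shape measurement: linear response to the applied shear."""
    return e_int + r_true * applied_shear

dg = 0.01
e_plus = measured_e(g_true + dg)               # re-measurement on +dg sheared images
e_minus = measured_e(g_true - dg)              # re-measurement on -dg sheared images
R = np.mean((e_plus - e_minus) / (2.0 * dg))   # mean finite-difference response
g_hat = np.mean(measured_e(g_true)) / R        # calibrated shear estimate
print(g_hat)  # ~ 0.02: the multiplicative bias <R> is divided out
```

Residual multiplicative bias after this division is what the image simulations in the paper bound at the 10⁻³ level.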
- Data-driven modelling of ground-based and space-based telescope’s point spread functions. Tobias Ignacio Liaudat. PhD thesis, Université Paris-Saclay, Oct 2022 (2022UPASP118)
Gravitational lensing is the distortion of the images of distant galaxies by intervening massive objects and constitutes a powerful probe of the Large Scale Structure of our Universe. Cosmologists use weak (gravitational) lensing to study the nature of dark matter and its spatial distribution. These studies require highly accurate measurements of galaxy shapes, but the telescope’s instrumental response, or point spread function (PSF), deforms our observations. This deformation can be mistaken for weak lensing effects in the galaxy images and is thus one of the primary sources of systematic error in weak lensing science. Therefore, estimating a reliable and accurate PSF model is crucial for the success of any weak lensing mission. The PSF field can be interpreted as a convolutional kernel, varying spatially, spectrally, and temporally, that affects each of our observations of interest. The PSF model needs to cope with these variations and is constrained by specific stars in the field of view. These stars, considered point sources, provide us with degraded samples of the PSF field. The observations go through different degradations depending on the properties of the telescope, including undersampling, an integration over the instrument’s passband, and additive noise. We finally build the PSF model using these degraded observations and then use the model to infer the PSF at the position of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. The core of this thesis has been the development of new data-driven, also known as non-parametric, PSF models. We have developed a new PSF model for ground-based telescopes, coined MCCD, which can simultaneously model the entire focal plane. Consequently, MCCD has more available stars to constrain a more complex model. The method is based on a matrix factorisation scheme, sparsity, and an alternating optimisation procedure.
We have included the PSF model in a high-performance shape measurement pipeline and used it to process 3500 deg² of r-band observations from the Canada-France Imaging Survey. A shape catalogue has been produced and will soon be released. The main goal of this thesis has been to develop a data-driven PSF model that can address the challenges raised by one of the most ambitious weak lensing missions so far, the Euclid space mission. The main difficulties related to the Euclid mission are that the observations are undersampled and integrated into a single wide passband. Therefore, it is hard to recover and model the PSF chromatic variations from such observations. Our main contribution has been a new framework for data-driven PSF modelling based on a differentiable optical forward model allowing us to build a data-driven model for the wavefront. The new model coined WaveDiff is based on a matrix factorisation scheme and Zernike polynomials. The model relies on modern gradient-based methods and automatic differentiation for optimisation, which only uses noisy broad-band in-focus observations. Results show that WaveDiff can model the PSFs’ chromatic variations and handle super-resolution with high accuracy.
2021
- Multi-CCD modelling of the point spread function. Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, and 4 more authors. A&A, Oct 2021
Context. Galaxy imaging surveys observe a vast number of objects, which are ultimately affected by the instrument’s point spread function (PSF). Weak-lensing missions in particular aim to measure the shape of galaxies, and PSF effects represent a significant source of systematic errors that must be handled appropriately. This requires a high level of accuracy at the modelling stage as well as in the estimation of the PSF at galaxy positions. Aims. The goal of this work is to estimate the PSF at galaxy positions, which is also referred to as non-parametric PSF estimation, starting from a set of noisy star image observations distributed over the focal plane. To accomplish this, we need our model to precisely capture the PSF field variations over the field of view and then to recover the PSF at the chosen positions. Methods. In this paper, we propose a new method, coined Multi-CCD (MCCD) PSF modelling, which simultaneously creates a PSF field model over the entirety of the instrument’s focal plane. It allows us to capture global as well as local PSF features through the use of two complementary models that enforce different spatial constraints. Most existing non-parametric models build one model per charge-coupled device, which can lead to difficulties in capturing global ellipticity patterns. Results. We first tested our method on a realistic simulated dataset, comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA), and found that our method outperforms both of them. Then we contrasted our approach with PSFEx on real data from the Canada-France Imaging Survey, which uses the Canada-France-Hawaii Telescope. We show that our PSF model is less noisy and achieves a 22% gain on the pixel root mean square error with respect to PSFEx. Conclusions. We present and share the code for a new PSF modelling algorithm that models the PSF field over the whole focal plane and is mature enough to handle real data.
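The matrix-factorisation backbone shared by this family of models (MCCD, RCA) can be sketched as a least-squares fit of eigen-PSFs with polynomially varying spatial weights. This is a bare-bones illustration under strong assumptions (noiseless, well-sampled toy stars, no sparsity or graph constraints), not the published algorithm.

```python
import numpy as np

def spatial_basis(pos, deg=2):
    """Polynomial spatial basis over focal-plane positions in [-1, 1]^2."""
    x, y = pos[:, 0], pos[:, 1]
    return np.stack([x**i * y**j for i in range(deg + 1)
                     for j in range(deg + 1 - i)])

def fit_psf_field(Y, pos, deg=2):
    """Least-squares factorisation Y ~ S @ A(pos): columns of S are eigen-PSFs,
    A holds their polynomially varying weights (illustrative sketch)."""
    A = spatial_basis(pos, deg)
    return Y @ np.linalg.pinv(A)

def psf_at(S, pos, deg=2):
    """Evaluate the fitted PSF field at arbitrary focal-plane positions."""
    return S @ spatial_basis(pos, deg)

def star_image(x):
    """Toy star: Gaussian PSF whose width drifts linearly with position x."""
    yy, xx = np.mgrid[-10:11, -10:11]
    sigma = 2.0 + 0.5 * x
    img = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return (img / img.sum()).ravel()

rng = np.random.default_rng(6)
pos = rng.uniform(-1, 1, size=(100, 2))
Y = np.stack([star_image(p[0]) for p in pos], axis=1)   # (441 pixels, 100 stars)
S = fit_psf_field(Y, pos)
pred = psf_at(S, np.array([[0.3, -0.2]]))[:, 0]          # PSF at a galaxy position
rel_err = np.linalg.norm(pred - star_image(0.3)) / np.linalg.norm(star_image(0.3))
```

MCCD's contribution is to fit such a model jointly across all CCDs, combining a global component of this kind with local per-CCD ones.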
- Rethinking the modeling of the instrumental response of telescopes with a differentiable optical model. Tobias Liaudat, Jean-Luc Starck, Martin Kilbinger, and 1 more author. In NeurIPS 2021 Machine Learning for Physical Sciences workshop, Nov 2021
We propose a paradigm shift in the data-driven modeling of the instrumental response field of telescopes. By adding a differentiable optical forward model into the modeling framework, we change the data-driven modeling space from the pixels to the wavefront. This allows us to transfer a great deal of complexity from the instrumental response into the forward model while being able to adapt to the observations, remaining data-driven. Our framework allows a way forward to building powerful models that are physically motivated, interpretable, and that do not require special calibration data. We show that for a simplified setting of a space telescope, this framework represents a real performance breakthrough compared to existing data-driven approaches with reconstruction errors decreasing 5 fold at observation resolution and more than 10 fold for a 3x super-resolution. We successfully model chromatic variations of the instrument’s response only using noisy broad-band in-focus observations.
- Semi-Parametric Wavefront Modelling for the Point Spread Function. Tobias Liaudat, Jean-Luc Starck, and Martin Kilbinger. In 52ème Journées de Statistique de la Société Française de Statistique (SFdS), Jun 2021
We introduce a new approach to estimate the point spread function (PSF) field of an optical telescope by building a semi-parametric model of its wavefront error. This method is particularly advantageous because it does not require calibration observations to recover the wavefront error and it naturally takes into account the chromaticity of the optical system. The model is end-to-end differentiable and relies on a diffraction operator that allows us to compute monochromatic PSFs from the wavefront information.
2020
- Faster and better sparse blind source separation through mini-batch optimization. Christophe Kervazo, Tobias Liaudat, and Jérôme Bobin. Digital Signal Processing, Jun 2020
Sparse Blind Source Separation (sBSS) plays a key role in scientific domains as different as biomedical imaging, remote sensing, and astrophysics. Such fields, however, require the development of increasingly faster and more scalable BSS methods without sacrificing separation performance. To that end, we introduce in this work a new distributed sparse BSS algorithm based on a mini-batch extension of the Generalized Morphological Component Analysis (GMCA) algorithm. More precisely, it combines a robust projected alternating least-squares method with mini-batch optimization. The originality further lies in the use of a manifold-based aggregation of the asynchronously estimated mixing matrices. Numerical experiments are carried out on realistic spectroscopic spectra and highlight the ability of the proposed distributed GMCA (dGMCA) to provide very good separation results even when very small mini-batches are used. Quite unexpectedly, the algorithm can even outperform the (non-distributed) state-of-the-art methods for highly sparse sources.
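The alternating structure underlying GMCA-style sparse BSS, restricted to mini-batches of data columns, can be sketched as below. This is a simplified illustration under toy assumptions (noiseless data, fixed threshold, no decreasing-threshold schedule) and it omits the paper's key ingredient, the manifold-based aggregation of the per-batch mixing matrices.

```python
import numpy as np

def soft(u, lam):
    """Soft thresholding: the proximal operator enforcing source sparsity."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def minibatch_sparse_bss(X, n_src, lam=0.05, batch=64, n_iter=200, rng=None):
    """Mini-batch sparse BSS sketch in the spirit of GMCA/dGMCA:
    alternating least squares + soft thresholding on random column batches."""
    rng = rng or np.random.default_rng(7)
    n_obs, n_samp = X.shape
    A = rng.normal(size=(n_obs, n_src))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        cols = rng.choice(n_samp, size=batch, replace=False)
        Xb = X[:, cols]
        Sb = soft(np.linalg.pinv(A) @ Xb, lam)     # sparse source update
        if np.allclose(Sb, 0.0):
            continue                               # skip degenerate batches
        A = Xb @ np.linalg.pinv(Sb)                # mixing-matrix update on the batch
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)
    S = soft(np.linalg.pinv(A) @ X, lam)           # final full-data source estimate
    return A, S

# Toy problem: 2 sparse sources observed through 4 noiseless channels
rng = np.random.default_rng(8)
S_true = rng.normal(size=(2, 2000)) * (rng.random((2, 2000)) < 0.05)
A_true = rng.normal(size=(4, 2))
A_true /= np.linalg.norm(A_true, axis=0)
X = A_true @ S_true
A_est, S_est = minibatch_sparse_bss(X, n_src=2)
rel_res = np.linalg.norm(X - A_est @ S_est) / np.linalg.norm(X)
```

In the distributed setting each worker runs such batch updates asynchronously, and the mixing-matrix estimates are merged with a Riemannian mean rather than the last-batch overwrite used here.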
2019
- Distributed sparse BSS for large-scale datasets. Tobias Liaudat, Jérôme Bobin, and Christophe Kervazo. In 2019 SPARS conference proceedings, Apr 2019
Blind Source Separation (BSS) [1] is widely used to analyze multichannel data stemming from domains as varied as astrophysics and medicine. However, existing methods do not efficiently handle very large datasets. In this work, we propose a new method coined DGMCA (Distributed Generalized Morphological Component Analysis) in which the original BSS problem is decomposed into subproblems that can be tackled in parallel, alleviating the large-scale issue. We propose to use the RCM (Riemannian Center of Mass [6][7]) to aggregate, during the iterative process, the estimates yielded by the different subproblems. The approach is made robust both by a careful choice of the RCM weights and by adapting the heuristic parameter choice proposed in [4] to the parallel framework. The results show that the proposed approach can handle large-scale problems with a linear acceleration, performing at the same level as GMCA while maintaining an automatic choice of parameters.