Publications
Publications in reverse chronological order.
2024
- Uncertainty quantification for fast reconstruction methods using augmented equivariant bootstrap: Application to radio interferometry. Mostafa Cherif, Tobías I. Liaudat, Jonathan Kern, and 2 more authors. arXiv e-prints, Oct 2024.
The advent of next-generation radio interferometers like the Square Kilometer Array promises to revolutionise our radio astronomy observational capabilities. The unprecedented volume of data these devices generate requires fast and accurate image reconstruction algorithms to solve the ill-posed radio interferometric imaging problem. Most state-of-the-art reconstruction methods lack trustworthy and scalable uncertainty quantification, which is critical for the rigorous scientific interpretation of radio observations. We propose an unsupervised technique based on a conformalized version of a radio-augmented equivariant bootstrapping method, which allows us to quantify uncertainties for fast reconstruction methods. Notably, we rely on reconstructions from ultra-fast unrolled algorithms. The proposed method brings more reliable uncertainty estimations to our problem than existing alternatives.
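To illustrate the core idea behind the equivariant bootstrap this paper builds on, here is a minimal sketch (not the authors' code): transform a fast reconstruction with random symmetries, re-simulate measurements, re-reconstruct, and undo the transform; the spread of the resulting samples approximates the estimator's error. The names reconstruct, forward_op, and noise_std are hypothetical placeholders, and the paper's radio-specific augmentations and conformal calibration step are omitted.

```python
# Minimal equivariant-bootstrap sketch (illustrative only, not the paper's algorithm).
# `reconstruct`, `forward_op`, and `noise_std` are hypothetical placeholders for a fast
# reconstruction method, the measurement operator, and the noise level.
import numpy as np

def equivariant_bootstrap(y, reconstruct, forward_op, noise_std, n_boot=100, seed=None):
    """Return bootstrap reconstructions whose spread approximates the estimator error."""
    rng = np.random.default_rng(seed)
    x_hat = reconstruct(y)                      # fast point estimate from the data
    samples = []
    for _ in range(n_boot):
        k = rng.integers(4)                     # random symmetry: rotation by k*90 degrees...
        flip = bool(rng.integers(2))            # ...optionally followed by a left-right flip
        x_t = np.rot90(x_hat, k)
        if flip:
            x_t = np.fliplr(x_t)
        # Re-simulate data from the transformed scene (real-valued noise kept for simplicity).
        y_t = forward_op(x_t) + noise_std * rng.standard_normal(y.shape)
        x_rec = reconstruct(y_t)
        if flip:                                # undo the transform, in reverse order
            x_rec = np.fliplr(x_rec)
        samples.append(np.rot90(x_rec, -k))
    return np.stack(samples)                    # e.g. np.percentile(samples, [16, 84], axis=0)
```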
- Generative modelling for mass-mapping with fast uncertainty quantification. Jessica J. Whitney, Tobías I. Liaudat, Matthew A. Price, and 2 more authors. arXiv e-prints, Oct 2024.
Understanding the nature of dark matter in the Universe is an important goal of modern cosmology. A key method for probing this distribution is via weak gravitational lensing mass-mapping - a challenging ill-posed inverse problem where one infers the convergence field from observed shear measurements. Upcoming stage IV surveys, such as those made by the Vera C. Rubin Observatory and Euclid satellite, will provide a greater quantity and precision of data for lensing analyses, necessitating high-fidelity mass-mapping methods that are computationally efficient and that also provide uncertainties for integration into downstream cosmological analyses. In this work we introduce MMGAN, a novel mass-mapping method based on a regularised conditional generative adversarial network (GAN) framework, which generates approximate posterior samples of the convergence field given shear data. We adopt Wasserstein GANs to improve training stability and apply regularisation techniques to overcome mode collapse, issues that otherwise are particularly acute for conditional GANs. We train and validate our model on a mock COSMOS-style dataset before applying it to true COSMOS survey data. Our approach significantly outperforms the Kaiser-Squires technique and achieves similar reconstruction fidelity as alternative state-of-the-art deep learning approaches. Notably, while alternative approaches for generating samples from a learned posterior are slow (e.g. requiring 10 GPU minutes per posterior sample), MMGAN can produce a high-quality convergence sample in less than a second.
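As a rough guide to the training objective described above, a conditional Wasserstein-GAN loss with a gradient-penalty term (one common way of enforcing the Lipschitz constraint; the exact regularisation used by MMGAN may differ) reads

$$
\min_{G}\,\max_{D}\;
\mathbb{E}_{(\kappa,\gamma)\sim p_{\rm data}}\big[D(\kappa,\gamma)\big]
-\mathbb{E}_{z\sim p_z}\big[D\big(G(z,\gamma),\gamma\big)\big]
-\lambda\,\mathbb{E}_{\hat\kappa}\Big[\big(\lVert\nabla_{\hat\kappa}D(\hat\kappa,\gamma)\rVert_2-1\big)^2\Big],
$$

where κ is the convergence map, γ the observed shear, and z a latent vector; repeated draws of z at fixed γ then act as approximate posterior samples of κ.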
- Euclid preparation. Deep learning true galaxy morphologies for weak lensing shear bias calibration. Euclid Collaboration, B. Csizi, T. Schrabback, and 259 more authors. arXiv e-prints, Sep 2024.
To date, galaxy image simulations for weak lensing surveys usually approximate the light profiles of all galaxies as a single or double Sérsic profile, neglecting the influence of galaxy substructures and morphologies deviating from such a simplified parametric characterization. While this approximation may be sufficient for previous data sets, the stringent cosmic shear calibration requirements and the high quality of the data in the upcoming Euclid survey demand a consideration of the effects that realistic galaxy substructures have on shear measurement biases. Here we present a novel deep learning-based method to create such simulated galaxies directly from HST data. We first build and validate a convolutional neural network based on the wavelet scattering transform to learn noise-free representations independent of the point-spread function of HST galaxy images that can be injected into simulations of images from Euclid’s optical instrument VIS without introducing noise correlations during PSF convolution or shearing. Then, we demonstrate the generation of new galaxy images by sampling from the model randomly and conditionally. Next, we quantify the cosmic shear bias from complex galaxy shapes in Euclid-like simulations by comparing the shear measurement biases between a sample of model objects and their best-fit double-Sérsic counterparts. Using the KSB shape measurement algorithm, we find a multiplicative bias difference between these branches with realistic morphologies and parametric profiles on the order of 6.9×10^-3 for a realistic magnitude-Sérsic index distribution. Moreover, we find clear detection bias differences between full image scenes simulated with parametric and realistic galaxies, leading to a bias difference of 4.0×10^-3 independent of the shape measurement method. This makes it relevant for stage IV weak lensing surveys such as Euclid.
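For reference, the multiplicative and additive biases quoted in this abstract follow the standard weak-lensing convention relating observed and true shear,

$$
g_i^{\rm obs} = (1 + m_i)\,g_i^{\rm true} + c_i, \qquad i = 1,2,
$$

so the quoted 6.9×10^-3 is a difference in the multiplicative term m between the realistic-morphology and double-Sérsic branches.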
- Euclid preparation. Simulations and nonlinearities beyond ΛCDM. 4. Constraints on f(R) models from the photometric primary probes. Euclid Collaboration, K. Koyama, S. Pamuk, and 275 more authors. arXiv e-prints, Sep 2024.
We study the constraint on f(R) gravity that can be obtained by photometric primary probes of the Euclid mission. Our focus is the dependence of the constraint on the theoretical modelling of the nonlinear matter power spectrum. In the Hu-Sawicki f(R) gravity model, we consider four different predictions for the ratio between the power spectrum in f(R) and that in ΛCDM: a fitting formula, the halo model reaction approach, ReACT and two emulators based on dark matter only N-body simulations, FORGE and e-Mantis. These predictions are added to the MontePython implementation to predict the angular power spectra for weak lensing (WL), photometric galaxy clustering and their cross-correlation. By running Markov Chain Monte Carlo, we compare constraints on parameters and investigate the bias of the recovered f(R) parameter if the data are created by a different model. For the pessimistic setting of WL, the one-dimensional bias for the f(R) parameter, log₁₀|f_R0|, is found to be 0.5σ when FORGE is used to create the synthetic data with log₁₀|f_R0| = −5.301 and fitted by e-Mantis. The impact of baryonic physics on WL is studied by using a baryonification emulator BCemu. For the optimistic setting, the f(R) parameter and two main baryon parameters are well constrained despite the degeneracies among these parameters. However, the difference in the nonlinear dark matter prediction can be compensated by the adjustment of baryon parameters, and the one-dimensional marginalised constraint on log₁₀|f_R0| is biased. This bias can be avoided in the pessimistic setting at the expense of weaker constraints. For the pessimistic setting, using the ΛCDM synthetic data for WL, we obtain the prior-independent upper limit of log₁₀|f_R0| < −5.6. Finally, we implement a method to include theoretical errors to avoid the bias.
- Euclid preparation. Simulations and nonlinearities beyond ΛCDM. 2. Results from non-standard simulations. Euclid Collaboration, G. Rácz, M.-A. Breton, and 275 more authors. arXiv e-prints, Sep 2024.
The Euclid mission will measure cosmological parameters with unprecedented precision. To distinguish between cosmological models, it is essential to generate realistic mock observables from cosmological simulations that were run in both the standard Λ-cold-dark-matter (ΛCDM) paradigm and in many non-standard models beyond ΛCDM. We present the scientific results from a suite of cosmological N-body simulations using non-standard models including dynamical dark energy, k-essence, interacting dark energy, modified gravity, massive neutrinos, and primordial non-Gaussianities. We investigate how these models affect the large-scale-structure formation and evolution in addition to providing synthetic observables that can be used to test and constrain these models with Euclid data. We developed a custom pipeline based on the Rockstar halo finder and the nbodykit large-scale structure toolkit to analyse the particle output of non-standard simulations and generate mock observables such as halo and void catalogues, mass density fields, and power spectra in a consistent way. We compare these observables with those from the standard ΛCDM model and quantify the deviations. We find that non-standard cosmological models can leave significant imprints on the synthetic observables that we have generated. Our results demonstrate that non-standard cosmological N-body simulations provide valuable insights into the physics of dark energy and dark matter, which is essential to maximising the scientific return of Euclid.
- Euclid preparation. Simulations and nonlinearities beyond ΛCDM. 1. Numerical methods and validation. Euclid Collaboration, J. Adamek, B. Fiorini, and 268 more authors. arXiv e-prints, Sep 2024.
To constrain models beyond ΛCDM, the development of the Euclid analysis pipeline requires simulations that capture the nonlinear phenomenology of such models. We present an overview of numerical methods and N-body simulation codes developed to study the nonlinear regime of structure formation in alternative dark energy and modified gravity theories. We review a variety of numerical techniques and approximations employed in cosmological N-body simulations to model the complex phenomenology of scenarios beyond ΛCDM. This includes discussions on solving nonlinear field equations, accounting for fifth forces, and implementing screening mechanisms. Furthermore, we conduct a code comparison exercise to assess the reliability and convergence of different simulation codes across a range of models. Our analysis demonstrates a high degree of agreement among the outputs of different simulation codes, providing confidence in current numerical methods for modelling cosmic structure formation beyond ΛCDM. We highlight recent advances made in simulating the nonlinear scales of structure formation, which are essential for leveraging the full scientific potential of the forthcoming observational data from the Euclid mission.
- Euclid preparation: Determining the weak lensing mass accuracy and precision for galaxy clusters. Euclid Collaboration, L. Ingoglia, M. Sereno, and 279 more authors. arXiv e-prints, Sep 2024.
We investigate the level of accuracy and precision of cluster weak-lensing (WL) masses measured with the Euclid data processing pipeline. We use the DEMNUni-Cov N-body simulations to assess how well the WL mass probes the true halo mass, and, then, how well WL masses can be recovered in the presence of measurement uncertainties. We consider different halo mass density models, priors, and mass point estimates. WL mass differs from true mass due to, e.g., the intrinsic ellipticity of sources, correlated or uncorrelated matter and large-scale structure, halo triaxiality and orientation, and merging or irregular morphology. In an ideal scenario without observational or measurement errors, the maximum likelihood estimator is the most accurate, with WL masses biased low by ⟨b_M⟩ = −14.6 ± 1.7% on average over the full range M_200c > 5×10^13 M_⊙ and z < 1. Due to the stabilising effect of the prior, the biweight, mean, and median estimates are more precise. The scatter decreases with increasing mass and informative priors significantly reduce the scatter. Halo mass density profiles with a truncation provide better fits to the lensing signal, while the accuracy and precision are not significantly affected. We further investigate the impact of additional sources of systematic uncertainty on the WL mass, namely the impact of photometric redshift uncertainties and source selection, the expected performance of Euclid cluster detection algorithms, and the presence of masks. Taken in isolation, we find that the largest effect is induced by non-conservative source selection. This effect can be mostly removed with a robust selection. As a final Euclid-like test, we combine systematic effects in a realistic observational setting and find results similar to the ideal case, ⟨b_M⟩ = −15.5 ± 2.4%, under a robust selection.
- Euclid preparation. L. Calibration of the linear halo bias in Λ(ν)CDM cosmologies. Euclid Collaboration, T. Castro, A. Fumagalli, and 253 more authors. arXiv e-prints, Sep 2024.
The Euclid mission, designed to map the geometry of the dark Universe, presents an unprecedented opportunity for advancing our understanding of the cosmos through its photometric galaxy cluster survey. This paper focuses on enhancing the precision of halo bias (HB) predictions, which is crucial for deriving cosmological constraints from the clustering of galaxy clusters. Our study is based on the peak-background split (PBS) model linked to the halo mass function (HMF); it extends with a parametric correction to precisely align with results from an extended set of N-body simulations carried out with the OpenGADGET3 code. Employing simulations with fixed and paired initial conditions, we meticulously analyze the matter-halo cross-spectrum and model its covariance using a large number of mock catalogs generated with Lagrangian Perturbation Theory simulations with the PINOCCHIO code. This ensures a comprehensive understanding of the uncertainties in our HB calibration. Our findings indicate that the calibrated HB model is remarkably resilient against changes in cosmological parameters including those involving massive neutrinos. The robustness and adaptability of our calibrated HB model provide an important contribution to the cosmological exploitation of the cluster surveys to be provided by the Euclid mission. This study highlights the necessity of continuously refining the calibration of cosmological tools like the HB to match the advancing quality of observational data. As we project the impact of our model on cosmological constraints, we find that, given the sensitivity of the Euclid survey, a miscalibration of the HB could introduce biases in cluster cosmology analyses. Our work fills this critical gap, ensuring the HB calibration matches the expected precision of the Euclid survey.
- Scalable Bayesian uncertainty quantification with data-driven priors for radio interferometric imaging. Tobías I. Liaudat, Matthijs Mars, Matthew A. Price, and 3 more authors. RAS Techniques and Instruments, Aug 2024.
Next-generation radio interferometers like the Square Kilometer Array have the potential to unlock scientific discoveries thanks to their unprecedented angular resolution and sensitivity. One key to unlocking their potential resides in handling the deluge and complexity of incoming data. This challenge requires building radio interferometric (RI) imaging methods that can cope with the massive data sizes and provide high-quality image reconstructions with uncertainty quantification (UQ). This work proposes a method coined quantifAI to address UQ in RI imaging with data-driven (learned) priors for high-dimensional settings. Our model, rooted in the Bayesian framework, uses a physically motivated model for the likelihood. The model exploits a data-driven convex prior potential, which can encode complex information learned implicitly from simulations and guarantee the log-concavity of the posterior. We leverage probability concentration phenomena of high-dimensional log-concave posteriors to obtain information about the posterior, avoiding MCMC sampling techniques. We rely on convex optimization methods to compute the MAP estimation, which is known to be faster and to scale better with dimension than MCMC strategies. quantifAI allows us to compute local credible intervals and perform hypothesis testing of structure on the reconstructed image. We propose a novel fast method to compute pixel-wise uncertainties at different scales, which uses three and six orders of magnitude fewer likelihood evaluations than other UQ methods like length of the credible intervals and Monte Carlo posterior sampling, respectively. We demonstrate our method by reconstructing RI images in a simulated setting and carrying out fast and scalable UQ, which we validate with MCMC sampling. Our method shows an improved image quality and more meaningful uncertainties than the benchmark method based on a sparsity-promoting prior.
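Schematically, and with illustrative notation rather than the paper's exact expressions, the model pairs a Gaussian likelihood potential with a learned convex prior potential and takes the MAP estimate as the reconstruction:

$$
p(\boldsymbol{x}\mid\boldsymbol{y}) \propto \exp\!\big(-f_{\boldsymbol{y}}(\boldsymbol{x})-g_{\theta}(\boldsymbol{x})\big),
\qquad
f_{\boldsymbol{y}}(\boldsymbol{x})=\tfrac{1}{2\sigma^2}\lVert\boldsymbol{y}-\Phi\boldsymbol{x}\rVert_2^2,
\qquad
\hat{\boldsymbol{x}}_{\rm MAP}=\operatorname*{arg\,min}_{\boldsymbol{x}}\;f_{\boldsymbol{y}}(\boldsymbol{x})+g_{\theta}(\boldsymbol{x}).
$$

Convexity of g_θ makes the posterior log-concave, so concentration results give an approximate highest-posterior-density region C_α = {x : f_y(x) + g_θ(x) ≤ γ_α}, with a conservative Pereyra-style threshold of the form γ_α ≈ f_y(x̂_MAP) + g_θ(x̂_MAP) + √(16 N log(3/α)) + N for an N-pixel image, against which hypothesis tests on image structure can be run.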
- Euclid preparation. Angular power spectra from discrete observations. Euclid Collaboration, N. Tessore, B. Joachimi, and 266 more authors. arXiv e-prints, Aug 2024.
We present the framework for measuring angular power spectra in the Euclid mission. The observables in galaxy surveys, such as galaxy clustering and cosmic shear, are not continuous fields, but discrete sets of data, obtained only at the positions of galaxies. We show how to compute the angular power spectra of such discrete data sets, without treating observations as maps of an underlying continuous field that is overlaid with a noise component. This formalism allows us to compute exact theoretical expectations for our measured spectra, under a number of assumptions that we track explicitly. In particular, we obtain exact expressions for the additive biases ("shot noise") in angular galaxy clustering and cosmic shear. For efficient practical computations, we introduce a spin-weighted spherical convolution with a well-defined convolution theorem, which allows us to apply exact theoretical predictions to finite-resolution maps, including HEALPix. When validating our methodology, we find that our measurements are biased by less than 1% of their statistical uncertainty in simulations of Euclid’s first data release.
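For orientation, in the idealised full-sky, uniform-weight limit the additive biases discussed above reduce to the familiar textbook forms (the paper's contribution is the exact discrete-data generalisation of these):

$$
N_\ell^{\rm gg}=\frac{1}{\bar n},
\qquad
N_\ell^{EE}=N_\ell^{BB}=\frac{\sigma_\epsilon^{2}}{2\bar n},
$$

with n̄ the mean galaxy density per steradian and σ_ε² the total intrinsic-ellipticity variance summed over both components.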
- Euclid preparation. Exploring the properties of proto-clusters in the Simulated Euclid Wide Survey. Euclid Collaboration, H. Böhringer, G. Chon, and 263 more authors. arXiv e-prints, Jul 2024.
Galaxy proto-clusters are receiving increased interest, since most of the processes shaping the structure of clusters of galaxies and their galaxy population are happening at early stages of their formation. The Euclid Survey will provide a unique opportunity to discover a large number of proto-clusters over a large fraction of the sky (14,500 square degrees). In this paper, we explore the expected observational properties of proto-clusters in the Euclid Wide Survey by means of theoretical models and simulations. We provide an overview of the predicted proto-cluster extent, galaxy density profiles, mass-richness relations, abundance, and sky-filling as a function of redshift. Useful analytical approximations for the functions of these properties are provided. The focus is on the redshift range z = 1.5 to 4. We discuss in particular the density contrast with which proto-clusters can be observed against the background in the galaxy distribution if photometric galaxy redshifts are used as supplied by the ESA Euclid mission together with the ground-based photometric surveys. We show that the obtainable detection significance is sufficient to find large numbers of interesting proto-cluster candidates. For quantitative studies, additional spectroscopic follow-up is required to confirm the proto-clusters and establish their richness.
- Using conditional GANs for convergence map reconstruction with uncertainties. Jessica Whitney, Tobías I. Liaudat, Matt Price, and 2 more authors. arXiv e-prints, May 2024.
Understanding the large-scale structure of the Universe and unravelling the mysteries of dark matter are fundamental challenges in contemporary cosmology. Reconstruction of the cosmological matter distribution from lensing observables, referred to as 'mass-mapping', is an important aspect of this quest. Mass-mapping is an ill-posed problem, meaning there is inherent uncertainty in any convergence map reconstruction. The demand for fast and efficient reconstruction techniques is rising as we prepare for upcoming surveys. We present a novel approach which utilises deep learning, in particular a conditional Generative Adversarial Network (cGAN), to approximate samples from a Bayesian posterior distribution, meaning they can be interpreted in a statistically robust manner. By combining data-driven priors with recent regularisation techniques, we introduce an approach that facilitates the swift generation of high-fidelity mass maps. Furthermore, to validate the effectiveness of our approach, we train the model on mock COSMOS-style data, generated using Columbia Lensing’s kappaTNG mock weak lensing suite. These preliminary results showcase compelling convergence map reconstructions and ongoing refinement efforts are underway to enhance the robustness of our method further.
- Euclid. V. The Flagship galaxy mock catalogue: a comprehensive simulation for the Euclid mission. Euclid Collaboration, F. J. Castander, P. Fosalba, and 366 more authors. arXiv e-prints, May 2024.
We present the Flagship galaxy mock, a simulated catalogue of billions of galaxies designed to support the scientific exploitation of the Euclid mission. Euclid is a medium-class mission of the European Space Agency optimised to determine the properties of dark matter and dark energy on the largest scales of the Universe. It probes structure formation over more than 10 billion years primarily from the combination of weak gravitational lensing and galaxy clustering data. The breadth of Euclid’s data will also foster a wide variety of scientific analyses. The Flagship simulation was developed to provide a realistic approximation to the galaxies that will be observed by Euclid and used in its scientific analyses. We ran a state-of-the-art N-body simulation with four trillion particles, producing a lightcone on the fly. From the dark matter particles, we produced a catalogue of 16 billion haloes in one octant of the sky in the lightcone up to redshift z=3. We then populated these haloes with mock galaxies using a halo occupation distribution and abundance matching approach, calibrating the free parameters of the galaxy mock against observed correlations and other basic galaxy properties. Modelled galaxy properties include luminosity and flux in several bands, redshifts, positions and velocities, spectral energy distributions, shapes and sizes, stellar masses, star formation rates, metallicities, emission line fluxes, and lensing properties. We selected a final sample of 3.4 billion galaxies with a magnitude cut of H_E<26, where we are complete. We have performed a comprehensive set of validation tests to check the similarity to observational data and theoretical models. In particular, our catalogue is able to closely reproduce the main characteristics of the weak lensing and galaxy clustering samples to be used in the mission’s main cosmological analysis. (abridged)
- Euclid. IV. The NISP Calibration Unit. Euclid Collaboration, F. Hormuth, K. Jahnke, and 332 more authors. arXiv e-prints, May 2024.
The near-infrared calibration unit (NI-CU) on board Euclid’s Near-Infrared Spectrometer and Photometer (NISP) is the first astronomical calibration lamp based on light-emitting diodes (LEDs) to be operated in space. Euclid is a mission in ESA’s Cosmic Vision 2015-2025 framework, to explore the dark universe and provide a next-level characterisation of the nature of gravitation, dark matter, and dark energy. Calibrating photometric and spectrometric measurements of galaxies to better than 1.5% accuracy in a survey homogeneously mapping 14000 deg^2 of extragalactic sky requires a very detailed characterisation of near-infrared (NIR) detector properties, as well as their constant monitoring in flight. To cover two of the main contributions - relative pixel-to-pixel sensitivity and non-linearity characteristics - as well as support other calibration activities, NI-CU was designed to provide spatially approximately homogeneous (<12% variations) and temporally stable illumination (0.1%-0.2% over 1200s) over the NISP detector plane, with minimal power consumption and energy dissipation. NI-CU covers the spectral range [900,1900] nm - at cryo-operating temperature - at 5 fixed independent wavelengths to capture wavelength-dependent behaviour of the detectors, with fluence over a dynamic range of >=100 from 15 ph s^-1 pixel^-1 to >1500 ph s^-1 pixel^-1. For this functionality, NI-CU is based on LEDs. We describe the rationale behind the decision and design process, describe the challenges in sourcing the right LEDs, as well as the qualification process and lessons learned. We also provide a description of the completed NI-CU, its capabilities and performance as well as its limits. NI-CU has been integrated into NISP and the Euclid satellite, and since Euclid’s launch in July 2023 has started supporting survey operations.
- Euclid. III. The NISP Instrument. Euclid Collaboration, K. Jahnke, W. Gillard, and 434 more authors. arXiv e-prints, May 2024.
The Near-Infrared Spectrometer and Photometer (NISP) on board the Euclid satellite provides multiband photometry and R>=450 slitless grism spectroscopy in the 950-2020 nm wavelength range. In this reference article we illuminate the background of NISP’s functional and calibration requirements, describe the instrument’s integral components, and provide all its key properties. We also sketch the processes needed to understand how NISP operates and is calibrated, and its technical potentials and limitations. Links to articles providing more details and technical background are included. NISP’s 16 HAWAII-2RG (H2RG) detectors with a plate scale of 0.3" pix^-1 deliver a field-of-view of 0.57 deg^2. In photo mode, NISP reaches a limiting magnitude of 24.5AB mag in three photometric exposures of about 100s exposure time, for point sources and with a signal-to-noise ratio (SNR) of 5. For spectroscopy, NISP’s point-source sensitivity is a SNR = 3.5 detection of an emission line with flux 2×10^-16 erg/s/cm^2 integrated over two resolution elements of 13.4 Å, in 3×560 s grism exposures at 1.6 µm (redshifted Hα). Our calibration includes on-ground and in-flight characterisation and monitoring of detector baseline, dark current, non-linearity, and sensitivity, to guarantee a relative photometric accuracy of better than 1.5%, and relative spectrophotometry to better than 0.7%. The wavelength calibration must be better than 5 Å. NISP is the state-of-the-art instrument in the NIR for all science beyond small areas available from HST and JWST - and an enormous advance due to its combination of field size and high throughput of telescope and instrument. During Euclid’s 6-year survey covering 14000 deg^2 of extragalactic sky, NISP will be the backbone for determining distances of more than a billion galaxies. Its NIR data will become a rich reference imaging and spectroscopy data set for the coming decades.
- Euclid. II. The VIS Instrument. Euclid Collaboration, M. Cropper, A. Al-Bahlawan, and 425 more authors. arXiv e-prints, May 2024.
This paper presents the specification, design, and development of the Visible Camera (VIS) on the ESA Euclid mission. VIS is a large optical-band imager with a field of view of 0.54 deg^2 sampled at 0.1" with an array of 609 Megapixels and spatial resolution of 0.18". It will be used to survey approximately 14,000 deg^2 of extragalactic sky to measure the distortion of galaxies in the redshift range z=0.1-1.5 resulting from weak gravitational lensing, one of the two principal cosmology probes of Euclid. With photometric redshifts, the distribution of dark matter can be mapped in three dimensions, and, from how this has changed with look-back time, the nature of dark energy and theories of gravity can be constrained. The entire VIS focal plane will be transmitted to provide the largest images of the Universe from space to date, reaching m_AB>24.5 with S/N >10 in a single broad I_E (r+i+z) band over a six year survey. The particularly challenging aspects of the instrument are the control and calibration of observational biases, which lead to stringent performance requirements and calibration regimes. With its combination of spatial resolution, calibration knowledge, depth, and area covering most of the extra-Galactic sky, VIS will also provide a legacy data set for many other fields. This paper discusses the rationale behind the VIS concept and describes the instrument design and development before reporting the pre-launch performance derived from ground calibrations and brief results from the in-orbit commissioning. VIS should reach fainter than m_AB=25 with S/N>10 for galaxies of full-width half-maximum of 0.3" in a 1.3" diameter aperture over the Wide Survey, and m_AB>26.4 for a Deep Survey that will cover more than 50 deg^2. The paper also describes how VIS works with the other Euclid components of survey, telescope, and science data processing to extract the cosmological information.
- Euclid. I. Overview of the Euclid mission. Euclid Collaboration, Y. Mellier, Abdurro’uf, and 1108 more authors. arXiv e-prints, May 2024.
The current standard model of cosmology successfully describes a variety of measurements, but the nature of its main ingredients, dark matter and dark energy, remains unknown. Euclid is a medium-class mission in the Cosmic Vision 2015-2025 programme of the European Space Agency (ESA) that will provide high-resolution optical imaging, as well as near-infrared imaging and spectroscopy, over about 14,000 deg^2 of extragalactic sky. In addition to accurate weak lensing and clustering measurements that probe structure formation over half of the age of the Universe, its primary probes for cosmology, these exquisite data will enable a wide range of science. This paper provides a high-level overview of the mission, summarising the survey characteristics, the various data-processing steps, and data products. We also highlight the main science objectives and expected performance.
- Black-Hole-to-Halo Mass Relation From UNIONS Weak Lensing. Qinxun Li, Martin Kilbinger, Wentao Luo, and 17 more authors. arXiv e-prints, Feb 2024.
This letter presents, for the first time, direct constraints on the black-hole-to-halo-mass relation using weak gravitational lensing measurements. We construct type I and type II Active Galactic Nuclei (AGNs) samples from the Sloan Digital Sky Survey (SDSS), with a mean redshift of 0.4 (0.1) for type I (type II) AGNs. This sample is cross-correlated with weak lensing shear from the Ultraviolet Near Infrared Optical Northern Survey (UNIONS). We compute the excess surface mass density of the halos associated with 36,181 AGNs from 94,308,561 lensed galaxies and fit the halo mass in bins of black-hole mass. We find that more massive AGNs reside in more massive halos. We see no evidence of dependence on AGN type or redshift in the black-hole-to-halo-mass relationship when systematic errors in the measured black-hole masses are included. Our results are consistent with previous measurements for non-AGN galaxies. At a fixed black-hole mass, our weak-lensing halo masses are consistent with galaxy rotation curves, but significantly lower than galaxy clustering measurements. Finally, our results are broadly consistent with state-of-the-art hydrodynamical cosmological simulations, providing a new constraint for black-hole masses in simulations.
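The lensing observable behind these halo-mass fits is the standard excess surface mass density of galaxy-galaxy lensing,

$$
\Delta\Sigma(R)=\bar\Sigma(<R)-\Sigma(R)=\Sigma_{\rm cr}\,\gamma_{\rm t}(R),
\qquad
\Sigma_{\rm cr}=\frac{c^{2}}{4\pi G}\,\frac{D_{\rm s}}{D_{\rm l}\,D_{\rm ls}},
$$

where γ_t is the tangential shear and D_l, D_s, D_ls are the angular-diameter distances to the lens, to the source, and between them; fitting ΔΣ(R) with a halo profile in bins of black-hole mass yields the halo masses discussed above.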
2023
- Proximal Nested Sampling with Data-Driven Priors for Physical Scientists. Jason D. McEwen, Tobías I. Liaudat, Matthew A. Price, and 2 more authors. Physical Sciences Forum, Jun 2023.
Proximal nested sampling was introduced recently to open up Bayesian model selection for high-dimensional problems such as computational imaging. The framework is suitable for models with a log-convex likelihood, which are ubiquitous in the imaging sciences. The purpose of this article is two-fold. First, we review proximal nested sampling in a pedagogical manner in an attempt to elucidate the framework for physical scientists. Second, we show how proximal nested sampling can be extended in an empirical Bayes setting to support data-driven priors, such as deep neural networks learned from training data.
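For readers new to the framework, the quantity at stake in Bayesian model selection is the evidence, which nested sampling rewrites as a one-dimensional integral over the enclosed prior volume; proximal nested sampling carries out the likelihood-constrained sampling steps with proximity operators, which is what keeps the log-convex, high-dimensional case tractable:

$$
\mathcal{Z}=\int \mathcal{L}(\boldsymbol{x})\,\pi(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}
=\int_{0}^{1}\mathcal{L}(X)\,\mathrm{d}X,
\qquad
X(\lambda)=\int_{\mathcal{L}(\boldsymbol{x})>\lambda}\pi(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x},
$$

with L(X) the inverse of the prior-volume function X(λ).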
- Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies. Tobias Liaudat, Jean-Luc Starck, and Martin Kilbinger. Frontiers in Astronomy and Space Sciences, Jun 2023.
The accurate modelling of the point spread function (PSF) is of paramount importance in astronomical observations, as it allows for the correction of distortions and blurring caused by the telescope and atmosphere. PSF modelling is crucial for accurately measuring celestial objects’ properties. The last decades have brought us a steady increase in the power and complexity of astronomical telescopes and instruments. Upcoming galaxy surveys like Euclid and Legacy Survey of Space and Time (LSST) will observe an unprecedented amount and quality of data. Modelling the PSF for these new facilities and surveys requires novel modelling techniques that can cope with the ever-tightening error requirements. The purpose of this review is threefold. Firstly, we introduce the optical background required for a more physically motivated PSF modelling and propose an observational model that can be reused for future developments. Secondly, we provide an overview of the different physical contributors of the PSF, which includes the optic- and detector-level contributors and atmosphere. We expect that the overview will help better understand the modelled effects. Thirdly, we discuss the different methods for PSF modelling from the parametric and non-parametric families for ground- and space-based telescopes, with their advantages and limitations. Validation methods for PSF models are then addressed, with several metrics related to weak-lensing studies discussed in detail. Finally, we explore current challenges and future directions in PSF modelling for astronomical telescopes.
- COSMOS-Web: An Overview of the JWST Cosmic Origins Survey. Caitlin M. Casey, Jeyhan S. Kartaltepe, Nicole E. Drakos, and 83 more authors. The Astrophysical Journal, Aug 2023.
We present the survey design, implementation, and outlook for COSMOS-Web, a 255 hr treasury program conducted by the James Webb Space Telescope in its first cycle of observations. COSMOS-Web is a contiguous 0.54 deg^2 NIRCam imaging survey in four filters (F115W, F150W, F277W, and F444W) that will reach 5σ point-source depths ranging 27.5–28.2 mag. In parallel, we will obtain 0.19 deg^2 of MIRI imaging in one filter (F770W) reaching 5σ point-source depths of 25.3–26.0 mag. COSMOS-Web will build on the rich heritage of multiwavelength observations and data products available in the COSMOS field. The design of COSMOS-Web is motivated by three primary science goals: (1) to discover thousands of galaxies in the Epoch of Reionization (6 ≲ z ≲ 11) and map reionization’s spatial distribution, environments, and drivers on scales sufficiently large to mitigate cosmic variance, (2) to identify hundreds of rare quiescent galaxies at z > 4 and place constraints on the formation of the universe’s most-massive galaxies (M_⋆ > 10^10 M_⊙), and (3) directly measure the evolution of the stellar-mass-to-halo-mass relation using weak gravitational lensing out to z ∼ 2.5 and measure its variance with galaxies’ star formation histories and morphologies. In addition, we anticipate COSMOS-Web’s legacy value to reach far beyond these scientific goals, touching many other areas of astrophysics, such as the identification of the first direct collapse black hole candidates, ultracool subdwarf stars in the Galactic halo, and possibly the identification of z > 10 pair-instability supernovae. In this paper we provide an overview of the survey’s key measurements, specifications, goals, and prospects for new discovery.
- Rethinking data-driven point spread function modeling with a differentiable optical model. Tobias Liaudat, Jean-Luc Starck, Martin Kilbinger, and 1 more author. Inverse Problems, Feb 2023.
In astronomy, upcoming space telescopes with wide-field optical instruments have a spatially varying point spread function (PSF). Specific scientific goals require a high-fidelity estimation of the PSF at target positions where no direct measurement of the PSF is provided. Even though observations of the PSF are available at some positions of the field of view (FOV), they are undersampled, noisy, and integrated into wavelength in the instrument’s passband. PSF modeling represents a challenging ill-posed problem, as it requires building a model from these observations that can infer a super-resolved PSF at any wavelength and position in the FOV. Current data-driven PSF models can tackle spatial variations and super-resolution. However, they are not capable of capturing PSF chromatic variations. Our model, coined WaveDiff, proposes a paradigm shift in the data-driven modeling of the point spread function field of telescopes. We change the data-driven modeling space from the pixels to the wavefront by adding a differentiable optical forward model into the modeling framework. This change allows the transfer of a great deal of complexity from the instrumental response into the forward model. The proposed model relies on efficient automatic differentiation technology and modern stochastic first-order optimization techniques recently developed by the thriving machine-learning community. Our framework paves the way to building powerful, physically motivated models that do not require special calibration data. This paper demonstrates the WaveDiff model in a simplified setting of a space telescope. The proposed framework represents a performance breakthrough with respect to the existing state-of-the-art data-driven approach. The pixel reconstruction errors decrease six-fold at observation resolution and 44-fold for a 3x super-resolution. The ellipticity errors are reduced at least 20 times, and the size error is reduced more than 250 times. By only using noisy broad-band in-focus observations, we successfully capture the PSF chromatic variations due to diffraction. WaveDiff source code and examples associated with this paper are available at this link.
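The sketch below illustrates, in plain NumPy, the kind of differentiable optical forward model the paper describes: a wavefront error map is turned into a monochromatic PSF by Fraunhofer diffraction, and a passband-integrated PSF is an SED-weighted sum of monochromatic PSFs. It is a simplified stand-in, not the WaveDiff implementation; array sizes, the defocus-like toy wavefront, and the SED weights are illustrative assumptions.

```python
# Minimal optical forward-model sketch (not the WaveDiff code).
import numpy as np

def monochromatic_psf(wfe_um, pupil_mask, wavelength_um, oversampling=3):
    """Monochromatic PSF from a wavefront error map via Fraunhofer diffraction."""
    n = wfe_um.shape[0]
    phase = 2.0 * np.pi * wfe_um / wavelength_um           # optical path error -> phase
    pupil = pupil_mask * np.exp(1j * phase)                 # generalised pupil function
    padded = np.zeros((oversampling * n, oversampling * n), dtype=complex)
    padded[:n, :n] = pupil                                  # zero-padding sets the pixel scale
    field = np.fft.fftshift(np.fft.fft2(padded))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

def polychromatic_psf(wfe_um, pupil_mask, wavelengths_um, sed_weights):
    """Passband-integrated PSF: SED-weighted sum of monochromatic PSFs."""
    w = np.asarray(sed_weights, dtype=float)
    w /= w.sum()
    return sum(wi * monochromatic_psf(wfe_um, pupil_mask, lam)
               for wi, lam in zip(w, wavelengths_um))

# Toy usage: circular pupil with a small defocus-like wavefront error (in microns).
n = 64
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
pupil = (xx**2 + yy**2 <= 1.0).astype(float)
wfe = 0.05 * (2 * (xx**2 + yy**2) - 1) * pupil
psf = polychromatic_psf(wfe, pupil, wavelengths_um=[0.55, 0.70, 0.85],
                        sed_weights=[1.0, 0.8, 0.6])
```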
2022
- ShapePipe: A modular weak-lensing processing and analysis pipeline. S. Farrens, A. Guinot, M. Kilbinger, and 8 more authors. A&A, Feb 2022.
We present the first public release of ShapePipe, an open-source and modular weak-lensing measurement, analysis, and validation pipeline written in Python. We describe the design of the software and justify the choices made. We provide a brief description of all the modules currently available and summarise how the pipeline has been applied to real Ultraviolet Near-Infrared Optical Northern Survey data. Finally, we mention plans for future applications and development. The code and accompanying documentation are publicly available on GitHub.
- ShapePipe: A new shape measurement pipeline and weak-lensing application to UNIONS/CFIS data. Axel Guinot, Martin Kilbinger, Samuel Farrens, and 17 more authors. A&A, Feb 2022.
UNIONS is an ongoing collaboration that will provide the largest deep photometric survey of the Northern sky in four optical bands to date. As part of this collaboration, CFIS is taking r-band data with an average seeing of 0.65 arcsec, which is complete to magnitude 24.5 and thus ideal for weak-lensing studies. We perform the first weak-lensing analysis of CFIS r-band data over an area spanning 1700 deg^2 of the sky. We create a catalogue with measured shapes for 40 million galaxies, corresponding to an effective density of 6.8 galaxies per square arcminute, and demonstrate a low level of systematic biases. This work serves as the basis for further cosmological studies using the full UNIONS survey of 4800 deg^2 when completed. Here we present ShapePipe, a newly developed weak-lensing pipeline. This pipeline makes use of state-of-the-art methods such as Ngmix for accurate galaxy shape measurement. Shear calibration is performed with metacalibration. We carry out extensive validation tests on the Point Spread Function (PSF), and on the galaxy shapes. In addition, we create realistic image simulations to validate the estimated shear. We quantify the PSF model accuracy and show that the level of systematics is low as measured by the PSF residuals. Their effect on the shear two-point correlation function is sub-dominant compared to the cosmological contribution on angular scales <100 arcmin. The additive shear bias is below 5×10^-4, and the residual multiplicative shear bias is at most 10^-3 as measured on image simulations. Using COSEBIs we show that there are no significant B-modes present in second-order shear statistics. We present convergence maps and see clear correlations of the E-mode with known cluster positions. We measure the stacked tangential shear profile around Planck clusters at a significance higher than 4σ.
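The metacalibration step mentioned above calibrates the shear by re-measuring ellipticities on copies of each image that have been artificially sheared by ±Δg, giving a response matrix that corrects the ensemble average:

$$
R_{ij}\simeq\frac{e_i(+\Delta g_j)-e_i(-\Delta g_j)}{2\,\Delta g_j},
\qquad
\langle\boldsymbol{g}\rangle\approx\langle\mathsf{R}\rangle^{-1}\langle\boldsymbol{e}\rangle,
$$

with Δg a small applied shear, typically of order 0.01.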
- Data-driven modelling of ground-based and space-based telescope’s point spread functions. Tobias Ignacio Liaudat. PhD thesis, Université Paris-Saclay, Oct 2022 (2022UPASP118).
Gravitational lensing is the distortion of the images of distant galaxies by intervening massive objects and constitutes a powerful probe of the Large Scale Structure of our Universe. Cosmologists use weak (gravitational) lensing to study the nature of dark matter and its spatial distribution. These studies require highly accurate measurements of galaxy shapes, but the telescope’s instrumental response, or point spread function (PSF), deforms our observations. This deformation can be mistaken for weak lensing effects in the galaxy images, thus being one of the primary sources of systematic error when doing weak lensing science. Therefore, estimating a reliable and accurate PSF model is crucial for the success of any weak lensing mission. The PSF field can be interpreted as a convolutional kernel that affects each of our observations of interest that varies spatially, spectrally, and temporally. The PSF model needs to cope with these variations and is constrained by specific stars in the field of view. These stars, considered point sources, provide us with degraded samples of the PSF field. The observations go through different degradations depending on the properties of the telescope, including undersampling, an integration over the instrument’s passband, and additive noise. We finally build the PSF model using these degraded observations and then use the model to infer the PSF at the position of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. The core of this thesis has been the development of new data-driven, also known as non-parametric, PSF models. We have developed a new PSF model for ground-based telescopes, coined MCCD, which can simultaneously model the entire focal plane. Consequently, MCCD has more available stars to constrain a more complex model. The method is based on a matrix factorisation scheme, sparsity, and an alternating optimisation procedure. We have included the PSF model in a high-performance shape measurement pipeline and used it to process 3500 deg² of r-band observations from the Canada-France Imaging Survey. A shape catalogue has been produced and will be soon released. The main goal of this thesis has been to develop a data-driven PSF model that can address the challenges raised by one of the most ambitious weak lensing missions so far, the Euclid space mission. The main difficulties related to the Euclid mission are that the observations are undersampled and integrated into a single wide passband. Therefore, it is hard to recover and model the PSF chromatic variations from such observations. Our main contribution has been a new framework for data-driven PSF modelling based on a differentiable optical forward model allowing us to build a data-driven model for the wavefront. The new model coined WaveDiff is based on a matrix factorisation scheme and Zernike polynomials. The model relies on modern gradient-based methods and automatic differentiation for optimisation, which only uses noisy broad-band in-focus observations. Results show that WaveDiff can model the PSFs’ chromatic variations and handle super-resolution with high accuracy.
2021
- Multi-CCD modelling of the point spread function. Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, and 4 more authors. A&A, Oct 2021.
Context. Galaxy imaging surveys observe a vast number of objects, which are ultimately affected by the instrument’s point spread function (PSF). Weak lensing missions in particular aim at measuring the shape of galaxies, and PSF effects represent a significant source of systematic errors that must be handled appropriately. This requires a high level of accuracy at the modelling stage as well as in the estimation of the PSF at galaxy positions. Aims. The goal of this work is to estimate a PSF at galaxy positions, which is also referred to as a non-parametric PSF estimation and which starts from a set of noisy star image observations distributed over the focal plane. To accomplish this, we need our model to precisely capture the PSF field variations over the field of view and then to recover the PSF at the chosen positions. Methods. In this paper, we propose a new method, coined Multi-CCD (MCCD) PSF modelling, which simultaneously creates a PSF field model over the entirety of the instrument’s focal plane. It allows us to capture global as well as local PSF features through the use of two complementary models that enforce different spatial constraints. Most existing non-parametric models build one model per charge-coupled device, which can lead to difficulties in capturing global ellipticity patterns. Results. We first tested our method on a realistic simulated dataset, comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA) and finding that our method outperforms both of them. Then we contrasted our approach with PSFEx based on real data from the Canada-France Imaging Survey, which uses the Canada-France-Hawaii Telescope. We show that our PSF model is less noisy and achieves a 22% gain on the pixel’s root mean square error with respect to PSFEx. Conclusions. We present and share the code for a new PSF modelling algorithm that models the PSF field over the entire focal plane and is mature enough to handle real data.
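Schematically, data-driven PSF models of this family fit a low-rank factorisation to the observed stars; MCCD's specificity is to combine a global component defined over the whole focal plane with per-CCD local components. With illustrative notation (not the paper's),

$$
\mathsf{Y}\;\approx\;\mathcal{F}_{\rm deg}\!\big(\mathsf{S}_{\rm glob}\,\mathsf{A}_{\rm glob}+\mathsf{S}_{\rm loc}\,\mathsf{A}_{\rm loc}\big)+\mathsf{N},
$$

where Y stacks the star postage stamps, the S matrices hold learned eigen-PSFs, the A matrices encode their spatial variation across the field of view under sparsity constraints, F_deg collects the degradations, and N is noise; the PSF at a galaxy position is recovered by evaluating the spatial part there.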
- Rethinking the modeling of the instrumental response of telescopes with a differentiable optical model. Tobias Liaudat, Jean-Luc Starck, Martin Kilbinger, and 1 more author. In NeurIPS 2021 Machine Learning for Physical Sciences workshop, Nov 2021.
We propose a paradigm shift in the data-driven modeling of the instrumental response field of telescopes. By adding a differentiable optical forward model into the modeling framework, we change the data-driven modeling space from the pixels to the wavefront. This allows us to transfer a great deal of complexity from the instrumental response into the forward model while being able to adapt to the observations, remaining data-driven. Our framework allows a way forward to building powerful models that are physically motivated, interpretable, and that do not require special calibration data. We show that for a simplified setting of a space telescope, this framework represents a real performance breakthrough compared to existing data-driven approaches with reconstruction errors decreasing 5 fold at observation resolution and more than 10 fold for a 3x super-resolution. We successfully model chromatic variations of the instrument’s response only using noisy broad-band in-focus observations.
- Semi-Parametric Wavefront Modelling for the Point Spread Function. Tobias Liaudat, Jean-Luc Starck, and Martin Kilbinger. In 52ème Journées de Statistique de la Société Française de Statistique (SFdS), Jun 2021.
We introduce a new approach to estimate the point spread function (PSF) field of an optical telescope by building a semi-parametric model of its wavefront error. This method is particularly advantageous because it does not require calibration observations to recover the wavefront error and it naturally takes into account the chromaticity of the optical system. The model is end-to-end differentiable and relies on a diffraction operator that allows us to compute monochromatic PSFs from the wavefront information.
2020
- Faster and better sparse blind source separation through mini-batch optimization. Christophe Kervazo, Tobias Liaudat, and Jérôme Bobin. Digital Signal Processing, Jun 2020.
Sparse Blind Source Separation (sBSS) plays a key role in scientific domains as different as biomedical imaging, remote sensing or astrophysics. Such fields however require the development of increasingly faster and scalable BSS methods without sacrificing the separation performances. To that end, we introduce in this work a new distributed sparse BSS algorithm based on a mini-batch extension of the Generalized Morphological Component Analysis algorithm (GMCA). Precisely, it combines a robust projected alternated least-squares method with mini-batch optimization. The originality further lies in the use of a manifold-based aggregation of the asynchronously estimated mixing matrices. Numerical experiments are carried out on realistic spectroscopic spectra, and highlight the ability of the proposed distributed GMCA (dGMCA) to provide very good separation results even when very small mini-batches are used. Quite unexpectedly, the algorithm can further outperform the (non-distributed) state-of-the-art methods for highly sparse sources.
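As a rough illustration of the kind of update the abstract refers to, the sketch below performs one GMCA-style alternation on a mini-batch: least-squares source estimation, soft thresholding to promote sparsity, then a least-squares update of the mixing matrix with renormalised columns. It is a schematic, not the authors' dGMCA code, and the function names are illustrative.

```python
# Schematic mini-batch sparse-BSS update in the spirit of GMCA (illustrative only).
import numpy as np

def soft_threshold(x, lam):
    """Entrywise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def gmca_batch_update(X_batch, A, lam):
    """One alternation on a mini-batch X_batch (channels x batch_samples)."""
    # Sources by least squares given the current mixing matrix, then sparsification.
    S = np.linalg.lstsq(A, X_batch, rcond=None)[0]
    S = soft_threshold(S, lam)
    # Mixing matrix by least squares on the thresholded sources, columns renormalised.
    A_new = np.linalg.lstsq(S.T, X_batch.T, rcond=None)[0].T
    A_new /= np.maximum(np.linalg.norm(A_new, axis=0, keepdims=True), 1e-12)
    return S, A_new
```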
2019
- Distributed sparse BSS for large-scale datasets. Tobias Liaudat, Jérôme Bobin, and Christophe Kervazo. In 2019 SPARS conference proceedings, Apr 2019.
Blind Source Separation (BSS) [1] is widely used to analyze multichannel data stemming from fields ranging from astrophysics to medicine. However, existing methods do not efficiently handle very large datasets. In this work, we propose a new method coined DGMCA (Distributed Generalized Morphological Component Analysis) in which the original BSS problem is decomposed into subproblems that can be tackled in parallel, alleviating the large-scale issue. We propose to use the RCM (Riemannian Center of Mass - [6][7]) to aggregate during the iterative process the estimations yielded by the different subproblems. The approach is made robust both by a clever choice of the weights of the RCM and the adaptation of the heuristic parameter choice proposed in [4] to the parallel framework. The results obtained show that the proposed approach is able to handle large-scale problems with a linear acceleration performing at the same level as GMCA and maintaining an automatic choice of parameters.
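The aggregation step can be pictured with the crude first-order stand-in below: a weighted extrinsic average of the column-normalised mixing matrices estimated on each subproblem, re-projected onto unit-norm columns. The actual method uses a proper Riemannian center of mass with robust weights, and the sketch assumes source permutations and signs have already been aligned across batches.

```python
# Crude extrinsic proxy for the Riemannian-center-of-mass aggregation (illustrative only).
import numpy as np

def aggregate_mixing_matrices(A_list, weights=None):
    """Weighted average of column-normalised mixing matrices, re-projected to unit columns.

    Assumes source permutation and sign ambiguities are already resolved across batches.
    """
    A_stack = np.stack(A_list)                               # shape (n_batches, m, n)
    w = np.ones(len(A_list)) if weights is None else np.asarray(weights, dtype=float)
    A_mean = np.tensordot(w / w.sum(), A_stack, axes=1)      # weighted extrinsic mean
    return A_mean / np.maximum(np.linalg.norm(A_mean, axis=0, keepdims=True), 1e-12)
```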