Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

1 INTRODUCTION

Any telescope observation of astrophysical objects is limited by the noise in the image, driven by both the detector used and the sky background. Similarly, the observation is limited in angular resolution by the resolving power of the telescope (R ∼ λ/D) and, if taken from the ground, by the distortions caused by the moving atmosphere (the ‘seeing’). The total blurring introduced by the combination of the telescope and the atmosphere is described by the point spread function (PSF). An image taken by a telescope can therefore be thought of as a convolution of the true light distribution with this point spread function, plus the addition of various sources of noise. The Shannon–Nyquist sampling theorem (Nyquist 1928; Shannon 1949) limits the ability of deconvolution techniques to remove the effect of the PSF, particularly in the presence of noise (Magain, Courbin & Sohy 1998; Courbin 1999; Starck, Pantin & Murtagh 2002).
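Schematically, and with notation introduced here purely for illustration (it does not appear in the original text), this image formation model can be written as

```latex
I_{\mathrm{observed}} = \mathrm{PSF} \otimes I_{\mathrm{true}} + \varepsilon ,
```

where ⊗ denotes convolution and ε collects the various noise contributions; recovering I_true from I_observed is the inverse problem discussed below.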

Deconvolution has long been known to be an ‘ill-posed’ inverse problem because there is often no unique solution if one follows the signal processing approach of backwards modelling (Magain et al. 1998, 2007; Letawe et al. 2007, 2008; Cantale et al. 2016). Another standard practice in tackling such inverse problems is to integrate priors based on domain knowledge into a forward model. For example, if the algorithm knows what a galaxy should look like, or knows that the output needs to have certain properties such as being ‘sharp’, it can make more informed decisions when choosing among all possible solutions. In this paper, we demonstrate a method that uses machine learning to introduce such priors automatically and can reliably recover features in images of galaxies. We find that machine learning techniques can go beyond the limitations of deconvolution: by training on higher quality data, a machine learning system can learn to recover information from poor-quality data by effectively building priors.

2 METHOD

Our general method is agnostic as to the specific machine learning algorithm used. In this paper, we choose to use a conditional generative adversarial network (GAN), a state-of-the-art deep learning approach to image-to-image translation. We adopt a standard GAN architecture in this work; we therefore introduce GANs only briefly and refer interested readers to Reed et al. (2016) and Goodfellow et al. (2014) for details.

In the training phase, the GAN takes as input a set of image pairs: in our case, one image that is degraded (by this we mean convolved with a worse PSF, or blurred, and with added noise) and the same image without such degradation. The GAN then tries to ‘learn’ to recover the degraded image by minimizing the difference between the recovered image and the non-degraded image. The function that measures the difference between the two images, often called the loss function, can be something simple such as the Euclidean distance, but can also be more sophisticated. In the case of a GAN, this function is another neural network (hence the name adversarial) whose goal is to distinguish the recovered image from a non-degraded image. These two neural networks are trained at the same time. This allows the system to learn sophisticated loss functions automatically without hand-engineering.1 In the testing phase, the GAN takes a different set of degraded images and recovers them.
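To make the adversarial setup concrete, the following is a minimal sketch of one conditional GAN training step in PyTorch. The helpers make_generator, make_discriminator and the data loader are hypothetical placeholders; the architecture and training details actually used follow the code of Reed et al. (2016).

```python
# Minimal sketch of one conditional GAN training step (PyTorch).
# make_generator(), make_discriminator() and loader are hypothetical placeholders.
import torch
import torch.nn.functional as F

G = make_generator()       # maps a degraded image to a recovered image
D = make_discriminator()   # scores (degraded, candidate) pairs; sigmoid output in [0, 1]
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for degraded, original in loader:  # pairs produced by the automatic degradation step
    # Discriminator update: real (degraded, original) pairs vs. generated pairs.
    fake = G(degraded).detach()
    d_real = D(degraded, original)
    d_fake = D(degraded, fake)
    loss_D = (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator update: produce recoveries the discriminator accepts as real.
    fake = G(degraded)
    d_fake = D(degraded, fake)
    loss_G = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```

The discriminator here plays the role of the learned loss function described above.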

One remaining challenge is how to generate pairs of images with and without degradation for the training phase. In our framework, we take advantage of the centuries of study of the noise introduced by the telescope and the atmosphere to weakly supervise a GAN by simulating the degradation process automatically. This allows us to harvest a large training set without any human intervention. Furthermore, it allows us to scale our system automatically, and arguably achieve better quality, when future large-scale sky survey data from e.g. LSST (LSST Science Collaboration et al. 2009) or Euclid (Laureijs et al. 2011) become available. We outline the method in Fig. 1.

 

Figure 1.
Schematic illustration of the training process of our method. The input is a set of original images. From these, we automatically generate degraded images, and train a GAN. In the testing phase, only the generator will be used to recover images.


We select a sample of 4550 galaxies from the Sloan Digital Sky Survey Data Release 12 (York et al. 2000; Alam et al. 2015) in the redshift range 0.01 < z < 0.02 and conduct 10× cross-validation for all of our experiments (each fold contains 4105 images for training and 455 for testing). We obtain the g, r and i-band images for these objects and process them using an asinh stretch (y = asinh(10x)/3) to produce three-band RGB images. Using an asinh stretch rather than keeping the linear data follows the best practice of making the range of input values comparable across images for a neural network, a step that has been shown to make training faster and easier in other applications (Sola & Sevilla 1997). We note that this process involves clipping extreme pixel values and in principle makes it impossible to fully recover the original flux calibration; exploring how to avoid this without degrading the training performance of the neural network is an interesting avenue for future work.
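As an illustration, the stretch and clipping could be applied as in the sketch below; the band-to-channel mapping (i, r, g → R, G, B) and the clip range are assumptions of this sketch rather than the exact pipeline settings.

```python
import numpy as np

def asinh_stretch(x):
    """Compress the dynamic range with y = asinh(10 x) / 3."""
    return np.arcsinh(10.0 * x) / 3.0

def make_rgb(g_band, r_band, i_band, clip=(0.0, 1.0)):
    """Stack g, r, i images into an RGB cube after stretching and clipping.

    The clip range is an assumed choice for this sketch; clipping extreme
    pixel values is the step that prevents full recovery of the original
    flux calibration.
    """
    rgb = np.stack([asinh_stretch(b) for b in (i_band, r_band, g_band)], axis=-1)
    return np.clip(rgb, *clip)
```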

In order to test the performance of the GAN, we generate a grid of training sets from the galaxy sample. In each training set, we convolve the images with a Gaussian PSF with a full width at half-maximum (FWHM) of [1.4, 1.8, 2.0, 2.5] arcsec. The median seeing of the SDSS images is ∼1.4 arcsec, so we explore images from effectively the same resolution all the way to a significantly worse seeing of 2.5 arcsec. After convolving the images with a Gaussian filter representing worse seeing, we adjust the noise level, first restoring it to that of the original image, and then increasing it so that σ_new = [1.0, 1.2, 2.0, 5.0, 10.0] × σ_original to mimic shallower images. We train the GAN using open source code released by Reed et al. (2016) with TITAN X PASCAL GPUs. Training finishes in 2 h per setting per fold (200 h in total).
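A sketch of this degradation step is given below; the SDSS pixel scale of 0.396 arcsec per pixel and the white-noise bookkeeping used to restore and then raise the noise level are assumptions of the sketch, not an exact reproduction of the pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

PIXEL_SCALE = 0.396  # arcsec per pixel (SDSS); an assumption of this sketch

def degrade(image, fwhm_arcsec, noise_factor, sigma_original, rng=None):
    """Blur with a Gaussian PSF of the given FWHM and raise the noise level.

    sigma_original is the noise level measured in the undegraded image.
    Because the blurring suppresses the original noise, Gaussian noise is
    added in quadrature so that the output noise is approximately
    noise_factor * sigma_original.
    """
    rng = rng or np.random.default_rng()
    sigma_psf = fwhm_arcsec / PIXEL_SCALE / 2.355          # FWHM -> Gaussian sigma in pixels
    blurred = gaussian_filter(image, sigma=sigma_psf)
    # Approximate noise remaining after smoothing 2-D white noise with a Gaussian kernel.
    sigma_blurred = sigma_original / (2.0 * sigma_psf * np.sqrt(np.pi))
    sigma_add = np.sqrt(max((noise_factor * sigma_original) ** 2 - sigma_blurred ** 2, 0.0))
    return blurred + rng.normal(0.0, sigma_add, image.shape)

# Grid of degradations explored in the experiments
FWHM_GRID = [1.4, 1.8, 2.0, 2.5]           # arcsec
NOISE_GRID = [1.0, 1.2, 2.0, 5.0, 10.0]    # multiples of the original noise level
```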

3 RESULTS

We evaluate our method both quantitatively and qualitatively. Quantitatively, we measure the peak signal-to-noise ratio (PSNR; Xu et al. 2014) of the blurred image and the recovered image. The PSNR is a popular quantitative measure for image recovery (Xu et al. 2014). It is defined as the ratio between the maximum possible power of the signal (the original image) and the power of the noise (the difference between the recovered and original images). When the blurred image contains substantial noise, a case that is known to be a challenge for deconvolution-based approaches, our method achieves a PSNR of 37.2 dB. Blind deconvolution (Bell & Sejnowski 1995)2 achieves a PSNR of 19.9 dB and Lucy–Richardson deconvolution (Richardson 1972; Lucy 1974)3 achieves a PSNR of 18.7 dB. We compare our approach with classic deconvolution approaches in Table 1. We also perform an experiment on the impact of the training set size, reported in Table 1(d): the PSNR achieved increases as we increase the training set size from 5 to 2000 images.
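For reference, the PSNR is conventionally computed from the mean squared error between the recovered and original images; a minimal implementation is sketched below (the choice of the peak value, here taken from the original image, is an assumption).

```python
import numpy as np

def psnr(original, recovered, peak=None):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    err = original.astype(np.float64) - recovered.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0.0:
        return np.inf
    peak = float(original.max()) if peak is None else peak
    return 10.0 * np.log10(peak ** 2 / mse)
```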