## Abstract

The image inverse problem is that of reconstructing an image from a degraded
or compressed observation. Some previous solutions to this problem use generative adversarial
networks (GANs), but the representational capacity of such models cannot capture the full
distribution of complex classes of images (e.g., human faces), producing sub-optimal results.
Our work examines the image-adaptive generative model proposed in Hussein et al. (2020), which
purports to mitigate the limited representational capacity of previous models in solving the
image inverse problem. To this end, we implement the model from Hussein et al. (2020),
which makes generators "image-adaptive" to a specific test sample. This model consists of three
successive optimization stages: the non-image-adaptive "compressed sensing using generative models"
(CSGM) step, the image-adaptive (IA) step, and the post-processing "back-projection" (BP) step. Our results
demonstrate that the two latter steps, IA and BP, effectively improve reconstructions.
Further testing reveals slight biases in the model (e.g., with respect to skin tone), which we conjecture
are caused by the dataset on which the model was trained. Finally, to explore more efficient ways
of running the model, we test different numbers of CSGM iterations. The results show that the
number of CSGM iterations can indeed be decreased without compromising reconstruction quality.
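As a rough illustration of the three-stage pipeline, the sketch below runs CSGM, IA, and BP on a toy linear "generator" with NumPy. The linear generator G(z) = Wz, the Gaussian measurement operator A, the step size, and all dimensions are illustrative stand-ins, not the actual GAN, degradation operators, or hyperparameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative only): the real method uses a pretrained GAN
# generator and image-domain degradation operators.
n, m, k = 20, 8, 5            # signal, measurement, and latent dimensions
A = rng.normal(size=(m, n))   # compressive measurement operator (y = A x)
W = rng.normal(size=(n, k))   # linear "generator": G(z) = W @ z
x_true = W @ rng.normal(size=k) + 0.1 * rng.normal(size=n)
y = A @ x_true                # observed measurements

def loss(W, z):
    r = A @ (W @ z) - y
    return r @ r

lr = 0.25 / np.linalg.norm(A @ W, 2) ** 2  # conservative step size

# Stage 1 (CSGM): optimize the latent code z only, generator fixed.
z = rng.normal(size=k)
for _ in range(3000):
    z -= lr * 2 * W.T @ A.T @ (A @ (W @ z) - y)
loss_csgm = loss(W, z)

# Stage 2 (IA): also fine-tune the generator's weights, adapting the
# generator to this specific test sample.
Wa = W.copy()
for _ in range(3000):
    r = A @ (Wa @ z) - y
    grad_W = 2 * np.outer(A.T @ r, z)
    grad_z = 2 * Wa.T @ A.T @ r
    Wa -= lr * grad_W
    z -= lr * grad_z
loss_ia = loss(Wa, z)

# Stage 3 (BP): back-project the reconstruction onto the set of signals
# consistent with the measurements, so that A @ x_bp == y.
x_hat = Wa @ z
x_bp = x_hat + np.linalg.pinv(A) @ (y - A @ x_hat)
```

In this toy setting, the IA stage cannot increase the CSGM residual (it continues the same descent with extra degrees of freedom), and the BP stage enforces exact measurement consistency whenever A has full row rank.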

## Authors

Antonio Marino ('23), Yilong Song ('23), and Daisuke Yamada ('23), under the supervision of Professor Anna Rafferty. Carleton College.