Abstract
Ever since the advent of AlexNet, designing novel deep neural architectures for different tasks has consistently been a productive research direction. Despite the exceptional practical performance of various architectures, we study a theoretical question: under what condition does a deep neural architecture preserve all the information in its input data? Identifying this information-lossless condition is important because tasks such as image restoration require preserving as much detail of the input data as possible. Using the definition of mutual information, we show that a deep neural architecture preserves all information about the given data if and only if the architecture is invertible. We verify the advantages of our Invertible Restoring Autoencoder (IRAE) network by comparing it with competitive models on three perturbed image restoration tasks: image denoising, JPEG image decompression, and image inpainting. Experimental results show that IRAE consistently outperforms its non-invertible counterparts while containing far fewer parameters. Thus, it may be worthwhile to replace standard components of deep neural architectures with their invertible counterparts. We believe our work provides a unique perspective and direction for future deep learning research.
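To illustrate the invertibility property the abstract refers to, here is a minimal sketch of an additive coupling layer, a standard building block of invertible networks. This is an illustrative example, not the paper's actual IRAE implementation; the function names, the scalar weight `w`, and the choice of `tanh` as the coupling function are assumptions made for the sketch. The key point is that the inverse recovers the input exactly, so no information is lost.

```python
import math

def coupling_forward(x, w):
    """Additive coupling: split x in half, pass x1 through unchanged,
    and shift x2 by t(x1). Any function t of x1 keeps the layer invertible."""
    h = len(x) // 2
    x1, x2 = x[:h], x[h:]
    t = [math.tanh(w * v) for v in x1]  # illustrative coupling function
    return x1 + [b + s for b, s in zip(x2, t)]

def coupling_inverse(y, w):
    """Exact inverse: y1 equals x1, so t(y1) can be recomputed and subtracted."""
    h = len(y) // 2
    y1, y2 = y[:h], y[h:]
    t = [math.tanh(w * v) for v in y1]
    return y1 + [b - s for b, s in zip(y2, t)]

x = [0.5, -1.2, 3.0, 0.1]
y = coupling_forward(x, 0.7)
x_rec = coupling_inverse(y, 0.7)
```

Because the unchanged half `x1` is available at inversion time, the shift applied to `x2` can be reproduced and undone exactly, regardless of how complex the coupling function is; this is what makes such layers information-lossless in the sense of the mutual-information argument above.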
| Original language | English |
|---|---|
| Title of host publication | Neural Information Processing - 27th International Conference, ICONIP 2020, Proceedings |
| Editors | Haiqin Yang, Kitsuchart Pasupa, Andrew Chi-Sing Leung, James T. Kwok, Jonathan H. Chan, Irwin King |
| Publisher | Springer Science and Business Media Deutschland GmbH |
| Pages | 172-184 |
| Number of pages | 13 |
| ISBN (Print) | 9783030638351 |
| DOIs | |
| State | Published - 2020 |
| Externally published | Yes |
| Event | 27th International Conference on Neural Information Processing, ICONIP 2020 - Bangkok, Thailand Duration: 18 Nov 2020 → 22 Nov 2020 |
Publication series
| Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
|---|---|
| Volume | 12534 LNCS |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
Conference
| Conference | 27th International Conference on Neural Information Processing, ICONIP 2020 |
|---|---|
| Country/Territory | Thailand |
| City | Bangkok |
| Period | 18/11/20 → 22/11/20 |
Bibliographical note
Publisher Copyright: © 2020, Springer Nature Switzerland AG.
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science