How to exploit the transferability of learned image compression to conventional codecs
Journal
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pages
16160-16169
Date Issued
2021
Author(s)
Abstract
Lossy image compression is often limited by the simplicity of the chosen loss measure. Recent research suggests that generative adversarial networks have the ability to overcome this limitation and serve as a multi-modal loss, especially for textures. Together with learned image compression, these two techniques can be used to great effect when relaxing the commonly employed tight measures of distortion. However, convolutional neural network-based algorithms have a large computational footprint. Ideally, an existing conventional codec should stay in place, ensuring faster adoption and adherence to a balanced computational envelope. As a possible avenue to this goal, we propose and investigate how learned image coding can be used as a surrogate to optimise an image for encoding. A learned filter alters the image to optimise a different performance measure or a particular task. Extending this idea with a generative adversarial network, we show how entire textures are replaced by ones that are less costly to encode but preserve a sense of detail. Our approach can remodel a conventional codec to adjust for the MS-SSIM distortion with over 20% rate improvement without any decoding overhead. On task-aware image compression, we perform favourably against a similar but codec-specific approach. © 2021 IEEE
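
The sketch below illustrates, in PyTorch, the general pre-filtering idea the abstract describes: a small CNN filter is trained through a frozen, differentiable learned-compression surrogate so that the filtered image becomes cheaper to encode while its reconstruction stays close to the original. Everything in the sketch is an illustrative assumption rather than the authors' implementation: the toy autoencoder surrogate, the latent-magnitude rate proxy, the MSE stand-in for MS-SSIM, the lambda trade-off, and all module names are hypothetical, and the GAN texture-synthesis component is omitted.

# Minimal sketch of surrogate-driven pre-filtering for a conventional codec.
# All modules, sizes and losses are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreFilter(nn.Module):
    """Small CNN that alters the image before it is passed to the conventional codec."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )
    def forward(self, x):
        # Residual formulation: predict a correction to the input image.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

class SurrogateCodec(nn.Module):
    """Toy differentiable stand-in for a learned image codec (analysis/synthesis
    transforms plus a crude rate proxy); a real surrogate would be a pretrained
    learned-compression model."""
    def __init__(self, channels=3, latent=128):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(channels, latent, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(latent, latent, 5, stride=2, padding=2),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(latent, latent, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(latent, channels, 5, stride=2, padding=2, output_padding=1),
        )
    def forward(self, x):
        y = self.encode(x)
        x_hat = self.decode(y)
        rate_proxy = y.abs().mean()  # crude differentiable proxy for the bitrate
        return x_hat, rate_proxy

def training_step(prefilter, surrogate, image, lam=0.01):
    """Distortion is measured against the original image; the rate proxy is
    measured on the filtered image's latents (MSE here stands in for MS-SSIM)."""
    filtered = prefilter(image)
    reconstructed, rate = surrogate(filtered)
    distortion = F.mse_loss(reconstructed, image)
    return distortion + lam * rate

if __name__ == "__main__":
    prefilter, surrogate = PreFilter(), SurrogateCodec()
    for p in surrogate.parameters():  # surrogate stays frozen; only the filter learns
        p.requires_grad_(False)
    opt = torch.optim.Adam(prefilter.parameters(), lr=1e-4)
    image = torch.rand(1, 3, 64, 64)  # dummy batch for illustration
    loss = training_step(prefilter, surrogate, image)
    loss.backward()
    opt.step()

At inference time only the pre-filter would run, and its output would be handed to the unmodified conventional codec, which is consistent with the abstract's claim of no decoding overhead.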
Event(s)
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021
Subjects
Convolutional neural networks
Encoding (symbols)
Image coding
Image compression
Textures
Convolutional neural network
Decoding overheads
Lossy image compression
Modal loss
Multi-modal
Network-based algorithm
Performance measure
Recent research
Task-aware
Generative adversarial networks
Type
conference paper
