Deep Learning Image Compression: nearly 10,000:1 compression ratio!

Tuesday, June 28, 2022

Here disclosed is a novel compression technique I call Deep Learning Semantic Vector Quantization (DLSVC) that achieves, on this sample, 9,039:1 compression! Compare this to JPEG at about 10:1 or even HEIC at about 20:1, and the absolutely incredible power of DL image compression becomes apparent.

Before I disclose the technique that achieves this absolutely stunning result, we need to understand a bit about the psychovisual mechanisms being exploited. A good starting point is thinking about:

It was a dark and stormy night and all through the house not a creature was stirring, not even a mouse.

I’m sure each person reading this develops an internal model, likely some combination of a snug, warm indoor Christmas scene while a storm rages outside, or something to that effect derived from a shared cultural semantic representation: a scene with a great deal of detail and complexity despite the very short text string. The underlying mechanism is a sort of vector quantization: the text represents a series of vectors that semantically reference complex, culturally shared elements, which together form a type of codebook.
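For the programmers in the audience, classical vector quantization is easy to sketch: keep a shared codebook of prototype vectors and transmit only the index of the nearest entry. A toy sketch in Python (the codebook and "scene" vectors here are invented purely for illustration):

```python
import numpy as np

# A shared "codebook" of prototype vectors, standing in for the
# culturally shared scene elements described above (values invented).
codebook = np.array([
    [0.9, 0.1],  # index 0: "dark and stormy night"
    [0.2, 0.8],  # index 1: "snug indoor Christmas scene"
    [0.5, 0.5],  # index 2: "not a creature stirring"
])

def quantize(vectors: np.ndarray) -> np.ndarray:
    """Replace each vector with the index of its nearest codebook entry."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)  # the indices ARE the compressed stream

scene = np.array([[0.85, 0.15], [0.25, 0.75]])
print(quantize(scene))  # [0 1]: two small integers stand in for rich scenes
```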

If a person skilled at drawing were to attempt to represent this coded reference visually, it is likely the result would be recognizable to others as a representation of the text; that is, the text is an extremely compact symbolic representation of an image.

[Image: DALL·E mini rendering of the scene. Created by DALL·E mini, developed by Boris Dayma et al., 2021–2022]

So now let’s try a little AI-assisted vector quantization of images. We can start with a generic image from Wikipedia:


A cat sitting on a bench, Varna, December, 2018 (https://commons.wikimedia.org/wiki/File:A_feral_cat_sitting_on_a_bench,_Varna,_December,_2018.jpg)

Next we use AI to reduce the image to a symbolic semantic representation. There are far more powerful AI systems available, but we’ll use one that allows normal people to play with it, @milhidaka’s caption generator on GitHub:

[Image: @milhidaka’s caption generator output: “a cat sitting on top of a wooden bench”]
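If you’d rather script this step than click around a web demo, a comparable off-the-shelf captioner runs in a few lines. This sketch assumes the Hugging Face transformers library and the public nlpconnect/vit-gpt2-image-captioning checkpoint rather than @milhidaka’s model, so the exact wording may differ; the file name is hypothetical:

```python
from transformers import pipeline

# A publicly available captioning checkpoint (not milhidaka's model,
# so the caption it produces may be worded differently).
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# "cat_on_bench.jpg" is a hypothetical local copy of the Wikimedia image above.
result = captioner("cat_on_bench.jpg")
print(result[0]["generated_text"])  # e.g. "a cat sitting on top of a wooden bench"
```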

The model captions our image as “a cat sitting on top of a wooden bench”, which we can LZW compress (assuming a 26-character alphabet) to a mere 174 bits: 804D22134C834638D4CE3CE14058E38310D071087. That’s a pretty compact representation of an image! The model has been trained to understand the correlation between widely shared semantic symbols and elements of images, and it can reduce an image to a human-comprehensible, compact textual representation: effectively a lossy coding scheme referencing a massive shared codebook whose complex grammatical rules further increase the information density of the text.
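For anyone who wants to reproduce the text-coding step, here’s a minimal LZW encoder sketch. I’m assuming a 27-symbol alphabet (a–z plus space) and fixed-width output codes; the exact bit count depends on the alphabet and code-width convention, so treat 174 bits as the ballpark:

```python
from math import ceil, log2

def lzw_encode(text: str, alphabet: str):
    """Return the list of LZW codes for text over the given alphabet."""
    dictionary = {ch: i for i, ch in enumerate(alphabet)}  # single-symbol seeds
    codes, w = [], ""
    for ch in text:
        wc = w + ch
        if wc in dictionary:
            w = wc                             # keep extending the current phrase
        else:
            codes.append(dictionary[w])        # emit the longest known phrase
            dictionary[wc] = len(dictionary)   # learn the new phrase
            w = ch
    if w:
        codes.append(dictionary[w])
    return codes, len(dictionary)

alphabet = "abcdefghijklmnopqrstuvwxyz "  # 27 symbols: a-z plus space
caption = "a cat sitting on top of a wooden bench"
codes, dict_size = lzw_encode(caption, alphabet)
bits = len(codes) * ceil(log2(dict_size))  # fixed-width codes
print(f"{len(codes)} codes x {ceil(log2(dict_size))} bits = {bits} bits")
```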

Decoding those 174 bits back to the original text, we can feed the caption into an image-generating model like DALL·E mini, and we get our original image back: the process is reversed through a different semantic model, but one also trained to the same human language.
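The matching decoder rebuilds the dictionary on the fly, so the caption round-trips losslessly; the lossy steps are the captioning and the regeneration, not the text coding. Continuing the sketch above:

```python
def lzw_decode(codes: list, alphabet: str) -> str:
    """Invert lzw_encode: rebuild the dictionary while decoding."""
    dictionary = {i: ch for i, ch in enumerate(alphabet)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        # The one tricky case: the code being read was added this very step.
        entry = dictionary[code] if code in dictionary else w + w[0]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[0]
        w = entry
    return "".join(out)

assert lzw_decode(codes, alphabet) == caption  # the text round-trips losslessly
```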
[Image: DALL·E mini output for “a cat sitting on top of a wooden bench”]

It is clearly a lossy conversion, but here’s the thing: human memory is lossy too. If you saw the original scene and, 20 years later, someone said, “hey, remember that time we saw the cat sitting on a wooden bench in Varna? Look, here’s a picture of it!” and showed you this picture, then, aside from the funny-looking cat-like blob, you’d say “oh, yeah, cool, that was a cute cat.”

Using the DALL·E mini output as the basis for computing compression, rather than the input image (which could be arbitrarily large), we have 256 × 256 × 8 × 3 = 1,572,864 bits to represent the output image raw.

WebP “low quality” compression of the 256×256 image yields a file of 146,080 bits, or 10.77:1 compression.

My technique yields a compressed representation of 174 bits, or 9,039:1 compression. At DALL·E 2’s 1024×1024 output size, the same 174 bits should yield roughly 144,631:1 compression.
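The arithmetic, for anyone checking at home:

```python
caption_bits = 174                 # LZW-coded caption, from above

raw_256 = 256 * 256 * 3 * 8        # DALL·E mini raw output: 1,572,864 bits
raw_1024 = 1024 * 1024 * 3 * 8     # DALL·E 2 raw output: 25,165,824 bits
webp_bits = 146_080                # WebP "low quality" file, from above

print(f"WebP:         {raw_256 / webp_bits:,.2f}:1")     # ~10.77:1
print(f"DLSVC (mini): {raw_256 / caption_bits:,.0f}:1")  # ~9,039:1
print(f"DLSVC (2):    {raw_1024 / caption_bits:,.0f}:1") # ~144,631:1
```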

https://labs.openai.com/s/nM4ALIzP7Mbn7GZQUVMnLgcl

DALL·E 2023-01-22 02.07.41 – a cat sitting on top of a wooden bench

This is not a photograph. This is DALL·E 2’s 25,165,824-bit (raw) interpretation of the 174-bit text “a cat sitting on top of a wooden bench”, which was derived by a different AI from the original image.

So, just for comparison, let’s consider how much we can compress the original image: resizing to 32×21 pixels and encoding as, say, WebP gets it down to 580 bytes.

[Image: cat compressed to 580 bytes]

Even being generous and using the original file’s 7,111,400 bytes, such that this blancmange of an image represents 12,261:1 compression, it is still about 12× worse compression than our novel technique, and it is hard to argue that this is a better representation of the original image than our AI-based semantic codebook compression achieved.
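And the same arithmetic for the thumbnail comparison (reusing raw_1024 and caption_bits from the block above):

```python
original_bytes = 7_111_400   # original Wikimedia file size
thumb_bytes = 580            # 32x21 WebP thumbnail

thumb_ratio = original_bytes / thumb_bytes   # ~12,261:1
dlsvc_ratio = raw_1024 / caption_bits        # ~144,631:1
print(f"thumbnail: {thumb_ratio:,.0f}:1")
print(f"DLSVC is {dlsvc_ratio / thumb_ratio:.1f}x better")  # ~11.8x
```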

Pied Piper got nothin’ on this!
