|
|
paperboyo
|
|
paperboyo
[sorry for the spam] Just wanted to say that theguardian.com has been on `tune=iq` for the last two weeks (we serve AVIF to non-JXL-supporting browsers) and we couldn’t be happier. Encoder consistency is much, much higher (still not as good as JXL, but that’s just an impression, not a rigorous assessment): some highly compressible images where delicate detail was being obliterated grew even ~3× in filesize and look much better, while unnecessarily weighty images shrank, mostly without any perceptible degradation. Interestingly, we kept the same quality values (which differ by dpr) and, as far as we can tell, the overall bandwidth stayed more or less the same (I can’t see any change in the graph), so it’s likely the massive reduction of the rare weighty images offset the small images growing.
Huge thanks to <@297955493698076672> for his work on `tune=iq` and to whomever else worked on it. And to friends at our image CDN (Fastly). And to everyone here as without this server, I wouldn’t even know about it…
As always, we welcome improvements to any and all encoders. So the idea of [splitting Julio](https://discord.com/channels/794206087879852103/805176455658733570/1376249806917210224) couldn’t sound more exciting 😉 .
|
|
2025-07-30 04:08:09
|
|
|
2025-07-30 04:08:10
|
One observation that may be of use: the effect of overall bandwidth staying the same(ish) between `tune=ssim` and `tune=iq` holds less true the larger the image dimensions (given the same corpus/settings), and it results in an actual reduction of overall bandwidth at smaller dimensions.
|
|
|
juliobbv
|
2025-07-30 07:03:26
|
This is great news! To my knowledge, The Guardian is the first big customer of tune=iq. Your observations on improved image consistency are spot-on -- image consistency is one of the major aspects we worked on improving (alongside overall efficiency).
|
|
2025-07-30 07:18:26
|
I have a couple of questions:
- For adding tune=iq support, did you ask Fastly to just add the parameter as a "drop-in" replacement, or were other parameters (like speed) adjusted as well?
- If possible, can you give me a point of contact for the Fastly representative who helped you enable tune=iq? (can be done through DMs) The libaom team would love to get their feedback on the tune=iq integration experience.
|
|
2025-07-30 07:20:21
|
BTW <@703028154431832094> co-authored the techniques used in tune=iq, leveraging improvements done by <@321486891079696385> and <@138804598365224961>.
|
|
|
|
paperboyo
|
2025-07-30 07:21:38
|
As far as I know, all other settings, defo speed, stayed the same. Will ask around for a contact.
|
|
|
juliobbv
|
2025-07-30 07:26:31
|
Thanks <@810102077895344159>!
|
|
|
gb82
|
2025-07-30 08:21:12
|
This is incredibly cool to hear, awesome seeing tune iq gain adoption!
|
|
2025-07-30 08:22:16
|
<@810102077895344159> out of curiosity, how much AVIF is the Guardian serving? Is it your primary image format at this time? Also, does this mean it has become easier for Fastly to produce tune iq images for other clients?
|
|
|
|
paperboyo
|
|
gb82
<@810102077895344159> out of curiosity, how much AVIF is the Guardian serving? Is it your primary image format at this time? Also, does this mean it has become easier for Fastly to produce tune iq images for other clients?
|
|
2025-07-30 08:31:00
|
I don’t have the breakdown per format, but we serve AVIF to anything that supports it apart from those who understand JPEG XL (in practice – Safaris). After that it’s WebP and mozJPEG.
As to the second question: I don’t know, but I guess it cannot be harder now ;-).
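[The format-selection order described above can be sketched as content negotiation on the `Accept` request header. This is a hypothetical illustration, not The Guardian’s or Fastly’s actual implementation; the function name and fallback choice are assumptions:]

```python
def pick_format(accept: str) -> str:
    """Hypothetical sketch of the serving order described above:
    JPEG XL > AVIF > WebP > JPEG, chosen from the browser's Accept header."""
    if "image/jxl" in accept:
        return "jxl"
    if "image/avif" in accept:
        return "avif"
    if "image/webp" in accept:
        return "webp"
    return "jpeg"  # final fallback (PNG where transparency is needed)

# e.g. a current Safari advertising JXL support gets JXL,
# while a Chromium browser advertising AVIF gets AVIF
print(pick_format("image/jxl,image/avif,image/webp,*/*"))
print(pick_format("image/avif,image/webp,*/*"))
```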
|
|
|
juliobbv
|
|
gb82
<@810102077895344159> out of curiosity, how much AVIF is the Guardian serving? Is it your primary image format at this time? Also, does this mean it has become easier for Fastly to produce tune iq images for other clients?
|
|
2025-07-30 08:32:18
|
I'd assume Fastly has tune=iq enabled for the theguardian.com account behind a feature flag or something
|
|
|
gb82
|
|
paperboyo
I don’t have the breakdown per format, but we serve AVIF to anything that supports it apart from those who understand JPEG XL (in practice – Safaris). After that it’s WebP and mozJPEG.
As to the second question: I don’t know, but I guess it cannot be harder now ;-).
|
|
2025-07-30 08:33:07
|
nice, super cool to see AVIF/JXL at the forefront 😎
|
|
|
juliobbv
|
2025-07-30 08:33:20
|
so most of the infra work has been done by now
|
|
2025-07-30 09:25:06
|
BTW <@810102077895344159> I think I know why the bigger images at theguardian.com are overall less compressible than the smaller ones: the larger ones usually have camera noise, and tune=iq started preserving that noise at the selected quality level
|
|
2025-07-30 09:25:29
|
the downscales get rid of the noise, so there's nothing additional to preserve there, and the images become smaller because of that
|
|
2025-07-30 09:26:47
|
so it's a side effect of the consistency improvement -- tune ssim (the default for images) obliterates camera noise
|
|
|
|
paperboyo
|
|
juliobbv
so it's a side effect of the consistency improvement -- tune ssim (the default for images) obliterates camera noise
|
|
2025-07-30 11:28:46
|
[this below, entirely a personal take and likely full of – my own! – mistakes, turned out much longer than I ever intended, apologies. TL;DR would be: I’ll take a weaker but more consistent codec over a stronger but less consistent one any day!]
Larger ones more often having camera noise is definitely true. But smaller ones can also have faint textures and delicate/slow transitions. In my (limited!) experience, all three are now better preserved. It’s as if I increased the quality overall.
But this in itself is not amazing at all.
Just like with all the other codecs, only having the ability to set one quality for the whole site (not per-image; varied by dpr only, in two steps, since [this](https://github.com/guardian/frontend/pull/12079)), you have to make a pact with the image devil: you set it so that it works overall. You know some images will not look their best. You set it such that the majority look good (good is, mostly, subjective, but for the sake of this argument it doesn’t really matter who’s deciding; what matters is that it’s always a decision). And the reason you won’t go higher is that while it would improve the minority which may look weak, it would massively hurt performance, spending redundant bandwidth on the majority which already look good enough. And given that the latter can easily weigh 16 times more than the former at the same dimensions, they can really eat extra bandwidth like in La Grande Bouffe, and won’t look any better for it (interestingly, the difference in weight for the same image pair in `ssim` was… 166×!). Again, it doesn’t matter here if you consider the quality of this or that site low or high, the argument holds true regardless. At any quality. Unless you go 100 and leave the server ;-).
And reducing this gap – how many weak-looking images there are, and how far they fall from the good-looking majority – is encoder consistency.
|
|
2025-07-30 11:29:12
|
I’m more and more convinced that the quality of a codec lies less in fancy new coding tools and more in its intelligence in distributing bytes to where they matter, not wasting them where they don’t. Both at the image level and the within-image level (to images that need them and to areas that need them; away from those that don’t).
For all the codecs I played with [over the years](https://github.com/GoogleChromeLabs/squoosh/issues/270#:~:text=Some%20way%20of%20controlling%20quality%20of%20well%2Dcompressable/low%20freq%20areas%20independently%20of%20badly%2Dcompressable/high%20freq%20areas%20would%20do%20wonders%20%E2%80%93%20it%E2%80%99s%20often%20that%20both%20WebP%20and%20MozJPEG%20images%20could%20be%20compressed%20much%20more%20if%20I%20could%20protect%20areas%20that%20codecs%20deemed%20more%20%E2%80%9Csafe%E2%80%9D%20for%20compression.), it was almost invariably the areas, and the images, that codecs decided they could compress more easily/harder that I wanted compressed less! Paradoxically, it’s as if the codecs were always too clever! WebP being particularly “smart”… 😉
So, what’s actually amazing, indistinguishable from magic, with `tune=iq`, and what makes AVIF more like JPEG XL in this department, is that despite leaving quality values as they were, (almost) all images/areas look better **at the same overall bandwidth**! Those that exhibited artifacting don’t (or do so much less) and grew. And those that already looked good do not look visibly worse after shrinking. Bytes were redirected to where they matter. Encoder consistency is higher. Profit!
|
|
2025-07-30 11:29:28
|
What I meant with image dimensions affecting this wholesale transfer of bytes differently was that, given image corpus A, small versions of all its images got smaller overall than their `ssim` versions, while large versions got bigger overall. And if I had the knowledge and ability, I would play with that `boost` you mentioned to try to get even higher consistency. Those large images, in general, looked relatively better in `ssim` already and still look “too good” in `iq`; I feel they could trickle down some of that byte wealth to their tinier, poorer friends some more ;-). Maybe by varying `boost` by image dimensions? But that’s but a hunch, not tested, let alone proven. It’s already so much better, I should stop producing walls of text of questionable value and enjoy. And say thank you once again.
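[The dimension-varying hunch could be prototyped with something like this – an entirely hypothetical heuristic with made-up thresholds, not anything libaom or Fastly provides; the chosen strengths just reuse the 100/125/150 values discussed in this thread:]

```python
def deltaq_strength_for(width: int) -> int:
    """Hypothetical heuristic: boost small renditions more,
    leave large (already 'too good') renditions at the default."""
    if width <= 480:
        return 150   # small crops: few bytes at stake, boost hard
    if width <= 1200:
        return 125   # mid-size: a moderate boost
    return 100       # large: keep the default strength

# e.g. a 320px thumbnail vs a 2000px hero image
print(deltaq_strength_for(320), deltaq_strength_for(2000))
```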
|
|
2025-07-30 11:41:34
|
*– Pretty bold statements and sweeping assertions! How big was this corpus of yours, exactly, to warrant them?
– Err… twenty-four…
– (…)*
😜
|
|
|
juliobbv
|
|
paperboyo
[this below, entirely a personal take and likely full of – my own! – mistakes, turned out much longer than I ever intended, apologies. TL;DR would be: I’ll take a weaker but more consistent codec over a stronger but less consistent one any day!]
Larger ones more often having camera noise is definitely true. But smaller ones can also have faint textures and delicate/slow transitions. In my (limited!) experience, all three are now better preserved. It’s as if I increased the quality overall.
But this in itself is not amazing at all.
Just like with all the other codecs, only having the ability to set one quality for the whole site (not per-image; varied by dpr only, in two steps, since [this](https://github.com/guardian/frontend/pull/12079)), you have to make a pact with the image devil: you set it so that it works overall. You know some images will not look their best. You set it such that the majority look good (good is, mostly, subjective, but for the sake of this argument it doesn’t really matter who’s deciding; what matters is that it’s always a decision). And the reason you won’t go higher is that while it would improve the minority which may look weak, it would massively hurt performance, spending redundant bandwidth on the majority which already look good enough. And given that the latter can easily weigh 16 times more than the former at the same dimensions, they can really eat extra bandwidth like in La Grande Bouffe, and won’t look any better for it (interestingly, the difference in weight for the same image pair in `ssim` was… 166×!). Again, it doesn’t matter here if you consider the quality of this or that site low or high, the argument holds true regardless. At any quality. Unless you go 100 and leave the server ;-).
And reducing this gap – how many weak-looking images there are, and how far they fall from the good-looking majority – is encoder consistency.
|
|
2025-07-31 12:29:44
|
Thanks for your take! It's nice to see somebody who's into image consistency as much as <@703028154431832094> and I are.
|
|
|
paperboyo
What I meant with image dimensions affecting this wholesale transfer of bytes differently was that, given image corpus A, small versions of all its images got smaller overall than their `ssim` versions, while large versions got bigger overall. And if I had the knowledge and ability, I would play with that `boost` you mentioned to try to get even higher consistency. Those large images, in general, looked relatively better in `ssim` already and still look “too good” in `iq`; I feel they could trickle down some of that byte wealth to their tinier, poorer friends some more ;-). Maybe by varying `boost` by image dimensions? But that’s but a hunch, not tested, let alone proven. It’s already so much better, I should stop producing walls of text of questionable value and enjoy. And say thank you once again.
|
|
2025-07-31 12:32:55
|
By "boost", do you mean `--deltaq-strength`? If you can make sure most of your image assets aren't animations, good values to start with are 125 and 150. I'd prob not go beyond 150, as too strong a strength will inflate images more than the quality benefit is worth.
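[For anyone wanting to try this locally, a minimal sketch of the invocation – assuming libavif's `avifenc` built against an aom recent enough to expose `tune=iq`, with `-a` passing codec-specific options through to aom; filenames and the quality/speed values are placeholders:]

```shell
# Baseline: tune=iq at the default delta-q strength (100)
avifenc -q 60 -s 6 -a tune=iq input.png out-iq.avif

# Stronger boost for still images: start at 125 or 150,
# and expect somewhat larger files the higher you go
avifenc -q 60 -s 6 -a tune=iq -a deltaq-strength=125 input.png out-iq-boost.avif
```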
|
|
|
|
paperboyo
|
|
juliobbv
By "boost", do you mean `--deltaq-strength`? If you can make sure most of your image assets aren't animations, good values to start with are 125 and 150. I'd prob not go beyond 150, as too strong a strength will inflate images more than the quality benefit is worth.
|
|
2025-07-31 12:35:08
|
[This](https://discord.com/channels/794206087879852103/805176455658733570/1391299676057112607) (only because you mentioned it; I can’t actually read code…).
|
|
|
juliobbv
|
|
paperboyo
[This](https://discord.com/channels/794206087879852103/805176455658733570/1391299676057112607) (only because you mentioned it; I can’t actually read code…).
|
|
2025-07-31 12:36:36
|
oh yeah, you don't need to modify code anymore
|
|
2025-07-31 12:37:04
|
I made configurable strength into a convenient parameter 🙂
|
|
2025-07-31 12:40:18
|
please note that you'll need to increase QP to compensate for the increased file sizes if you go beyond 100
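[To make the QP/quality relationship concrete: libavif maps its 0–100 quality scale onto aom's 0–63 quantizer roughly linearly, so "increasing QP" means lowering the quality value slightly. The exact rounding below is an assumption about libavif's mapping, not a guarantee – check your libavif version:]

```python
def quality_to_qp(quality: int) -> int:
    """Assumed libavif-style linear mapping:
    quality 100 -> QP 0, quality 0 -> QP 63."""
    return ((100 - quality) * 63 + 50) // 100

# Compensating for deltaq-strength > 100 by nudging quality down,
# which raises the base QP and pulls file sizes back down:
print(quality_to_qp(60))  # baseline quality
print(quality_to_qp(55))  # compensated: higher QP, smaller files
```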
|
|
|
BlueSwordM
|
|
paperboyo
I don’t have the breakdown per format, but we serve AVIF to anything that supports it apart from those who understand JPEG XL (in practice – Safaris). After that it’s WebP and mozJPEG.
As to the second question: I don’t know, but I guess it cannot be harder now ;-).
|
|
2025-07-31 07:28:42
|
Can you perhaps find a way to enable jpegli to replace mozjpeg for lossy coding?
|
|
|
|
paperboyo
|
|
BlueSwordM
Can you perhaps find a way to enable jpegli to replace mozjpeg for lossy coding?
|
|
2025-08-01 11:40:48
|
I don’t think the engineering effort would be worth it (even more so, coz not mine 😉), as the great majority of readers never get the JPEG file format at all (I don’t know the percentage, but I expect it to be minuscule). We go JPEG XL > AVIF > WebP > JPEG/PNG.
This is, of course, not a comment on jpegli vs mozJPEG, just a pragmatic take on effort vs. gain.
|
|