|
Jyrki Alakuijala
|
2026-03-11 03:28:55
|
the 8x8 loads ~6x faster than the competing formats (WebP/usual AVIF) and is practically always available from the file
|
|
|
_wb_
|
|
Jyrki Alakuijala
do we turn the 16/32 off when there is normal internet speed so they don't accidentally render
|
|
2026-03-11 03:29:42
|
The idea is to only flush pixels and do a render when data is arriving slowly; if it arrives quickly there's only a single render pass.
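That flush heuristic could be sketched roughly like this (all names and thresholds are hypothetical illustrations, not jxl-rs code): render an intermediate pass only when enough new data has buffered *and* the stream has stalled, so a fast connection collapses to a single render.

```python
def should_flush(bytes_buffered: int, stall_ms: float,
                 min_pass_bytes: int = 4096,
                 stall_threshold_ms: float = 150.0) -> bool:
    """Flush an intermediate render only when the network is the bottleneck:
    enough new data arrived for a meaningful pass, but the stream has stalled
    long enough that waiting for the full image would feel slow."""
    return bytes_buffered >= min_pass_bytes and stall_ms >= stall_threshold_ms

# Fast connection: data keeps arriving, no stall -> keep buffering, render once.
print(should_flush(bytes_buffered=100_000, stall_ms=5))    # False
# Slow connection: some data trickled in, then stalled -> show what we have.
print(should_flush(bytes_buffered=8_192, stall_ms=400))    # True
```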
|
|
2026-03-11 03:30:51
|
(possibly it would make sense to do the 1:8 pass always -- even when loading from local cache -- just to offset decode speed itself, which does add some delay of course)
|
|
|
The last thing I want is JXL's progressive loading to be inaccessible to those who need it. Be it via CSS, browser flag, launch option or otherwise, I think we shouldn't cut out a feature from the people who need it most.
Not to mention, the Internet isn't just blog posts and ads. Photography sites, art boorus, and scientific CDNs all handle very large files that could even be lossless. If you're waiting for the file you just clicked on to load, seeing it do so is preferable to nothing for the first 50%.
|
|
2026-03-11 03:32:31
|
The usual pattern for such things on the web is that there is some default, the author can override that default, and then the end-user can override the author. This is how it works for nearly everything, from font sizes to color schemes etc.
|
|
|
jonnyawsom3
|
2026-03-11 03:33:17
|
Discord uses a blurhash, then WebP preview before the original file. The WebP is even lossless for PNG images, so replacing the WebP with a JXL and using a partial LF for the blurhash would cut down on both cost and complexity, for example
|
|
|
Jyrki Alakuijala
|
2026-03-11 03:33:21
|
I don't like LQIP/blurhashes, I think they are more or less useless, and invented to make AVIF/WebP more competitive against progressive JPEG1 in technical analysis
|
|
|
jonnyawsom3
|
2026-03-11 03:34:29
|
Agreed, but like JPEG gainmaps, they exist and now we have to deal with and supersede them...
|
|
|
Jyrki Alakuijala
|
2026-03-11 03:36:17
|
I don't think we need to sabotage the user experience of JPEG XL just because something bad exists; we can fully own the user experience ourselves, at least initially, to guide it where we believe it should be
|
|
|
juliobbv
|
|
_wb_
I'm not sure, the internet gets faster but the stdev of speed also grows
|
|
2026-03-11 03:36:55
|
yeah, it's good to keep in mind that there's still the "using the internet in a crowded space" use case... internet can slow down to the equivalent of 3G speeds or even worse
|
|
|
AccessViolation_
|
|
_wb_
I do think it makes sense to have a sequence JPEG, JPEG 2000 (aka JP2), JPEG XR, JPEG XT, JPEG XL, which is the chronological sequence of general-purpose image coding systems created by the JPEG committee.
|
|
2026-03-11 03:37:59
|
I've implemented this as a 'follows' and 'followed by' chain, I'm pretty happy with it. Also had to rename JPEG-XT to JPEG XT
I also created a new entity '[general-purpose image coding system](https://www.wikidata.org/entity/Q138645418)' which they are now all an instance of. I'm gonna add some more formats to this, and probably also make an entity 'special-purpose image coding system' or something, so the distinction can be properly represented
|
|
|
Jyrki Alakuijala
|
2026-03-11 03:38:06
|
crowded spaces and (train) tunnels at rush hour etc. could be a reason for 16x16
|
|
|
_wb_
|
2026-03-11 03:39:45
|
The way I see it, 1:8 preview + possibly reordered final groups is a "good enough" progression that can effectively hide decode speed; this is something that would be useful even if everyone has a gigabit connection with 1ms ping.
More progression such as 1:16, 1:32, and additional HF passes is still useful though if you also want to mitigate transfer speed and poor connections. There are still going to be slow/unreliable connections and in such cases, having _something_ is always better than having nothing at all.
|
|
|
juliobbv
|
|
Jyrki Alakuijala
the 8x8 loads ~6x faster than the competing formats (WebP/usual AVIF) and is practically always available from the file
|
|
2026-03-11 03:39:47
|
side note: AVIF (the standard) does support loading images progressively (in the video codec way: 1 intra frame, plus up to 3 extra frames that can be intra or inter-coded), but AFAIK only Chrome actually supports the progressive loading mechanism
|
|
|
_wb_
|
2026-03-11 03:41:07
|
I guess even WebP theoretically supports that kind of progression, if you make a non-looping animation and the first frame is low quality.
|
|
|
Jyrki Alakuijala
|
|
juliobbv
side note: AVIF (the standard) does support loading images progressively (in the video codec way: 1 intra frame, plus up to 3 extra frames that can be intra or inter-coded), but AFAIK only Chrome actually supports the progressive loading mechanism
|
|
2026-03-11 03:41:14
|
this is more or less theoretical since the author of the image must make the decision and it comes with substantial cost (like 10% more bytes) if they want anything better than a blurhash
|
|
|
_wb_
I guess even WebP theoretically supports that kind of progression, if you make a non-looping animation and the first frame is low quality.
|
|
2026-03-11 03:42:04
|
it is a _huge_ bandwidth loss, the animation in WebP is very inefficiently designed
|
|
|
juliobbv
|
|
Jyrki Alakuijala
crowded spaces and (train) tunnels at rush hour etc. could be a reason for 16x16
|
|
2026-03-11 03:42:18
|
ideally the first preview pass should get you a recognizable representation of what the object being depicted is, so the actual ratio should depend on e.g. image size
|
|
|
_wb_
|
2026-03-11 03:44:08
|
anyway unless this becomes the default behavior of avif encoders, it's going to be pretty rare to have progressive avif files, maybe similar to adam7 png files
|
|
|
Jyrki Alakuijala
|
|
juliobbv
ideally the first preview pass should get you a recognizable representation of what the object being depicted is, so the actual ratio should depend on e.g. image size
|
|
2026-03-11 03:44:20
|
images on the internet from major sites have fixed sizes (like Facebook at 2M pixels max) and Amazon etc. choose reasonable sizes, too -- devices have pixel densities that relate to the angular distance from the eye (more density for mobile than a monitor, and more density for a monitor than a TV, etc.)
|
|
|
juliobbv
|
|
_wb_
anyway unless this becomes the default behavior of avif encoders, it's going to be pretty rare to have progressive avif files, maybe similar to adam7 png files
|
|
2026-03-11 03:45:41
|
agreed, defaults trump everything else (a big learning from making tune IQ the default in libavif)
|
|
|
Jyrki Alakuijala
|
|
juliobbv
ideally the first preview pass should get you a recognizable representation of what the object being depicted is, so the actual ratio should depend on e.g. image size
|
|
2026-03-11 03:45:43
|
I disagree with the concept that the first preview pass is to bring a recognizable representation -- I consider that human cognition/cognitive stress is the most expensive part here, and there shouldn't be anything in the first rendering that consumes attention
|
|
2026-03-11 03:46:57
|
the first preview of course ideally is recognizable, but that is not the highest priority -- the highest priority is that when it switches to the final image, no motion perception mechanisms are triggered
|
|
|
jonnyawsom3
|
|
juliobbv
yeah, it's good to keep in mind that there's still the "using the internet in a crowded space" use case... internet can slow down to the equivalent of 3G speeds or even worse
|
|
2026-03-11 03:47:17
|
Ironically, I set my phone to 3G in London, but there's so much less congestion that I get double the speed of 4G
|
|
|
juliobbv
|
|
Jyrki Alakuijala
I disagree with the concept that the first preview pass is to bring a recognizable representation -- I consider that human cognition/cognitive stress is the most expensive part here, and there shouldn't be anything in the first rendering that consumes attention
|
|
2026-03-11 03:47:51
|
oh, I was thinking of the "QR code menu" scenario: if you want to see a picture of a dish, you want to at least see that the plate has a main course and what the two sides are
|
|
|
Jyrki Alakuijala
|
|
Ironically, I set my phone to 3G in London, but there's so much less congestion that I get double the speed of 4G
|
|
2026-03-11 03:47:53
|
3G can be fast enough for internet use
|
|
|
juliobbv
|
|
Ironically, I set my phone to 3G in London, but there's so much less congestion that I get double the speed of 4G
|
|
2026-03-11 03:49:20
|
like Alanis Morissette said, isn't it ironic?
|
|
|
Jyrki Alakuijala
|
2026-03-11 03:49:57
|
"""3G network bandwidth typically provides average download speeds of 3β6 Mbps, with theoretical peaks reaching up to 42 Mbps via HSPA+ technology"""
|
|
2026-03-11 03:50:34
|
if the average website is 3 MB, then it loads fully in 5 seconds on 3G, if progression helps 6x, then you see the images in ~800 ms
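The arithmetic behind that estimate, made explicit (the 3 MB page size and 6x factor are the figures from the message; 5 Mbit/s is an assumed midpoint of the quoted 3-6 Mbps range):

```python
page_bytes = 3e6                 # ~3 MB average website
link_bps = 5e6                   # ~5 Mbit/s, midpoint of the 3-6 Mbps 3G figure
full_load_s = page_bytes * 8 / link_bps    # ~4.8 s to load everything
first_render_s = full_load_s / 6           # ~0.8 s if progression helps 6x
print(full_load_s, first_render_s)
```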
|
|
|
juliobbv
|
|
Jyrki Alakuijala
this is more or less theoretical since the author of the image must make the decision and it comes with substantial cost (like 10 % more bytes) if they want anything better than a blur hash
|
|
2026-03-11 03:51:36
|
btw, AVIF supports lower-overhead progressive through reference scaling of the preview, so overhead can be as little as 1-2%
|
|
2026-03-11 03:52:32
|
it's just that there's no pre-packaged solution for encoding images in that way... yet
|
|
|
Jyrki Alakuijala
|
|
juliobbv
btw, AVIF supports lower-overhead progressive through reference scaling of the preview, so overhead can be as little as 1-2%
|
|
2026-03-11 03:53:57
|
I don't know about reference scaling -- my experience was that independent progressive frames don't really work at high quality, they work great at low quality though (my experience about this was from 2011-2014 or so, so a bit fuzzy), IIRC it was a 7-15% loss
|
|
|
jonnyawsom3
|
2026-03-11 03:55:06
|
Again I could be wrong, but I'm fairly sure the browser already does proper decoding intervals based on speed. If we allow all progressive stages, all it should do is let slow connections load it when they otherwise would have nothing
I also agree with Jon, 1:8 and progressive final groups is nice for local loading too. The cost of rendering is within error compared to the decoding, but makes a huge difference when scrolling through an album to find the right image
|
|
|
_wb_
|
|
Jyrki Alakuijala
I disagree with the concept that the first preview pass is to bring a recognizable representation -- I consider that human cognition/cognitive stress is the most expensive part here, and there shouldn't be anything in the first rendering that consumes attention
|
|
2026-03-11 03:57:47
|
What causes cognitive stress is quite different depending on connection speed. Having 5 passes rendered in 100ms can be distracting and cause some cognitive stress compared to having just 2 passes (with the first one after 25ms), while having those same 5 passes rendered in 10 seconds (with the first one after 300ms) can be less frustrating/stressful than having only a first preview after 2500ms.
So I think the current planned mechanism of flushing a render only when the network is the bottleneck is exactly how it should be to reduce cognitive stress. The effective number of passes then depends on connection speed (and image size), which I think is how it should be.
|
|
|
juliobbv
|
|
Jyrki Alakuijala
I don't know about reference scaling -- my experience was that independent progressive frames don't really work at high quality, they work great at low quality though (my experience about this was from 2011-2014 or so, so a bit fuzzy), IIRC it was a 7-15 % loss
|
|
2026-03-11 03:57:51
|
yeah, at some point the better strategy is to essentially re-encode the frame again (i.e. using intra mode blocks), so loss becomes effectively the size of the preview, plus the size of the final pass
|
|
|
_wb_
What causes cognitive stress is quite different depending on connection speed. Having 5 passes rendered in 100ms can be distracting and cause some cognitive stress compared to having just 2 passes (with the first one after 25ms), while having those same 5 passes rendered in 10 seconds (with the first one after 300ms) can be less frustrating/stressful than having only a first preview after 2500ms.
So I think the current planned mechanism of flushing a render only when the network is the bottleneck is exactly how it should be to reduce cognitive stress. The effective number of passes then depends on connection speed (and image size), which I think is how it should be.
|
|
2026-03-11 03:58:11
|
I agree, 5 passes is overkill IMO
|
|
2026-03-11 03:58:47
|
this is my opinionated take: the second-to-last pass should be recognizably different from the final pass
|
|
|
_wb_
|
2026-03-11 03:59:27
|
I also have that preference, it's annoying if you don't know if the image is done loading or not.
|
|
|
Jyrki Alakuijala
|
|
juliobbv
this is my opinionated take: the second-to-last pass should be recognizably different from the final pass
|
|
2026-03-11 03:59:29
|
my take is the opposite, the first pass should be very difficult to tell apart from the final pass, i.e., no flickering
|
|
|
juliobbv
|
2026-03-11 04:00:13
|
otherwise it'd cause me cognitive stress not knowing when the image has fully loaded -- browsers have done away with visual cues that page resources have fully loaded
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:00:39
|
if you know that it is going to look the same, then it stops concerning you -- no stress
|
|
2026-03-11 04:00:50
|
if you understand and trust the guarantees
|
|
2026-03-11 04:01:13
|
but those guarantees don't exist if we have 32x32 and 16x16
|
|
2026-03-11 04:01:37
|
a properly interpolated 8x8 at 4k or 8k resolution makes a pretty good photo
|
|
|
_wb_
|
2026-03-11 04:02:00
|
No flickering as in "no flash of green martian", I agree with that. No flickering as "the 1:8 image looks as sharp as the final one", I disagree with that because that can only be done if you're serving images with massive browser-side downscaling, which is not a good practice.
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:02:35
|
usually people render 1:1 as far as I observe websites
|
|
|
jonnyawsom3
|
2026-03-11 04:03:28
|
A while ago I was thinking of changing the progressive AC mode. The current one adds 1 extra pass, but looks extremely similar to the final pass. The alternative adds 2 passes, but they're noticeably different to the final pass
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:04:33
|
if you want to make it observable, you can paint some oblique stripes over the DC image that are overpainted by the AC tiles
|
|
|
juliobbv
|
|
Jyrki Alakuijala
my take is the opposite, the first pass should be very difficult to tell apart from the final pass, i.e., no flickering
|
|
2026-03-11 04:04:52
|
I mean, if it's very difficult to tell, then it does bring up the question: why would you bother encoding the final pass anyway?
|
|
|
Jyrki Alakuijala
|
|
juliobbv
I mean, if it's very difficult to tell, then it does bring up the question: why would you bother encoding the final pass anyway?
|
|
2026-03-11 04:05:27
|
the idea is that it is not observable within the first 300 ms that you are observing it
|
|
2026-03-11 04:05:48
|
when your eyes adapt to the brightness then it becomes clear that there is more detail
|
|
2026-03-11 04:06:28
|
gamma compression in the eye is a dynamic process that shifts the sensitivity point where it is more useful
|
|
2026-03-11 04:07:24
|
video compression uses this a lot -- you first get the general shape of the room and the next frame brings the detailed texture of the tapestry
|
|
|
juliobbv
|
|
Jyrki Alakuijala
the idea is that it is not observable within the first 300 ms that you are observing it
|
|
2026-03-11 04:07:54
|
yeah, if you can uphold that initial pass latency guarantee, then that makes sense
|
|
|
jonnyawsom3
|
2026-03-11 04:08:01
|
Another point that hasn't been brought up. By default images only have the 1:8 LF anyway. Only if you use -p do you get 1:16, 1:32, etc. and progressive lossless with squeeze. So you'll only see the early passes if you specifically want/need them
|
|
|
AccessViolation_
|
|
juliobbv
this is my opinionated take: the second-to-last pass should be recognizably different from the final pass
|
|
2026-03-11 04:08:23
|
I've thought about this before and I think my preferred approach is showing a spinny icon or circle that fills in the top right corner on hover, and disappears when the file is fully loaded (as a browser feature, not on the jxl-rs level)
Or maybe turn the cursor into a spinny icon on hover, so you're not blocking any part of the image, though that doesn't solve for mobile browsers
|
|
|
Jyrki Alakuijala
|
|
Another point that hasn't been brought up. By default images only have the 1:8 LF anyway. Only if you use -p do you get 1:16, 1:32, etc. and progressive lossless with squeeze. So you'll only see the early passes if you specifically want/need them
|
|
2026-03-11 04:08:50
|
no, the viewer is not deciding -- but the author of the image -- ideally the viewer would be in charge of viewing stress
|
|
|
jonnyawsom3
|
2026-03-11 04:10:11
|
Isn't playing a video even more of a hazard compared to an image loading in 2 or 3 passes over the course of a few seconds?
|
|
|
Exorcist
|
|
juliobbv
I mean, if it's very difficult to tell, then it does bring up the question: why would you bother encoding the final pass anyway?
|
|
2026-03-11 04:10:12
|
In an ideal web, every image is progressive, no `srcset`, no blurhash
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:10:24
|
there can be many benefits of the dc progression, particularly for ease of thumbnailing, that don't really relate to web viewing
|
|
|
juliobbv
|
|
AccessViolation_
I've thought about this before and I think my preferred approach is showing a spinny icon or circle that fills in the top right corner on hover, and disappears when the file is fully loaded (as a browser feature, not on the jxl-rs level)
Or maybe turn the cursor into a spinny icon on hover, so you're not blocking any part of the image, though that doesn't solve for mobile browsers
|
|
2026-03-11 04:10:26
|
yeah, I'd welcome some kind of UX feedback like this -- I miss old web browsers in this particular way
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:11:32
|
it is a good idea to take a look at the previews on Moritz's blog post rather than just think about it -- https://opensource.googleblog.com/2021/09/using-saliency-in-progressive-jpeg-xl-images.html this one
|
|
|
lonjil
|
2026-03-11 04:12:11
|
The real purpose of blurhash is that it's embedded in the HTML, so it adds zero additional latency, unlike loading an image over the network after getting the src url.
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:12:26
|
watch the videos of b, c and d in full screen
|
|
|
_wb_
|
2026-03-11 04:13:37
|
I think we agree on this but we're expressing the idea in ways that seem contradictory. I think what Jyrki means is that there should not be a jarring transition from preview to final image, like the "green martian" preview you could get in JPEG before that was fixed, or as a less extreme case, an obviously blocky preview.
And I think what Julio and I are saying is that with final passes like in a typical 11-pass progressive libjpeg, where the last passes carry only the least significant bits of the highest-frequency chroma or something like that, the image can look "final" already after 7 passes but keeps refining; without any other UI indication of loading, this can be frustrating because you don't know when to stop waiting for the image to load, say if it's an image you want to take a close look at.
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:14:00
|
those videos already simulate a relatively low speed connection -- in reality we rarely see jpegs coming like that, so likely the transitions are going to be 2-3x or more faster
|
|
|
lonjil
|
|
lonjil
The real purpose of blurhash is that it's embedded in the HTML, so it adds zero additional latency, unlike loading an image over the network after getting the src url.
|
|
2026-03-11 04:14:07
|
It would've been nifty if HTTP/2's resource pushing could've been utilized to unconditionally send the first 200 bytes of each image that was encoded to have a useful preview within that size.
|
|
|
Jyrki Alakuijala
|
|
lonjil
It would've be nifty if HTTP/2's resource pushing could've been utilized to unconditionally send the first 200 bytes of each image that was encoded to have a useful preview within that size.
|
|
2026-03-11 04:15:55
|
you can do such things -- you can send the "preview" or DC bytes with each image first and then stream the rest of each image
|
|
2026-03-11 04:16:32
|
no one has done it yet as far as I know, but it would make the web feel quite a bit faster
|
|
|
juliobbv
|
2026-03-11 04:16:47
|
I think <@794205442175402004> made a good point about the stddev of internet connection speeds increasing... I've experienced situations where phone speeds can be so so slow, you can still see the blurhashes for a split second
|
|
|
jonnyawsom3
|
2026-03-11 04:17:12
|
On the jxl-rs repo, I suggested using partial content requests to get the LF of every image before downloading the rest
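A hedged sketch of what such a partial request could look like (the helper name and the 2048-byte budget are illustrative guesses, not jxl-rs constants; only the `Range` header itself is standard HTTP):

```python
LF_BUDGET = 2048  # assumed budget: codestream header plus the 1:8 LF data

def lf_request_headers(budget: int = LF_BUDGET) -> dict:
    # Standard HTTP range request: ask the server for only the first
    # `budget` bytes; a compliant server answers 206 Partial Content.
    return {"Range": f"bytes=0-{budget - 1}"}

print(lf_request_headers())   # {'Range': 'bytes=0-2047'}
```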
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:17:17
|
cloudflare supposedly has a web server that can do many such things, but there is no jpeg xl demo built on this tech
|
|
|
lonjil
|
|
Jyrki Alakuijala
you can do such things -- you can send the "preview" or DC bytes with each image first and then stream the rest of each image
|
|
2026-03-11 04:17:42
|
Since browsers don't support Server Push anymore, is there some other mechanism for this?
|
|
|
juliobbv
|
2026-03-11 04:18:16
|
so coming up with a "one size fits all" default solution for progressive, while satisfying as many practical use cases as possible, can be very challenging
|
|
|
jonnyawsom3
|
|
juliobbv
I think <@794205442175402004> made a good point about the stddev of internet connection speeds increasing... I've experienced situations where phone speeds can be so so slow, you can still see the blurhashes for a split second
|
|
2026-03-11 04:19:16
|
For a split second? Even at home I still see Discord hanging on a blurhash for a second or two, and usually around 5 seconds for the actual image to load
On my phone it's even worse, usually getting nothing at all until I open the image in the browser and watch it sequentially load
|
|
|
_wb_
|
|
Jyrki Alakuijala
Also, I'd like to buffer the first update to the time when the whole DC field is available, so that the first rendering of the photograph does not show some 2048x2048 areas (DC tiles) being refined first, and there would be no up-to-down or other refinements of these -- the first rendering is strictly when all of 8x8 has been received, and only then the DC is shown (as opposed to JPEG1 that would render from top to bottom usually during the streaming)
|
|
2026-03-11 04:19:47
|
I agree with this; note that when using progressive DC, the DC frame is basically progressive lossless so you do have full DC frames available also at 1:16 and earlier resolutions. In case of non-progressive DC, I agree that it's nicer to wait for all DC tiles to be available before showing something.
|
|
|
jonnyawsom3
|
2026-03-11 04:20:43
|
That's the current behaviour
|
|
|
juliobbv
|
|
For a split second? Even at home I still see Discord hanging on a blurhash for a second or two, and usually around 5 seconds for the actual image to load
Oh my phone it's even worse, usually getting nothing at all until I open the image in the browser and watch it sequentially load
|
|
2026-03-11 04:21:09
|
yeah, I'm wondering how much Discord CDN's latency might be to blame though (rather than purely connection speed)
|
|
|
Jyrki Alakuijala
|
|
lonjil
Since browsers don't support Server Push anymore, is there some other mechanism for this?
|
|
2026-03-11 04:22:49
|
you need to force the browser to render only the first pass by having your server send the first chunk of data, and then intentionally pause the stream, and send stuff from other images
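As a sketch, the serving order being described (the function name and the 200-byte preview size are illustrative, echoing the number mentioned earlier): emit every image's preview bytes before any image's remainder, so each picture gets a first render before any one finishes.

```python
def interleaved_chunks(images, preview_len=200):
    """Yield (name, chunk) pairs: each image's preview first, then the rests."""
    for name, data in images:
        yield name, data[:preview_len]   # enough for a first render
    for name, data in images:
        yield name, data[preview_len:]   # the remainder of each stream

imgs = [("a.jxl", b"A" * 300), ("b.jxl", b"B" * 250)]
order = [name for name, _ in interleaved_chunks(imgs)]
print(order)   # ['a.jxl', 'b.jxl', 'a.jxl', 'b.jxl']
```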
|
|
|
jonnyawsom3
|
2026-03-11 04:23:50
|
From the sounds of it, I think changing to the alternative progressive AC mode would help Julio's and Jon's final pass problem, and I *think* browsers should already avoid excessive progressive loading of the LF based on network conditions, but if it turns out to be a problem we could always revise it
|
|
|
lonjil
|
|
Jyrki Alakuijala
you need force the browser to render only the first pass by having your server send the first chunk of data, and then intentionally pause the stream, and send stuff from other images
|
|
2026-03-11 04:24:21
|
Ah, you mean like that. Yeah I hope we can implement that in some servers :)
I'm looking for the server to be able to bundle the preview bytes in the initial HTML response.
|
|
|
Jyrki Alakuijala
|
|
From the sounds of it, I think changing to the alternative progressive AC mode would help Julio's and Jon's final pass problem, and I *think* browsers should already avoid excessive progressive loading of the LF based on network conditions, but if it turns out to be a problem we could always revise it
|
|
2026-03-11 04:25:24
|
if you want that, you just spoil the image by adding a gentle texture (such as oblique stripes) that will go away when the final image comes, no reason to make it more complicated -- but I doubt that anyone/many will keep that feature on in practice
|
|
|
lonjil
Ah, you mean like that. Yeah I hope we can implement that in some servers :)
I'm looking for the server to be able to bundle the preview bytes in the initial HTML response.
|
|
2026-03-11 04:26:25
|
the experts told me already around 2018 that there is no reason to bundle, that the "multiplexed" streaming will deal with it equally well when done properly -- I haven't seen it done however
|
|
|
lonjil
|
2026-03-11 04:27:21
|
That's the server push feature. They never got it to work well and removed it from all browsers.
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:27:37
|
gee
|
|
2026-03-11 04:28:53
|
if javascript knows how much to load from each image, that becomes another way of doing things, but requires html/JS to know those ranges
|
|
|
lonjil
|
2026-03-11 04:29:54
|
It was a bit overcomplicated. Didn't just want to send a few preview bytes unconditionally, they tried using complex logic to decide whether the whole file should be sent, which meant they needed to know if the file was already cached, etc.
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:30:30
|
and if you watch Moritz's progression videos on the blog post -- would it be a good idea to show a blurred image for 50 ms before the image appears, i.e., would more flickering make them better or worse?
|
|
|
lonjil
|
2026-03-11 04:31:05
|
I will watch it when I get home.
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:31:40
|
(I like them perfect as they are, and more flickering will be annoying for me)
|
|
|
juliobbv
|
2026-03-11 04:36:29
|
maybe browsers should introduce the concept of "struggle bus" mode
|
|
2026-03-11 04:37:03
|
and adjust how progressive images are received and displayed, knowing that the connection isn't the best
|
|
|
jonnyawsom3
|
|
On the jxl-rs repo, I suggested using partial content requests to get the LF of every image before downloading the rest
|
|
2026-03-11 04:38:48
|
Another idea behind that was a data saving mode, only loading the LF or the first HF pass, etc.
|
|
|
juliobbv
|
2026-03-11 04:39:06
|
in normal mode, the browser will try its best to avoid the 50ms flicker case
|
|
|
jonnyawsom3
|
2026-03-11 04:39:27
|
Again, don't browsers already throttle image rendering based on speed? So they would already be doing that
|
|
|
juliobbv
|
|
Again, don't browsers already throttle image rendering based on speed? So they would already be doing that
|
|
2026-03-11 04:40:21
|
I don't think it should be determined by just speed, but also by latency and packet drop rates
|
|
|
_wb_
|
|
From the sounds of it, I think changing to the alternative progressive AC mode would help Julio's and Jon's final pass problem, and I *think* browsers should already avoid excessive progressive loading of the LF based on network conditions, but if it turns out to be a problem we could always revise it
|
|
2026-03-11 04:41:11
|
My personal preference is to have strong LF progression (1:32, 1:16, 1:8) followed by a single HF pass. When the network is fast enough, only 1:8 followed by 1:1 is great (hides decode time); when it is slow or the connection drops when not enough is available yet to show 1:8, then I only really care about getting _something_ rather than nothing, which is what LF progression helps with. So I care more about progressiveness in the first 20% of the bitstream than in the remaining 80%.
|
|
|
Jyrki Alakuijala
|
|
juliobbv
in normal mode, the browser will try its best to avoid the 50ms flicker case
|
|
2026-03-11 04:41:27
|
but it will flush at some more or less random stage, leading to sometimes 16x16, other times 8x8, occasionally 32x32 -- the user will get a "rich" experience of different resamplings
|
|
2026-03-11 04:41:54
|
+1 for single HF pass
|
|
|
jonnyawsom3
|
|
juliobbv
I don't think it should only be determined on just speed, but also on latency and packet drop rates
|
|
2026-03-11 04:42:19
|
Wouldn't those count as 'speed'? You can't render a pass if it's missing data, so it would be treated as a slower connection
|
|
|
Exorcist
|
|
Again, don't browsers already throttle image rendering based on speed? So they would already be doing that
|
|
2026-03-11 04:42:54
|
not based on speed
browsers limit re-renders to reduce CPU usage
This is also the reason they enforce a GIF min-delay
|
|
|
Jyrki Alakuijala
|
|
Wouldn't they count as 'speed'. You can't render a pass if it's missing data, so it would treat it as a slower connection
|
|
2026-03-11 04:42:56
|
it is speed vs. cognitive load -- I'd minimize cognitive load, given that the experience is going to be 6x faster than Safari has even with pure 8x8 LF
|
|
|
_wb_
|
|
Jyrki Alakuijala
and if you watch the Moritz's progression videos on the blog post, it would be a good idea to show a blurred image for 50 ms before the image appears, i.e., more flickering would make them better or worse?
|
|
2026-03-11 04:46:05
|
At this transfer speed (about 3 seconds to load the entire image, steady transfer), I agree that only showing 1:8 is a nice experience.
But say the network is shaky and you get 10% of data in 500ms, then 10 seconds of no connection at all, then another burst where you get the rest. In that case I would prefer to see a 1:16 preview after 500ms and then the final image 10 seconds later, rather than getting only a 1:8 preview after 10 seconds and the final image 500ms later.
|
|
|
juliobbv
|
|
Wouldn't they count as 'speed'. You can't render a pass if it's missing data, so it would treat it as a slower connection
|
|
2026-03-11 04:46:43
|
oh, I thought the JXL format allowed for rendering the luma of an n+1 pass if chroma data hasn't been received yet
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:46:47
|
Jon, could you formulate a recommendation on what kind of anticipated delays would be good for different resolutions
|
|
|
juliobbv
|
2026-03-11 04:47:07
|
I swear I saw one of Jon's demos do that in the past
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:47:19
|
"recommendations for a browser interface code for progression"
|
|
2026-03-11 04:47:51
|
there are no images that load in 60 seconds in Safari today, even in Kenya images load relatively quickly
|
|
2026-03-11 04:48:34
|
also, usual systems start to apply timeouts when things last very very long
|
|
|
jonnyawsom3
|
|
juliobbv
oh, I thought the JXL format allowed for rendering the luma of an n+1 pass if chroma data hasn't been received yet
|
|
2026-03-11 04:49:28
|
The format does, but jxl-rs doesn't AFAIK. You can see it in the Oxide WASM demo though
|
|
|
Jyrki Alakuijala
|
2026-03-11 04:50:10
|
how many seconds to wait, or what kind of eta-analysis to use before bothering the user with the 32x32 image flashing before the 16x16 arrives etc.
|
|
2026-03-11 04:50:43
|
I'm worried that we are otherwise going to end up with a system that has too much visual activity for the pleasure of engineers and demise of users
|
|
|
jonnyawsom3
|
2026-03-11 04:54:03
|
*If* progressive loading is being limited, then what about all steps for progressive lossless, and 1:16 for progressive DC?
Demos show that lossless looks good at pretty much any squeeze step, but the lossy LF can be quite jarring in the early passes https://discord.com/channels/794206087879852103/1464417869470371912/1480113667558211635
|
|
|
juliobbv
|
|
Jyrki Alakuijala
there are no images that load in 60 seconds in Safari today, even in Kenya images load relatively quickly
|
|
2026-03-11 04:54:53
|
IMO it's less about the baseline connection speed, and more about situations where you have about a thousand people around with devices sharing the same air channels, or where you're enclosed in a pseudo-Faraday cage (like the subway) without Wi-Fi or a picocell
|
|
|
The format does, but jxl-rs doesn't AFAIK. You can see it in the Oxide WASM demo though
|
|
2026-03-11 04:57:23
|
yeah, the WASM demo was probably what I was thinking of
|
|
|
jonnyawsom3
|
2026-03-11 04:57:39
|
There are times when the connection is crippled, the website poorly optimised or the content itself is a high resolution/quality image
|
|
2026-03-11 04:58:17
|
Just off the top of my head, No Man's Sky had a page full of 50MB 8K PNGs to demo the PS5 Pro update
|
|
2026-03-11 04:58:57
|
Last week the BBC had a 40MP photo of Trump as the headline of an article
|
|
2026-03-11 04:59:38
|
In an ideal world, we wouldn't need progressive loading at all, but when you need it, you tend to *really* need it
|
|
|
juliobbv
|
|
Jyrki Alakuijala
how many seconds to wait, or what kind of eta-analysis to use before bothering the user with the 32x32 image flashing before the 16x16 arrives etc.
|
|
2026-03-11 05:00:53
|
maybe some sliding window of important connection properties? (speed, latency, packet drop rate)
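One possible shape for that sliding window, sketched in Python (the names and the throughput-only window are assumptions; a real implementation would live in the browser's network stack and presumably also track latency and drop rate):

```py
from collections import deque

class ThroughputWindow:
    """Sliding-window estimate of recent transfer speed (illustrative only)."""

    def __init__(self, window_ms=1000):
        self.window_ms = window_ms
        self.samples = deque()  # (timestamp_ms, bytes_received)

    def on_data(self, now_ms, nbytes):
        """Record a received chunk and drop samples older than the window."""
        self.samples.append((now_ms, nbytes))
        while self.samples and self.samples[0][0] <= now_ms - self.window_ms:
            self.samples.popleft()

    def bytes_per_ms(self):
        """Average throughput over the window; this is what an ETA estimate
        for the remaining passes would be based on."""
        return sum(n for _, n in self.samples) / self.window_ms

w = ThroughputWindow(window_ms=1000)
w.on_data(0, 500)
w.on_data(900, 500)
print(w.bytes_per_ms())   # 1.0 over the last second
w.on_data(2000, 100)      # earlier samples fall out of the window
print(w.bytes_per_ms())   # 0.1
```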
|
|
|
Jyrki Alakuijala
|
|
*If* progressive loading is being limited, then what about all steps for progressive lossless, and 1:16 for progressive DC?
Demos show that lossless looks good at pretty much any squeeze step, but the lossy LF can be quite jarring in the early passes https://discord.com/channels/794206087879852103/1464417869470371912/1480113667558211635
|
|
2026-03-11 05:01:17
|
I don't have opinions about progressive lossless, I would be very happy to drop the 1:32 progressive DC
|
|
|
juliobbv
|
2026-03-11 05:01:19
|
but yes, addressing this issue isn't trivial
|
|
|
_wb_
|
2026-03-11 05:05:15
|
As a general heuristic, I'd use something like this:
whenever new data arrives, do this:
- full image data now available: render it (duh)
- not yet all data available: wait DELAY ms for more data to arrive and then call flush_pixels to render what is already available
Per image, initialize DELAY to some number, e.g. 0., and after each flush increment it by some constant, e.g. 20. This ensures that the partial rendering gets sparser over time so there's a limit to how much activity there can be. In good network conditions and for not-too-large images, this will usually boil down to just instant full image or 1:8 + maybe some 1:1 groups quickly followed by final image. In poor conditions or for huge images, you might get 1:32, 1:16, 1:8, some 1:1 groups, final image.
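A toy simulation of that heuristic in Python (hypothetical names, not the real jxl-rs/Chrome interface; it only models *when* a partial flush would fire given the arrival time of each data chunk):

```py
# Increasing-DELAY heuristic: after each partial flush, the wait before the
# next one grows by a constant, so rendering activity gets sparser over time.

def flush_schedule(chunk_arrival_ms, initial_delay_ms=0, increment_ms=20):
    """Return the times (ms) at which a partial flush would happen.

    The final full-image render (after the last chunk) is excluded."""
    flushes = []
    delay = initial_delay_ms
    for i, arrived in enumerate(chunk_arrival_ms[:-1]):
        # Wait DELAY ms after this chunk; if the next chunk hasn't arrived
        # by then, flush what is already available and grow the delay.
        if chunk_arrival_ms[i + 1] > arrived + delay:
            flushes.append(arrived + delay)
            delay += increment_ms
    return flushes

# Fast, steady transfer: at most an early 1:8-style flush, then final render.
print(flush_schedule([0, 5, 10, 15, 20]))        # [0]
# Choppy transfer: flushes fire around the gaps, spaced further apart.
print(flush_schedule([0, 500, 10500, 10600]))    # [0, 520, 10540]
```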
|
|
|
Jyrki Alakuijala
|
2026-03-11 05:09:36
|
perhaps it could be predictive based on an estimate of the image size and the bandwidth: do we win 5 seconds or more rendering 32x32 instead of waiting for 16x16, and does rendering 16x16 win more than 5 seconds than waiting for 8x8
|
|
2026-03-11 05:10:01
|
that way in slow conditions it would render immediately without a delay
|
|
2026-03-11 05:10:18
|
but also in normal conditions it would never happen
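A sketch of that predictive rule in Python (the 5-second threshold comes from the message above; the pass byte offsets and the uniform-bandwidth assumption are illustrative):

```py
# Hypothetical sketch: render a coarser pass only if doing so saves the user
# at least MIN_WIN_MS of waiting compared to the next-finer pass.

MIN_WIN_MS = 5000  # "do we win 5 seconds or more"

def passes_worth_rendering(pass_byte_offsets, bytes_received, bytes_per_ms):
    """pass_byte_offsets: cumulative bytes needed for e.g. 1:32, 1:16, 1:8, 1:1.

    Returns the byte offsets of the intermediate passes worth showing."""
    worth_showing = []
    for coarse, finer in zip(pass_byte_offsets, pass_byte_offsets[1:]):
        eta_coarse = max(0, coarse - bytes_received) / bytes_per_ms
        eta_finer = max(0, finer - bytes_received) / bytes_per_ms
        if eta_finer - eta_coarse >= MIN_WIN_MS:
            worth_showing.append(coarse)
    return worth_showing

offsets = [1000, 5000, 20000, 100000]
print(passes_worth_rendering(offsets, 0, 1.0))    # slow link: [5000, 20000]
print(passes_worth_rendering(offsets, 0, 100.0))  # fast link: []
```

On a slow link the intermediate passes render without delay; on a fast link none of them ever trigger.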
|
|
|
juliobbv
|
|
_wb_
As a general heuristic, I'd use something like this:
whenever new data arrives, do this:
- full image data now available: render it (duh)
- not yet all data available: wait DELAY ms for more data to arrive and then call flush_pixels to render what is already available
Per image, initialize DELAY to some number, e.g. 0., and after each flush increment it by some constant, e.g. 20. This ensures that the partial rendering gets sparser over time so there's a limit to how much activity there can be. In good network conditions and for not-too-large images, this will usually boil down to just instant full image or 1:8 + maybe some 1:1 groups quickly followed by final image. In poor conditions or for huge images, you might get 1:32, 1:16, 1:8, some 1:1 groups, final image.
|
|
2026-03-11 05:14:20
|
makes sense, but I do think DELAY has to be dynamically determined based on network conditions, because there's an interest in avoiding rendering intermediate results to save on aggregate decode compute time (whenever possible), while also preventing the intermediate "flicker" situation on fast connections -- two constraints that are at odds with each other, especially with static DELAYs
|
|
2026-03-11 05:14:35
|
what Jyrki said basically π
|
|
|
_wb_
|
2026-03-11 05:15:48
|
I don't know how good Chrome's internal heuristics are for estimating current network conditions to make such estimates. Also I don't know how exactly the interface between the network layer and the rendering process is working: does every received packet immediately get passed or is there some amount of buffering already?
|
|
|
juliobbv
|
2026-03-11 05:16:38
|
yeah, we'd need to ask the experts to see what browsers are currently capable of
|
|
2026-03-11 05:16:51
|
or what could be reasonably implemented to aid progressive decoding
|
|
|
_wb_
|
2026-03-11 05:19:03
|
I think the best approach is to just implement something reasonable, simulate what happens in various simulated network conditions (fast or slow; steady or choppy), and then adjust it if needed.
|
|
|
juliobbv
|
2026-03-11 05:21:04
|
I think so too
|
|
|
jonnyawsom3
|
|
juliobbv
makes sense, but I do think DELAY has to be dynamically determined based on network conditions, because there's an interest in avoiding rendering intermediate results to save on aggregate decode compute time (whenever possible), while also preventing the intermediate "flicker" situation on fast connections -- two constraints that are at odds with each other, especially with static DELAYs
|
|
2026-03-11 05:23:46
|
In the light testing I did, the cost of rendering every 1% vs non-progressive was within error of each other. Loading JPEG, PNG or (I think?) WebP all render the next row as it arrives, but JXL only has data to update per-group/squeeze step, so I think the cost should even-out
|
|
2026-03-11 05:24:40
|
And yeah. Get it working first, if it's too flickery/renders too often, it can be adjusted after
|
|
|
juliobbv
|
|
In the light testing I did, the cost of rendering every 1% vs non-progressive was within error of each other. Loading JPEG, PNG or (I think?) WebP all render the next row as it arrives, but JXL only has data to update per-group/squeeze step, so I think the cost should even-out
|
|
2026-03-11 05:27:57
|
interesting! does this rendering cost also include the copy-to-memory and display update part? I ask because IIRC, the Chrome team factors that stuff in too
|
|
2026-03-11 05:28:27
|
I don't think it's needed to think that far ahead at this stage though
|
|
|
jonnyawsom3
|
2026-03-11 05:31:55
|
Not sure, it wasn't exactly a scientific test, but seemed promising that it wasn't noticeably slower even when rendering constantly
|
|
|
juliobbv
|
2026-03-11 05:36:39
|
yeah, that's 100% fair... as long as things are promising that's enough
|
|
|
username
|
2026-03-11 07:57:53
|
speaking of progressive, two concerns I have ATM are that libjxl 0.11 is anti-progressive when it comes to encoding images (0.12 fixes most of this but since it isn't out yet it's not what a lot of JXLs are being encoded with.) and even with libjxl 0.12 non-progressive lossless images (especially large ones) have a pretty bad loading experience since you only get full groups and nothing else, this IMO is pretty bad since non-progressive lossless images are expected to be a common case even on the web since progressive squeezed lossless is infeasible for most use cases. The only solution I can think of for this is to use preview frames (which libjxl and jxl-rs don't currently support) but then the question becomes what is a reasonable resolution for a preview frame (taking into account the scale of how big the source image is)?
|
|
|
jonnyawsom3
|
2026-03-11 08:09:21
|
Could do a 1 step squeeze instead of all the way down to 8 pixels
|
|
|
username
|
|
Could do a 1 step squeeze instead of all the way down to 8 pixels
|
|
2026-03-11 08:11:13
|
would that work with chunked encoding, and how different would the file size be?
|
|
|
jonnyawsom3
|
2026-03-11 08:11:50
|
No clue, but should be a lot less overhead than full progressive
|
|
|
username
|
2026-03-11 08:13:54
|
main problems that currently prevent libjxl squeezed lossless from being the default or widely usable are the encoding overhead and file size increase
|
|
2026-03-11 08:15:16
|
I expect to see a lot of lossless images out in the wild and only having full groups get decoded for the majority of images is going to be a bit painful
|
|
|
jonnyawsom3
|
2026-03-11 08:21:55
|
Encoding overhead is mostly memory cost from having to buffer the full image for the squeeze; file size we should be able to decrease by a further 10-20%, but we need to fix RCTs, maybe do better palette, and make a new predictor set that has none as an option
|
|
|
|
veluca
|
|
No clue, but should be a lot less overhead than full progressive
|
|
2026-03-11 08:24:03
|
I'm not convinced by that
|
|
|
jonnyawsom3
|
2026-03-11 08:33:59
|
Right now progressive lossless still beats non-progressive PNG at least, so that's something...
|
|
|
username
|
2026-03-11 08:38:43
|
I was just wondering if there's anything with really low overhead that's workable as a default, which is why I brought up preview frames: AFAIK, with a smart encoder they could give more pleasant loading for large lossless images with negligible encoding and size overhead
|
|
|
drhead
|
2026-03-11 08:40:24
|
uhh, are there known issues with cjxl and encoding PNGs with an sBIT chunk? I'm getting results like this with both VIPS and ImageMagick where the resulting image mismatches the original. (top image here is correct).
|
|
2026-03-11 08:41:58
|
jxlinfo correctly reports images as 6-bit though (well, I do also have an image on hand with differing bit depths per channel, which is also recorded as 6-bit; that may or may not be correct, but I'd rather start with getting any non-standard bit depth to work before worrying about that)
|
|
|
Kleis Auke
|
2026-03-11 08:43:49
|
The only known issue I'm aware of has been fixed in libvips via commit <https://github.com/libvips/libvips/commit/a01a651bb54c2094b797babe5297c1163a26259b>, which will be included in the upcoming 8.18.1 release.
|
|
|
drhead
|
2026-03-11 08:45:21
|
well that'll probably fix it
|
|
|
New Player π©πͺ
|
2026-03-11 08:46:22
|
is jxl now really going into chrome?
|
|
|
username
|
|
New Player π©πͺ
is jxl now really going into chrome?
|
|
2026-03-11 08:47:33
|
do you have it installed? if so go to `chrome://flags/#enable-jxl-image-format` and you will see
|
|
|
HCrikki
|
2026-03-11 08:47:41
|
already went, and iinm they said it will eventually be enabled by default (not might or could)
|
|
|
jonnyawsom3
|
|
username
speaking of progressive, two concerns I have ATM are that libjxl 0.11 is anti-progressive when it comes to encoding images (0.12 fixes most of this but since it isn't out yet it's not what a lot of JXLs are being encoded with.) and even with libjxl 0.12 non-progressive lossless images (especially large ones) have a pretty bad loading experience since you only get full groups and nothing else, this IMO is pretty bad since non-progressive lossless images are expected to be a common case even on the web since progressive squeezed lossless is infeasible for most use cases. The only solution I can think of for this is to use preview frames (which libjxl and jxl-rs don't currently support) but then the question becomes what is a reasonable resolution for a preview frame (taking into account the scale of how big the source image is)?
|
|
2026-03-11 08:57:02
|
Also, we should be able to make encoding centre-first by default, so that would help
|
|
|
Orum
|
|
HCrikki
already went, and iinm they said it will eventually be enabled by default (not might or could)
|
|
2026-03-11 08:58:47
|
so what, Mozilla will follow suit when they finally enable it by default?
|
|
|
jonnyawsom3
|
2026-03-11 09:00:50
|
Probably
|
|
|
HCrikki
|
2026-03-11 09:05:55
|
they changed the experiment page today, adding support for Beta, Developer Edition and Stable from v149
|
|
|
Orum
|
2026-03-11 09:06:29
|
yeah but is it on by default?
|
|
|
HCrikki
|
2026-03-11 09:06:47
|
idk if that means support will just be *built* by default for those branches and disabled by default (user-activatable in about:config)
|
|
2026-03-11 09:08:28
|
wip page https://pr43400.review.mdn.allizom.net/en-US/docs/Mozilla/Firefox/Experimental_features
|
|
|
Orum
|
2026-03-11 09:09:25
|
isn't jxl-rs.... slow?
|
|
|
HCrikki
|
2026-03-11 09:09:57
|
as an aside, all image loading is slow in Firefox
|
|
|
Orum
|
2026-03-11 09:10:08
|
no excuse to make it slower
|
|
|
username
|
2026-03-11 09:10:20
|
jxl-rs doesn't have multithreading *yet*
|
|
2026-03-11 09:10:35
|
it's planned for after progressive decoding is done
|
|
|
Orum
|
2026-03-11 09:10:36
|
anyway it looks like it will just be built for everything but still turned off by default
|
|
|
username
|
|
Orum
anyway it looks like it will just be built for everything but still turned off by default
|
|
2026-03-11 09:11:47
|
this page still hasn't been updated though: https://bugzilla.mozilla.org/show_bug.cgi?id=2016688
|
|
|
Orum
|
2026-03-11 09:15:25
|
yeah, but again, that's just the build process, so even if it's there it's still off by default
|
|
|
|
veluca
|
|
username
it's planned for after progressive decoding is done
|
|
2026-03-11 09:20:07
|
(and animation)
|
|
|
jonnyawsom3
|
|
Orum
isn't jxl-rs.... slow?
|
|
2026-03-11 09:30:41
|
Compared to libjxl with 1 thread, it's within 10%, sometimes a little faster. Much better progressive loading too
|
|
|
Orum
|
2026-03-11 09:31:19
|
yeah but who the hell uses a single core CPU these days? π©
|
|
|
jonnyawsom3
|
2026-03-11 09:32:13
|
Local decoding aside, most websites aren't a single image :P
|
|
|
Orum
|
2026-03-11 09:33:08
|
sure but you're only receiving one at a time
|
|
|
HCrikki
|
2026-03-11 09:33:10
|
browsers and service providers normalized certain limits (single-threaded img decode, progressive due to preloading) and now it's a pain to get them reversed
|
|
|
jonnyawsom3
|
|
Orum
yeah, but again, that's just the build process, so even if it's there it's still off by default
|
|
2026-03-11 09:33:18
|
I mean, I'd be surprised if they went from behind a flag in nightly to enabled by default on stable in a single update...
|
|
|
HCrikki
|
2026-03-11 09:35:39
|
waterfox multithreaded libjxl for a huge boost, but the average image shouldn't need that - more images loading in parallel makes more sense
|
|
|
|
veluca
|
2026-03-11 09:42:19
|
not like MT in jxl-rs isn't in the plans π
|
|
|
jonnyawsom3
|
|
veluca
I'm not convinced by that
|
|
2026-03-11 10:09:41
|
Right now, because the heuristics implode, we've had to disable all predictors and palette and use fixed YCoCg. Only doing 1 squeeze step for half or maybe quarter res should keep enough image structure to still make predictors useful and (hopefully) stop RCT selection from breaking. The only way to know would be to test it though, which is easier said than done since only the default squeeze is implemented in libjxl
|
|
|
|
veluca
|
2026-03-11 10:11:54
|
my biggest worry is that squeeze makes the residual channels basically noise, and for a 2x squeeze those would be 3/4 of the pixels
|
|
|
Orum
|
2026-03-11 10:13:27
|
isn't that what a residual channel should be?
|
|
|
Demiurge
|
2026-03-12 12:18:46
|
Well for whatever reason, squeeze often does much better than DCT in libjxl
|
|
2026-03-12 12:19:14
|
Maybe even most of the time
|
|
2026-03-12 12:19:24
|
Smaller and better looking files
|
|
2026-03-12 12:21:53
|
So maybe it's not actually that inefficient after all. At least compared to the default libjxl DCT mode
|
|
|
monad
|
2026-03-12 12:36:05
|
wut
|
|
2026-03-12 12:36:17
|
that needs some qualifying context
|
|
|
whatsurname
|
|
HCrikki
wip page https://pr43400.review.mdn.allizom.net/en-US/docs/Mozilla/Firefox/Experimental_features
|
|
2026-03-12 01:43:16
|
That doesn't mean anything
Anyone can make a PR to MDN and it could be wrong
|
|
|
Demiurge
|
|
monad
that needs some qualifying context
|
|
2026-03-12 04:12:57
|
I mean try comparing some images with `cjxl -m 1` and you will see the artifacts are smaller and less disruptive and the file size is usually smaller at the same time
|
|
2026-03-12 04:13:40
|
So either something is wrong with libjxl's DCT or Squeeze is just surprisingly good
|
|
|
username
|
2026-03-12 04:22:28
|
you mean Modular?
|
|
2026-03-12 04:25:30
|
the m in `-m` means "Modular". Lossless compression (`-d 0`) uses Modular, and passing `-p` with lossless will enable/use Squeeze
|
|
|
jonnyawsom3
|
2026-03-12 04:27:44
|
Just `-m 1` uses lossy modular, which uses Squeeze as a discount DCT to separate image frequency
|
|
2026-03-12 04:29:45
|
In my experience it's pretty much always worse, apart from non-photo content where lossless already does pretty well. My branch mostly fixed the color issues, but it still loses more detail compared to VarDCT
|
|
|
Demiurge
|
2026-03-12 06:14:56
|
I'm obviously talking about lossy compression if I'm talking about artifacts and distortion
|
|
2026-03-12 06:15:34
|
Or comparing it to DCT, which in jxl is only lossy
|
|
|
In my experience it's pretty much always worse, apart from non-photo content where lossless already does pretty well. My branch mostly fixed the color issues, but it still loses more detail compared to VarDCT
|
|
2026-03-12 06:17:06
|
I have mostly noticed it in illustrations where the difference is obvious. But also in cases where subtle differences in shading get blurred away in DCT but not with squeeze.
|
|
2026-03-12 06:18:09
|
If you have done some comparisons recently or you have them on hand, I would be curious what sort of photographic content DCT does better than Squeeze
|
|
2026-03-12 06:19:03
|
Because I thought Squeeze always does better, by a large margin
|
|
2026-03-12 06:20:37
|
The difference in quality is large enough that it's hard for me to mentally picture it being the reverse
|
|
|
jonnyawsom3
|
2026-03-12 06:23:38
|
Maybe post an example
|
|
2026-03-12 06:24:40
|
The other day I posted a case where modular turned green into cyan
|
|
|
_wb_
|
2026-03-12 08:58:58
|
For illustrations, lossy squeeze has the nice property that it somewhat avoids ringing around hard edges (the monotonicity condition in the tendency term avoids overshoot).
For natural photos, at similar bitrates, DCT works better than squeeze to preserve texture.
|
|
|
Demiurge
|
2026-03-12 12:09:26
|
I will have to do some testing, hopefully tomorrow
|
|
|
VcSaJen
|
2026-03-12 03:30:21
|
IMHO: "Unnoticeable progressive" is a bad UX. It just makes the image look blurry while not clearly conveying that the image is still loading, especially if it's displayed at 75% scale or something. It would make sense to jump from 1:4 straight to 1:1. Otherwise web devs will be forced to add spinners.
But I understand the desire to not skip any steps.
In the future I think it will be controllable via CSS, just like how image interpolation methods are now controllable via CSS.
|
|
|
username
|
2026-03-12 03:33:54
|
IMO the bad UX feels kinda like the fault of the browser. like why do browsers themselves not give any indication as to whether an image is done downloading besides the global progress indicator for the site as a whole?
|
|
|
jonnyawsom3
|
2026-03-12 03:35:36
|
The alternative AC I keep mentioning has a few bonuses too: it decodes slightly faster, it has a pass sooner after the LF, and it gives a clearer indication of when the image is fully loaded. I should do more testing now that jxl-rs has it all working
|
|
|
spider-mario
|
|
username
IMO the bad UX feels kinda like the fault of the browser. like why do browsers themselves not give any indication as to whether an image is done downloading besides the global progress indicator for the site as a whole?
|
|
2026-03-12 04:45:32
|
do we even still have that? these days, I have the impression that the only way I can tell if itβs even loading is by checking whether I have a βreloadβ or a βstopβ button in the toolbar
|
|
2026-03-12 04:45:53
|
I miss having a progress indicator
|
|
|
Exorcist
|
2026-03-12 04:53:35
|
In the old Firefox, there was an XUL add-on that could block GIF autoplay and add control buttons for GIFs<:SadOrange:806131742636507177>
|
|
|
jonnyawsom3
|
2026-03-12 05:18:00
|
It would be nice if the network inspector didn't only log new requests, but showed currently active connections too
|
|
|
DZgas Π
|
|
DZgas Π
```py
27745 bytes cjxl_0.9 -d 0 -e 10 (--allow_expert_options speed 0.001x)
27853 bytes cjxl_0.8 -d 0 -e 10 (--allow_expert_options speed 0.001x)
30176 bytes cjxl_0.7 -d 0 -e 9
30318 bytes cjxl_0.8 -d 0 -e 9
30373 bytes cjxl_0.6 -d 0 -e 9
30819 bytes cjxl_0.11 -d 0 -e 10
31094 bytes cjxl_0.10 -d 0 -e 9
31094 bytes cjxl_0.9 -d 0 -e 9
31181 bytes cjxl_0.10 -d 0 -e 10
37181 bytes cjxl_0.6 -d 0 -e 8
37249 bytes cjxl_0.7 -d 0 -e 8
37391 bytes cjxl_0.8 -d 0 -e 8
37679 bytes cjxl_0.11 -d 0 -e 9
38832 bytes cjxl_0.10 -d 0 -e 8
38832 bytes cjxl_0.9 -d 0 -e 8
39248 bytes cjxl_0.11 -d 0 -e 8
```
|
|
2026-03-12 06:13:55
|
nuh uh
|
|
2026-03-12 06:32:23
|
It's even sadder that I can't just recommend the super long 0.9 -e 10 as a placebo setting; it took me 20 minutes to compress one picture, which makes it even more unpredictable.
|
|
|
monad
|
2026-03-12 09:40:08
|
current e11 should be faster and stronger than old e10, if you really need it
|
|
|
Orum
|
2026-03-12 09:40:56
|
is it still single threaded?
|
|
|
monad
|
2026-03-12 09:43:37
|
old e10 was massively multi-threaded (over 400 technically possible). current e11 does 2 heavy threads, then up to ~20, then one for the final encode.
|
|
2026-03-12 09:45:57
|
current e10 (old e9) is still mostly single threaded
|
|
|
Orum
|
2026-03-12 09:47:29
|
by "old" you mean v0.11.x?
|
|
|
monad
|
2026-03-12 09:49:24
|
nope, I'm referring to 0.9 like DZgas displayed
|
|
|
Orum
|
2026-03-12 09:50:11
|
aahhh okay, that makes more sense now
|
|
2026-03-12 09:51:03
|
I was thinking by current you meant head/0.12 branch
|
|
|
Demiurge
|
2026-03-13 02:08:16
|
There is only XUL
|
|
|
A homosapien
|
|
Demiurge
Vibe-revert πΉ
|
|
2026-03-13 04:18:16
|
https://github.com/Galaxy4594/libjxl/tree/0.8-retry
|
|
2026-03-13 04:19:57
|
It compiles and seems to work well, 20% faster with better looking images
|
|
|
monad
|
2026-03-13 04:32:18
|
doubt
|
|
|
A homosapien
|
2026-03-13 04:43:44
|
yeah me too, the speed up is tangible though
|
|
2026-03-13 04:46:06
|
I'm going to run it through my gauntlet of problematic images that show the worst regressions and see if anything changes for the better
|
|
|
monad
|
2026-03-13 04:46:23
|
uh ...
```
.../libjxl/lib/jxl/enc_ac_strategy.cc:488:18: error: declaration shadows a local variable [-Werror,-Wshadow]
488 | const auto q = Abs(rval);
| ^
.../libjxl/lib/jxl/enc_ac_strategy.cc:463:14: note: previous declaration is here
463 | const auto q = Set(df, quant_norm8);
| ^
```
|
|
2026-03-13 04:48:43
|
Do you compile by pretending the first one doesn't exist?
|
|
2026-03-13 04:50:55
|
By pretending the second one doesn't exist?
|
|
2026-03-13 04:51:31
|
Basically, what compiler lets you do this.
|
|
|
A homosapien
|
2026-03-13 04:55:43
|
I compile with clang, strange how it wasn't caught with `-Wall`
|
|
2026-03-13 05:00:37
|
I use msys2 to compile most things
|
|
2026-03-13 05:14:05
|
Alright I hope I fixed it
|
|
|
monad
|
2026-03-13 05:14:09
|
~~failed badly on my first image at high quality (really didn't achieve the target of d0)~~
|
|
|
A homosapien
|
2026-03-13 05:17:24
|
what quality range are you looking at? I'm testing -d 2 or 3 right now
|
|
|
monad
|
2026-03-13 05:17:46
|
mistyped, meant d1
|
|
2026-03-13 05:18:16
|
(and btw, I assumed the same as your latest change)
|
|
|
A homosapien
|
2026-03-13 05:19:48
|
hmm, I don't see any glaring failures on my images yet
|
|
2026-03-13 05:23:06
|
can you upload an example image I can test?
|
|
|
monad
|
2026-03-13 05:26:13
|
did you try photographs?
|
|
|
A homosapien
|
2026-03-13 05:28:53
|
I did, looks about on par with main
|
|
2026-03-13 05:29:12
|
some parts a little worse some parts a little better
|
|
|
monad
|
2026-03-13 05:57:34
|
~~0 for 5 on photos so far. here are a couple samples~~
|
|
2026-03-13 05:11:11
|
~~tried some digital paintings and d2. no upsides anywhere, looks like a mess compared to 0.12 at same budget. no evidence to keep me searching.~~
|
|
|
A homosapien
|
2026-03-13 06:33:06
|
I see excessive blocking in some images compared to 0.8 and 0.12. Psychovisually it looks like detail, it works until it doesn't.
|
|
2026-03-13 06:51:20
|
It performs better with problematic images I have. But something isn't working right which is not surprising.
|
|
|
monad
|
|
monad
(and btw, I assumed the same as your latest change)
|
|
2026-03-14 12:16:41
|
but I missed the first rename. <:PepeGlasses:878298516965982308> I looked again because of your insistence. my assessment was bunk.
|
|
|
Demiurge
|
2026-03-14 02:36:59
|
I did some tests with modular on the latest release-version. And I notice that modular often has more severe de-saturation than DCT
|
|
2026-03-14 02:37:32
|
And sometimes modular blurs the outlines and borders of objects worse, yet somehow preserves texture and grit better.
|
|
2026-03-14 02:37:50
|
Those are the patterns I've noticed.
|
|
2026-03-14 04:30:01
|
Another weird thing I noticed is that the de-saturation is prominent in certain shades of GREEN
|
|
|
|
cioute
|
2026-03-19 09:21:02
|
oh, jxl table sheets
|
|
|
Kleis Auke
|
|
drhead
uhh, are there known issues with cjxl and encoding PNGs with an sBIT chunk? I'm getting results like this with both VIPS and ImageMagick where the resulting image mismatches the original. (top image here is correct).
|
|
2026-03-21 11:54:55
|
This should be fixed in libvips 8.18.1, though 8.18.2 is also expected soon.
|
|
|
drhead
|
|
Kleis Auke
This should be fixed in libvips 8.18.1, though 8.18.2 is also expected soon.
|
|
2026-03-21 12:05:51
|
Yep, already installed it and it seems to work fine!
|
|
|
Quackdoc
|
2026-03-21 02:32:20
|
arch not compiling libjxl with exr is the bane of my existence
|
|
2026-03-21 02:33:43
|
same with openimageio
|
|
|
spider-mario
|
2026-03-23 01:47:28
|
seems it was disabled here: https://gitlab.archlinux.org/archlinux/packaging/packages/libjxl/-/commit/98d5375d2aabbe6e06c7f6cf8f298770e7f53133
|
|
2026-03-23 01:48:19
|
the failure was fixed, but exr was never re-enabled
|
|
2026-03-23 01:48:35
|
could maybe open an arch bug
|
|
|
K
|
2026-03-30 07:22:05
|
am i doing something wrong?
|
|
|
jonnyawsom3
|
|
K
am i doing something wrong?
|
|
2026-03-30 07:23:45
|
Jxl only works in Firefox nightly, but they had the flag still visible in normal Firefox up until very recently. That fork is a few versions behind, so the flag does nothing
|
|
|
K
|
2026-03-30 07:28:35
|
looks like it was previously possible, but their v12 branch must have skipped it
|
|
2026-03-30 07:30:24
|
And this Issue is closed and no one mentioned it for v12
https://github.com/Floorp-Projects/Floorp/issues/1511
|
|
|
Orum
|
|
Jxl only works in Firefox nightly, but they had the flag still visible in normal Firefox up until very recently. That fork is a few versions behind, so the flag does nothing
|
|
2026-03-30 07:36:47
|
Why'd they remove the flag from normal firefox? Weren't they supposed to be moving in the direction of "We're going to support JXL"?
|
|
|
username
|
|
Orum
Why'd they remove the flag from normal firefox? Weren't they supposed to be moving in the direction of "We're going to support JXL"?
|
|
2026-03-30 07:37:51
|
the decoder was never compiled into normal Firefox, so the flag was misleading and would break things: it would tell websites JXL is supported and then not decode the files
|
|
|
Orum
|
2026-03-30 07:38:11
|
right, but didn't they finally compile in the decoder?
|
|
|
username
|
2026-03-30 07:38:21
|
nope Mozilla has never
|
|
2026-03-30 07:38:24
|
only in nightly
|
|
2026-03-30 07:38:32
|
not in release or beta
|
|
|
K
|
2026-03-30 07:40:20
|
i probably remember it wrong but i remember it was working in non-nightly
|
|
2026-03-30 07:40:33
|
back in whichever year i discover jxl
|
|
|
username
|
2026-03-30 07:41:24
|
in regular firefox or a fork? some forks have been enabling the jxl decoder to be compiled outside of nightly
|
|
|
|
veluca
|
2026-03-30 08:40:42
|
they are working on enabling it in stable π
|
|
2026-03-30 08:41:03
|
(I mean, compilation -- they are also working on enabling the flag by default, but that will come later)
|
|
|
|
cioute
|
2026-03-30 07:36:25
|
I have a strange preference for JPEG XL over AVIF, just cuz AVIF is based on a video codec.
I also like that I can quickly convert JPEG to JPEG XL without loss (I guess that's a killer feature).
|
|
|
Demiurge
|
2026-03-31 12:07:54
|
yes
|
|
2026-03-31 12:08:10
|
I very much hope JXL takes over the world
|
|
2026-03-31 12:08:26
|
and that very advanced encoders are created in the future
|
|
2026-03-31 12:08:51
|
with very high efficiency, especially for lossy
|
|
|
Mine18
|
2026-03-31 05:05:42
|
even if jxl never catches up to avif's lossy quality, it's still better to have it as a jpeg replacement because of its features
|
|
|
username
|
|
Mine18
even if jxl never catches up to avif's lossy quality, it's still better to have it as a jpeg replacement because of its features
|
|
2026-03-31 05:08:58
|
also a PNG replacement
|
|
|
Mine18
|
2026-03-31 05:19:39
|
that too
|
|
|
adap
|
2026-03-31 05:40:00
|
that's the main thing
|
|
2026-03-31 05:40:35
|
for me
|
|
2026-03-31 05:41:04
|
i hope they don't fuck it up when they add it to discord
|
|
2026-03-31 05:41:23
|
they'll probably convert it to webp like avif ffs
|
|
|
username
|
|
they'll probably convert it to webp like avif ffs
|
|
2026-03-31 05:44:06
|
I mean with AVIF in Discord it's just the embeds/thumbnails that are WebP, AFAIK once you go to view the full thing it actually gives you the file
|
|
|
adap
|
|
username
I mean with AVIF in Discord it's just the embeds/thumbnails that are WebP, AFAIK once you go to view the full thing it actually gives you the file
|
|
2026-03-31 06:23:14
|
yeah defeats the purpose if you have to click it though
|
|
2026-03-31 06:23:55
|
png is still the only way to properly embed hdr images without some icc profile shit
|
|
|
Mine18
|
|
they'll probably convert it to webp like avif ffs
|
|
2026-03-31 07:03:10
|
ive made an issue suggesting to show the converted image on unsupported devices but no reply, the repo has been empty since last year
i imagine if you bundled dav1d playback would be just fine, this just fucks up everything and is so frustrating
not only does it hamper quality but performance too, animated avifs higher than 540p make mobile discord chuggggg
|
|
2026-03-31 07:03:33
|
you could try mentioning scott kidder but i doubt it would do much
|
|
|
adap
|
2026-03-31 07:07:48
|
it's beyond stupid
|
|
2026-03-31 07:08:06
|
the webp images are bigger than the avifs most of the time
|
|
|
Mine18
|
2026-03-31 07:11:11
|
<@1156997134445461574> what's stopping you guys from bundling dav1d with discord so you don't need to convert to webp? the conversion process severely affects quality and performance to the point of slowing down the interface and playback on mobile devices
|
|
|
Froozi
|
|
the webp images are bigger than the avifs most of the time
|
|
2026-03-31 07:13:59
|
Maybe they're holding off on \*.avif so that they can go for \*.jxl straight away <:YEP:808828808127971399> (coping so hard right now).
|
|
|
username
|
|
Mine18
<@1156997134445461574> what's stopping you guys from bundling dav1d with discord so you don't need to convert to webp? the conversion process severely affects quality and performance to the point of slowing down the interface and playback on mobile devices
|
|
2026-03-31 07:28:24
|
wait what do you mean exactly by bundling dav1d? that sounds like it would be noticeably slow for browsers that don't support AVIF natively
|
|
2026-03-31 07:29:19
|
would be dav1d compiled to WASM having to be downloaded and then used for software video decoding
|
|
|
Mine18
|
|
username
wait what do you mean exactly by bundling dav1d? that sounds like it would be noticeably slow for browsers that don't support AVIF natively
|
|
2026-03-31 07:51:02
|
idk i meant having dav1d come with the app so that you don't have to rely on system codecs and be sure everyone will be running dav1d and not be unsupported
|
|
|
username
|
|
Mine18
idk i meant having dav1d come with the app so that you don't have to rely on system codecs and be sure everyone will be running dav1d and not be unsupported
|
|
2026-03-31 07:58:12
|
Discord is a website, so they somewhat kinda can't do that. This is especially true on iOS for the standalone app, where there is no control over the browser engine/core, hence why WebM videos do not work in Discord on iOS
|
|
|
Mine18
|
2026-03-31 08:11:20
|
so its ios to blame, as always
|
|
|
whatsurname
|
2026-03-31 08:12:45
|
Isn't discord app built in RN? I don't think the platform browser engine is related here
|
|
|
jonnyawsom3
|
|
Mine18
<@1156997134445461574> what's stopping you guys from bundling dav1d with discord so you don't need to convert to webp? the conversion process severely affects quality and performance to the point of slowing down the interface and playback on mobile devices
|
|
2026-03-31 08:13:03
|
I already suggested eventually moving from WebP to JXL yesterday https://discord.com/channels/794206087879852103/794206087879852106/1488191483415625778
|
|
|
AccessViolation_
|
|
Mine18
<@1156997134445461574> what's stopping you guys from bundling dav1d with discord so you don't need to convert to webp? the conversion process severely affects quality and performance to the point of slowing down the interface and playback on mobile devices
|
|
2026-03-31 08:14:10
|
well patents, for one, it seems
look at the snapchat situation
|
|
|
jonnyawsom3
|
|
Mine18
<@1156997134445461574> what's stopping you guys from bundling dav1d with discord so you don't need to convert to webp? the conversion process severely affects quality and performance to the point of slowing down the interface and playback on mobile devices
|
|
2026-03-31 08:14:13
|
Also it already has dav1d... That was the whole point of the update where they added AV1 support
|
|
|
Mine18
|
|
AccessViolation_
well patents, for one, it seems π
look at the snapchat situation
|
|
2026-03-31 08:14:59
|
are you talking about the dolby lawsuits? that's a big ol' nothingburger, they're just using litigation fees and hevc patents to make it seem like av1 isn't royalty free
|
|
|
AccessViolation_
|
2026-03-31 08:15:38
|
ah, that's good
|
|
|
Mine18
|
|
Also it already has dav1d... That was the whole point of the update where they added AV1 support
|
|
2026-03-31 08:15:47
|
so then they're only using dav1d for videos and the fullscreen images?
|
|
|
AccessViolation_
|
2026-03-31 08:16:13
|
aren't there still patent pools that legally do hold up, though?
|
|
2026-03-31 08:16:23
|
like sisvel's
|
|
|
jonnyawsom3
|
|
Mine18
so then they're only using dav1d for videos and the fullscreen images?
|
|
2026-03-31 08:17:15
|
Seeing as I'm on Android 10, which has no support for AV1 or AVIF, yes. But it's very expensive to decode, usually crashing my app or the entire phone, hence using WebP
|
|
|
username
|
2026-03-31 08:17:47
|
I feel like them using WebP is just because they want a low encode cost unified thumbnail system for all image formats. IIRC GIFs, PNGs, and JPEGs all get WebP thumbnails/embeds in Discord
|
|
2026-03-31 08:18:53
|
and I think the full unconverted images still get shown if your viewport can fit them although I'm not 100% sure
|
|
|
Mine18
|
|
I already suggested eventually moving from WebP to JXL yesterday https://discord.com/channels/794206087879852103/794206087879852106/1488191483415625778
|
|
2026-03-31 08:18:55
|
i did see that, but that also hinges on not only the desktop client being up to date, but also every other client supporting jxl, is react based off of chrome? does it support jxl rn?
|
|
|
Seeing as I'm on Android 10 which has no support to AV1 or AVIF, yes. But it's very expensive to decode, usually crashing my app or the entire phone, hence using WebP
|
|
2026-03-31 08:19:58
|
well avif would certainly be a lot easier to decode, also it shouldn't crash the app, even if it drops frames like crazy
|
|
|
username
|
|
Mine18
i did see that, but that also hinges on not only the desktop client being uptodate, but also every other client supporting jxl, is react based off of chrome? does it support jxl rn?
|
|
2026-03-31 08:20:31
|
all official Discord standalone clients outside of iOS are chromium underneath
|
|
|
Mine18
|
|
AccessViolation_
like sisvel's
|
|
2026-03-31 08:20:35
|
not that we know of, it's all intimidation tactics, none of this went to court
|
|
|
jonnyawsom3
|
|
Mine18
i did see that, but that also hinges on not only the desktop client being uptodate, but also every other client supporting jxl, is react based off of chrome? does it support jxl rn?
|
|
2026-03-31 08:21:00
|
They're using electron, I can even open the inspector and get the chrome UI
|
|
|
Mine18
|
|
username
all official Discord standalone clients outside of iOS are chromium underneath
|
|
2026-03-31 08:21:13
|
ios clients would still need native jxl support then, which is like iPhone 14 minimum, no?
|
|
2026-03-31 08:21:20
|
or was it ios 24
|
|
|
jonnyawsom3
|
2026-03-31 08:21:36
|
IOS 17 IIRC
|
|
|
AccessViolation_
|
|
Mine18
not that we know of, it's all intimidation tactics, none of this went to court
|
|
2026-03-31 08:21:43
|
wait so sisvel didn't sue a company and win, I could have sworn they did. I need to look into this more
|
|
|
username
|
2026-03-31 08:22:12
|
If Discord did use JXL for thumbnails/embeds it would definitely have to be something done later in the future and not now
|
|
|
whatsurname
|
2026-03-31 08:22:16
|
From the [blog post](https://discord.com/blog/modern-image-formats-at-discord-supporting-webp-and-avif)
> WebP became our preferred transformation target for several reasons:
> - Near universal support across platforms
> - Faster encode/decode times compared to AVIF
> - Mature tooling and widespread ecosystem support
> - Predictable performance characteristics
I don't think they've changed much since then (maybe except point 4)
|
|
|
jonnyawsom3
|
|
Mine18
i did see that, but that also hinges on not only the desktop client being uptodate, but also every other client supporting jxl, is react based off of chrome? does it support jxl rn?
|
|
2026-03-31 08:22:39
|
I mean... Discord *do* have a CDN... They could just check the accept header
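(As a rough sketch of what "check the accept header" means here, not Discord's actual code: the CDN can pick the thumbnail format per request from the client's `Accept` header. Function and format preference order are illustrative.)

```python
def pick_thumbnail_format(accept_header: str) -> str:
    """Choose a thumbnail format the client says it can decode.

    Parses the HTTP Accept header (ignoring q-values for simplicity)
    and prefers newer formats when the client advertises them.
    """
    accepted = {part.split(";")[0].strip() for part in accept_header.split(",")}
    for mime, ext in (("image/jxl", "jxl"),
                      ("image/avif", "avif"),
                      ("image/webp", "webp")):
        if mime in accepted:
            return ext
    return "jpeg"  # universally supported fallback


# e.g. a Chromium build with JXL enabled would advertise image/jxl
# and get JXL thumbnails, while everyone else keeps getting WebP/JPEG.
```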
|
|
|
whatsurname
|
|
username
all official Discord standalone clients outside of iOS are chromium underneath
|
|
2026-03-31 08:31:29
|
Only desktop apps use electron, mobile ones use RN
|
|
|
adap
|
2026-03-31 04:58:26
|
JXL progressive decoding would be awesome for discord galleries
|
|
2026-03-31 04:59:12
|
they seem to load the entire gallery at once even my m4 mac be taking a while to load em
|
|
|
jonnyawsom3
|
2026-03-31 05:03:36
|
Huh, it definitely doesn't on the Windows client, I just get blurhashes until I 'preload' them all by looping through once with the arrow key
|
|
|
adap
|
2026-03-31 05:08:39
|
Yeah ig not the entire gallery
|
|
2026-03-31 05:09:22
|
https://discord.gg/j5Za2PCMf
|
|
2026-03-31 05:09:33
|
I was looking at the museum one at the bottom here
|
|
2026-03-31 05:10:14
|
it had a bunch of hdr pngs and it was literally just archived cause even with like 8 galleries it was unusable
|
|
|
Demiurge
|
2026-03-31 05:20:48
|
Most civilized countries do not even have a legal concept of patenting math
|
|
2026-03-31 05:20:58
|
Just the US and Japan iirc
|
|
|
|
cioute
|
2026-03-31 11:14:59
|
but they have a concept of patent trolling
|
|