
JPEG XL and the Pareto Front

jillesvangurp
35 replies
10h11m

They didn't "lose interest", their lawyers pulled the emergency brakes. Blame patent holders, not Google. Like Microsoft: https://www.theregister.com/2022/02/17/microsoft_ans_patent/. Microsoft could probably be convinced to be reasonable. But there may be a few others. Google actually also holds some patents over this but they've done the right thing and license those patents along with their implementation.

To fix this, you'd need to convince Google, and other large companies that would be exposed to lawsuits related to these patents (Apple, Adobe, etc.), that these patent holders are not going to insist on being compensated.

Other formats are less risky, especially the older ones. JPEG is fine because it's been out there for so long that any patents applicable to it have long expired. Same with GIF, which once was held up by patents. PNG is at this point also fine: if any patents applied at all, they will soon have expired, as the PNG standard dates back to 1997 and the work on it depended on research from the seventies and eighties.

zokier
12 replies
9h45m

their lawyers pulled the emergency brakes

Do you have source for that claim?

lonjil
6 replies
6h54m

Prior art makes patents invalid anyway.

mort96
3 replies
6h8m

But that doesn't matter. If a patent is granted, choosing to infringe on it is risky, even if you believe you could make a solid argument that it's invalid given enough lawyer hours.

lonjil
2 replies
5h47m

The Microsoft patent is for an "improvement" that I don't believe anyone is using, but Internet commentators seem to think it applies to ANS in general for some reason.

A few years earlier, Google was granted a patent for ANS in general, which made people very angry. Fortunately they never did anything with it.

mort96
0 replies
3h52m

If the patent doesn't apply to JXL then that's a different story, then it doesn't matter whether it's valid or not.

...

The fact that Google does have a patent which covers JXL is worrying though. So JXL is patent encumbered after all.

JyrkiAlakuijala
0 replies
5h27m

I believe that Google's patent application dealt with interleaving non-compressed and ANS data in a manner that made streaming coding easy and fast in software, not a general ANS patent. I didn't read it myself, but I briefly discussed it with a capable engineer who had.

michaelt
1 replies
6h37m

Absolutely.

And nothing advances your career quite like getting your employer into a multi-year legal battle and spending a few million on legal fees, to make some images 20% smaller and 100% less compatible.

lonjil
0 replies
6h28m

Well, lots of things other than JXL use ANS. If someone starts trying to claim ANS, you'll have Apple, Disney, Facebook, and more, on your side :)

mananaysiempre
0 replies
7h2m

Duda published his ideas, that’s supposed to be it.

jillesvangurp
2 replies
9h34m

I'm just inferring from the fact that MS got a patent and then this whole thing ground to a halt.

peppermint_gum
0 replies
8h42m

In other words, there's no source.

lifthrasiir
0 replies
6h2m

Not only do you have no source backing your claim, but there is a glaring counterexample. Chromium's experimental JPEG XL support carried an expiry milestone, which was delayed multiple times; it was last bumped in June 2022 [1] before the final removal in October, months after the patent was granted!

[1] https://issues.chromium.org/issues/40168998#comment52

peppermint_gum
7 replies
8h39m

To fix this, you'd need to convince Google, and other large companies that would be exposed to lawsuits related to these patents (Apple, Adobe, etc.), that these patent holders are not going to insist on being compensated.

Apple has implemented JPEG XL support in macOS and iOS. Adobe has also implemented support for JPEG XL in their products.

Also, if patents were the reason Google removed JXL from Chrome, why would they make up technical reasons for doing so?

Please don't present unsourced conspiracy theories as if they were confirmed facts.

jillesvangurp
6 replies
7h37m

You seem to be all over this. So, what's your alternate theory?

I've not seen anything other than "google is evil, boohoohoo" in this thread. That's a popular sentiment but it doesn't make much sense in this context.

There must be a more rational reason than that. I've not heard anything better than legal reasons. But do correct me if I'm wrong. I've worked in big companies, and patents can be a show stopper. Seems like a plausible theory (i.e. not a conspiracy theory). We indeed don't know what happened because Google is clearly not in a mood to share.

JyrkiAlakuijala
2 replies
7h4m

Why not take Chrome's word for it:

---cut---

Helping the web to evolve is challenging, and it requires us to make difficult choices. We've also heard from our browser and device partners that every additional format adds costs (monetary or hardware), and we’re very much aware that these costs are borne by those outside of Google. When we evaluate new media formats, the first question we have to ask is whether the format works best for the web. With respect to new image formats such as JPEG XL, that means we have to look comprehensively at many factors: compression performance across a broad range of images; is the decoder fast, allowing for speedy rendering of smaller images; are there fast encoders, ideally with hardware support, that keep encoding costs reasonable for large users; can we optimize existing formats to meet any new use-cases, rather than adding support for an additional format; do other browsers and OSes support it?

After weighing the data, we’ve decided to stop Chrome’s JPEG XL experiment and remove the code associated with the experiment. [...]

From: https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...

JyrkiAlakuijala
1 replies
5h15m

I'll try to make a bullet-point list of the individual concerns; the original statement is written in a style that is a bit confusing for a non-native speaker such as me.

* Chrome's browser partners say JPEG XL adds monetary or hardware costs.

* Chrome's device partners say JPEG XL adds monetary or hardware costs.

* Does JPEG XL work best for the web?

* What is JPEG XL compression performance across a broad range of images?

* Is the decoder fast?

* Does it render small images fast?

* Is encoding fast?

* Hardware support keeping encoding costs reasonable for large users.

* Do we need it at all or just optimize existing formats to meet new use-cases?

* Do other browsers and OSes support JPEG XL?

* Can it be done sufficiently well with WASM?

JyrkiAlakuijala
0 replies
53m

* [...] monetary or hardware costs.

We could perhaps create a GoFundMe page for making it cost neutral for Chrome's partners. Perhaps some industry partners would chime in.

* Does JPEG XL work best for the web?

Yes.

* What is JPEG XL compression performance across a broad range of images?

All of them. The more difficult it is to compress, the better JPEG XL is. It is at its best at natural images with noisy textures.

* Is the decoder fast?

Yes. See blog post.

* Does it render small images fast?

Yes. I don't have a link, but I tried it.

* Is encoding fast?

Yes. See blog post.

* Hardware support keeping encoding costs reasonable for large users.

https://www.shikino.co.jp/eng/ is building it based on libjxl-tiny.

* Do we need it at all or just optimize existing formats to meet new use-cases?

Jpegli is great. JPEG XL allows for 35% more compression. In comparison to jpegli, it creates wealth of a few hundred billion in users' saved waiting time. So, it's a yes.

* Do other browsers and OSes support JPEG XL?

Possibly. iOS and Safari support it. DNG supports it. Windows and some Androids don't.

* Can it be done sufficiently well with WASM?

Wasm creates additional complexity, adds to load times, and possibly to computation times too.

Some more work is needed before all of Chrome's questions can be answered.

peppermint_gum
0 replies
6h43m

There must be a more rational reason than that. I've not heard anything better than legal reasons. But do correct me if I'm wrong. I've worked in big companies, and patents can be a show stopper. Seems like a plausible theory (i.e. not a conspiracy theory)

In your first comment, you stated as a fact that "lawyers pulled the emergency brakes". Despite literally no one from Google ever saying this, and Google giving very different reasons for the removal.

And now you act as if something you made up in your mind is the default theory and the burden of proof is on the people disagreeing with you.

lonjil
0 replies
7h13m

Mate, you're literally pulling something from your ass. Chrome engineers claim that they don't want JXL because it isn't good enough. Literally no one involved has said that it has anything to do with patents.

Scaevolus
0 replies
7h26m

If you want a simple conspiracy theory, how about this:

The person responsible for AVIF works on Chrome, and is responsible for choosing which codecs Chrome ships with. He obviously prefers his AVIF to a different team's JPEG-XL.

It's a case of simple selfish bias.

lifthrasiir
7 replies
10h3m

[...] other large companies that would be exposed to lawsuits related to these patents (Apple, Adobe, etc.) [...]

Adobe included JPEG XL support in their products and also in the DNG specification. So that argument is pretty much dead, no?

luma
3 replies
6h16m

Adobe sells paid products and can carve out a license fee for that, like they do with all the other codecs and libraries they bundle. That's part of the price you are paying.

Harder to do for users of Chrome.

lifthrasiir
2 replies
6h7m

The same thing can be said of many patent-encumbered video codecs, which Chrome nevertheless supports. That alone can't be a major deciding factor, especially given that the rate of JPEG XL adoption has been remarkably faster than that of any recent media format.

afavour
1 replies
5h41m

Is this not simply a risk vs reward calculation? Newer video codecs present a very notable bandwidth saving over old ones. JPEG XL presents minor benefits over WebP, AVIF, etc. So while the dangers are the same for both the calculation is different.

KingOfCoders
0 replies
5h23m

Video = billions lower costs for Youtube.

jillesvangurp
1 replies
9h35m

Not that simple. Maybe they struck a deal with a few of the companies or they made a different risk calculation. And of course they have a pretty fierce patent portfolio themselves so there's the notion of them being able to retaliate in kind to some of these companies.

lifthrasiir
0 replies
9h30m

I don't think that's true (see my other comment for what the patent is really about), but even when it is, Adobe's adoption means that JPEG XL is worth the supposed "risk". And Google does ship a lot of technologies that are clearly patent-encumbered. If the patent is the main concern, they could have answered so because there are enough people wondering about the patent status, but the Chrome team's main reason against JPEG XL was quite different.

izacus
0 replies
8h25m

Adobe also has an order of magnitude fewer installs than Chrome or Firefox, which makes patent fees much cheaper. And their software is actually paid for by users.

lonjil
1 replies
7h16m

The Microsoft patent doesn't apply to JXL, and in any case, Microsoft has literally already affirmed that they will not use it to go after any open codec.

bombcar
0 replies
2h6m

How exactly is that done? I assume even an offhand comment by an official (like CEO, etc) that is not immediately walked back would at least protect people from damages associated with willful infringement.

jonsneyers
1 replies
7h23m

There are no royalties to be paid on JPEG XL. Nobody but Cloudinary and Google is claiming to hold relevant patents, and Cloudinary and Google have provided a royalty free license. Of course the way the patent system works, anything less than 20 years old is theoretically risky. But so far, there is nobody claiming royalties need to be paid on JPEG XL, so it is similar to WebP in that regard.

bombcar
0 replies
2h7m

"Patent issues" has become a (sometimes truthful) excuse for not doing something.

When the big boys want to do something, they find a way to get it done, patents or no, especially if there's only "fear of patents" - see Apple and the whole watch fiasco.

bmicraft
0 replies
7h49m

Safari has supported JXL since version 17.

JyrkiAlakuijala
0 replies
7h50m

That ANS patent supposedly relates to refining the coding tables based on the symbols being decoded.

It is slower to decode, and JPEG XL does not do that, for decoding speed reasons.

The specification doesn't allow it. All coding tables need to be in final form.

pgeorgi
17 replies
10h18m

All those requests to revert the removal are funny: you want Chrome to re-add jxl behind a feature flag? Doesn't seem very useful.

Also, all those Chrome offshoots (Edge, Brave, Opera, etc) could easily add and enable it to distinguish themselves from Chrome ("faster page load", "less network use") and don't. Makes me wonder what's going on...

eviks
14 replies
9h40m

No, obviously to re-add jxl without a flag

pgeorgi
13 replies
9h34m

"jxl without a flag" can't be re-added because that was never a thing.

eviks
10 replies
8h33m

It can. That's why you didn't say "re-add jxl" but had to mention the flag: 're-add' has no flag implication. That pedantic attempt to constrain it is something you've made up, and it's not what people want; just read those linked issues.

pgeorgi
9 replies
8h12m

It has a flag implication because jpeg-xl never came without being hidden behind a flag. Nothing was taken away from ordinary users at any point in time.

And I suppose the Chrome folks have the telemetry to know how many people set that damn flag.

eviks
4 replies
7h29m

I suppose I'll trust the reality of what actual users are expressly asking for vs. your imagination that something different is implied

pgeorgi
3 replies
4h34m

Actual users, perhaps. Or maybe concern trolls paid by a patent holder who's trying to prepare the ground for a patent-based extortion scheme. Or maybe Jon Sneyers with an army of sock puppets. These "actual users" are just as real to me as Chrome's telemetry.

That said: these actual users didn't demonstrate any hacker spirit or interest in using JXL in situations where they could. Where's the wide-spread use of jxl.js (https://github.com/niutech/jxl.js) to demonstrate that there are actual users desperate for native codec support? (aside: jxl.js is based on Squoosh, which is a product of GoogleChromeLabs) If JXL is sooo important, surely people would use whatever workaround they can employ, no matter if that convinces the Chrome team or not, simply because they benefit from using it, no?

Instead all I see is people _not_ exercising their freedom and initiative to support that best-thing-since-sliced-bread-apparently format but whining that Chrome is oh-so-dominant and forces their choices of codecs upon everybody else.

Okay then...

pgeorgi
1 replies
4h15m

Both issues seem to have known workarounds that could have been integrated to support JXL on iOS properly earlier than by waiting on Apple (who integrated JXL in Safari 17 apparently), so if anything that's a success story for "provide polyfills to support features without relying on the browser vendor."

149765
0 replies
3h45m

The blur issue is an easy fix, yes, but the memory one doesn't help that much.

jdiff
2 replies
7h32m

"But the plans were on display…”

“On display? I eventually had to go down to the cellar to find them.”

“That’s the display department.”

“With a flashlight.”

“Ah, well, the lights had probably gone.”

“So had the stairs.”

“But look, you found the notice, didn’t you?”

“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’”

pgeorgi
1 replies
4h33m

I guess you're referring to the idea that the flag made the previous implementation practically non-existent for users. And I agree!

But "implement something new!" is a very different demand from "you took that away from us, undo that!"

jdiff
0 replies
4h0m

No, obviously to re-add jxl without a flag

Is asking for the old thing to be re-added, but without the flag that sabotaged it. It is the same as "you took that away from us, undo that!" Removing a flag does not turn it into a magical, mystical new thing that has to be built from scratch. This is silly. The entire point of having flags is to provide a testing platform for code that may one day have the flag removed.

lonjil
0 replies
6h31m

And I suppose the Chrome folks have the telemetry to know how many people set that damn flag.

How is that relevant? Flags are to allow testing, not to gauge interest from regular users.

elygre
0 replies
9h15m

Or (re-add jxl) (without a flag).

albert180
0 replies
9h30m

What a stupid pedantry. Feel better now?

silisili
0 replies
10h10m

Simply put, these offshoots don't really seem to do browser code, and realize how expensive it would be for them to diverge at the core.

lonjil
0 replies
7h10m

you want Chrome to re-add jxl behind a feature flag? Doesn't seem very useful.

Chrome has a neat feature where some flags can be enabled by websites, so that websites can choose to cooperate in testing. They never did this for JXL, but if they re-added JXL behind a flag, they could do so with such testing enabled this time. Then they could get real data from websites actually using it, without committing to supporting it if it isn't useful.

Also, all those Chrome offshoots (Edge, Brave, Opera, etc) could easily add and enable it to distinguish themselves from Chrome ("faster page load", "less network use") and don't. Makes me wonder what's going on...

Edge doesn't use Chrome's own codec support. It uses Windows's media framework. JXL is being added to it next year.

sergioisidoro
11 replies
10h11m

It's so frustrating how the chromium team is ending up as a gatekeeper of the Internet by picking and choosing what gets developed or not.

I recently came across another issue pertaining to the chromium team not budging on their decisions, despite pressure from the community and an RFC backing it up - in my case custom headers in WebSocket handshakes, which are supported by other Javascript runtimes like node and bun, but the chromium maintainer just disagrees with it - https://github.com/whatwg/websockets/issues/16#issuecomment-...

hwbunny
6 replies
9h36m

Question is for how long. Time to slam the hammer on them.

hhh
4 replies
9h30m

Why not make a better product rather than slam some metaphorical hammer?

mort96
2 replies
9h21m

That's not how this works. Firefox is the closest we have, and realistically the closest we will get to a "better product" than Chromium for the foreseeable future, and it's clearly not enough.

bombcar
0 replies
2h0m

The only hammer at all left is Safari, basically on iPhones only.

That hammer is very close to going away; if the EU does force Apple to really open the browsers on the iPhone, everything will be Chrome as far as the eye can see in short order. And then we fully enter the chromE6 phase.

KingOfCoders
0 replies
5h20m

And Firefox does not support the format. Mozilla is the same political company as everyone else.

Certhas
0 replies
9h12m

Because "better" products don't magically win.

caskstrength
0 replies
4h21m

What hammer? You want US president or supreme court to compel Chrome developers to implement every image format in existence and every JS API proposed by anyone anywhere?

Unless it is some kind of anti-competitive behavior, like intentionally stifling adoption of a standard competing with their proprietary patent-encumbered implementation that they expect to collect royalties for (doesn't seem to be the case), then I don't see the problem.

madeofpalk
2 replies
7h9m

Where's Firefox's and Webkit's position on the proposal?

jonsneyers
1 replies
6h17m

Safari/Webkit has added JPEG XL support already.

Firefox is "neutral", which I understand as meaning they'll do whatever Chrome does.

All the code has been written, patches to add JPEG XL support to Firefox and Chromium are available and some of the forks (Waterfox, Pale Moon, Thorium, Cromite) do have JPEG XL support.

lonjil
0 replies
5h46m

I believe they were referring to that WebSocket issue, not JXL.

pgeorgi
0 replies
4h19m

It's so frustrating how the chromium team is ending up as a gatekeeper of the Internet by picking and choosing what gets developed or not.

https://github.com/niutech/jxl.js is based on Chromium tech (Squoosh from GoogleChromeLabs) and provides an opportunity to use JXL with no practical way for Chromium folks to intervene.

Even if that's a suboptimal solution, JXL's benefits supposedly should outweigh the cost of integrating that, and yet I haven't seen actual JXL users running to it in droves.

So JXL might not be a good support for your theory: where people could, they still don't. Maybe the format isn't actually that important; it's just a popular meme to rehash.

Pikamander2
5 replies
9h1m

Mozilla effectively gave up on it before Google did.

https://bugzilla.mozilla.org/show_bug.cgi?id=1539075

It's a real shame, because this is one of those few areas where Firefox could have led the charge instead of following in Chrome's footsteps. I remember when they first added APNG support and it took Chrome years to catch up, but I guess those days are gone.

Oddly enough, Safari is the only major browser that currently supports it despite regularly falling behind on tons of other cutting-edge web standards.

https://caniuse.com/jpegxl

JyrkiAlakuijala
3 replies
6h42m

I followed the Mozilla/Firefox integration closely. I was able to observe enthusiasm from their junior to staff level engineers (linkedin-assisted analysis of the related bugs ;-). However, an engineering director stepped in and locked the discussions because they were in a "no new information" stage. Their position has been neutral on JPEG XL, and the integration has not progressed from the nightly builds to the next stage.

Ten years ago Mozilla used to have the most prominent image and video compression effort, called Daala. They posted inspiring blog posts about their experiments. Some of their work was integrated with Cisco's Thor and On2's/Chrome's VP8/9/10, leading to AV1 and AVIF. Today, I believe, Mozilla has moved away from this research and the ex-Daala researchers have found new roles.

lonjil
2 replies
6h35m

Daala's and Thor's features were supposed to be integrated into AV1, but in the end, they wanted to finish AV1 as fast as possible, so very little that wasn't in VP10 made it into AV1. I guess it will be in AV2, though.

derf_
0 replies
5h14m

> ... very little that wasn't in VP10 made it into AV1.

I am not sure I would say that is true.

The entire entropy coder, used by every tool, came from Daala (with changes in collaboration with others to reduce hardware complexity), as did some major tools like Chroma from Luma and the Constrained Directional Enhancement Filter (a merger of Daala's deringing and Thor's CLPF). There were also plenty of other improvements from the Daala team, such as structural things like pulling the entropy coder and other inter-frame state from reference frames instead of abstract "slots" like VP9 (important in real-time contexts where you can lose frames and not know what slots they would have updated) or better spatial prediction and coding for segment indices (important for block-level quantizer adjustments for better visual tuning). And that does not even touch on all of the contributions from other AOM members (scalable coding, the entire high-level syntax...).

Were there other things I wish we could have gotten in? Absolutely. But "done" is a feature.

JyrkiAlakuijala
0 replies
6h6m

I like to think that there might be an easy way to improve AV2 today — drop the whole keyframe coding and replace it with JPEG XL images as keyframes.

miragecraft
0 replies
3h49m

It feels like nowadays Mozilla is extremely shorthanded.

They probably gave up because they simply don’t have the money/resources to pursue this.

jug
21 replies
6h39m

Pay attention to just how good WebP is in the _lossless_ comparison, though!

I've always thought of that one as flying under the radar. Most people get stuck on lossy WebP not offering tangible enough benefits over MozJPEG encoding (or even being worse), but WebP _lossless_ is absolutely fantastic for performance/speed! PNG or even OptiPNG is far worse. It's very well supported online now, and it leaves the horrible lossless AVIF in the dust too, of course.

a-french-anon
10 replies
5h12m

An issue with lossless WebP is that it only supports (A)RGB and encodes grayscale via hacks that aren't as good as simply supporting monochrome.

If you compress a whole manga, PNG (via oxipng, optipng is basically deprecated) is still the way to go.

Another something not mentioned in here is that lossless JPEG2000 can be surprisingly good and fast on photographic content.
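For the PNG route mentioned above, a minimal sketch of an oxipng invocation (flag spellings assumed from current oxipng releases; tune the level to taste):

    $ oxipng -o 4 --strip safe page_001.png    # recompress in place, keep only safe metadata
    $ oxipng -o 6 --strip safe *.png           # slower, squeezes a bit more out of a whole volume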

edflsafoiewq
5 replies
4h52m

IIRC the way you encode grayscale in WebP is a SUBTRACT_GREEN transform that makes the red and blue channels 0 everywhere, and then using a 1-element prefix code for R and B, so the R and B for each pixel take zero bits. Same idea with A for opaque images. Do you know why that's not good enough?

JyrkiAlakuijala
2 replies
4h35m

I made a mistake there with subtract green.

If I had just added 128 to the residuals, all the remaining prediction arithmetic would have worked better and it would have given 1% more density.

This is because most related arithmetic for predicting pixels is done in unsigned 8-bit arithmetic. Subtract-green often moves such predictions across the 0 -> 255 boundary, and then averaging, deltas, etc. make little sense and add to the entropy.
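To make that concrete, a small worked example (assuming the usual modulo-256 wraparound on 8-bit channels): for a pure gray pixel with r = g = b = 130, subtract-green gives residuals r - g = 0 and b - g = 0, which is why a 1-element prefix code can store them in zero bits. But for a near-gray pixel with r = 129, g = 131, the residual is (129 - 131) mod 256 = 254, so it wraps across the 0 <-> 255 boundary, and averaging or delta-predicting values that straddle that wrap adds entropy. With a +128 bias the same residual would sit at 126, safely in the middle of the range.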

edflsafoiewq
1 replies
4h33m

Can you explain why?

JyrkiAlakuijala
0 replies
4h29m

I edited the answer into the previous message for better flow.

a-french-anon
1 replies
3h27m

Thankfully the following comment explains more than I know; I was speaking purely from empirical experience.

edflsafoiewq
0 replies
3h22m

Then you can't know that any difference you see is because of how WebP encodes grayscale.

out_of_protocol
3 replies
4h33m

Just tried it on a random manga page:

- OxiPNG - 730k

- webp lossless max effort - 702k

- avif lossless max effort - 2.54MB (yay!)

- jpegxl lossless max effort - 506k (winner!)

Andrex
2 replies
3h24m

Probably depends on the manga itself. Action manga probably don't compress as well as more dialogue-heavy works.

ComputerGuru
1 replies
2h28m

I would, at first blush, disagree with that characterization? Dialogue equals more fine-grained strokes and more individual, independent “zones” to encode.

lonjil
0 replies
17m

I wonder if the text would be consistent enough for JXL's "patches" feature to work well.

JyrkiAlakuijala
3 replies
5h47m

Thank you! <3

WebP also has a near-lossless encoding mode, based on the lossless WebP specification, that is mostly unadvertised but should be preferred over true lossless in almost every use case. Often you can halve the size without additional visible loss.
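As a rough illustration of how that mode is exposed on the command line (a sketch assuming a reasonably recent cwebp build; the level is a tuning knob, where lower means more aggressive preprocessing):

    $ cwebp -near_lossless 60 input.png -o output_near.webp      # near-lossless
    $ cwebp -lossless input.png -o output_lossless.webp          # true lossless, for comparison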

netol
1 replies
2h54m

Is this mode picked automatically in "mixed" mode?

Unfortunately, that option doesn't seem to be available in gif2webp (I mostly use WebP for GIF images, as animated AVIF support is poor in browsers and that has an impact on interoperability).

JyrkiAlakuijala
0 replies
2h27m

I don't know

kurtextrem
0 replies
1h36m

do you know why Jon didn't compare near-lossless in the "visually lossless" part?

jonsneyers
2 replies
6h34m

Lossless WebP is very good indeed. The main problem is that it is not very future-proof since it only supports 8-bit. For SDR images that's fine, but for HDR this is a fundamental limitation that is about as bad as GIF's limitation to 256 colors.

omoikane
0 replies
15m

I haven't run across websites that serve up HDR images, and I am not sure I would notice the difference. WebP seems appropriately named and optimized for image delivery on the web.

Maybe you are thinking of high bit depth for archival use? I can see some use cases there where 8-bit is not sufficient, though personally I store high bit depth images in whatever raw format was produced by my camera (which is usually some variant of TIFF).

jug
0 replies
4h36m

Ah, I didn't know this, and I agree this is a fairly big issue, and increasingly so over time. I think smartphones in particular hastened the demand for HDR quite a bit; it was once a premium/enthusiast feature you had to explicitly buy into.

Akronymus
1 replies
4h51m

I really like WebP. Sadly there are still a lot of applications that don't work with it (looking at Discord).

ocdtrekkie
0 replies
3h11m

It is ironic you said this, because when I disabled WebP in my browser (it had a huge security vulnerability), Discord was the only site which broke instead of immediately just serving me more reasonable image formats.

AceJohnny2
18 replies
10h5m

I do not understand why this article focuses so much on encode speed, but for decode, which I believe represents 99% of usage in this web-connected world, gives only a cursory:

Decode speed is not really a significant problem on modern computers, but it is interesting to take a quick look at the numbers.

lifthrasiir
10 replies
9h58m

Anything more than 100 MB/s is considered "enough" for the internet because at that point your bottleneck is no longer decoding. Most modern compression algorithms are asymmetric, that is, you can spend much more time on compression without significantly affecting the decompression performance, so it is indeed less significant once the base performance is achieved.

silvestrov
5 replies
8h24m

Decoding speed is important for battery time.

If a new format drains battery twice as fast, users don't want it.

jonsneyers
3 replies
6h5m

This matters way more for video (where you are decoding 30 images per second continuously) than it does for still images. For still images, the main thing that drains your battery is the display, not the image decoding :)

But in any case, there are no _major_ differences in decoding speed between the various image formats. The difference caused by reducing the transfer size (network activity) and loading time (user looking at a blank screen while the image loads) is more important for battery life than the decoding speed itself. Also the difference between streaming/progressive decoding and non-streaming decoding probably has more impact than the decode speed itself, at least in the common scenario where the image is being loaded over a network.

caskstrength
1 replies
4h36m

This matters way more for video (where you are decoding 30 images per second continuously) than it does for still images.

OTOH video decoding is highly likely to be hardware accelerated on both laptops and smartphones.

For still images, the main thing that drains your battery is the display, not the image decoding :)

I wonder if it becomes noticeable on image-heavy sites like tumblr, 500px, etc.

jonsneyers
0 replies
2h0m

Assuming the websites are using images of appropriate dimensions (that is, not using huge images and relying on browser downscaling, which is a bad practice in any case), you can quite easily do the math. A 1080p screen is about 2 megapixels, a 4K screen is about 8 megapixels. If your images decode at 50 Mpx/s, that's 25 full screens (or 6 full screens at 4K) per second. You need to scroll quite quickly and have a quite good internet connection before decode speed will become a major issue, whether for UX or for battery life. Much more likely, the main issue will be the transfer time of the images.

JyrkiAlakuijala
0 replies
5h56m

Agreed. For web use they all decode fast enough. Any time difference might be in progression or streaming decoding, vs. waiting for all the data to arrive before starting to decode.

For image gallery use of camera-resolution photographs (12-50 Mpixels) it can be more fun to have 100+ Mpixels/s, even 300 Mpixels/s.

JyrkiAlakuijala
0 replies
6h1m

I wasn't able to convince myself of that when approaching the question with back-of-the-envelope calculations, published research and prototypes.

Very few applications are constantly decoding images. Today a single image is often decoded in a few milliseconds, but watched 1000x longer. If you 10x or even 100x the energy consumption of image decoding, it is still not going to compete with display, radio and video decoding as a battery drain.

oynqr
2 replies
9h42m

When you actually want good latency, using the throughput as a metric is a bit misguided.

lonjil
0 replies
7h2m

If you don't have progressive decoding, those metrics are essentially the same.

lifthrasiir
0 replies
9h39m

As others pointed out, that's why JPEG XL's excellent support for progressive decoding is important. Other formats do not support progressive decoding at all or make it optional, so they cannot even be compared on this point. In other words, the table can be regarded as evidence that you can have both progressive decoding and performance at once.

JyrkiAlakuijala
0 replies
9h8m

During the design process of pik/JPEG XL I experimented on decode speed to form a personal opinion about this. I tried a special version of Chrome that artificially throttled the image decoding. Once the decoding speed got to around 20 megapixels per second, the feeling coming from the additional speed was difficult to notice. I tried 2, 20 and 200 megapixels per second throttlings. This naturally depends on image sizes and uses too.

There was a much easier to notice impact from progressive images, and even from sequential images displayed in a streaming manner during the download. As a rule of thumb, sequential top-to-bottom streaming feels 2x faster than waiting for the full rendering, and progressive feels 2x faster than sequential streaming.

jsheard
2 replies
7h50m

Is it practical to use hardware video decoders to decode the image formats derived from video formats, like AVIF/AV1 and HEIC/HEVC? If so that could be a compelling reason to prefer them over a format like JPEG XL, which has to be decoded in software on all of today's hardware. Most hardware has HEVC decode, and AV1 decode is steadily becoming a standard feature as well.

jonsneyers
0 replies
7h15m

No browser bothers with hardware decode of WebP or AVIF even if it is available. It is not worth the trouble for still images. Software decode is fast enough, and can have advantages over hw decode, such as streaming/progressive decoding. So this is not really a big issue.

izacus
0 replies
5h1m

No, not really - mostly because setup time and concurrent-decode limitations of HW decoders across platforms tend to undermine any performance or battery gains from that approach. As far as I know, not even mobile platforms bother with it in their native decoders for any format.

PetahNZ
2 replies
7h44m

My server will encode 1,000,000 images itself, but each client will only decode like 10.

okamiueru
0 replies
4h41m

That isn't saying much or anything.

bombcar
0 replies
2h39m

But you may have fifty million clients, so the total "CPU hours" spent on decoding will far exceed the encoding.

But the person encoding is picking the format, not the decoder.

lonjil
0 replies
6h23m

Real-time encoding is pretty popular, for which encoding speed is pretty important.

taylorius
11 replies
9h55m

The article mentions encoding speed as something to consider, alongside compression ratio. I would argue that decoding speed is also important. A lot of the more modern formats (WebP, AVIF etc) can take significantly more CPU cycles to decode than a plain old JPEG. This can slow things down noticeably, especially on mobile.

lifthrasiir
5 replies
9h46m

Any computation-intensive media format on mobile is likely using a hardware decoder module anyway, and that most frequently includes JPEG. So that comparison is not adequate.

kasabali
3 replies
9h21m

"computation-intensive media" = videos

Seriously, when is the last time mobile phones used hardware decoding for showing images? Flip phones in 2005?

I know camera apps use hardware encoding, but I doubt gallery apps or browsers bother with going through the hardware decoding pipeline for the hundreds of JPEG images you scroll through in seconds. And when it comes to showing a single image, they'll still opt for software decoding because it's more flexible when it comes to integration, implementation, customization and format limits. So not surprisingly, I'm not convinced when I repeatedly see this claim that mobile phones commonly use hardware decoding for image formats and that software decoding speed doesn't matter.

jeroenhd
2 replies
8h19m

I don't know the current status of web browsers, but hardware encoding and decoding for image formats is alive and well. Not really relevant for showing a 32x32 GIF arrow like on HN, but very important when browsing high resolution images with any kind of smoothness.

If you don't really care about your users' battery life you can opt to disable hardware acceleration within your applications, but it's usually enabled by default, and for good reason.

lonjil
0 replies
6h48m

Hardware acceleration of image decoding is very uncommon in most consumer applications.

kasabali
0 replies
4h48m

hardware encoding and decoding for image formats is alive and well

I keep hearing and hearing this but nobody has ever yet provided a concrete real world example of smart phones using hw decoding for displaying images.

izacus
0 replies
4h58m

No, not a single mobile platform uses hardware decode modules for still image decoding as of 2024.

At best, the camera processors output encoded JPEG/HEIF for taken pictures, but that's about it.

oynqr
4 replies
9h48m

JPEG and JXL have the benefit of (optional) progressive decoding, so even if the image is a little larger than AVIF, you may still see content faster.

izacus
2 replies
4h58m

That's great, are there any comparison graphs and benchmarks showing that in real life (similarly to this article)?

izacus
0 replies
3h13m

Awesome, thanks.

lifthrasiir
0 replies
9h36m

Note that JPEG XL always supports progressive decoding, because the top-level format is structured in that way. The optional part is a finer-grained adjustment to make the output more suitable for specific cases.

aidenn0
10 replies
10h23m

One does wonder how much of JXL's awesomeness is the encoder vs. the format. Its ability to make high quality, compact images just with "-d 1.0" is uncanny. With other codecs, I had to pass different quality settings depending on the image type to get similar results.

edflsafoiewq
4 replies
8h43m

They've also made a JPEG encoder, cjpegli, with the same "-d 1.0" interface.
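For anyone who hasn't tried it, the interface looks roughly like this (invocations assumed from the libjxl tools, where -d is the butteraugli distance target and 1.0 is the "visually lossless" default):

    $ cjxl    input.png output.jxl -d 1.0
    $ cjpegli input.png output.jpg -d 1.0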

lonjil
0 replies
7h4m

I have heard that it will see a proper standalone release at some point this year, but I don't know more than that.

kasabali
3 replies
9h52m

That's a very good point. At this rate of development I wouldn't be surprised if libjxl becomes the x264 of image encoders.

On the other hand, libvpx has always been a mediocre encoder, which I think might be the reason for the disappointing performance (in general, not just speed) of the VP8/VP9 formats, which inevitably also affected the performance of lossy WebP. Dark Shikari even did a comparison of still image performance of x264 vs VP8 [0].

[0] https://web.archive.org/web/20150419071902/http://x264dev.mu...

JyrkiAlakuijala
2 replies
8h38m

While WebP lossy still has image quality issues, it has improved a lot over the years. One should not consider a comparison done with 2010-2015 implementations indicative of quality performance today.

kasabali
1 replies
4h59m

I'm sure it's better now than 13 years ago, but the conclusion I got from looking at very recent published benchmark results is that lossy WebP is still only slightly better than MozJPEG at low bitrates and still has a worse maximum quality ceiling than JPEG, which in my opinion makes it not worth using over plain old JPEG even in web settings.

JyrkiAlakuijala
0 replies
2h2m

That matches my observations. I believe that WebP lossy does not add value when Jpegli is an option, and it has a hard time competing even with MozJPEG.

JyrkiAlakuijala
0 replies
8h41m

Pik was initially designed without quality options, only to do the best that can be achieved at distance 1.0.

We kept a lot of focus on visually lossless and I didn't want to add format features which would add complexity but not help at high quality settings.

In addition to modeling features, the context modeling and the efficiency of entropy coding are critical at high quality. I consider AVIF's entropy coding ill-suited for high quality or lossless photography.

bmacho
9 replies
9h45m

How is lossless WebP 0.6x the size of lossless AVIF? I find that hard to believe.

lonjil
5 replies
6h55m

Lossless AVIF is just really quite bad. Notice how for photographic content it is barely better than PNG, and for non-photographic content it is far worse than PNG.

edflsafoiewq
4 replies
6h4m

It's so bad you wonder why AV1 even has a lossless mode. Maybe lossy mode has some subimages it uses lossless mode on?

jonsneyers
3 replies
5h57m

It has lossless just to check a box in terms of supported features. A bit like how JPEG XL supports animation just to have feature parity. But in most cases, you'll be better off using a video codec for animation, and an image format for images.

samatman
1 replies
4h51m

There are some user-level differences between an animated image and a video, which haven't really been satisfactorily resolved since the abandonment of GIF-the-format. An animated image should pause when clicked, and start again on another click, with setting separate from video autoplay to control the default. It should not have visible controls of any sort, that's the whole interface. It should save and display on the computer/filesystem as an image, and degrade to the display frame when sent along a channel which supports images but not animated ones. It doesn't need sound, or CC, or subtitles. I should be able to add it to the photo roll on my phone if I want.

There are a lot of little considerations like this, and it would be well if the industry consolidated around an animated-image standard, one which was an image, and not a video embedded in a way which looks like an image.

F3nd0
0 replies
4h10m

Hence why AVIF might come in handy after all!

JyrkiAlakuijala
0 replies
4h39m

I believe it is more fundamental. I like to think that AV1 entropy coding just becomes ineffective for large values. Large values are dominantly present in high quality photography and in lossless coding. Large values are repeatedly prefix coded and this makes effective adaptation of the statistics difficult for large integers. This is a fundamental difference and not a minor difference in focus.

jug
0 replies
6h33m

WebP is awesome at lossless and way better than even PNG.

It's because WebP has a special encoding pipeline for lossless pictures (just like PNG) while AVIF is basically just asking a lossy encoder originally designed for video content to stop losing detail. Since it's not designed for that it's terrible for the job, taking lots of time and resources to produce a worse result.

derf_
0 replies
4h41m

Usually the issue is not using the YCgCo-R colorspace. I do not see enough details in the article to know if that is the case here. There are politics around getting the codepoint included: https://github.com/AOMediaCodec/av1-avif/issues/129

149765
0 replies
8h29m

Lossless webp is actually quite good, especially on text heavy images, e.g. screenshots of a terminal with `cwebp -z9` are usually smaller than `jxl -d 0 -e 9` in my experience.

anewhnaccount2
8 replies
10h10m

Should the Pareto front not be drawn with lines perpendicular to the axes rather than diagonal lines?

penteract
3 replies
9h50m

Yes, it should, but it looks like they just added a line to the jxl 0.10 series of data on whatever they used to make the graph, and labelled it the Pareto front. Looking closely at the graphs, they actually miss some points where version 0.9 should be included in the frontier.

lifthrasiir
2 replies
9h23m

I think it can be understood as an expected Pareto frontier if enough options are added to make it continuous, which is often implied in this kind of discussion.

penteract
1 replies
7h27m

I'm not sure that's reasonable - The effort parameters are integers between 1 and 10, with behavior described here: https://github.com/libjxl/libjxl/blob/main/doc/encode_effort..., the intermediate options don't exist as implemented programs. This is a comparison of concrete programs, not an attempt to analyze the best theoretically achievable.

Also, the frontier isn't convex, so it's unlikely that if intermediate options could be added then they would all be at least as good as the lines shown; and the use of log(speed) for the y-axis affects what a straight line on the graph means. It's fine for giving a good view of the dataset, but if you're going to make a guess about intermediate possibilities, 'speed' or 'time' should also be considered.

jonsneyers
0 replies
5h59m

You are right, but that would make an uglier plot :)

Some of the intermediate options are available though, through various more fine-grained encoder settings than what is exposed via the overall effort setting. Of course they will not fall exactly on the line that was drawn, but as a first approximation, the line is probably closer to the truth than the staircase, which would be an underestimate of what can be done.

jamesthurley
2 replies
8h59m

Perpendicular to which axis?

penteract
1 replies
7h24m

both - staircase style.

deathanatos
0 replies
2h23m

Good grief. A poorly phrased question, and an answer that doesn't narrow the possibilities.

        *
        |
        |
        |
  *-----+
or

  +-----*
  |
  |
  |
  *
… and why?

JyrkiAlakuijala
0 replies
1h57m

Often with this kind of Pareto analysis it can be argued that even when continuous settings are not available, a compression system could encode every second image at effort 7 and every second image at effort 6 (or any other ratio), leading on average to interpolated results. Naturally such interpolation does not produce straight lines in log space.
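A small worked example with invented numbers: if effort 6 encodes at 40 Mpx/s and effort 7 at 20 Mpx/s, a 50/50 mix of same-sized images averages the per-pixel times (0.025 s and 0.05 s per Mpx), giving about 26.7 Mpx/s, whereas the midpoint of the straight line on a log(speed) axis is the geometric mean, about 28.3 Mpx/s. The mix is achievable, but it lands slightly below the drawn chord.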

Modified3019
7 replies
10h23m

At the very low quality settings, it's kinda remarkable how JPEG manages to keep a sharper approximation of detail that preserves the holistic quality of the image better, in spite of the obvious artifacts making it look like a mess of cubism when examined up close. It's basically converting the image into some kind of abstract art style.

Whereas jxl and avif just become blurry.

mrob
2 replies
4h49m

JPEG bitrates are higher, so all it means is that SSIMULACRA2 is the wrong metric for this test. It seems that SSIMULACRA2 heavily penalizes blocking artifacts but doesn't much care about blur. I agree that the JPEG versions look better at the same SSIMULACRA2 score.

jonsneyers
0 replies
2h10m

Humans generally tend to prefer smoothing over visible blocking artifacts. This is especially true when a direct comparison to the original image is not possible. Of course different humans have different tastes, and some do prefer blocking over blur. SSIMULACRA2 is based on the aggregated opinions of many thousands of people. It does care more about blur than metrics like PSNR, but maybe not as much as you do.

JyrkiAlakuijala
0 replies
3h53m

Ideally one would use human ratings.

The author of the blog post did exactly that in a previous blog post:

https://cloudinary.com/labs/cid22/plots

Human ratings are expensive and clumsy so people often use computed aka objective metrics, too.

The best OSS metrics today are butteraugli, dssim and ssimulacra. The author is using one of them. None of the codecs was optimized for those metrics, except jpegli partially.

porker
0 replies
10h10m

Yes, that was my takeaway from this: JPEG keeps edge sharpness really well (e.g. the eyelashes) while JXL and AVIF smooth all the detail out of the image.

izacus
0 replies
3h14m

Well, that's because JPEG is still using about twice as many bits per pixel, making the output size significantly larger.

Don't get swept away by false comparisons, JXL and AVIF look significantly better if you give them twice as much filesize to work with as well.

bmacho
0 replies
9h37m

You refer to this? https://res.cloudinary.com/jon/qp-low.png

Bitrates are in the left column, jpg low quality is the same size as jxl/avif med-low quality (0.4bpp), so you should compare the bottom left picture to the top mid and right pictures.

JyrkiAlakuijala
0 replies
10h7m

It is because JPEG is given 0.5 bits per pixel, where JPEG XL and AVIF are given around 0.22 and 0.2.

These images attempt to be at equal level of distortion, not at equal compression.

Bpps are reported beside the images.

In practice, use of quality 65 is rare on the internet and only used by the lowest-quality-tier sites. Quality 75 seems to be the usual poor quality and quality 85 the average. I use quality 94 yuv444 or better when I need to compress.

mips_r4300i
6 replies
10h3m

This is really impressive even compared to WebP. And unlike WebP, it's backwards compatible.

I have forever associated WebP with macroblocking, poor colors, and a general ungraceful degradation that doesn't really happen the same way even with old JPEG.

I am gonna go look at the complexity of the JXL decoder vs WebP. Curious if it's even practical to decode on embedded. JPEG is easily decodable, and you can do it in small pieces at a time to work within memory constraints.

bombcar
3 replies
2h35m

Everyone hates WebP because when you save it, nothing can open it.

That's improved somewhat, but the formats that will have an easy time winning are the ones that people can use, even if that means a browser should "save JPGXL as JPEG" for awhile or something.

ComputerGuru
2 replies
2h21m

Everyone hates webp for a different reason. I hate it because it can only do 4:2:0 chroma, except in lossless mode. Lossless WebP is better than PNG, but I will take the peace of mind of knowing PNG is always lossless over having a WebP and not knowing what was done to it.

149765
1 replies
2h6m

peace of mind of knowing PNG is always lossless

There is pngquant:

a command-line utility and a library for lossy compression of PNG images.

bombcar
0 replies
1h57m

You also have things like https://tinypng.com which do (basically) lossy PNG for you. Works pretty well.

CharlesW
1 replies
2h13m

And unlike WebP, it's backwards compatible.

No, JPEG XL files can't be viewed/decoded by software or devices that don't have a JPEG XL decoder.

Zardoz84
0 replies
1h24m

JPEG XL can be converted to/from JPEG without any loss of quality. See another comment that shows an example where doing JPEG -> JPEG XL -> JPEG generates a binary-exact copy of the original JPEG.

Yeah, this is not what we usually call backwards compatibility, but it allows usage like storing the images as JPEG XL and, on the fly, sending a JPEG to clients that can't use it, without any loss of information. WebP can't do that.

gaazoh
6 replies
9h28m

The inclusion of QOI in the lossless benchmarks made me smile. It's a basically irrelevant format, that isn't supported by default by any general-public software, that aims to be just OK, not even good, yet it has a spot on one of these charts (non-photographic encoding). Neat.

lifthrasiir
4 replies
9h26m

And yet didn't reach the Pareto frontier! It's quite obvious in hindsight though---QOI decoding is inherently sequential and can't be easily parallelized.

gaazoh
3 replies
9h10m

Of course it didn't; it wasn't designed to be either the fastest or the best. Just OK and simple. Yet in some cases it's not completely overtaken by the competition, and I think that's cool.

I don't believe QOI will ever have any sort of real-world practical use, but that's quite OK and I love it for it has made me and plenty of others look into binary file formats and compression and demystify it, and look further into it. I wrote a fully functional streaming codec for QOI, and it has taught me many things, and started me on other projects, either working with more complex file formats or thinking about how to improve upon QOI. I would probably never have gotten to this point if I tried the same thing starting with any other format, as they are at least an order of magnitude more complex, even for the simple ones.

p0nce
0 replies
6h16m

It can be interesting if you need fast decode at low complexity, and it's an easy-to-improve format (-20 to -30%). Base QOI isn't that great.

lonjil
0 replies
7h6m

Of course it didn't; it wasn't designed to be either the fastest or the best. Just OK and simple. Yet in some cases it's not completely overtaken by the competition, and I think that's cool.

Actually, there was a big push to add QOI to stuff a few years ago, specifically due to it being "fast". It was claimed that while it has worse compression, the speed can make it a worthy trade off.

shdon
0 replies
3h9m

GameMaker Studio has actually rather quickly jumped onto the QOI bandwagon, having 2 years ago replaced PNG textures with QOI (and added BZ2 compression on top) and found a 20% average reduction in size. So GameMaker Studio and all the games produced with it in the past 2 years or so do actually use QOI internally.

Not something a consumer knowingly uses, but also not quite irrelevant either.

btdmaster
4 replies
9h16m

Missing from the article is rav1e, which encodes AV1, and hence AVIF, a lot faster than the reference implementation aom. I've had cases where aom would not finish converting an image within a minute of waiting, where rav1e would do it in less than 10 seconds.

JyrkiAlakuijala
2 replies
8h46m

Is rav1e's Pareto curve ahead of libaom's Pareto curve?

Does fast rav1e look better than jpegli at high encode speeds?

btdmaster
1 replies
7h32m

Difficult to know without reproduction steps from the article, but I would think it behaves better than libaom for the same quality setting.

Edit: found https://github.com/xiph/rav1e/issues/2759

JyrkiAlakuijala
0 replies
2h0m

If rav1e had found better ways of encoding, why wouldn't the aom folks copy them into libaom?

jonsneyers
0 replies
7h10m

Both rav1e and libaom have a speed setting. At similar speeds, I have not observed huge differences in compression performance between the two.

TacticalCoder
4 replies
3h18m

Without taking into account whether JPEG XL shines on its own or not (which it may or may not), JPEG XL completely rocks for sure because it does this:

    .. $  ls -l a.jpg && shasum a.jpg
    ... 615504 ...  a.jpg
    716744d950ecf9e5757c565041143775a810e10f  a.jpg

    .. $  cjxl a.jpg a.jxl
    Read JPEG image with 615504 bytes.
    Compressed to 537339 bytes including container

    .. $  ls -l a.jxl
    ... 537339 ... a.jxl

But, wait for it:

    .. $  djxl a.jxl b.jpg
    Read 537339 compressed bytes.
    Reconstructed to JPEG.

    .. $  ls -l b.jpg && shasum b.jpg
    ... 615504 ... b.jpg
    716744d950ecf9e5757c565041143775a810e10f  b.jpg

Do you realize how many billions of JPEG files there are out there which people want to keep? If you recompress your old JPEG files using a lossy format, you lower their quality.

But with JPEG XL, you can save 15% to 30% and still, if you want, get your original JPG 100% identical, bit for bit.

That's wonderful.

P.S: I'm sadly on Debian stable (12 / Bookworm) which is on ImageMagick 6.9 and my Emacs uses (AFAIK) ImageMagick to display pictures. And JPEG XL support was only added in ImageMagick 7. I haven't looked more into that yet.
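For anyone in the same boat, one way to check whether a given ImageMagick 7 build was compiled with the JPEG XL delegate (commands assumed from the standard IM7 CLI; the delegate is a build-time option):

    $ magick -list format | grep -i jxl    # prints a JXL line only if the delegate is present
    $ magick a.jxl a.png                   # basic conversion once support is there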

izacus
2 replies
3h17m

I'm sure that will be hugely cherished by users who take screenshots of JPEGs so they can resend them on WhatsApp :P

F3nd0
1 replies
3h7m

This particular feature might not, but if said screenshots are often compressed with JPEG XL, they will be spared the generation loss that becomes blatantly visible in some other formats: https://invidious.protokolla.fi/watch?v=w7UDJUCMTng

IshKebab
0 replies
44m

Maybe. But to know for sure you need to offset the image and change encoder settings.

JyrkiAlakuijala
0 replies
2h47m

I managed to add that requirement to jpeg xl. I think it will be helpful to preserve our digital legacy intact without lossy re-encodings.

throwaway81523
2 replies
10h2m

Does JPEG XL have patent issues? I half remember something about that. Regular JPG seems fine to me. Better compression isn't going to help anyone since they will find other ways to waste any bandwidth available.

lifthrasiir
0 replies
9h51m

The main innovation claimed by Microsoft's rANS patent is about adapting the probability distribution, that is, being able to efficiently correct the distribution so that you can use fewer bits. While that alone is an absurd claim (it's a benefit shared with arithmetic coding and its variants!) and there is very clear prior art, JPEG XL doesn't dynamically vary the distribution, so it is thought to be unrelated to the patent anyway.

jonsneyers
0 replies
1h43m

No it doesn't.

And yes, regular JPEG is still a fine format. That's part of the point of the article. But for many use cases, better compression is always welcome. Also having features like alpha transparency, lossless, HDR etc can be quite desirable, and those things are not really possible in JPEG.

kasabali
1 replies
10h6m

I'm surprised mozjpeg performed worse than libjpeg-turbo at high quality settings. I thought its aim was having better pq than libjpeg-turbo at the expense of speed.

JyrkiAlakuijala
0 replies
4h8m

It is consistent with what I have seen, both in metrics and in eyeballing. MozJPEG gives good results around quality 75, but less good at 90 and above.

jug
1 replies
6h28m

Wow, that new jpegli encoder. Just wow. Look at those results. Haha, JPEG has many years left still.

kasabali
0 replies
4h45m

JPEG has many years left still

Such a shame arithmetic coding (which is already in the standard) isn't widely supported in the real world. Because converting Huffman coded images losslessly to arithmetic coding provides an easy 5-10% size advantage in my tests.

Alien technology from the future indeed.
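For concreteness, a sketch of that lossless transcode with the classic libjpeg tooling (assuming a jpegtran build with arithmetic coding compiled in; most browsers and apps won't open the result, which is exactly the support problem):

    $ jpegtran -arithmetic -copy all photo.jpg > photo_arith.jpg        # Huffman -> arithmetic, pixel-identical
    $ jpegtran -optimize -copy all photo_arith.jpg > photo_huffman.jpg  # back to optimized Huffman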

sandstrom
0 replies
1h1m

JPEG XL is awesome!

One thing I think would help with its adoption is if they would work with, e.g., the libvips team to better implement it.

For example, a streaming encoder and a streaming decoder would be the preferred integration method in libvips.

eviks
0 replies
9h44m

Welcome efficiency improvements

And in general, Jon's posts provide a pretty good overview on the topic of codec comparison

Pity such a great format is being held back by the much less rigorous reviews

dancemethis
0 replies
6h5m

"Pareto" being used outside the context of Brazil's best prank call ever (Telerj Prank) will always confuse me. I keep thinking, "what does the 'thin-voiced lawyer' have to do with statistics?"...

a-french-anon
0 replies
9h40m

Pretty good article, though I would have used oxipng instead of optipng in the lossless comparisons; it's the new standard there.

Zamicol
0 replies
10h24m

The new version of libjxl brings a very substantial reduction in memory consumption, by an order of magnitude, for both lossy and lossless compression. Also the speed is improved, especially for multi-threaded lossless encoding where the default effort setting is now an order of magnitude faster.

Very impressive! The article too is well written. Great work all around.

MikeCapone
0 replies
3h51m

I really hope this can become a new standard and be available everywhere (image tools, browsers, etc).

While in practice it won't change my life much, I like the elegance of using a modern standard with this level of performance and efficiency.

JyrkiAlakuijala
0 replies
1h51m

It is worth noting that the JPEG XL effort produced a nice new parallelism library called Highway. This library is powering not only JPEG XL but also Google's latest Gemma AI models.