Any reason why we're looking at uncompressed data? Some of the listed negative examples easily beat GMaps' 1.5 MB when compressed.
Also, I'll give a pass to dynamic apps like Spotify and GMail [1] if (and only if) the navigation after loading the page is fast. I would rather have something like Discord which takes a few seconds to update on startup, than GitLab, which makes me wait up to two seconds for every. single. click.
The current prioritisation of cold starts and static rendering is leading to a worse experience on some sites IMO. As an experiment, go to GitHub and navigate through the file tree. On my machine, this feels significantly snappier than the rest of GitHub. Coincidentally, it's also one of the only parts that is not rendered statically. I click through hundreds of GitHub pages daily. Please, just serve me an unholy amount of JavaScript once, and then cache as much as possible, rather than making me download the entire footer every time I want to view a pipeline.
[1]: These are examples; I haven't used GMail or Spotify.
Compression helps transfer, but your device still has to parse all of that code. This comes up in discussions about reach because there’s an enormous gap between iOS and Android CPU performance, and it gets worse when you look at the cheaper devices much of the public actually uses: some new Android devices sold today perform worse than a 2014 iPhone. If your developers are all using recent iPhones or flagship Android devices, it’s easy to miss how much all of that code bloat affects the median user.
https://infrequently.org/2024/01/performance-inequality-gap-...
I happen to develop a JS app that also has to be optimised for an Android phone from 2017. I don't think the amount of JS is in any way related to performance: you can make 1MB of JS perform just as poorly as 10MB.
In our case, the biggest performance issues were:
- Rendering too many DOM nodes at once - virtual lists help (rough sketch below).
- Using reactivity inefficiently.
- Random operations in libraries that were poorly optimised.
Finding those things was only possible by looking at the profiler. I don't think general statements like "less JS = better" help anyone. It helps to examine the size of webpages, but then you also have to put that information into context: how often does this page load new data? Once the data is loaded, can you work without further loading? Is the data batched, or do waterfalls occur? Is this a page that users will only visit once, or do they come back regularly? ...
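To illustrate the virtual-list point: the idea is to only create DOM nodes for the rows that are currently visible, instead of one node per data item. A minimal hand-rolled sketch (assuming fixed row heights and a container that already has a height; real libraries handle variable heights, buffering, accessibility, etc.):

    // Render only the visible slice of a large list into a scroll container.
    // Assumes a fixed row height; real virtual-list libraries handle more cases.
    function virtualList(container, items, rowHeight, renderRow) {
      const viewport = document.createElement('div');
      viewport.style.overflowY = 'auto';
      viewport.style.height = '100%';

      // The spacer gives the scrollbar the full height without creating all rows.
      const spacer = document.createElement('div');
      spacer.style.position = 'relative';
      spacer.style.height = `${items.length * rowHeight}px`;
      viewport.appendChild(spacer);
      container.appendChild(viewport);

      function update() {
        const first = Math.floor(viewport.scrollTop / rowHeight);
        const count = Math.ceil(viewport.clientHeight / rowHeight) + 1;
        spacer.textContent = '';                         // drop off-screen rows
        for (let i = first; i < Math.min(first + count, items.length); i++) {
          const row = renderRow(items[i]);
          row.style.position = 'absolute';
          row.style.top = `${i * rowHeight}px`;
          row.style.height = `${rowHeight}px`;
          spacer.appendChild(row);
        }
      }

      viewport.addEventListener('scroll', () => requestAnimationFrame(update));
      update();
    }

With something like this, scrolling through 100k items only ever touches a few dozen DOM nodes at a time.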
"- Rendering too many DOM nodes at once - virtual lists help"
Yup, despite all the improvements, the DOM is still slow and it is easy to make it behave even slower. Only update what is necessary, and be aware of the forced-reflow performance bottleneck: every time you change something, then read clientWidth for example, and then change something else, you make the DOM calculate (and possibly render) everything twice.
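A contrived sketch of that read/write interleaving (hypothetical code, just to show the shape of the problem and the usual fix of batching reads before writes):

    const els = Array.from(document.querySelectorAll('.item'));

    // Interleaved write/read: every offsetHeight read forces the browser to
    // flush the layout that was invalidated by the previous style write.
    els.forEach(el => {
      el.style.width = '50%';          // write (invalidates layout)
      el.dataset.h = el.offsetHeight;  // read (forces synchronous layout)
    });

    // Batched version: do all the reads first, then all the writes,
    // so the browser only has to recalculate layout once.
    const heights = els.map(el => el.offsetHeight);   // reads
    els.forEach((el, i) => {                          // writes
      el.style.width = '50%';
      el.dataset.h = heights[i];
    });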
I found the Chrome dev tools really helpful for spotting those and other issues. But sure, if every click updates EVERYTHING anyway, including waiting for the whole data transfer, when you just want some tiny part refreshed, you have other problems.
I think this should be more nuanced: the DOM itself has been fast for 10-15 years but things like layout are still a concern on large pages. The problem is that the DOM, like an ORM, can make it easy to miss when you’re requesting the browser do other work like recalculating layout, and also that as people started using heavier frameworks they started losing track of what triggers updates.
Lists are an interesting challenge because it’s surprisingly hard to beat a well-tuned browser implementation (overflow scrolling, etc.), but a lot of people still have IE6-era instincts and jump straight to implementing custom scrolling, only to find that there are a lot of native scrolling features which are hard to match. At some point they realize that what they really should have done was change the design to make layout easier to calculate (e.g. fixed or easily-calculated heights) or display fewer things at once.
It's faster than it was 10-15 years ago. It's still extremely slow.
You can't say things like "DOM is fast" and "oh, it's fast if you exclude literally everything that people want to be fast".
I don't know if you realise, but on the very same devices where you're complaining about "large pages and oh my god layout" people are routinely rendering millions of objects with complex logic and animations in under 5 milliseconds?
The DOM is excruciatingly slow.
Rendering a bunch of vertices in 3D is an “embarrassingly parallel” problem, you can scale it pretty much forever.
With all due respect, if you compare that to 2D layouting and stuff, you don’t know much about the topic.
Why are you sure he was talking about vertices or 3D in general?
I was. But people are always missing the point.
There's a range of options between "can render millions of objects with complex animation and logic" in a few milliseconds and "we will warn you when you have 800 dom nodes on a static page, you shouldn't do many updates or any useful animations"
Somehow people assume that what HTML does is so insanely hard that 2D layout should not just be a problem worthy of modern-day supercomputers, but should be so slow that you can see it with the naked eye.
The 2D layout in HTML is slow because the DOM (with the millions of conflicting hacks on top of it) is slow, not the other way around.
We could do most of the frankly laughable HTML layouts at least in the early 2000s, perhaps earlier.
Well, definitely earlier, as the Xerox work that influenced the Mac is from the 1970s.
"The 2D layout in HTML is slow because the DOM (with the millions of conflicting hacks on top of it) is slow, not the other way around."
Well yeah, all those hacks on top of HTML that we cannot get rid of because of backwards compatibility are probably the main reason the DOM is slow. HTML was made to view documents, after all, not to design UIs. And now it is what it is. But there are options now!
So yes, it is definitely possible to build snappy 2D layouts. I built one with HTML, using only a subset, and that worked out somewhat alright... but now I am switching to WebGL. And there is a world between them in terms of performance.
I think you’re using DOM to refer to the entire browser, not just what’s standardized as the DOM. Things like creating or modifying elements will run at tens of millions of operations per second on an old iPhone, _but_ there are operations like the one you mentioned which force the browser to do other work like style calculation and layout. If you inadvertently write code that does "DOM update, force recalc, DOM update" in a loop, it’s very easy to mistakenly think the DOM is the source of the performance problem rather than the standard web layout process, which has many ways for different elements to interact.
And, yes, I’m not unaware that different display models have different performance characteristics. Modern browsers can run into the millions-of-objects range, but fundamentally a web page is doing more work and there’s no way it’s going to match something which does less. This is why there have been various ways to turn off some of the expensive work (e.g. fixed table layout) and why APIs like canvas, WebGL, and WebGPU use different designs to allow people who need more control to avoid taking on costs their apps don’t need.
No, I'm referring to DOM as Document Object Model.
Not in the DOM :)
That's why I'm saying that the DOM is not fast. It's excruciatingly slow even for the most basic of things. It was, after all, designed to display a small static page with one or two images, and no amount of haphazard hacks accumulated on top of it over the years will change that. They will, actually, make it much worse :)
There is a reason why the same device that can render a million objects with complex animations and logic in under 5ms cannot guarantee smooth animations in the DOM. This is a good example: https://twitter.com/fabiospampinato/status/17495008973007301... (note: these are not complex animations and logic :) )
This is what I was talking about: you’re talking about the DOM but describing things like layout and rendering. Yes, nobody is saying that abstractions which do less won’t be faster - that’s why things like WebGL exist! – but most of the performance issues are due to things which aren’t supported in WebGL.
If you aren’t using something slow like React, you could do on the order of hundreds of thousands of element creations or updates per second in the mid-2010s – checking my logs, I was seeing 600k table rows added per second on Firefox in 2015 on a 2013 iMac (updates were much faster since they didn’t involve as much memory allocation), and browsers and hardware have both improved since then.
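For context, the kind of micro-benchmark that produces numbers like that is roughly the following (a sketch, not the original test; results vary a lot by browser, hardware, and what the rows contain):

    // Rough micro-benchmark: how fast can we create and append table rows?
    const table = document.createElement('table');
    const tbody = document.createElement('tbody');
    table.appendChild(tbody);
    document.body.appendChild(table);

    const ROWS = 100000;
    const t0 = performance.now();
    const frag = document.createDocumentFragment();
    for (let i = 0; i < ROWS; i++) {
      const tr = document.createElement('tr');
      const td = document.createElement('td');
      td.textContent = `row ${i}`;
      tr.appendChild(td);
      frag.appendChild(tr);
    }
    tbody.appendChild(frag);   // single insertion into the live document
    const elapsed = performance.now() - t0;
    console.log(`${Math.round(ROWS / (elapsed / 1000))} rows/second`);

Note that this measures element creation and insertion; the style/layout/paint work the thread is arguing about happens separately, after the script yields.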
To be clear, that’s never going to catch up with WebGL for the kinds of simple things you’re focused on - displaying a rectangle which doesn’t affect its peers other than transparency is a really tuned fast path with hardware acceleration – but that’s like complaining that a semi truck isn’t as fast as a Tesla. The web is centered on documents and text layout, so the better question is which one is fast enough for the kind of work you’re doing. If you need to move rectangles around, use WebGL - that’s why it exists! - but also recognize that you’re comparing unlike tools and being surprised that they’re different.
Ah yes, because layout and rendering are absolutely divorced from DOM and have nothing to do with it :)
You will never catch up with anything. It's amazing how people keep missing the point on purpose even if they eventually almost literally repeat what I write word for word, and find no issues with that.
Here are your words: "The web is centered on documents and text layout". Yes, yes it is. And it's barely usable for that. But then we've added lots of haphazard hacks on top of it. The rest I wrote here: https://news.ycombinator.com/item?id=39485437
But can you make 10MB of JS perform as well as a good 1MB of JS?
I'm not a JS developer, but I imagine the amount of JavaScript code isn't the most relevant part if most of it isn't being called. I mean, if you have some particularly heavy code that only runs when you click a button, does that really get parsed and cause overhead before the button is clicked?
It is parsed and loaded into memory, but not executed.
"Loaded" part is no longer true in major JS implementations (for example, [1]).
[1] https://v8.dev/blog/preparser
That’s a broader claim than that page supports. They describe how they avoid fully generating the internal representation and JITing it, but clearly even unused code is taking up memory and CPU time, so you’d want to review your app’s usage to make sure that it’s an acceptable level of work even on low-end devices, and also that your coding style doesn’t defeat some of those optimizations.
Depending on how the code is loaded, yes.
If all 10mb is in a single JS file, and that file is included in a normal script tag in the page’s HTML, then parsing the 10mb will block UI interaction as the page loads.
Once the browser parses 10mb, it’ll evaluate the top level statements in the script, which are the ones that would set up the click event handler you’re referencing.
If the entire page is rendered by JavaScript in the browser, then even drawing the initial UI to the screen is blocked by parsing JS.
The solution to this for big apps is to split your build artifact up into many separate JS files, analogous to DLLs in a C program. That way your entry point can be very small and quick to parse, then load just the DLLs you need to draw the first screen and make it interactive. After that you can either eagerly or lazily initialize the remaining DLLs depending on the performance tradeoffs (a rough sketch of this pattern follows below).
I work on Notion, 16mb according to this measurement. We work hard to keep our entry point module small, and load a lot of DLLs to get to that 16mb total. On a slow connection you’ll see the main document load and become interactive first, leaving the sidebar blank since it’s a lower priority, so we initialize it after the document editor. We aren’t necessarily using all 16mb of that code right away - a bunch of that is pre-fetching the DLLs for features/menus so that we’re ready to execute them as soon as you, say, click on the settings button, instead of having awkward lag while we download the settings DLL after you click on settings.
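The general shape of that splitting, sketched with dynamic import() (a generic illustration, not Notion’s actual build setup; the module names and element ids are made up):

    // Entry point stays small: render the first screen, then warm up the
    // rest in the background so later clicks don't wait on the network.
    async function start() {
      const { renderEditor } = await import('./editor.js'); // needed right now
      renderEditor(document.getElementById('app'));

      // Lower-priority chunks: fetch and evaluate after the editor is interactive.
      // Fall back to setTimeout where requestIdleCallback isn't available.
      (window.requestIdleCallback || setTimeout)(() => {
        import('./sidebar.js').then(m => m.renderSidebar());
        import('./settings.js');   // warm it up so the click below is instant
      });
    }

    // Heavy feature executed only when the user actually asks for it.
    document.getElementById('settings-button').addEventListener('click', async () => {
      const { openSettings } = await import('./settings.js'); // cached if already fetched
      openSettings();
    });

    start();

Bundlers like webpack, Rollup, or esbuild turn each dynamic import() into its own chunk, so this maps fairly directly onto the "DLL" analogy above.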
You can theoretically make a large JS app fast enough, but it's going to be an uphill battle.
You have to do regular bundle analysis, the cache won't work if you deploy too often, and package updates and new additions are likely to break the performance analysis you've just made.
"Less JS = better performance" is a simplified model, but very accurate in practice in my opinion, especially on large teams.
I totally agree - my point was simply that people sometimes focus on network bandwidth and forget that a huge JS file can be a problem even if it’s cached. What you’re talking about is the right way to do it - I try to get other developers to use older devices and network traffic shaping to get a feel for those subjective impressions, too, since it’s easy to be more forgiving in a dedicated testing session than when, say, you’re trying to use the app while traveling and learning that the number of requests mattered more than the compressed size, or that you need more robust error handling when one of the 97 requests fails.
Even decently powerful phones can have issues with some of these.
Substack is particularly infuriating: sometimes it lags so badly that it takes seconds to display scrolled text (and bottom-of-text references stop working). And that's on a 2016 flagship: the Samsung Galaxy S7! I shudder to think of the experience for slower phones...
(And Substack also manages to slow down to a glitchy crawl when there are a lot of (text only!) comments on my gaming desktop PC.)
Gmail is terrible. I don't know if it's just me, but I have to wait 20 seconds after marking an email as read before closing the tab, otherwise it's not saved as read.
Spotify has huge issues with network connectivity. Even if I download the album, it'll completely freak out when the network changes. A plain offline mode would be better than its attempt at staying online.
gmail still has the HTML view: https://mail.google.com/mail/u/0/h?ui=html
They have been saying for a while that they will shut it down in February, so it may only work for another 1-2 days.
Gmail has this annoying preference you can set: mark email as read if viewed for x seconds. Mine was set to 3 seconds, which I guess is why I would sometimes get a reply on a thread and have to refresh multiple times to get rid of the unread status.
Maybe that’s related?
GitHub's probably the worst example of "Pjax" or HTMX-style techniques out there at this point…I would definitely not look at that and paint a particular picture of that architecture overall. It's like pointing at a particularly poor example of a SPA and then saying that's why all SPAs suck.
Is there a good example of a reasonably big/complex application using the pjax/htmx style that sucks less? Because GitHub isn't making a good case for that technology.
I'm inclined to agree, but the same thing happens on GitLab.
And it's also the only part of it that doesn't work on slow connections.
I've had a slow internet connection for the past week, and the GitHub file tree literally doesn't work if you click on it on the website, because it tries to load the file through some scripts and fails.
However, if, instead of clicking on a file, I copy its URL and paste it into the browser URL bar, it loads properly.
Wow, you're right. I just reproduced that by throttling the network.
But actually, that first click from the overview is still an HTML page. Once you're in the master-detail view, it works fast even when throttled.
I have a connection that might be considered slow by most HN readers (1~2MB/s), and the new GitHub file viewer has been a blessing. So snappy compared to everything else.
Interesting that you mention the GitHub file tree. I recently encountered periodic freezing of that whole page. I profiled it for a bit and found that every few seconds it spends something like 5 seconds recomputing relative timestamps on the main thread.
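Out of curiosity, here is a sketch of what a cheaper approach could look like: one shared slow timer that only touches the timestamps currently on screen (hypothetical code, not what GitHub actually does):

    // Update "3 minutes ago" labels without hammering the main thread:
    // a single shared timer, applied only to elements that are visible.
    const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
    const visible = new Set();

    const observer = new IntersectionObserver(entries => {
      for (const e of entries) {
        if (e.isIntersecting) visible.add(e.target);
        else visible.delete(e.target);
      }
    });
    document.querySelectorAll('time[datetime]').forEach(el => observer.observe(el));

    setInterval(() => {
      const now = Date.now();
      for (const el of visible) {
        const minutes = Math.round((Date.parse(el.dateTime) - now) / 60000);
        el.textContent = rtf.format(minutes, 'minute');
      }
    }, 30000);   // relative times don't need sub-second precision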
Yes; this started happening after they rolled out the new version of their UI built with React several months ago.
Because for a single page load, decompressing and using the scripts takes time, RAM space, disk space (more scratch space used as more RAM gets used), and power (battery drain from continually executing scripts). Caching can prevent the power and time costs of downloading and decompressing, but not the costs of using. My personal rule of thumb is: the bigger the uncompressed Javascript load, the more code the CPU continually executes as I move my mouse, press any key, scroll, etc. I would be willing to give up a bit of time efficiency for a bit of power efficiency. I'm also willing to give up prettiness for staticness, except where CSS can stand in for JS.
Or maybe I'm staring at a scapegoat when the actual/bigger problem is sites which download more files (latent bloat and horrendously bad for archival) when I perform actions other than clicking to different pages corresponding to different URLs. (Please don't have Javascript make different "pages" show up with the same URL in the address bar. That's really bad for archival as well.)
Tangent: Another rule of thumb I have: the bigger the uncompressed Javascript load, the less likely the archived version of the site will work properly.
While you are right that there is a cost, the real question is whether this cost is significant. 10 MB is still very small in many contexts. If that is the price to pay for better dev ex and more products, then I don't see the issue.