JavaScript Bloat in 2024

SebastianKra
35 replies
1d20h

Any reason why we're looking at uncompressed data? Some of the listed negative examples easily beat GMaps 1.5mb when compressed.

Also, I'll give a pass to dynamic apps like Spotify and GMail [1] if (and only if) the navigation after loading the page is fast. I would rather have something like Discord which takes a few seconds to update on startup, than GitLab, which makes me wait up to two seconds for every. single. click.

The current prioritisation of cold starts and static rendering is leading to a worse experience on some sites IMO. As an experiment, go to GitHub and navigate through the file tree. On my machine, this feels significantly snappier than the rest of GitHub. Coincidentally, it's also one of the only parts that is not rendered statically. I click through hundreds of GitHub pages daily. Please, just serve me an unholy amount of JavaScript once, and then cache as much as possible, rather than making me download the entire footer every time I want to view a pipeline.

[1]: These are examples. I haven't used GMail or Spotify.

acdha
21 replies
1d20h

Compression helps transfer but your device still has to parse all of that code. This comes up in discussions about reach because there’s an enormous gap between iOS and Android CPU performance, which gets worse when you look at the cheaper devices much of the public uses: new Android devices sold today perform worse than a 2014 iPhone. If your developers are all using recent iPhones or flagship Android devices, it’s easy to miss how much all of that code bloat affects the median user.

https://infrequently.org/2024/01/performance-inequality-gap-...

SebastianKra
19 replies
1d19h

I happen to develop a JS-App that also has to be optimised for an Android Phone from 2017. I don't think the amount of JS is in any way related to performance. You can make 1MB of JS perform just as poorly as 10MB.

In our case, the biggest performance issues were:

- Rendering too many DOM nodes at once - virtual lists help (see the sketch at the end of this comment).

- Using reactivity inefficiently.

- Random operations in libraries that were poorly optimised.

Finding those things was only possible by looking at the profiler. I don't think general statements like "less JS = better" help anyone. It helps to examine the size of webpages, but then you also have to put that information into context: how often does this page load new data? Once the data is loaded, can you work without further loading? Is the data batched, or do waterfalls occur? Is this a page that users will only visit once, or do they come back regularly? ...
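
For the virtual-list point above, here is a rough vanilla-JS windowing sketch, assuming a fixed row height and a scrollable container (the names and numbers are made up for illustration, and it assumes the row text is trusted):

    // Render only the rows that intersect the viewport instead of all of them.
    const ROW_HEIGHT = 24; // px, assumed fixed

    function renderWindow(container, rows) {
      const first = Math.floor(container.scrollTop / ROW_HEIGHT);
      const visible = Math.ceil(container.clientHeight / ROW_HEIGHT) + 1;
      const slice = rows.slice(first, first + visible);

      // A tall spacer keeps the scrollbar proportional to the full list.
      container.innerHTML =
        '<div style="position:relative; height:' + rows.length * ROW_HEIGHT + 'px">' +
        slice.map((text, i) =>
          '<div style="position:absolute; top:' + (first + i) * ROW_HEIGHT + 'px">' + text + '</div>'
        ).join('') +
        '</div>';
    }

    // container.addEventListener('scroll', () => renderWindow(container, rows));

Libraries do the same thing with smarter scroll handling, but even this keeps the live DOM at a few dozen nodes instead of thousands.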

lukan
10 replies
1d13h

"- Rendering too many DOM nodes at once - virtual lists help"

Yup, despite all the improvements, the DOM is still slow and it is easy to make it behave even slower. Only update what is necessary, and be aware of forced reflow as a performance bottleneck. Every time you change anything, then read clientWidth for example, and then change something else, you make the DOM calculate (and possibly render) everything twice.
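
A minimal sketch of that interleaved read/write pattern versus a batched version (the element names are hypothetical; the point is that reading clientWidth right after a style change forces a synchronous layout):

    // Layout thrashing: each iteration writes a style, then reads clientWidth,
    // which forces the browser to recalculate layout on every pass.
    function resizeAllSlow(items, container) {
      for (const el of items) {
        el.style.width = '50%';              // write
        console.log(container.clientWidth);  // read -> forced reflow
      }
    }

    // Batched version: read once, then do all the writes,
    // so layout is recalculated at most once afterwards.
    function resizeAllFast(items, container) {
      const width = container.clientWidth;   // single read
      for (const el of items) {
        el.style.width = (width / 2) + 'px'; // writes only
      }
    }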

I found the Chrome dev tools to be really helpful for spotting those and other issues. But sure, if you update EVERYTHING anyway on every click, including waiting for the whole data transfer, when you just want some tiny parts refreshed, you have other problems anyway.

acdha
9 replies
1d7h

the DOM is still slow and it is easy to make it behave even slower

I think this should be more nuanced: the DOM itself has been fast for 10-15 years, but things like layout are still a concern on large pages. The problem is that the DOM, like an ORM, can make it easy to miss when you’re asking the browser to do other work like recalculating layout, and also that as people started using heavier frameworks, they lost track of what triggers updates.

Lists are an interesting challenge because it’s surprisingly hard to beat a well-tuned browser implementation (overflow scrolling, etc.), but a lot of people still have the IE6 instincts and jump to implementing custom scrolling, only to find that there are a lot of native scrolling features which are hard to match. At some point they realize that what they really should have done was change the design to make layout easier to calculate (e.g. fixed or easily-calculated heights) or display fewer things at once.

troupo
8 replies
1d5h

I think this should be more nuanced: the DOM itself has been fast for 10-15 years

It's faster than it was 10-15 years ago. It's still extremely slow.

things like layout are still a concern on large pages.

it easy to miss when you’re requesting the browser do other work like recalculating layout

You can't say things like "DOM is fast" and "oh, it's fast if you exclude literally everything that people want to be fast".

and also that as people started using heavier frameworks they started losing track of what triggers updates.

I don't know if you realise, but on the very same devices where you're complaining about "large pages and oh my god layout" people are routinely rendering millions of objects with complex logic and animations in under 5 milliseconds?

The DOM is excruciatingly slow.

kaba0
3 replies
1d1h

Rendering a bunch of vertices in 3D is an “embarrassingly parallel” problem; you can scale it pretty much forever.

With all due respect, if you compare that to 2D layouting and stuff, you don’t know much about the topic.

lukan
2 replies
1d

Why are you sure he was talking about vertices or 3D in general?

troupo
1 replies
1d

I was. But people are always missing the point.

There's a range of options between "can render millions of objects with complex animation and logic" in a few milliseconds and "we will warn you when you have 800 dom nodes on a static page, you shouldn't do many updates or any useful animations"

Somehow people assume that what HTML does is so insanely hard that the problem of 2D layout should not just tax modern-day supercomputers, but be so slow that you can see it with the naked eye.

The 2D layout in HTML is slow because the DOM (with the millions of conflicting hacks on top of it) is slow, not the other way around.

We could do most of these frankly laughable HTML layouts at least in the early 2000s, perhaps earlier.

Well, definitely earlier, as the Xerox work that influenced the Mac is from the 1970s.

lukan
0 replies
23h36m

"The 2D layout in HTML is slow because the DOM (with the millions of conflicting hacks on top of it) is slow, not the other way around."

Well yeah, all those hacks of HTML that we cannot get rid of because of backwards compatibility are probably the main reason the DOM is slow. HTML was made to view documents, after all, not to design UIs. And now it is what it is. But there are options now!

So yes, it is definitely possible to build snappy 2D layouts. I built one with HTML, using only a subset, and that worked out somewhat alright... but now I am switching to WebGL. And there is a world in between in terms of performance.

acdha
3 replies
1d4h

I think you’re using DOM to refer to the entire browser, not just what’s standardized as the DOM. Things like creating or modifying elements will run at tens of millions per second on an old iPhone _but_ there are operations like the one you mentioned which force the browser to do other work like style calculation and layout, and if you inadvertently write code which is something like “DOM update, force recalc, DOM update” in a loop it’s very easy to mistakenly think that the DOM is the source of the performance problem rather than things like the standard web layout process having many ways for different elements to interact.

And, yes, I’m not unaware that different display models have different performance characteristics. Modern browsers can run into the millions-of-objects range, but fundamentally a web page is doing more work and there’s no way it’s going to match something which does less. This is why there have been various ways to turn off some of the expensive work (e.g. fixed table layout) and why APIs like canvas, WebGL, and WebGPU use different designs to allow people who need more control to avoid taking on costs their apps don’t need.

troupo
2 replies
1d4h

I think you’re using DOM to refer to the entire browser, not just what’s standardized as the DOM.

No, I'm referring to DOM as Document Object Model.

Things like creating or modifying elements will run at tens of millions per second on an old iPhone

Not in the DOM :)

but fundamentally a web page is doing more work and there’s no way it’s going to match something which does less.

That's why I'm saying that the DOM is not fast. It's excruciatingly slow even for the most basic of things. It was, after all, designed to display a small static page with one or two images, and no amount of haphazard hacks accumulated on top of it over the years will change that. They will, actually, make it much worse :)

There is a reason why the same device that can render a million objects doing complex animations and logic in under 5ms cannot guarantee smooth animations in the DOM. This is a good example: https://twitter.com/fabiospampinato/status/17495008973007301... (note: these are not complex animations and logic :) )

acdha
1 replies
7h14m

No, I'm referring to DOM as Document Object Model.

This is what I was talking about: you’re talking about the DOM but describing things like layout and rendering. Yes, nobody is saying that abstractions which do less won’t be faster - that’s why things like WebGL exist! – but most of the performance issues are due to things which aren’t supported in WebGL.

If you aren’t using something slow like React you could do on the order of hundreds of thousands of element creations or updates per second in the mid-2010s – checking my logs, I was seeing 600k table rows added per second on Firefox in 2015 on a 2013 iMac (updates were much faster since they didn’t have as much memory allocation), and browsers and hardware have both improved since then.
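
If anyone wants to sanity-check that kind of number on their own hardware, here is a rough micro-benchmark (it only times element creation and insertion via a fragment, not styling or the layout of a real page, so treat the result as an upper bound):

    // How many table rows can this browser build and attach per second?
    function benchRows(count = 100000) {
      const table = document.createElement('table');
      const frag = document.createDocumentFragment();
      const t0 = performance.now();
      for (let i = 0; i < count; i++) {
        const tr = document.createElement('tr');
        const td = document.createElement('td');
        td.textContent = 'row ' + i;
        tr.appendChild(td);
        frag.appendChild(tr);
      }
      table.appendChild(frag);
      document.body.appendChild(table);
      const seconds = (performance.now() - t0) / 1000;
      return Math.round(count / seconds) + ' rows/s';
    }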

To be clear, that’s never going to catch up with WebGL for the kinds of simple things you’re focused on - displaying a rectangle which doesn’t affect its peers other than transparency is a really tuned fast path with hardware acceleration – but that’s like complaining that a semi truck isn’t as fast as a Tesla. The web is centered on documents and text layout, so the better question is which one is fast enough for the kind of work you’re doing. If you need to move rectangles around, use WebGL - that’s why it exists! - but also recognize that you’re comparing unlike tools and being surprised that they’re different.

troupo
0 replies
4h16m

This is what I was talking about: you’re talking about the DOM but describing things like layout and rendering.

Ah yes, because layout and rendering are absolutely divorced from DOM and have nothing to do with it :)

that’s never going to catch up with WebGL for the kinds of simple things you’re focused on - displaying a rectangle which doesn’t affect its peers other than transparency is a really tuned fast path with hardware acceleration

You will never catch up with anything. It's amazing how people keep missing the point on purpose even if they eventually almost literally repeat what I write word for word, and find no issues with that.

Here are your words: "The web is centered on documents and text layout". Yes, yes it is. And it's barely usable for that. But then we've added lots of haphazard hacks on top of it. The rest I wrote here: https://news.ycombinator.com/item?id=39485437

guappa
5 replies
1d12h

You can make 1MB of JS perform just as poorly as 10MB.

But can you make 10MB of JS perform as well as a good 1MB of JS?

hnarn
4 replies
1d12h

I'm not a JS developer, but I imagine that the amount of JavaScript code isn't the most relevant part if most of it isn't being called. I mean, if you have some particularly heavy code that only runs when you click a button, does it really get parsed and cause overhead before the button is clicked?

guappa
2 replies
1d11h

It is parsed and loaded into memory, just not executed.

lifthrasiir
1 replies
1d10h

"Loaded" part is no longer true in major JS implementations (for example, [1]).

[1] https://v8.dev/blog/preparser

acdha
0 replies
1d7h

That’s a broader claim than that page supports. They describe how they avoid fully generating the internal representation and JITing it, but clearly even unused code is taking up memory and CPU time, so you’d want to review your app’s usage to make sure it’s doing acceptable levels of work even on low-end devices, and also that your coding style doesn’t defeat some of those optimizations.
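
One concrete example of coding style interacting with those optimizations, based on the PIFE heuristic described in that V8 post (whether the eager hint helps or wastes work depends on whether the function really runs at startup):

    // Plain function declaration: typically only pre-parsed lazily,
    // so the full parse/compile cost is deferred until the first call.
    function rarelyUsedFeature() {
      // ...lots of code...
    }

    // Wrapping a function expression in parentheses marks it as
    // "possibly invoked", which engines like V8 take as an eager-compile hint.
    const startupWork = (function heavySetup() {
      // ...lots of code that should run right away...
    });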

jitl
0 replies
1d4h

Depending on how the code is loaded, yes.

If all 10mb is in a single JS file, and that file is included in a normal script tag in the page’s HTML, then parsing the 10mb will block UI interaction as the page loads.

Once the browser parses 10mb, it’ll evaluate the top level statements in the script, which are the ones that would set up the click event handler you’re referencing.

If the entire page is rendered by JavaScript in the browser, then even drawing the initial UI to the screen is blocked by parsing JS.

The solution to this for big apps is to split your build artifact up into many separate JS files, analogous to DLLs in a C program. That way your entry point can be very small and quick to parse, then load just the DLLs you need to draw the first screen and make it interactive. After that you can either eagerly or lazily initialize the remaining DLLs depending on performance tradeoff.
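
A hedged sketch of that pattern using standard dynamic import() (the module names are hypothetical; in practice a bundler does the actual splitting):

    // Entry point stays small: render the first screen, then warm up the rest.
    import { renderDocument } from './editor-core.js'; // small, loaded up front

    renderDocument(document.getElementById('app'));

    // Prefetch a lower-priority "DLL" after the document is interactive...
    const settingsModule = import('./settings-panel.js'); // starts downloading now
    import('./sidebar.js').then((m) => m.initSidebar());  // lazily initialize

    // ...so a click can use it without an awkward download pause.
    document.getElementById('settings-button').addEventListener('click', async () => {
      const { openSettings } = await settingsModule;
      openSettings();
    });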

I work on Notion, 16mb according to this measurement. We work hard to keep our entry point module small, and load a lot of DLLs to get to that 16mb total. On a slow connection you’ll see the main document load and become interactive first, leaving the sidebar blank since it’s a lower priority, so we initialize it after the document editor. We aren’t necessarily using all 16mb of that code right away - a bunch of that is pre-fetching the DLLs for features/menus so that we’re ready to execute them as soon as you, say, click on the settings button, instead of having awkward lag while we download the settings DLL after you click on settings.

realusername
0 replies
1d13h

You can theoretically have a reasonably fast large JS app, but it's going to be an uphill battle.

You have to do regular bundle analysis, otherwise the cache won't work if you deploy too often, and package updates and new additions are likely to break the performance analysis you've just made.

Less JS = better performance is a simplified model but very accurate in practice in my opinion, especially on large teams.

acdha
0 replies
1d19h

It helps to examine the size of webpages, but then you have to also put that information into context: how often does this page load new data? once the data is loaded, can you work without further loading? Is the data batched, or do waterfalls occur? Is this a page that users will only visit once, or do they come regularly?

I totally agree - my point was simply that people sometimes focus on network bandwidth and forget that a huge JS file can be a problem even if it’s cached. What you’re talking about is the right way to do it - I try to get other developers to use older devices and network traffic shaping to get an idea for those subjective impressions, too, since it’s easy to be more forgiving when you’re focused on a dedicated testing session than, say, if you’re trying to use it while traveling and learning that the number of requests mattered more than the compressed size or that you need more robust error handling when one of the 97 requests fails.

BlueTemplar
0 replies
1d13h

Even decently powerful phones can have issues with some of these.

Substack is particularly infuriating: sometimes it lags so badly that it takes seconds to display scrolled text (and bottom of text references stop working). And that's on a 2016 flagship: Samsung Galaxy S7! I shudder to think of the experience for slower phones...

(And Substack also manages to slow down to a glitchy crawl when there are a lot of (text only !) comments on my gaming desktop PC.)

willsmith72
2 replies
1d14h

gmail is terrible, idk if it's just me but i have to wait 20 seconds after marking an email as read before closing the tab. otherwise it's not saved as read

spotify has huge issues with network connectivity, even if i download the album it'll completely freak out as the network changes. plain offline mode would be better than its attempt at staying online

bmacho
0 replies
1d4h

gmail still has the HTML view: https://mail.google.com/mail/u/0/h?ui=html

They have been saying for a while that they will shut it down in February, so it may only work for another 1-2 days.

avgcorrection
0 replies
1d10h

Gmail has this annoying preference which you can set: mark email as read if viewed for x seconds. Mine was set to 3 seconds, which is I guess why I sometimes would get a reply on a thread and had to refresh multiple times to get rid of the unread status.

Maybe that’s related?

jaredcwhite
2 replies
1d19h

GitHub's probably the worst example of "Pjax" or HTMX-style techniques out there at this point…I would definitely not look at that and paint a particular picture of that architecture overall. It's like pointing at a particularly poor example of a SPA and then saying that's why all SPAs suck.

agos
0 replies
1d11h

is there a good example of a reasonably big/complex application using the pjax/htmx style that sucks less? Because GitHub isn't making a good case for that technology

SebastianKra
0 replies
1d19h

I'm inclined to agree, but the same thing happens on GitLab.

flexagoon
2 replies
1d19h

go to GitHub and navigate through the file tree. On my machine, this feels significantly snappier than the rest of GitHub. Coincidentally, it's also one of the only parts that is not rendered statically

And it's also the only part of it that doesn't work on slow connections.

I've had a slow internet connection for the past week, and GitHub file tree literally doesn't work if you click on it on the website, because it tries to load it through some scripts and fails.

However, if, instead of clicking on a file, I copy its URL and paste it into the browser URL bar, it loads properly.

SebastianKra
1 replies
1d18h

Wow, you're right. I just reproduced that by throttling the network.

But actually, that first click from the overview is still an HTML page. Once you're in the master-detail view, it works fast even when throttled.

rarafael
0 replies
1d18h

I have a connection that might be considered slow by most HN readers (1~2MB/s) and the new github file viewer has been a blessing. So snappy compared to everything else

panstromek
1 replies
1d11h

Interesting that you mention the GitHub file tree. I recently encountered periodic freezing of that whole page. I profiled it for a bit and found out that every few seconds it spends like 5 seconds recomputing relative timestamps on the main thread.
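
For anyone who wants to catch that kind of main-thread stall without a full profiling session, the Long Tasks API is a quick first pass (it reports tasks over 50ms; browser support varies):

    // Log every main-thread task longer than 50ms and how long it took.
    const longTaskObserver = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log('long task:', Math.round(entry.duration) + 'ms', entry);
      }
    });
    longTaskObserver.observe({ entryTypes: ['longtask'] });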

azangru
0 replies
1d11h

I recently encountered a periodic freezing of that whole page.

Yes; this started happening after they rolled out the new version of their UI built with React several months ago.

hn_acker
1 replies
1d17h

Any reason why we're looking at uncompressed data? Some of the listed negative examples easily beat GMaps 1.5mb when compressed.

Because for a single page load, decompressing and using the scripts takes time, RAM space, disk space (more scratch space used as more RAM gets used), and power (battery drain from continually executing scripts). Caching can prevent the power and time costs of downloading and decompressing, but not the costs of using. My personal rule of thumb is: the bigger the uncompressed Javascript load, the more code the CPU continually executes as I move my mouse, press any key, scroll, etc. I would be willing to give up a bit of time efficiency for a bit of power efficiency. I'm also willing to give up prettiness for staticness, except where CSS can stand in for JS. Or maybe I'm staring at a scapegoat when the actual/bigger problem is sites which download more files (latent bloat and horrendously bad for archival) when I perform actions other than clicking to different pages corresponding to different URLs. (Please don't have Javascript make different "pages" show up with the same URL in the address bar. That's really bad for archival as well.)

Tangent: Another rule of thumb I have: the bigger the uncompressed Javascript load, the less likely the archived version of the site will work properly.

sbergot
0 replies
1d10h

While you are right that there is a cost, the real question is whether this cost is significant. 10 MB is still very small in many contexts. If that is the price to pay for a better dev experience and more products, then I don't see the issue.

wruza
25 replies
1d14h

10MB, 12MB, …

Compare it to people who really care about performance — Pornhub, 1.4 MB

Porn was always actual web hi-tech with good engineering, not these joke-level “tech” giants. Can’t remember a single time they’d screw up basic ui/ux, content delivery or common sense.

devjab
16 replies
1d13h

I never really understood why SPAs became so popular on the web. It’s like we suddenly and collectively became afraid of the page reload on websites just because it’s not a wanted behaviour in actual web applications.

I have worked with enterprise applications for two decades, and with some that were built before I was born. And I think React has been the absolute best frontend for these systems compared to everything that came before. You're free to insert Angular/Vue/whatever by the way. But these are designed to replace all the various horrible client/server UIs that came before. For a web page that's hardly necessary unless you're g-mail, Facebook or similar, where you need the interactive and live content updates because of how these products work. But for something like pornhub? Well, PHP serves them just fine, and this is true for most web sites really. Just look at HN and how many people still vastly prefer the old.reddit.com site to their modern SPA. Hell, many people would probably still prefer an old.Facebook to the newer, much slower version.

figmert
4 replies
1d11h

It’s like we suddenly and collectively became afraid of the page reload on websites

I used to work at a place where page reloads were constantly brought up as a negative. They couldn't be bothered to fix the slow page loads and instead avoided page changes.

I argued several times that we should improve performance instead of caring about page reloads, but never got through to anyone (in fairness, it was probably mostly cos of a senior dev there).

At some point a new feature was being developed, and instead of just adding it to our existing product, it was decided to build it as a separate product and embed it in an iframe.

ozim
3 replies
1d10h

Oh my god iframes should be removed from browsers so no one should be able to use them ever.

ezreth
1 replies
1d5h

I think there are a couple of legitimate uses of iframes, but in most cases, it’s not something you want to use.

If you want to take payments in your application using a vendor like Nodus, i.e. if you have an app that is being used by a CSR/salesperson and they need to take CC or eCheck data, showing the payment application in an iframe (within the context of the app) lets you have that feature within the application but, importantly, means you don’t have to be PCI DSS compliant yourself.

tstenner
0 replies
2h4m

Opening the payment flow in a separate window while displaying a message like "please complete the payment with your chosen payment provider" is almost as good from a usability standpoint and a lot better when considering security best practices.

dudus
2 replies
1d11h

Why did SPAs become popular? Because they "feel" native on mobile. Now you have page transitions and prefetch, which really should kill this use case.

IMO the bloat he talks about in the post is not representative of 2024. Pretty much all frontend development of the last 2 years has been moving away from SPAs, with smaller builds and faster loading times. Fair enough, it's still visible on a lot of sites. But I'd argue it's probably better now than a couple of years ago.

wruza
0 replies
1d10h

Personally, I don't see why one would hate the SPA concept. I like it, because it lowers traffic and makes interactions... normal? And it uses local computing when possible, instead of adding 2 and 2 over HTTP.

When people hate SPAs they actually hate terminally overweight abominations that no one (i.e. google) keeps in check, and that's the reason they are like that. Why should on-page interaction be slow or cost one 20MB of downloads? No reason. replaceChild() uses the same tech as <a href>. SPA is not a synonym for "impotent incompetence 11/10".

If humanity hadn't invented SPAs, all these companies would just make half-a-minute-loading .aspx pages instead. Because they can't do anything else.

troupo
0 replies
1d5h

Why SPAs became popular? Because they "feel" native on mobile.

They... Don't :) The vast majority of them are a shittier, slower, clunkier version of a native app.

IMO the bloat he talks about on the post is not representative of 2024. Pretty much all frontend development of the last 2 years has been moving away from SPAs with smaller builds and faster loading times.

He literally lists Vercel there. One of the leaders in "oh look at our beautiful fast slim apps". Their front page loads 6.5 megabytes of javascript in 131 requests to those smaller bundles.

diggan
2 replies
1d11h

But for something like pornhub? Well PHP serves them just fine,

Kind of fun to make this argument for Pornhub when visiting their website with JavaScript disabled just seems to render a blank page :)

how many people still vastly prefer the old.reddit.com site to their modern SPA

Also a fun argument, the times I've seen analytics on it, old.reddit.com seems to hover around/below 10% of the visitors to subs. But I bet this varies a lot by the subreddit.

nickserv
1 replies
1d8h

There might be a legal requirement for them to make sure you accept the "I'm 18" dialog before displaying anything.

diggan
0 replies
1d8h

Been possible to do without JavaScript before, and is still possible today :) Where there is a will, there is a way. I'd still argue Pornhub is a bad example for the point parent was making.

ralusek
1 replies
1d12h

I love SPAs. I love making them, and I love using them. The thing is, they have to be for applications. When I'm using an application, I am willing to eat a slower initial load time. Everything after that is faster, smoother, more dynamic, more responsive.

ihateolives
0 replies
1d11h

Everything after that is faster, smoother, more dynamic, more responsive.

IF and only IF you have at least a mid- to high-end computer and smartphone. If you have low-end hardware you first have to wait for that 20MB to load AND then get to use a slow and choppy app afterwards. Worst of both worlds, but hey, it's built according to modern standards!

ffsm8
1 replies
1d5h

I feel like the term SPA has since ceased to have any meaning with the HN crowd.

I mean I do generally agree with your sentiment that SPAs are way overused, but several of the examples in TFA aren't SPAs, which should already show you how misguided your opinion is.

Depending on the framework, SPAs can start at ~10KB. Really, the SPA is not the thing that's causing the bloat.

ch_sm
0 replies
23h24m

You’re right. People miss that SPAs aren’t heavy by definition; they acquire bloat by accident and/or over time. You can totally have a super-useful, full-featured, good-looking and fast SPA in 150 KB gzipped, including libraries. It just takes knowledge and discipline to do that.

littlecranky67
0 replies
1d1h

Well, to stay with OP's example porn website: because it is not an SPA you can't really make a playlist play in full screen - the hard page reload will require you to interact to go fullscreen again on every new video. Not an issue in SPAs (see YouTube).

thakoppno
2 replies
1d14h

Can’t remember a single time they’d screw up basic ui/ux, content delivery or common sense.

There are many, many cases of porn websites breaking the law.

yomly
1 replies
1d12h

Yes - writing PHP in 2024 is a crime that we should hold PH accountable for.

goatlover
0 replies
1d8h

Modern PHP is a pretty good language. Not my favorite, but it's not antiquated. And there's tons of websites built with it (granted Wordpress and Drupal are the majority of them).

npteljes
2 replies
1d8h

Porn was always actual web hi-tech with good engineering, not these joke-level “tech” giants. Can’t remember a single time they’d screw up basic ui/ux, content delivery or common sense.

Well, I do remember the myriad of shady advertisement tactics that porn sites use(d), like popups, popunders, fake content leading to other similar aggregation sites, opening partner websites instead of content, poisoning the SEO results as much as they can, and so on. Porn is not the tech driver people make it out to be; even the popular urban legend around Betamax vs VHS is untrue, and so is the claim that they drive internet innovation. There is a handful of players who engineer a high quality product, but it's hardly representative of the industry as a whole. Many others create link farms, dummy content, clone websites, false advertisement, gaming the search results, and so on. Porn is in high demand, it's a busy scene, and so many things happen related to it, and that's about it.

The current state of snappy top-level results is I think the result of competition. If one site's UX is shitty, I think the majority of the viewers would just leave for the next one, as there is a deluge of free porn on the internet. So, the sites actually have to optimize for retention.

These other websites have different incentives, so the optimized state is different too. The user is, of course, important, but if they also have shareholders, content providers, exclusive business deals, monopoly, then they don't have to optimize for user experience that much.

wruza
1 replies
1d6h

I generally agree and understand. The reasoning is fine. But comments like this make me somewhere between sad and contemptuous towards the field. This neutral explanation supports a baseline that no professional could vocalize anywhere and still save face. I'm talking youtube focus & arrows issues here, not rocket science. Container alignment issues [1], scrolling issues [2], cosmic levels of bloat [$subj], you name it. Absolutely trivial things you can't screw up if you're at all hireable. It's not "unoptimized", it's distilled personal/group incompetence of those who ought to be the best. That I cannot respect.

[1] https://www.youtube.com/watch?v=yabDCV4ccQs -- scroll to comments and/or switch between default/theater mode if not immediately obvious

[2] half of the internet, especially ux blogs

npteljes
0 replies
1d4h

Yeah, it's not my favorite experience either, and I found it really hard to be indifferent, especially in my earlier years. The goal is almost never a perfect product. And it's also a result of many people's involvement, who often have different values than I do, or what I expect of them.

kmlx
0 replies
1d9h

i worked in that field. one of the main reasons adult entertainment is optimised so heavily is because lots of users are from countries with poor internet.

countless hours spent on optimising video delivery, live broadcasts (using flash back in the day, and webrtc today), web page sizes... the works.

dontupvoteme
0 replies
1d1h

Youtube also ripped off their "interesting parts of the video" bit entirely.

PetitPrince
18 replies
1d21h

This compares how much Javascript is loaded from popular sites (cold loaded). Some highlights:

- PornHub loads ~10x less JS than YouTube (1.4MB vs 12MB)

- Gmail has an incomprehensibly large footprint (20MB). Fastmail is 10x lighter (2MB). Figma is equivalent (20MB) while being a more complex app.

- Jira has 58MB (whoa)

jkoudys
11 replies
1d17h

Pornhub needs to be small. Jira will download once then be loaded locally until it gets updated, just like an offline app. Pornhub will be run in incognito mode, where caching won't help.

wruza
8 replies
1d14h

Pornhub will be run in incognito mode

It's not the '80s anymore, nobody cares about your porn. I have bookmarks on the bookmarks bar right next to electronics/grocery stores and HN. And if you're not logged in, how would PH and others know your preferences?

thiht
6 replies
1d12h

I don’t think you’re speaking for the majority here, no one has pornhub in their bookmarks bar

wruza
2 replies
1d9h

Yeah, that was an incoherent remark on my part, and there are situations where I'd put them in a folder.

But I still find going incognito to watch porn paranoid.

npteljes
0 replies
1d8h

It can also be considerate, like when using a shared device. Similar to how we don't vocalize every intrusive thought.

bmacho
0 replies
1d4h

My friend doesn't want porn to show up in the address bar, that's why she uses a different profile for it. Incognito mode is not good, since it actually forgets history and bookmarks.

nox101
0 replies
1d11h

When I said I didn't watch porn on company laptops, only my personal one, the 5 Apple employees at a party all said they watched porn on company laptops. I was in the minority.

jkoudys
0 replies
23h27m

I'd never even heard of pornhub until you degenerates told me about it. I had to look it up. Apparently "porn" is short for "pornography"?

indrora
0 replies
10h9m

Pornhub has a team dedicated to mobile and consoles.

I worked in IT throughout high school and college. Trust me: with old married dudes it's a coin flip whether you're gonna find Pornhub or one of the neighbors MindGeek owns in their bookmarks; if they didn't have it bookmarked, there's still a 50% chance that it's in the history. A surprising number of women had at least some porn in their history.

jjav
0 replies
1d11h

I have bookmarks on the bookmarks bar

Must be fun at a work Zoom meeting when you share the browser window!

thrwwycbr
0 replies
1d14h

Jira will download once

Maybe you should take a look at the Network tab, because Atlassian sure does have a crappy network stack.

panstromek
0 replies
1d11h

JIRA transfers like 20MB of stuff every time you open the board, including things like a 5MB JSON file with a list of all emojis and their descriptions (at least the last time I profiled it).

mewpmewp2
2 replies
1d18h

YouTube feels really snappy to me, but Figma is consistently the worst experience I have ever felt for web apps. Jira is horrible and slow also though.

troupo
0 replies
1d12h

On desktop they load 2.5MB of CSS and 12MB of JavaScript to show a grid of images. And it still takes them over 5 seconds to show video length in the previews.

Youtube hasn't felt snappy in ages

latency-guy2
0 replies
1d14h

YouTube does not feel snappy to me anymore. It's still one of the better experiences I have on the internet, but quite a bit worse than years before.

I just tested my connection to youtube right now: just a tiny bit over 1.2 seconds after not using it for a few days. A fresh load, no cache, no cookies, and the entire page loaded in 2.8 seconds. A hot reload on either side varied between 0.8 and 1.4 seconds. All done with at most ublock as an extension on desktop chrome, with purported gigabit speeds from my ISP.

That speed is just OK, but definitely not the 54ms response time I got to hit google's server to send me the HTML document that bears all the dynamic content on youtube

Figma is very surprising to me. That bullshit somehow is PREFERRED by people; getting links from designers from that dogshit app grinds my browser to speeds I haven't seen in decades, and I don't think I'm exaggerating at all when I say that.

Spivak
1 replies
1d19h

Holy god YouTube is 12MB? How!?

troupo
0 replies
1d12h

They also load 2.5 MB of CSS on desktop :)

pkphilip
0 replies
1d13h

No idea why an email client should have 20 MB of JS.

djtango
17 replies
1d12h

I recently came back from a road trip in New Zealand - a lot of their countryside has little to no cell coverage. Combine that with roaming (which seems to add an additional layer of slowness) and boy did it suck to try to use a lot of the web.

Also if any spotify PMs are here, please review the Offline UX. Offline is pretty much one of the most critical premium features but actually trying to use the app offline really sucks in so many ways

Tistron
5 replies
1d11h

Offline is still miles and miles better than patchy Internet. If spotify thinks you have Internet it calls the server to ask for the contents of every context menu, waiting for a response for seconds before sometimes giving up showing a menu and sometimes falling back to what would have been instant if it was in offline mode. I really loathe their player.

meowtimemania
1 replies
1d11h

I get irritated by this too. When it happens I put my phone on airplane mode to force Spotify to show the offline ui.

Tistron
0 replies
23m

Yeah, me too.

And vow to someday get around to finding a different music solution.

shepherdjerred
0 replies
1d6h

This was one of the major reasons I left Spotify. Apple Music handles this much more gracefully.

jve
0 replies
1d11h

Hmm, this would also imply they would need more infrastructure at their side when they could just maybe use cached values stored locally.

jnsaff2
0 replies
1d11h

Not only that, there are many apps with no online aspect to them that have facebook sdk or some other spyware that does a blocking call on app startup and the app won't start without it succeeding, unless you are completely offline.

Especially annoying when one is using dns based filtering.

user432678
4 replies
1d11h

Re: Spotify

So much agreement here; the offline mode is so annoying that I even started building my own iOS offline-first music app.

jjav
3 replies
1d11h

my own iOS offline first music app

Sadly ironic that Apple used to sell this, in the shape of an iPod!

I hold on to mine, it is perfect in every way that a phone is terrible.

It is tiny and 100% offline, just what I need.

rob74
2 replies
1d11h

Wait... iOS doesn't have an offline music app anymore either? Google replaced the "Play Music" app (which could also play offline music files) with "Youtube Music" a few years ago (not sure if that works with offline files, I switched to a third party app), but I thought iOS still had one (precisely because they used to sell the iPods, specifically the iPod touch which was more or less an iPhone lacking the phone part)?

d1sxeyes
1 replies
1d10h

It does. You can download playlists on Apple Music (or even use your own files with it, a rarity in the post-cloud world).

Maybe OP was talking specifically about the behaviour of Spotify?

user432678
0 replies
1d9h

To be honest, the offline support of Apple Music, even though it exists, is on par with Spotify. It will bug you about "turn on wifi, you're offline", ask you to provide or update payment information, fail at synchronisation from time to time with music purchased on iTunes, and overall work unreliably when you just want to listen to some songs you've copied from your physical CDs. It's like the whole thing is trying hard to push you to use their subscription service.

dukeyukey
3 replies
1d11h

I live in London which typically gets great signal everywhere. Except in the Underground network, where they're rolling out 5G but it's not there yet.

Please Spotify, why do I need to wait 30 seconds for the app to load anything when I don't have signal? All I want to do is keep listening to a podcast I downloaded.

m_rpn
2 replies
1d10h

I will never understand what all the people on the tube are doing on their phones with no internet. Do they have the entirety of YouTube buffered XD?

G3rn0ti
1 replies
1d8h

do they have the entirety of youtube bufferred XD?

Well, with Youtube Premium you can actually download videos in advance and watch them on the go w/o Internet access required.

balls187
0 replies
1d2h

Well, with Youtube Premium you can actually download videos in advance and watch them on the go w/o Internet access required.

It will even download videos in the background it thinks you will enjoy, so you don't even need to manage anything.

diggan
1 replies
1d11h

Also if any spotify PMs are here, please review the Offline UX. Offline is pretty much one of the most critical premium features but actually trying to use the app offline really sucks in so many ways

Also, Spotify (at least on iOS) seems to have fallen into the trap of thinking there is only "Online" and "Offline", so when you're in-between (really high latency, or really lossy connection), Spotify thinks it's online when it really should be thinking it's offline.

But to be fair, this is a really common issue and Spotify is in no way alone in failing on this, hard to come up with the right threshold I bet.

rgblambda
0 replies
1d10h

I've noticed BBC Sounds has the opposite problem. If you were offline and then get a connection it still thinks you're offline. Refreshing does nothing. You need to restart the app to get online.

jakelazaroff
13 replies
1d18h

I know the implication here is "too much JavaScript" but we also need to talk about how much of this is purely tracking junk.

BandButcher
8 replies
1d16h

Was going to mention this, almost any company's brand site will have tracking and analytics libraries set in place. Usually to farm marketing and UX feedback.

What's worse is that some of them are fetched externally rather than bundled with the host code, thus increasing latency and potential security risks.

willsmith72
6 replies
1d14h

I'm pro privacy, but is it really so bad to get anonymous data about where people clicked and how long they stayed where?

it would be almost impossible to measure success without it, whether it's a conversion funnel or tracking usage of a new feature

kevin_thibedeau
3 replies
1d13h

That data can be gathered with self-hosted JS if the devs were allowed to implement it. The infatuation with third party analytics is just a more elaborate version of leftpad.
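
A minimal sketch of that self-hosted approach (the /analytics endpoint is hypothetical; you would still need to aggregate and anonymize server-side):

    // First-party click/visit tracking in a few lines, no third-party script.
    function track(event, data = {}) {
      const payload = JSON.stringify({ event, path: location.pathname, ...data });
      // sendBeacon queues the request without blocking navigation or unload.
      navigator.sendBeacon('/analytics', new Blob([payload], { type: 'application/json' }));
    }

    document.addEventListener('click', (e) => {
      const target = e.target.closest('a, button');
      if (target) track('click', { label: (target.textContent || '').trim().slice(0, 40) });
    });

    window.addEventListener('pagehide', () => track('leave'));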

ehnto
2 replies
1d13h

Marketing snippets are rarely implemented by devs, they get dropped into a text box or added to Google Tags by marketing/SEO peeps.

If it is put in by a developer, the budget for that is like an hour to copy paste the code snippet in the right spot. Few are going to pay the hours required for an in house data collection layer that then has to integrate with the third party if that's even an option.

At least that is my experience through agency work. Maybe a product owner company could do it.

Not to be rude to the industry either, but I don't see why the assumption would be that an in-house dev has the chops to not make the same mistakes a third party does.

RugnirViking
1 replies
1d6h

It's not about having the chops to do it well, it's about not importing every feature under the sun just because you want to do an intern's first marketing analytics campaign.

earlier in the conversation someone talked about pasting a snippet. We're talking about the "chops" to not paste a snippet that is hundreds of thousands of lines long. A snippet so long it would crash many editors.

ehnto
0 replies
1d6h

Typically the person including the JS tag that then fetches the massive third party payload has no idea that's what is happening.

It is very common to get multiple departments and contracted companies sticking their misc JS in, since every marketing SaaS tool they use has its own snippet. Your SEO guy wants 3 trackers, your marketing has another 5, and you sell on XYZ online market and they have affiliate trackers, etc.

No devs engaged at any point and the site performance isn't their responsibility. They can't do their job without their snippets so the incentives are very sticky, and the circus goes on.

It's kind of like an NPM dependency tree of martech SaaS vendors...

psychoslave
0 replies
1d11h

How should we feel when we learn that, due to technical constraints, the Stasi wasn't able to measure how successful it was?

MrJohz
0 replies
1d13h

It is when (a) that data collection takes up a significant amount of bandwidth whenever I visit your website, and (b) I don't trust that that data collection really is as anonymous as the website says (or even thinks).

The major players here are explicitly not anonymous; they are designed to keep track of people over time so that they can collate habits and preferences across different sites to better target advertising. Yes, your AB test script isn't doing the same thing, but is it really adding any value to me as a consumer, or is it just optimising an extra 0.01% revenue for you?

tadfisher
0 replies
1d16h

Whats worse is some of them are fetched externally rather than bundled with the host code thus increasing latency and potential security risks

Some vendor SDKs can be built and bundled from NPM, but most of them explicitly require you fetch their minified/obfuscated bundle from their CDN with a script tag. This is so they don't have to support older versions like most other software in the world, and so they can push updates without requiring customers to update their code.

Try to use vendors that distribute open-source SDKs, if you have to use vendors.

ginko
1 replies
1d10h

But surely even pieces of scummy tracking code can't take up megabytes of memory, right?! Just collect user session data and send it to some host.

jakelazaroff
0 replies
1d5h

Not sure whether you looked at the requests in the screenshots, but the tracking script code alone for many of these websites takes up megabytes of memory.

veeti
0 replies
1d11h

It's easy to test with adblock in place. For instance, the Gitlab landing page went from 13 megabytes to "just" 6 megabytes with tracking scripts blocked. The marketing department will always double the bloat of your software.

DanielHB
0 replies
1d8h

In a previous job I had to declare war against Google Tag Manager (a tool that lets marketers inject random crap into your web application without developer input). Burned some bridges and didn't win; performance is still crap.

After those things it is the heavy libs that cause performance problems, like maps and charts; usually some clever lazy loading fixes that (see the sketch at the end of this comment). Some things I personally ran into:

- QR code scanning lib and Map lib being loaded at startup when they were actually just really small features of the application

- ALL internationalisation strings being loaded at startup as a waterfall request before any other JS ran. Never managed to get this one fixed...

- Zendesk just completely destroys your page performance; mandated through upper-management, all I could do was add a delay to loading it

After that it comes down to badly designed code triggering too many DOM elements and/or rerenders and/or waterfall requests.

After that comes app-level code size, some lazy loading also fixes this, but it is usually not necessary until your application is massive.
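
A sketch of the import-on-interaction fix mentioned above for heavy, rarely used libs (the module name is hypothetical):

    // Don't load the QR scanning library until the user actually opens the scanner.
    document.getElementById('scan-button').addEventListener('click', async () => {
      const { startScanner } = await import('./qr-scanner.js'); // fetched on demand
      startScanner(document.getElementById('scan-view'));
    });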

nsonha
12 replies
1d14h

It's always bothered me that this dogma exists. Somehow web apps need to be super frugal with code size, while apps distributed on other (native) platforms never have such a problem. Somehow it's the bloated web that blocks access for children in Africa, but they can download bloated Android apps just fine?

Maybe, just maybe, the problem isn't the size of the javascript, it's how broken the entire web stack (specifically caching & PWA) is, that makes a trivial thing like code size a problem.

troupo
3 replies
1d12h

No. Native apps are not exempt from this either. Nothing justifies the bloat

nsonha
2 replies
1d12h

are you deliberately not getting the point for some reason? How big is considered bloated in native apps, and how big can a native app get before it hurts accessibility because people cannot download it? Is it a few MB?

Web apps are soon going to match native apps in terms of complexity, if they don't already, yet we are still distracted from the real problem and quibble about some arbitrary and frankly pathetic code size restriction. Fix the root problem with PWA or something.

troupo
1 replies
1d8h

How big is considered bloated in native apps, how big can a native app can get before it hurts accessibility because people cannot download it?

I'd say if it takes more than 50 megabytes to display a list of text, it's a problem :)

yet we are still distracted from the real problem

Yes, I agree, "how broken the entire web stack is" is the main problem. And the ungodly amounts of javascript you end up with for the simplest problems are the symptom. However, neither caching nor PWA is the specific main problem in the web's brokenness :)

nsonha
0 replies
5h51m

the ungodly amounts of javascript you end up for the simplest problems

replace javascript with any native programming language; this has always been a problem with UI programming, but it's not THAT big of a problem because on other platforms people download the app once, instead of every time. There is a long list of problems in software engineering, but sorry to disagree with you, "the binary/script is too big" is nowhere near the top.

atraac
3 replies
1d13h

while apps distributed on other (native) platforms never have such a problem

Could you give an example? I was an android app dev over 5 years ago and there was a huge push for lower app size everywhere. Google even made Android App Bundles to fight this issue specifically.

nsonha
2 replies
1d12h

however big, you're only required to download an app once. Next time there is an update that you cannot download, generally that won't block you from using the app at that point in time.

atraac
1 replies
1d10h

It actually depends; the developer can block you from using the app until you update, either themselves or through the Play Store's 'Immediate updates'.

nsonha
0 replies
5h58m

I did say "generally"

eitland
1 replies
1d13h

If Atlassian didn't take a full minute to update a certain roadmap view on my MBP from January, I would be one step closer to agreeing with you even if it was still 50 MB.

But I am old enough to remember that Gantt charts didn't use to take that long on old Pentium processors back in school, way before Git was invented.

Another thing is the sheer yuck of it:

If a typical web app was lots of business code, maybe. But when you look at the network tab and it feels like you are looking at the hair ball from a sink drain, lots of intertwined trackers to catch everything that passes, that is another story.

nsonha
0 replies
1d12h

yes, the problem with the narrative is that web apps that are logic heavy are lumped together with content-based sites that cannot justify their code bloat.

tgv
0 replies
1d12h

There's always a speed/storage tradeoff. Apps should be more economical too, but you download native apps once, and web apps almost every time you open them. So indeed, caching could help, but how large would your cache have to be? Big enough to hold all 50MB downloads for every website you visit? That's an awfully large cache. So I'd say economy is more necessary for web apps than for native/store apps.

I just checked my web usage on my work computer. The last 120k opened URLs were on 8300 unique hosts. 8300 * 50MB, roughly 415GB, is not feasible.

donkeybeer
0 replies
1d13h

Web sites should be frugal. If content websites, even more so if they even lack comment sections, are getting huge that's a pure "skill issue". It's not our place to speculate why what should have been a content site became a web app.

iamsaitam
10 replies
1d14h

At this point, blog posts like these just look like "rage bait" for web developers. What's the point of it? What's the alternative?

The biggest reason why this topic is a topic is that browser developer tools let anyone glance at these details easily. If this wasn't a low-hanging-fruit blog post, it would also try to figure out whether this is isolated to web development or whether we can see it across the board (hint: look at how big games have become; is it only textures though?).

YuukiRey
3 replies
1d13h

I think it's good and important that people occasionally call this out. Same as https://news.ycombinator.com/item?id=39315585

I don't understand why you'd consider this "low hanging fruit". What could the author have done to make it a high quality submission in your eyes?

The alternative is to have more awareness of the amount of dependencies you really need, of when you actually need a framework with a runtime, and so on. He mentioned a fair share of essentially static landing pages that really have no reason to ship so much crap. And even though it's not explicitly mentioned: this isn't just a potential issue for end users. This code likely makes life hard for the developers as well. With every dependency you get more potential for breaking changes. With every layer it gets harder to understand what's going on. The default shouldn't be to just add whatever you want and figure it out later, the default should be to ask yourself what really needs to be added. Both in terms of actual code, but also layers, technologies, frameworks, libraries.

matthewhammond
1 replies
1d10h

It would be much more interesting to analyse one of the sites in detail and consider what could be done to reduce the code size, looking at where it's coming from and what it does. Or to find out why it might be that some landing pages are shipping a lot of JS (could be because they are landing pages for web apps?). Or consider performance more holistically (are pages shipping a lot of code, but lazy loading or otherwise optimising it so that pages still perform well?). Or maybe compare these web application sizes to mobile or desktop equivalents too (where it's surely easier to optimise amount of code shipped).

The article is just a lot of vague pointing at sites and insinuating (not even asserting) that they're too bloated or not. I don't get a sense of whether it's worse that Medium ships 3mb of code or Soundcloud ships 12. There's a lot of bad faith "this is just a text box" for sites which clearly do much more than that too.

troupo
0 replies
1d5h

It would be much more interesting to analyse one of the sites in detail and consider what could be done to reduce the code size, looking at where it's coming from and what it does.

No. No it wouldn't. It's not the job of strangers on the internet to do the job of incompetent developers.

Or to find out why it might be that some landing pages are shipping a lot of JS (could be because they are landing pages for web apps?)

How does being a landing page for a web app excuse downloading 6-10 MB of javascript to show two pages of static text and images?

Or consider performance more holistically (are pages shipping a lot of code, but lazy loading or otherwise optimising it so that pages still perform well?).

Here's a holistic overview of performance: The Performance Inequality Gap, 2024 https://infrequently.org/2024/01/performance-inequality-gap-...

There's a lot of bad faith "this is just a text box" for sites which clearly do much more than that too.

Not clearly. Not clearly at all.

---

Edit: note on the incompetence.

If you embed the Youtube player in your website, Lighthouse will scream at you for being inefficient and loading too many resources. Nearly all of those issues will come from Youtube.

Lighthouse will helpfully provide you with a help page [1] listing wrappers developed by other people to fix this. Chrome's "performance lead" even penned an article[2] on lazy loading iframes and linked to a third-party youtube wrapper which promises 224x speed up over the official embed.

They know. They either are so incompetent that they cannot do the job themselves, or they don't care.

[1] https://developer.chrome.com/docs/lighthouse/performance/thi...

[2] Over at webdev: https://web.dev/articles/iframe-lazy-loading pointing to https://github.com/paulirish/lite-youtube-embed

BTW. web.dev is created by web devs at Google. Promoting web development best practices. It takes it ~3 seconds to display a list of articles and the client-side only navigation is broken https://web.dev/articles

guappa
0 replies
1d12h

This code likely makes life hard for the developers as well.

Well if you have static pages you don't need a team of 10 developers… so there is some self interest there.

palmfacehn
2 replies
1d14h

To let us know we are not alone with our disdain. Too many JS enthusiasts paper over this insanity. The remnant lives on.

"But JS is the biggest developer ecosystem in the world!"

StressedDev
1 replies
1d12h

This is not a JavaScript issue. It’s a software engineer/people issue. The question is how do you get people to care about performance, security, reliability, etc.? How do you get organizations to care about these issues?

These are hard problems and people have been complaining about software size forever. Back in the early 90s, it was bloated C++ code.

You will also see that all software continues to use more ram, more disk space, more network bandwidth etc. This trend has been going on for decades.

For example, why do we use JSON as an interchange format? It's relatively slow (i.e. creating it and parsing it is slow), and it is not space efficient either. Back in the 1980s, the Unix community created RPC, and the RPC wire formats were much more efficient because they were binary formats. The reason we use JSON is that it makes the developer's life easier, and developers prefer ease of use to performance.
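A toy illustration of the size difference (the record and its binary layout are made up for the example):

```js
// The same record as JSON text vs. a hand-packed binary layout
// (assumed schema: a u32 id followed by an f64 price).
const record = { id: 123456, price: 19.99 };

const jsonBytes = new TextEncoder().encode(JSON.stringify(record));

const binary = new DataView(new ArrayBuffer(12));
binary.setUint32(0, record.id);
binary.setFloat64(4, record.price);

console.log(jsonBytes.byteLength);      // 27 bytes of JSON text
console.log(binary.buffer.byteLength);  // 12 bytes of binary
```

The gap usually widens with nested structures and repeated field names, and that's before the cost of parsing JSON text at all.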

balls187
0 replies
1d1h

The question is how do you get people to care about performance, security, reliability, etc...How do you get organizations to care about these issues?

It's very hard to, unless there is a risk to the bottom line.

Let me pose it differently--apparently ZED is a ridiculously fast code editor. Do I want to switch from my vscode investment? Or will I deal with the "bloat"?

shiomiru
1 replies
1d12h

What's the alternative?

Many websites listed on the page don't even need JS.[0]

Consider how just a few years ago there used to be an entire suite of alternative frontends to major "web apps"/"social media platforms", which generally worked without any JS and were created & run by volunteers. In general, they all provided superior UX by just not loading megabytes and megabytes of tracking code; this would be the alternative.[1]

Now they are slowly evaporating: not because of lack of interest, but because it was affecting the margins of these companies, and they actively blocked the frontends.[2]

So I think of it this way: these megabytes and megabytes of JS do not serve me, the user. It's just code designed to fill the pockets of giant corporations that is running on my computer. Which is, indeed, quite infuriating.

OK, maybe not even that; after all, you get what you pay for. It's just sad that despite the technological possibilities, this is the norm we have arrived at.

[0]: Of course, there is valid use of JS, it's a wonderful technology. I'm talking about cases where you could pretty much write the whole thing in pure HTML without losing core functionality.

[1]: Well, only if there existed a viable financing model besides "selling user data to the highest bidder" :( technologically at least, it's possible.

[2]: See cases of bibliogram, teddit, libreddit, nitter, ...

iamsaitam
0 replies
1d7h

You're conflating using Javascript for web development with "loading megabytes of tracking code". They are not mutually exclusive, nor do they have anything to do with the development side of it. Advertising and tracking are a business decision, not a technical one.

Well, you're entitled not to visit/use those websites if the megabytes of JS don't serve you. That would be the cost of those websites' decision to use Javascript: people who share your opinion don't use them. And cost is a subjective factor that depends on multiple things and is hidden from the user, so when you judge a website to be using Javascript without need, you're basically saying that you know the cost to the developer better than the developer does.

xk_id
0 replies
1d11h

What's the point of it? What's the alternative?

An absurd, idiotic situation that is endemic to the whole industry deserves every bit of scorn and ridicule. Developers need to be educated about this so they can make the right choices, instead of remaining complacent agents of a shameful situation. It also strengthens my own resolve to support (through my use) those websites that abide by the original principles of the web: they either deliver actual documents (instead of javascript apps), or offer public APIs to access the plaintext data. There is always an alternative out there.

420698008
10 replies
1d15h

Meanwhile the author has a ton of 1440p images while pegging his website width to 560px

I like the conversation about web performance, but you should make sure you practice what you preach

ustad
2 replies
1d12h

The article is related to js bloat and the images are required for the presentation. The images come to 10MB - not bad. The js? 4.5KB!

subtra3t
1 replies
1d10h

required for the presentation

But is such a high resolution really needed?

ustad
0 replies
1d3h

Sure, they could have been resized and squeezed through an imagemagick pipeline. But give the kid a break! He speaks the truth.
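For what it's worth, that pipeline is roughly a one-liner; here's a sketch of invoking ImageMagick from a Node script (the file names, target width and quality are placeholders):

```js
// Resize to the largest width actually displayed (1440px for a 2x screen)
// and re-encode as a reasonably compressed JPEG.
const { execFile } = require('node:child_process');

execFile(
  'convert',
  ['screenshot.png', '-resize', '1440x', '-quality', '80', 'screenshot.jpg'],
  (error) => {
    if (error) throw error;
    console.log('resized and recompressed');
  }
);
```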

mrighele
1 replies
1d12h

Even if you are not using a high-resolution display like the sibling comment says, the images are reasonably sized (they are a few hundred KB each). I have seen landing pages with 20-30 MB images for no good reason.

guappa
0 replies
1d12h

Don't forget the ones with background videos of random diversely raced young good looking models having coffee and looking at screens while nodding.

cyclotron3k
1 replies
1d13h

720px actually, but more importantly, 720px is not actually 720px on a HiDPI screen.

I just measured it on my mbp and "720px" is actually... 1440 physical pixels. I was surprised too!

FractalHQ
0 replies
23h54m

MBP device pixel ratio is 2 after all!

tgv
0 replies
1d12h

Some of my (non-programming) colleagues don't seem to be able to wrap their head around image size. And someone who taught courses to communication/marketing students told me it took 2 hours to explain resolution and all that to them, and even then half didn't get it. Yeah, I can hear you: "something that easy? must have been a bad teacher," but the concepts are rather weird for non-techies. So those people become responsible for updating the website content, and upload whatever the graphical artists show them. And designers like to zoom in, a lot, so often that's a 20MB png where a 200kB jpeg would suffice.

devtailz
0 replies
1d13h

The images display at 720x720, on a 2x retina screen the image needs to be 1440x1440 to fill that.

Alifatisk
0 replies
5h7m

I like the conversation about web performance, but you should make sure you practice what you preach

I'd say the author is practicing what he preaches; the JS is just 4,6 kB. There are some optimizations [1][2][3] that can be done to the images, but I wouldn't fully disqualify the article because of that. The websocket connection is kinda odd though; I tried reading the code but didn't fully catch the purpose, it just says something about pointers.

[1] https://www.youtube.com/watch?v=uqmgQB5Gyfo [2] https://www.youtube.com/watch?v=uqmgQB5Gyfo [3] https://www.youtube.com/watch?v=hJ7Rg1821Q0

jspdown
8 replies
1d12h

The state of the web is very sad. Most people with a fiber connection don't even notice how slow it became. But when you are still on a 2Mbps connection, this is just plain horrible. I'm in that situation, and it's terribly painful. Because of this, I can't even consider not using an ad/tracker blocker.
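(For scale, rough arithmetic: a 10 MB page over a 2 Mbps link is about 10 × 8 / 2 = 40 seconds of transfer alone, before any parsing or execution.)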

Would love to see this test with uBlock Origin enabled.

threatofrain
3 replies
1d11h

Most people with a fiber connection? I bet a lot of people with money don't have a fiber connection, certainly not most people here on HN.

agos
2 replies
1d11h

read it as "of the people who have a fiber connection, most..."

threatofrain
1 replies
1d11h

Yeah, I'm saying the relevance of that statement is pretty low because most of us don't experience that, certainly not enough to move the needle of JS culture.

jspdown
0 replies
1d9h

I'm living in France, not off-grid and not very far from a big city. We still don't have a fiber connection available. What I'm saying is that a lot of us are in this situation, even in developed countries. Those who accept shipping a 10mb bundle clearly forget that not everyone has the same connection they have in their office.

To the point where the web is mostly unusable if you don't disable ads. I'm not against ads, but the cost is just too high for my day-to-day use of the internet.

deliriumchn
1 replies
1d11h

But when you are still on a 2Mbps connection

What happens when you use modern apps on an iPhone 3 or the first Nexus phone? I don't understand; do people think that with better, faster computers and network speeds we should focus on smaller and smaller apps and websites?

pavlov
0 replies
1d11h

Your iPhone CPU doesn't suddenly become an iPhone 3G CPU sometimes, but network availability does vary a lot.

You may also one day find yourself on a flaky 3G connection needing access to some web app that first loads twenty megabytes of junk before showing the 1 kB of data you need, and then it's clearer what the problem is here.

bambax
1 replies
1d12h

Would love to see this test with uBlock Origin enabled.

Me too. I suspect most of this code is for user tracking and ad management.

panstromek
0 replies
1d11h

Tracking is a bit heavy, but from what I've looked at, the app code is usually much worse. I've looked at what Instagram and JIRA ship during the initial load and it's kinda crazy.

lifthrasiir
7 replies
1d19h

One thing completely ignored by this post, especially for actual web applications, is that it doesn't actually break the JS files down to see why they are so large. For example, Google Translate is not a one-interaction app once you start to look further; it somehow has dictionaries, alternative suggestions, transliterations, pronunciations, a lot of input methods and more. I still agree that 2.5 MB is too much even after accounting for that fact, and some optional features can and should be lazily loaded, but as it currently stands, the post is so lazy that it doesn't help any further discussion.

BandButcher
3 replies
1d16h

Don't want to hate on the author's post, but the screenshots being slow to load made me chuckle. Understandable, as images can be big and there were a lot of them, but I just found it a little ironic.

crooked-v
2 replies
1d12h

These days, slow-loading images usually mean that somebody hasn't bothered to use any of the automatic tooling various frameworks and platforms have for optimized viewport- and pixel density-based image sets, and just stuck in a maximum size 10+ MB image.
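What that tooling emits in the end is just `srcset`/`sizes` candidates; a rough sketch of setting them from script (file names and breakpoints are made up):

```js
// Offer width-described candidates plus a sizes hint, so the browser
// downloads the smallest file that's adequate for the layout and screen.
const img = document.querySelector('img.screenshot');
img.srcset = [
  'screenshot-720w.jpg 720w',
  'screenshot-1440w.jpg 1440w',
  'screenshot-2160w.jpg 2160w',
].join(', ');
img.sizes = '(max-width: 720px) 100vw, 720px';
```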

troupo
0 replies
1d12h

Which is absolutely false for the website in question.

blue_pants
0 replies
1d11h

Could you suggest some of those automatic tools?

troupo
1 replies
1d12h

For example, Google Translate is not an one-interaction app once you start to look further; it somehow has dictionaries, alternative suggestions, transliterations, pronunciations, a lot of input methods and more.

Almost none of those are loaded in the initial bundle, are they? All those come as data from the server.

How much JS do you need for `if data.transliteration show icon with audio embed`?
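For illustration, a vanilla-JS sketch of roughly that logic (the element ids and the shape of `data` are assumptions):

```js
// Show the transliteration block and wire up its audio only when the
// response actually contains one; hide it otherwise.
function renderTransliteration(data) {
  const block = document.getElementById('transliteration');
  if (!data.transliteration) {
    block.hidden = true;
    return;
  }
  block.querySelector('.text').textContent = data.transliteration;
  block.querySelector('audio').src = data.audioUrl;
  block.hidden = false;
}
```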

lifthrasiir
0 replies
1d11h

Almost none of those are loaded in the initial bundle, are they?

In my testing, at least some input methods are indeed included in the initial requests (!). And that's why I'm stressing it is not a "one-interaction" app; it is interactive enough that some (but not all) upfront loading might be justifiable.

infensus
0 replies
1d11h

100% agree. Most of these apps could definitely use some optimization, but trivializing them to something like "wow, a few MBs of javascript just to show a text box" makes this comparison completely useless.

pier25
6 replies
1d16h

What's up with the React site?

This is embarrassing...

tjosepo
2 replies
1d15h

You can see from the recording that it's downloading the same few files from Codesandbox over and over again, as the iframes used for the examples are being unloaded and reloaded on scrolls and because the author disabled caching.

The author could've scrolled forever and the number would've gone up indefinitely.

CompuIves
1 replies
1d9h

Exactly, the result would've been different if the author hadn't disabled caching.

In this case it's because the iframes are loaded/unloaded multiple times, but we also spawn web workers where the same worker is spawned multiple times (for transpiling code in multiple threads, for example). In all those cases we rely on caching so we don't have to download the same worker code more than once.

pier25
0 replies
1d7h

Wouldn't it be better if the code editors only activated when the user interacts with one?

Even with caching it's absurd to download so much JS for a feature that probably most users will not use. It's a docs site after all.

silverwind
1 replies
1d8h

Maybe they forgot some `memo` so it re-renders everything including iframes all the time.

danabramov
0 replies
1d8h

1) In React, re-renders don't destroy the DOM. So nothing would happen if iframes were re-rendered.

2) Rather, we intentionally unload interactive editor preview iframes to improve memory usage when you scroll away from them. We do load them again when you scroll up — and normally that would be instant because the code for them would get cached. But the author has intentionally disabled cache, so as a result they get arbitrarily high numbers when scrolling up and down.

makepanic
0 replies
1d12h

Codesandbox is embedded for the code samples. If you don't have cache disabled, it will fetch from the memory cache after the initial load and unload.

otabdeveloper4
5 replies
1d12h

Slack is some of the worst software ever written by mankind.

solumunus
3 replies
1d12h

It's far better than Microsoft Teams, so I must disagree.

xigoi
0 replies
1d11h

That’s a really low bar.

nesarkvechnep
0 replies
1d11h

Both are some of the worst software ever.

mnau
0 replies
1d10h

Tbh, the new Teams eats only half the memory the old one did.

Still sucks, but it's 700MB of RAM vs 1.4GB just to open a chat.

balls187
0 replies
1d1h

In what way?

I've used a number of interoffice communication apps, and was on IRC in the heyday of dialup.

Slack (for work purposes) just nails the experience for a broad set of users.

jerbear4328
2 replies
1d12h

Your editor downloads a 32.6MB ffmpeg WASM binary on every page load.

Throttling the network to "Slow 3G", it took over four minutes of a broken interface before ffmpeg finally loaded. (It doesn't cache, either.) A port of the Audacity audio editor to the web[1] with WASM takes 2.7 minutes on the same connection, so the binary itself is reasonable, but I think claiming less than 2 MB is disingenuous.

[1]: https://wavacity.com/

lukaqq
1 replies
1d11h

Sorry for that, we just focused on the js bundle and didn't realize how big ffmpeg.wasm is. Thanks for the reminder; as a next step we will try to rebuild ffmpeg.wasm and make it smaller.

occz
0 replies
1d8h

Caching might be your biggest bang for the buck, depending on how often you expect someone to return to the app/reload the page it's hosted in.
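Proper HTTP cache headers on the file are usually the simplest fix; one programmatic alternative is the Cache Storage API, sketched below (the `/ffmpeg-core.wasm` URL is a placeholder):

```js
// Fetch the big wasm once, keep it in Cache Storage, and serve repeat
// loads from there instead of re-downloading tens of megabytes.
async function loadFfmpegWasm(url = '/ffmpeg-core.wasm') {
  const cache = await caches.open('ffmpeg-wasm-v1');
  let response = await cache.match(url);
  if (!response) {
    response = await fetch(url);
    await cache.put(url, response.clone());
  }
  return new Uint8Array(await response.arrayBuffer());
}
```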

lukaqq
0 replies
1d9h

To re-confirm: the 36MB ffmpeg.wasm file is compressed by brotli to 9.6 MB as transferred.

mortallywounded
3 replies
1d2h

SPAs were a psyop by FAANG companies to gain a monopoly and mind-share of developer talent.

Are you team Google (Angular)? Meta (React)? Are you a hipster (Ember)?

From there things only went downhill faster.

Alifatisk
2 replies
5h4m

Why did you leave out Vue?

mortallywounded
1 replies
4h0m

It came after the first wave of SPAs and if we listed every SPA framework (or "library") we'd be here all day.

Alifatisk
0 replies
3h27m

Oh I get it

gpjanik
3 replies
1d10h

Serious question: what is the issue with these particular sizes? I know that the features/look these websites have are definitely achievable with less JS at a higher engineering cost, but what's the problem with it? 10MB loads in two seconds on an okay-ish desktop connection (correct me if I'm wrong, but most people don't deploy Vercel apps from their phone from a mountain range with a 3G connection). The experience on the websites mentioned is as smooth as it can get; everything is super fast and nice. Every subsequent click is just instant action. That's what the web should look like.

Is the problem here that they perform poorly on slower computers/connections? Is that even true? Is there an audience of developers who can't use Vercel or GitLab productively because of that? Any metrics to support that? IMHO bundle size/JS sent over the network is one of the worst performance metrics I could imagine optimizing against.

npteljes
0 replies
1d3h

I think it serves as a generic metric for bloat, because nobody really optimizes for size, thereby making it a good untainted metric. As the web gets bloaty and slow, the size of the websites grow as well, which also invites using size as a metric for bloat.

Smooth, fast, nice, these would be good to measure, but it's much harder. I like an interface response time metric, for example [0]. I always lament that interfaces are getting slow - I get that they are nicer, too, but god damn why am I waiting 1 second for anything, when my Pentium III with Win XP was near instant?

[0] https://www.nngroup.com/articles/response-times-3-important-...

alexey2020
0 replies
1d8h

came here to write a similar comment. Totally agree!

Focusing on a particular metric for the sake of the metric - what's the point?

Let's spend a couple of months refactoring an app to generate less js, just to look cool in the eyes of the dev community?

AHTERIX5000
0 replies
1d6h

But the experience is not as smooth as it can get if you're running slightly older HW. Gmail and Slack are notorious for bad performance even when the content displayed is rather simple plain text. As a user I don't really get anything in return when developers decide to use complex JS solutions for simple use cases.

danabramov
3 replies
1d9h

The React site part of this is not real. The author ticked "Disable cache" which means the same code (which powers the interactive editable sandboxes they're scrolling by) gets counted over and over and over as if it was different code.

If you untick "Disable cache", it's loaded once and gets cached.

fesc
2 replies
1d8h

Yeah, but once per page, which is why it is okay to disable cache, as the author wanted to simulate a cold load of each of those pages.

danabramov
0 replies
1d8h

That's also not true? I'm navigating between pages, and it does get served from cache for all subsequent navigations.

The only case when this code gets loaded is literally the first cold load of the entire site — and it's only used for powering live editable interactive sandboxes (surely you'd expect an in-browser development environment to require some client-side code). It doesn't block the initial rendering of the page.

What is the problem?

balls187
0 replies
1d1h

I think the issue isn't with the methodology (disabling cache), but rather with the erroneous conclusion that the React.dev website continually requesting data is somehow problematic, when it's a side effect of disabling the browser cache.

Also, FWIW, OP is one of the authors of react.dev and a member of the react core team (not that it's relevant to the objection).

vendiddy
2 replies
1d10h

I'm inspired to slim down my own app. Any good tools or techniques out there to identify what's causing bloat?

npteljes
0 replies
1d8h

Turning on any kind of performance monitoring can help; there are tools for this on every platform. For the web, for example, I used YSlow in the past, but there are several alternatives: https://alternativeto.net/software/yahoo-yslow/

lifthrasiir
0 replies
1d10h

In general, look for dependencies. Any popular enough bundler should have a feature or plugin to show bundle statistics (e.g. webpack-bundle-analyzer or rollup-plugin-analyzer). Audit your dependencies to see which ones aren't actually requested or needed, and try to replace them with a finer-grained dependency, a leaner library, or a rewrite, in that order of preference. That alone is enough for most JS apps, because not many people even do that...
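For instance, a minimal webpack wiring of webpack-bundle-analyzer, which writes a treemap report of what actually ends up in the bundle (the rest of the config is elided):

```js
// webpack.config.js (sketch): add the analyzer plugin to an existing config.
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...entry, output, loaders as before...
  plugins: [
    new BundleAnalyzerPlugin({ analyzerMode: 'static', openAnalyzer: false }),
  ],
};
```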

sublinear
2 replies
1d20h

Any piece of software reflects the organization that built it.

The data transferred is going to be almost entirely analytics and miscellaneous 3rd-party scripts, not the javascript actually used to make the page work (except for the "elephant" category, which is lazy-loading modules, i.e. React). Much of that is driven by marketing teams who don't know or care about any of this.

All devs did was paste Google Tag Manager and/or some other script injection service to the page. In some cases the devs don't even do that and the page is modified by some proxy out in the production infrastructure.

Maybe the more meaningful concern to have is that marketing has more control over the result than the people actually doing the real work. In the case of the "elephant" pages, the bloat is with the organization itself. Not just a few idiots but idiots at scale.

jve
0 replies
1d11h

I remember associating Google with lean and fast: Google the search engine (vs Yahoo) and Chrome (vs IE/FF, I'm talking about when Chrome was released)... Chrome itself had not much of a UI, and that was a feature.

DanielHB
0 replies
1d8h

All devs did was paste Google Tag Manager and/or some other script injection service to the page. In some cases the devs don't even do that and the page is modified by some proxy out in the production infrastructure.

Google Tag Manager is the best tool for destroying your page performance. A previous job had Google Tag Manager in the hands of another, non-tech department, and I had to CONSTANTLY monitor the crap that was being injected into the production pages. I tried very hard to get it removed.

okaleniuk
2 replies
1d10h

Meanwhile, all the pages on https://wordsandbuttons.online/ with all the animation and interactivity are still below 64 KB.

This one, for example, https://wordsandbuttons.online/trippy_polynomials_in_arctang... is 51 KB.

And the code is not at all economical. It's 80% copy-paste with little deviations. There is no attempt to save by being clever either, it's all just good old vanilla JS. And no zipping, no space reduction. The code is perfectly readable when opened with the "View page source" button.

The trick is a zero-dependency policy. No third party, no internal. All the code you need, you get along with the HTML file. Paradoxically, in the long run, copy-paste is a bloat preventer, not a bloat cause.

lifthrasiir
0 replies
1d10h

You can do the same with dependencies and "modern" JS toolkits. Dependency itself is not a cause but a symptom; websites and companies are no longer incentivized to reduce bloat, so redundant dependencies are hardly ever pruned.

kaba0
0 replies
1d1h

It could add at least some minimal margin. On mobile, I literally can’t see the edges.

ohazi
2 replies
1d14h

Not surprised at all that JIRA is 50 MB, but still... What the fuck

redbar0n
0 replies
6h55m

Expected JIRA to be one of the worst offenders, tbh. In any category, really.

ihateolives
0 replies
1d11h

Meanwhile, Minix 1.1 uncompressed is 5 MB and comes complete with a network stack, etc. I used to run it on a 386 back in the day.

dieulot
2 replies
1d13h

For a more complete view, the HTTP Archive tracks general site weight over time and gives you percentiles (click Show table): https://httparchive.org/reports/page-weight#bytesJs

The median JS bundle is 600kB on desktop. p90 (“high 10%”) is 1830kB.

aembleton
1 replies
1d10h

Weird spike in March 2021 in the `Other Bytes` graph. Wonder what went on then; or if there was a glitch in their data.

lifthrasiir
0 replies
1d10h

That spike is only visible when Drupal/Magento/WordPress lenses are selected, and disappears with the top ~1M websites, so I assume it is a very long tailed behavior.

chuckadams
2 replies
1d15h

Jeepers, how does this even happen? I've been developing a fairly complex app with Nuxt, Apollo client, and PrimeVue, and paid no attention to size whatsoever. Yet the most complex page in the app with the most module dependencies loads only 3.8 megs, and that's not even a minified build. Same page from the Nuxt dev server throws 24.4M at me, but I'm pretty sure it's pre-loading everything. Do the big players just not do any code splitting at all?

On the other hand, node_modules weighs in at 601M. Sure I've got space to burn on the dev box, but that's reason #1 I'm not doing yarn zero-install and stuffing all that into the repo.
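For reference, the code splitting being asked about is mostly just dynamic `import()`, which bundlers turn into a separate lazily fetched chunk (the module name here is made up):

```js
// Load a heavy feature only when the user actually reaches it, instead of
// shipping it in the initial bundle.
async function openEditor(container) {
  const { mountEditor } = await import('./HeavyEditor.js');
  mountEditor(container);
}
```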

hdjrudni
0 replies
1d15h

I'm at 329 kB of JS. I've been building on the same app for 10 years. That's a lot of cruft built up. And I'm still nowhere close to any of these numbers. I've got React, jQuery, lodash, and I don't know what else in there.

Alifatisk
0 replies
5h5m

On the other hand, node_modules weighs in at 601M.

Use pnpm and see the difference

w3news
1 replies
1d12h

So true, we build large complex frameworks, abstractions over abstractions, trying to make things easy to build and maintain. But I think the problem is that many developers using these frameworks don't even know the Javascript basics. Of course there are smart people at these large companies. But they try to make things easy instead of teaching people the basics. We over-engineer web applications and create too many layers to hide the actual language. 20 years ago, every web developer could learn to build websites by just checking the source code. Now you can only see the minified Javascript after a build, and nobody understands how it works; even the developers who built the web application don't recognize the code after the build. I love Javascript, but only pure Javascript, and yes, with all its quirks. Frameworks don't protect you from the quirks; you have to know them so you don't create quirks yourself, and with all the abstraction layers, you don't even know what you are really building. Keep it simple, learn Javascript itself instead of frameworks, and you'll downsize the Javascript codebase a lot.

wruza
0 replies
1d10h

Pretty sure the situation wouldn't change if it wasn't minified.

Recently I had to add a couple of mechanics to sd-web-ui, but found out that the "lobe theme" I used was an insufferable pile of intermingled React nonsense. I returned to the sd-web-ui default look, which is written in absolutely straightforward js, and patched it to my needs easily in half an hour.

This is a perfect example based on a medium-size medium-complexity app. Most sites in TFA are less complex. The delusions that frontend guys are preaching are on a different level than everything else in development.

nazka
1 replies
1d7h

Great article! But in the conclusion jQuery is enough? No thanks

Alifatisk
0 replies
5h3m

jQuery being so lightweight compared to the rest felt a bit symbolic; idk how to express it.

jauntywundrkind
1 replies
19h58m

It sucks that we blew up CDNs for security reasons.

How nice it would be if sites using React could use the React already in cache from visiting another site!

I keep wanting to have some kind of technical answer possible here. Seems hard. And who cares, because massive bundles are what we do now anyways, in most cases. But it sucks that the web app architecture and web resource architecture are both massively less capable than they were 20 years ago.

ivanjermakov
0 replies
16h59m

An empty React app is ~400KB uncompressed, AFAIR. So CDNs alone wouldn't drastically improve the situation.

chrismorgan
1 replies
1d14h

JavaScript is a universal metric for “complexity of interactions”.

I don’t understand this claim in the slightest. It seems trivially falsified by the data that follows.

redbar0n
0 replies
6h56m

I think he means 'should be', and then goes on to show that in reality it isn't.

ulrischa
0 replies
1d10h

Make Jquery great again

sushirundown
0 replies
1d11h

This also happens with other software, and it's arguably even worse. Some mobile apps take up hundreds of megabytes because of all of the bloat that they feel the need to bundle.

singularity2001
0 replies
1d12h

The static page sites fit the narrative but for word editing and e-mail you should compare with native clients

renegade-otter
0 replies
1d9h

I often have to close a tab because my 12-core Mac mini starts heating the room and the fan sounds like it's about to fly off its axis. This is not specific to Javascript, but the ad code doing this is obviously JS.

Speaks more to the general enshittification of the web.

Does Web 3.0 fix this? /s

qsantos
0 replies
1d2h

Coincidentally, I have written a browser extension to navigate Hacker News comments using Vi-style key bindings [1]. It has no compilation step, no npm. It is mostly a 1kLoC file of vanilla JavaScript.

Modern frameworks are definitely needed for large applications, but there is no need for all that complexity when the scope is reduced.

[1] https://github.com/qsantos/ViHN/

qingcharles
0 replies
20h5m

I work with newly-released prisoners and homeless people who are mostly on free Lifeline phones. They typically get 15GB of data a month. This is used up in 2-3 days on average. After that they can sometimes get 2G data, but it is impossible to use Google Maps to get to an interview, or even to download their email or fill out a job application online. Because the phones are no longer usable, they often get lost or stolen.

I regularly come across web sites with >250MB home pages these days. It doesn't take many of those to kill your entire data allotment.

pepoluan
0 replies
8h58m

Will using HTMX save a lot?

panstromek
0 replies
1d11h

Interesting how well the amount of JS inversely correlates with my experience of using those sites...

ohmantics
0 replies
1d3h

What percentage of all that JS is just crap injected into sites by pointless invasive trackers?

l5870uoo9y
0 replies
1d12h

I suspect ChatGPT is so bloated because they (inefficiently) include entire Markdown parsing and code highlighting libraries. Add to that various tracking libraries and you have a big bundle.

jwr
0 replies
1d10h

I have a large and complex (ERP/MRP/inventory system for electronics) ClojureScript application and I was worried about my compiled code size being around 2.45MB.

This puts it into perspective. I heard complaints about ClojureScript applications being large, which I think is true if you write a "Hello, world!" program, but not true for complex applications.

Also, Google Closure compiler in advanced compilation mode is a great tool. Of course, since it is technically superior, it is not in fashion, and since it is hard to use correctly, people pretend it isn't there.

jongjong
0 replies
1d10h

Just checked my SaaS platform. The entire application front end is 1.3 MB. But 300KB of that are font files and 490KB is for a cryptographic library for blockchain authentication.

The thing is, I didn't use any bundling or minification. Also, it loads faster than most of the websites mentioned in this article, and that's with minimal optimizations and my server being located on the other side of the world.

ivanjermakov
0 replies
16h57m

Don't get me started on mobile and desktop apps where 100MB is a bare minimum.

gloosx
0 replies
1d12h

It would be useful if the author sorted the requests by size. Most of this junk is analytics, heatmaps, tracking and all that bullshit anyway. Of course you can easily make <1MB sites with the most complex UI and functionality, but the business just doesn't demand this or care while the pennies are flowing. Caching and compression are also very important, especially for virtualised sites like react-dev, which the author did not understand; they are essential features packed into and turned on in every browser, and in a "real-life" test I wouldn't disable them.

flohofwoe
0 replies
1d12h

And here I am feeling bad that my WASM C64 emulator doesn't fit into 64 KBytes anymore ;) (it's grown to 116 KBytes total compressed download size over the years).

diimdeep
0 replies
1d10h

Remember The Website Obesity Crisis [1] article from 2015? Since then [2] things have only gotten worse, and it has been almost 10 years already; well, it will be next year.

Is it foolish to say that in 10 more years you won't be able to navigate the web on a circa-2015 PC? If nothing changes, it seems like it.

My old MacBook from 2013 with the latest Firefox already cannot handle loading the https://civitai.com web page with its 23.98 MB of JavaScript; it just hangs for half a minute while trying to render this disaster of a web frontend.

It is not just the web; mobile all-in-one apps have gotten so large that a 2013 phone is likewise unable to load them. And guess what, half of them are written on top of a web tech stack. Why can't three-comma-budget companies afford to write native applications?

[1] https://idlewords.com/talks/website_obesity.htm

[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

danlugo92
0 replies
1d17h

I only know that I know nothing, but lately libraries (not my own code) have been getting increasingly bloated... mostly true for big-ticket stuff such as Firebase or Meta's stuff.

Anything homemade performs faster than ever though, so engines are getting better, while my code has stayed as simple as ever and keeps getting improved performance.

daflip
0 replies
1d10h

All the js asset sizes quoted in the article seem inflated compared to the figures I see when I try to verify the author's figures.

cranberryturkey
0 replies
1d13h

I lived in a trailer and had spotty wifi via an at&t hotspot. I also have Starlink, but trees made it unreliable. You really don't understand how bad js bloat is until you have a shitty internet connection.

ccorcos
0 replies
1d13h

Lots of websites render HTML and CSS from JS (e.g. react). So some of these “static” websites are getting an uncharitable review.

bufferoverflow
0 replies
13h35m

If you want to see the framework that does it right, check out Qwik.

Incredibly small JS / CSS bundles. Only loads what it needs.

https://qwik.dev/

brap
0 replies
1d10h

That dark-mode button is broken.

advael
0 replies
1d12h

I have to also wonder what percentage of each of those web logic figures comes from code that's there to silently do telemetry, rather than being part of e.g. a UX framework.

CraftThatBlock
0 replies
1d6h

Also worth noting that some of these websites (such as Linear) pre-load other pages once the current page is done loading. The actual JavaScript on the page seems to be about 500kb (as opposed to 3mb).

Alifatisk
0 replies
4h54m

While I agree with the article (I am also obsessed with keeping sizes as low as possible), I can't stop thinking about how it feels like the author is somewhat mixing up webapps with websites.

Websites requiring that much JS for doing very simple static tasks is bloat, but the same bar should not be set for webapps. It should still be required and a high priority to keep the bundle size low, but webapps should be considered a different category. Websites can (and should) function without JS; webapps cannot.

Another thing is that the author only looked at the visible elements, for example on Google Translate and Outlook. What he did not consider is that there are a lot more apps accessible behind the menus.

Take Outlook: at first glance, sure, it's just a simple app to display your emails. But in the sidebar you have access to a lot more features, like the calendar, contacts and the office suite.