
How web bloat impacts users with slow devices

ericra
44 replies
21h21m

As someone with recent experience using a relatively slow Android phone, it can be absolutely brutal to load some web pages, even ones that only appear to be serving text and images (and a load of trackers/ads presumably). The network is never the bottleneck here.

This problem is compounded by several factors. One is that older/slower phones cannot always use fully-featured browsers such as Firefox for mobile. The app takes too many resources on its own before even opening up a website. That means turning to a pared-down browser like Firefox Focus, which is ok except for not being able to have extensions. That means no ublock origin, which of course makes the web an even worse experience.

Another issue is that some sites will complain if you are not using a "standard" browser and the site will become unusable for that reason alone.

In these situations, companies frequently try to force an app down your throat instead. And who knows how much space that will take up on a space-limited device or how poorly it will run.

Many companies/sites used to have simplified versions to account for slower devices/connections, but in my experience these are becoming phased out and harder to find. I imagine it's much harder to serve ads and operate a full tracking network to/from every social media company without all the javascript bloat.

pixl97
14 replies
19h8m

That means no ublock origin

Talk about a catch-22 situation. The modern web is useless without adblocking. Especially when you get forever scrolling pages with random ads stuffed in there.

squarefoot
11 replies
17h35m

As a web development illiterate, I wonder how hard it would be to write a browser extension that loads a page, does the infinite scroll in memory and in the background, then, while it is still loading the infinite stuff, splits the content into pages and shows those instead, so that the user can go back and forth between page numbers. This wouldn't reduce the network and system load, however navigating the results would be much more friendly.

freedomben
9 replies
16h59m

Problem is, "infinite scroll" often is infinite, meaning it will load an ass load of data in the background and take up a ton of memory, and the user may never even end up looking at that data.

I really hate the load on scroll (especially Google Drive's implementation which is absolute trash, and half the time I'll scroll too fast and it will just miss a bunch of files and I'll have to refresh the page and try again), but a better hack might be an extension that scrolls a page or two ahead for you and stores that in memory. If it was smart enough to infinitely scroll websites that are actually finite (like google drive) that would be amazing though.

jwells89
4 replies
16h45m

In these situations what’s eating up your resources usually isn’t the data being represented but instead the representation.

This is why native apps use recycler views for not just infinite scroll, but anything that can display more rows/columns/items/etc than can fit on screen at once. Recycler views only create just enough cells to fill the screen even if you have tens of thousands of items to represent, and when you scroll they reuse these cells to display the currently relevant segment of data. When used correctly by developers, these are very lightweight and allow 60FPS scrolling of very large lists even on very weak devices.

These are possible to implement in JavaScript in browsers, but implementation quality varies a lot and many web devs just never bother. This is why I think HTML should gain a native recycler widget of its own, because the engineers working on Blink, Gecko, and WebKit are in much better positions to write high quality optimized implementations, plus even if web devs don’t use it directly, many frameworks will.
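
To make the idea concrete, here's a rough sketch of such a recycler in plain JavaScript (all names here are made up for illustration; it assumes fixed-height rows, a container with a fixed CSS height, and a caller-supplied renderRow callback; real implementations also handle variable heights, focus, and accessibility):

    // Minimal "recycler"/virtualized list sketch: only enough row elements to
    // cover the viewport are ever created; on scroll they are repositioned and
    // re-filled with the data that is currently in view.
    function createRecyclerList(container, { itemCount, itemHeight, renderRow }) {
      container.style.overflowY = "auto";
      container.style.position = "relative";

      // A spacer gives the scrollbar the full "virtual" height.
      const spacer = document.createElement("div");
      spacer.style.height = (itemCount * itemHeight) + "px";
      container.appendChild(spacer);

      // Create just enough cells to fill the viewport, plus a small buffer.
      const visible = Math.min(itemCount, Math.ceil(container.clientHeight / itemHeight) + 2);
      const cells = Array.from({ length: visible }, () => {
        const cell = document.createElement("div");
        cell.style.cssText = "position:absolute;left:0;right:0;height:" + itemHeight + "px";
        container.appendChild(cell);
        return cell;
      });

      function update() {
        const first = Math.floor(container.scrollTop / itemHeight);
        cells.forEach((cell, i) => {
          const index = first + i;
          cell.style.display = index < itemCount ? "" : "none";
          if (index < itemCount) {
            cell.style.top = (index * itemHeight) + "px";
            renderRow(cell, index); // the same DOM node is reused for new data
          }
        });
      }

      container.addEventListener("scroll", update, { passive: true });
      update();
    }

    // Usage sketch: 50,000 rows, but only a screenful of DOM nodes ever exist.
    createRecyclerList(document.getElementById("list"), {
      itemCount: 50000,
      itemHeight: 24,
      renderRow: (el, i) => { el.textContent = "Row " + i; },
    });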

Sn0wCoder
2 replies
16h16m

I find this idea interesting ‘These are possible in JavaScript in browsers, but implementation quality varies a lot and many web devs just never bother.’

Do you have any examples that you consider good implementations? I ask because tables seem to be the biggest offenders of slow components in say Angular / PrimeNG. I am going to a legacy app soon that is being updated (Angular but not PrimeNG). Would like to see if we can build a feature rich table that is more performant than the PrimeNG one that I know looks amazing but is the cause of many headaches.

NOTE: its not Angular or PrimeNG specifically that make the tables slow/hogs, but the amount of DOM elements inside and some of the implementation details that I disagree with (functions that are called within the HTML being evaluated every tick). Would be great to see if this idea of a ‘recycler widget’ can help us. Cheers.

nmjenkins
0 replies
13h8m

We do this at Fastmail and, if I say so myself, our implementation is pretty damn good. We’ve had this for over a decade, so it was originally built for much lower powered devices.

jwells89
0 replies
15h6m

its not Angular or PrimeNG specifically that make the tables slow/hogs, but the amount of DOM elements inside

Yep, this happens even with nothing but a 10-line vanilla JS file that adds N more items every time the user scrolls to the bottom of the page. Performance degradation increases with every load due to the growing number of DOM elements which eventually exceeds whatever margin is afforded by the machine the browser is running on, causing chug.
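
For illustration, the kind of naive loader being described looks roughly like this (hypothetical element id and item markup; a real page would fetch data over the network); note that nothing is ever removed, so the DOM grows without bound:

    // Naive infinite scroll: each time the user nears the bottom, append
    // another batch of elements. Layout, style, and memory costs keep
    // climbing as the user scrolls, because nothing is recycled or removed.
    const feed = document.getElementById("feed");
    let next = 0;

    function appendBatch(n) {
      const frag = document.createDocumentFragment();
      for (let i = 0; i < n; i++) {
        const item = document.createElement("article");
        item.textContent = "Item " + next++;
        frag.appendChild(item);
      }
      feed.appendChild(frag);
    }

    window.addEventListener("scroll", () => {
      const nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
      if (nearBottom) appendBatch(50);
    }, { passive: true });

    appendBatch(50); // initial fill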

Web is not my specialty so I don’t have specific recommendations, but plenty of results turn up when searching for e.g. “angular recycler” or “react recycler”.

thaumasiotes
2 replies
12h8m

Problem is, "infinite scroll" often is infinite, meaning it will load an ass load of data in the background and take up a ton of memory, and the user may never even end up looking at that data.

It's also an infinitely worse user experience and prevents you from holding your place in whatever is being scrolled. Are there advantages? Why is infinite scroll used in any context?

kmacdough
0 replies
9h29m

1 batch of content = 1 batch of ad space = more money.

Each next page click is a moment for you to reflect and notice the waste of time. Simple as that.

_flux
0 replies
9h27m

Personally I prefer infinite scroll, versus the alternative of finding the "next page" button at the bottom, waiting for the content to load (preloading could help here) and sometimes navigating back to the actual beginning of the content I was viewing. I even used a browser extension that matched "next" buttons on pages and loaded the next page's content automatically, but the extension (can't recall its name) is not available anymore.

Granted there are some downsides, such as having the browser keep extra-long pages in its memory, but overall I prefer working infinite scroll mechanisms over paged ones. As far as I can see, the ability to remember the current location in the page could be easily implemented by modifying the page anchor and parameters accordingly, though personally I've rarely needed it.
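
A minimal sketch of that idea (assuming markup where each item carries an id; history.replaceState is used so the back button isn't polluted with every scroll step):

    // Sketch only: record the first item still visible in the viewport in the
    // URL fragment, so a reload or a shared link restores roughly the same
    // place in an endlessly scrolling list.
    function rememberPosition(itemSelector) {
      window.addEventListener("scroll", () => {
        const items = document.querySelectorAll(itemSelector); // re-query: new items keep arriving
        for (const item of items) {
          if (item.getBoundingClientRect().bottom > 0) {
            history.replaceState(null, "", "#" + item.id);
            break;
          }
        }
      }, { passive: true });
    }

    rememberPosition("article[id]"); // assumes each item has a stable id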

Perhaps if there was a standard way (so, in the HTML spec) to implement infinite scrolling, it would work correctly in all cases and possibly even allow the user to select a paged variant according to their preference.

Not all the paged views work correctly either. In particular systems that show threaded discussions can behave strangely when you select the next page. Worst offender is Slashdot.

d1sxeyes
0 replies
9h11m

You don’t actually need to load everything, just the previous, current, and next pages.

wolpoli
0 replies
17h20m

It'll give a nicer experience and will eliminate the situation where an element changes location just as you try to tap on it.

The extension just needs to handle GDPR notices and email subscription overlays.

nox101
0 replies
9h55m

I just choose not to use it. If I follow a link and there is an ad per paragraph and a video starts playing, I close the tab. It's rare that the page I was about to look at was actually important.

gcanyon
0 replies
5h47m

I use ublock origin, and on literally more than one occasion (insert doofenshmirtz nickel quote) I've found a site that I quite like, think it's awesome to the point I actually write to the people who create it with suggestions, and then for whatever reason happen to load it without blockers and discover it's halfway useless with all the ads on it.

I fully support people being able to make some money off the useful things they build on the internet, whether it's some random person who built a thing, or the New York Times or even FB or Google, but there has to be a better local maximum than online advertising.

mattl
11 replies
15h52m

Can you run all your traffic through a self-hosted pihole to avoid such things?

ericra
6 replies
15h10m

Certainly an option for me. But not a scalable solution for the large number of non-tech people with older devices.

mattl
2 replies
15h8m

I’d love something like it for all my older devices where I can set it and forget it.

SkyArrow
1 replies
14h30m

NextDNS is pretty good for this - just change the DNS in your network settings.

NicoJuicy
0 replies
10h1m

You'll also need to add bundles to block DNS names (free, FYI)

bombcar
2 replies
11h28m

Wasn’t there an old browser that would render the page on the server and just send down the result or something like that?

kreddor
0 replies
10h59m

The old Opera for mobile did that. I think Chrome had something similar at one point.

jamiek88
0 replies
10h53m

Opera!

nottorp
1 replies
8h0m

You're already too rich and too tech aware to qualify as the low end described in the article if you ask that question :)

mattl
0 replies
3h17m

Maybe so but I also test things in a variety of browsers and devices frequently to try and avoid the problems described in the article.

em3rgent0rdr
1 replies
15h15m

Having a decent internet experience shouldn't require going through your own self-hosted server.

mattl
0 replies
15h9m

Absolutely not but then I never thought I’d need a 20,000 entry hosts file either.

bboygravity
7 replies
7h50m

Even if you do use a standard browser, companies will force you to use an app by making their website broken (on purpose?).

Random recent example: Nike. Popping useless errors upon checkout in the webshop. Support: "oh, we're so sorry, just try the app, k bye".

Another example of major companies with broken websites more often than not: (European) airline booking websites.

And major companies think this is totally fine and doesn't damage their brand? I mean not being able to create a functioning website with unlimited funds in 2024 is not a bad look?!

LilBytes
5 replies
7h1m

Reddit is another example where they've broken the mobile browser experience, to send you to another app. Arguably broken, but in different ways.

uaserussia
3 replies
4h39m

Pro-tip: type in old. where the www. used to be and reddit becomes usable.

Timber-6539
2 replies
4h22m

But then you get the desktop version of the site. Never mind that Reddit has a mobile-friendly version (whose design Reddit has kept on bungling too).

RF_Savage
1 replies
3h21m

The i.reddit mobile site sadly seems to have stopped working. At least for me.

nebalee
0 replies
2h34m

try adding .i to the end of the url.

rapnie
0 replies
5h52m

LinkedIn, leaders in deceptive design (though given the recent HN thread on their internal situation, a more favorable interpretation may be that they can't handle their own bloat and it shows).

n_ary
0 replies
5h48m

I can show some forgiveness to airlines, because they simply outsource it to some agency somewhere.

But I have zero sympathy for giants like Slack. If I do a "Request the Desktop Site", then it suddenly works (albeit with a lot of scrolling) on my Firefox (iOS), but if I disable "Request the Desktop Site", then it blocks everything and forces me to download the app from the App Store.

Sadly, the downloaded app looks like an optimized mobile version of the site.

anon373839
3 replies
14h36m

I've got an old MacBook Pro from 2013 that I still keep around because the keyboard is the best Apple ever made. It's not fast by any means, but I haven't encountered any difficulty with websites whatsoever. They're not as snappy as I'd expect on new hardware, but perfectly usable. I do use uBlock Origin, however.

Are these Androids actually less powerful than an 11 year-old, base-spec MacBook?

ericra
2 replies
14h24m

Are these Androids actually less powerful than an 11 year-old, base-spec MacBook?

Yes. Definitely. A MacBook Pro from 2013 has between 4 and 16 GB of memory, for one thing. The lowest-spec phone in the article (Itel P32) has 1 GB. A 2013 MacBook Pro has a 4th-gen i5 processor; this phone has a MediaTek MT6580. It's not even in the same ballpark.

This is a bit of an extreme example, but the fact is that a very large number of people in many areas of the world use phones like these.

jwells89
1 replies
13h36m

Additionally, weak Android devices are not necessarily old Android devices. New underpowered Android stuff is sold every day. Cheap tablets are particularly bad about this — I have a Lenovo tablet that I bought maybe a year ago which uses a SoC that benches a bit above a 2015 Apple A9.

neurostimulant
0 replies
6h27m

$50 android phones are still sold in developing countries and they usually have an MT6580 or UMS312 with 720p screen.

gmokki
2 replies
11h30m

I wrote code for the main nokia.com site 10 years ago; it used a few ways to detect slow loading of resources and set a flag to disable extra features on the site. This was done because the site had to work in every country, and many of the slowest phones sold were from said company.
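
The specifics aren't given, but one simple form of that kind of detection (a sketch under assumed thresholds, not the actual nokia.com code) is to race the page's own load event against a timeout and flip a "lite mode" flag that the rest of the scripts check:

    // Sketch: if the page hasn't finished loading within a budget, assume a
    // slow device/connection and skip non-essential extras.
    window.isLiteMode = false;

    const budgetMs = 3000; // assumed threshold
    const timer = setTimeout(() => {
      window.isLiteMode = true;
      document.documentElement.classList.add("lite"); // lets CSS hide extras too
    }, budgetMs);

    window.addEventListener("load", () => clearTimeout(timer));

    // Elsewhere on the page:
    // if (!window.isLiteMode) { loadCarousel(); loadVideoPlayers(); }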

dijit
1 replies
10h42m

I also worked for Nokia 13 or so years ago, though not on Nokia.com

Thanks for your work, one of the things that I really liked about Nokia was the passion for performance.

On the flip side: I was on the Meego project and we joked that we had the most expensive clock application ever created, because it kept being completely recreated.

SturgeonsLaw
0 replies
9h3m

I liked Meego and Maemo, I always felt that they were an expression of the idea that general purpose computing can work in the mobile form factor, which is something that tremendously appeals to me (I wish I still had my N900).

swiftcoder
1 replies
9h24m

In these situations, companies frequently try to force an app down your throat instead. And who knows how much space that will take up on a space-limited device or how poorly it will run.

And honestly, that app is going to be a browser shell with a (partially) offline copy of the website in it, 9 times out of 10...

MaxBarraclough
0 replies
8h26m

that app is going to be a browser shell with a (partially) offline copy of the website in it, 9 times out of 10

If you're lucky. The main UI may just be a website, but as a native app it has a greater ability to spam you, track you, accidentally introduce security vulnerabilities, etc.

lelanthran
42 replies
18h59m

This article is basically unreadable for me (48 y/o, on desktop). In the dev tools I added the following to the body to make it readable:

    font-size: 18px;
    line-height: 1.5em;
    max-width: 38rem;

Now look how readable (and beautiful) it is. I read a lot of Dan Luu's posts, and each time I have to do this sort of thing to make it readable.

Seriously, techies, it's an extra 64 bytes to make your page more readable.

gnicholas
17 replies
18h47m

The first time I saw this blog posted on HN I wondered how it could possibly be popular with such horrendous layout.

The conclusion I came to is that the audience is very tech-savvy and is used to activating Reader Mode when they encounter pages like this.

chmod775
9 replies
18h34m

I went with the other techie solution: resizing my browser window.

mardifoufs
7 replies
17h11m

How do you do that on mobile?

skydhash
3 replies
16h12m

Who reads articles like this on mobile? In a pinch, I'd just activate Reader Mode (Safari, iOS), or more likely save it for reading on a bigger screen (tablet, laptop, …)

wcedmisten
0 replies
3h55m

Who reads articles like this on mobile?

The irony of this on an article about how developers ignore users on low-performance mobile devices

pbronez
0 replies
4h11m

Read it on iOS Safari, without reader mode. Worked great.

Only thing that annoyed me is that there are very lengthy appendices. Thus the scroll bar suggests the main article is much longer than it actually is.

ParetoOptimal
0 replies
11h42m

I just read it on Firefox mobile without reader mode.

kjkjadksj
0 replies
49m

It's already perfectly readable on mobile, either vertically or horizontally (a rare affordance these days).

dxdm
0 replies
11h13m

Firefox on Android has a button to activate Reader Mode right in the URL bar.

chmod775
0 replies
14h57m

Flip the phone into portrait mode.

gnicholas
0 replies
15h55m

Since I have a ton of tabs open and jump between them, this ends up not being a solution I use anymore.

mardifoufs
3 replies
17h8m

It's more of a hipster thing imo. For some people, since it's minimalist and looks "old", it must be good. Like, I get keeping it simple, but man, it's CSS...

anon373839
2 replies
14h44m

Yep, exactly. It's fashion. FOUC-chic.

joeblubaugh
1 replies
14h33m

Dan’s site has been like this for over a decade. If it’s a fashion, then he’s one of the creators of it.

anon373839
0 replies
14h26m

Brutalist web design has been a thing for a while: https://www.washingtonpost.com/news/the-intersect/wp/2016/05...

Some of it can be appealing, when basic ergonomic needs are met (readable text size and line length, adequate margins, and so forth). Most is just brutally pretentious, IMO.

literallycancer
1 replies
2h16m

The text fills the entire screen on mobile. That's a lot better than reading something where there's 50% of whitespace.

kjkjadksj
0 replies
49m

Or better yet a postage stamp of text between two ad players and a header and footer banner

lstamour
0 replies
18h40m

Actually when I hit pages like this, I use the increase font size buttons. I tend to do this on phones too, especially. Yes, reader mode is also an option, but just bumping up the font size works too. You could also go back to the days when we had 800x600 monitors and 16px tended to be just the right size for that. ;-)

nottorp
4 replies
7h50m

max-width: 38rem; Now look how readable (and beautiful) it is.

How is it readable when you're limiting text width and not taking advantage of the whole screen you paid for?

[Turning 48 next month and wearing glasses.]

wraptile
3 replies
7h37m

FYI the optimal line length is 50-75 characters, and that has been the standard for text since typewriters. You don't want to move your neck when you read a single line; that's kinda silly.

nottorp
2 replies
7h20m

that has been the standard for text since typewriters

I have a feeling it was the standard because they used the minimum font size to make the letters readable, and that's how much it fit on the physical page width. Which was standardized before typewriters for unknown historical reasons?

You don't want to move your neck when you read a single line that's kinda silly.

I don't have to move my neck to read the article spread across the full width of my monitor. On a 13" laptop or a 24" desktop. Are you using a 21:9 ultrawide?

wraptile
1 replies
5h37m

It's been this way forever because it's not particularly difficult science and is extremely easy to test for so there are probably thousands of papers covering this. Here's a good summary by Baymard Institute[1].

Also WCAG recommends line length set to <80 characters too [2]. I'm not sure what else could make this more convincing or official.

1 - https://baymard.com/blog/line-length-readability

2 - https://www.w3.org/WAI/WCAG21/Understanding/visual-presentat...

nottorp
0 replies
3h28m

Also WCAG recommends

"recommends". Want to deny me the option of longer lines?

gerdesj
3 replies
18h36m

I'm 53 and I'm at least five years behind getting my specs sorted out - they are currently perched right on the end of my nose now and I have to get the angle right sometimes (astigmatism).

That page is nearly fine for me but I just hit CTRL + to scale up. That works for me.

That page is pure text with no or at least minimal fiddling. You have your solution for your use case and I have mine. A blind reader will also have their solution, so they can even access it. Thanks to the simplicity of the source: all solutions to accessibility are also going to be reasonably simple.

I think that Dan understands how to communicate effectively - keep it simple and don't assume that eyes will read your words. You can trivially (and you do) fiddle with the presentation yourself for your own purposes.

I think that if you don't like the presentation of something like this then you could reformat it yourself, prior to engagement. Dan has kindly provided his message as a simple text stream that can be trivially fiddled with.

solatic
2 replies
5h54m

That page is nearly fine for me but I just hit CTRL + to scale up. That works for me.

How do you do CTRL++ on a mobile phone?

literallycancer
0 replies
2h19m

In Brave you can do Accessibility - Text Scaling

fireflash38
0 replies
2h34m

Pinch to zoom, which basically ever since it was invented should reflow elements.

ordu
2 replies
15h47m

> In the dev tools I added the following to the body to make it readable

For cases when you don't agree with the styles there is Reader Mode. Your way also works, but Reader Mode is simpler; it is just one click away.

gnicholas
0 replies
15h9m

True, although not all browsers have Reader Mode. Chrome didn't have it until last year, and the version they built is a sidebar, unlike most Reader Modes. This is probably because they want to make sure ads are shown alongside the Reader Mode.

gitaarik
0 replies
14h50m

In Reader Mode the colors in the table disappear. Ironically, the author does style that.

BlackFingolfin
1 replies
7h12m

I just activate reader mode on his pages, works great. (Not disagreeing with you, just stating another workaround)

Also wish his pages had dates on them (one or both of first posted / last updated). AFAIK he intentionally leaves them out, I don't get why.

lifthrasiir
0 replies
5h33m

AFAIK he intentionally leaves them out, I don't get why.

Some people like to brag about the timelessness of their articles [1], and that might be one reason. (I personally don't fully agree though, even the linked original WikiWikiWeb page has a last edited date.)

[1] https://wiki.c2.com/?WikiNow

zzo38computer
0 replies
17h44m

I disagree. The user can change the window size, font size, colours, etc according to their own preferences.

I read a lot of Dan Luu's posts, and each time I have to do this sort of thing to make it readable.

You shouldn't have to. You should be allowed to add a CSS file which can apply to multiple files, and then use that, instead of having to do it for each file individually.

userbinator
0 replies
13h58m

Then adjust your browser settings to your preference, because that certainly isn't mine either.

I've had to remove "max-width"'s from a ton of sites using my filtering proxy. My window is this big, I expect your content to fill it!

progval
0 replies
10h52m

You can change your browser's default font size if you find it too small. It's in Firefox's main settings page. Websites shouldn't force "font-size: 18px;" because it then makes the font smaller for users who picked a larger font in their browser.

jagged-chisel
0 replies
18h53m

Do you have any idea of the layers of tooling you must use these days to produce those 64 bytes, and how each of those layers changes and removes what was fed in from all the other layers? To get exactly those bytes out the other end of the tools would be a herculean effort.

Because we can’t just go around trying to understand basic web-based development without the frameworks … can we?

hmottestad
0 replies
18h56m

It’s pretty terrible on my phone too. Almost no margins and small font. Thankfully Reader Mode works in Safari, which fixes everything.

extra88
0 replies
18h56m

I agree that they should add some minimal CSS. But using your browser's Reader View also works, a click rather than multiple steps in DevTools.

erichdongubler
0 replies
4h39m

Funnily enough, Dan calls out the differences of opinion of the styling of his site starting at this paragraph:

Just as an aside, something I've found funny for a long time is that I get quite a bit of hate mail about the styling on this page (and a similar volume of appreciation mail). …

dchest
0 replies
13h6m

If you can't read font-size: 14px, you got your resolution/scaling/screen size wrong. The default text size is similar to the standard text size of OS UI controls. If you can't read them, I'd suggest reconfiguring your setup: change the resolution, change the scaling, or configure the default zoom level.

blehn
0 replies
18h51m

I think your mods are sensible, however if Dan Luu added those CSS rules himself, there would be comments on here lamenting the low density and "excess whitespace". Luu's audience, on the whole, probably prefers the relatively unstyled approach.

XorNot
0 replies
18h24m

I'd prefer to see the grey text trend die, honestly. I think my number one style rewrite is just setting `color: black` on things.

genewitch
42 replies
19h22m

as a data point youtube is unusable on raspberry pi 3. This happened within the last year, because prior to that you could "watch" videos at about 10-15FPS which is enough, for instance, to get repair videos in a shop setting (ask me how i know). When the raspberry pi model B - the first one released - came out, you could play 1080p video from storage, watch youtube, play games.

I'm not sure what youtube is doing (or everyone else for that matter.)

If we're serious about this climate crisis/change business, someone needs to cast a very hard look at google and meta for these sorts of shenanigans. eating CPU cycles for profit (ad-tech would be my off the cuff guess for why youtube sucks on these low power devices) should be loudly derided in the media and people should use more efficient services, even if the overall UX is worse.

LM358
14 replies
19h9m

Could it just be due to lack of hardware video decoding? The Pi 3 has H.264 HW acceleration and YouTube started using other codecs a while ago.

gerdesj
7 replies
18h57m

Is YT so impoverished they can't manage some sort of negotiation mechanism that includes H.264 and makes it work?

hinkley
3 replies
18h37m

They encode videos ahead of time and they likely decided that whatever hardware you’re judging them by is only .9% of the market so fuck those guys.

Big companies use percentages in places they shouldn’t and it gets them in trouble. .1% when you have a billion users is a million people you’re shitting on.

For me that might be a dozen people. Very different.

wonnage
2 replies
15h50m

Encoding and storing billions of videos in a format used by 0.1% of users feels like a waste though

hinkley
0 replies
15h12m

The context above was exclusion of people based on income level.

geraldhh
0 replies
7h22m

robustness is only wasted if you're lucky

treflop
0 replies
18h53m

That might be more on the browser that you’re using. It might be saying “yes I can play this format” to a format it can barely play.

ogurechny
0 replies
16h51m

Supposedly, the whole point of Google financing “open codecs” was for them to break free from MPEG codec licensing. I imagine the total amount of fees had a lot of zeros. So, yes, each time they don't serve H.264 (unless absolutely required), they save a lot of money.

geraldhh
0 replies
7h23m

every yt video is available as H.264 but VP9 is cheaper (smaller) and has better quality

Retr0id
4 replies
19h5m

I have no idea if it still works, but the "h264ify" browser extension used to be great for working around this issue (by forcing youtube to serve h264) https://github.com/erkserkserks/h264ify

genewitch
2 replies
18h34m

i did a full apt dist-upgrade to try and get the h264ify plugin to install and if i remember correctly i never was able to get it to install. I upgraded from "chromium" to "chromium-browser" and set all the compositing and other settings recommended for the RPI.

and to reply to another sibling, "yt-dlp" isn't workable, this is for a senior citizen that does small motor repairs.

I got an HP elitedesk that's a few years old coming in monday to replace the RPI; hopefully that will last another 3 years before google et al decide to "optimize" again.

antisthenes
1 replies
16h3m

RPI 3 for a senior citizen seems like a poor solution in the first place.

I would have opted for a small business-pc that is x86 based and 3-4 years old.

geraldhh
0 replies
7h21m

a used laptop that can play youtube videos can be had for about the same money

geraldhh
0 replies
7h31m

    ytdl-format=best[vcodec!*=vp9]

extra88
0 replies
18h52m

Probably. I remember when YouTube switched to H.264 (it might have been some Flash-based video before that). I had an older Mac mini hooked up to my TV at the time, and suddenly video framerates dropped to an unwatchable level because they saved their bandwidth (and mine, but I didn't have to care because my Internet service was not metered) at the expense of client-side processing.

gruez
13 replies
11h59m

If we're serious about this climate crisis/change business, someone needs to cast a very hard look at google and meta for these sorts of shenanigans

By all accounts client devices' energy consumption is a rounding error in terms of contribution to climate change. Going after them to solve climate change makes as much sense as plastic straw or bag bans.

MrVandemar
5 replies
10h37m

It has a cumulative effect and drives the continual "upgrade" cycle. When you consider the life-time of an average mobile device, and the resources required to manufacture and ship them, it's a not insignificant problem.

gruez
3 replies
10h11m

Random source from google[1]:

Berners-Lee writes that in 2020, there were 7.7 billion mobile phones in use, with a footprint of roughly 580 million tonnes of CO2e. This equates to approximately 1% of all global emissions

Of course, not everyone is replacing their phones yearly. Another source[2] says the average consumer phone is 3 years old. That works out to 0.33% of global emissions, assuming the phones aren't recycled/reused in developing countries. Even if we assume people are upgrading their phones for app/web performance reasons, the impact is far less than 1%.

[1] https://reboxed.co/blogs/outsidethebox/the-carbon-footprint-...

[2] https://www.statista.com/statistics/619788/average-smartphon...

nicce
1 replies
7h57m

Isn’t that quite a huge number, to be fair?

gruez
0 replies
7h43m

Compared to a single person's emissions? Yeah sure, but that's because anything multiplied by 8 billion people is going to be huge. The same could be said for plastic bags and/or straws. In relative terms it's absolutely minuscule, and in terms of low hanging fruit it's definitely not the top. You'd be far better off figuring out ways to decarbonize the electricity grid (40%) or the transport system (20%)

Panzer04
0 replies
5h41m

To be clear, these emissions include the manufacturing cost, which for reasonable users seems to make up ~80-90% of the carbon footprint. The power usage of the phone itself and associated data centres etc is only a small portion.

It's still somewhat surprising that one could attribute 0.2% of global emissions solely to phone power consumption... I would have expected it to be lower.

dmwilcox
0 replies
3m

I would imagine for phones and laptops the extraction of materials (rare earth metals to make fancy new chips, lithium for batteries,etc) is probably the bigger issue.

Having gotten away from 500+ watt desktops as the standard for light non-gaming computing has been a win in the energy consumption court.

I think there are lots of good reasons to avoid the upgrade cycle but energy consumption of the end device itself probably isn't it. (Embodied energy of the devices, environmental impacts of mining, no good EOL story for ewaste, etc)

nottorp
4 replies
7h45m

By all accounts client devices' energy consumption is a rounding error in terms of contribution to climate change.

It adds up? How many devices are there? Tens of billions?

Web 345 devs just don't care because the costs are borne by the customer.

gruez
3 replies
7h33m

The customer doesn't care either because a page that takes 5s longer to load on a 1W TDP SoC costs them around one-millionth of a penny. Even if you're refreshing 100 times per day it's only around 0.05 kWh per year, which at any reasonable electricity prices is a sum that's simply not worth worrying about. You'd get more savings from getting people to turn off their led light bulbs for a few minutes.
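
For reference, the 0.05 kWh figure follows from treating each load as the SoC running flat out at 1 W for the extra 5 seconds:

    5\,\mathrm{J/load} \times 100\,\mathrm{loads/day} \times 365\,\mathrm{days} \approx 1.8 \times 10^{5}\,\mathrm{J} \approx 0.05\,\mathrm{kWh}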

nottorp
2 replies
7h23m

US centric electricity prices view :)

Also, it's not just your site. It's every site. And the customer pays all those millionths of a penny added up out of their pocket. And all those 5 second delays out of their lifetime.

Edit: btw at a quick glance you underestimated cell phone SoC TDP by a factor of 2-4.

Panzer04
1 replies
5h44m

A single use of an electric kettle sounds like it would completely dominate this consumption.

The time cost is certainly the greatest expense here, power is cheap in consumer computing contexts, generally speaking (at least nowadays with most things racing to sleep), and is mostly relevant because of battery life, not power cost.

nottorp
0 replies
4h32m

A single use of an electric kettle sounds like it would completely dominate this consumption.

But at least that gets you tea, instead of engagement.

You've got to put X joules in to boil Y liters of water. No choice there, except giving up on the tea.

You can greatly reduce the joules necessary to see cat photos though. And you don't have to give up on seeing the cat photos.

maigret
1 replies
11h13m

IT is emitting around as much as aviation, and that was a surprise to me; most of it is due to client devices. Don’t have the source at hand at the moment though. And of that, most emissions are upfront, before you even buy it. Buying a new device because it’s not fast anymore causes emissions, not running it. Think about e-waste as well.

gruez
0 replies
10h18m

IT is emitting around as much as aviation

What counts as "IT"? It's most certainly a superset of "client devices", which is what my and the parent comment was talking about.

userbinator
3 replies
14h2m

I use Invidious for browsing the site, and watch the actual videos via a script that deobfuscates and gets the actual stream URL and then passes that to VLC.

As another data point, YouTube a decade ago would've been perfectly fine on that hardware too. The culprit is web bloat in general, and more specifically the monstrosities of abstraction that have become common in JS.

Even for those who don't believe at all in "climate crisis", there is something to be said for the loss of craftsmanship and quality over time that's caused this mess, so I think it's something everyone across the whole political spectrum can agree with.

zelphirkalt
1 replies
8h36m

Can you share that script? Also using invidious, but passing to vlc sounds good for saving cpu cycles.

nolist_policy
0 replies
8h4m

Just use something like this:

    mpv --demuxer-max-bytes=1024MiB --vo=gpu --opengl-es=yes --ytdl --ytdl-format="best[height<=800]" "$url"

geraldhh
0 replies
7h28m

'apt install yt-dlp mpv'

then put this in '.config/mpv/mpv.conf' to thwart hw requirements

    ytdl-format=best[height<=?720][vcodec!=vp9]/bestvideo[height<=?720][vcodec!=vp9]+bestaudio/best[vcodec!*=vp9]/best

and pass url's to it (i use 'play-with' ff extension)

nicbou
2 replies
14h48m

I had to upgrade my 12" Macbook because Youtube Music brought it to a crawl. I could play music or work, but not both.

bombela
0 replies
8h24m

That's absurd. I remember using Winamp (and the skin-compatible Linux clone, I forgot its name) streaming internet radios while programming a toy OS in 2004. I could listen to music while compiling and running the Bochs emulator on my AMD Athlon CPU with a whopping 256 MiB of RAM.

Nextgrid
0 replies
7m

I used a 12" Macbook as my main development machine. It ran IntelliJ with Python/Django applications, Postgres & Redis running in parallel (along with Safari, Mail, etc) around 2018-2020 just fine.

Tried it somewhat recently around Ventura and the machine clearly appeared to be struggling with the OS alone. So we had a machine that used to be capable of actual, productive work, and is now seemingly struggling at idle? It doesn't look like the new OS brought anything new or useful to the table (besides copious amounts of whitespace) either.

pimlottc
1 replies
15h39m

YouTube is definitely getting heavier. My early 2021 MacBook Air (Intel) now gets random video pauses under moderate load, something that never used to happen.

nicce
0 replies
7h22m

Could be just ads that adblocker tries to block. Google is trying new ways all the time to bypass adblockers.

ogurechny
0 replies
16h29m

YouTube was not tested because monitors can't handle CMYK, and we need a lot of that extra coal black to color the results.

nolist_policy
0 replies
19h1m

It's worth trying out different browsers. In my experience Chromium-based browsers are a bit faster than Firefox on really low-end devices (Pinephone, ...) as long as you have enough RAM (>1 GB?).

E.g. On the OG Pinephone a 720p video on Youtube is running smoothly in Chromium, but not Firefox.

hinkley
0 replies
18h40m

We need some watchdog group that watches page weight across sites and users and names and shames them.

Maybe they could do that Consumer Reports style, or maybe it's an add-on that works a bit like Nielsen ratings.

flir
0 replies
18h57m

I've got an old Roku box that has started rebooting after a few minutes of playing youtube videos.

In your case, maybe pulling the video with yt-dlp then playing it works...

bluquark
41 replies
20h9m

Dan's point about being aware of the different levels of inequality in the world is something I strongly agree with, but that should also include the middle-income countries, especially in Latin America and Southeast Asia. For example, a user with a data plan with a monthly limit in the single-digit GBs, and a RAM/CPU profile resembling a decade-old US flagship. That's good enough to use Discourse at all, but the experience will probably be on the unpleasantly slow side. I believe it's primarily this category of user that accounts for Dan's observation that incremental improvements in CPU/RAM/disk measurably improve engagement.

As for users with the lowest-end devices like the Itel P32, Dan's chart seems to prove that no amount of incremental optimization would benefit them. The only thing that might is a wholesale different client architecture that sacrifices features and polish to provide the slimmest code possible. That is, an alternate "lite/basic" mode. Unfortunately, this style of approach has rarely proved successful: the empathy problem returns in a different guise, as US-based developers often make the wrong decisions on which features/polish are essential to keep versus which can be discarded for performance reasons.

zozbot234
7 replies
18h21m

> That's good enough to use Discourse at all, but the experience will probably be on the unpleasantly slow side. ... an alternate "lite/basic" mode

Why does this need to be the "alternate" choice though? What does current Discourse provide that e.g. PhpBB or the DLang forum do not? (Other than mobile friendly design, which in a sane world shouldn't involve more than a few tweaks to a "responsive" CSS stylesheet).

Cacti
3 replies
17h53m

Voice, video, realtime interaction, a devoted user base, an incredible amount of money…

a_bored_husky
1 replies
17h35m

Discourse, not Discord.

Cacti
0 replies
6h38m

whoopsie. thanks.

zelphirkalt
0 replies
17h32m

What do you mean by voice and video? Why would I want to have voice in a forum? I think that would be akin to receiving voice messages in messengers. Or do you mean, that for these kinds of things a widget can be displayed? That certainly is possible in old style forums. It is just HTML, an embed code away.

mardifoufs
1 replies
17h12m

I like the scroll view in discourse. Makes it super easy to follow a thread. The subthreads and replies are also easier to use. The search is better, the ability to upvote makes it better for some use cases, and in general phpbb is a mess in terms of actually being able to see what's useful and what threads are relevant.

I think flipping the question makes more sense, why do you think some forums switched to or started using discourse instead of just using phpbb? I can guarantee you that it's not just to follow a fad or whatever, most niche or support forums don't care about that.

ParetoOptimal
0 replies
11h51m

I do think trendiness and modern-feeling UIs are requirements for most forums these days, from most perspectives.

I say this as someone that frequently uses and enjoys both the brutalist design of a text web browser and the emacs mastodon client.

AJ007
0 replies
17h5m

I was thinking about this when I saw this post earlier today.

Why shouldn't the default be: does this website work in Lynx? I think that's a damn good baseline.

And in response to the other parent post, on a (almost) new iPhone, both news sites & Twitter continuously crash and reload for me. I'm not sure what the state of these other popular sites are because I don't use them.

yawaramin
6 replies
18h9m

The only thing that might is a wholesale different client architecture that sacrifices features and polish to provide the slimmest code possible. That is, an alternate "lite/basic" mode. Unfortunately, this style of approach has rarely proved successful

But it is gaining popularity with the unexpected rise of htmx and its 'simpler is better even if it's slightly worse' philosophy.

golergka
5 replies
18h7m

Isn't that 'worse is better' philosophy?

LoganDark
4 replies
16h4m

I think it's rather a "performance is more important than functionality" philosophy.

yawaramin
3 replies
14h44m

In the case of the devices we're talking about, performance is effectively functionality.

lukan
1 replies
8h5m

The most performant site is a blank page.

fuzzfactor
0 replies
2h18m

Astute observation.

It should be easy to use this as a "north star" and your only job is to not screw it up hardly at all.

Some people are just worse screw-ups than others.

LoganDark
0 replies
14h41m

My point exactly. By making your website fast and light, you make it easier and more pleasant to use. HTMX has a limited set of actions that it supports, so it can't do everything that people typically want. It can do more than enough though. (remember websites that actually used the `<form>` element?)

jhanoncomm
5 replies
19h45m

If all the sites got more efficient it may also increase the longevity of laptops and PCs, where unsavvy people might just “need a new computer, it is getting slow”.

Also applies to bloatware shipped with computers. To the point where I was offered a $50 “tune up” to a new laptop I purchased recently. Imagine a new car dealer offered you that!

genewitch
3 replies
19h17m

I worked at a now-defunct electronics store (not fry's in this instance) in the early 2000s that offered this "tune-up" - it was to remove the stuff that HP and Dell got paid to pre-install, and to fully update windows and whatever else.

Remove the McAfee nuisance popups and any browser "addons" that were badged/branded. And IIRC we charged more than $50 for that service back then.

fbdab103
2 replies
17h53m

For the performance boost it could offer the unsavvy user stuck on an HDD, it was probably worth it to many. Gross to be the middleman, but it is what it is.

genewitch
1 replies
9h52m

Another computer shop i worked in charged $90 for virus removal, but we also eventually made it policy to just reformat/reimage the drive and remove all the crap and fully update the OS. Prior to that the policy was "remove viruses, remove crapware, update OS", but we had a few customers that had machines with 30,000 viruses. I forget what the record was, but it was way up there in count. Trying to clean those machines had a marginal failure rate, enough that it was costing the owner money to have us repeatedly clean them without payment.

No one wants to tell a customer that they need to find better adult content sites, and that we won't be cleaning their machines without payment anymore!

lukan
0 replies
8h7m

"just reformat/reimage the drive and remove all the crap"

And that is not more work?

It was usually the way I did it, too. But this requires checking with the owner what apps are important, saved preferences, where the important files are stored (they never know), etc.

teamonkey
0 replies
7h59m

What’s the financial incentive in that? Manufacturers ideally want you to buy a whole new device every year, they don’t want you repairing or extending the life.

goalieca
4 replies
19h18m

For example, a user with a data plan with a monthly limit in the single-digit GBs, and a RAM/CPU profile resembling a decade-old US flagship

I’m in Canada and have a single digit plan and I just upgraded from an almost decade old flagship. Most websites are torture.

II2II
2 replies
15h59m

I'm in Canada and have a triple-digit plan, in MBs. It's for emergency use only. It would be nice if something as simple as checking on power outages didn't chew up a good portion of the data plan.

doubled112
1 replies
14h59m

I had a 200MB plan for $35/month until early 2022. It was an old Koodo plan.

I never used it. I don't do a lot. WiFi at home, drive to work, WiFi at work, drive to home.

Travelling with the kids I've found the new plan makes life easier.

II2II
0 replies
4h54m

Yeah, different people need different things out of their phones. Yet the point remains that stingy data plans still exist in developed countries. Even though people may have better devices than those mentioned in the article (it is easier to justify a one-time expense than a recurring one), there are people who are stuck with them for various reasons. Affordability is definitely one of the reasons.

Even so, we should avoid pigeonholing those who have limited access to data as poor people. There are other reasons.

123yawaworht456
0 replies
18h13m

in mid 00's, I had ADSL with iirc ≈300 MB included in the monthly payment, with an extremely predatory rate over the limit. I used to stretch it for 3 weeks out of a month browsing with images disabled (and bulk of my bandwidth spent on Warcraft 3).

that would last for a few hours of lightweight (not youtube/images/etc) browsing now.

freddie_mercury
3 replies
6h16m

For example, a user with a data plan with a monthly limit in the single-digit GBs

I live in a poor Southeast Asian country.

People with small data plans don't use data from efficient websites, they use wifi which is omnipresent.

30GB of data on a monthly plan is $3.64. Which is about 4-6 hours of minimum wage (minimum wage is lower in agricultural areas).

But more to the point, people don't use data profligately like in the West. Every single cafe, restaurant, supermarket, and mall has free wifi. Most people ask for the wifi password before they ask for the menu.

I've never seen or heard anyone talk about a website using up their data too fast.

It honestly sounds like a made up concern from people who've never actually lived in a developing country.

People here run out of data from watching videos on TikTok, Instagram, and Facebook. Not from website bloat.

lozenge
0 replies
1h28m

"It honestly sounds like a made up concern from people who've never actually lived in a developing country."

You mean, the one developing country you live in.

You are also missing the full spectrum of users. People don't just browse the web for fun. They look for important information like health or finance information, they might not want to do that in a public place or they might not be able to put it off for when they next have wifi.

If you are building an e commerce website it might not matter, but you could be building a news site, or any number of other things.

keybored
0 replies
5h4m

I mean, not using a data plan here in Northern Europe was me 11 years ago… and using it sparingly because video or songs would blow through the data plan instantly was me eight years ago.

CaptainFever
0 replies
4h52m

Thank you for the first hand experience anecdote!

I think one way for first world country citizens to empathise with this is how people behave when on roaming data plans during overseas trips. One does keep to public WiFi as much as possible and keep mobile data usage to a minimum or for emergency purposes.

Telemakhos
3 replies
17h1m

It's not even just the middle-income countries—I have an iPhone 13, so only three years old, on a US wifi connection with high speed broadband, and it can't handle the glitzy bloat of the prospectus for one of my ETFs. I don't understand why a prospectus shouldn't just be a PDF anyway, but it baffles me that someone would put so much bloated design into a prospectus that a recent phone can't handle it.

eviks
2 replies
12h7m

It shouldn't be a PDF because PDFs don't reflow text, which is especially important on phones.

literallycancer
1 replies
4h31m

Make 2 pdfs.

eviks
0 replies
3h56m

there are more than 2 screen widths

prisenco
2 replies
18h31m

an alternate "lite/basic" mode.

In another world this mode dominated UI/UX design and development and the result was beautiful and efficient. Where design more resembles a haiku than an unedited novel.

We don't get to live in that world, but it's not hard to imagine.

bee_rider
1 replies
17h51m

I think it is sort of hard to imagine; a world populated mostly by humans that appreciate that sort of simplicity is pretty different!

If we had modern computers in 200X, we wouldn’t just have music on our myspaces, we’d put whole games there I bet.

csande17
0 replies
16h23m

People did, in fact, embed games on MySpace, mostly using Flash if I recall correctly.

gxs
1 replies
19h23m

Some of these sites are un-fucking-bearable on my gen old iPhone.

And if I’m in a place with a shitty signal, forget about it, this problem is 10 times worse.

I’m not even talking about the cluttered UI where only a third of the page is visible because of frozen headers and ads, I’m talking about the size of the websites themselves that are built by people who throw shit against the wall until it looks like whatever design document they were given. A website that would have already been bloated had it been built correctly that then becomes unusable on a slow internet connection, forget slow hardware.

All that is to say, I can’t imagine what it must be like to use the internet under the circumstances in which you described.

I can only hope these people use localized sites built for their bandwidth and devices and don’t have to interact with the bloated crap we deal with.

ryandrake
0 replies
8h11m

I really wish all software developers had to have 10 year old phones and computers and a slow 3G connection as their daily drivers. It might at the very least give them some empathy about how hard it is to use their software on an underspec machine.

bombcar
0 replies
11h26m

Most of those users have the advantage of not using English - and so there are often sites in their native language that cater to lower power devices.

But if you’re in that middle-income country AND your official language is English, you’re gonna have a hell of a slow time.

andai
0 replies
3h47m

Could you elaborate on features and polish, i.e. give some specific examples?

porcoda
36 replies
12h6m

I like how most people blame bosses or scary big companies. No developers appear willing to admit that there is a large cohort of not-that-great web programmers who don’t know much (and appear to not WANT to know much) about efficiency. They’re just as much to blame for the sad world of web software as the big boss or corporate overlord that forced someone to make bad software.

holri
7 replies
10h33m

Usually bad web software correlates with bad content. Therefore having a slow device is an excellent filter helping to avoid garbage.

teamonkey
5 replies
8h8m

Unfortunately the modern web has consolidated to a point where you need to use them. For example, small local businesses that don’t have a web site but do have a Facebook page.

geraldhh
2 replies
7h53m

seems fair to correlate "small local businesses that don’t have a web site but do have a Facebook page" with "bad content"

teamonkey
1 replies
4h38m

My local butcher provides good content without being terminally online.

Unfortunately this means needing to use Facebook to find out if they’re open on a national holiday.

geraldhh
0 replies
2h47m

idk if "only facebook" is worse than no online presence at all

holri
1 replies
7h26m

Having only a facebook page and forcing people on that toxic platform, is a strong indication that they do not value freedom (of the web) and ethics. Again a good filter for business / people I want to avoid.

FragmentShader
0 replies
6h39m

is a strong indication that they do not value freedom (of the web) and ethics.

I don't think the average barbershop/restaurant owner will care about that, for instance? They just wanna set up a Facebook/Instagram and done, they can now instantly receive messages from clients to make reservations and also share their stuff with posts. I bet they don't even know they can make a website.

Also, every time they end up getting a website, it's powered by Wordpress hosted on the slowest server you can imagine. And it will end up redirecting you to a proprietary service to make your reservation (Whatsapp, Facebook, Instagram...)

At least that's what I see in Europe and south america, I have no clue how it is everywhere else.

dazc
0 replies
4h43m

A friend of mine who was a barber asked me how much it would cost to build him a website and I said I would do him a basic 3 page site for free, although he would need to let me know if his opening hours had changed or he needed to tell customers he was on holiday, etc.

He said, with no irony whatsoever, that he didn't realise it would be so complicated and decided not to take me up on the offer. I suspect this attitude is not unusual with one-man businesses that have survived just fine thus far?

dijit
7 replies
10h40m

"it's better for the company that I don't try, my time is expensive and any minute not spent on a feature is a waste of my salary" - is a common justification that I hear all too often.

autoexec
6 replies
10h23m

"It's better for the company that I don't try" seems like a convenient take for a dev without the skills to have. I'd argue that performance is a feature, and if someone can't deliver it their salary is being wasted already.

TheAceOfHearts
5 replies
9h15m

Performance is a feature and management often doesn't care to optimize for it. If the market valued performance more then we would probably see competitive services which optimize for performance, but we generally don't. I'm sure there's plenty of developers that could deliver improved performance, it's just a matter of tradeoffs.

Maybe the people who care this much about performance should start competing services or a consulting firm which optimizes for that. Better yet, they could devote their efforts to helping create educational content and improved frameworks or tooling which yields more performant apps.

zelphirkalt
2 replies
8h43m

One issue is that caring about performance is often not visible. How does management account for or measure how annoyed people get visiting their bloated websites? How many people do not know any better, or know how fast and snappy a non-bloated website can be, because they spend all their time on Instagram, FB, and co? Even if a company does measure it somehow via a truly well-executed A/B test, other explanations than performance might be reached for to explain why a user left the website.

orangevelcro
1 replies
3h51m

Isn't that what the tracking stuff is supposed to track? Measure things like how 'annoyed' people get by bounce rate and whatever other relevant metrics.

zelphirkalt
0 replies
1h30m

Yes, but how do you determine the actual reason for a bounce? The test would need to have all the same starting conditions and then let some users have a better-performing version, or something like that. But at that point one would probably roll out the better-performing version anyway. Maybe artificially worsen the performance and observe how the metrics change. And then it is questionable whether the same amount by which performance decreased would have the same effect in reverse if the performance increased by that amount. Maybe up to a certain point? In general probably not. In general it is difficult, because changing things to perform better is usually accompanied by visual and functionality changes as well.

ryandrake
1 replies
8h18m

Performance is not a feature. Decisions about performance are part of every line of code we write. Some developers make good decisions and do their job right, many others half-ass it and we end up with the crap that ships in most places today.

This “blame the managers” attitude denies the agency all developers have to do our jobs competently or not. The manager probably doesn’t ultimately care about source control or code review either, but we use them because we’re professionals and we aim to do our jobs right. Maybe a better example is security: software is secure because of developers who do their jobs right, which has nothing to do with whether or not the manager cares about security.

LegibleCrimson
0 replies
4h18m

I can agree to a point, but it's not very scalable. Imagine if the safety of every bridge and building came down to each construction worker caring on an individual level. At some point, there need to be processes that ensure success, not just individual workers caring enough.

Secure software happens because of a culture of building secure software, or processes and requirements. NASA doesn't depend on individual developers "just doing the right thing", they have strict standards.

bezbac
7 replies
10h36m

That's not fair. Sure, if there's an experienced dev who _values_ efficiency on the team, who pushes for the site to be more efficient or builds it more efficiently to begin with, the page would be better off. But it's mostly about incentives. If management doesn't care, they will likely not react well to programmers spending time making the site more efficient instead of spending half the time to just get it running and then crunching through their backlog.

zilti
6 replies
10h4m

It usually requires less time, not more, to create a slim and efficient page.

rokkamokka
3 replies
9h9m

Definitely not true in my experience, and I would think if it were true, most pages would be "slim and efficient". Where is the business value in doing anything else at that point?

zelphirkalt
0 replies
8h52m

The GP might not always be true, but no, we would not have slim and efficient sites, because of the push web developers get to include all kinds of unnecessary tracking and general bloat on websites.

kjkjadksj
0 replies
1h26m

Static html sites are so easy. You can write one by hand in five minutes and it can run on a toaster. There’s more business value in ads and dark patterns.

Jensson
0 replies
8h9m

Where is the business value in doing anything else at that point?

You think developers prioritize business value? That isn't how employment works.

rizky05
0 replies
9h20m

but can it do feature x that generates more $$$ ?

danlugo92
0 replies
5h45m

True but only if you know how to. Also slim will 99% of the time be less code too.

p_l
2 replies
4h15m

People who have never worked on one of these bloated sites often forget the third parties involved in the bloating.

Marketing team mandating inclusion of at least one "Tag Manager" (if they are especially bad, there will be multiple).

A "Tag Manager" is a piece of JS that is installed together with an API key in the site... and then it downloads whatever extra JS that was configured for given API key. The actual site developer often has absolutely no control over it (the closest I got once was PoC-of-PoC where we tried to put even inclusion of tag manager behind an actually GDPR-compliant consent screen).

Marketing team gets to add "tags" (read, tracking, chat overlays, subscription naggers, whatever), sometimes with extra rules (that also take time processing!), all without involving the development team behind the site.
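For anyone who has never seen one, the whole mechanism boils down to a few lines of loader script. A minimal sketch of the pattern (hypothetical domain and key, not any real vendor's snippet):

    // Sketch of a tag-manager bootstrap: a loader keyed to an account ID.
    // Whatever marketing configures for that key gets downloaded and executed
    // on every page view, without ever passing through the dev team's repo.
    (function (apiKey) {
      var s = document.createElement('script');
      s.async = true;
      s.src = 'https://tags.example.com/loader.js?key=' + encodeURIComponent(apiKey);
      document.head.appendChild(s);
    })('ACCOUNT-KEY-123');

Once that loader is in, the payload can grow arbitrarily without another deploy, which is exactly why the development team never sees the cost.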

andai
1 replies
4h2m

So the marketing department is responsible for the web being such a sad and painful experience?

p_l
0 replies
2h33m

Not the only department.

But consider how much of the bloated JS tends to be from external parties, and pretty much everything that isn't a CDN-ed framework will be stuff either required by marketing, or flat out added through the use of a tag manager.

arp242
1 replies
5h51m

About 5 years ago I applied for a job at a company that enabled people in rural Africa to more easily sell the goods they produced (farmers, basket weavers, what-have-you).

If you mainly target people in the US or EU, there's perhaps something to be said for not optimizing too aggressively for low-end hardware and flaky low-bandwidth high-latency connections. But if you're targetting rural Africa fairly aggressive optimisation seems like a no-brainer, right?

Their homepage loaded this 2M gazillion by gazillion pixel image downscaled to 500 by 1000 pixels with CSS. It got worse from there. I don't recall the exact JS payload size, but it was multi-MB – everything was extremely frontend-heavy, which was double ridiculous because it was mostly a "classic" template-driven backend app from what I could see.

I still applied because I liked the concept but the tech was just horrible. I don't really know why it was like this as I never got to the first interview stage, but it's hard to imagine it's anything other than western European developers not quite realizing what they're doing in this regard.

carlosjobim
0 replies
4h42m

The website was never intended for people in rural Africa, it was intended for donors and governments in Western countries, so that the company could get juicy grant money and pay themselves to pretend to empower African farmers.

titzer
0 replies
1h32m

Is it really hard to believe that the solutions on offer are usually giant piles of steaming crap that do way more than they should but are nevertheless easy to get set up and get going? When programming ecosystems get big, they accumulate a ton of ways of doing things and people keep trying to put a layer on top on top of a layer on top of a layer (like floors in an old house). It doesn't matter if a thing underneath is O(n); someone will put another O(n) thing on top of that that represents all its data as strings and uses regex or horribly-inefficient JSON or something. Very few people ever think things from the ground up.

tdudhhu
0 replies
9h30m

You are right.

I browse the web on Firefox with uBlock Origin, 3rd party cookies disabled, and so on.

So I am missing the bloat most people talk about.

But still apps like Clickup are really slow. It's just bad software.

mouzogu
0 replies
5h42m

We need some HTML tag or attribute for slow network detection.

Instead of this nasty JS feature detection that 99% of the time no one bothers to do.

prefers-reduced-motion was a good start, although it's rarely respected.
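For reference, the detection you can do in JS today looks roughly like this; a sketch using the non-standard Network Information API (Chromium-only) plus the Save-Data hint, so it can only ever be a best-effort guess:

    // Sketch: best-effort slow-network detection with current browser APIs.
    // navigator.connection is non-standard and absent in Safari and Firefox.
    var conn = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
    var looksSlow = !!conn && (conn.saveData || /(^|-)2g$/.test(conn.effectiveType || ''));
    if (looksSlow) {
      // e.g. swap the hero video for a poster image, skip heavy embeds
      document.documentElement.classList.add('lite');
    }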

ksec
0 replies
8h32m

Probably a bit of both.

Client Side Rendering (regardless of framework) is hip and gets more media attention, sometimes backed by VC money. It is new, it is complex, and it fits the hype cycle, software engineers' attraction to complexity, and the Resume Driven Development model. And just as the article stated, it is supposed to bring so many good things to the table in its ideology.

Since the majority of software developers want to work on it, so their resume gets a tick and they can jump to another job later, management now faces lots of applicants for these technologies and zero for old and boring tech.

great web programmers who don’t know much (and appear to not WANT to know much) about efficiency.

Remember when Firefox OS developers thought the $35 smartphone would one day take over the world, and that CPUs would get so much faster due to Moore's law that performance would soon become irrelevant.

I mean, that is like Jeff hating Qualcomm without actually understanding anything about the mobile SoC business, the CPUs behind it, or how ARM's IP works. A lot of people don't want to know "why" either.

A more accurate description, and also a general observation: most software developers, and especially those in web development, have very little understanding of hardware or low-level software engineering. Cloud computing abstracts this away even further.

keybored
0 replies
5h9m

The whole point of hierarchical organizations is that those higher up have more influence than those at the lower tiers. Cutting the blame in half doesn’t make sense.

atoav
0 replies
5h58m

I have worked with such people. When I asked them specifics about the "result" (HTML, CSS, JS) they looked at me as if I was speaking another language. They came from the JavaScript framework world, and there they didn't really think all that much about it.

My philosophy is almost completely different: I ask myself what the minimum maintainable code is that would produce the equivalent of a well hand-coded HTML+CSS+JS website. Usually the result is orders of magnitude smaller.

One of those people asked me how I did realtime list filtering on 1000 table rows and still had it load fast and perform well on mobile. While that isn't really a feat, all I did was deliver the whole data on the first request and then hide non-filtered data dynamically. That means the webserver didn't have to do anything wild, other than deliver the same cached data to everybody who filters that list, and because this was the only JavaScript going on on that site it was (to them) unusually performant. If you look at a comparable table row from their solution (some framework, I didn't have much insight into it) the resulting HTML was 80% boilerplate that they didn't even use.
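The whole thing is essentially this; a sketch of the idea with made-up element ids, not the actual code:

    // Sketch: the server sends all rows once; filtering only toggles visibility.
    var input = document.getElementById('filter');            // made-up ids
    var rows = document.querySelectorAll('#list tbody tr');
    input.addEventListener('input', function () {
      var q = input.value.trim().toLowerCase();
      rows.forEach(function (row) {
        row.hidden = q !== '' && !row.textContent.toLowerCase().includes(q);
      });
    });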

Web development is too entrenched, and many have wandered too far from the essentials of web technology.

_fat_santa
0 replies
4h16m

Having worked with the "big scary companies", I can say they are 100% to blame. It doesn't start with the developers but rather the budget. Unless folks at the top are tech savvy and/or have an engineering background, they typically only budget for new features and either under-budget or don't budget for maintenance and tech debt removal. And when they do budget for maintenance, it's handled almost exclusively by "maintenance teams" that are offshore and cheaper.

So you have a feature team that works on a feature for 6 months, does a 1 hour "KT Session" with the offshore maintenance team and hands them the code. The offshore team has some information on the feature but not enough to really manage existing tech debt, just to keep the lights on. And on top of this they know they are the lowest on the totem pole and don't want to get fired, so they don't go out of their way to try and fix any existing code or optimize it; again, just enough to keep the thing working.

Then this cycle repeats 100-1000x within an org and pretty soon you have a scenario where the frontend has 2M lines of code when it really should be 250k max. A new feature team might come on with the brightest engineers and the best of intentions, but now they have to work within the box that was setup for them. Say they have a number of elements that don't line up with their feature mockups. The mockups might be incorrect, there might have been an upgrade to the UI kit, or the existing UI kit might need refactoring. Problem is none of that is budgeted for so the team is told to just copy the components and modify them for their own use. And of course on handoff to maintenance team, the new team does not want to mess with the existing feature work so they leave it as is. Management is non-technical so they don't know the difference, and you end up with 50+ components all called "Button" in your codebase from years and years of teams constantly copy/pasting to accommodate their new feature.

ryukoposting
29 replies
21h29m

I only recently moved from a 6-year old LG flagship phone to a shiny new Galaxy, and the performance difference is staggering. It shouldn't be - that was a very high-end phone at release, it's not that old, and it still works like new. I know it's not just my phone, because the Galaxy S9s I use to test code have the same struggles.

I would like to have seen Amazon in the tests. IME Amazon's website is among the absolute worst of the worst on mobile devices more than ~4 years old. Amazon was the only site I accessed regularly that bordered on unusable, even with relatively recent high-end mobile hardware.

eric__cartman
16 replies
21h11m

I have noticed with two 7 year old Snapdragon 835 devices that RAM and running a recent Android version makes a huge difference.

I daily drive a OnePlus 5 running Android 14 through LineageOS and the user experience for non-gaming tasks is perfectly adequate. This phone has 6GB of ram, so it's still on par with most mid-range phones nowadays. My only gripe is that I had to replace the battery and disassembling phones is a pain.

Meanwhile a Galaxy S8 with the same SoC, 4GB of memory and stock Android 9 with Samsung's modifications chugs like there's no tomorrow.

I can understand that having two more gigabytes of memory can make a difference but there is a night and day difference between the phones. Perhaps Android 14 has way better memory management than Android 9? Or Samsung's slow and bloated software is hampering this device?

Either way it's irritating to see that many companies don't test on old/low-end devices. Most people in the world aren't running modern flagships, especially if they target a world-wide audience.

hinkley
15 replies
20h24m

This is what I miss from the removal of serviceable components on MacBooks. Was a time I would buy the fastest processor and just okay memory and disk, then the first time I got a twinge of jealousy about the new machines, buy the most Corsair memory that they would guarantee would work, and a bigger faster drive. Boom, another 18 months of useful lifetime.

lotsofpulp
13 replies
19h58m

Is the total useful lifetime more than MacBooks with non serviceable components? I see people around me easily using Airs for 5+ years.

kome
7 replies
19h32m

My MacBook Air (11-inch, Early 2014) is my only computer. I still don't feel like changing it so far...

Baguette5242
3 replies
19h11m

Amateur… I am using a 2009 15" MacBook Pro Unibody, with the SuperDrive swapped for an SSD, another main SSD, and RAM boosted to 8GB. OpenCore Legacy to update to a relatively recent version of macOS. The only thing that is so annoying is the webcam that doesn't work anymore, and a USB port is dead also.

So sad these kinds of shenanigans are not possible anymore.

sockbot
1 replies
14h51m

I have one of these with a MacBook Pro 6,2 that I did the same upgrades to. However I finally decided to retire it when the 2nd replacement battery swelled and Chrome stopped supporting OSX 13.

It didn't look like a good candidate for OpenCore Legacy because of the dual video cards, but it feels so gross recycling a perfectly working computer.

walteweiss
0 replies
59m

I run the one from 2011 (16 GB of RAM though) and it runs highly minimalistic Arch Linux. So far so good.

hagbard_c
0 replies
18h34m

Pfah, showoff. My 2005 Thinkpad T42p crawls circles around that thing - slowly. Maxed out to 2GB, Intel 120GB SSD with a PATA->SATA adapter (just fits if you remove some useless bits from the lid) and - what keeps this machine around - a glorious keyboard and 1600x1200 display. It even gets several hours on the battery so what more could you want?

zer00eyz
0 replies
18h31m

My Air isn't that old, and I'm eyeing a new one...

I find that a lot of my work is "remote" at this point. I'm doing most things on servers, VMs, and containers on other boxes. The few apps that I do run locally are suffering (the browser being the big offender).

Is most of what you're doing remote? Do you have a decent amount of ram in that air?

knowaveragejoe
0 replies
12h1m

The main thing that convinced me to get on the ARM macs is the heat and battery life(which kind of go together). It's never uncomfortable on the lap.

genewitch
0 replies
19h12m

I have an Air from 2011 or 2012 that is out of storage with just the OS installed. I can't update or install any other software because the most recent update installed on it capped out the storage. Low-end Windows laptops (the $150-$300 at Walmart type) have this same issue: 32GB of storage and Windows takes 80% of the space, and you can no longer fit a Windows update on it.

I still have the Air with whatever the macOS is, but as soon as I have a minute I'm going to try and get Linux or BSD on it. I'm still sore at how little use I got out of that machine - and I got it "open box" "scratch and dent", so it was around $500 with tax. I got triple the usage out of a 2009ish eeePC (netbook).

stavros
3 replies
19h7m

Yes, but that's the slow-boiled frog syndrome. I use my computers for years as well, and whenever I get a new one I think "wow, why didn't I switch sooner, this is so much snappier".

ghaff
2 replies
18h3m

As a counterpoint, I have a 2015 MacBook, a 2015 iMac, and a recent Apple Silicon MacBook. Of course I do Photoshop, Lightroom, Generative AI, etc. on the Apple Silicon system. But I basically don't care which system I browse the web with and, in fact, the iMac is my usual for video calls and a great deal of my web document creation and the like.

I suspect that people who have somewhat older Macs (obviously there's some limit) who find their web browsing intolerably slow probably have something else going on with either their install or their network.

resource_waste
1 replies
7h2m

I do Generative AI,

This makes me call into question literally everything else in your post.

You might be able to do CPU-based generation for a few trials for fun, but you aren't running LLMs on CPU on a daily basis.

ghaff
0 replies
4h25m

I do some local image generation now and then (mostly using Photoshop). Are you happy now? My only point was that any CPU/GPU-intensive applications I run (and really most local applications) I do on my newish computer. But most stuff I run is in a browser.

The relatively little LLM use I do is in a browser and it doesn't matter which computer I'm doing it on.

BolexNOLA
0 replies
19h8m

I’ve been a Mac user since 2003 or so and I can confidently say my machines last 6-7 years as daily drivers then sunset over 2-3 years when I get a new computer. I always go tower, laptop, tower, laptop. They have a nice overlap for a few years that serves me well.

dijit
0 replies
6h55m

Controversial counterpoint: Having standardised hardware causes optimisation.

What do I mean?

In game development, people often argue that game consoles hold back PC games. This is true to a point, because more time is spent optimising at the cost of features, but also optimising for consoles means PC players are reaping the benefits of a baseline decent performance even on low end hardware.

Right now I am developing a game for PC and my dev team are happy to set system requirements at an 11th generation i7 and a 40-series (4070 or higher) graphics card. Obviously that makes our target demographic very narrow but from their perspective the game runs: so why would I be upset?

For over a decade memory was so cheap that most people ended up maxing out their systems; the result is that every program is Electron.

For the last 10 years memory started to be constrained, and suddenly a lot of Electron became less shitty (it's still shitty) and memory requirements were something that you could tell at least some companies started working to reduce (or at least not increase).

Now we get faster CPUs, the constraint is gone, and since the M-series chips came out I am certain that software that used to be useful on Intel Macs is becoming slower and slower, especially the Electron stuff, which seems to perform especially well on M-chips.

zuhsetaqi
5 replies
20h36m

Interesting that you have such problems with Amazon. I'm using an iPhone XR (5.5 years old) and don't have any problems using Amazon in the browser (Safari). And I'm on the latest iOS (17.4).

callalex
1 replies
11h20m

iPhone browser performance has run circles around android browser performance on equivalent hardware for like the last 10 years or so. It’s really the secret sauce of iOS.

walteweiss
0 replies
52m

Yeah, by the way browsing on iPhone 6S Plus is quite okay, compared to even MacBook Pro (2011, but that’s a laptop!), I would say.

ww520
0 replies
10h6m

iPhone has exceptional long lasting performance. I have a 5 year old iPhone and it still runs smooth like silk.

ryukoposting
0 replies
16h0m

OS version may have an impact. The Galaxy S9s both run Android 9. That LG phone is stuck on Android 8 because AT&T sucks and never got around to updating their shitware-riddled Android fork. If they had, I wouldn't have needed to spend $800 on a new phone. I'm not bitter about it at all, though.

EVa5I7bHFq9mnYK
1 replies
19h19m

I recently visited Brazil and had my shiny new phone snatched from my hand ... now with my spare 4-year-old phone, I frankly don't see any difference. But I use Firefox with all the ad blockers, maybe that helps.

ryukoposting
0 replies
15h57m

I run Firefox with uBO and NoScript. Based on the other replies, OS version may play a role.

Accacin
1 replies
20h15m

Did you try disabling JavaScript on Amazon? It actually doesn't function too badly. I know, I know, you shouldn't need to do it and I agree.

ryukoposting
0 replies
15h59m

I fiddled with NoScript but I must have done something wrong because I broke the site entirely.

seam_carver
0 replies
20h20m

I have no issues with Amazon on my iPhone 8 running latest iOS 16

MiddleEndian
0 replies
17h22m

I have a Palm Phone. I generally consider web browsing to be almost impossible on it at this point lol

mastazi
26 replies
17h56m

While reviews note that you can run PUBG and other 3D games with decent performance on a Tecno Spark 8C, this doesn't mean that the device is fast enough to read posts on modern text-centric social media platforms or modern text-centric web forums. While 40fps is achievable in PUBG, we can easily see less than 0.4fps when scrolling on these sites.

Remember this the next time marketing asks the frontend team to implement that new tracking script and everyone assumes that users won't even be able to tell the difference.
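If you want to see numbers like that on your own device, one rough way (not the article's methodology) is to time the gaps between animation frames while scrolling; a 2.5 second gap is that 0.4fps:

    // Rough sketch: report the longest gap between animation frames.
    // Paste into the devtools console, scroll around, and watch the log.
    var last = performance.now();
    var worst = 0;
    function tick(now) {
      worst = Math.max(worst, now - last);
      last = now;
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);
    setInterval(function () {
      console.log('worst frame so far: ' + worst.toFixed(0) + ' ms');
    }, 2000);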

spintin
4 replies
17h35m

PUBG is now a very special beast: it's CPU bound, so we are unlikely to ever see an "AAA" game with anything beyond its complexity for eternity. You can run it on a 1030 GPU at 60 FPS.

willcipriano
1 replies
14h20m

Somebody has to be working on a Simcity or Civilization MMO.

spintin
0 replies
7h11m

I wish! The truth is server and client programmers rarely get along, so persistent MMOs with a lot of moving parts are only going to happen once one developer is schizo enough to do both well. AAA will never be able to do it.

duck2
1 replies
17h17m

It's not like websites are GPU bound

spintin
0 replies
16h48m

No but most games are. Thus PUBG is an outlier.

steve_taylor
3 replies
17h1m

These days they coerce the dev team into implementing a tag manager so they can add their filthy trackers without asking the dev team.

_heimdall
2 replies
15h10m

The "they" here can't really coerce the dev team unless the dev team is willing to comply. Refusing to implement an unethical feature is always an option, and given that we're often considered engineers it is well within our right to deem something unsafe or against best practices.

Escapado
1 replies
14h58m

I hate all that tracking and marketing bs as much as the next guy, but if the marketing team is the main stakeholder and is responsible for the budget that won't work. I also might be a bit biased as a freelancer, but every team I have worked in so far had other freelancers on it, and if we strongly recommended against a practice but the client insisted, then we basically had the choice to either abandon the project (and therefore our current source of income) or simply do what they say. I would love to be in a position where refusing is an option that would not cost me my gig.

_heimdall
0 replies
14h6m

This thread really has no purpose if we don't see it as enough of a problem to stand against. I really don't mean that to sound like I'm on a high horse (I'm sure it still sounds that way). There's nothing wrong with being okay with the trade offs, but we don't get to implement these features and complain about how bad they are.

dogtierstatus
3 replies
15h48m

One time long ago, the e-commerce company I worked for decided to add TikTok analytics to the front-end. The dev team added the changes but were worried it might impact performance and UX. As a solution we were told to run the performance tests to check it.

The performance tests were created to mimic user behaviour but only involved company APIs, not third-party requests. No one at the top level cared about this bit of information. We ran this performance test and saw that the response times were almost the same, so it was time to pat ourselves on the back and move on ...

_heimdall
2 replies
15h12m

Did no one call bullshit on the test before running it? Personally I'd just flat out refuse to run the test, and instead design a proper test that compares performance with the third-party scripts enabled and disabled.

Management and product owners should understand how these things work, and shouldn't ask for bogus data when they do. But teams implementing the changes should just flat out refuse when they know the request isn't reasonable.

SadCordDrone
1 replies
13h50m

Sir, in most companies if you suggest something technical without having equivalent political power, at best, no one will listen to you. At worst you will create political enemies.

Probably there was an SDE-2 or SDE-3 who called bullshit on it and got ignored.

_heimdall
0 replies
6h22m

You call bullshit on it by either refusing to run the test, or better and more helpfully by running a test that answers the performance question.

I've seen these kinds of requests plenty of ways. Sometimes those asking include a design or specs because they honestly thought that was the right way to do it, other times they are knowingly asking for (in this case) a useless test to check a box. In either case, IMO the right response is to ask questions to clarify the goals and build to that, changing the provided design or specs if necessary.

I've had to play this out dozens of times over the years and never earned enemies from it; at one point I won over the PM leader that everyone on the dev team warned me about. It's all about tact and approach: assume everyone is on the up and up and just ask good questions to clarify the goals. It's hard to get mad at that unless it's done in a condescending or argumentative way.

paledot
2 replies
17h35m

(But don't under any circumstances break the four other trackers already running on the site.)

rmbyrro
1 replies
16h30m

You mean the four new ones they added last week alone, right?

JJMcJ
0 replies
16h13m

Newspaper sites are notorious for this.

Aerroon
2 replies
15h38m

I imagine it has more to do with the monstrous website design than the tracking scripts. New reddit vs old reddit or desktop reddit vs mobile Reddit shouldn't be that different in terms of tracking. But the newer ones run like ass.

sokz
1 replies
14h46m

Reddit doesn't even run satisfactorily on my gaming laptop. I can run AAA games but a website is noticeably slow.

pooper
0 replies
14h15m

Just curious, are you using old.reddit.com?

dudul
1 replies
16h22m

Just throw a ticket in jira for these stupid devs to "make it faster".

champtar
0 replies
14h16m

To make Jira faster ?

dsr_
1 replies
17h26m

To be fair, it is usually difficult to tell the difference between 243 tracking scripts and 244.

Ruq
0 replies
16h1m

Modern webdev is fugged

makeitdouble
0 replies
16h57m

The sad part being that traditional marketing cares very little about these users outside of the aggregation parts.

When the goal is to make people pay, a base strategy is to target users who are already spending money. So "fast enough on a current [device the sales team is using]" becomes the baseline, and optimizing for older/weaker/cheaper environments isn't a proposition that will convince.

Except when you're ad supported. Then the balance will be a bit more in the middle.

emodendroket
0 replies
17h26m

Remember this the next time marketing asks the frontend team to implement that new tracking script and everyone assumes that users won't even be able to tell the difference.

I mean, maybe they can but the business doesn't care. If you polled "users" of cable television I doubt anyone would say they prefer the experience of commercials.

Gibbon1
0 replies
16h7m

Firing up my neoliberal brain.

We should just tax the bandwidth and frontend/backend resource use of ads and spying on users.

Karrot_Kream
21 replies
21h32m

I'm normally a fan of Dan Luu's posts but I felt this one missed the mark. The LCP/CPU table is a good one, but from there the article turns into a bit of armchair psychology. From some random comments coming from Discourse's founder, readers are asked to build up an idea of what attitudes software engineers supposedly have. Even Knuth gets dragged into the mud based on comments he made about single vs multi-core performance and comments about the Itanium (which is a long standing point of academic contention.)

This article just felt too soft, too couched in internet fights, to really stand up.

troupo
12 replies
21h22m

readers are asked to build up an idea of what attitudes software engineers supposedly have.

But they do, don't they. Discourse's founder's words are just very illustrative. Have you used the web recently? I have. It's bloated beyond any imagination to the point that Google now says that 2.4 seconds to Largest Contentful Paint is fast now: https://blog.chromium.org/2020/05/the-science-behind-web-vit... (this is from 4 years ago, it's probably worse now).

You don't have to go far to see it, from Youtube loading 2.5 megabytes of CSS on desktop to the founder of Vercel boasting about super fast sites that take 20 seconds to load the moment you throttle the connection just a tiny bit: https://x.com/dmitriid/status/1735338533303259571

Karrot_Kream
10 replies
21h15m

You're making the same mistake the post did. It depends on the reader already having sympathy for the idea that bloat is bad in order to make its case. I can read nerd site comments all day that lament bloat. For an article to stand on its own on this point it has to make the case to people who don't already believe this.

Dan's articles have usually been very good at that. The keyboard latency one for example makes few assumptions and mostly relies on data to tell its story. My point is that this article is different. It's an elevated rant. It relies on an audience that already agrees to land its point, hence my criticism that it's too couched in internet fights.

liveoneggs
7 replies
20h0m

State your case that bloat is good. I currently have a client who will do literally anything except delete a single javascript library so I'd like to understand them better.

jiggawatts
2 replies
19h30m

The latest version of Excel loads faster on my laptop than most websites do. I’ve timed this.

I can load the entire MS Office suite and open a Visual Studio 2022 project in less time than it takes to open a blank Jira web form.

What’s your point?

skydhash
0 replies
19h9m

Due to the prevalence of native apps in the macOS world, the differences are often stark. I use Things and Bear, and they're fast; then I try to load Gmail (a dump account, so it's not in Mail) and it's so slow. Youtube too. Fastmail, in comparison, loads like it's on localhost.

Block JavaScript and the number of sites that break is ridiculous, some you would not expect (websites, not full-blown interactive apps).

jodrellblank
0 replies
19h10m

My point is to reply to "State your case that bloat is good" with a famous blog stating a case that bloat is good. Bloat makes the company more money by allowing them to develop and ship faster, bloat makes the company more money by being able to offer more features to more customers (including the advertisers and marketers and etc. side of things), and - well, read the article.

I, too, dislike slow websites and web apps, but I don't think they are some mystery - natural selection isn't selecting for idiot developers, market selection is selecting for tickbox features and with first-mover-advantage they are selecting against "fast but not available for another year and has fewer features and cost more to develop".

deathanatos
1 replies
18h15m

That was 2001.

Core frequencies aren't going up at 2001 rates anymore. (And although Moore's law has continued, it is only just. Core freqs have all but topped out, it feels like.) Memory prices seem to have stalled, and even non-volatile storage feels like it's stalled.

With my computer in 1998, compared to its predecessor, storage was going up in size at ~43% YoY. It was an amazing time to be alive; the 128 MiB thumbdrive I bought the next decade is laughable now, but it was an upgrade from a 1.44 "MB" diskette. Today, I'm not sure I'd put more storage in a new machine than what I put in a 2011 build. E.g., 1 TiB seems to be ~$50; cheaper, yet. Using the late 90s growth rates, it should be 17 TiB… so even though it's about half the price, we can see we've fallen off the curve.

jodrellblank
0 replies
15h45m

"And although Moore's law has continued, it is only just."

https://en.wikipedia.org/wiki/Transistor_count has a table of transistor count over time. 2001 was Intel Pentium III with 45 million transistors and nVidia NV2A GPU with 60 million. 2023 has Apple M2 Ultra with 134 billion transistors and AMD Instinct CPU with 146 billion, and AMD Aqua Vanjaram CDNA3 GPU with 153 billion. That's some ~3,000x more, about a doubling every two years.

Core frequencies aren't going up, but the amount of work per clock cycle is - SIMD instructions are up, memory access and peripheral access bandwidth is up, cache sizes are up, branch predictors are better, multi-core is better.

"E.g., 1 TiB seems to be ~$50"

You can get a 12TB HDD from NewEgg for $99.99, Joel's blog said $0.0071 per megabyte and this is $0.0000083 per megabyte, nearly a thousand times cheaper in 23 years. Even after switching to more expensive SSDs 1TB for $50 is $0.00005 per megabyte, a hundred times cheaper than Joel mentioned - and that switch to SSDs likely reduced the investment in HDD tech. And as you say "I'm not sure I'd put more storage in a new machine than what I put in a 2011 build" few people need more storage unless they are video or gaming enthusiasts, or companies.

liveoneggs
0 replies
18h59m

The web doesn't scale like desktops - not even close.

Furthermore - this philosophy has made Windows worse and less responsive in all cases.

I understand that this "pays the bills" but my charge is (currently) to make things faster so I am against slowness.

troupo
0 replies
10h28m

The article you diss has actual benchmarks in it. The article I linked has actual numbers in it.

At this point you're willingly ignoring it because you dislike that this is additionally illustrated by quotes from specific people.

DinaCoder98
0 replies
18h45m

The reason we have bloat is it's easier to satisfy stakeholders if you don't give a damn. There's really no reason to discuss this at all once you realize this.

But of course, ranting and reading rants is satisfying in its own right. What's the problem?

bitwize
0 replies
14h3m

Usually the directive "don't worry about bloat" comes from above, or outside, the software engineering team. I'm a software engineer and I would love to fix performance problems so that everything runs Amiga smooth. But that takes time and effort to find, analyze, and fix performance issues... and once The Business sees something in more or less working order, implementing the next feature takes priority over removing bloat. "Premature optimization is the root of all evil" and that. I know that's not what Knuth meant, he meant don't be penny-wise and pound-foolish when you do optimize. But much like "GO TO considered harmful", something approaching the stupidest possible interpretation of the maxim has become the canonical interpretation.

And that's before getting into when The Business wants that sweet, sweet analytics data, or those sweet, sweet ad dollars.

yawaramin
2 replies
18h3m

...which is a long standing point of academic contention.

What contention? If anything, Luu is being rather generous–Knuth was just whining that the decades-long free lunch program was being cancelled.

moonchild
1 replies
13h54m

VLIW (Itanium is a VLIW arch) is what's contentious, not multiprocessing.

yawaramin
0 replies
13h4m

OK I missed that. Thanks. But it looks like Itanium was only tangential to this discussion, in that Knuth thinks multicore programming may be an even worse mistake than Itanium.

goalieca
2 replies
19h12m

what attitudes software engineers supposedly have

I don’t think I’ve ever seen a company take performance seriously. No one scoffs when a simple API service for frontend has 500ms response time! How many engineers even know or care how much their cloud bill is?

nolist_policy
1 replies
18h43m

I'm sure Google invests a lot of resources in making Google Search load fast. AFAIK they serve a specialized version for each user agent out there.

maigret
0 replies
11h10m

One of the best counterexamples to the rule. I tried running Lighthouse on a few Google services that are less prominent and had a few good laughs.

torginus
0 replies
6h14m

Knuth is kinda right imo - parallelism as we have it now is unused by 90% of software outside of specialist use cases and running the same single-threaded program on multiple data items.

Programming languages and hardware both offer poor support for fine-grained parallelism and it's very hard to speed up classical software using parallel approaches.

ksec
0 replies
7h58m

I thought he summarised it pretty well. Jeff Atwood was only picked as an example. But there are LOTS of high-profile web development thought leaders with huge followings who constantly pump out similar views. And a lot of their followers just blindly accept what they are told.

andy99
10 replies
21h35m

Nobody cares about people with older devices. We've shifted to a mode where companies tell their customers what they have to do, and if they don't fit the mold they are dropped. It's more profitable that way - you scale only revenue and don't have to worry about accessibility or customer service or any edge cases. That's what big tech has gotten for us.

dexwiz
8 replies
21h22m

You’re getting downvoted but I think despite the tone you are correct. 10 years ago corporate guidance on web dev was backwards compatibility going back several versions. Now it’s hardly any concern for anything more than 6 months old.

More than anything I think it's because corporate IT has had to modernize due to security. Security now wants you to update constantly instead of running old vetted software. You also cannot demand users use an old version of a browser that still supports some old plugin. And as a vendor it's not profitable to support people who maintain that mindset.

Also “update to the latest version” is the new “turn it off and back on again,” when it comes to basic IT help.

perardi
4 replies
20h50m

Who is this mythical end-user with an old browser? Because they don’t show up in browser usage statistics.

https://gs.statcounter.com/browser-version-market-share

Chrome is evergreen, even on Android. Safari, after a bit of a fallow period, is updated fairly aggressively, and though it’s still coupled with OS updates, it’s no longer married to the annual x.0 releases.

Mind you, I still believe, and practice, you should write semantic HTML with progressive enhancement. But at the same time, I absolutely do not think you should go out of your way to test for some ancient version of Safari running on a first-generation iPad Pro—use basic webdev best practices, and don’t spend time worrying that container queries aren’t going to work for that sliver of the market.

zozbot234
0 replies
18h47m

Browsers may be self-updating but hardware is not. You can't just download more RAM or a faster CPU.

skydhash
0 replies
18h54m

Most people auto update their software or they don’t at all. What they don’t do is buy a brand new laptop as soon as it’s out. And the one they have is a cheap one from HP or Dell. To know their pain, try to use one of these.

hgs3
0 replies
18h1m

I've got an iPad Air 2 running iOS 15.8. My user agent will surely tell you I'm only one or two major versions behind the "latest and greatest" but the hardware itself is a different story. On this device modern GitHub consistently crashes when displaying more than a few hundred lines of code. I've lost the ability to use a perfectly functioning device due to bloatware.

dexwiz
0 replies
20h16m

Exactly. The landscape has changed because those old browser users have been forced to update.

gjsman-1000
1 replies
21h12m

Part of it was that users were terrible at updating browsers. You needed to support Internet Explorer 6, or cut off a third of your customers. It sucked.

Now every browser gets updates, automatically and aggressively. The only real outlier is Safari, but even that updates way quicker than older browsers used to.

As a result, who needs backward compatibility?

BeFlatXIII
0 replies
4h59m

Without all the compatibility shims, it means that you can drop code bloat sooner when the JS gets replaced with a native browser capability.

Gigachad
0 replies
21h9m

Because the people with money who are buying your products are all running the latest version of iOS. The ones on a 6 year old Android version are not spending anything, therefore it isn't worth investing money in making sure it works for them.

marcosdumay
0 replies
21h0m

Clues that a market is not competitive...

What most impresses me is that this happens in many markets that should be competitive by any sane rationale. Like group buying or hotel booking. Yet, they also do that kind of shit, and people still have nowhere to go.

The world economy became integrated and incredibly rigid.

Ruq
10 replies
16h2m

Related: Too much of technology today doesn't pay attention to, or even care about, the less technologically adept, either.

Smartphones in my opinion are a major example of this. I can't tell you the number of people I've met who barely know, or don't know at all, how to use their devices. It's all black magic to them.

The largest problem is the over-dependence on the use of "Gesture Navigation" which is invisible and thus non-existent to them. Sure, they might figure out the gesture bar on an iPhone, but they have no conception of the notification/control center.

It's not that these people are dumb either, many of them could probably run circles around me in other fields, but when it comes to tech, it's not for a lack of trying, it's a lack of an intuitive interface.

crabmusket
7 replies
15h49m

It appears to me, as an outsider, that interfaces are designed with a "one size fits all" approach, at least at the prestige end of town. Instead of allowing the user to choose design and interaction that works for them, the designer (or product owner) acts as if they know what's best for all users.

idle_zealot
6 replies
13h52m

What would the alternative look like? Applications shipping as a bag of arrangeable buttons and widgets that the user assembles into pages?

rustcleaner
2 replies
11h48m

Actually, I find this highly ideal. I wish there was a button to press which would switch the interface into an almost Visual BASIC GUI editor like thing, permitting me to edit the arrangements. Also, I would like it if such an OS was more strict on forcing its interface objects (think: SimCity 2000 for Win95 with GDI-integrated GUI good, SimCity 3000 with Fisher-Price full screen toy interface bad). Also throw out much of the post- Windows 2000/KDE 3.5 desktop user interface 'innovation' but make all things editable in layout. I WANT MY COMPLICATED BUTTON GRIDS! :^(

rustcleaner
0 replies
11h34m

Siemens PLM NX 10 is another example of what I like in an interface. The GIMP big time as well for its customizability. You know what I don't like? Gnome. I curse Gnome 3 (namely, the design cancer Gnome fell to early on) for why KDE has yet to recover to the comfiness of KDE 3.5. Apple is another hate.

I want a computational environment, I am a cyborg! I build my environments to my specifications. I am a privacy and control absolutist with these devices, because they are cybernetic extensions of my mind. SV: Stop being over-opinionated pricks trying to monetize every last drop of attention for every bottom-pocket penny in microtransactions. What we develop here is far and beyond more spiritual than we can all imagine. The utter lack of owner/user sovereignty shown lately, basically since iPhone and Facebook, captured in the term Enshittification, is absolutely appalling.

Anyway, thank you for reading my unspellchecked schizo-ramblings. Now carry on with the great monetization, metatron hungers!

Panzer04
0 replies
5h48m

I think this tends to sound like a better idea than it is. It's good for power users who want to optimise their UI to suit, but regular users aren't going to do that.

Gesture navigation's lack of discoverability is a problem for sure, although I'm not sure how to best address it (people aren't likely to sit through tutorials...)

eviks
0 replies
12h4m

Or the user picks from a set assembled by someone else.

eimrine
0 replies
11h51m

The smartphone world is too crooked to have an alternative IMO. Just keep eating everything the vendor gives you on the top of shovel.

LeoPanthera
0 replies
7h8m

Congratulations, you just invented OpenDoc.

nolist_policy
0 replies
8h8m

I don't know, I've seen mature people who couldn't operate a cassette deck and likely would have trouble with a typewriter. These people definitely grew up around these devices.

I don't think (modern) technology is at fault here.

kjkjadksj
0 replies
52m

It doesn’t help when you get a new iphone it doesn’t ship with its documentation. You have to get to the actual documentation page on apples site, and then dig a little to get to a page that looks like this (1) that merely outlines a few possible gestures. Not which ones to use when beyond a one sentence example. And this is just for the OS. What apps ship with documentation that outlines how these gesture functions are used in their app?

https://support.apple.com/guide/iphone/learn-basic-gestures-...

throw_m239339
1 replies
20h58m

Or HN. All that talk about "brutalist web design" yet most websites are more bloated than ever...

MaxBarraclough
0 replies
20h36m

'Brutalist web design' is a pretty small niche though, no? It's the kind of thing Hacker News readers will have heard of, but I don't think it was ever close to mainstream.

ildjarn
1 replies
20h47m

This loads faster than native apps serving local content on my device.

yen223
0 replies
20h33m

For me, it took an estimated 3-5s to load on first visit. Fast, but not "faster than native apps"

The second time round it loaded almost instantly.

I'm guessing there's some caching going on.

Solvency
1 replies
20h52m

Ironically the modern web, built by programmers, is scorned by programmers. You all collectively, persistently, shamelessly decided AngularReactNodeWebpackViteBloat 200mb asynchronous hellspawn websites needed to be made this way.

When all this time, lightweight CSS and anchor links and some PHP was all we needed.

RGamma
0 replies
20h21m

*built by techbros

xyst
0 replies
20h21m

How crude. I can’t even post gifs. This is basically a glorified e-mail client, but with extra steps. No social media integration? What is this 2004? It’s not even decentralized like matrix.

Can’t even post inline videos, bro.

\s

Jokes aside, I do miss this type of interaction. Especially for open source projects. It made finding solutions to common issues much easier when documentation was lacking or had not been updated in a long time.

Now all or most projects have adopted some form of: discord channel, slack group, subreddit, twitter. I remember searching for my similar issue in a slack channel only to realize the chat history had been limited because the owners did not pay the extra amount to archive messages beyond what was given for free.

mhd
0 replies
18h17m

IIRC, the D forum also offers direct NNTP access. Would be interesting to compare web access with e.g. tin on a variety of devices…

c2xlZXB5Cg1
0 replies
20h58m

What a refreshing experience.

Devasta
9 replies
20h32m

If you don't have a good phone and a high speed connection, you don't have any money to spend on either the site's products or the products of their advertisers.

When looked at from that angle, bloat is a feature.

It's not reasonable to have an expectation of quality when it comes to the web.

genewitch
2 replies
19h1m

Virtually all pharmaceutical advertising is targeted at prescribers, yet we all have to watch/view them.

YoshiRulz
1 replies
10h10m

That's mostly an American thing.

genewitch
0 replies
10h1m

As an American by accident, I apologize; you're right. More civilized countries have outlawed that sort of advertising.

jhanoncomm
1 replies
19h37m

I think you are close to the truth there.

But I doubt companies purposely increase their hosting costs as some kind of firewall to only include the rich. More like they just don’t care. Same reason for technical debt, everyone wants to grow and move needles.

If a company could magically make their site more available and efficient for free I am sure they would jump at the chance. But spending a million on that vs. a million on ads won't seem worth it.

pixl97
0 replies
18h41m

Ah, the modern AAA games take on MTX. Who cares about gamers, fish for whales.

LAC-Tech
1 replies
20h21m

Huh? I have a 5 year old, mid range android, and I still buy things online.

Not everyone cares about phones.

blauditore
0 replies
19h59m

Also, there are some websites targeting users with little money as well.

politelemon
0 replies
19h29m

This is addressed in TFA and is not true. The bloat is a symptom of what I've seen referred to as the "laptop class" and is unrelated to anything feature-adjacent.

Uehreka
0 replies
20h0m

Well that take sure goes from 0 to 60 real fast. Can you really be sure that only people with good phones and connections have money to spend? Just to poke some obvious holes: what about old rich people who have a distaste for modern phones but spend lavishly on vacations every year? Or outdoorsy rich people who are frequently in areas with poor cell coverage but are constantly purchasing expensive camping/climbing equipment? How about people who aren’t rich, but work for companies where their input is part of a purchasing process with millions of dollars of budget? Those people are all super-lucrative advertising targets, I don’t think advertisers are intentionally weeding them out.

zbrozek
7 replies
19h19m

There's also a huge tendency to design for fast, high quality connectivity. Try using any Google product on airplane wifi. Even just chat loads in minutes-to-never and frequently keels over dead, forcing an outrageously expensive reload. Docs? Good luck.

I wish software engineers cared to test in less than ideal conditions. Low speeds, intermittent connectivity, and packet loss are real.
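Even a crude automated check would catch the worst of it. A sketch of the kind of thing I mean, assuming Puppeteer's network and CPU emulation APIs (the URL and throttle factor are placeholders):

    // Sketch: load a page under emulated bad conditions and report the time.
    import puppeteer, { PredefinedNetworkConditions } from 'puppeteer';

    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.emulateNetworkConditions(PredefinedNetworkConditions['Slow 3G']);
    await page.emulateCPUThrottling(4); // stand-in for a low-end phone

    const start = Date.now();
    await page.goto('https://example.com/', { waitUntil: 'networkidle2', timeout: 120000 });
    console.log('Loaded in ' + (Date.now() - start) / 1000 + 's under Slow 3G + 4x CPU throttle');
    await browser.close();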

genewitch
2 replies
19h8m

The last decade of my life has been a speedrun in "less than ideal conditions" for computing. CGNAT, 5mbit DSL, spotty "fixed wireless" and my latest debacle: Starlink, although that seems to be getting better slowly; it used to drop 15/60 seconds, now it drops more like 4/200 seconds. Constant power issues and lightning strikes - I only have 1 computer that has a working NIC, because evidently tiny power fluctuations are enough to send most chipsets into the graveyard. I had to switch to full fiber between all compute sites on my property, and a wifi backup, because copper is too risky.

zer00eyz
1 replies
18h23m

Do you have earth return on your power?

genewitch
0 replies
9h55m

Yes, and it works, too. But I have outbuildings with servers and networking gear in them and metal conduit between buildings on/underground. Voltage potentials don't care; if there's a wet extension cord or something that's a less resistive path to start flowing and some gear is on that circuit or adjacent, it'll go.

Overall switching to fiber is cheaper than aggressive lightning protection, and I moved all the network gear to a commercial UPS, and the interconnect between the "modems" and the switches is media converted to fiber for 3 feet. Any time I have to run networking further than 6' or so I run fiber and put a media converter or a single GBIC switch there. I'm hoping I futureproofed enough to upgrade to 10gbit in a year or so. My backup NAS has 10gbit but nothing else is connected at that speed yet.

edit: One time lightning hit a pine tree in the back of the house, and it used my dipole antenna to reach a tree 80' away, and apparently there was an extension cable near there, which went back into the house, and it went all the way around the house, to reach the telco CPE box where DSL lived. the telco box and my mains earth are roughly 1 meter apart. That surge took out my main desktop computer, a washing machine (singed the dryer where it arced between it and the washer), the toaster oven, a microwave, my NAS, and my router connected to telco. It went two different paths inside the house, along both outside walls, one via mains copper and the other via cat5e copper. That was quite an expensive misadventure.

timeon
0 replies
19h12m

What I find interesting is that design of websites is often 'mobile first' but rarely 'mobile connection first'.

pixl97
0 replies
19h4m

Developers are expensive, so we give them fast connections and fast computers. Then we act shocked when modern software/web requires fast computers.

Unless it's somehow regulated that people test under less than ideal conditions it won't happen, yet most people (myself included) don't really want that either.

nottorp
0 replies
7h34m

I call this "Designed in California" like some fruity company proudly says on their devices.

For software this means designed on top of the line hardware, with fast low latency internet. TFA describes the consequences.

For hardware it means designed inside climate controlled dust free offices and cars for people with long commutes to work on straight roads where you don't have to pay much attention.

Think phones shutting down if you have a real winter. Think smart turn stalks that can't signal a left turn on a crossroads that's not at 90 degrees. Think ultra thin laptops where the keyboard is so dust sensitive it lasts 3 months if you use them outdoors. Think a focus on audiobooks and podcasts because you're stuck in traffic so much.

kjkjadksj
0 replies
35m

I live in some hills and some days I need to fully drive out of them to get Google Maps to load the map. A map I am already caching locally on my phone, using half a GB. What's even the point of that cache? Same thing with Spotify. Why is there latency searching my downloads library in offline mode?

mik1998
7 replies
20h50m

I often use a Thinkpad X220 (which still works for a lot of my usage and I'm not too concerned about it being stolen or damaged) and the JS web is terrible to use on it. This mostly resulted in my preference for native software (non-Electron), which generally works perfectly fine and about as well as on my "more modern" computer.

jwells89
4 replies
20h24m

Whenever I pull out old machines I’m a little shocked at how responsive they are running a modern OS (Win10 or Linux), so long as the modern web is avoided. Anything with a Core 2 Duo or better is adequate for a wide range of tasks if you can find non-bloated software to do them with.

Even going back so far that modern OS support is absent, snappiness can be found. My circa 2000 500MHz PowerBook G3 running Mac OS 9.1 doesn't feel appreciably slower than its modern-day counterpart for more tasks than one might expect, and some things like typing latency are actually better.

skydhash
0 replies
19h5m

I have a mac mini 2011 and it works great with Linux Mint. But load youtube and you’re in a world of pain.

ogurechny
0 replies
16h37m

The “true UNIX way” solution to this would be getting the data from the Web non-interactively and piping it through some regular expressions to produce the only thing you want. Random example:

https://github.com/l29ah/w3crapcli

nicbou
0 replies
14h43m

My 12" Macbook was my main computer for 2022 and part of 2023. It ran smoothly for my workflow, even with a 4k monitor.

However YouTube and Gmail brought it to a crawl. I had to sell it because Youtube Music slowed down my work.

anthk
0 replies
19h54m

A Core Duo is perfectly fine with an ad blocker:

git://bitreich.org/privacy-haters

amlib
1 replies
8h45m

I remember going through a similar situation when using a netbook. At first they were ok for doing light work and even accessing websites, but as time went on websites and browsers became more and more heavy. Youtube was a struggle, even Google felt laggy. Want to browse a map? You are better off getting a physical one! But, no worry, it was still fine for other low intensity things and some programming projects I worked on. About two years later and both KDE and GNOME would struggle to run on it, it was painful. Maybe I should have switched to an all CLI/terminal workflow but eventually I bought a used thinkpad X220 which was like taking a breath of fresh air after holding it for years. But now I do see the same pattern emerging, much slower mind you, but it is surely happening. Some websites feel sluggish, some GNOME apps also feel sluggish and I have to avoid Electron apps like the plague. But at least it has enough brawn (16GB of RAM and an SSD) to cut through the bullshit and work ok on most things. Maybe I should have embraced that terminal lifestyle after all...

keyringlight
0 replies
2h51m

I'm sure there's an odd parable with netbooks, around the time they first started appearing as a hacky project and early commercial products they were lean and mean. Lightweight local software to do things online, compact flash IDE converters versus HDDs (which seems like a precursor to SSDs by proving a market), bare bones linux and there was a new wave of web standards and performance which non-IE browsers were leading in.

Then after going mass market OEMs put full windows and client software on there, and the web became heavier so webmail or simple office/collaboration slowed down. After that mobile/tablets were in competition for the market, and has practically devoured non-professional usage for PCs outside of gaming.

What I keep coming back to is bundling versus unbundling - having one tool to do everything with likely inevitable compromises, versus splitting into a number of precise specialized ones. It's difficult to convince any decent number of people to take something that does less.

MichaelMug
7 replies
21h18m

Since 2000, I've observed the internet shift from free sharing of information to aggressive monetization of every piece of knowledge. So I suspect that is the culprit. If you use the mobile web on the latest iPhone you'll find it's unusable without an ad blocker.

smokel
5 replies
21h5m

Hm, not entirely true, depending on what you mean by "the internet shifting".

The internet has grown, and the free sharing is still going strong. Have a look at Wikipedia, Hacker News, Arxiv.org.

To be honest, the stuff that was shared freely in 2000 was not all that great, and most of that which was, is still available. Remember that you had to buy a subscription to Encyclopaedia Britannica back then, and to all the academic journals.

Granted, there are some non-free information silos, but generally I'm pretty happy with the procrastination advice on Reddit being surrounded by annoying ads that drive me away.

ibz
1 replies
20h45m

Encyclopedia Britannica was on CDs, not on the internet. I'm old enough to remember.

geraldwhen
1 replies
20h49m

And Britannica wasn’t filled with highly moderated propaganda.

Wikipedia is a failed experiment.

permo-w
0 replies
20h28m

Wikipedia is great, it's just not as good as it could be

Solvency
0 replies
20h55m

Google "Roche Ff7 rebirth". I was curious who this character is. In 2000-2012 all the top links would be amazing fan sites and forums describing, discussing, and detailing the character with rich info.

Now it's all AI seo spam LADEN with data mining and ad boat on monolithic sites like Fandom they barely work on the newest iphone.

permo-w
0 replies
20h28m

the tragedy of the commons

zac23or
6 replies
20h51m

Nobody, nobody, nobody cares about old hardware, performance, users, etc. If anyone cared, React wouldn't be a success. The last time I tried to use the React website on an old phone, it was slow as hell.

Let's Encrypt is stopping serving Android 7 this year. Android 7 will be blocked from 19% of the web: https://letsencrypt.org/2023/07/10/cross-sign-expiration The workaround is to install Firefox.

Users with old hardware are poor people. Nobody wants poor people around, not even using their website.

“Fuck the user”, that's what we heard from a PO when we tried to defend users, imagine if we tried to defend poor users.

robocat
2 replies
19h53m

PO

What's the acronym?

Unfortunately acronyms are context sensitive and many users here are not in your context... Maybe try to avoid using acronyms!

zac23or
0 replies
19h47m

Product Owner

gkbrk
0 replies
19h50m

Product owner?

supertrope
1 replies
19h55m

I think Let's Encrypt made a heroic effort. They deployed a hack to support Androids long abandoned by the operating system maintainer and manufacturer. If you want to blame LE for the breakage then also blame: GOOG for using the IBM PC clone business model without a device tree standard, QCOM for selling chips but very quickly cutting support, the manufacturer, and cellular carriers who prefer to lock you into another 24 month installment plan than approve an update for your existing handset.

zac23or
0 replies
19h42m

If you want to blame LE for the breakage then also blame ...

Of course they are also guilty. LE isn't the most to blame in reality, it's just an example that old hardware isn't important to decision makers.

jiggawatts
0 replies
19h26m

The problem is that this attitude infects even government departments, which ought to serve all citizens, not just the rich ones.

tdudhhu
6 replies
20h38m

Not only the user is affected by this.

The difference between a 2MB and a 150KB CSS file can be a lot of bandwidth.

The difference between a bad and good framework can be a lot of CPU power and RAM.

Companies pay for this. But I guess most have no clue that these costs can be reduced.

And some companies just don't care as long as money is coming in.

supertrope
2 replies
20h2m

A lot of companies don't care about end user performance experience. Companies will burden issued PCs with bloated anti-virus, endpoint monitoring, TLS interception, Microsoft Teams, etc. If there's no explicit responsiveness goal, then performance dies by a thousand cuts.

pixl97
1 replies
18h43m

Companies will burden issued PCs with bloated anti-virus,

Ugh, bane of my day job. I work with two companies in particular that have high security requirements in their environments and very similar total workloads with our software. One spends around $250k (ish) a year on self-hosting costs, the other over a million to get the same throughput. The less costly one worked with us as a vendor to get anti-virus/endpoint exclusions on the file-I/O-intensive part of our application, put anti-virus scanning before that point, and then hardened those machines in other ways. The other customer is "policy demands we scan everything everywhere and the policy is iron law".

Nemo_bis
0 replies
10h50m

Worst is, nowadays such bloated "security" software is being forced onto Linux servers too... every time I check why something feels slow, Microsoft Defender is hogging resources.

afavour
1 replies
19h53m

Eh. Cloudfront pricing starts at 8.5c per GB and goes down to 2c. I think you’d struggle to use that pricing as a justification when compared to the software engineer hours required to shrink down a CSS bundle. (don’t get me wrong, 2MB is insane and ought to be a professional embarrassment. But I think you’re going to struggle using bandwidth bills as the reason)

I agree with you about frameworks, though. So much waste in creating everything as (e.g.) a React app when there’s no need. Sadly the industry heavily prioritises developer experience over user experience.

Valord
0 replies
18h28m

This, although I often feel that modern web frameworks (React and similar) do not provide a better developer experience.

jillesvangurp
0 replies
10h18m

It's a numbers game. Mostly the difference doesn't matter at all to the vast majority of users. Optimizing for the bottom 1 or 2 percent that don't have any disposable income to update their phones, or pay for your wonderful products or services is not a big priority. And not all companies have rockstar developers working for them. That's why things like wordpress are so popular.

I actually pulled the plug on a wordpress site for my company last week. We now have a static website. It's a big performance improvement. But the old site was adequate even though it was a bit slow to load. So, nobody really noticed the improvement. Making it faster was never a requirement.

What is worth optimizing for is good SEO. There's of course a correlation between responsiveness and people giving up and abandoning web sites. That's why big e-commerce sites tend to be relatively fast. Because there's a money impact when people leave early.

What I find ironic is that the people complaining about this stuff are mostly relatively well off developers with disposable incomes and decent hardware. If they use crappy/obsolete hardware it's mostly by choice; not necessity. Some people are a bit OCD about performance issues as well. They notice minor stutters that nobody cares about and it ticks them off.

2MB is nothing. I'm saying this as somebody who used cassettes, and later floppy disks with way less capacity. But that's 35 years ago. The only time when this matters to me is when I'm on a train in Germany and my phone is on a really flaky mobile network that barely works. Germany is a bit of a third world country when it comes to mobile connectivity. So, that's annoying. But not really a problem web developers should concern themselves with.

hexage1814
6 replies
18h20m

Many pages actually remove the parts of the page you scrolled past as you scroll

There is a special place in hell for every web developer who does that.

teg4n_
5 replies
16h17m

It's a performance optimization for rendering a large amount of HTML. If the DOM had all the items in memory it would perform much worse. Thankfully browsers are working on a feature where you can keep the markup in the DOM for things like Ctrl-F without hurting performance.

Granted the main reason such a technique is needed is designs that avoid pagination.

raybb
1 replies
6h5m

What is the feature called?

anonymoushn
1 replies
14h28m

We had web pages with big lists and tables in the DOM 20+ years ago, they were fine. The difference is that now we use web frameworks that do work proportional to DOM size many times per second.

hexage1814
0 replies
11h49m

Call me a conspiracy theorist, but I think it's all a plan to make it harder for people to save stuff. If the content just stays there after it loads, you can just save the page as HTML and, if there aren't a lot of JavaScript shenanigans, it saves okay. When you add this behavior, that doesn't work anymore. I'm pretty sure Instagram, for example, does it with the intention of making it harder for people to save profiles.

majewsky
0 replies
8h32m

I am usually just a backend developer, but for a little reporting application that I built, I couldn't get the UI team to do a UI in the short time that I had to build it, so I had it output some basic HTML. About 10000 list items. Rendered imperceptibly fast on my browser.

Then because of $mandate, the report was moved to the team's standard React UI frontend. Now it takes 5 seconds to load and only gives you like 100 items at a time, so Ctrl-F is broken. Also, filter dropdowns somehow did not work until they fixed it, so it appears like the select tag was not fit for their design and they rolled their own.

efields
6 replies
21h8m

How web bloat impacts users: negatively. Better do your best to fix it.

This stuff is simpler than we let it be sometimes, folks.

withinboredom
5 replies
20h55m

This stuff is simpler than we let it be sometimes, folks.

Meanwhile watches a team build a cathedral when all they needed was a shack.

ponector
3 replies
19h9m

Why not build a cathedral if someone else is paying?

I've never seen a company where developers are rewarded for performance improvements or any kind of improvement. Made an improvement? Nice! Good job! And that's it.

withinboredom
2 replies
8h0m

The point is, you build a shack so you can build a cathedral where it’s warranted. If you are stuck maintaining a cathedral you can’t move on to bigger better things.

nottorp
1 replies
7h25m

Yep, you build a shack and charge the users the price of a starbucks latte per month because "it's just a starbucks latte".

Then you wonder why the solo founder saas has no customers.

withinboredom
0 replies
6h53m

I'm not sure what you're talking about; but what I was trying to say is I tend to see teams get charged with building X (which should be a cathedral), but then build a cathedral of configuration parsing, and a cathedral of CRUD; instead of focusing on X.

efields
0 replies
6h41m

I like my job security large, ornate and full of stained glass.

throw_m239339
4 replies
21h2m

Every company stopped caring, especially the companies who were at the forefront of standards and good web design practices, like Google and Apple.

Google recently retired their HTML Gmail version; mind you, it still worked on a 2008 Android phone with 256MB of RAM and an old Firefox version, and it was simply fast... of course the new JS-bloated version doesn't, it just kills the browser. That's an extreme example, yet low-budget phones have 2GB of RAM, and you simply cannot browse the web with these and expect reasonable performance anymore.

Mobile web sucks, and it's done on purpose, to push people to use "native" apps, which makes things easier when it comes to data collection and ad display for companies such as Apple and Google.

spintin
2 replies
17h34m

Yes, on Thursday Google ended their only viable "product".

RIP Google.

The new Reddit is unusable, and the old is well too old.

Twitch is borderline usable, with chat and video stream problems...

The list is long...

All changes are bad when you have the final formula because they are job security.

Eventually the monkeys on this ball of dirt will realize that jobs and money don't exist, but then it will be too late... oh, that is now!

RIP Humans.

subtra3t
1 replies
10h30m

the old is well too old

What's wrong with the old Reddit UI?

spintin
0 replies
7h8m

It has usability problems with, e.g., collapsing a comment tree.

I returned to it after the last major reskin too, but then they fixed the new one enough to become usable.

Now they removed the middle version... they should have made recent.reddit.com for those who want to wait until new.reddit.com doesn't suck as much.

lukan
0 replies
20h49m

"Mobile web sucks, an it's done on purpose, to push people to use "native" apps which makes things easier when it comes to data collection and ad display for companies such as Apple and Google."

Partly for sure, but Amazon for example? Or Decathlon? (a big sports/outdoor chain in Europe)

Their sites are just horrible on a mobile (or in Decathlon's case also on a desktop that is not high end), but they also don't push their app at me in plain view, so I have to assume it is just incompetence. The devs only test everything on their high-end devices connected to a backbone.

olliej
4 replies
21h14m

It’s not just slow devices, it’s also any time you have any kind of weak connectivity.

I think every OS now has tools to let you simulate shitty network performance these days so it’s inexcusable that so many sites and even native apps fail so badly anytime you have anything less than a mbit connection or greater than 50ms latency :-/

layer8
3 replies
21h10m

It’s not just weak connectivity. I know people in rural areas who still have less than 1 Mbps internet speed over their DSL landline. Using the internet there isn’t a lot of fun.

olliej
2 replies
21h0m

Which is absurd when you think that the internet used to be usable on 14.4k modems.

I remember having to plan to take up hours of time on our phone line to download giant files that were smaller than many basic webpages these days (ignoring things like photos where there's obviously a basic size/quality tradeoff + more pixels)

layer8
0 replies
19h14m

Yes, 1 Mbps was actually high-speed internet 25 years ago.

genewitch
0 replies
18h56m

When i first moved to where i live now DSL had a waitlist, so i tried both a verizon hotspot (myfi!) and dialup. Dialup with HTML gmail (for slow connections!) took minutes to load. IRC was completely usable, but hangouts was not. danluu's website would have loaded just fine, as an example. I just remembered that after getting DSL if more than one person decided to watch a youtube video the pings went up in the 1000ms range.

maxloh
4 replies
19h53m

YouTube is one of the slowest websites I have ever used.

It takes several seconds to load, even with moderate hardware and fast internet connections.

jhanoncomm
2 replies
19h43m

Reddit for me is the slowest site. And while old.reddit fixes this they try to steer you back to main reddit at any opportunity!

genewitch
0 replies
19h3m

RES fixes this, i think. It's a browser extension that forces everything to stay the way it was when reddit worked fine - before publishers bought it.

I don't have any issue with reddit usability, although i do use it a lot less since they nuked my cellphone app from orbit as a cash grab.

Vilian
0 replies
3h50m

Same, but i'm using lemmy more

lazypenguin
0 replies
16h11m

YouTube doesn't feel zippy as a website, but the reliability and speed of videos have been very good for me. I remember the days when buffering videos was hell.

amelius
4 replies
21h14m

Perhaps these people are better off by running a web browser on a remote machine and interfacing with it over VNC.

ericra
1 replies
21h3m

This is trolling, right?

Lemme just give my grandma a list of instructions for doing this so she can get to Facebook. I'll let you know how it works out.

wmf
0 replies
20h32m

Obviously you'd want to productize it (see WebTV, Mighty browser).

wmf
0 replies
21h6m

Who's going to pay for that server? We're talking about $50-100 phones here.

hexo
0 replies
19h55m

Webdevs and their managers should use these web "apps" on a bad machine over VNC on a slow connection for a few months. These JavaScript hellpages are basically a crime against humanity and contribute a lot to e-waste, pollution and carbon dioxide emissions.

nolist_policy
3 replies
20h2m

Next try out the search engines.

Anecdotally, Google Search loads ~500ms faster than DuckDuckGo on the OG Pinephone.

jhanoncomm
2 replies
19h41m

That is one performance metric. What about energy use, and loading the search results, not just the home page? I find DDG faster from a perception point of view. I imagine on some metrics it is faster.

ponector
0 replies
19h5m

Did you measure the time a user needs to scroll and click to reject Google's cookies?

coolcoder613
3 replies
13h9m

Compare with one of my projects, [1]

It is a minimal, though modern-looking web chat. The HTML, CSS and JS together is 5024 bytes. The Rust backend source is 2801 bytes. It does not pull in anything from anywhere.

[1] https://github.com/coolcoder613eb/minchat

cuu508
1 replies
12h10m

Is there a demo site or screenshots somewhere? Add them to README :-)

coolcoder613
0 replies
8h34m

I added a screenshot.

coolcoder613
0 replies
8h8m

Clarification: The frontend does not pull anything, the backend pulls in libraries for websockets and json using cargo.

uaserussia
2 replies
4h43m

I have a modern i7, 64GB RAM, an RTX 3090, a 7GB/s NVMe SSD and a 1Gbps internet connection. I can run pretty much any game maxed out in 4K at 100 fps, download 100GB files in a few minutes, do all sorts of tasks and workloads, and calculate the 20 billionth digit of pi in a microsecond. What I can't do, however, is use Twitter without stutters and hitches, or Windows, or any shopping website.

Nice work, webdevelopers!

azangru
1 replies
4h32m

or any shopping website

Could you give an example? I used a shopping website yesterday, both on a laptop and on an android phone, and apart from cookie banner popups (design choice, not hardware limitation), did not have any significant inconveniences.

uaserussia
0 replies
4h17m

Amazon, eBay, Yeezy (which was better than most), Armani, Nike, any assortment of regular web-shopping websites, even computer-parts shopping websites, lol. They're all slow and a glitchy mess. They're all horrible. Trying to browse them on my old 13-inch ThinkPad is akin to torture.

sylware
2 replies
20h56m

google web engines (blink/geeko) and apple web engines with their SDK are sickening. They are an insult to sanity. They well deserve their hate.

tredre3
1 replies
20h17m

The engines are perfectly fine.

It's the websites/web developers that are the problem.

sylware
0 replies
18h16m

I don't agree, the web devs are making it worse.

npteljes
2 replies
58m

It's interesting research, but at the end of the day, the websites are there to make money. Well, looking at the table, maybe the author's own isn't, but the rest are. And so, I think the businesses don't optimize more because there isn't much more money to be made that way. Instead, the same effort is better spent elsewhere, like marketing, or having software that's quickly adaptable and easy to find interchangeable developers for. So they are optimized, just not for speed on low-end devices. Different goals.

lmpdev
1 replies
52m

I was thinking about this the other day

I’d happily pay $100/month to access the internet similar to that of pre-2005ish

As in, banning almost all commercial activity

I truly believe Google isn't getting worse; it's just that the incentives behind the creation of web content have become progressively misaligned with the user's desires

I want a high quality internet, and am willing to fork out large sums of money to access it

I hope I’m not alone

npteljes
0 replies
5m

Well, that's not the pre-2005 internet I remember. What I remember are popups, pop-unders, poisoned search results due to crude SEO tactics like including small background-colored text on websites, endless rings of web pages referring to each other, heavily blinking banners, and even the best ad blocker being so slow on the machine that it's no joke. And this is the commercial abuse only, there was a lot of other types going around.

I do despise many aspects of the current internet, but I think that it's the fallibility of man that poisons the nice things, and I don't think that it was ever too much different in this regard.

For a different internet, there are ways to go about it. I'm not sure how much of it you already know.

Millionshort provides alternative search results to queries. I think this is similar to what you're looking for, and it's free.

Alternative networks spring up from time to time, like the Gemini network. I'm not sure how much of a content desert they are, as I'm not a frequent user.

Generally if you hang around in free software / open source spaces, they have a lot of people with an alternative take on the modern things, including people taking part of an internet that's not mainstream, for example by excluding running any JavaScript. This can lead to other places, forums, and so on.

I wish you luck. But be prepared that the past is gone. Maybe never existed in the first place.

legulere
2 replies
21h0m

Missing text styling impacts all users. The text is hardly legible. You really don't need much styling (bloat) to get a good result, as demonstrated on http://bettermotherfuckingwebsite.com

bensecure
0 replies
19h48m

addressed in the article

hotdailys
2 replies
15h20m

When websites pack in too many high-res images, videos, and complex scripts, it’s like they’re trying to cram that overstuffed suitcase into a tiny space. Your device is struggling, man. It’s like it’s running a marathon with a backpack full of bricks.

So, what happens? Your device slows down to a crawl, pages take forever to load, and sometimes, it just gives up and crashes. It’s like being stuck in traffic when you’re already late for work. And let’s not even talk about the data usage. It’s like your phone’s eating through your data plan like it’s an all-you-can-eat buffet.

Now, if you’re on the latest and greatest tech, you might not notice much. But for folks with older devices or slower connections, it’s a real pain. It’s like everyone else is zooming by on a high-speed train while you’re chugging along on a steam engine.

So, what can we do? Well, we can start by being mindful of what we put on our websites. Keep it lean, mean, and clean, folks. Your users will thank you, and their devices will too. And hey, maybe we’ll all get where we’re going a little faster.

lmz
1 replies
15h10m

Maybe we'll see the return of the proxy + lightweight browser model like Opera Mini.

hotdailys
0 replies
15h6m

And lightweight APPs, one tap to load all...

grishka
2 replies
17h44m

That Discourse guy is a classic example of someone designing their product for the world they wished existed instead of the world we actually live in. Devices with Qualcomm SoCs exist in billions, and will keep existing and keep being manufactured and sold for the foreseeable future. No amount of whining will change that. Get over it and optimize for them. People who use these devices won't care about your whining, they'll just consider you an incompetent software developer because your software crashes.

BeFlatXIII
1 replies
5h9m

Or they take the route to say “not for you”

grishka
0 replies
1h59m

It only works when you have a coherent vision of your product. "We can't be assed to optimize our code because we value DX above all else" certainly isn't that.

avodonosov
2 replies
14h56m

I think bloat could be prevented if it were noticed the moment it is introduced.

After an application has evolved to be bloated, it's difficult to go back and un-bloat it.

Bloat is often introduced accidentally, without need, and unnoticed, just because developers test on modern and powerful devices.

If the developer's regular test matrix included a device with minimal hardware power that was known to run the product smoothly in the past, the dev could immediately notice the newly introduced bloat and remove it.

Bloat regression testing, in other words.

I call this "ecological development".

We should all do this. No need to aim for devices that already have trouble running your app / website. But take a device that works today and test that you do not degrade with respect to this device.
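
A minimal sketch of what such a check could look like in CI, assuming Puppeteer is available; the URL, throttling factor and budget below are made up for illustration, and the point is the shape of the test rather than the numbers:

  // bloat-regression.js - fail the build if the page gets slower on a weak CPU
  const puppeteer = require('puppeteer');

  const URL = 'https://example.com/';   // hypothetical page under test
  const LOAD_BUDGET_MS = 5000;          // baseline the product met in the past

  (async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // Slow the CPU down 6x to approximate a low-end phone.
    await page.emulateCPUThrottling(6);

    const start = Date.now();
    await page.goto(URL, { waitUntil: 'networkidle0', timeout: 60000 });
    const elapsed = Date.now() - start;

    // Script time and heap use also tend to grow as bloat creeps in.
    const metrics = await page.metrics();
    console.log(`load: ${elapsed}ms, script: ${metrics.ScriptDuration}s, heap: ${metrics.JSHeapUsedSize}`);

    await browser.close();
    if (elapsed > LOAD_BUDGET_MS) {
      console.error('Bloat regression: load time exceeded budget');
      process.exit(1);
    }
  })();

Because the throttling factor is fixed, the numbers are only comparable to themselves over time, which is exactly what a regression check needs.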

cuu508
1 replies
11h51m

After an application has evolved to be bloated, it's difficult to go back and un-bloat it.

It will be hard to get to pristine quality, but there ought to be some amount of low hanging fruit, where minimal changes bring noticeable improvement.

avodonosov
0 replies
10h33m

Maybe, but determining that will take some investigation. If the regular testing is done on a low-profile device, the developer knows as soon as possible that their recent changes introduced a bloat regression.

apatheticonion
2 replies
9h50m

This is why I'm excited for WebAssembly. Writing an efficient, high-performance, multi-threaded GUI in Rust or Go would be awesome.

Just waiting on it to be practically usable

jenadine
0 replies
7h41m

I wouldn't be so sure. The browser ultimately needs to render the UI from the DOM, which is intrinsically linked with JavaScript. Wasm can help with some of the application logic, maybe. But it also comes at the cost of some fixed overhead to bring up the wasm blob. JavaScript performance isn't that bad for UI.

atahanacar
0 replies
7h43m

Because non-web applications are always very efficient and high performance, right?

The problem isn't the technologies available to us. Majority of devs just have no desire to write efficient code.

zdw
1 replies
20h4m

Is this new or old reddit being benched?

That would be an interesting direct comparison.

re
0 replies
18h33m

New Reddit, per the appendix. I think that Old Reddit is likely to be fairly competitive (I would guess placing near Wordpress), and yeah I agree it would be interesting to have in the table to see how far it's fallen.

smj-edison
1 replies
15h20m

I've always wondered why people removed parts of the page when they were scrolled out. Like, don't you think the browser would already optimize for that? And even if it's not stored in the DOM, it's still being stored in the JavaScript's memory. It's frustrating when people try to reimplement optimizations that the browser already does better.

mike_hearn
0 replies
9h34m

The browser does not in fact optimize that. Yes it's surprising. If you want it to do basic optimizations like not rendering invisible content you need to give it hints via obscure and relatively recent CSS rules nobody ever heard of.
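
For reference, the hints being alluded to are presumably content-visibility and contain-intrinsic-size; a minimal sketch (the selector and the size estimate are made up):

  /* Let the browser skip layout and paint for off-screen sections while
     keeping them in the DOM (content stays findable with Ctrl-F). */
  .comment-thread {
    content-visibility: auto;
    /* Rough placeholder size so the scrollbar doesn't jump before render. */
    contain-intrinsic-size: auto 800px;
  }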

sams99
1 replies
18h57m

Highly Gamed === It is better if users with slow devices see a white screen for 30 seconds vs an indication that something is happening, because ... reasons?

yawaramin
0 replies
18h0m

You missed the point, which is that it's better if users with slow devices see actually useful content rather than a splash screen.

publius_0xf3
1 replies
16h22m

He mentions Substack, which is maybe the most egregious example of bloat I regularly encounter. I cannot open Scott Alexander's blog on my phone because it slows to a crawl.

But the Substack devs are aware of this. They know it's a problem: https://old.reddit.com/r/slatestarcodex/comments/16xsr8w/sub...

I'm much more of a backend person, so take this with somewhat of a grain of salt, but I believe the issue is with how we're using react. It's not necessarily the amount of content, but something about the number of components we use does not play nicely with rendering content at ACX scale.

As for why it takes up CPU after rendering, my understanding is that since each of the components is monitoring state changes to figure out how to re-render, it continues to eat up CPU.

They know—but they do nothing to fix it. It's just an impossibility, rendering all those comments.

hexage1814
0 replies
11h31m

Substack

I don't access this site a lot, but I remember that until very recently they had a different front-end, and it worked great. Honestly, I think they will follow the path of medium.com and start to make the user experience worse and worse.

It's a site where people post text, a few images, maybe 1 or 2 videos per post. It shouldn't be complicated.

ordu
1 replies
15h23m

> Surely, for example, multiple processors are no help to TeX

But TeX was designed to run on a single CPU core, so no surprise here. I wonder what TeX could have become if all Knuth had at the time was a multicore machine with cores managing maybe 0.1 MIPS each (or even less). What would the world look like if we lived in a counterfactual world where Intel and its buddies, starting in the 1970s, boosted not frequency and instructions per second per core but the number of cores?

My take: we'd have switched to functional-style programming in the 1980s, with immutable data, and created tools to describe multistage pipelines with each stage issuing tasks into a queue while cores concurrently pick tasks from the queue. TeX would probably have a simplified and extra-fast parser that could cut input into chunks to feed into a full-blown and slow parser, which would be the first stage of a pipeline, and then these pipelines would somehow converge into an output stream. TeX would probably prefer to use more lexical scoping, to reduce interaction between chunks, or maybe it would add some kind of barrier where pipelines all stop and wait for propagation of things like `\it` from its occurrence to the end.

This counterfactual world seems much more exciting to me than the real one, though maybe I wouldn't be excited if I lived there.

ahepp
0 replies
3h40m

I assumed that to mean the layout work is limited to a single thread. You need to know what content made it onto page one before you can start working on page two, right?

myself248
1 replies
19h8m

Where "users with slow devices" equals "anyone trying to keep hardware running more than a few years", it seems. It's enforced obsolescence.

I've said for a long time, devs should be forced to take a survey of their users' hardware, and then themselves use the slowest common system, say, the 5th-percentile, one day a week. If they don't care about efficiency now, maybe they will when it's sufficiently painful.

logtempo
0 replies
17h34m

The thing is, the boss is not a dev. He is a businessman.

masa331
1 replies
9h23m

My own recent experience with this: I run a small SaaS web app, and about a year ago I decided to partner with an advertising company to help with growth.

Part of the plan was that they would remake our static homepage in WordPress, because it would be easier for them to manage and also easier to add a blog, which was part of the new plan. I know WordPress is slow and, I would say, unnecessary, but I said yes because I did not want to micromanage them.

A year later we parted ways and I was left with WP where the page load was abysmal (3-5 seconds) and about 10MB of bs. There was something called "Oxy" or "Oxy builder" which added tons of styles, JS and clutter to the markup, plus a kind of SPA page-load style that failed horribly.

So now I've migrated the site to Jekyll, got rid of all the bs, and it's fast again. And it's also possible for me to really improve it again.

So for my businesses I'm not touching WP ever again, and that will be a huge bloat reduction in itself.

askonomm
0 replies
9h18m

Seems like your issues were not with WP itself, but with whatever plugins and themes were added to it. Avoiding WP entirely for this is like avoiding a programming language because the 1 developer you had experience with sucked at it. WP itself can be very fast, as is evident by a ton of high profile sites running it (CSS-Tricks, TechCrunch, New York Times, Time Magazine, etc). I'm not a fan of WP myself, but that's just because I don't like how its built and how it entirely avoids modern programming standards, not because it is slow, which it most definitely doesn't have to be.

keernan
1 replies
16h45m

A problem that recently started in Feb 2024 for me is probably unrelated to the topic, but close enough that I'm posting in the hopes someone has an idea of what is happening.

I am running on a relatively new Lenovo Legion (~ 18 months old) with 64kb of ram running windows 11. About 6 weeks ago I began getting the BSOD every time I streamed a live hockey game (I watch maybe 3 games a week from Oct to Jun via Comcast streaming or 'alternative' streams).

The crashes happened multiple times every game. After maybe 10 games of this, I began closing and reopening the browser during every game break. I've experienced zero crashes since doing that.

When the crashes started, I was using Chrome - but I still experienced BSOD crashes when I switched and tested Fox and Brave. Just very odd to start happening suddenly without any changes to my machine that I could pinpoint - no upgraded bios or nvidia that I can recall.

coolcoder613
0 replies
13h38m

with 64kb of ram running windows 11

I hope you mean GB.

kabes
1 replies
17h44m

As someone who makes bloated sites, I can only say that management doesn't give a fuck about bloat as long as features are checked off in due time. So please don't blame me.

carlosjobim
0 replies
2h31m

What about pride in your vocation?

jauntywundrkind
1 replies
21h8m

These sites can and should be much better. Yes. Definitely.

At the same time, while a 10s load time is a long time & unpleasant, it doesn't seem catastrophic yet.

The more vital question to me is what the experience is like after the page is loaded. I'm sure a number of these sites have similarly terrible architecture & ads bogging down the experience. But I also expect that some of those which took a while to load are pretty snappy & fast after loading.

Native apps probably have plenty of truly user-insulting payloads they too chug through as they load, and no shortage of poor architectural decisions. On the web it's much much easier to see all the bad; a view source away. And there is seemingly less discipline on the web, more terrible and terribly inefficient cases of companies with too many people throwing whatever the heck into Google Tag Manager or other similar offenses.

The latest server-side React stuff seems like it has a lot of help to offer, but there are still a lot of questions about rehydration of the page. I also lament seeing us shift away from the thick-client world; so much power has been given to users by the web 9.9 times out of 10 just being some RESTful services we can hack with. In all, I think there's a deficiency in broad architectural patterns for how the thick client should manage its data, and a real issue with ahead-of-time bundles versus just-in-time and load-behind code loading that we have failed to make much headway on in the past decade, and this lack is where the real wins are.

Karrot_Kream
0 replies
21h3m

Yeah this is exactly the kind of nuance I'd love to see explored but as you say, auditing native apps is difficult, and it's really hard to compare apples to apples unless you can really compare equivalent web and mobile apps.

dan-robertson
1 replies
19h2m

If one cares about accessibility of a website to people with much slower devices, particularly living in less developed parts of the world, I guess there are more considerations:

- using more clear English with simple sentence structures should make the content more accessible to people who don’t read English with the fluency of an educated American

- reducing the number of requests required to load a page as latency may be high (and latency to the nearest e.g. cloudflare edge node may still be high)

zozbot234
0 replies
18h28m

reducing the number of requests required to load a page

In practice this pretty much requires pure SSR and "multiple page" design, given the amount of network roundtrips on typical SPA sites. (Some lightweight SPA updates may nonetheless be feasible, by using an efficient HTML-swapping approach as seen in HTMX as opposed to the conventional chatty-API requests and heavy DOM manipulation.)
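
A minimal sketch of that HTML-swapping style with htmx (the /comments endpoint and element ids are made up for illustration):

  <!-- One round trip returns a ready-to-insert HTML fragment: no JSON,
       no client-side templating, very little DOM work. -->
  <div id="comments">
    <!-- server-rendered comments go here -->
  </div>
  <button hx-get="/comments?page=2"
          hx-target="#comments"
          hx-swap="beforeend">
    Load more comments
  </button>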

daft_pink
1 replies
17h39m

I really wish he had compared an M3 Mac to a 6-year-old Intel chip and not some random processor I've never seen or experienced, which I'm not sure is even available in the USA.

illusive4080
0 replies
15h45m

I can vouch that my 2017 MacBook Pro struggles with all kinds of tasks, especially web ones.

automatic6131
1 replies
5h46m

Just as an aside, something I've found funny for a long time is that I get quite a bit of hate mail about the styling on this page (and a similar volume of appreciation mail)

Yes! I've definitely felt like this while using his website. Of course, today I just fixed it with

main { max-width: 720px; margin: 0 auto; }

but tbh, I don't want to install an extension to customise the css on this one site...

augustk
0 replies
4h25m

I have never understood why web browser designers don't care to provide a default style sheet that makes unstyled web pages look nice i.e. with proper spacing of elements and sizes of headings etc.

Thorrez
1 replies
20h45m

Okta has a speed test?

wmf
0 replies
20h36m

Presumably Ookla.

GIFtheory
1 replies
18h19m

Using https://www.mcmaster.com/ makes me wish I were a hardware engineer. Makes every other e-commerce site feel like garbage. If amazon were this fast, I’d be broke within days. Why haven’t other sites figured this out?

bombela
0 replies
4h20m

As a hobbyist, I cannot justify the cost of McMaster. I will confess that I often use it to find the precise name of a part for purchasing on Amazon/AliExpress.

Maybe a quality service really does cost that much? But the gap in performance and usability is so great, it seems that something else must be at play sometimes.

AlienRobot
1 replies
20h0m

I'm glad people remember what WW in WWW means. :)

It makes me very sad to see that Reddit's new design is so heavy it can't even be accessed by part of the world. It's like parts of the internet are closing their doors just so they can have more sliding effects that nobody wants.

Or maybe I'm just a weird one who prefers my browser to do a full load when I click a link.

Btw there was a time everyone kept talking about "responsive" web design and, having used only low-end smartphones and tablets, I kept finding it weird that there was such focus on the design being responsive for mobile devices when those mobile devices were so extremely slow to respond to touch to begin with. Of course I know that's not what they meant, but it still felt weird.

Izkata
0 replies
16h22m

I'm glad people remember what WW in WWW means. :)

Welcome to the Wide Web, where bloat is the norm.

torginus
0 replies
6h24m

I feel like there's a good point made by the Discourse CEO about Qualcomm (and competitors) - the product decision to segment their CPU line by drastic differences in single-threaded CPU perf is a highly anti-consumer one.

In contrast AMD and Intel use the same (or sameish) CPU arch in all of their lineup in a given generation, the absolute cheapest laptop I could find used a Pentium 6805, which still has a GB6 score of well over 1000, sold in a laptop that's cheaper than most budget smartphones.

In contrast, Qualcomm and Mediatek will sell you SoCs that don't even have half of that performance as a latest-gen 'midrange' part.

timnetworks
0 replies
21h2m

68k.news loads fine, it's probably that the people writing your applications are not great at their jobs?

richrichie
0 replies
13h34m

I wonder how much of the bloat of modern shiny internet widgets is pure lipstick that does not add any tangible value.

pompino
0 replies
10h36m

I think it would be useful to separate data and code here. What if you kept the code the same and downgraded the assets, so the overall package is smaller and easier to process/execute? Or maybe tweaked the renderer so the same code and data can render quicker, with slightly worse image quality, consuming fewer CPU cycles? Basically I'm envisioning something like a game, where the same game data+code can support multiple performance targets (except in this case with different CDN hookups to get the assets out, rather than everyone getting the bloated data download).
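
For images at least, the web already has a mechanism in this spirit: responsive images, where the browser picks the smallest adequate asset for the device and viewport. A minimal sketch (file names and sizes are made up):

  <img src="photo-480.jpg"
       srcset="photo-480.jpg 480w, photo-1024.jpg 1024w, photo-2048.jpg 2048w"
       sizes="(max-width: 600px) 480px, 100vw"
       alt="Example photo">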

nofunsir
0 replies
12h50m

It impacts me, and I have a fast device!

nicbou
0 replies
15h15m

I travel a lot and experience a wide range of internet connection speeds and latencies. Hotel Wi-Fi can be horrible.

The web is clearly not designed for or tested on slow connections. UIs feel unresponsive and broken because no one thought that an action might take seconds to load.

Even back home in Germany, we have really unreliable mobile internet. I designed the interactive bits of All About Berlin for people on the U-Bahn, not just office workers on M3 Macbooks with fiber internet.

nhggfu
0 replies
21h3m

re: Wordpress - with which theme? benchmarked on default theme they give away free like "2024" or whatever ?

obvs a good coder optimizes their own theme to get 100% score on lighthouse.

ksec
0 replies
8h11m

There are two attitudes on display here which I see in a lot of software folks. First, that CPU speed is infinite and one shouldn't worry about CPU optimization. And second, that gigantic speedups from hardware should be expected and the only reason hardware engineers wouldn't achieve them is due to spectacular incompetence, so the slow software should be blamed on hardware engineers, not software engineers.

Not just the quote but the whole piece. I am glad this was brought up by Dan and got enough attention to be upvoted (although most are focusing on server rendering vs client-side rendering; sigh). A lot of what it says would have been downvoted to oblivion on HN. A multi-billion-dollar company CTO once commented on HN asking why he should know anything about CPUs or foundries as long as they give performance improvements every few years.

Not only Jeff Atwood: there are plenty of other software developers, from programming language authors to backend and frontend framework authors with hundreds of thousands of followers, who continue to pump out views like Jeff's on social media, without any actual understanding of hardware or the business of selling IP or physical goods.

Hardware engineers have to battle physics, and yet get zero appreciation. Most of the appreciation you see now around tech circles is completely new. For a long time no one had heard of TSMC. ASML wasn't even known until Intel lost its leading node. Zero understanding of CPU design or even basic development cycles, of how it takes years just to get a new CPU out. And somehow people hate Qualcomm because they didn't innovate, a company that spends the highest percentage of revenue on R&D in the tech industry.

khiqxj
0 replies
7h20m

The web is a pile of horse shit, why is this even news? The best part is how all the SJW Apple/Tesla/cloud/smart-tech yuppies in tech don't care that the 99% of the world who can't afford to buy a new machine every year have an experience on their product worse in every way than dial-up, as they force every formerly paper transaction onto the web. Just opening Firefox with a blank home page can take deciseconds and even minutes. Even opening a new blank tab is unresponsive and lags up the UI, on anything but mid-to-high-range desktop hardware.

How does this even have 200 upvotes? I can't count more than 1 or 2 websites that don't have infinite bloat for useless nonsense like the cookie popup, social media whatever, 10 meme frameworks and 100 JS libs injected into the page. HNers just read "bad stuff bad", respond "yup" like a zombie, and continue doing bad stuff.

julianlam
0 replies
13h48m

It's a shame that NodeBB was not included in the list of forums tested.

We worked really hard to optimize our forum load times, and it handily beats the pants off most of what we've tested against.

But that's not much of a brag, the bar is quite low.

Dan goes on and lambasts (rightfully so) Atwood for deriding Qualcomm and assuming slow phones don't exist.

Well, let's chat, and talk to someone whose team really does dogfood their products on slower devices...

jhatemyjob
0 replies
13h39m

Dan, I respect you and I feel your pain, but...

Another common attitude on display above is the idea that users who aren't wealthy don't matter.

If you want to make money, then this is the correct attitude. You need to target the users who have the means to be on the bleeding edge. It may not be "fair" or "equitable" or whatever, but catering to the masses is a suicide mission unless you have a lot of cash/time to burn.

This post reminds me of the standard Stallman quip "if everyone used the GPL, then our problems would be solved"

illusive4080
0 replies
15h51m

My 2017 i7 MacBook Pro struggles on websites. It’s absurd.

hexage1814
0 replies
18h11m

What I've noticed more and more is me using alternative front-ends or deliberately changing my user agent to some old browser on sites that still have a legacy version.

hacker_88
0 replies
7h9m

PUBG runs at 60 fps, the web runs at 0.4 fps. Oh, no optimization.

gcanyon
0 replies
5h45m

I've had this same experience with low-bandwidth situations while traveling: more than a few times I've cursed Apple for not making iOS engineers test with 3G or even 2G connections.

eneville
0 replies
6h17m

Some might be interested in pre-compressing their sites:

  https://gitlab.com/edneville/gzip-disk
It doesn't stop client CPU burn, but it might help get data to the client device without on-the-fly compression a bit quicker, which in my experience is helpful from the server side too.

demondemidi
0 replies
18h13m

I was expecting this to go one level deeper and point out that bloat on critical sites -- banking, medical, government -- can lead to problems paying bills or getting timely information (especially in the case of medical situations that aren't quite emergencies but are close to it).

dan-robertson
0 replies
18h34m

Relating to the aside about opportunities in different countries: the comparison of potential programming career prospects between a poor American and a middle-class Pole feels reasonable for someone born around the same time as the OP (early '80s I guess), but I suspect it's since shifted in Poland's favour.

I think the relative disadvantages of a poor American compared to their wealthier peers have increased as there’s more competition (as the degree is seen as more desirable by motivated wealthy parents) and the poor student likely won’t even have a non-phone computer at home where all their wealthier peers probably will. Possibly they could work around the competitiveness of computer science by going via some less well-trodden path (eg mathematics or physics) except that university admission isn’t by major. They may also be disadvantaged by later classism in hiring. Meanwhile a middle class Pole will have access to a computer and, provided they live sufficiently near one of the big cities, access to technical schools which can give them a head start on programming skills (and on competitive programming which is a useful skill for passing the current kind of programming interview questions). To get the kind of good outcome described in the OP, they then need to get hired somewhere like Google in Zurich (somewhat similar difficulty to in the US except the earlier stages were easier (in the sense of being more probable) for the hypothetical Pole) and progress from there (maybe impeded by initially not being at the headquarters / fewer other employment opportunities to get career advancement by changing jobs). Class will be less of a problem as the hypothetical middle class pole isn’t so different in wealth from other middle class Europeans and you get much less strong class-selection than when (e.g.) Americans are hiring Americans.

cubefox
0 replies
15h31m

I would add that users running out of monthly mobile data volume are still a big issue, likely a bigger one than slow phones. They can't load most websites at 64 kbit/s, because the sites are multiple megabytes large, often without good reason.

For example, when Musk took over Twitter, he actually fixed this issue for some time, I tested it. But now they have regressed again. The website will simply not show your timeline on a slow connection. It will show an error message instead. Why would slow connections result in an error message?!

A simple solution that e.g. Facebook (though apparently not Threads) and Google use, is to first load the text content and the (large) images later. But many websites instead don't load anything and just time out. Probably because of overly large dependencies like heavy JavaScript libraries and things like that.

chefandy
0 replies
16h56m

The web is a communication medium: having bad delivery is going to impact the efficacy of the message. I've worked as both a developer and a designer, and as a developer I've certainly had to push back against content-focused people requesting things they didn't realize were, frankly, bananas. Tech isn't their job, so it was my job to surface those problems before they arose. However, as a designer, I've also had to push back against developers that refused to acknowledge that technical purity is a means to an end, not an end in itself. Something looking the same in lynx and firefox isn't a useful goal in any situation I've encountered, and the only people that think a gopher resource has better UX than a modern webpage stare at code editors all day long.

No matter who it is, when people visualize how to solve a problem, they see how their area of concern contributes more clearly than others'. It's easy to visualize how our contributions will help solve a problem, and also hard to look past how doing something else will negatively impact your tasks. In reality, this medium requires a nuanced balance of considerations that depend on what you need to communicate, why, and to whom. Being useful on a team requires knowing when to interject with your professional expertise, but also know when it's more important to trust other professionals to do their jobs.

cettox
0 replies
7h38m

This is one of the reasons I've started building https://formpress.org. Seeing the bloat in many form builder apps/services, I decided there was a need for a lightweight and open source alternative.

How do we achieve that lightness? Currently our only sin is the inclusion of jQuery, which is just there to have a cross-browser way of interacting with the DOM; beyond that we hand-craft the required JS based on the features used in the form builder. We then ship a lightweight runtime whose whole purpose is to load the necessary JS pieces for a functional form that is lightning fast. PS: we haven't gone the last mile on optimizations, but we definitely will. Even in its current state, it is the most lightweight form builder out there.

It is open source, MIT licensed, built on a modern stack (React, Node.js, Kubernetes and Google Cloud), and we also host a freemium version.

I think there will be an ever-increasing need and market for lightweight products, as modern IT means a lot of products coming together, so each one should minimize its overhead.

Give our product a go and let us know what you think.

bsdpufferfish
0 replies
15h2m

The most interesting part of this is the comments about software shifting from a normal career to a prestige target for wealthy families, and that this demographic shift has massive consequences on technology design and services.

bradgessler
0 replies
11h54m

It also impacts users with fast devices.

When I load a bloated website on an iPhone 15 Pro Max over a Unifi AP 7 Pro access point connected to a 1.2Gb WAN, it’s still a slow bloated website.

If you build websites, do as much as you possibly can on the server.

As an industry, how can we get more people to understand this?

bloatedforever
0 replies
4h53m

I think web bloat started with pretty URLs; they provide nothing on top of traditional URLs, yet every request has to parse them unnecessarily. It's such a waste at a huge scale, especially for slow languages, plus the expensive regex processing as well.

baseline-shift
0 replies
17h49m

People can only comfortably read a maximum of 17 words per line. Best is 12. That text should be in two columns.

avodonosov
0 replies
15h9m

Some years ago I tested real-world web sites, and it turned out only about 30% of the JavaScript they load was actually invoked by the user's browser (even for sites optimized with Closure Compiler, which has some dead code elimination):

https://github.com/avodonosov/pocl

The unused JavaScript code can be removed (and loaded on demand), although I am not sure how valuable that would be for the world. It only saves network traffic, parsing time and some browser memory for compiled code. And JS traffic on the Internet is negligible compared to, say, video and images. Will the user experience be significantly better if the browser is saved from the unnecessary JS parsing? I don't know of a good way to measure that.
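
A minimal sketch of how such a measurement can be done today with Puppeteer's coverage API (assuming Puppeteer; the URL is a placeholder, and the result only reflects what runs during the initial load):

  // js-coverage.js - how much of the downloaded JS does the page actually run?
  const puppeteer = require('puppeteer');

  (async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    await page.coverage.startJSCoverage();
    await page.goto('https://example.com/', { waitUntil: 'networkidle0' });
    const entries = await page.coverage.stopJSCoverage();

    let total = 0, used = 0;
    for (const entry of entries) {
      total += entry.text.length;
      for (const range of entry.ranges) used += range.end - range.start;
    }
    console.log(`JS executed on load: ${((used / total) * 100).toFixed(1)}%`);

    await browser.close();
  })();

The Coverage tab in Chrome DevTools shows roughly the same information interactively.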

ashayh
0 replies
18h34m

This is bad from a global warming perspective.

archy_
0 replies
2h58m

Something I've observed over time, as programming has become more prestigious and more lucrative, is that people have tended to come from wealthier backgrounds and have less exposure to people with different income levels. An example we've discussed before, is at a well-known, prestigious, startup that has a very left-leaning employee base, where everyone got rich, on a discussion about the covid stimulus checks, in a slack discussion, a well meaning progressive employee said that it was pointless because people would just use their stimulus checks to buy stock. This person had, apparently, never talked to any middle-class (let alone poor) person about where their money goes or looked at the data on who owns equity. And that's just looking at American wealth. When we look at world-wide wealth, the general level of understanding is much lower. People seem to really underestimate the dynamic range in wealth and income across the world.

Perhaps the falling salaries for programming in the US could be a good thing in that regard. So many people get into this career because they want to make it big, which seems to drive down the quality of the talent pool.

aragonite
0 replies
16h29m

Another example is Wordpress (old) vs. newer, trendier, blogging platforms like Medium and Substack. Wordpress (old) is 17.5x / 10x faster (LCP* / CPU) than Medium and 5x / 7x faster (LCP* / CPU) faster than Substack on our M3 Max ...

It's a persistent complaint among readers of SlateStarCodex (a blog which made a high-profile move to Substack from an old WordPress site). Substack attributes the sluggishness to the owner's special request to show all comments by default, but the old WordPress blog loads all comments by default and was fine even on older devices.

https://www.reddit.com/r/slatestarcodex/comments/16xsr8w/sub...

https://www.reddit.com/r/slatestarcodex/comments/1b9p55g/any...

anticensor
0 replies
9h16m

This is a manifestation of Wirth's law, again.

anthk
0 replies
19h57m

A simple text site such as Reddit, or some Digg clones, is nearly unusable on an Intel Atom with a JS-based client.

Zpalmtree
0 replies
1h18m

I don't care. Upgrade your device. You don't make me money.

Razengan
0 replies
12h34m

Browsers should only display documents, not apps.

That's what operating systems are for.

Just give native apps what made the web popular in the first place:

• Ability to instantly launch any app just by typing its "name"

• No need to download or install anything

• Ability to revisit any part of an app just by copy/pasting some text and sharing it with anyone.

All that is what appears and matters to users in the end.

--

But I suppose people who would disagree with this really want:

• The ability to snoop and track people across apps (via shit like third-party cookies etc)

INGSOCIALITE
0 replies
19h52m

web bloat also impacts my sanity

Hackbraten
0 replies
17h42m

Mind that not only low-end or old phones have slow CPUs.

Both the $999 Librem 5 and the $1999 Liberty Phone (latest models) have an i.MX8M, which means they have similar processing power as the $50 phones the article is talking about.

I tried to log into Pastebin today. The Cloudflare check took several minutes.

FrojoS
0 replies
10h40m

Has someone attempted to do the math on how much CO2 is emitted because of needless bloat and ads?

FridgeSeal
0 replies
12h48m

As sites have optimized for LCP, it's not uncommon to have a large paint (update) that's completely useless to the user, with the actual content of the page appearing well after the LCP

Aahh yes, the “I’ve loaded in my 38 different loading-shimmer-boxes, now kindly wait another 30 seconds while each of them loads more”

Can we go back to "your page is loaded when everything finishes loading" and not these unhelpful micro-metrics web devs are using to lie to themselves and users about the performance of their site?

Aerbil313
0 replies
26m

I believe HTML, CSS and JS needs an overhaul. There’ll be a point where maintaining backwards compatibility will result in more harm than benefit. Make a new opt-in version of the three which are brutally simplified. Deprecate the old HTML/CSS/JS, to be EOL’d in 2100.