A lot of this resonates. I'm not in Antarctica, I'm in Beijing, but I still struggle with the internet. Being behind the Great Firewall means using creative approaches. VPNs only sometimes work, and each leaves a signature that the firewall's heuristics and ML can eventually catch on to. Even state-mandated ones are 'gently' limited at times of political sensitivity. It all means that even when I do get a connection, it's not stable, and it's painful to sink precious packets into pointless web-app-react-crap roundtrips.
I feel like some devs need to time-travel back to 2005 or something and develop for that era in order to learn how to build things nimbly. Failing time travel, if people could just learn to open dev tools and use the network throttling option: set it to 3G and see whether their webapp is still usable. Please!
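This can even be wired into a test rather than clicked by hand. A rough sketch using Puppeteer and the DevTools Protocol to load a page under simulated slow 3G (the throughput/latency numbers are rough approximations and the URL is a placeholder):

    // throttle-check.js: load a page under emulated "slow 3G" and report timing.
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      // Talk to the DevTools Protocol directly to emulate a slow network.
      const client = await page.target().createCDPSession();
      await client.send('Network.enable');
      await client.send('Network.emulateNetworkConditions', {
        offline: false,
        latency: 400,                  // ms of added round-trip latency
        downloadThroughput: 50 * 1024, // ~400 kbit/s down
        uploadThroughput: 20 * 1024,   // ~160 kbit/s up
      });

      const start = Date.now();
      await page.goto('https://example.com', { waitUntil: 'load', timeout: 120000 });
      console.log(`Loaded in ${(Date.now() - start) / 1000}s`);

      await browser.close();
    })();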
We design for slow internet; React is actually one of the better options for it, with SSR, code splitting and HTTP/2 push, mixed in with more offline-friendly clients like Tauri. You can also deploy very near people if you work "on the edge".
I'm not necessarily disagreeing with your overall point, but modern JS is actually rather good at dealing with slow internet for server-client "applications". It's not necessarily easy to do, and there are almost no online resources you can base your projects on if you're a Google/GPT programmer. Part of this is because of the ocean of terrible JS resources online, but a big part of it is also that the organisations which work like this aren't sharing. We, for example, have zero public resources describing the way we work, because why would we hand that info to our competition?
By far the lightest-weight JS framework isn't React; it's no JavaScript at all.
I regularly talk to developers who aren't even aware that this is an option.
If you're behind an overloaded geosynchronous satellite then no JS at all just moves the pain around. At least once it's loaded a JS-heavy app will respond to most mouse clicks and scrolls quickly. If there's no JS then every single click will go back to the server and reload the entire page, even if all that's needed is to open a small popup or reload a single word of text.
However, getting 6.4KB of data (just tested on my blog) or 60KB of data (a git.sr.ht repository with a README.md and a PNG) is way better than getting 20MB of frameworks in the first place.
False dichotomy, with what is likely extreme hyperbole on the JS side. Are there actual sites that ship 20 MB, or even 5 MB or more, of frameworks? One can fit a lot of useful functionality in 100 KB or less of JS, especially minified and gzipped.
Well, I'm working right now so let me check our daily "productivity" sites (with an adblocker installed):
I sometimes wish I could spare the time just to tear into something like that Slack number and figure out what it is all doing in there.
JavaScript should generally be fairly efficient in terms of bytes per unit of capability. Run a basic minifier over it and compress it, and you should be looking at something approaching optimal for what is being done. A variable reference, for instance, can amortize down to less than one byte, unlike compiled code, where it can easily end up as 8 bytes (64 bits). Imagine how much assembler "a.b=c.d(e)" can compile into, when the expression itself likely takes less compressed space than a single 64-bit integer in a compiled language.
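As a rough illustration (not a benchmark), this is easy to measure with terser and zlib; the sample function below is made up:

    // measure.js: how many bytes does a bit of logic cost after minify + gzip?
    const { minify } = require('terser');
    const zlib = require('zlib');

    const source = `
      function applyDiscount(cart, rate) {
        const total = cart.items.reduce((sum, item) => sum + item.price * item.qty, 0);
        return total * (1 - rate);
      }
    `;

    (async () => {
      const { code } = await minify(source, { compress: true, mangle: true });
      const gzipped = zlib.gzipSync(code);
      console.log(`original: ${source.length} B, minified: ${code.length} B, gzipped: ${gzipped.length} B`);
    })();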
Yet it still seems like we need 3 megabytes of minified, compressed Javascript on the modern web just to clear our throats. It's kind of bizarre, really.
That's a good question. I just launched Slack and took a look. Basically: it's doing everything. There's no specialization whatsoever. It's like a desktop app you redownload on every "boot".
You talk about minification. The JS isn't minified much. Variable names are single letters, but property names and more aren't renamed, and formatting isn't removed. I guess the minifier can't touch property names because it doesn't know what might get turned into JSON or not.
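A contrived example of the hazard, assuming the typical case where object keys flow straight into JSON:

    // If the minifier renamed `displayName` to `a`, the server expecting
    // `displayName` would silently break, so property names are usually left
    // alone unless you opt in with an explicit allow/deny list.
    const profile = { displayName: 'Ada', locale: 'en-GB' };
    fetch('/api/profile', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(profile), // keys end up on the wire verbatim
    });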
There are plenty of logging and span-tracing strings as well. Lots of code like this:
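(Paraphrasing from memory; the identifiers below are invented for illustration, not Slack's actual code.)

    // Long, human-readable trace and log strings shipped to every client:
    span.addEvent('channel_sidebar:unread_banner_rendered');
    logger.warn('Failed to hydrate drafts from IndexedDB, falling back to in-memory store');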
The JS is completely generic. In many places there are if statements that branch on all the languages Slack has been translated into. I see checks in there for whether localStorage exists, even though the browser told the server what version it is when the page was loaded. There are many checks and branches for experiments, for whether the company is in trial mode, whether the code is executing in Electron, whether this is GovSlack. These combinations could have been compiled server-side into a more minimal set of modules, but perhaps that's too hard to do with their JS setup.

Everything appears to be compiled through a coroutines framework, which adds some bloat. I'm not sure why they aren't using native async/await, but maybe it's related to not specializing on the execution environment.
Shooting from the hip, the learnings I'd take from this are:
1. There's a ton of low hanging fruit. A language toolchain that was more static and had more insight into what was being done where could minify much more aggressively.
2. Frameworks that could compile and optimize with way more server-side constants would strip away a lot of stuff.
3. Encoding logs/span labels as message numbers plus interpolated strings would help a lot. Of course the code has to be debuggable, but hopefully not on every single user's computer.
4. Demand loading of features could surely be more aggressive.
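On point 4, the platform primitive for demand loading is dynamic import(); a minimal sketch (the element ID and module name are hypothetical):

    // Only fetch the heavy charting module when the user actually opens a
    // chart, rather than shipping it in the main bundle.
    document.querySelector('#open-chart')?.addEventListener('click', async () => {
      const { renderChart } = await import('./charting.js');
      renderChart(document.querySelector('#chart-root'));
    });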
But Slack is very popular and successful without all that, so they're probably right not to over-focus on this stuff. For corporate users on corporate networks, does anyone really care? Their competition is Teams, after all.
JS developers have had this idea of "1 function = 1 library" for a really long time, along with "NEVER REIMPLEMENT ANYTHING". So they will go and import a library instead of writing a 5-line function, because in their minds that's somehow more maintainable.
Then of course every library is allowed to pin its own dependencies. So you can have 15 different versions of the same thing, so they can change API at will.
I poked around some electron applications.
I've found .h files from OpenSSL, executables for other operating systems, and megabytes of large image files for some example webpage sitting in the documentation of one project. They literally have no idea what's in there at all.
Holy crap, too much bloat, and that's the situation even with adblock.
This is mind-blowing to me. I expect the majority of any application to be the assets and content. And megabytes of CSS is something I can't imagine, not least for what it implies about the DOM structure of the site. Just, what!? Wow.
I agree with adding very little JavaScript, say the ~1 kB of https://instant.page/, to make pages snappier.
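That script's trick is essentially "start fetching a page the moment the user hovers over its link". A stripped-down sketch of the same idea (not instant.page's actual code):

    // Prefetch a same-origin link as soon as the pointer hovers it, so by the
    // time the click lands the HTML is (hopefully) already in the cache.
    document.addEventListener('mouseover', (e) => {
      const a = e.target.closest('a[href^="/"]');
      if (!a || a.dataset.prefetched) return;
      const link = document.createElement('link');
      link.rel = 'prefetch';
      link.href = a.href;
      document.head.appendChild(link);
      a.dataset.prefetched = 'true';
    });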
I just tried some websites:
I'm getting almost 2MB (5MB uncompressed) just for a google search.
Well, in TFA, if you re-read the section labeled "Detailed, Real-world Example": yes, that is exactly what the author was encountering. So no hyperbole at all, actually.
I also wonder how much saving would still be possible with a more efficient binary representation of HTML/CSS/JS. Text is low-tech and all, but it still hurts to waste so many octets on such a relatively small set of possible symbols.
This applies to all formal languages, actually. The space of possible 20 MB payloads, 2^(8x20x10^6) ~= 2x10^48164799, is such a ridiculously large space...
The generalisation of this concept is what I like to call the "kilobyte" rule.
A typical web page of text on a screen is about a kilobyte. Sure, you can pack more in with fine print, and obviously additional data is required to represent the styling, but the actual text is about 1 kB.
If you've sent 20 MB, then that is 20,000x more data than what was displayed on the screen.
Worse still, an uncompressed 4K still image is only about 23.7 megabytes (3840 x 2160 pixels x 3 bytes per pixel ≈ 23.7 MiB). At some point you might be better off doing "server side rendering" with a GPU instead of sending more JavaScript!
I read epubs, and they're mostly HTML and CSS files, zipped. The whole book usually comes in under a MB if there aren't a lot of big pictures. Then you come across a website where, for just one article, you have to download tens of MBs. Disable JavaScript and the website is broken.
If your server renders the image as text we'll be right back down towards a kilobyte again. See https://www.brow.sh/
Some 7~10 years ago I remember seeing somewhere (maybe here on HN) a website which did exactly this: you gave it a URL, it downloaded the webpage with all its resources, rendered and screenshotted it (probably in headless Chrome or something), and compared the size of the PNG screenshot with the size of the webpage plus all its resources.
For many popular websites, the PNG screenshot of the page was indeed several times smaller than the webpage itself!
Surely you're aware of gzip encoding on the wire for HTTP, right?
Sure, but it would still be interesting to know how that fares against purpose-made compression under real-world conditions...
Shouldn’t HTTP compression reap most of the benefits of this approach for bigger pages?
Check this proposal out: https://github.com/tc39/proposal-binary-ast
Yeah, but your blog is not a full featured chat system with integrated audio and video calling, strapped on top of a document format.
There are a few architectural/policy problems in web browsers that cause this kind of expansion:
1. Browsers can update their own large binaries asynchronously (= instant from the user's perspective), but this capability has only very recently become available to web apps via obscure caching headers, and most people don't know it exists yet / frameworks don't use it.
2. Large download sizes tend to come from frameworks that are featureful and thus widely used. Browsers could allow them to be cached but don't because they're over-aggressive at shutting down theoretical privacy problems, i.e. the browser is afraid that if one site learns you used another site that uses React, that's a privacy leak. A reasonable solution would be to let HTTP responses opt in to being put in the global cache rather than a partitioned cache, that way sites could share frameworks and they'd stay hot in the cache and not have to be downloaded. But browsers compete to satisfy a very noisy minority of people obsessed with "privacy" in the abstract, and don't want to do anything that could kick up a fuss. So every site gets a partitioned cache and things are slow.
3. Browsers often ignore trends in web development. React-style vdom diffing could be offered by browsers themselves, where it would be faster and shipped with browser updates, but it isn't, so lots of websites ship it themselves over and over. I think the Sciter embedded browser actually does do this. CSS is a very inefficient way to represent styling logic, which is why web devs write more compact dialects like Sass, but browsers don't adopt them.
I think at some pretty foundational level the way this stuff works architecturally is wrong. The web needs a much more modular approach and most JS libraries should be handled more like libraries are in desktop apps. The browser is basically an OS already anyway.
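If the "obscure caching headers" in point 1 are what I think they are (stale-while-revalidate and friends), opting a static bundle into background updates looks roughly like this in an Express-style server (a sketch; paths and numbers are hypothetical):

    // Serve the cached bundle instantly and revalidate/update it in the
    // background, which is roughly the "update asynchronously" behaviour
    // browsers get for their own binaries.
    const express = require('express');
    const app = express();

    app.use('/static', (req, res, next) => {
      res.set('Cache-Control', 'public, max-age=600, stale-while-revalidate=604800');
      next();
    });
    app.use('/static', express.static('dist')); // hypothetical build output dir

    app.listen(3000);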
I don't know exactly which features you are referring to, but you may have noticed that CSS has adopted native nesting, very similarly to Sass, but few sites actually use it. Functions and mixins are similar compactness/convenience topics being worked on by the CSSWG.
(Disclosure: I work on style in a browser team)
I hadn't noticed, and I guess that's part of the problem. Sorry, this post turned into a bit of a rant, but I've written it now.
When it was decided that HTML shouldn't be versioned anymore it became impossible for anyone who isn't a full time and very conscientious web dev to keep up. Versions are a signal, they say "pay attention please, here is a nice blog post telling you the most important things you need to know". If once a year there was a new version of HTML I could take the time to spend thirty minutes reading what's new and feel like I'm at least aware of what I should learn next. But I'm not a full time web dev, the web platform changes constantly, sometimes changes appear and then get rolled back, and everyone has long since plastered over the core with transpilers and other layers anyway. Additionally there doesn't seem to be any concept of deprecating stuff, so it all just piles up like a mound of high school homework that never shrinks.
It's one of the reasons I've come to really dislike CSS and HTML in general (no offense to your work, it's not the browser implementations that are painful). Every time I try to work out how to get a particular effect it turns out that there's now five different alternatives, and because HTML isn't versioned and web pages / search results aren't strongly dated, it can be tough to even figure out what the modern way to do it is at all. Dev tools just make you even more confused because you start typing what you think you remember and now discover there are a dozen properties with very similar names, none of which seem to have any effect. Mistakes don't yield errors, it just silently does either nothing or the wrong thing. Everything turns into trial-and-error, plus fixing mobile always seems to break desktop or vice-versa for reasons that are hard to understand.
Oh and then there's magic like Tailwind. Gah.
I've been writing HTML since before CSS existed, but feel like CSS has become basically non-discoverable by this point. It's understandable why neither Jetpack Compose nor SwiftUI decided to adopt it, even whilst being heavily inspired by React. The CSS dialect in JavaFX I find much easier to understand than web CSS, partly because it's smaller and partly because it doesn't try to handle layout. The way it interacts with components is also more logical.
You may be interested in the Baseline initiative, then. (https://web.dev/baseline/2024)
That does look useful, thanks!
Yes. It's inexcusable that text and images and video pulls in megabytes of dependencies from dozens of domains. It's wasteful on every front: network, battery, and it's also SLOW.
The crappy part is that even themes for static site generators like MkDocs link resources from Cloudflare rather than including them in the theme.
For typedload I've had to use wget+sed to get rid of that crap after recompiling the website.
https://codeberg.org/ltworf/typedload/src/branch/master/Make...
This makes perfect sense in theory and yet it's the opposite of my experience in practice. I don't know how, but SPA websites are pretty much always much more laggy than just plain HTML, even if there are a lot of page loads.
It often is that way, but it's not for technical reasons. They're just poorly written. A lot of apps are written by inexperienced teams under time pressure and that's what you're seeing. Such teams are unlikely to choose plain server-side rendering because it's not the trendy thing to do. But SPAs absolutely can be done well. For simple apps (HN is a good example) you won't get too much benefit, but for more highly interactive apps it's a much better experience than going via the server every time (setting filters on a shopping website would be a good example).
Yep. In an SPA with good architecture, you only load the page once; that initial load is obviously weighed down by the libraries, but it's largely as heavy or light as you make it. Everything else should be super-minimal API calls. It's especially useful in data-focused apps that require a lot of small interactions. Imagine implementing something like spreadsheet functionality using forms and full requests and no JavaScript, as others here suggest all sites should be built: productivity would be terrible, not only because you'd need to reload the page for trivial actions that should just trade a bit of JSON back and forth, but also because users would throw their devices out the window before they got any work done. You can also queue and batch changes in a setup like that, so the requests are not only comparatively tiny, you need fewer of them. That said, most sites definitely should not be SPAs. Use the right tool for the job.
One thing which surprised me at a recent job was that even what I consider to be a large bundle size (2 MB) didn't have much of an effect on page load time. I was going to look into bundle splitting (because the bundle included things like a charting library that was only used in a small subsection of the app), but in the end I didn't bother, because I got page loads fast (~600 ms) without it.
What did make a huge difference was cutting down the number of HTTP requests that the app made on load (and making sure that they weren't serialised). Our app was originally doing auth by communicating with Firebase Auth directly from the client, and that was terrible for performance because that request was quite slow (most of a second!) and blocked everything else. I created an all-in-one auth endpoint that would check the user's auth and send back initial user and app configuration data in one ~50 ms request, and suddenly the app was fast.
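The shape of that fix, roughly: collapse the dependent chain into a single round-trip and fire whatever remains in parallel rather than in series. A sketch with hypothetical endpoint names:

    // Before: auth -> user -> config, three round-trips in series.
    // After: one bootstrap call, plus any independent fetch in parallel.
    async function bootstrap() {
      const [session, flags] = await Promise.all([
        fetch('/api/bootstrap', { credentials: 'include' }).then(r => r.json()),
        fetch('/api/flags').then(r => r.json()), // independent of auth
      ]);
      return { user: session.user, config: session.config, flags };
    }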
My experience agrees with this comment. I'm not sure why web browsers frequently seem to get hung up on only some HTTP requests, unrelated to the actual network conditions, i.e. the HTTP request times out or sits in a blocked state without ever reaching the network layer. (Not sure if I should be pointing the finger here at the browser or the underlying OS.) When testing slow or stalled loading issues, the browser itself is frequently one of the culprits. That said, the issue I'm referring to only further reinforces the article and the sentiment in this HN thread: cut down on the number of requests and the bloat, and this issue too can be avoided.
If the request itself hasn't reached the network layer but is having a network-y feeling hang, I'd look into DNS. It's network-dependent but handled by the system, so it wouldn't show up in your web app requests. I'm sure there's a way to profile this directly, but unless I had to do it all the time I'd probably just fire up Wireshark.
In many cases, like satellite Internet access or spotty mobile service, for sure. But if you have low bandwidth and fast response times, that 2 MB is murder and the big pile o' requests is NBD. If you have slow response times but good throughput, the 2 MB is NBD but the requests are murder.
An extreme and outdated example, but back when cable modems first became available, online FPS players were astonished to see how much better the ping times were for many dial up players. If you were downloading a floppy disk of information, the cable modem user would obviously blow them away, but their round trip time sucked!
Like if you're on a totally reliable but low throughput LTE connection, the requests are NBD but the download is terrible. If you're on spotty 5g service, it's probably the opposite. If you're on, like, a heavily deprioritized MVNO with a slower device, they both super suck.
It's not like optimization is free though, which is why it's important to have a solid UX research phase to get data on who is going to use it, and what their use case is.
Having written a fair amount of SPAs and similar, I can confirm that it is actually possible to just write some JavaScript that does fairly complicated jobs without the whole thing ballooning into MB territory. I'd say I could write a fairly feature-rich chat app in, say, 500 kB of JS; minified and compressed, that would be more like 50 kB on the wire.
How my "colleagues" manage to get to 20 MB is a bit of mystery.
More often than not (and wittingly or not) it is effectively by using javascript to build a browser-inside-the-browser, Russian doll style, for the purposes of tracking users' behavior and undermining privacy.
Modern "javascript frameworks" do this all by default with just a few clicks.
Yeah, right. GitHub migrated from serving static pages to rendering everything dynamically, and it's basically unusable nowadays. Unbelievably long load times, frustratingly unresponsive, and that's on my top-spec M1 MacBook Pro connected to a router with a fiber connection.
Let's not kid ourselves: no matter how many fancy features, how much splitting and optimizing you do, JS webapps may be an upgrade for developers, but they're a huge downgrade for users in every respect.
Every time I click a link in GitHub and watch their _stupid_ SPA's "my internal loading bar is better than yours" progress bar, I despair.
It’s never faster than simply reloading the page. I don’t know what they were thinking, but they shouldn’t have.
I have an instance of Forgejo and it's so snappy. Granted, I'm the only user, but the server has only 2 GB of RAM and 2 vcores, with other services running on it too.
On the other side, Gitlab doesn’t work with JS disabled.
Here's something that doesn't hold true with JS: in a JS app, the first link you click is the one that navigates you. In the browser, clicking a second link cancels the first navigation and takes you to the second one.
GitHub annoys the fuck out of me with this.
Getting at least n kB of HTML with content in it that you can look at in the interim is better than getting the same amount of framework code.
SPAs also have a terrible habit of not behaving well after being left alone for a while. Nothing like coming back to a blank page and having it try to redownload the world to show you 3 kB of text, because we stopped running the VM a week ago.
In my experience, page weight isn't usually the biggest issue. On unreliable connections you'll often get decent bandwidth when you can get through. It's applications that expect to be able to make multiple HTTP requests in sequence, and don't deal well with some succeeding and some failing (or with network failures in general), that are the most problematic.
If I can retry a failed network request, that's fine. If I have to restart the entire flow when I get a failure, it's unusable.
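Concretely, a small retrying wrapper around fetch goes a long way on flaky links. A sketch (retry counts and delays are arbitrary, and this assumes the request is idempotent):

    // Retry a request a few times with exponential backoff instead of failing
    // the whole flow on the first dropped packet.
    async function fetchWithRetry(url, options = {}, retries = 4) {
      for (let attempt = 0; ; attempt++) {
        try {
          const res = await fetch(url, options);
          if (res.ok || attempt >= retries) return res;
        } catch (err) {
          if (attempt >= retries) throw err;
        }
        await new Promise(r => setTimeout(r, 500 * 2 ** attempt)); // 0.5s, 1s, 2s, ...
      }
    }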
No JS can actually increase roundtrips in some cases, and that's a problem if you're latency-bound and not necessarily speed-bound.
Imagine a Reddit or HN style UI with upvote and downvote buttons on each comment. If you have no JS, you have to reload the page every time one of the buttons is clicked. This takes a lot of time and a lot of packets.
If you have an offline-first SPA, you can queue the upvotes up and send them to the server when possible, with no impact on the UI. If you do this well, you can even make them survive prolonged internet dropouts (think being on a subway). Just save all incomplete voting actions to local storage, and then try re-submitting them when you get internet access.
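A minimal sketch of that queue, assuming localStorage and a hypothetical /api/vote endpoint:

    // Queue votes locally and flush them whenever connectivity comes back.
    function queueVote(commentId, direction) {
      const queue = JSON.parse(localStorage.getItem('voteQueue') || '[]');
      queue.push({ commentId, direction, at: Date.now() });
      localStorage.setItem('voteQueue', JSON.stringify(queue));
      flushVotes();
    }

    async function flushVotes() {
      const queue = JSON.parse(localStorage.getItem('voteQueue') || '[]');
      const remaining = [];
      for (const vote of queue) {
        try {
          await fetch('/api/vote', { method: 'POST', body: JSON.stringify(vote) });
        } catch {
          remaining.push(vote); // still offline, keep it for next time
        }
      }
      localStorage.setItem('voteQueue', JSON.stringify(remaining));
    }

    window.addEventListener('online', flushVotes);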
It's not always the application itself per se. It's the various and numerous marketing, analytics, or (sometimes) ad-serving scripts. These third-party vendors aren't often performance-minded. They could be. They should be.
And the insistence on pushing everything into JS instead of just serving the content. So you've got to wait for the skeleton to download, then the JS, which'll take its sweet time, just to then (usually blindly) make half a dozen _more_ requests back out to grab JSON, which it'll then convert into HTML and eventually show you. Eventually.
Well, grabbing JSON isn't that bad.
I made a CLI for ultimateguitar (https://packages.debian.org/sid/ultimateultimateguitar) that works by grabbing the json :D
It's not so much good vs. not-so-bad vs. bad. It's more necessary vs. unnecessary. There's also "just because you can, doesn't mean you should."
Yup. There's definitely too much unnecessary complexity in tech and too much over-design in presentation. Applications, I understand. Interactions and experience can get complicated and nuanced. But serving plain ol' content? To a small screen? Why has that been made into rocket science?
HTTP/2 push is super dead: https://evertpot.com/http-2-push-is-dead/
Sounds like what would benefit you is a HTMX approach to the web.
What about plain HTML & CSS for all the websites where this approach is sufficient? Then apply HTMX or any other approach for the few websites that are and need to be dynamic.
That is exactly what htmx is and does. Everything is rendered server-side, and the sections of the page that need to be dynamic and respond to clicks to fetch more data get a few added attributes.
I see two differences: (1) the software stack on the server side and (2) I guess there is JS to be sent to the client side for HTMX support(?). Both those things make a difference.
HTMX is about 10 kB compressed and very rarely changes, which means it can stay in your cache for a very long time.
I'm an embedded developer, so I don't know much about web stuff, but sometimes I create dashboards to monitor services just for our team; thanks for introducing me to htmx. I do think HTML+CSS should be used for anything that is a document or that stays static for longer than a typical view lasts. arXiv is leaning towards HTML+CSS over LaTeX in acknowledgement that paper is no longer how "papers" are read. And on the other end, eBay works really well with no JS right up until you get to an item's page, where it breaks. If eBay can work without JS, almost anything that isn't monitoring and visualizing constantly changing data (the last few minutes of a bid, or telemetry from an embedded sensor) can work without JS. I don't understand how amazon.com has gotten so slow and clunky, for instance.
I have been using WASM and WebGPU for visualization, partly to offload any burden from the embedded device being monitored, but that could always be a third machine. htmx says it supports WebSockets; is there a good way to have it consume a stream and plot the data as telemetry arrives, or is that the point where a new tool is needed?
It sounds like GP would benefit from satellite internet bypassing the firewall, but I don't know how hard the Chinese government works to crack down on that loophole.
I lived in Shoreditch for 7 years and most of my flats had almost 3G internet speeds. The last one had windows that incidentally acted like a faraday cage.
I always test my projects with throttled bandwidth, largely because (just like with a11y) following good practices results in better UX for all users, not just those with poor connectivity.
Edit: Another often missed opportunity is building SPAs as offline-first.
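The bones of offline-first are pretty small. A sketch of a cache-first service worker (file names hypothetical):

    // sw.js: pre-cache the app shell, then answer requests from the cache
    // first and fall back to the network, so a dead connection still shows
    // something. Register it from the page with:
    //   navigator.serviceWorker.register('/sw.js');
    const SHELL = ['/', '/app.css', '/app.js'];

    self.addEventListener('install', (event) => {
      event.waitUntil(caches.open('shell-v1').then((cache) => cache.addAll(SHELL)));
    });

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });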
You are going to get so many blank stares at many shops building web apps when suggesting things like this. This kind of consideration doesn't even enter into the minds of many developers in 2024. Few of the available resources in 2024 address it that well for developers coming up in the industry.
Back in the early-2000s, I recall these kinds of things being an active discussion point even with work placement students. Now that focus seems to have shifted to developer experience with less consideration on the user. Should developer experience ever weigh higher than user experience?
Developer experience is user experience. However, in a normative sense, I operate such that developer suffering is preferable to user suffering for getting any arbitrary task done.
SPAs and "engineering for slow internet" usually don't belong together. The giant bundles usually guarantee slow first paint, and the incremental rendering/loading usually guarantees a lot of network chatter that randomly breaks the page when one of the requests times out. Most web applications are fundamentally online. For these, consider what inspires more confidence when you're in a train on a hotspot: an old school HTML forms page (like HN), or a page with a lot of React grey placeholders and loading spinners scattered throughout? I guess my point is that while you can take a lot of careful time and work to make an SPA work offline-first, as a pattern it tends to encourage the bloat and flakiness that makes things bad on slow internet.
Oh, London is notorious for having... questionable internet speeds in certain areas. It's good if you live in a new-build flat, work in a recently constructed office building, or own your own home in an area Openreach has gotten to, but if you live in an apartment building or work in an office building more than 5 or so years old?
Yeah, there's a decent chance you'll be stuck with crappy internet as a result. I still remember quite a few of my employers getting frustrated that fibre internet wasn't available for the building they were renting office space in, despite them running a tech company that really needed a good internet connection.
"I feel like some devs need to time-travel back to 2005 or something and develop for that era in order to learn how to build things nimbly."
No need to invent time travel, just let them have a working retreat somewhere with only bad mobile connection for a few days.
Amen to this. And give them a mobile cell plan with 1GB of data per month.
I've seen some web sites with 250MB payloads on the home page due to ads and pre-loading videos.
I work with parolees who get free government cell phones and then burn through the 3GB/mo of data within three days. Then they can't apply for jobs, get bus times, rent a bike, top up their subway card, get directions.
Having an adblocker (Firefox mobile works with uBlock Origin) and completely deactivating the loading of images and videos can get you quite far on a limited connection.
"But all the cheap front-end talent is in thick client frameworks, telemetry indicates most revenue conversions are from users on 5G, our MVP works for 80% of our target user base, and all we need to do is make back our VC's investment plus enough to cash out on our IPO exit strategy, plus other reasons not to care" — self-identified serial entrepreneur, probably
Just put them on a train during work hours! We have really good coverage here but there's congestion and frequent random dropouts, and a lot of apps just don't plan for that at all.
I hear you on frontend-only react. But hopefully the newer React Server Components are helping? They just send HTML over the wire (right?)
The problem isn't in what is being sent over the wire - it's in the request lifecycle.
When it comes to static HTML, the browser will just slowly grind along, showing the user what it is doing. It'll incrementally render the response as it comes in. Can't download CSS or images? No big deal, you can still read text. Timeouts? Not a thing.
Even if your JavaScript framework is rendering HTML chunks on the server, it's still essentially hijacking the entire request. You'll have some button in your app which fires off a request when clicked. But it's now up to the individual developer to properly implement things like progress bars/spinners, timeouts, retries, and all the rest the browser normally handles for you.
They never get this right. Often you're stuck with an app which will give absolutely zero feedback on user action, only updating the UI when the response has been received. Request failed? Sorry, gotta F5 that app because you're now stuck in an invalid state!
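At a minimum, every hijacked request needs an explicit timeout, immediate feedback, and a visible failure state. Something like this sketch (the UI helpers are hypothetical):

    // Give the user feedback immediately, abort if the network hangs, and
    // leave the UI in a recoverable state either way.
    async function submitForm(data) {
      showSpinner(); // hypothetical UI helpers
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), 15000);
      try {
        const res = await fetch('/api/submit', {
          method: 'POST',
          body: JSON.stringify(data),
          signal: controller.signal,
        });
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        showSuccess();
      } catch (err) {
        showRetryButton(err); // never leave the user with a dead spinner
      } finally {
        clearTimeout(timer);
        hideSpinner();
      }
    }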
Yep. I’m a JS dev who gets offended when people complain about JS-sites being slower because there’s zero technical reason why interactions should be slower. I honestly suspect a large part of it is that people don’t expect clicking a button to take 300ms and so they feel like the website must be poorly programmed. Whereas if they click a link and it takes 300ms to load a new version of the page they have no ill-will towards the developer because they’re used to 300ms page loads. Both interactions take 300ms but one uses the browser’s native loading UI and the other uses the webpage’s custom loading UI, making the webpage feel slow.
This isn’t to exonerate SPAs, but I don’t think it helps to talk about it as a “JavaScript” problem because it’s really a user experience problem.
Yes, server rendering definitely helps, though I have suspicions about its compiled output still being very heavy. There are also a lot of CSS frameworks with an inline-first paradigm, meaning there's no saving for the browser in downloading a single shared stylesheet. But I'm not sure about that.
Yes, though server-side rendering is anything but a new thing in the React world. Next.js, Remix, Astro and many other frameworks and approaches exist (and have done so for at least five years) to make sure pages are small and efficient to load.
Tried multiple VPNs in China and finally rolled my own obfuscation layer for Wireshark. A quick search revealed there are multiple similar projects on GitHub, but I guess the problem is that once they get some visibility, they don't work that well anymore. I'm still getting between 1 and 10 Mbit/s (mostly depending on the time of day) and pretty much no connectivity issues.
Wireguard?
Haha yes, thanks. I used Wireshark extensively the past days to debug a weird http/2 issue so I guess that messed me up a bit ;)
I do that too when looking stuff up.
Or maybe, just get rid of the firewall. I am all for nimble tech, but enabling the Chinese government is not very high on my to-do list.
Whether you like it or not, over 15% of the world's population lives in China.
Are you a citizen of China, or move there for work/education/research?
Anyway, this is very unrelated, but I'm in the USA and have been trying to sign up for the official learning center for CAXA 3D Solid Modeling (I believe it's the same program as IronCAD, but CAXA 3D in China seems to have 1000x more educational videos and Training on the software) and I can't for the life of me figure out how to get the WeChat/SMS login system they use to work to be able to access the training videos. Is it just impossible for a USA phone number to receive direct SMS website messages from a mainland China website to establish accounts? Seems like every website uses SMS message verification instead of letting me sign up with an email.
Please understand that Chinese government wants to block "outside" web services to Chinese residents, and Chinese residents want to access those services. So if the service itself decides to deny access from China, it's actually helping the Chinese government.
Not even that, with outer space travel, we all need to build for very slow internet and long latency. Devs do need to time-travel back to 2005.
I'm sure this is not what you meant but made me lol anyways: sv techbros would sooner plan for "outer space internet" than give a shit about the billions of people with bad internet and/or a phone older than 5 years.
Tbh, developers just need to test their site with existing tools or just try leaving the office. My cellular data reception in Germany in a major city sucks in a lot of spots. I experience sites not loading or breaking every single day.
developers shouldn't be given those ultra performant machines. They can have a performant build server :D
Eh, I'm a few miles from NYC and have the misfortune of being a comcast/xfinity customer and my packetloss to my webserver is sometimes so bad it takes a full minute to load pages.
I take that time to clean a little, make a coffee, you know sometimes you gotta take a break and breathe. Life has gotten too fast and too busy and we all need a few reminders to slow down and enjoy the view. Thanks xfinity!
Chrome dev tools offer a "slow 3G" and a "fast 3G". Slow 3G?
With a fresh cache on "slow 3G", my site _works_, but has 5-8 second page loads. Would you consider that usable/sufficient, or pretty awful?
I live in a well-connected city, but my work only pays for virtual machines on another continent, so most of my projects end up "fast" but latency-bound. It's been an interesting exercise in minimizing pointless roundtrips in a technology stack that expects you to use them for everything.