I feel like web-only is a positive way forward. If only it were possible to prove nothing goes back to the server, I think it would gain a lot more trust.
Though companies who want to see your data might not be so keen.
On my phone, but will try it out when I get home.
Naive answer: isn't the browser network tab enough?
It only shows past behaviour, so it's not really a proof that nothing could be sent.
In which browser? It's a live view in Chrome and Firefox.
"Live view" means a log of the past, not potential futures.
I don't know what you're on about, but it does show the past and any new network activity.
Yes. Exactly. It omits future network calls (things that have not yet happened by the moment you look), which is what the person you were replying to was talking about.
It does not omit future network calls. You can, in fact, use the network tab to monitor a page's ongoing network activity as originally suggested.
You won't be able to see that activity until after it has happened. An empty network monitor list isn't a guarantee of future behavior. Or current behavior.
Okay. Then solve P = NP. Until then, we monitor and reverse engineer to verify as best we can.
It doesn't need to be that hard. A reasonable solution is to quarantine the tab/app: proactively revoke its network access after it's loaded.
They're saying you won't know until after a request is already sent, and seem to be implying that this somehow stops someone from learning whether data is sent to the server or not. I think they've forgotten the original point of this thread, because their replies are missing the point.
I'm not sure. The impression I get is they're not aware that the tab isn't just a log of stuff before the page "finishes" loading, or not aware that the notion of a static page that can't make network requests at any time without a full reload went out with AJAX in the 2000s.
Why do you think web only is positive?
15 years from now, will this site still be up?
Will you be able to open your projects from today, then?
I think web only is a really compelling way to get someone to try a product, but I’d much rather install a tool like this. Unless you could host the site yourself, of course.
Honestly there are two situations:
- if the tool is updated continuously for 15 years it’ll still be up
- if it’s not updated, it will be technically irrelevant anyway and you’ll have switched to another tool by then
Future support is overrated for tools, just use one now and worry about tomorrow later.
Winamp would like to have a word with you :) Granted I'm using the latest release from 2018, but I still sometimes load up v2.x released in 1998 just to show people that 25+ year old software still works just fine... even the AVS visualizer and Shoutcast internet radio features work, which is to me just insane.
I also often use older versions of software that has long since been updated, such as Audacity, Ableton, Adobe Premiere, etc., for various reasons: not wanting to spend money, and avoiding spyware, ads (see: Windows 11), and other bloat, which IMO often outweighs the benefit of new features. There are a lot of other small utilities I still use that are 10+ years old because they still work fine and I know how to use them blindfolded. There are also tools that haven't received updates in many years but still work great; why would I bother looking for something new that might spy on me and not offer the same functionality?
We’re not talking about Winamp. A more accurate comparison is the Adobe CS suite: no one uses CS5 anymore.
Strong disagree on the second point. I don't want the choice of whether I switch tools to be based on an arbitrary factor such as a website suddenly not existing anymore. I might be very heavily invested in that tool in terms of project files, learning curve, workflow integration, etc. I also might be in the middle of something very important, with a deadline, at the moment I find I can't access the site.
Other points that weren't raised - I want to be free to work in situations where I have poor or no internet e.g. when traveling.
Tying tools down to whether or not a website is available and you have reliable internet access is a huge step backwards in my opinion.
I see what you mean, but I'm not using anything from 15 years ago today apart from Linux.
You're using a 15 year old version of Linux?
Web only does bring with it the notion of web scale rendering. Cloud render farms are already a thing, so it would be a compelling feature. Lots of video acquisition is already cloud based, so the footage is already there. There are still plenty of times where the render stage takes enough time that rendering on my local single machine is not pleasant.
I've long wished for something like OpenBSD's pledge to be available in browsers, ideally both through meta tags and through JS APIs. Once a pledge is made, the resource is unavailable to the page until it's closed, like:
- I pledge to only make network connections to X, Y and Z
- I pledge to only make GET requests to http://example.com/foo/*
- I pledge not to use canvas, iFrames or storage APIs
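To make it concrete, here's a rough TypeScript sketch of what a JS version could look like. To be clear, navigator.pledge is entirely made up; nothing like it exists in any browser today, and the shape of the options is just my guess:

    // Hypothetical: no browser ships anything like navigator.pledge today.
    // Rough shape of the three pledges above, expressed as one call.
    interface Pledge {
      // Origins the page may still connect to; everything else gets blocked.
      connect?: string[];
      // GET-only URL patterns that stay reachable, with no request bodies.
      get?: string[];
      // Features the page promises never to use from now on.
      deny?: Array<"canvas" | "iframe" | "storage">;
    }

    declare global {
      interface Navigator {
        // Irrevocable until the page is closed, like OpenBSD's pledge(2).
        pledge(p: Pledge): Promise<void>;
      }
    }

    // Example: lock the page down right after it has finished loading.
    await navigator.pledge({
      get: ["http://example.com/foo/*"],
      deny: ["canvas", "iframe", "storage"],
    });

    export {}; // keep this file a module so `declare global` is allowed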
This info wouldn't be immediately useful to most users, but it could massively help experienced users with trusting local utilities.
Doesn't solve any trust issue, since data can be sent as part of the URL, and the backend response can change at will.
That was just an example - it fully solves trust issues if the pledge is "only make GET requests to exactly example.com/favicon.ico or example.com/style.css". This way you can't send any data (as there's no body, and encoded data wouldn't match the URLs).
What you are describing is essentially an extended version of various security HTTP headers.
* first requirement can already be done using Content-Security-Policy header
* haven't found a suitable header for the second requirement
* third requirement can be done with Permissions-Policy header
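For illustration, something along these lines (the origins are placeholders, and this only roughly covers the first point plus the iframe part of the third, not the exact-URL pledge):

    Content-Security-Policy: default-src 'self'; connect-src https://x.example https://y.example; frame-src 'none'
    Permissions-Policy: camera=(), microphone=(), geolocation=()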
That's partially true, but it would be important for this to work both without a server and at runtime.
Not relying on a server makes this functionality available for downloaded sites. I'm a big fan of offering single file builds for web utilities, and the pledge should be part of that build instead of something the user supplies.
Having this as a runtime API would enable easier integration - say I'm developing a video editor that needs some WASM blobs. It might be a lot easier to load the blobs and pledge no further network access than having the URLs known on the server-side.
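Something like this, reusing the made-up navigator.pledge from the sketch upthread (the CDN URLs are placeholders):

    // Relies on the hypothetical navigator.pledge declared upthread; URLs are placeholders.
    const wasmUrls = [
      "https://cdn.example.com/editor/decode.wasm",
      "https://cdn.example.com/editor/encode.wasm",
    ];

    // Fetch and instantiate everything the editor needs up front...
    const modules = await Promise.all(
      wasmUrls.map((url) => WebAssembly.instantiateStreaming(fetch(url)))
    );

    // ...then promise the browser the page will make no further connections.
    // Any later fetch/XHR/WebSocket attempt would simply fail.
    await navigator.pledge({ connect: [] });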
SerenityOS also makes use of pledge. The episode in which Andreas kicks it off was delightful to watch.
What about just turning off your network?
That means you have to keep the network off for as long as you’re using the app, which is inconvenient.
In Chrome you can turn off the network per tab.
In Firefox I use this extension for the same purpose
https://addons.mozilla.org/en-US/firefox/addon/work-offline-...
This only works on Google's OS, Chrome.
What about programs that run on your computer so you don't even need the Internet for them?
That's an interesting question, but I think it's also equally difficult to prove for non-browser software.
On Macs I feel like Little Snitch or LuLu are the norm. I wonder why, given that Windows and Windows apps are historically more inclined to install stuff you don't want. Anyway, both are outgoing network monitors/firewalls, and one of them is among the first things I install on a new system.
Pretty sure they are very far from the norm, as a % of Mac users.
Not making any accusations, but I used Little Snitch extensively at a shop that didn't pay licenses for either Final Cut or the Adobe Suite.
Why do you say it's equally difficult? By limiting network operations of a local application you can indeed prove this, as long as you trust the facilities provided by the operating system.
With web applications, doing the same is more difficult, because some requests need to be allowed through, and any of the allowed requests could be smuggling data.
Some crypto wallets, facing similar concerns but with, I suppose, higher stakes, will have the user download a local copy of the software, load it offline in a private tab, close it when done, and only then go back online again.
A bit fiddly for sure - but seems comprehensive enough.
Needs browser support really. Probably harder than we imagine.
There should be an electron local app for running offline web apps. Shouldn't be too hard to build.
Desktop browsers have surprisingly reasonable support for offline PWAs and integrate them as desktop shortcuts etc. Better than Android and iOS, in my experience, although neither is a hard bar to clear.
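For what it's worth, the offline part is mostly a small service worker that pre-caches the app, plus a manifest. A minimal sketch, with placeholder asset names:

    // sw.ts -- minimal offline-first service worker; asset names are placeholders.
    declare const self: ServiceWorkerGlobalScope;

    const CACHE = "app-v1";
    const ASSETS = ["/", "/index.html", "/app.js", "/style.css"];

    // Pre-cache the whole app when the service worker is installed.
    self.addEventListener("install", (event) => {
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
    });

    // Answer requests from the cache first, so the app keeps working with no network at all.
    self.addEventListener("fetch", (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    });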
In what way? Good luck using this thing if the network is down, or if the website is down, or if DNS is down, or if the domain expires, or if the author disappears. A program you download and run is yours forever, a website can disappear tomorrow, or get acquired and get enshittified. It happens every single time, and then there's a thousand-comment thread here, until the next web app that everyone loves, and the cycle repeats itself. Do we never learn? Am I taking crazy pills?
Break the cycle.