I don't get why parts of the Linux community are so resistant to embracing AppImages and providing first-class integration for them. Decent desktop integration isn't that hard.
As a user I want to download the thing and run it. AppImages provide that.
As an app developer I want to create one single package that works everywhere that users can just download. AppImages provide that.
Snap and Flatpak are solving problems that don't need solving for most people. Shitty sandboxing that doesn't even work and makes my app slower? I don't want it.
Most software is best installed by the native package manager. For the few exceptions, AppImages are perfect.
AppImage is a good distribution format, but IMO it is not comparable to your system's package manager, or Flatpak for that matter. For starters, when you download an AppImage, you are just getting the binary. Documentation, desktop and service files, and update tracking are all things that are missing from a vanilla AppImage deployment that your system package manager always provides (Flatpak and Snap only handle some of those, sometimes).
The missing piece is perhaps some sort of AppImage installer which can be registered as the handler for the AppImage filetype. When run, it could read metadata (packaged as part of the AppImage?) and generate the support files. Ideally it would also maintain a database of changes to be rolled back on uninstall, and provide a PackageKit or AppStream provider to manage updates with your DE's store.
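Something like that can already be approximated by hand today, since type-2 AppImages bundle a .desktop file and a .DirIcon inside the image and the runtime can extract them. A rough sketch with placeholder file names (a real installer would also rewrite the Exec= line to point at the installed AppImage and record everything it copied):

    ./MyApp.AppImage --appimage-extract        # unpacks the contents to ./squashfs-root/
    install -Dm755 MyApp.AppImage ~/Applications/MyApp.AppImage
    # install the bundled desktop entry and icon so the DE can find the app
    desktop-file-install --dir="$HOME/.local/share/applications" squashfs-root/*.desktop
    cp squashfs-root/.DirIcon ~/.local/share/icons/MyApp.png
    update-desktop-database ~/.local/share/applications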
Now, none of that addresses dependency duplication, but that's clearly not in scope for AppImage.
How big of a problem is dependency duplication on 1TB drives?
The issue is memory. Every library bundled in an AppImage is its own private copy with its own mapping, so you have a bunch of copies of sometimes very large libraries sitting in memory, rather than one copy mmapped from disk and shared by every process that uses it.
Linux has had support for not doing duplicate pages for a long time now. I am forgetting the name of the feature but essentially this duplication is a solved problem.
That's only the case if the loaded libraries are identical. It won't work with slightly different versions of the same library (unless the differences are small and replacement-only, so the pages stay aligned between versions), and that case is very unlikely to be solvable.
The parent comment doesn't talk about different versions and I wasn't either.
This is basic paging and CoW (Copy on Write) behaviour. I agree, it's mostly a non-issue
For different minor versions and builds of libraries?
Could be big, depending on how much room you give to /. All my Linux life, I have allocated about 50GB to the root partition and it's been adequate, leaving enough room for my data (on a 512GB drive). Now I install one flatpak and I start getting low disk space warnings.
How do you manage software with large assets? A single game can take 10-100GB of storage.
My steam library is on another drive (I have multiple O(TBs) large spinning-rust drives on the desktop). The nvme is strictly the base system + apt packages + build tools + /home/.
I also shun the snap bullshit. But TBH I haven't divided my disks into more than one partition (apart from /boot and the EFI stuff) for many, many years now.
That's also a big reason why I prefer appimages.
ossia score's AppImage is 100 megabytes: https://github.com/ossia/score/releases/tag/v3.2.0
Inside, there's:
- Qt 6 (core, widgets, gui, network, qml, qtquick, serial port, websockets and a few others) and all its dependencies excluding xcb (so freetype, harfbuzz, etc., which I build with rather more recent versions than many distros provide)
- ffmpeg 6
- libllvm & libclang
- the faust compiler (https://faust.grame.fr)
- many random protocol & hardware bindings / implementations and their dependencies (sdl)
- portaudio
- ysfx
With Flatpak I'd be looking at telling my users to install a couple of GB (which is not acceptable; I was already getting comments that "60 MB are too much" when it was 60 MB a few years ago).
In addition to memory, there's the ability to patch a libz buffer overflow once and be reasonably sure you don't have any stale vulnerable copies still in use.
I believe there is a tool called appimagelauncher that does just that.
Literally this. One thing both Apple and Windows have that Linux does not: every piece of software has an easy universal package for its respective platform. For Linux, if it's even in your main package manager, it's whatever was "stable" in 2022 or whenever the last major version of Debian / Ubuntu came out, and that's all you get. It's annoying to no end. I now just download the AppImage every chance I get.
Unless something has drastically changed since I switched to Linux around 10 years ago, Windows did not at all have a "universal package". Instead, installing software meant manually downloading an installer from the vendor's website and then manually interacting with that installer through a GUI. Windows installers come in a variety of different (even custom written) formats which essentially make it impossible to automate package management in a universal way.
My biggest point being that Linux package managers carry out-of-date packages, unless you want to use Arch Linux or similar.
Stop using Debian-based distros; this is a deliberate choice on their part. Use a rolling release like Fedora.
Use Debian Testing. This is the _rolling_ release from the Debian family.
And don't be fooled by the name of the release: "Testing". The name exists due to the orthodox approach of the Debian community that only Debian Stable may be called "stable".
Testing is not really a rolling release and is not guaranteed to get security updates in a timely manner so it's certainly not for everyone.
Testing is absolutely not a rolling release. Sure, it gets new packages faster than Stable, which can make it look like a rolling release, but it gets frozen before each stable release and suddenly it's not a rolling release.
And ignore the orthodox Debianeers who say "nobody should be using testing".
Fedora isn't any more rolling than Debian. That is, both have a rolling branch (Debian Testing vs. Fedora Rawhide), but otherwise neither is a rolling release.
Then point your /etc/apt/sources.list to 'testing'.
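A minimal sketch of what that looks like (mirror and components are just the common defaults):

    # /etc/apt/sources.list
    deb http://deb.debian.org/debian testing main contrib non-free-firmware

    # then pull the whole system up to testing
    sudo apt update && sudo apt full-upgrade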
@giancarlostoro
This is the solution. Use the following web page to compare package versions from Debian Testing and Fedora 40: https://distrowatch.com/dwres.php?firstlist=debian&secondlis...
Back then Windows had already blessed Windows Installer packages (.msi), which could be installed unattended from the CLI. But, to be fair, many companies still preferred to use other installer tools, including sometimes Microsoft themselves (e.g. ClickOnce).
But there have been improvements. The winget CLI can now install and update many well-known applications, and even upgrade ones you originally downloaded some other way. There's also the MSIX package format, which is much closer to distro packages or mobile apps, can auto-update without a central repository, and supports sandboxing.
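For anyone who hasn't tried it, day-to-day usage looks roughly like this (package IDs are whatever the winget repository happens to call them):

    winget search firefox                 # look up the package ID
    winget install --id Mozilla.Firefox   # install it
    winget upgrade --all                  # update everything winget recognizes,
                                          # including apps installed some other way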
Nowadays, even if still a bit hacky, I definitely consider Windows packaging less painful than the mess that is Linux distribution. Packages in distro repositories are regularly outdated, and every other package is installed misconfigured or has been patched in unclear ways by the distro maintainers; then there's Flatpak, which makes for gigantic installs with clunky sandboxing, or AppImages.
... sorry, this sounds too good to be true.
Usually it works like this (forced to use macOS at work and Windows from time to time for special software): One installs some software from trusted websites (VSCode, VLC, Firefox, etc.) on macOS/Windows, and then when one just wants to get some work done: Update-PopUps, yay. They annoy the hell out of me, especially because there is no unified system for macOS/Windows. Break my flow, wait for download, privilege escalation, installation and starting again. Thank you very much. This happens multiple times a week especially for packages like VSCode, VLC, Office, Outlook, Firefox, KeepassX, Calibre... packages which you don't want to be outdated, ever. It is f*cking ridiculous that I have to take care of this BS in 2024.
On my Linux boxes I login and work. Updates have been silently downloaded and installed for the packages and/or flatpaks and everything is up to date, no annoying update-popups which break my flow and I know that I have the latest version of all software especially security sensitive packages.
At the end, you can pick your poison.
Having integrated/working package management with silent updates is one of my killer features of Linux/Flatpak. I want to set it up one time (automatically) and never have to think about or deal with it again.
When combined with Flatseal as an easy to use privilege granting system, it really is hard to beat.
I likewise love AppImages and wish more projects used them, but I also love Flatpak. The downside of Flatpak is the overhead it takes to learn what it does, how it works, and how to manage them. If you already know container stuff like docker/podman then it isn't too bad, but it's a non-zero cost and friction.
I think most people don't like AppImages mostly because they don't provide any sandboxing. I think that's a silly reason myself, but I'm also an old, and us olds aren't terrified of using our computers like the youngs seem to be ;-). Though, other OSes are getting sandboxing for applications and Linux needs to not get left behind, so I'm glad it's being solved.
I also think fragmentation is a (valid) reason people dislike AppImage. There's nothing wrong with AppImage specifically, but its existence harms adoption of Flatpak by making it easy for people to not use Flatpak. Personally I see them occupying different niches. I use AppImages for things like Kdenlive, Logseq, Upscayl, and UHK Agent. Those could all be Flatpaks, but developer effort matters. If devs provide an AppImage build I think we should be praising them from the rooftops for caring about Linux!
Another reason is that it clashes with immutable OSes like Fedora Silverblue or SteamOS that are heavily container-based.
What I'd love to see is a tool that takes an AppImage and automatically builds it into a Flatpak (possibly with predefined metadata). Flathub could easily be populated this way, so that it's easy for developers/distributors to package and ship, while Flatpak remains the standard.
It is an unquestionable mistake to conflate packaging/distribution and runtime sandboxing into a single solution. These are different problems that fundamentally have nothing to do with each other. There are great solutions for sandboxing that don't care how an application is installed, and solutions for packaging that don't force a specific sandboxing solution on you. This failure to separate concerns is one of the primary disadvantages of Flatpak.
I would agree with you if there weren't already RPMs, debs, etc. But also, given how Flatpak is implemented on top of containers, it's basically "sandboxed by default", and any unsandboxing involves exposing/mounting things from the host into the container. You can "unsandbox" any Flatpak you want using Flatseal (or, if you know how, flatpak directly), so I think Flatpak is actually pretty good in this regard.
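For reference, the CLI equivalent of what Flatseal does is flatpak override; a couple of examples (the app ID is a placeholder):

    # give one app read/write access to your home directory
    flatpak override --user --filesystem=home org.example.App
    # or take network access away from it
    flatpak override --user --unshare=network org.example.App
    # show what you've changed
    flatpak override --user --show org.example.App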
No, these are purely packaging solutions that do not conflate their purpose with sandboxing.
Yes, this is the problem I'm referring to.
Unpopular opinion maybe, but I think sandboxing and app packaging/distribution should be entirely separate components so the user can mix and match the two freely. Combining both into a single “solution” makes for inconsistency and trouble.
Flatpak uses bubblewrap, and you can use that separately (directly, or through e.g. bubblejail).
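As a rough illustration of driving bubblewrap directly (paths and the binary are placeholders, and a real profile needs more care than this):

    bwrap --ro-bind /usr /usr \
          --symlink usr/lib64 /lib64 --symlink usr/bin /bin \
          --proc /proc --dev /dev --tmpfs /tmp \
          --unshare-all --share-net \
          --bind "$HOME/sandboxes/some-app" "$HOME" \
          /usr/bin/some-app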
It should have been very telling to the Linux community that Microsoft, with all its mighty billions of dollars, wasn't able to force sandboxing on users with WinRT and had to backtrack and allow Win32 apps again. Hell, people moved to Linux because MS tried that.
That's not because of the sandboxing, that's because WinRT required UWP. Microsoft since introduced Win32 sandboxing, which is how the Microsoft Store, most "inbox" apps on Windows, and Game Pass games work.
But sandboxing isn't a requirement on Linux; it's an optional distribution method. When you look at a console like the Steam Deck, that system practically requires you to use Flatpaks for third-party software because the OS is an immutable image. If you install anything through pacman it will be wiped on the next OS upgrade.
Greetings fellow UHK user, the best keyboard.
Truly, it has forever made me dislike all other keyboards :-)
The layers (especially the mouse layer) are so genius I can't believe they aren't more common
I disagree with you in that only system software should be installed by the native package manager. Everything else should be AppImages.
What is "system software"?
The stuff that gets your desktop started up into a usable state. Same as on any OS, macOS or Windows. The software that is expected to be there on a given version of the OS, that other software may depend on being at some particular (major, at least) version until there’s a new version of the OS.
The stuff that updates when you update the OS.
On macOS, the software that’s not an optional package manageable through the App Store (even if installed by default) or managed by Homebrew or what have you.
You know, the system software.
That's the problem. I don't know.
I've made up half a dozen definitions for it before asking, and none of them were a good guide for deciding about the software on my machine. Yours seems to focus on the DE components, which is both way too restrictive (why are `ls` or `test` out?) and way too inclusive (my DE installs with an Earth renderer and a graphing calculator; my workplace's DE installs with Candy Crush).
A universal definition isn’t needed to apply the concept, in the same way the Internet can’t agree on some fixed universal definition for what a sandwich is, yet this doesn’t impair assembling a BLT.
It is in fact applied by organizations, and they manage fine, so lack of a universal definition isn’t a hindrance.
If you want to guarantee certain versions of ls or test are available for the duration of the supported life of an OS, yeah, they’re part of the base system. This kind of arrangement is very nice for both users and software vendors. The base-system instability of an Arch or a Gentoo (rolling release), or the ancient productivity-software packages of a Debian, aren’t the only options—lockstep-release stable base system and rolling release user packages are an option.
This is a distinction that should NOT exist. Like on phones, you end up with "software" that is just a glorified web browser, doesn't integrate with the rest of your system, and cannot access your hardware to its full extent.
E.g. if I want to set up a script that makes LibreOffice trigger phone calls through a Bluetooth modem, I should be able to. Otherwise it isn't really a computer. This "system" vs. "non-system" split almost always ends up down this slippery slope, and avoiding it is one of the reasons I enjoy desktop Linux, for all its brokenness.
It’s very nice when it exists because you can do whatever you need with user-facing software without risking system stability. Long-term stable versions of basic software, including gui libs, also provides a reliable target for software deployment.
No thanks. Everything should be installed via the operating system package manager.
Personally I dislike AppImages, Snap, Flatpack, Docker, etc. for one main reason:
If an app is so hard to distribute in any other way, that to me is a red flag that the app is not up to my quality standards or otherwise violates my sensibilities in some way. On my Linux desktop, I am very much in the camp of "that which exists without my knowledge exists without my consent".
(I fully recognize that I am an extreme outlier in general, and perhaps a slight outlier among Linux users. Just offering one perspective; I make no claims that this is the correct perspective for most Linux users.)
"Quality standards" like using whatever old version of library the distro provides. And yes it's a madhouse both for app developers and for distro developers
This problem is exacerbated by things like the Canonical interview process, where getting hired is intrinsically tied to having drunk the whole Kool-Aid and thinking it's the best thing around
We need a Linux distro made by people who hate Linux. People who buy no excuses. Maybe then things will work
What is stopping you?
$$$ and knowing that desktop distros are basically a loss leader, and basically nobody managed to profit from them (not even RH).
Android only managed to get where it got by keeping the kernel and ditching most of the Linux userspace
Mint seems to make solid money.
But if you want other people to do your desired work for free you might have to work on your messaging.
Do they? They rely on sponsorship https://linuxmint.com/sponsors.php
Not sure about the full financials, and I'm not even sure they divulge them
I have run across the periodic application that violates both my quality standards and my sensibilities, yet that I find indispensable. An example here is the e-reader software KOReader. It makes zero sense as a desktop application, since it is designed to run on dedicated tablet hardware with e-ink screens. It is not packaged by many distributions, likely because few people would be interested in maintaining such a package.
So why would I want to use it on a desktop? Because the breadth and depth of features are unmatched. In my case, I am willing to put up with a quirky black and white touch based interface[1] in order to have access to those features on my laptop. So I use the AppImage.
While I dislike the mentioned distribution formats for the reasons you mentioned, some software is so wonderful[2] that its warts should be ignored.
[1] It does have keyboard controls. In verifying a couple of details for this post, I also discovered that it can be controlled from a gamepad. Connecting my laptop to a television and sitting back to read a book with only a gamepad in hand is something that I am going to have to try one day.
[2] And sometimes that wonderfulness extends beyond features. The lua based portions of KOReader are sufficiently clear that I was able to create a profile for an e-reader (tablet) that is so new to the market that it isn't yet supported in the release version (the e-reader is only about a month old, while release versions of the software come out every four to eight weeks).
Yep, that is super reasonable. I also will use them if that is my only option. So in that sense I guess I'm thankful they exist; I just don't want them to become the default or even mainstream.
What's the update story for AppImages? AFAIU you have to download updates by hand?
There aren't centralized solutions that I know of, but there are projects such as AppImageLauncher[1] that can provide some automated management of that. It's not perfect but it is helpful.
[1]: https://github.com/TheAssassin/AppImageLauncher
My Arduino AppImage self-updates - so I know it's possible
Yes. Things that are critical to update regularly should be handled by the package manager.
I use AppImages when I want a specific version of a specific app. I never want them to be automatically updated, because doing so might introduce breaking changes that mess with my workflow. Imagine complex software like Blender or Krita updating automatically while you're working on a specific project, possibly even breaking your saves. Absolute horror.
Updating AppImages is solved in a few ways: https://docs.appimage.org/packaging-guide/optional/updates.h...
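As far as I understand the linked approach, the publisher embeds "update information" at build time and users run a separate updater that pulls binary deltas via zsync; roughly (repo and file names are placeholders, and the gh-releases string follows the pattern from those docs):

    # build time: record where future releases will live
    appimagetool --updateinformation \
      "gh-releases-zsync|someuser|somerepo|latest|MyApp-*x86_64.AppImage.zsync" AppDir/

    # user side: check for and apply a delta update
    appimageupdatetool MyApp-x86_64.AppImage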
Because Linux userspace libraries aren't designed to handle long term binary compatibility. There is no guarantee that a simple package upgrade or a simple distro version upgrade will not break things.
There is no guarantee that an AppImage will continue working 3 months later. If it relies on web communication, there is no guarantee that the application will stay secure, since you have to bundle core system libraries like OpenSSL with your application (unlike on Windows and macOS).
I will even go so far as to say that GNU stuff in particular is made specifically to make reasonable (IMO at least 5 years) binary-only, i.e. closed-source-friendly, software distribution hard.
It is the culture adopted by all the middle-layer libraries and desktop environments too. The only supported form is source, and every piece of software in the Linux world assumes it is built as part of an entire distro.
That's why Snap and Flatpak actually install a common standardized base distro on top of your distro, and why Docker in its current form exists (basically packaging and running entire distro filesystems).
The only way to get around it is basically recreating and re-engineering the entire Linux userspace as we know it. Nobody wants to do that.
Creating long-term stable APIs that allow tweaking is very difficult and requires lots of experience in designing complex software. Even then you fail here and there and are forced to support multiple legacy APIs. Nobody will do that unless they are both very intelligent and paid well (at the same level as Apple, Microsoft, or Android engineers). It is not fun and rewarding most of the time.
The kernel, however, is. So how about statically linking (almost) all the things?
You can't at the moment with GNU stuff, since glibc relies on a plugin system (NSS) to load things like DNS resolution and user management at runtime; that's what enables stuff like LDAP. OpenSSL also relies on the dynamic-library infrastructure.
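You can see that directly if you try: glibc will link statically, but warns that name resolution still dlopen()s NSS modules at runtime (resolver.c here is a stand-in for anything calling getaddrinfo, and the exact wording varies by glibc version):

    $ gcc -static resolver.c -o resolver
    warning: Using 'getaddrinfo' in statically linked applications requires
    at runtime the shared libraries from the glibc version used for linking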
The OpenGL drivers are similarly loaded dynamically by libglvnd at runtime; otherwise universal graphics drivers wouldn't work. It would be going back to the bad old days of booting into a black monitor when changing GPUs and then trying to figure out drivers from the TTY.
High-performance data transfers like real-time audio, graphics buffers, camera data, etc. still have to use the lowest level possible, i.e. shared memory. Dynamic libraries really help provide simpler APIs for those.
And then there is the update problem. If all programs are statically linked, a single update will easily reach gigabytes per upgrade for each CVE, etc. The distro maintainers have to be extremely careful that they don't miss upgrading a dependency.
That's somewhat of an exaggeration. Here's an AppImage I built 7 years ago which still runs on my up-to-date Arch Linux. If you follow the AppImage guide it will work pretty much without issue.
https://github.com/ossia/score/releases/tag/v1.0.0-b32
And neither method really works for the desktop use case, because one expects things to actually integrate with the desktop, and that often requires IPC, not just dynamic libraries. So if you bundle an entire filesystem with all the libraries, you've made things WORSE. Accessibility and IBus will break almost guaranteed every other release...
After using Ubuntu for years, this change made me and many others I know switch. I've been on MX for the past two years and love it.
what is MX?
MX Linux- https://mxlinux.org/
That website conveys very little useful information for understanding what makes this distro unique. I don't know what antiX or Mepis are. Why would I choose this over a more well known and presumably better supported distro?
It has more information on its front page about the distro than the Debian website has about Debian. I guess Debian shouldn't be used either.
Also, antiX has a hyperlink; clicking it would have answered your question as to what antiX is.
AppImages are nice as a user, but it seems the lead dev is intentionally not adding support for Wayland (besides some other odd choices), so it feels like a dead-end technology
In what way is AppImage interacting with the display server in its own right such that it would need to be specifically updated to support Wayland?
I don't know enough about that to tell you something specific
While I'd love to see good tooling for AppImages appear, it's just not made for it.
The fundamental problem is that AppImages are literally just an archive with a bunch of files in them, including libraries and other expected system files. These files have to be selected by the developer. It's really hard to tell which libraries can be expected to exist on every distribution, which libraries you can bundle and which ones you absolutely cannot due to dependence on some system component you can't bundle either, or things like mesa/graphics drivers. There's tools to help developers with this, "linuxdeploy" is one, but they're not perfect. Every single AppImage tutorial will tell you: Test the AppImage on every distribution you intend this to run on.
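For context, a typical invocation of such a tool looks something like this (file names are placeholders, and which libraries end up bundled versus excluded is exactly the part that needs that per-distro testing):

    linuxdeploy --appdir AppDir \
        --executable build/myapp \
        --desktop-file packaging/myapp.desktop \
        --icon-file packaging/myapp.png \
        --output appimage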
At the end of the day, this situation burdens both the developer (have I tested every distribution my users will use? both LTS and non-LTS?) as well as the user (what is this weird error? why isn't it working on my system?), and even if this all somehow works, newer versions of distributions are not guaranteed to work.
Flatpak, for all its bells and whistles, at least provides one universal guarantee: Whatever the developer tests, is exactly what the user will experience. I think this is a problem that needed solving for many people.
...it hurts a lot to say this as a longtime Flatpak avoider (I still always prefer distribution packaging), but I've come to accept that there's genuine utility to Flatpak, if only to let users test different versions of software easily, and to cover similar situations that distributions just cannot facilitate no matter how fancy their package manager.
Flatpak can also fail to work on some distros.
Yes. And no.
About 70-80% of AppImages I downloaded in the past didn't even start. 100% of my installed Flatpaks have been running fine on any distro I tried. I would love a world where Flatpak was redundant, but it very clearly isn't.
I'm curious, could you try this one and tell me if it starts? So far it works in all the mainstream distros I could try, but if there's someone out there who cannot open it with an OS less than a decade old, I want to make sure I can fix that: https://github.com/ossia/score/releases/download/v3.2.0/ossi...
The Linux community is self-selected and highly opinionated on everything.
Concerning AppImages: I have no interest at all in them, and although Flatpak still has its share of problems, basically all important communities have adopted it (except Ubuntu).
Flatpak integrates with your system, has sandboxes, has automatic updates, shares dependencies, etc.
Is Flatpak perfect or running w/o problems? Certainly not.
But IMHO AppImages add nothing over Flatpak, and lack the unified infrastructure and integration into package managers etc.
We all would benefit from agreeing on one standard, and by now it looks like Flatpak is that one standard. So I don't want to download random AppImages from the internet; I want a certified Flatpak which integrates with my system.
(Having been a member of the Linux community for longer than most people using Linux today have been alive: of course, and inevitably, once Flatpak is working and established, it will be replaced by some other broken solution. :-P)
AppImages don't solve the discoverability or upgrade problem.
I'm on an immutable distribution (Fedora Kinoite) so installing native packages is discouraged. I have everything running in Flatpaks, even Steam and all games. I haven't experienced any performance impact. I think Snaps do something weird with squashfs, but Flatpaks just set up and manage symlinks to dependencies via ostree.
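If you're curious how much is actually shared on such a system, Flatpak lists the runtimes separately from the apps:

    flatpak list --runtime   # the shared platforms the apps depend on
    flatpak list --app       # the apps themselves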
Although I like AppImages more than Flatpak and Snap, I think they are all worse than the other packaging methods.
With a NixOS configuration I can install almost all the software I use in a single command.
Even on Windows I mostly use Scoop/Chocolatey.
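To make the NixOS point above concrete, a minimal sketch (the package list is just an example):

    # /etc/nixos/configuration.nix
    environment.systemPackages = with pkgs; [ firefox vlc git kdenlive ];

    # then one command installs/updates everything declared there
    sudo nixos-rebuild switch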
IMO AppImage fell short by not requiring update support in all AppImages. I don't want to have to monitor random sites for updates to my applications. So after trying to use them for a while I moved on. Currently I'm experimenting with both flox and Flatpaks, as they both handle this aspect reasonably.
To me that also raises the issue of Snap/Flatpak and Wayland each handling part of sandboxing/capabilities. Ultimately, if you want to control how your apps can run, you end up resorting to lower-level primitives and handling very different things, like file access (Snap/Flatpak) and visual elements (Wayland), and maybe going back to the Linux kernel to handle that.
As a user I don't want to deal with updating all my software individually. AppImages also provide that. I would be perfectly fine with AppImage format packages being shipped through my package manager, but I don't want to download isolated software from the web when I can avoid it.
FWIW my package manager of choice is GNU Guix, which solves all of the same problems as snap and flatpak in a much more elegant way.
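For the curious, the day-to-day Guix equivalents look roughly like this (package names are just examples):

    guix install vlc git                          # per-user profile, no root needed
    guix pull && guix upgrade                     # update channels and every installed package
    guix shell python python-numpy -- python3    # throwaway environment for one command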
Loudly declaring your ignorance is not an opinion. If you really cared, you would go look at the large volume of well-thought-out complaints. But you don't.
AppImages are good for certain use cases. Recreational vehicles are also good for certain use cases. But just as I don't live in an RV parked in my garage when I'm at home, I don't use AppImages as my primary way of running locally-installed software on my main system.
You should just release source tarballs (or static binaries if you're not FOSS), and let the distros do their job of packaging and distributing the software. AppImages are fine to maintain in parallel for specific use cases, but I just want normal software binaries running in the OS environment I've already set up to my liking, not embedded in someone else's encapsulated runtime environment that's inconsistent in dozens of ways from my system config.
Exactly. But static builds are far superior to AppImage.