IMO the Android architecture does offer an advantage relative to vanilla Linux in that it creates a well-defined separation between kernel (+ drivers) and user space development, somewhat fixing OS fragmentation. It helps prevent users from getting locked out of the latest version of the OS just because the device manufacturer didn't update their BSP. It's the reason why Samsung offers 5 years of device updates and Pixel now offers 7.
https://source.android.com/docs/core/architecture/kernel/gen...
How's that even an issue in Linux? If anything, vanilla Linux users have way more power over their computing platform and are never locked out of anything.
What is vanilla Linux though?
If you're talking about something like Ubuntu, there's a tonne of work that goes into making sure it works with, say, a random three-year-old laptop with all the binary blobs in place.
Try getting something like wifi or Bluetooth working on some weird ARM dev board and suddenly there's no vanilla Linux unless you're willing to write device drivers.
That should be true for everything: if no one has written drivers for something, it won't work. But once a driver is added, will it break or be updated for newer versions of Linux?
The reality is that drivers often aren't added at all. Most companies release an out-of-tree BSP targeting a specific kernel version. These often contain blobs and are often not GPL. Linux doesn't support a stable kernel ABI/API (https://www.kernel.org/doc/Documentation/process/stable-api-...) and the only way to avoid the associated issues is to mainline drivers, which most companies don't want to do (they don't want to open source their IP, don't want to invest in maintaining it, etc.)
Android GKI/KMI addresses the issues related to this. GKI is relatively recent and OEMs don't offer 5+ years of Android updates because they haven't adopted it yet.
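To make "targeting a specific kernel version" concrete: an out-of-tree module is compiled against one kernel's headers, and the resulting .ko records that kernel's vermagic string, so modprobe refuses to load it anywhere else without a rebuild. A minimal sketch (the hello_oot module itself is made up; the macros are the real kernel ones):

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal out-of-tree module sketch");

    static int __init hello_oot_init(void)
    {
        pr_info("hello_oot: loaded\n");
        return 0;
    }

    static void __exit hello_oot_exit(void)
    {
        pr_info("hello_oot: unloaded\n");
    }

    module_init(hello_oot_init);
    module_exit(hello_oot_exit);

Build it against /lib/modules/$(uname -r)/build and `modinfo hello_oot.ko` shows a vermagic: line tied to exactly that kernel. A vendor BSP is this problem multiplied by every driver in the SoC.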
That Linux note about why they don't want a stable API is ridiculous. It shows how unfriendly Linux is to out-of-tree software. They want third-party developers to refactor their working and tested drivers every time they decide to change something.
I think that the open source community, at least, should move to a "never break things" principle, so that you can write and test code once and never touch it again. Refactoring your code because a kernel/library has changed its interface is a waste of time that gives nothing in return, often the time of unpaid volunteers who could be doing something useful instead.
This should apply to libraries, applications, programming languages, browser extensions. Take upgrading your Python 2 code to Python 3 as an example: you need to put in a lot of effort while gaining nothing. This is not how software should work. You should write code once and it should work forever without any maintenance.
Correct me if I'm wrong, but I thought that the actual code itself is stable, just that the compiled kernel API/ABI isn't.
So if you open source your drivers and get them accepted into the kernel, then you don't need to rewrite/recompile them for each new kernel version, just like AMD did with their drivers. And I think this is in part a conscious decision that forces people to open source drivers.
But this is honestly just guesswork based on what I've read, would love to learn more!
You don't, but someone does. In the note linked above they give an example of how the USB interface changed from synchronous to asynchronous; that must have required some refactoring in every driver that used USB.
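For the curious, roughly what that kind of churn looks like at the call-site level (a hedged sketch, not the actual historical patch; my_send_sync/my_send_async and the ep endpoint are made up, the kernel functions are the real USB API):

    #include <linux/slab.h>
    #include <linux/usb.h>

    /* Old style: one blocking call, the driver sleeps until done. */
    static int my_send_sync(struct usb_device *dev, int ep, void *buf, int len)
    {
        int actual = 0;
        return usb_bulk_msg(dev, usb_sndbulkpipe(dev, ep),
                            buf, len, &actual, 1000 /* ms timeout */);
    }

    /* New style: asynchronous URB with a completion callback. */
    static void my_complete(struct urb *urb)
    {
        usb_free_urb(urb); /* runs when the transfer finishes */
    }

    static int my_send_async(struct usb_device *dev, int ep, void *buf, int len)
    {
        struct urb *urb = usb_alloc_urb(0, GFP_KERNEL);

        if (!urb)
            return -ENOMEM;
        usb_fill_bulk_urb(urb, dev, usb_sndbulkpipe(dev, ep),
                          buf, len, my_complete, NULL);
        return usb_submit_urb(urb, GFP_KERNEL);
    }

Every driver sitting on the old blocking call has to be restructured around the callback; in-tree drivers get that done by whoever changes the subsystem, out-of-tree ones don't.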
That someone updating the drivers is exactly the same person who makes the change in the subsystem that they depend on. They cannot break what is already in the kernel, but they can break what's outside; it is after all an internal API.
And that is wrong because people might have reason not to mainline drivers.
That's fine; but their reason is their problem. They are not entitled to free labor of other people.
What reasons are there other than corporate shenanigans? Sure it takes more effort to mainline the driver and address review comments, but if you aren't designing a throw-away product, it's effort well spent.
Those reasons tend to be petty ones that benefit the hardware company and no one else. Seems fair that the hardware company should pay for the consequences of their decisions and not the people advancing the kernel.
The Linux developer community promised almost 20 years ago [1] that no release of the stable kernel will ever break something that worked in a stable kernel before. AFAICT, this promise holds. If you upstream your driver, the community will take care (cf. AMD), if you don't your users experience occasional pain (cf. NVidia).
[1] http://kroah.com/log/blog/2018/02/05/linux-kernel-release-mo...
no, they do it for you for free - you just have to make sure the quality is high enough to merge into mainline.
That's exactly what I meant: Linux is unfriendly to third-party software developers (drivers not in kernel, apps not in distro repositories).
I know a lot of people who are unfriendly towards sloppy work, yes. if the driver is of a good enough quality, they will be happy to take it off your hands and look after it for you!
as a user, I hate apps that aren't in the repository. it takes effort to make sure it doesn't clobber something important just because the developer wanted to use the latest and most broken version of some library. thankfully nixos allows me to run whatever version of every dependency and then nuke the whole mess once I'm done with it.
That's not going to be fixed. This is a typical M-N problem. There are too many distros and it would require a lot of work to port every app to every distro. Instead, there should be a "platform" (a set of APIs) that an app can work with, so the same app would be able to work in any compatible distro.
To be fair, if the app isn't in the nixos distro, it's rarely that useful. The only one I had to figure out was a proprietary logic analyser interface, but it wasn't too hard.
Just Python 3's string/binary distinction and explicit conversion squashed an entire category of bugs. Yes, it required making explicit what was implicit before, but still, that's a huge achievement in my book; saying that it is nothing is very, very shortsighted.
And created an entire category of new ones.
They wouldn't have to, that would be done for them, for free, if they got their code into the kernel...
A driver is not a static one time thing.
Once you mainline it, making improvements, supporting new devices and features become 10x slower.
Nobody says the old mainlined driver must support new devices. Mainline a new driver, or don't, either way it's better than nothing.
Yeah, better than nothing.
With a stable ABI, it would be better than everything. But we can't have it because of ideological reasons.
If the kernel had a stable ABI and made it a bit easier for closed source drivers to access kernel APIs, yes.
That's how you can use a random 25 year old peripheral on multiple versions of Windows without anyone having to update a driver.
That's also how you make Windows ship with three different USB stacks and their weird interactions.
For sure. The real world of devices is messy.
Avoiding that mess for some notion of beauty or purity only gets you so far.
In theory maybe; in practice I'd bet on better Linux support for a 25-year-old peripheral compared to Windows.
Out of the box, sure.
For windows you can usually download a binary from some 20 year old website and it'll work.
It depends on how open the particular driver implementations are. E.g. over the last couple of years the Nvidia driver situation for cards from the last 3 generations has changed across pretty much all 3 major levels:
1. Originally you had to use the proprietary binary driver to get anything useful to happen with your card at all. Updating the kernel without updating this would more or less lead to having an expensive brick in your PC. Some Wi-Fi adapters fall into the "can't really be updated" category as well. A _LOT_ of ARM shit is like this.
2. nvidia-open came along (still beta for desktop cards at the moment) and it puts enough in the kernel that you can update the kernel without needing an updated binary driver for your card to function
3. nouveau/nvk have very recently started to come to a decently usable state (i.e. they have reclocking via GSP and somewhat usable graphics API coverage/efficiency) for an even more open driver stack which tracks system updates even better.
If your binary blobs fall into 1/2 then long term upgrades can be anywhere from impossible to unreliable. If they fall into 2/3 they can be anywhere from somewhat reliable to "will be working longer than any sane person would still be trying to update the kernel on that device". E.g. the AMD 7750 is 12 years old but can run OpenGL 4.6 and Vulkan via the latest AMDGPU driver in mainline mesa/kernel.
LTS distros solve this by using LTS kernels and security patching them rather than requiring "actual" underlying OS updates during the version lifecycle.
depends; but may i suggest starting at kernel.org then look at 'linux from scratch'/gentoo then maybe slackware; surely not ubuntu
May I suggest trying your hand at LFS on an ARM dev board and reporting how the experience goes?
There's a reason all other kernels have adopted a stable driver API and ABI.
- Linux is just a kernel.
- Vanilla Linux is proprietary.
- GNU/Linux-Libre is the proper libre one, rebased by GNU maintainers.
- There's no proper "vanilla distro", not even LFS or Gentoo. Every one since 1993 has been an external bundle made by volunteers.
- The closest thing to a vanilla distro from the GNU project would be Guix, as it works as a distro and a universal package manager to ship libre software.
But that work gets done, and "users" of Ubuntu or Debian or Arch get to use it from the sources they trust (aka Ubuntu, Debian or Arch). I'm not claiming to have a full understanding of how every package or kernel module Debian or Arch or Fedora ships works. But I'm trusting Debian or Arch or Fedora for my packages. If it comes to light that Debian or Fedora maintainers had no idea that they shipped malware in a release, then I'll seriously question going with that distribution in the future. And without sounding facetious, that has happened multiple times in the past, especially with Debian. But the times it happened it was clear to me that it was merely a mistake rather than extreme incompetence or malice.
With Android, you have Google, which despite the general HN rhetoric I personally trust to not ship straight up malware. But when we're talking about Android that's not what we're talking about. We're often talking about binary ROM blobs from random XDA or RW users. Funny thing is 20 years ago, I'd have shouted about what's the difference between a random anon on XDA or WZBB providing a ROM blob. But now I know better.
You trust Canonical, android users trust Google.
Binary blobs are an unfortunate reality and no amount of trust in a company or entity can really solve this.
For the record, Debian and Arch don't work very well on non-standard hardware. I use Arch on my desktop but gave up on using it productively on a new HP laptop.
Heh, and you could argue that laptops are standard these days. More laptops are sold than desktops, and HP is definitely a mainstream brand.
I understand your usage of the word, I'm just pointing out that if the mainstream ain't "standard" anymore... it kind of sets the standard in practice.
you understand little about what you're talking about.
hardware is either supported or not.
the "work" you mention on old wifi cards is to get a driver that was never contributed in any official way. hence unsupported.
Linux has more hardware support out of the box than anything, ever.
just educate yourself before purchasing.
This reminds me of certain countries that define literacy as the ability to sign ones name.
Linux driver support for a large fraction of hardware is incomplete at best.
Linux offers no stable driver API; they claim that submitting your drivers into the kernel tree is a good substitute, but they have strict requirements on what kind of code they will accept and also decline contributions out of basically organisational politics reasons. So in practice you're locked out from any old or obscure hardware unless you have a lot of time to spend playing kernel politics, keeping up with their API changes, or both.
They accept only code which they will be able to maintain, yes. That excludes spaghetti code or implementations of complex undocumented APIs that are useless without a binary blob. Nothing unreasonable about that.
Perhaps. In my experience a bad driver is better than no driver; not wanting bad code in your codebase is reasonable, but gets less reasonable when you also refuse to offer an API. And even if your code does meet their quality requirements, it can still get rejected for kernel-politics reasons.
Unless you are going to provide examples, I am going to assume that "kernel politics" refers to things like rejecting "drivers" that can only be used with a closed-source userspace blob. In those cases the reasons have technical merit: how are the kernel developers supposed to maintain an effectively proprietary API without any idea how it is being used?
It's unreasonable if there aren't any reasonable alternatives.
How is it unreasonable that the kernel developers expect those who want to rely on their existing and continued work to put in a modicum of effort to not make said kernel developers' lives more difficult?
no sympathy. you're also out of luck using old hardware in the first place because the manufacturer wants you to buy the new version. they will help you LESS than Linux devs. your point is very unhelpful to the discussion, unless you got driver updates from the manufacturer
The manufacturer probably doesn't exist any more. They wrote decent drivers and supported them for a number of years, that should be enough. Or maybe a dedicated fan wrote some OSS drivers at some point in the past. On both Windows and FreeBSD that's good enough; once a driver has been written, it largely stays working, so I can keep using my hardware. This really is a unique downside of Linux.
I wonder if Linus has ever made the connection between this problem and his stance against applying the Unix philosophy to kernels :P
I think you've only experienced Linux on Intel/AMD, not Linux on ARM SOCs.
I'm looking at several ARM devices running Linux on my desk right now. What's your point?
This "power" is hypothetical unless you are willing to DIY all the drivers, which you will be writing blindly without datasheets. Vanilla Linux doesn't support IPU6 webcams, Qualcomm WiFi 7 chips, and dozens of other common parts.
I don't think they are the target market. E.g. some people will never buy computers but only assemble them; some will only run FreeBSD or Linux. For the vast majority, it's just open and use the computer.
yes the beautiful advantage of having some super-forked kernel.
the solution is to get everyone on the same kernel, which is then updatable - not hack together something that kinda works on top of a never updated snowflake
The Linux kernel team has been hostile to binary drivers in the past. Is that not still the case?
The linux kernel team understandably does not want to maintain overcomplicated shims, keep multiple versions of the same subsystem and debug opaque problems so some can keep their precious binaries secret.
You keep the binaries, you get to maintain them and solve their problems. Seems fair.
Ok but this is also going to mean people are going to keep long lived forks, like Android, that do support binary drivers.
Maybe then companies should stop hiding their bad code and binary hacks behind closed source drivers and write proper open source ones instead.
If they are using 3rd party IP, they should work with the providers for compromises, or license the patents and re-implement them.
AMD and Intel made great strides on these fronts, not counting the money-loving people called the HDMI Forum.
Nvidia became the biggest company on earth in part thanks to closed source drivers, allowing them to invest more into software.
Nvidia isn't even in the top 50 of largest companies by any sensible metric.
https://companiesmarketcap.com/
By market cap, nvidia is the 3rd biggest one, about as big as google and facebook together. Market cap is a sensible metric.
Then I guess Rhodium is the largest metal and the Hinkley Point C nuclear power station is the largest building in Britain.
If you conflate value with size, both terms become near useless. Value can only mean something in relation to something else.
Picture this: You have a large company with thousands of employees having similar revenue to their competitor, which accomplishes the same with one employee and a much smaller operation.
Which one is likely to be more valuable? Obviously the smaller one. If we however conflate value with size, as is so often done in popular economics, just pointing out this single fact becomes a complicated exercise of having to carefully employ language that we neutered for no good reason at all. Not to speak of all the misunderstandings this is going to create with people who aren't used to this imprecise use of the English language.
If you mean revenue, say revenue, if you mean value, say value, if you mean size, say size. Don't use "large" to say "valuable". Why would you do that if there's a perfectly good word already? Imprecise language is often used to either confuse or leave open an avenue to cover one's ass later... which brings us back to popular economics.
"Size" is unitless, so I disagree with your rationale.
Precision is always helpful, but valuation is a very common size metric, and there was no confusion about OP's meaning.
Market cap for tech companies is a bubble. It has more in common with the lottery total than with a representation of real value.
What is your reason for believing the closed source nature of nVidia's graphics drivers played a role in their success? AMD's and Intel's Windows drivers are also closed source, and so were AMD's Linux drivers when nVidia managed to secure the lead.
nVidia is also finally moving to an open source kernel module so a closed source one doesn't seem important to them for keeping their moat.
Presumably amadeuspagel means nvidia has walked a careful line with CUDA & ML.
CUDA is much more accessible/documented/available than a lot of comparable products. FPGAs were even more closed, more expensive and had much worse documentation. Things like compute clusters were call-for-pricing, prepare to spend as much as a small house.
On the other hand, CUDA is closed enough the chips that run it aren't a commodity. If you want to download that existing ML project and run it on your AMD card - someone will have to do the leg work to port it.
That means they've been able to invest quite a lot of $$$ into CUDA, knowing the spending gets them a competitive advantage.
nVidia built this bubble by playing dirty on many fronts.
Their Windows drivers are a black box which doesn't conform to many of the standards and behaves the way they see fit, esp. about memory management and data transfers. The GameWorks library actively sabotaged AMD cards (not unlike Intel's infamous compiler saga). Many nVidia-optimized games ran on either completely unoptimized or outright AMD/ATI-hostile code paths. E.g.: GTA3 ran silky smooth on an nVidia GeForce MX400 (a bottom-of-the-barrel card) while thrice-as-powerful ATI cards stuttered. Only a handful of studios (kudos to Valve) optimized engines for both and showed that a "paltry" 9600XT can do HDR@60FPS.
On the datacenter front, they actively undermined OpenCL and other engines by artificially performance-capping them (you can use only one DMA controller in Tesla cards, which actually have three DMA engines), slowing down memory transfers and removing the ability to stream in/out of the card. They "supported" newer versions of OpenCL, but made anything except OpenCL 1.1 impossible to compile on their hardware.
On the driver front, they have a signed firmware mechanism, and they provide a seriously limited firmware to nouveau just to enable their hardware. You can't use any advanced features of their cards, because the full-capability firmware refuses to work with nouveau. Also, they're not really opening their kernel module: the secret sauce is moving into the firmware, leaving an open interfacing module. CUDA, GL and everything else is closed source.
On the other hand, they actively said that "The docs of the cards are in the open. We neither help, nor sabotage nouveau driver project. They're free", while cooking a limited firmware for these guys.
They bought Mellanox, the sole InfiniBand supplier, to vertically integrate. Wonder how they will cripple the IB stack with licenses and such now.
They're the Microsoft of hardware world. They're greedy, and don't hesitate to make dirty moves to dominate the market. Because of what they did, I neither respect nor like them.
I'd agree with you, but ever since mobile devices have taken off, aren't we much worse off than before? In the last peak PC years, say before 2011, I was under the impression that hardware vendors were starting to play ball, but now things seem super locked down and Linux seems to be falling behind in this tug of war between FOSS and binary-only.
I think the problem with mobile devices is not the software but the hardware. These devices are locked down partly because of business interests (planned obsolescence), but another part is personal identity security.
A run-of-the-mill Android or iOS device carries more secrets inside, which have much more weight (biometric data, TOTP, serial numbers used as secure tokens/unique identifiers, etc). This situation makes them a "trusted security device," and allowing tampering with it opens an unpleasant can of worms. For example, during my short Android stint, I found out that no custom ROM can talk with the secure element in my SIM, and I'm locked out of my e-signature, which is not pleasant.
If manufacturers can find a good way to make these secure elements trustworthy without the need to close down the platform and weld shut, I think we can work around graphics and Wi-Fi drivers.
Of course, we also have the "radio security" problem. Still, I think it can be solved by moving the wireless radio to an independent IP block inside the processor with a dedicated postbox and firmware system. While I'd love to have completely open radio firmware, the wireless world is much more complex (I'm a newbie HAM operator, so I have some slight ideas).
So, the reasons for closing down a mobile device are varied, but the list indeed contains the desire for more money. If one of the hardware manufacturers decides to spend the money and pull the trigger (like AMD did with its HDMI/HDCP block), we can have secure systems that do not need locking down. Still, I'm not holding my breath, because while Apple loves to leave doors for tinkerers on their laptops, iPhone is their Fort Knox. On the other hand, I don't have the slightest confidence in the company called Broadcom to do the right thing.
Interesting details, thank you for providing them!
Regarding the iPhone being their Fort Knox:
People don't realize, but 99% of what Apple does hinges on the iPhone. The rest of the products pack a much lower punch if the iPhone were to vanish from the face of the Earth completely. It's the product all their customers have and they have it at all times with them. It's the product that's probably the easiest to use and the easiest to connect to other things.
So yeah, the iPhone will probably be the last non-military device on the planet to be opened up :-)
This is not a difference between ChromeOS and Android. ChromeOS is replete with binary vendor blobs, which is why the webcams, wifi, and touchpads work correctly on Chromebooks and poorly or not at all on other laptops running vanilla Linux.
The linux kernel team should offer a fixed compatibility layer for drivers. And for user applications too, while we are at it.
You are free to implement it.
But would it be accepted?
From what I see with both NVidia's and OpenZFS's compat layer, it seems to indicate that the Linux kernel folks are actively hostile against any such thing.
(Contrast this with, say, the FreeBSD folks where both the kernel- and user-land API/ABI stays frozen over the life of a major version release.)
I think this is more related to GPL licensing than to them not wanting to assist. If it's completely generic then maybe, but how do you argue that the closed-source Nvidia driver or the dubiously-licensed ZFS can be linked to the GPL-licensed kernel without being in violation of one of those licenses? Especially considering how vague the GPL is regarding what it even means to be using GPL-licensed code.
If I present an API anyone can use and Nvidia happens to target it, that's a different ballgame than if I implement a shim specifically targeted at Nvidia's binary blob. To my knowledge the latter is a violation of the GPL, so an explicit exemption for the shim would need to be put in place, and that is the showstopper.
It's about having a stable foundation of the kernel's API that others can code against. As it stands, the (compat) shim layer(s) have to constantly be tweaked for new kernels:
* https://github.com/openzfs/zfs/pull/15681
And "dubious license"? Really? Given the incorporation of CDDL code of DTrace into macOS and FreeBSD, and of OpenZFS into FreeBSD (and almost into macOS), it seems that license is quite flexible and usable, and that any limitations exist on the GPL side of the equation.
How is coding against an API a violation of the GPL? Neither Nvidia's, nor OpenZFS's, code is a derivation of any GPL: (Open)ZFS was created on a completely different operating system, and was incorporated into others (e.g., FreeBSD, macOS), so I'm not sure how anyone can argue with a straight face that OpenZFS is derived from GPL.
Similarly for Nvidia: how can it be derived from GPL Linux when the same driver is available for other operating systems (and has been for decades: I was playing RtCW on FreeBSD in 2002):
* https://www.nvidia.com/en-us/drivers/unix/
It's one of the reasons the whole DKMS infrastructure had to be created: since API stability does not exist you have to rebuild kernel modules for every kernel—even open source ones like OpenZFS, Lustre, BeeGFS, etc.
* https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support
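For anyone who hasn't touched it: DKMS doesn't fix the instability, it just automates the rebuild. You drop the source under /usr/src with a dkms.conf next to it (a minimal sketch; the "mydriver" package is hypothetical, the fields and commands are the real DKMS ones):

    # /usr/src/mydriver-1.0/dkms.conf
    PACKAGE_NAME="mydriver"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="mydriver"
    DEST_MODULE_LOCATION[0]="/updates/dkms"
    AUTOINSTALL="yes"  # rebuild automatically for every newly installed kernel

    # then: dkms add -m mydriver -v 1.0 && dkms install -m mydriver -v 1.0

So the module gets recompiled on every kernel upgrade, and if an internal API changed underneath it, that's the moment the build breaks.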
The hostility doesn't seem to care if things are generic or not.
Remember when they broke ZFS by marking the "save FPU state" function as GPL-only? Telling the kernel to save registers so they can be used for scratch space is one of the most implementation-independent things you can do.
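The mechanism there is tiny, which is what made it feel so pointed. A sketch of how an export gets restricted (my_save_fpu is hypothetical; the real case was the kernel_fpu_begin()/kernel_fpu_end() pair):

    #include <linux/module.h>

    void my_save_fpu(void)
    {
        /* ... save FPU/SIMD register state so the caller can use it ... */
    }

    /* Before: any module, whatever its license, can link against it. */
    /* EXPORT_SYMBOL(my_save_fpu); */

    /* After: only modules declaring MODULE_LICENSE("GPL") may link; the
     * load-time license check now rejects CDDL/proprietary modules (like
     * OpenZFS) that used the symbol. */
    EXPORT_SYMBOL_GPL(my_save_fpu);

One changed macro, and every non-GPL out-of-tree user of the symbol stops loading on the next kernel.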
They do.
— Linus Torvalds, 2012
And you get to use modern mobile hardware like what a PinePhone has... lose-lose. Especially since modems simply can't be open source for the most part due to governmental regulations, so this everything-open-source fairy tale doesn't make much sense for mobile hardware.
Notice that it's only mobile where that's a problem; x86 machines have apparently been doing the impossible for ~30 years now. And no, modems aren't an excuse; the modem itself can't be FOSS (probably) but it's just a self-contained device running its own firmware; there's nothing stopping you communicating with it via FOSS drivers.
Did you miss the first, say, 25 years of the 30? Because I have literally reset my video driver blindly from terminal once, because it just failed after an update, and similar stuff. Also, this is pretty much due to the oligopoly of a few entities in hardware space, making it easier to support said devices, and standards.
Phones aren’t built from a small selection of devices, and they often have patented firmware that the phone manufacturer couldn’t publish even if they wanted to. As I showed, look at the best attempt at building an open-source phone: it sucks majorly.
Although I have thoughts on the matter, I'm not actually arguing about the quality of drivers, I'm arguing that x86 linux has had minimal problems with keeping all drivers in-tree, and if vendors want to support linux they've generally had no problem working with that. It is for some reason only mobile and embedded vendors that find it impossible to upstream drivers and that insist on shipping hacked-up vendor forks of ancient kernels.
And again, firmware is a separate matter; there's plenty of cases of linux shipping blobs for the device to run, so long as the driver to talk to that device is FOSS.
The linux kernel is full of binary blobs. For better or worse..
Do you mean the firmware repo?
Because I don't remember seeing any in the source code... but I must be looking at the wrong place.
Ah. That would depend on the definition of "Linux kernel" then, yes.
You are right, in the firmware tree, not in the pure kernel itself.
The TGZ is full of blobs.
anti gpl bots don't know the difference. they usually can't read code either.
you're correct, only the firmware tree has binary blobs.
Binary drivers are hostile, not the Linux kernel.
Ok, no problem, I'll just use ideology to run things.
You don’t have to go anywhere near ideological debate to argue against binary blobs in the OS. The blobs could be verbatim from Stallman himself and blessed in the holy waters of EFF, and they would still be bad for dozens of technical reasons.
good. everyone should be hostile to binary drivers.
you have to not understand the first thing about kernel drivers to even consider binary drivers upstream. for starters, who provides a new binary when the API changes with every new kernel version?
Android has been booting off mainline for a while now, so what are you going on about?
Mainline kernel tends to have only basic support (if at all) for many SoCs that actually get used in phones, especially full power management support has been lacking.
Right, but it doesn't seem SoC vendors are budging on that - maybe it's slowly time for Linux to figure out a better approach?
Linux isn't budging - maybe it's time for SoC vendors to figure out a better approach?
Most PC and server hardware has FLOSS drivers (with proprietary firmware), even Qualcomm is upstreaming support for the new Snapdragon Elite (maybe it's made by a different team?).
I think phone SoCs are the odd ones out, which sadly doesn't mean they'll improve any time soon. Supporting an ABI for binary drivers in Linux might help phones, but it would give everyone else a chance to regress in their support, so I understand Linux kernel developers' position.
Stopping planned obsolescence via closed drivers is a start, no?
Android can boot from a mainline kernel, but all that gets you is that phones run a kernel forked from mainline, not forked from Google's fork.
ChromeOS was already guaranteeing 10 years of updates for every device. Has any Android phone ever gotten that many updates?
Highest I could find is 6
The Pixel 8 is promised 7, which I believe is the highest of any phone.
I guess we'll see how things shake out.
Unless something drastic changes, I'm sticking with Apple because of their demonstrated long-term support. Every phone since 2015 has been supported for 6-7 years. And that's actual support, not a "technically correct" mix of real support and security patches only.
https://www.statista.com/chart/5824/ios-iphone-compatibility...
That's overlooking the fact that Apple does not release the full featureset of new iOS versions on older devices and as years pass the number of feature omissions increases.
There are hardware dependencies for some of the ‘missing’ features, and if the hardware’s not there, there’s not a lot the OS can do to support it.
which makes sense? like i can’t expect the new ai intelligence (a 3B model) will run at all on iPhone X
Google can’t even keep a project alive for that long, so let’s not jump ahead. We will see when they actually deliver.
Um they already delivered, for example on 8 years of updates for past Chromebooks.
The Fairphone 5 will get 8 years of security updates: https://support.fairphone.com/hc/en-us/articles/180206715370...
it's laughable because i doubt they have any proof that Qualcomm will continue to deliver updates. fairphone is just an intermediary.
they are using the long-life Qualcomm parts, which only guarantee production and stock for x years. maybe with software updates by an intern.
it definitely will not guarantee a new driver set compiling against a recent kernel if the old kernel must be upgraded for security or anything else.
so it's kinda bold that they promise that without any disclaimer that they are just hopeful
https://eur-lex.europa.eu/eli/reg/2019/2021/oj
I can't find all of it, but part of it (Samsung also did the same thing) is triggered by EU environment protection directives.
Librem 5 will get lifetime updates, because it runs mainline Linux without proprietary drivers.
Samsung S24 has 7 years as well
The Nvidia Shield vs other first-party SoC devices running Android (e.g. Galaxy S devices with Samsung SoCs or Pixels with Google SoCs) goes to show that the short lifespan of Android phone updates really has nothing to do with the underlying hardware or OS.
Also the type of updates guaranteed for 10 years on ChromeOS are not the feature update type that require upgrading the actual OS, just security and bugfix patches.
The Shield TV has had an impressive support lifecycle for an Android device but it still falls well short of a 10 year support cycle.
The Shield was released in May 2015 and its latest software update has an Android security patch level of April 2022 and was released November 2022. No more updates seem to be forthcoming. Notably, all Shield TVs today are vulnerable to remote code execution when displaying a malicious WebP image [0], a widespread issue uncovered last year.
Apple released the Apple TV HD two months after the Shield TV, but it still receives the latest tvOS updates to this day and will be receiving this year's new tvOS 18 [1] [2]. It received a fix for that WebP issue the same day as supported Macs, iPhones and iPads did last September.
Even the best Android device examples with good vendor support still seem to be falling short. The Shield TV is still capable streaming hardware in 2024 used by many people, but it's sitting there doing that while missing important security patches unbeknownst to the users.
[0]: https://blog.isosceles.com/the-webp-0day/
[1]: https://www.macrumors.com/2024/06/10/tvos-18-compatible-appl...
[2]: To be fair it's the only Apple A8 device that receives support until today. The iPhone 6 with the same chip was launched mid 2014 and received its last update in early 2023.
... which confirms that this has more to do with the vendor rather than hardware/OS?
I still have a Shield TV. It is pricey but my understanding is that other Android streaming devices suck.
I was thinking of getting a second Shield for my second TV but it turned out I had a $50 PC sitting around which works fine as a media player.
ChromeOS updates are actual feature updates - for the 10 years of support, your device will be tip-of-tree and have all the features (except a few which are flagged-off, usually for business or performance reasons)
Yeah, you're right. I got a little separated from the comment chain here after writing about the Nvidia Shield and what it implies about phone lifecycles, conflating that story with the ChromeOS discussion. While the Shield received major kernel version updates as recently as 2 years ago, going back to the actual comment discussion around the abstracted ChromeOS userspace vs. the firmware/kernel/driver layer: ChromeOS does already drive feature updates of the upper segment of the OS for the full lifecycle without requiring the kernel updates seen in the Shield example. Good catch and thanks for the correction.
lol, no? The most I got from Sony and Samsung in 2012 and 2015 was 1-2 years. Which is when I generally stopped using Android. It was frustrating enough to read all the news about "Google releases new Android with (list of features)", then ask when my Sony/Samsung was getting that, only to hear "oh maybe in 4 or 6 months if you're lucky. Or download this random ROM from `appzworrier` on xdaforum. Surely they haven't put any malware in the ROM or you'd hear about it here". I did give it another shot in 2019 with a Pixel 4 phone, then after 2 RMAs with battery issues (4-5 hours of battery) I went back to iPhone. Funny thing is, in 2013 and 2016 I switched back to the old iPhone I had before the upgrade, I just updated them to the latest iOS version there was at the time. After the shitshow I had with Google support in 2019 I just gave up.
GKI didn't exist in 2012/2015, so it was very difficult and expensive for OEMs to support Android for longer. Apple could do so more easily because they control both hardware and software. Google introduced GKI and other related efforts/architectures to address this very problem, which is unique to Arm, Linux and the ecosystem around it.
It's really the silicon manufacturers' fault for not wanting to mainline and long-term support their BSPs, not Google's.
I would say it's both. Google could put support requirements in the GMS certification.
And how would that help when the major SoC manufacturer refuses to support their chipsets?
If a device can't get Play-certified it can't realistically be sold anywhere in the world except China, so no phone manufacturer would choose an SoC for their phone that was not supported.
Let the market decide, then.
Isn't this what capitalism loving companies say, anyway?
Broadcom had to open source their wireless drivers at the end. Same can happen for SoCs too. Add pressure, if not broken, goto 10.
"10 years of updates" for Google products is a bold claim. Will ChromeOS even exist in 2034, let alone run on 2024 hardware?
Why wouldn’t it? It might be rebranded as Android Desktop in the future or something, though.
That's likely due to contractual obligations with Google and the use of well-supported AMD/Intel hardware and BSPs. The Linux hardware/device architecture makes this difficult on Arm with its many silicon BSP providers, and that's what the stable Kernel Module Interface that Android imposes helps with.
They've had arm Chromebooks with the same guarantees. Magically Qualcomm and Mediatek seem to be able to find their firmware and kernel sources when it comes to their laptop chips.
Both real-world and on-paper status are literally the exact opposite of what you're describing.
As others have already mentioned, just stating the facts about long-term support: chromebooks have much longer support than Android devices. Both looking at the 99-percentile, the median, and the 1-percentile.
Chromebooks don't get stupid bugs that require workarounds in userspace like:
- BPF maps being broken because someone at Mediatek ran some proprietary static analyser and blindly pushed some fix
- Camera only works properly in the OEM app
- GL drivers are upgraded once every blood moon and every OEM has its custom GL version
- Treble doesn't break because Samsung decided that mount loops were dangerously insecure at a time it wasn't required
- You don't need to keep workarounds for mainline kernels that you can't upgrade, because Google/Android forbids you to upgrade your kernel [1]
Android deprecates drivers after 3 years [2], so vendors need to do a lot of work on a very frequent basis. As an OEM, I just want to do my (painful) contributions to mainline, and have the maintainers maintain them. Android actively hinders me from doing that.
As an Android OS developer, I could go on and on and on about the stupid issues that Android kernels get that Chromebooks don't. All of those issues would happen just exactly the same on Fuchsia.
I'm not saying ChromeOS' development method works for smartphones, it doesn't. I'm not saying it is desirable, I think it isn't (because it completely kills most innovation). But ChromeOS handles fragmentation better than Android on every single criterion you could imagine.
[1] Google/Android added a bunch of new stupid policies that actually prevent me from upgrading kernels on deployed devices
[2] Yes there is actual planned obsolescence nowadays in Android. It wasn't the case few years ago where obsolescence was accidental
My personal (probably non-representative) experience with ChromeOS vs. Android: my ChromeOS device (a Lenovo IdeaPad Duet 2-in-1 Chromebook) tends to get extremely unresponsive (i.e. hangs itself up for minutes before responding again, or just spontaneously reboots) the longer it runs without being rebooted, to the point that I eventually have to hard reset it after about a week of on-and-off use. I basically only use the browser on it (but with lots of tabs open). This never happened to me on any of the many Android phones/tablets I had. Maybe it's some bug in how newer ChromeOS versions work on this by now pretty old device, but it's still annoying as hell...
You could try disabling the Android system ("Google Play"), since they switched to running Android in a VM in newer ChromeOS versions. So you have an Android VM running in the background all the time, and that might overwhelm the small MediaTek processor in the Duet.
That's the irony of this announcement. The only problems I ever suffered on ChromeOS came from the Android subsystem.
I've been using Linux for a long time, including chromebooks, and this sounds like textbook low memory to me. I love linux, but it doesn't handle low memory situations well[1], and whatever Chrome does in low memory situations makes it much worse. Tab discipline is the most important thing you can do. I have an obscene amount of tabs open too so I feel you. I've started making liberal use of the Session Buddy extension and that is helpful, though my true fix was putting 128 GB of RAM in :-D
[1] This is improving greatly in the last year or two, but that probably wouldn't have made it to your chromebook.
EarlyOOM [1] could help with that quite a lot. Not too sure about using it on Chromebooks, but Linux got quite a bit more usable because of it.
[1] https://github.com/rfjakob/earlyoom
I know your Duet problem well. I've got a Duet around here. It was slow from the beginning. Noticeably, it would often slow to a crawl while Chrome OS was downloading an update in the background. And sometimes, it would be slow after waking for about 5 to 10 minutes and then suddenly go to a more acceptable speed.
I also have a Lenovo Ideapad Chromebook running on an Intel 4020 with only 4GB. Despite being slow hardware, it runs circles around the Duet. The 4020 can be sluggish at times, but it never had the big slowdowns of the Duet. It could even run Visual Studio Code at acceptable speeds. Something was very wrong with the Duet. Usually Chrome OS is good on lower end hardware.
Gotta admit I'm curious there. I've rocked a rk3399 chromebook for years (stopped roughly when it got deprecated) without issues, and I was stressing it with crouton. My gf is still using a rk3288 chromebook "fine" (it's no longer her daily driver though). Those are low-end 8yo devices.
Anecdotal counter-point I have a colleague with a Samsung Galaxy Tab A 8" 2019 (SM-T290), and after just 3 years it became absolutely unusable, even after a factory reset. And it's "by design": it shipped with 2GB RAM, which was usable in Google/Android 9 (shipped OS), but just completely dead on Google/Android 11 (updated OS). Obviously I saved that device from oblivion with a Google-less Android that takes half as much RAM.
Are you by chance using external storage media such as USB or SD cards?
The bus on these older model chromebook devices had a choking issue where the kernel would fully saturate the transfer bus and cause a cgroup scheduling strategy to give full priority to the processes using the bus at the expense of the gui. Solutions to this problem include a new NVME or software level cgroup reconfiguration of the kernel to revert scheduling priority to equal across all cgroups.
Or it could be something else entirely, but last time i had this issue and bug hunted it the finding was as stated above.
I had issues like that a few years back on memory starved Chromebooks, in my case the biggest savior was the Great Suspender/Discarder extension as I hoard tabs, otherwise the Android subsystem on a 4GB RAM machine is stretching it thin.
Android system deals with so many more different architectures of hardware though
everything is ARM EABI... even x86 is long gone (Android 4)
I know the processor architectures, but there are things in play like:
- some phones can read from the screen framebuffer, some are write-only
- a variety of actual chips used for modem, storage, etc.
Why do you say the same issues would happen on fuchsia?
Technically this one would possibly not be in Fuchsia, though Google lets OEMs put so many hooks everywhere in the system that tbh it would still be possible to do this kind of fuckup
Fuchsia or not, if Google allows OEMs to add custom vendor calls from the OEM app to the camera driver, then OEMs will still be allowed to do that. They could already forbid it in Android and choose not to.
GL has nothing to do with the OS; GL vendors pretty much ship the same GL across all OSes
This would definitely still happen (in other ways), by all the hooks that Google leave to OEMs to implement their own security systems (and Samsung is the company who brought SELinux in Android, so even though they do a lot of shit, let's not completely ignore them)
This one, ironically, would STILL HAPPEN, at least if we extrapolate how the Android team works. When building Android from main, you get 10000 as the SDK version number to explicitly mention that this is a development tree not to be used.
Fuchsia's theoretical advantage should be that one will be able to upgrade the kernel without upgrading the drivers. But again, let's extrapolate from what the Android team does: they create new API versions every year or more, and PLAN THE OBSOLESCENCE of those API versions within 3 years. So currently when an OEM writes their drivers, they get kernel security patches for 6 years after the release of the "original" kernel of that version. With Fuchsia you get down to 3 years.
I want to re-iterate that even though from that perspective, ChromeOS' approach is ideal, it has issues wrt innovation. I believe that the ideal situation is an open but centralized approach like what we have currently with DKMS on GNU/Linux distributions. Except let it focus on LTS Linux kernel versions, and have flags on the DKMS to allow switching to newer LTS. Here "open" doesn't necessarily mean community-led. It can be very well just be that each vendor is responsible for its own product. So like Mali is responsible for their mali kernel driver, BOE is responsible for their panel driver, etc...
This is actually a legit context in which GNU and Linux distinction matters and improves readability. "vanilla Linux" is really referring to GNU.
I actually don't understand the point they are trying to make at all, even from this lens. They say that by not using "vanilla Linux" there is a more well-defined separation between kernel and user space development, when actually the exact opposite is true. By not using the vanilla Linux kernel, the kernel with Android patches is actually more coupled to the Android user space than other user spaces that traditionally run on Linux, like GNU or Alpine.
I believe they are talking more about the hardware abstraction layer, IIRC Android in recent years adopted a more stable interface for hardware drivers.
To my understanding, in Linux most drivers are co-developed with the kernel, as it does not have a stable interface for drivers (one might say that's a pro, as it discourages proprietary drivers).
Vanilla Linux is not even GNU unless you get Linux-Libre, which is mostly FSF-sanctioned to push out every blob.
Another big advantage of Android is the implementation of intents (also compared to Apple's new crippled intents API), which may turn out to become the deciding factor in the race towards the winning AI Agent OS, as it may enable AI to use programs programmatically.
Care to share what makes Android intents superior to what Apple announced on WWDC?
if I'm not mistaken the Intent categories available now are only
Books, Browsers, Cameras, Document readers, File management, Journals, Mail, Photos, Presentations, Spreadsheets, Whiteboards, Word processors
At 1:39:39 in the demo.
I think the point was that these categories were included in the trained model, so it can interact with this kind of "intents". iOS has a very wide-reaching "intent" system, readily accessible from Shortcuts.
I will start to believe the "7 years of updates" promise for Pixel phones when the first model that was released with this promise is approximately 7.5 years old. At that point, I may begin to reconsider my opinion. I fell prey to Google marketing when they released the very first Pixel model - I spent an exorbitant amount of money on it, only to have it utterly abandoned and deprecated, with support and updates dropped just over a year later.
As in all things relating to anything stated by Google with respect to the privacy, availability or expected lifetime of a consumer product, the maxim is not even "trust but verify", it should be "distrust, watch carefully, and assume the worst".
As far as I can tell from contemporary sources, Pixel 1 launched with a promise of two years of major releases + one additional year of security updates [0]. This was in October 2016. They exceeded that, and actually did three years of major releases, and security updates for a couple of months longer than promised, with the last one in December 2019 [1].
Seems like they a) delivered more than initially promised, b) did not drop it just a year after release. How long a support period do you think they actually promised, and where did they promise it?
[0] https://www.androidpolice.com/2016/10/19/pixel-pixel-xl-guar...
[1] https://9to5google.com/2019/12/02/google-pixel-no-updates/
Is this supposed to be impressive? My current PC is over four years old and I'm in no rush to replace it. The same Linux distribution runs on decades-old HW.
Does your PC include a bunch of noname random hardware, like gyroscopes, modems (which have legal requirements and there are only a handful to choose from, none of which has open-source drivers), etc?
Didn't Google push for mainline integration at some point to completely avoid the need for manufacturer specific BSPs?
Close, but not quite. And guess who triggered it? The usual suspect, the EU:
https://eur-lex.europa.eu/eli/reg/2019/2021/oj
I can't find all of it, but part of it is triggered by EU environment protection directives.
https://www.engadget.com/samsung-pledges-seven-years-of-upda...
Samsung offers 7 years of major version upgrades on their flagship lineup starting with the S24. It is not retroactive, although their 5-year policy has been in place for 2021 devices.
It's unlikely that they will offer security patches beyond that point and the mid-range segment still only gets 4/5 years.
Chinese OEMs offer "major" upgrades for longer; however, they achieve this by backporting both Android mainline and proprietary features to older versions of Android, along with security patches.