
Cross-platform Rust rewrite of the GNU coreutils

tetris11
117 replies
5d8h

License.md:

    Copyright (c) uutils developers  
      
    Permission is hereby granted, free of charge, to any person
    obtaining a copy of this software and associated documentation
    files (the "Software"), to deal in the Software without
    restriction, including without limitation the rights to  
    use, copy, modify, merge, publish, distribute, sublicense, and/or
    sell copies of the Software, and to permit persons to whom the
    Software is furnished to do so, subject to the following
    conditions:

Yep. This project is definitely going to be embraced by the community in the long run, and definitely supplant GNU coreutils. The MIT and GPL ideologies are completely aligned.

PoignardAzur
22 replies
5d8h

To be clear, the issue is that this code that people produced for free is too free and evil corporations might... what, profit from it more than they currently profit from coreutils' existence until eventually all software is locked and proprietary forever?

Sarcasm aside, I don't see how the existence of this code or its hypothetical adoption would hurt anyone. Ideology isn't worth following for its own sake, there has to be a mapping to physical reality, and any mapping that interprets a rewrite of coreutils as bad because it's too permissive is extremely suspect to me.

Like, what does the GPL infectiousness even protect here? Coreutils is decades old. Are we worried Microsoft is going to EEE it?

quickthrower2
11 replies
5d8h

Maybe... AWS releases a proprietary source-unavailable CoreUtils that they stick on their VMs. They embrace it! They extend it! And then eventually the average dev needs to use AWSCoreUtils (source unavailable, with a restrictive license saying you can't sue Bezos, etc.) to get anything done, as everyone else's bash scripts assume its functionality.

arccy
5 replies
5d8h

instead we have EEE by GNU coreutils on proper POSIX tools and we're forced to install the GNU toolset to make scripts work

throwawayqqq11
1 replies
5d7h

Please elaborate on the "extinguish" part from GNU/GPL.

Edit: Doesn't the existence of uutils already contradict that?

Gabrys1
0 replies
5d7h

"forced to install the GNU toolset to make scripts work" is the extinguish part to my understanding

baq
1 replies
5d7h

1) you can install them no questions asked

2) you can modify them no questions asked

3) thus you can get shit done

4) you can’t keep the changes you needed to make to yourself so the next one after you can start at 1)

I don’t see issues here

goodpoint
0 replies
5d4h

4) you can’t keep the changes you needed to make to yourself so the next one after you can start at 1)

Actually you can even keep the changes. You only have to share the sources if you are publishing compiled binaries.

Gabrys1
0 replies
5d7h

Yes, but the source is available and anyone can fix the bugs.

PoignardAzur
4 replies
5d6h

How does the existence/adoption of a Rust version of coreutils under MIT make that more likely to happen?

rakoo
2 replies
5d5h

Because you can't do that with the GPL. Any modification to GPL code must be GPL, hence available to everyone. That's not the case with non-copyleft licenses.

ksherlock
1 replies
5d1h

The application service provider loophole lets Amazon do that with GPL software today. GNU coreutils would need to be relicensed as AGPL to prevent it.

https://www.gnu.org/licenses/why-affero-gpl.html

rakoo
0 replies
4d3h

The AGPL is for services accessed through the network. In the specific example above, it was about a set of utils available by default on a VM, which is closer to what the GPL covers (it's the same thing as the Amazon Linux that they must make available, really)

tcmart14
0 replies
4d23h

This specific case doesn't, since there have already been permissively licensed alternatives to GNU coreutils for years.

pjmlp
7 replies
5d8h

See Apple and Sony contributions to FreeBSD.

LeFantome
4 replies
5d7h

You missed his point. How is FreeBSD harmed by corporations using it? Specifically, is FreeBSD being extinguished (other than by Linux)?

Snow_Falls
3 replies
5d6h

They are not harmed; it's just that billion-dollar companies are taking volunteers' hard work to get even richer while refusing to contribute back. Now, the guys developing BSD obviously don't mind, but many people do care.

vlakreeh
1 replies
5d6h

Well if the uutils people don't mind and they're writing the code, what's the problem with their choice of license?

burntsushi
0 replies
5d1h

I've asked this question too. And the answers I've generally gotten are something like, "you may not feel like you're being exploited, but in my judgment, you are."

I take this as effectively equivalent to "you're unable to give consent" and/or "you don't have agency."

But that's the kind of mindset you're likely dealing with here.

goodpoint
0 replies
5d4h

End users are very much harmed.

PoignardAzur
1 replies
5d6h

That's pretty terse, and it doesn't answer my question.

Like, let's extrapolate that you mean "Apple and Sony profited from FreeBSD and didn't contribute back, and that's bad". Let's assume that the strongest possible version of that statement is true.

How does that ever translate to "a Rust port of GNU coreutils under MIT license is bad"? If Apple and Sony haven't contributed to GNU coreutils in ~30 years, they're not going to start now, MIT alternative or not. There's not going to be an Apple executive who's thinking "We've held out as long as we could, but now we're going to need to start integrating coreutils in MacOS and contributing changes back... wait, no, these Rust suckers released a MIT version, I guess we can keep being greedy!".

palata
0 replies
5d5h

Companies cannot always do without copyleft dependencies. Even Google cannot write a proprietary kernel just like that (see Fuchsia). For smaller companies, rewriting FFMPEG or Gstreamer is impossible.

The more copyleft code there is out there, the less choice there is for company executives to go for the proprietary solution. As a user and as an employee, you probably should care about that. As a BigCorp executive getting an indecent salary from the proprietary code that your employees wrote but that does not belong to them, of course you will not like copyleft. But most people are employees.

throwawayqqq11
0 replies
5d8h

It's not about profit; that comes second. It's about control. EEE could have started way earlier in history if there were no fundamentalist GPL. They can't embrace what they do not control via copyright.

We are standing on the shoulders of copyleft giants, while patented nonsense and enshittification are hitting pretty much everywhere.

I think you are ignoring the incentive structure ruling the markets. Without strict control against the next sociopath, and the next, and the next, it's just a matter of time before even uutils gets embraced.

globular-toast
0 replies
5d7h

To be clear, the issue is that this code that people produced for free is too free[?]

No, it is not free enough.

It is indeed paradoxical, but you have to remember that we are not dealing with the natural world, we are dealing with a world with copyright in place.

If the GPL is viral then copyright is a zombie. You can whack it once, but it will never stop coming back. GPL has to be viral because of the zombie-like nature of copyright. It's a hack and a pragmatic solution but defeating copyright for software entirely would be the ultimate solution.

ghusbands
20 replies
5d8h

Yep. This project is definitely going to be embraced by the community in the long run, and definitely supplant GNU coreutils. The ideologies are completely aligned.

By tone, I assume this is sarcasm. Could you perhaps clearly state the issue?

serf
19 replies
5d8h

here's an OK synopsis of why it's different, even if I don't necessarily agree with the 'style of language'

[0]: https://news.ycombinator.com/item?id=37383680

AlienRobot
18 replies
5d8h

Wouldn't GPL be perfectly fine with this if it was a library instead? I don't get it.

Deeg9rie9usi
17 replies
5d8h

People seem to hate the GPL for no reason these days. As you said, since it is not a library, the GPL would be perfectly fine.

baq
12 replies
5d7h

They can’t use GPL software at their $DAYJOB most likely.

cmrdporcupine
11 replies
5d7h

Often can't contribute either. When I worked at Google, contributing to open source projects was permitted with the right paperwork etc. but there was extra caution around GPL licensed projects, esp V3.

Which to me is more of a sign that the GPL actually works than anything else.

anonymous_sorry
6 replies
5d6h

In a corporate context, is there any risk or downside associated with making a specific contribution to GPL code which doesn't also apply to MIT/BSD?

The only downside I can perhaps think of is a strategic one: endorsing the use of the GPL in general and contributing to a healthy copyleft ecosystem, which may be disadvantageous to a giant corporation like Google.

rakoo
3 replies
5d5h

If you make a contribution to a GPL project, the entirety of your contribution must be GPL. If you're using some internal library, you must make your library GPL. If your library is GPL, anything that depends on it must be GPL.

For-profit companies don't like sharing and making the community better if it doesn't make them 10x richer, so they don't like GPL.

oneshtein
1 replies
5d5h

If you're using some internal library, you must make your library GPL.

Not true at all. You cannot alter the license of a library if you are not the author. You can select or change the license for your own code only.

rakoo
0 replies
4d3h

I was talking about an internal library as in "a library developed inside My Company that is not made public"; as someone working for My Company, you'd have the responsibility to change the license of said library.

Of course if you're using a library you're not an author of, the library must already be compatible otherwise you just can't use it.

anonymous_sorry
0 replies
3d5h

You can define what your contribution is, right? You can publish any code you like under the GPL, with zero further commitments as far as I'm aware. It doesn't have to be working software, and it doesn't all have to be licenced the same. The copyright is still yours and you can do what you like. You can sell the software. You can enhance it and keep the enhancements proprietary and secret. You can use and sell the enhanced software without providing source code.

The only obligations come if a company (or individual) accepts GPL contributions from outside, because they don't own the full copyright any more and have to abide by the licence of the code they accepted from outside.

oneshtein
0 replies
5d5h

Yes, there is a risk in contributing back to an open source project in the form of bug fixes or code updates.

goodpoint
0 replies
4d21h

For a smaller company, [A]GPL would provide only upsides, especially in terms of protections against freeloaders, and that's why companies are using it.

goodpoint
2 replies
5d5h

And, amazingly, people manage to blame GPL rather than blaming google.

baq
1 replies
5d3h

of course! they get money from google and zilch from GPL.

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

goodpoint
0 replies
4d5h

And most of that salary comes from corporate surveillance or SaaS...

palata
0 replies
5d5h

Exactly! As an employee, I don't give a shit about the paperwork needed by my company. What's best for me is to have my code open sourced. If my company is forced to depend on GPL code (because it can't write it from scratch itself), then that's good for me as an employee because it means that my code will have to be published according to the GPL!

Employees should push for copyleft. Whoever owns the code obviously wants the choice to keep it proprietary, but the vast majority of devs do not own their code.

vlakreeh
3 replies
5d5h

People have reasons, personally I don't like placing the burden of a viral license on people. I don't write code to further a copy-left cause, I write it to build things and make it easier for other people to build things.

palata
2 replies
5d5h

But by doing that, you don't protect the users. If a commercial entity writes software that depends on, e.g., an LGPL library, then I as a user totally benefit from the fact that it is LGPL. It may even allow me to update the dependency myself, be it for security reasons or for compatibility reasons (I could patch something in the library that would make the whole project work on my machine).

By not using copyleft, you make it easier for others to make proprietary products with your code. But is that what you want? As a user, are you happier with a proprietary Windows or an open source Linux?

AlienRobot
1 replies
5d3h

I prefer Windows, actually.

I think the problem with this argument is that you think the user wants the source code. I agree that would be great ideologically but 99.999% of the people using computers couldn't care less. Not just non-developers but even the best programmers would rather just pay money to not have to fix someone else's bugs. Nobody wants to waste their time replacing libraries. They just want to use software. If we're being pragmatic about what is good "for the user" then the best thing for the user is that there are as few obstacles as possible for developers to create products for the users to use.

If we consider the dichotomy of use the viral license or make no software, then the user will always prefer to have software that exists in proprietary form than having no software at all. If a company doesn't like GPL, they'll simply not use GPL code, and maybe not using GPL means their product is not viable anymore because they have to rewrite everything themselves, and now something that was going to be made isn't going to be made because they can't get free labor. Is that good? According to GPL and FOSS, yes. According to the user that wanted to use that piece of software, no.

I have to add this feels so awkward because every time there's a thread about AI you'll find someone saying that the fact that data is copy-able renders copyright a thing of the past, and then you turn around and you see GPL and no-no-no-no-NO! You want to copy MY source code, my LABOR, and use it for free in your proprietary products? That's completely unfair! And next week there's a new OpenAI lawsuit and everyone's like "if you didn't want to get scraped, shouldn't have posted it on the internet." It's so awkward.

palata
0 replies
5d2h

I think the problem with this argument is that you think the user wants the source code.

I want the source code, so I want developers to push for copyleft. I am a developer, so I push for copyleft on my end, for the others who want the source code like me.

even the best programmers would rather just pay money to not have to fix someone else's bugs

I regularly need to patch some (open source) software I use. I probably couldn't pay for it (not sure if anyone would fix it for me), so I don't have a choice. The proprietary alternative is that I just can't have the damn software fixed.

If we're being pragmatic about what is good "for the user" then the best thing for the user is that there are as few obstacles as possible for developers to create products for the users to use.

When you make it as easy as possible for developers, you end up with ElectronJS crap and similar. With proprietary protocols, nobody can write a different client (e.g. to make it more accessible). I don't consider this good for the user.

If a company doesn't like GPL, they'll simply not use GPL code, and maybe not using GPL means their product is not viable anymore

First, there is not only the GPL. The weakest copyleft I know is MPLv2, which requires you to share the modified files. If a company's product is not viable under those conditions, then that product is simply worthless.

Second, as an open source developer, I don't really give a shit if another company that is not paying me is not viable with my copyleft license. I am not working to help them make money while making the world less convenient for me.

I have to add this feels so awkward because every time there's a thread about AI you'll find someone saying that the fact that data is copy-able renders copyright a thing of the past

It's not weird: I don't want AI to train from my code without permission. BigTech can abuse my licenses because they are too powerful for me to do anything about it. It's just a shameful workaround, and I am hoping that new licenses will come out that explicitly forbid ML training, so that I can update my licenses.

everyone's like "if you didn't want to get scraped, shouldn't have posted it on the internet."

I surely don't say that. People who say that have no idea how copyright works, there is nothing worth discussing there.

rcxdude
18 replies
5d8h

I think at this point the vast majority of linux users are indifferent to the difference between copyleft and non-copyleft licenses (and of those that do care, they are likely to lean against copyleft). Whether this project actually has enough advantages (and small enough disadvantages: compatibility still seems to be far from complete enough) to be adopted by users or distributions is another matter, though.

denotational
11 replies
5d8h

and of those that do care, they are likely to lean against copyleft

What is this based on? Asking in good faith.

rcxdude
10 replies
5d8h

My impression is that the vast majority of Linux users who are thinking about code licensing are using it commercially, and so copyleft represents an extra burden on them.

(also, IMO, copyleft vs non-copyleft doesn't seem to make a huge difference to the outcomes copyleft advocates claim to care about, especially to the average user - most MIT licensed projects still generally receive contributions from people using them commercially, and some GPL projects (linux itself suffering from this greatly) still suffer from being fragmented by proprietary patches and forks, even if the license is being complied with.)

zelphirkalt
6 replies
5d8h

I would say copyleft licenses can even be better for a business, since they make sure that competitors cannot grab and run but need to contribute back. Businesses just need to understand that copyleft also applies to the competition when they use copyleft-licensed projects.

cmrdporcupine
4 replies
5d7h

Agreed, and as a friend pointed out to me years ago, there's nothing stopping people who author GPLd code from making it clear that if a corporation wants a different licensing arrangement for something that's under GPL, they're perfectly capable of negotiating that with the author and paying for it.

That this doesn't happen often shows the motivation of most companies using open source is ultimately ... just looking for free work.

insanitybit
1 replies
5d7h

That this doesn't happen often shows the motivation of most companies using open source is ultimately ... just looking for free work.

Entering into contracts is really annoying. Companies don't just hand out checks, they have accountants who ask "what is this for, how is it being spent, how do we pay taxes on it", etc. "Pay this random developer that you have no pre-existing relationship with" is not as trivial as it sounds.

Someone
0 replies
5d4h

There’s also “how do we know they’ll be around three years from now for support?”. Working with a company feels more secure in that respect, especially if that company is large and reasonably old.

The “this” in “That this doesn't happen often” can also refer to “people who author GPLd code from making it clear that if a corporation wants a different licensing arrangement” not happening very often.

Yes, there are projects that explicitly mention it, but I think those are in the minority.

_flux
1 replies
5d6h

This can only happen if they are truly developing the project themselves and accepting external contributions only via a CLA, and I expect the mere existence of a CLA to reduce the amount of external contributions in itself.

cmrdporcupine
0 replies
5d3h

Yes, that's a fair point. If I recall correctly, Googlers are just completely forbidden from contributing to projects with a CLA.

rcxdude
0 replies
5d7h

It's rare that this weighs heavily compared to the administrative burden of complying with the GPL. Most open-source code (especially things like coreutils) is effectively commons: something which many companies rely on but doesn't represent a competitive edge to them. So there's already a strong incentive to contribute upstream: it's easier than trying to maintain your own fork. A competitor hoarding their own contributions is more likely to be shooting themselves in the foot than giving themselves an edge.

palata
0 replies
5d5h

most MIT licensed projects still generally receive contributions from people using them commercially

My experience working professionally in a permissive open source project is that the vast majority of commercial use is not contributed back (very often it would be very low quality anyway).

some GPL projects (linux itself suffering from this greatly) still suffer from being fragmented by proprietary patches and forks

I don't see this as an argument in favour of permissive licences: proprietary patches and forks are also happening with permissive licences.

palata
0 replies
5d5h

My impression is that the vast majority of Linux users who are thinking about code licensing are using it commercially, and so copyleft represents an extra burden on them.

My impression is that it is like wearing a face mask when you are sick. You should wear a mask to protect the others, but people generally don't give a shit.

There is no extra burden in choosing a copyleft license for your project (you still have the copyright, you can do whatever you want with it). What creates extra burden is when you use copyleft dependencies. But that's not your choice.

And as a user, I'm pretty sure you are happier if code that you don't own is open source. All devs should push for copyleft licenses because it benefits them as users. But for some reason many are pushing for whatever is best for companies who employ them.

nindalf
0 replies
5d7h

most MIT licensed projects still generally receive contributions from people using them commercially

I agree. I’d speculate that projects being hosted on GitHub is a big part of that. With the code and discussions around it happening in the open, and the barrier to contribution substantially reduced, it’s hard to justify forking it internally and maintaining the fork.

The path of least resistance is to clone it from GitHub every time and to contribute changes back, regardless of license. For most projects we don’t need the GPL to compel this behaviour because that’s not what many (but not all) companies are doing in any case.

This isn’t a blanket statement though. Obviously the size of the project matters, the scope of the changes made etc. For example phone manufacturers always fork Android and add their changes on top. They only release the code to be compliant with the GPL, and they do so grudgingly. Having Linux licensed under GPL benefits us.

But other projects? Doesn’t make a difference. Who is forking (for example) git and making private improvements they don’t contribute back? No one. It would be just as feature filled, useful and successful whether it was MIT or GPL.

roenxi
4 replies
5d8h

The vast majority of Linux users are indifferent to lots of important things; operating systems are complex and there are too many parts to keep abreast of.

However, the foundations are largely GPL-licensed because the GPL is superior in the long term. If people want to donate their time to companies then good luck to them, but it is a stupid strategy to donate free time to companies. If a company is paying good money for someone to write BSD-licensed software then sure, but otherwise the developer is playing a mug's game. Not a universal rule, I can see why some crypto or standard reference libraries might want to be BSD licenced. But by and large it is an invitation for parasites to attach.

Snow_Falls
1 replies
5d6h

Hell, even library-type software could be licensed under a "weak" copyleft license such as the LGPL.

palata
0 replies
5d5h

Or worst case even MPLv2!

rejschaap
0 replies
5d2h

I would expect more permissive licenses in the future. GPL was instrumental in changing the world to an open source mindset. It fixed the problems with software that became obvious in the 90s. The world has evolved and the restrictions just feel outdated now.

cmrdporcupine
0 replies
5d7h

Agree, tho worth pointing out that GPL licensed software is still subject to parasitical behaviour from SaaS companies. While the AGPL can plug that, it is very unpopular.

pas
0 replies
5d7h

Sure, users shouldn't care. The ecosystems ought to serve the users after all, and it should be sustainable, competitive with other ecosystems, etc. But all these instrumental goals are on the developers, and if carefully choosing licenses for each project can help with this, then it makes sense to pay attention to which project has which license.

Coreutils? Probably here the importance is to provide the interface the users want. And provide it everywhere, thus allowing the ecosystem to grow and be able to move to other platforms even. This might require a BSD-like license. (These tools evolve slowly, and usually don't represent some huge know-how that other ecosystems want to lift; here the well-known, efficient CLI experience is the value, which might be fair-use copyable, but as we see, Apple for example doesn't care, and thus basically users are left with worse defaults.)

Filesystems? Stability, cross-platform compatibility, but also usually decades of extremely valuable battle-testing culminate in a specific codebase+ecosystem context. That's hard to replicate, though simple copying makes it much easier than a rewrite. In practice with ZFS we see that the licensing incompatibility serves to provide fewer options for users.

Drivers? Databases? Kernel? Where does the TiVo problem rear its ugly head? And what does the relatively lackluster GPL enforcement track record tell us here? What about the more important problem of non-upstreamed but-shitty-as-fuck Android drivers? Are they good or bad for users? Which ecosystem are they a part of?

So it seems that even if one cares about licenses, the big takeaway is that they matter a lot less than the other things required for a successful source-sharing, community-driven project, OSI-approved or not.

(For example I have no idea what license Terraform has .. okay, I checked, it's MariaDB BSL, and OpenTF is now MPL2. So I don't think MIT/Apache2 or AGPL3 would have made a difference in Hashicorp's hostile stewardship.)

Affric
16 replies
5d8h

Yep.

GPL is the greatest thing that has ever happened to software and this stuff seeks to destroy it.

arghwhat
15 replies
5d8h

As we all know, evil corporations have waited all this time for the opportunity to ship modified versions of specifically non-BSD coreutils without sources to end-users.

toyg
12 replies
5d8h

It's mostly that evil corporations have waited to get free work from the community on everything, and they're now getting it.

cmrdporcupine
8 replies
5d7h

I think a broad bias against copyleft has existed because people felt that it got in the way of just "getting things done", and that significant efforts contributed under BSD-style licenses would just "get rewarded" through patronage from the larger entities in Silicon Valley, leading to jobs, donations, etc.

I think that worked for many while the job market was pumping, VC was flowing, and the BigCorps were hiring and spending money like crazy. So people didn't feel particularly exploited.

But I'm expecting the bias in the community will shift back to copyleft as the BigCorps show their true nature with the rounds of layoffs, mandatory RTO, cutbacks in perks, etc.

Why spend a single moment of your time doing unpaid, free work for these people?

I've personally gone back to putting my (modest and used by nobody, to be fair) projects under GPLv3.

enriquto
5 replies
5d7h

I've personally gone back to putting my (modest and used by nobody, to be fair) projects under GPLv3.

Consider using the AGPL. Your users will be better protected by that. (If they use your software via a third-party server, the middlemen cannot forbid your users from accessing and modifying your source code.)

NavinF
2 replies
5d3h

In practice AGPL is treated like a source-available license, not an open-source license. Most devs keep their distance from it

enriquto
0 replies
5d1h

What a world we live in, mate. What a world.

LaGrange
0 replies
5d2h

Good.

sokoloff
0 replies
5d7h

If you think “your users” could include those middlemen, GPL might provide that subset of your users more useful/usable rights. There are reasons for all three of GPL, LGPL, and AGPL to exist. No one of them is best for all circumstances of software.

cmrdporcupine
0 replies
5d7h

To me it's a balance of things. If I want to ever take external contributions, it's even harder with AGPL. So GPLv3 is sort of the compromise.

The stuff I write isn't likely to be used by anybody who isn't "weird" and nerdy anyways so I don't fear theft from SaaS right now.

palata
1 replies
5d5h

Agreed.

I don't really get why some employees push for permissive licenses. As an employee, my code belongs to my company, not to me. If I somehow have to depend on e.g. a GPLv2 project, then suddenly my code gets a higher chance of becoming public, which is better for me as an employee.

Say my company allows me to open source my code, then copyleft is (again) better for me as an employee, because I force other companies (those who would use my code) to publish more of their code.

cmrdporcupine
0 replies
5d

I don't really get why some employees push for permissive licenses

It makes sense if you consider that for many people their projects are kind of like a resume or a branding situation. "Owning" a project or framework is like building up a portfolio. And for some GitHub is as much a social network as it is a utility. I was definitely in this mindset after I left my long-term job and needed to hunt for work, and I understand the motives in a sense. If you have your stuff under permissive license, more people will use it, and your own "brand" is bettered.

But I think in the long run it's not great.

arghwhat
2 replies
5d7h

Licenses should be evaluated in context.

1. For anything internal, they already get free work from the GPL. They can even modify it - the side-effects they find unfortunate only apply if they need to distribute it to outsiders, as only those receiving a copy need access to the source.

2. For anything external in this particular example, they can either use GPL projects unmodified (no effort required on their part in that case), or they could get all these bits from BSD with a BSD license already and do whatever they want.

GPL - and more importantly, AGPL - is definitely important for full-fledged products that can be monetized and abused, with improvements never made available. Note that no common license requires any degree of upstream collaboration.

coreutils on the other hand isn't such a full product. This project hardly enables anything new - they could just use a BSD userspace or other portable coreutils clone - but if I end up exposed to some included coreutils in a proprietary product, I'd much rather they be able to easily include a proper one. The same logic applies to many other things too.

toyg
1 replies
4d4h

> GPL - and more importantly, AGPL - is definitely important for full-fledged products that can be monetized and abused

With GPL3, that covers pretty much everything from laptops to online databases. Which is why Apple, for example, has worked very hard to rip any GPL code out of their systems.

> This project hardly enables anything new - they could just use a BSD userspace or other portable coreutils clone

Yes, but it is a competitive disadvantage to ship something less familiar. Again, the Apple example shows most developers would run to replace the shipped utilities with GNU versions instead, which was good for the ecosystem as a whole - less so for Apple, of course, which lost some control on target toolchains and suffered a competitive disadvantage towards Linux machines (minor, sure, but every little helps).

It's sad that this particular project is headed by Mozilla people, who are supposed to care about the health of the opensource community as a whole, and still does the wrong thing - probably out of a (vaguely desperate) hunt for widespread adoption of Rust, at the expense of the overall free-software community.

sylvestre
0 replies
1d23h

This project isn't related to Mozilla at all.

mhh__
0 replies
5d7h

Clang on Mac is closed source. We're already swinging back away from being able to know what went into our binaries

globular-toast
0 replies
5d7h

Every time someone has said something to the effect of "it'll never happen" it happens, and then some.

You need to understand how evil corporations work. They may be composed of perfectly reasonable people but, taken as a whole, they are literally psychopathic. It's not that they are "waiting" for something to be possible, it's just that at every step they will take anything and everything they can and give back as little as they possibly can.

Have you ever been in such a corporation and tried to say you should release source code when you don't have to? You'd be laughed out of the room.

Don't be fooled by companies like Microsoft taking part in "open source". They have simply calculated that right now it's advantageous for them to appear that way. But they are always taking the maximum and giving back the minimum, no matter what. We won't change this, but we can raise what that minimum is. That's why we have the GPL.

pjmlp
12 replies
5d8h

People like to shit on GPL, yet Linux and GCC would never have happened if it wasn't for it.

FreeBSD has after all enjoyed lots of upstream collaboration from Sony and Apple. /s

I don't care about what UNIX flavour I get to use; had it not been for Linux, most likely I would still be enjoying Solaris anyway.

Which is what would have happened if BSD wasn't tainted by the lawsuit.

josephg
8 replies
5d7h

People like to shit on GPL, yet Linux and GCC would never have happened if it wasn't for it.

Why wouldn't it?

FreeBSD seems alive and well, and it's doing quite well for itself despite not having as many upstream corporate contributions. And why should FreeBSD care if Apple, Sony and Microsoft repurpose their software? If something I wrote ended up in macos and the PS5 I'd think that was pretty cool.

cmrdporcupine
3 replies
5d6h

You'd think it was pretty cool, sure, we all would... but imagine you lose your job, the job market sucks, etc. you're on your 8th month of unemployment and having a hard time making mortgage payments and you see a company like Sony shipping something based on your own open source stuff... but without contributing back, without attribution, and without compensation?

Real world scenarios like this is what led people to embrace the GPL.

When times are good, corporate use of open source trickles down to the community in the form of patronage and employment. It is not always the case that this happens during "bad times."

tzs
2 replies
5d4h

> If something I wrote ended up in macos and the PS5 I'd think that was pretty cool.

You'd think it was pretty cool, sure, we all would... but imagine you lose your job, the job market sucks, etc. you're on your 8th month of unemployment and having a hard time making mortgage payments and you see a company like Sony shipping something based on your own open source stuff... but without contributing back, without attribution, and without compensation?

...whereas if you had used GPL you would still be unemployed in a lousy job market on your 8th month of unemployment and having a hard time making mortgage payments, but would have some Sony source code to look at and use.

cmrdporcupine
1 replies
5d3h

But I'd have sweet sweet moral vindication ;-)

josephg
0 replies
4d11h

Nah. Sony would probably just write their own code, or build their software on top of something BSD/MIT licensed. You'd still be unemployed, just you also wouldn't be able to write "My code is in the PS5" on your resume.

pjdesno
1 replies
5d2h

GCC was originally written by Stallman, and greatly improved through the late 80s and early 90s, in part IIRC by a number of hardware vendors who thought it was a better way to get a high quality compiler than paying Whitesmiths, MetaWare etc.

BSD could have become fully free with the portable C compiler (pcc), but in the early 90s it only supported the VAX and Tahoe (CCI Power 6/32) architectures, and I don't think it would have been a good foundation for a modern compiler. Various folks had hacked versions of pcc in the 80s to support other architectures, but the code didn't go back upstream.

Without GCC, there's a good chance that the entire FOSS ecosystem wouldn't exist.

pjmlp
0 replies
5d

Also, GCC was largely ignored until Sun broke the UNIX delivery model, splitting Solaris into user tooling and separately paid developer tooling, a move quickly followed by other UNIX vendors.

mardifoufs
1 replies
5d4h

Ok, it would be alive. But I'm not sure you can compare Linux and freebsd and not see a difference in "liveliness". I don't think Linux is technically superior to freebsd, or at least wasn't for most of its life, yet Linux still gets orders of magnitude more usage/contributions/drivers etc. So it must be something else, and I think the license is a pretty good factor in this case.

josephg
0 replies
4d22h

I suspect it’s just momentum. Just like how game developers target windows. Success breeds success.

rascul
2 replies
5d3h

People like to shit on GPL, yet Linux and GCC would never have happened if it wasn't for it.

Linux wasn't GPL until 0.12. Less than a year after the first release. Not sure what exactly is meant by "happened" here so maybe it fits or maybe not.

pjmlp
1 replies
5d

0.12 hardly mattered to anyone but Linus.

rascul
0 replies
5d

Indeed. Linux was created without being GPL, but it was GPL before it got widespread usage. I guess you meant the second part for "happened".

lifthrasiir
6 replies
5d8h

That happened already, if you haven't noticed yet. Android has used Toybox (0BSD) as its coreutils replacement for a decade. I don't see any reason to particularly criticize uutils for this exact reason at this point.

lucideer
5 replies
5d6h

embraced by the community

Android

Yes Android is definitely representative of the community...

lifthrasiir
3 replies
5d5h

I've interpreted that as the user community, because your statement would be a tautology if it were the free software community instead.

lucideer
2 replies
5d4h

The "community" in this case would be one having input to selecting which version of coreutils any given distro may bundle. That's typically not users, even in more engaged linux distro projects. It's certainly not android users.

And while any corporation is ultimately made up of individuals with presumably some limited autonomy in software dependency choices, it seems a bit of a stretch to refer to google employees as a "community" fitting into this context.

lifthrasiir
1 replies
5d3h

That's an extremely narrow definition of the community in my opinion. And even that doesn't adequately consider the vast majority who do have but do not exercise that autonomy: Linux container users won't care if the container is based on GNU coreutils or Busybox or Toybox, they only care about the image footprint. (And this made Alpine Linux huge in the world of containers.) I believe that, even under your definition, it is only a small fraction of that community who explicitly want GNU coreutils. Not necessarily a good thing, but just a sad reality.

lucideer
0 replies
5d2h

That's an extremely narrow definition of the community in my opinion

It is extremely narrow, simply because that's the narrow definition that's relevant here. The gp was discussing whether or not it would be used - the people responsible for the decision to use it are... the people in the community making said decision.

This isn't a subjective debate on the definition of the word community. This isn't about excluding people on merit. It's just literally a discussion of who's going to make or break adoption.

I'm not sure what point you think you're making talking about users caring about coreutils - ultimately that depends on those users being engaged in the decision-making process.

Alpine is btw an excellent example - one can make a good argument that that example supports adoption of this Rust rewrite. The only point I was contributing is simply that Android is an extremely bad example to use in such an argument.

berkes
0 replies
5d5h

I guess it very much depends on any definition of "the community" then.

noirscape
5 replies
5d8h

Your sarcasm aside, GNU doesn't hold the exclusive right to make coreutils, nor can it demand that they be put under the GPL. After all, they're just an implementation of the POSIX spec (which, to be fair, tries to match existing implementations), and non-GPL coreutil utilities do exist (both OpenBSD and FreeBSD, for example, have their own sets of coreutils that are, well, BSD licensed).

Diti
2 replies
5d7h

Where can one read the part of POSIX which describes coreutils? I always thought those were just GNU tools.

noirscape
0 replies
5d6h

Just check the spec itself. It pretty much describes the expected commandline syntax and behavior for commands the coreutils provide[0].

GNU coreutils is on its own a superset of a part of the POSIX spec (they started as rewrites of several Unix utilities), which mostly means that they added a bunch of extra flags and options. The other half of this is GNU bash which includes the non-command parts of POSIX such as shell syntax.
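To make the superset idea concrete, here's a toy Rust sketch (the flag classification follows the POSIX.1-2017 cp spec and the GNU cp manpage; the program itself is purely illustrative):

    // Toy table separating cp options mandated by POSIX.1-2017 from
    // GNU extensions. The classification follows the POSIX spec and
    // the GNU manpage; the program only illustrates the superset
    // relationship between the two.
    fn main() {
        let flags: &[(&str, bool)] = &[
            ("-f", true),         // POSIX: force
            ("-i", true),         // POSIX: prompt before overwrite
            ("-p", true),         // POSIX: preserve attributes
            ("-R", true),         // POSIX: copy recursively
            ("-a", false),        // GNU extension: archive mode
            ("--reflink", false), // GNU extension: copy-on-write clones
            ("--sparse", false),  // GNU extension: sparse-file handling
        ];
        for (flag, is_posix) in flags {
            println!("{flag}: {}", if *is_posix { "POSIX" } else { "GNU extension" });
        }
    }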

POSIX itself meanwhile is basically a reverse spec (very much like how the C standard isn't really a standard, it's just a collective set of rules all notable C compilers follow); it was made to unify all the various Unix offshoots that were being developed and define a common set of commandline utilities and rules that they all shared. (Nowadays, that's mostly the BSDs, Linux and MacOS. Windows is the only notable OS that doesn't really do anything with POSIX besides a few aliases.)

[0]: See for example the POSIX reference for cp and you'll see it's pretty similar to the GNU cp manpage - https://pubs.opengroup.org/onlinepubs/9699919799/utilities/c...

ksherlock
0 replies
5d2h

The open group documentation (which is the source of truth) is also available as manual pages (via the Linux man-pages project) so you can access them from your terminal, eg `man 1p mv`. On ubuntu/debian, they're in the manpages-posix (1p) and manpages-posix-dev (7p/3p) packages.

https://mirrors.edge.kernel.org/pub/linux/docs/man-pages/man...

quickthrower2
1 replies
5d8h

It isn't about that though. Of course GNU doesn't hold that right, who said they did?

noirscape
0 replies
5d8h

Even then I think the amount of people who care about MIT vs. GPL isn't large enough to be relevant.

"Communities" (really, just distro maintainers, I don't think most people will install their own coreutils) aren't really that aligned to the GPL as the parent likes to imply they are (the only real license alignment usually is "no proprietary if we can help it", which makes GPL vs MIT equal and probably even leans slightly towards MIT because that permits more functional systems in restrictive environments.)

A distro's choice will probably come down to feature completeness, safety down the line and ease of maintenance. In that regard, uutils is a much scarier thing for GPL adherents than the coreutils, which suffer from the classic GNU Project bloat problems as well as being written in C, neither of which will ever be fixed (due to structural GNU Project reasons as well as it just being a 3 decades old software project.)

globular-toast
2 replies
5d8h

I'm so relieved to find the top comment here is about the licence. The trend towards MIT-style licences and away from the GPL is a worrying one.

I don't understand the problem people have with the GPL. The GPL is there to ensure free software stays free forever. In a world with copyright this is the only way to do it. Permissive licences and public domain do not work.

Companies like Microsoft hate the GPL. This alone should tell you that you, an individual enthusiast, should probably love it.

tcmart14
1 replies
4d23h

It depends, like all things. I am fine with the GPL, but there are ways it hinders open source. A primary example: Linux can benefit from BSD- and MIT-licensed code, but it doesn't go the other way; FreeBSD cannot benefit from GPL code. At least on the BSD side of things, that is why the GPL is disliked. For example, FreeBSD developers can write a filesystem under the BSD license and Linux can sort of yoink that code out of the code tree and use it (supposing nothing special needs to be done with interfaces and such). But the reverse isn't true. If Linux developers write a new FS in the Linux kernel and put it under the GPL, FreeBSD can't utilize that code.

goodpoint
0 replies
4d21h

No, FreeBSD can very much benefit from GPL code. They choose not to use it.

david_draco
2 replies
5d7h

Is it even legal to take GPL-licensed code, translate it to Rust ensuring it is exactly compatible, and then release it under a non-GPL license? I thought you would need to extract a clean-room specification from it first, and ideally have separate people extracting the specification and writing the new code.

krylon
0 replies
5d7h

You can use the man pages as your reference. Another comment also mentioned a test suite. I think using that to check how equivalent your attempt is does not violate the GPL.
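Such a check can be done entirely black-box, without consulting the GNU sources at all. A minimal sketch in Rust (the binary paths and test cases here are hypothetical):

    // Run a reference binary and a reimplementation with the same
    // arguments and compare only observable behaviour: stdout and
    // exit code. No GNU source code is consulted.
    use std::process::Command;

    fn outputs_match(reference: &str, candidate: &str, args: &[&str]) -> bool {
        let a = Command::new(reference).args(args).output().expect("run reference");
        let b = Command::new(candidate).args(args).output().expect("run candidate");
        a.stdout == b.stdout && a.status.code() == b.status.code()
    }

    fn main() {
        // Hypothetical cases: does our `basename` agree with the system one?
        let cases: &[&[&str]] = &[&["/usr/bin/sort"], &["include/stdio.h", ".h"]];
        for &args in cases {
            let ok = outputs_match("/usr/bin/basename", "./target/debug/basename", args);
            println!("{:?}: {}", args, if ok { "match" } else { "DIFFER" });
        }
    }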

Y_Y
0 replies
5d

I had the same thought. This very much feels like a derived work. Since they (aim to) replicate the behaviour of the GNU utils, and not just some generic POSIX utils, I think it would be hard to argue the GPL doesn't apply in a legal sense. In a moral sense I think it very much is a violation, since the GNU authors presumably intended for their work to be built on only by further copyleft works.

oblio
0 replies
5d8h

MIT license

???

krylon
0 replies
5d7h

The BSD projects' userlands have been used by other projects. If I want a Unix-like userland that is permissively licensed, I already have several to choose from.

Is there any pressing need to be bug-for-bug compatible with the GNU counterpart?

(Just to be clear, I am not opposed to this project, but I'm not sure how many people will rejoice and adopt this just because of the license. But I'll admit I am going on vibes here.)

jillesvangurp
0 replies
5d1h

Drop-in replacement with fewer license issues, reactionary types endlessly arguing the notion of freedom, code that just works, and lots of development activity. What is there not to like for the likes of Red Hat, Ubuntu, Google, Amazon and others providing Linux-based products?

In all seriousness, if you remove all non-GPL-licensed software from your Linux distribution because it's not pure enough, you'll be left with something that is a bit less comprehensive than your average operating system. It would miss things like a UI, for example, because both X Windows and Wayland are MIT licensed. A lot of popular server software that made Linux successful. And generally a lot of stuff that most of the OSS community uses and depends on every day. Like OpenSSH, which is BSD licensed. Or things like Apache httpd (for which the Apache license was invented), etc.

The GPL vision where all that stuff was going to be GPL licensed just never played out that way. Developers and companies decided otherwise. And this is fine.

It is not a problem that a lot of people believe needs fixing (as evidenced by a lot of non-GPL OSS without good GPL alternatives). But of course if you feel otherwise, best of luck fixing that problem. The world runs on software. A lot of that is free and open source. And most of that is not licensed under the GPL.

devnullbrain
0 replies
5d7h

definitely supplant GNU coreutils

If you're going to make facetious arguments, make them towards beliefs held by people who actually exist.

cmrdporcupine
0 replies
4d19h

FWIW about this topic, gitoxide is another project which seems to be in the vein of "rewrite a fundamental C thing in Rust, and at the same time switch to a non-copyleft license" (in that case, Apache)

An unfortunate trend.

austinjp
103 replies
5d8h

Genuine question: since these are core utils, and probably used billions of times every day, is anyone actually going to switch to this version? I see that the intention is for this to be a drop-in replacement, but some options and behaviours are still different.

To clarify, I'm not intending this as a negative comment. It's an impressive project, and aiming for cross-platform sharing of scripts seems a worthy goal. The graph of progress against the GNU test suite is neat and encouraging. However, I can't imagine anyone in Unixy lands moving to this - although I may be wrong there - so it feels like a "compatibility" project for MacOS and Windows. I'm not familiar with the situation on MacOS but how bad is the compatibility issue? Similarly, on Windows what's the situation with WSL or VMs or even Cygwin? Is performance the issue?

Do people actually want this, is there a market for it? It's got 17.8k stars so seemingly so? Again, I'm not intending to cast aspersions, just trying to understand the audience. This is clearly different from a "for the fun of it" side project.

cdogl
27 replies
5d6h

I mostly agree with your comment, but the source code of many GNU coreutils is quite gnarly. It's ancient code (from my reference point as a 35yo) developed at a time when coding style was different and a much smaller community maintained it.

I think it's important for free software that people coming into the community are enthusiastic to maintain it. It took the wind out of my sails a little when I realised the GNU code base, while it produces critical tools I use every day, is written in a way that I found extremely (unnecessarily) terse and "clever". Tracing how different combinations of flags are handled is not much fun. Documentation is helpful for the user, less so for the tinkerer who is trying to understand the stack.

If this project manages to hit parity with GNU coreutils, and my distro(s) provide support, I'll switch to it purely on that basis.

dmd
24 replies
5d4h

I've interacted quite a bit with some of the authors, and when I've asked "why on earth did you do it this way" the answer is generally some form of "well, it saves nearly 6 bytes on disk in the source code! disk isn't free, son".

It's a different mindset and one that is no longer useful.

dig1
10 replies
5d2h

The thing is that coreutils is used everywhere - servers, desktops, and embedded devices, including machines 30-40 years old. You want to update coreutils on an old SPARC or some even older mainframe? Every byte counts.

So, saving as many bytes as possible is still very relevant.

kstrauser
8 replies
5d1h

How often are we updating coreutils on 40-year-old machines today? Are there more than a couple of hobby machines that old and still regularly updated with new packages?

dig1
7 replies
5d1h

You'd be surprised how many banks, electrical grids, oil rigs, and large ships run on old hardware. If it ain't broken, you don't replace it. But keeping the system up-to-date is always a benefit.

mschuster91
5 replies
5d1h

You'd be surprised how many banks, electrical grids, oil rigs, and large ships run on old hardware. If it ain't broken, you don't replace it.

That attitude has got to die in an age of everything being exploited by bad actors. Anything that has any form of networking should have regular replacement of at least the control-plane components budgeted in from the start.

elzbardico
3 replies
5d

Most of those really old devices are usually air-gapped from the internet for incidental reasons. Either they run on closed networks or have no networking at all.

You're not going to find a 30-year-old machine that controls the pumps of a nuclear reactor on the internet.

spoiler
1 replies
5d

But if these machines are so isolated, and given they've been running with the "if it ain't broken" mindset: why, and more importantly how do they even get updated? If they aren't being updated, why discuss constraints on updating them?

elzbardico
0 replies
5d

I bet they don't get updated, ever.

mschuster91
0 replies
4d22h

You're not going to find a 30-year-old machine that controls the pumps of a nuclear reactor on the internet.

Stuxnet would like to have a word with you. Airgap isn't enough to prevent malware from spreading.

lbhdc
0 replies
5d1h

On the other hand, I think you could argue that it is something to strive for in more systems.

The longer our devices can last the less ewaste we generate. It may not be the easiest to secure, but it isn't impossible.

kstrauser
0 replies
4d23h

I don't buy into that here. If it's that mission critical and hard to replace, I wouldn't touch something as essential as coreutils without a gun to my head. In the very best case, the new version would work exactly like the old one so that the 40-year-old shell scripts holding the thing together would continue working as before.

stonogo
0 replies
5d1h

Coreutils might be everywhere, but uutils will only be wherever LLVM targets, because Rust isn't as portable as C. It's much easier to declare yourself cross-platform when you're not actually competing with real cross-platform software (like coreutils).

bayindirh
8 replies
5d3h

When I asked a graybeard why the variable names were so short, he said that "longer names affected compile duration very severely in the past, so this is why we used the shortest name possible".

While user-facing programs are not faster by any means, computers did get faster in some respects after all.

johnisgood
5 replies
5d2h

I think it is important to strike a balance here; see Java codebases for the other extreme end of the spectrum, where a variable name may not fit within an 80-column width.

acchow
2 replies
5d1h

In 20 years with growing display sizes and resolutions, I’d have hoped 100 or 120 columns replaced 80

spencerchubb
1 replies
5d1h

I thought the column recommendation of 80 was about eye movement and reading speed, not display width.

ncallaway
0 replies
5d

I think it was both, and has become more the former now.

My preferred line-width for code would be 80 characters, not including indentation for the line, with a maximum of 120 characters including indentation.

lenkite
0 replies
2d23h

Apple OS programming beats Java for name length any day, but it never gets a bad rep thanks to Apple's reality distortion field.

CMMetadataFormatDescriptionCreateWithMetadataFormatDescriptionAndMetadataSpecifications

bayindirh
0 replies
5d1h

Of course. I name things as "configuration_storage", but not "configuration_factory_singleton_configuration_stub_constructor". I've been there, seen the horror.

Never again.

dboreham
1 replies
5d1h

This is mostly wrong. Longer variable names never significantly affected the time to run a compiler. Back in the day the run time for compilers was mostly determined by the overhead to load in each overlay (the whole compiler didn't fit in memory). That said, often there was a hard limit on the length of identifiers that wasn't terribly long (e.g. 8 characters), so very verbose names were not possible. The more obvious reasons for preferring short variable names are: 1. it's more like math, and for simple code looks nicer, and the descriptive variable name mafia were still in diapers; and 2. typing was slow and painful, as was editing (use an ASR-33 with 'ed' to find out how painful).

What your greybeard might have been talking about was to do with interpreted languages where the interpreter had to parse through the code in some situations when performing a jump/goto. This meant that the more characters there were between the jump origin and destination, the slower execution was. I remember this being a factor in early 8-bit BASIC implementations, leading to a desire to make programs as terse as possible to achieve fast execution. Later implementations tokenized the source before execution, avoiding that problem.
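To illustrate that jump mechanism, here's a hypothetical sketch in Rust of a non-tokenizing interpreter resolving a GOTO by scanning the raw source (not any real BASIC, purely illustrative):

    // A non-tokenizing interpreter resolves `GOTO 200` by scanning the
    // raw source line by line for the target line number, so every
    // extra character of identifier or comment text slows the scan.
    fn find_line(source: &str, target: u32) -> Option<&str> {
        source.lines().find(|line| {
            line.split_whitespace()
                .next()
                .and_then(|n| n.parse::<u32>().ok())
                == Some(target)
        })
    }

    fn main() {
        let program = "10 LET X=1\n20 GOTO 200\n200 PRINT X\n";
        println!("{:?}", find_line(program, 200)); // Some("200 PRINT X")
    }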

bayindirh
0 replies
5d1h

No, we were talking about how compilers came on at least two disks (one for the compiler and one for the linker) when this came up.

He started these things with punched cards, and worked at the prominent companies of the time (like IBM and HP), and seen tons of bizarre computer architectures on the way like registerless processors.

He might be recalling a very specific example, or about a very slow system, but I remember the conversation well.

That thing has been probably sorted out by the mid of 90s, but I was just starting programming back then.

Oh, I'm a proud member of "descriptive variable name mafia", but not affiliated with the Java branch. They're a different breed. On the other hand, being written real math in C++ (differential equations), having three letter variable names for everything is equally painful. :)

amluto
1 replies
5d2h

It's a different mindset and one that is no longer useful.

I disagree. Sure, 6 bytes is approximately free in most contexts, but 100MB is not free, and 5GB is even less free.

But more importantly, writing code under constraints can force good behavior. For example, BIOS is a legacy mess but it’s a small, self-contained legacy mess that fits in a few kB. Compare to UEFI, which is unbelievably complicated and bug-ridden. A mess like UEFI could not have fit within the constraints of BIOS.

This is not to say that writing obscure code to save a couple bytes of source file size is at all worthwhile any more, but the idea that one should constrain bloat (design bloat, code bloat, executable bloat, network bloat, etc) is very much still valuable.

remus
0 replies
5d1h

I don't think it is so black and white. The original authors made trade-offs that made sense at the time, but in the intervening period the context has changed and those trade-offs don't necessarily make sense anymore. The parent's point is obviously exaggerated, but currently disk is cheap and the cost of inscrutable code, especially code that is run by millions or even billions of people, is extremely high: bugs are harder to spot and harder to fix, the barrier to entry for new maintainers is high, new features are harder to add, etc.

stevehawk
0 replies
4d22h

You say that, but I recently ran out of disk space on my iPhone... and the more I looked at the apps, the more I realized "I bet these are all framework-based apps and not native apps, and no one is tree-shaking their code or doing /any/ of the things you're supposed to in order to minimize disk usage." *shakes fist at clouds*

9659
0 replies
5d2h

except when it is.

viraptor
0 replies
5d5h

This is true. I've tried to read some GNU code in the past (tar) and patch another (df) and... ran away instead. It's doable, but so messy that I'd rather write my own very specific command than try improving one of the old-style GNU ones.

Throw839
0 replies
5d5h

This! I am horrified to touch anything old from GNU!

bayindirh
27 replies
5d5h

This is “just” for changing the license of the coreutils so companies can use a compatible version and not share the source if they get to modify them. In other words, it’s mostly for people who dislike the GPL because of its virality.

More extreme people may see it as a major milestone in “killing” GPL.

wirrbel
8 replies
5d5h

It would definitely not be my motivation. I would not modify the GNU coreutils in many cases because they lack tests and are sometimes written in pre-ANSI C.

At some point the GNU project focused their attention more on leading “gnu + Linux” debates and not so much on developing a stable and secure OS. So now it’s legacy cruft.

bayindirh
7 replies
5d5h

Then why not license uutils under GPLv2+ (or v3+) again, as a spiritual successor to GNU Coreutils, instead of using a "permissive" license like BSD that allows a free-for-all, including but not limited to closing the source?

What happens if the uutils team says that they're not releasing the source of the latest version, but only $CURRENT-5 from now on, moving to a closed-source, open-baggage model? What prevents them from pulling an effective EEE?

Sanzig
4 replies
5d3h

What happens if the uutils team says that they're not releasing the source of the latest version but the $CURRENT-5 from now on, moving to a closed source, open baggage model? What prevents them from pulling an effective EEE?

I don't follow this argument. This can happen under the GPL as well. Nothing stops the copyright holder from relicensing future versions of the software under a different license. The existing versions are already out there under GPL/LGPL/MIT/BSD (there are no take-backs), but the copyright holder is free to do whatever they want with future versions.

What would happen in your hypothetical scenario is that everyone would get really really angry with the uutils team and the latest open source version would get forked by the community. The proprietary one would wither and die, because who in their right mind wants a proprietary set of coreutils?

Fundamentally, the choice between copyleft and permissive is simply whether you care that someone takes your software and incorporates it into a proprietary package. Clearly, the uutils team doesn't care if e.g. Apple makes a proprietary fork of uutils for OS X. And that's their prerogative.

josephcsible
3 replies
5d2h

Nothing stops the copyright holder from relicensing future versions of the software under a different license.

Accepting contributions without a CLA does, because then there are too many copyright holders to do so.

NewJazz
1 replies
5d1h

Doesn't the GNU project require copyright assignment?

josephcsible
0 replies
4d22h

Yes, but I trust the FSF to not do that. There's basically no other entity I'd give the same trust to, though.

bayindirh
0 replies
5d1h

...and a single veto stops everything in its tracks (which is good).

wirrbel
1 replies
4d2h

Armin Ronacher once put it like this (quoting from memory): why should I choose the GPL if I don’t plan on enforcing copyleft?

The GPL is an attempt at software freedom by restricting freedoms to have a lever that leads to greater overall freedoms.

In some ways it has worked for the Linux kernel, if you look at contributions w.r.t. drivers. But I am not so sure that it worked as well in other areas.

bayindirh
0 replies
3d5h

Because, first, it's self-enforcing since the GPL is court-tested; second, it's also a stance.

I choose the GPL because I do not code these tools with my programmer hat on. I code them in my free time, primarily for myself, to be used by people who appreciate the work that went into them and find these tools beneficial.

These tools, while they vary in sophistication, are high-quality items built for their users, and not open to being monetized by another company just because they can build something with or on top of them.

I have no qualms with Open Source software when done honestly. Most of today's Open Source projects are not honest.

Try to deploy a service or compile an Open Source tool solely from the provided source code. 99.99% of the time you'll wish you had landed flat on your face instead, which would be easier and less painful.

I choose the GPL because not only do I promise that you'll be able to build the thing I released, I promise that I'll make it buildable with as little fuss and effort as possible.

What I put out is the complete opposite of run-of-the-mill Open Source software. Free, easy to understand, easy to build, no moats whatsoever. It's a gift instead of window dressing. It's a free offering with no strings attached instead of "fix our code, so you might get internet cookie points in return". It's crafted instead of produced.

wredue
7 replies
5d2h

What is viral about using a GPLd binary executable that doesn’t link?

On the whole, I am firmly against Open Source and would generally go with Source Available, but this seems like FUD.

bayindirh
6 replies
5d1h

Note: Assuming that you're shipping a product which contains coreutils, regardless of whether it's modified or not.

First, you need to ship the source of that GPLd binary with the binary itself. Second, no derivative of that source code (and of the binary as a result) can have a different license unless you're the copyright holder of that source code.

On the whole, I am firmly against Open Source and always go with Free Software, but this seems like incomplete knowledge.

edit: Added the "Note" section at the beginning. There was some confusion, it seems.

wredue
3 replies
5d1h

Sorry dude, but simply using pre-compiled coreutils in your org doesn’t force you to ship the code, nor does it pose any licensing issues.

If you link, or modify the code, then you have some license issues. FUD.

bayindirh
2 replies
5d1h

Sorry dude, but assuming that you ship a product using a precompiled GNU coreutils, you're operating under GPL.

The license states the following[1]:

3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:

    a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

    b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, 

    c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) 
If you're using it within your organization, you don't need to ship anything, but if you're shipping something with it, then you must.

The same applies if you're hosting a SaaS with AGPL license.

Sorry, but I'm too old for this FUD thing. I neither have the motivation, nor the desire, nor the reason to do it.

[1]: https://www.gnu.org/licenses/old-licenses/gpl-2.0.txt

russdill
1 replies
5d1h

He said if you use it within your org, not if you ship it. Hell, you can even modify coreutils to whatever the hell you want. So long as you aren't distributing the binary, you are under no obligation to provide source or a source offer.

bayindirh
0 replies
5d

I think I addressed that in the comment you just answered. There was a misunderstanding, and I tried to fix it.

I have again edited the comment which led to the confusion.

russdill
1 replies
5d1h

You absolutely do not need to ship the source code with the binary. The source offer is sufficient and the most common way of meeting the license obligation.

bayindirh
0 replies
5d

The source offer is sufficient and the most common way of meeting the license obligation.

Yes, you're right. And you play e-mail ping-pong while trying to find the correct person in most cases. IIRC Linksys got into "warmer than usual" water because of not honoring the offer.

giancarlostoro
3 replies
5d2h

It might be the motivation for companies like Apple to use it instead of the GNU utils, but they're still using the old GNU utils (if they still even bother putting it on Apple Silicon versions of macOS).

tcmart14
1 replies
4d23h

I may be wrong here, but my understanding is they don't use GNU utils, at least not any more. They use old FreeBSD utils, which is permissively licensed.

hollerith
0 replies
4d23h

You are not wrong. (Apple never shipped GNU coreutils or GNU tar. It did ship GCC long ago and maybe related packages like binutils.)

bayindirh
0 replies
5d1h

Apple is not against GPLv2 since it doesn't prevent TiVoization, but GPLv3 is a complete no-no for Apple as far as I can see.

russdill
2 replies
5d1h

No, the majority of companies shipping a product like coreutils don't care if it's GPL. They are extremely unlikely to modify it and meeting the licensing requirements with a source offer is not burdensome. The binaries aren't linked against so there isn't some viral nature.

The issue is with GPLv3.

bayindirh
1 replies
5d1h

GPLv3's biggest features are preventing modified source code from being bricked by hardware that refuses to run anything not signed with a confidential private key, and automatically granting any patents which may encumber the source code.

So, companies that are not against GPLv2 but are against GPLv3 don't like to share their source code in a usable form.

Companies are being companies. It's not about the license, but about allowing others to use their code on their products. So they were after smoke and mirrors, and when their mirrors were taken away, they moved to mirrors that still work.

Am I understanding that right?

russdill
0 replies
5d1h

Previous open source licenses focused on making sure that improvements to open source software were shared. They were created at a time when appliance- and IoT-like devices did not exist. Many companies have been very supportive of the model, joined open source groups, and even taken an upstream-first approach to open source projects. Open source projects provide an enormous asset to companies, and it's often in their best interest to see those projects thrive.

But yes, companies are much less interested in people modifying software on devices they sell. It isn't smoke and mirrors, there are just different reasons that different individuals support open source software. And there's different licenses to suit different purposes.

yyyk
1 replies
5d2h

There are already mostly compatible utils in FreeBSD*, as part of their own attempt to pivot from GPL to BSD following GPLv3 (I think it's understandable that an explicitly BSD-aligned project would seek to avoid GPL). So anyone who wants to avoid coreutils could already do it.

* With the exception of diff3.

https://wiki.freebsd.org/GPLinBase

bayindirh
0 replies
5d1h

I don't expect a BSD to host GPLd software, but these projects are built with the spirit of open source, so they don't pull shenanigans.

The problem with permissive licenses started, in my eyes, with companies abusing the freedom they provide by creating *ium projects and pushing closed-source applications on top of pseudo-open-source *ium projects. The same companies also take this software, fork it, add a small thanks in a 3pt font printed in the space between the last page and the back cover, and are done with it.

This is not illegal, but it is against the spirit of open source in general, and harmful in the long run.

As someone said: Open Source is about developer freedom, Free Software is about user freedom.

And developers are users of the software they don't develop, so they're cutting the branch they're sitting on.

hawski
0 replies
5d5h

Toybox has this as a goal. This project, I would think, is about the blessed safety of Rust.

pletnes
11 replies
5d7h

Where I’m working, WSL doesn’t work (well), VMs are banned, and personal experience with cygwin is awful (that was on win7, but still). Everything that helps me develop software on windows which runs on linux (of course!) is warmly welcome.

I’m sure there are a thousand similar but different enterprise environments like this. At home / solo dev it makes no sense, you’d just install ubuntu / get a macbook and get working without any fuss.

Docker works but it makes test runs 100x slower (yes, I measured, it’s not made up).

VBprogrammer
3 replies
5d7h

If your test runs are slower by two orders of magnitude then something is going wrong.

For example, I've had problems on Docker for Mac where accessing lots of (Python) files in a volume mount was slow. I had to use some beta setting at the time which was far better (though I had to do an OS upgrade to use it, which also entailed some IT nonsense).

The overheads of docker are there but if it was that bad people wouldn't use it.

pletnes
1 replies
5d5h

File IO seems to be the problem. I’m guessing antivirus + virtualization overhead. Don’t see what I can change about that (use linux, sure).

Docker is still useful, it’s great for sharing a complicated setup across machines.

prewett
0 replies
5d1h

If I recall correctly, NTFS updates a file's access time (or some such) every time anything happens to it, so by default Windows' file I/O is slow. (Antivirus, of course, will not be helping that.) Once upon a time you could disable access time updating in the registry, although I assume if you had those kind of permissions you wouldn't be having this problem.

ParetoOptimal
0 replies
5d3h

It is that bad. I use a Linux VM with docker from OSX and coworkers use docker with OSX.

Their compiles take 2m, mine take 30s.

thesnide
1 replies
5d6h

I use mingw/msys2 with great success.

I even do some directx dev with it.

All with very portable code that can even be built in the standard github action ubuntu.

pletnes
0 replies
5d6h

Yes, and git bash is quite useful and built on those. In many ways the most useful alternative - it works and everyone already has it. But not a logical choice if you want to «deploy» something I guess.

tahnyall
1 replies
5d2h

Where I’m working, WSL doesn’t work (well), VMs are banned, and personal experience with cygwin is awful

I'm no Windows fan but WSL has worked extremely well on several different machines I've worked on since 2015 and Cygwin was no WSL but I could get a lot done with it back in the day as well.

Docker works but it makes test runs 100x slower (yes, I measured, it’s not made up).

Yeah, I don't know what you're doing that it could be 100 times slower, something else is wrong.

pletnes
0 replies
5d2h

WSL conflicts with many antivirus/security packages due to e.g. DNS filtering or other network restrictions. WSL itself on a non-managed box works not too badly, sure.

Cygwin has been very useful to me, but also caused its share of issues with its permissions model, creating large directory trees that couldn’t be deleted.

cesarb
1 replies
5d7h

[...] VMs are banned [...] Docker works but [...]

I might be missing something, but isn't Docker on Windows or MacOS actually a VM running Linux (or, on Windows, perhaps it uses WSL2, which is also a VM running Linux)? How could you use Docker if VMs are banned?

master-lincoln
0 replies
5d5h

Docker on Windows can run without a VM if the guest in the container is a Windows with a matching kernel and process isolation is enabled in the Docker parameters.

Similar situation for MacOS docker containers on a MacOS host (see recently discussed https://news.ycombinator.com/item?id=37655477)

But for running Linux containers on MacOS or Windows I think you are right

wongarsu
0 replies
5d7h

Makes me wonder how Docker on Windows handles file access. I know under the hood it uses WSL2, which in turn uses a Hyper-V VM. In WSL2 it's a big deal where you put the files. While Windows can access the files in the Linux VM and the other way around, that requires communication over the hypervisor and is orders of magnitude slower than accessing the files managed by your own kernel (like C: on Windows and /home in the WSL VM)

YoshiRulz
11 replies
5d8h

I use NixOS so it's feasible for me to use these as true drop-in replacements when they're done. And the reason I'd want to do that is for hardening, as the sibling commenter suggests.

The fact it's released under MIT instead of Apache (or GPL) does worry me though.

silon42
8 replies
5d4h

Someone could release it with a relicense to GPL (and maybe LGPL if dynamically linked).

dartos
7 replies
5d3h

You can’t just release someone else’s BSD code under GPL.

Zambyte
4 replies
5d3h

Sure you can. As long as you follow the terms of the MIT license. That effectively nullifies the value of slapping the GPL on it, but you can do it.

dartos
3 replies
4d21h

Why even comment this?

It adds nothing but confusion to the conversation.

Zambyte
2 replies
4d18h

Because the ability to sublicense is the whole point of permissive copyright licenses. The point of confusion is saying that you can't sublicense, when you very explicitly can.

The reason why it would be useful to sublicense it as GPL would be to intermix it with changes that are GPL. The combined work would be covered by the terms of the GPL, and the original work would remain covered by the MIT license.

dartos
1 replies
3d10h

Because the ability to sublicense is the whole point of permissive copyright licenses.

What makes you think this? The point of licenses like GPL (copyleft licenses) is to prevent sublicensing.

GPL sets very specific rules and you need to follow them. You can’t just ignore them and change the code’s license.

MIT isn’t copyleft in that sense (as you don’t _have_ to release changes you make to MIT code) but any code released based on MIT code must also include the MIT license.

You can’t just change the license all willy nilly, that would defeat the purpose.

Zambyte
0 replies
2d2h

What makes you think this?

The body of the license, particularly in contrast to the copyright granted automatically, and other copyright licenses. The MIT and BSD licenses are not very long and quite easy to digest. I recommend giving them a read.

The point of licenses like GPL (copyleft licenses) is to prevent sublicensing.

The juxtaposition with the previous question makes me think you may have missed that your question was in reply to permissive licenses, or maybe you think that the GPL is a permissive license? Either way: I agree. The GPL does prevent (further) sublicensing.

[...] but any code released based on MIT code must also include the MIT license.

Yes. That's why I said:

That effectively nullifies the value of slapping the GPL on it, but you can do it.

In my previous comment. Because when you simply slap the GPL on some MIT code and release it like that, people can choose to use the MIT licensed code with the MIT license instead. Effectively nullifying the value of the GPL.

You can’t just change the license all willy nilly, that would defeat the purpose.

Never meant to suggest you could.

fanf2
1 replies
5d3h

A fair number of the gnu utilities were originally written as part of the 4.3BSD Net/2 effort to get rid of AT&T code. When the gnu project adopted the code it got relicensed from BSD to GPL.

Zambyte
0 replies
4d22h

Relicensed or sublicensed? Anything can be relicensed (by the copyright holder), because that is outside the terms of a license. Neither the terms of the BSD license nor GPL are relevant for that.

Snow_Falls
1 replies
5d6h

Yeah, I'm not a fan of the rust ecosystem trying to move everything from FSF-style free/libre licenses to permissive licenses.

m4rtink
0 replies
5d4h

Yeah - memory safety, why not (though it's not a silver bullet), but why change the license to one that can be quite dangerous over time for something this important?

noirscape
10 replies
5d8h

Well for one, uutils is in Rust. That alone can be pretty desirable for some people because it carries assumptions about memory safety, especially compared to the coreutils, which are in C.

The license stuff is what people focus on, but to me that is much more interesting. C is a pretty decrepit language, and while I don't care much for the "rewrite it in Rust" cult, the coreutils are exactly the type of program Rust excels at - a non-iterative target that doesn't change often and can be made with "best practices" in mind.

Not having to deal with the utterly inane number of dead architectures that GNU projects inherit probably helps them too on that end.

lohnjemon
9 replies
5d7h

What kinds of memory safety bugs do you really care about in coreutils? Genuinely curious.

Given how mature and well defined the GNU Coreutils are, how small their scope is, how they are used, I really don't see the supposed security upside here.

There simply has to be a better reason to me, than "Rust good".

wongarsu
5 replies
5d7h

Some CVEs have happened in the past [1]. None of them were memory issues, but a couple seem unlikely in idiomatic Rust or are much easier to prevent in Rust.

Specifically, integer overflow is much easier to handle correctly in Rust, making bugs like CVE-2015-4042 less likely, and correct handling of multibyte strings is basically enforced by the standard library, making issues like CVE-2015-4041 very unlikely in a Rust implementation
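
As a concrete illustration, here's a minimal sketch (illustrative only, not code from either project) of how Rust surfaces both problem classes:

    fn main() {
        // Overflow must be handled explicitly: checked_add returns an Option
        // instead of silently wrapping (the bug class behind CVE-2015-4042).
        let count: u64 = u64::MAX;
        match count.checked_add(1) {
            Some(n) => println!("next: {n}"),
            None => eprintln!("overflow detected, refusing to continue"),
        }

        // &str is guaranteed to be valid UTF-8, so multibyte text cannot be
        // split mid-codepoint by accident (cf. CVE-2015-4041).
        let s = "naïve";
        for (byte_offset, ch) in s.char_indices() {
            println!("{byte_offset}: {ch}");
        }
    }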

1: https://www.cvedetails.com/vulnerability-list/vendor_id-72/p...

_flux
3 replies
5d6h

But you still need to pay attention to use the overflow-checking versions of functions when doing arithmetic, because in release mode regular integers are not overflow-checked at runtime unless you explicitly enable -C overflow-checks=true, which in my opinion would be a good default for many non-performance-critical applications.

Arguably it's the "pay attention" part that causes the bug in the first place, so I don't think the performance-oriented default was a good pick.

The issue is mitigated a bit by the remaining runtime array bounds checks, but I must wonder if those checks could be removed by the optimizer when it believes, e.g., a variable can never be below a certain value.
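
To make the trade-off concrete, a minimal sketch (the [profile.release] key below is the standard Cargo one; everything else is illustrative):

    // In Cargo.toml, release builds can opt back into checked arithmetic:
    //
    //     [profile.release]
    //     overflow-checks = true
    //
    // Without that flag, plain `+` wraps silently in release mode, which is
    // why the explicit methods are the safer habit:
    fn main() {
        let a: u8 = 200;
        let b: u8 = 100;
        assert_eq!(a.checked_add(b), None);       // overflow surfaced as None
        assert_eq!(a.wrapping_add(b), 44);        // wraps on purpose
        assert_eq!(a.saturating_add(b), u8::MAX); // clamps on purpose
        // `a + b` would panic here in a debug build; in a release build
        // without overflow-checks it would quietly evaluate to 44.
    }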

wongarsu
1 replies
5d3h

Apart from overflow checking in debug mode or with the right compile flag, Rust also makes it a lot easier to do the right thing, e.g. 100u8.saturating_add(255), or even encoding it in the type system:

    use std::num::Saturating;
    let mut x: Saturating<u8> = Saturating(128);
    x += 200;
    assert!(x == Saturating(255));

(obviously you can also use the checked_* methods for explicit handling, or Wrapping instead of Saturating). Meanwhile overflow handling in C/C++ is difficult, tedious, and full of footguns caused by compiler optimizations.

_flux
0 replies
5d3h

It seems it's going to become easier to check them in C++26 with https://en.cppreference.com/w/cpp/numeric/add_sat and friends. Or if you want saturating integer types, you can find https://github.com/StefanHamminga/saturating (granted this does not seem maintained) or https://www.boost.org/doc/libs/master/libs/safe_numerics/doc... from boost for a checked integer type. https://github.com/mbeutel/slowmath gives you exception throwing checking.

I haven't tried that style in Rust—or in C++ for that matter—but is it truly much nicer than the options available for C++? Perhaps out-of-the-box experience is the winner there.

pitaj
0 replies
5d1h

There are lints you can enable to enforce this, rather than just "paying attention".
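
For example, a minimal sketch using a Clippy restriction lint (assuming Clippy is in the toolchain; the lint name is arithmetic_side_effects):

    // Deny arithmetic that can silently overflow or panic, crate-wide.
    #![deny(clippy::arithmetic_side_effects)]

    fn main() {
        let a: u8 = 200;
        // let b = a + 100;            // rejected by the lint above
        let b = a.saturating_add(100); // explicit handling passes the lint
        println!("{b}");
    }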

lohnjemon
0 replies
5d6h

So this one integer overflow in sort, a command which is never run as root, is an issue somehow because it can cause a denial of service (it crashes)? Am I missing something here? Can I use this to exploit someone's machine?

I can search uutils/coreutils for "overflow" and get way more hits; I don't see how this is a rational thing to be afraid of within GNU Coreutils, considering it's a collection of tools that have been developed and maintained for decades and used by millions over that time period.

https://github.com/uutils/coreutils/issues/1420 https://github.com/uutils/coreutils/issues/886 https://github.com/uutils/coreutils/issues/5149

To be clear, I don't see any problems personally with any of these issues, they don't seem very exploitable to me.

However, I think that relying on Rust to be the bastion of safety merely because the name "Rust" is mentioned is nothing but a fallacy.

To me, logic bugs are the far more egregious category in something like coreutils. Me making the assumption that something works the way it's documented when it doesn't can lead to horrible things down the road. Much more so than any integer overflow crash could ever dream of.

VBprogrammer
1 replies
5d7h

Wasn't it just a couple of years ago that we had Shellshock? According to Wikipedia, that bug was introduced in 1989. Of course, that wasn't a memory safety issue (and it's probably important to reiterate that memory safety doesn't mean free of bugs and/or exploits). But it does demonstrate that issues can exist for a long time in mature code without anyone noticing.

ReleaseCandidat
0 replies
5d7h

Wasn't it just a couple of years ago that we had shell shock?

Yes, but Bash (or any other shell) isn't part of coreutils.

https://en.wikipedia.org/wiki/List_of_GNU_Core_Utilities_com...

Corrado
0 replies
4d9h

I think the upside is that these utils are not set in stone, never to be updated. Future maintainers will probably find it easier to work on a more modern codebase with more footgun protections. So yes, while the current utils are great, at some point the code will have to be modified, and I would much rather modify clean Rust code than 30-, 40-, or 50-year-old super-optimized C that no one truly understands anymore.

wvh
2 replies
5d4h

The people who originally wrote those utils are slowly disappearing, and with them the knowledge and ancient C hackery skills. It might be a good idea to have a new generation that can take some of that knowledge and interest and carry it forward into the future.

tahnyall
1 replies
5d2h

That'd be great if it were migrating those utilities to modern C using improved modern coding practices, but rewriting them in some goofy language like Rust is dumb.

thecodedmessage
0 replies
5d

Writing in Rust when appropriate is improved modern coding practice.

smt88
0 replies
5d8h

Differences with GNU are treated as bugs.

That tells me that "some options and behaviours are still different" is also being treated like a bug.

Even if this is only used by huge companies like Meta or for new projects, it will still justify its existence.

ranguna
0 replies
5d7h

I don't see why I wouldn't switch.

nonameiguess
0 replies
5d3h

Probably makes sense mostly on Mac. Right now, it's common for developers using Macs to install GNU coreutils from Homebrew as a first step to ensure scripts and build systems that were originally written for Linux still work. Macs give greater ease of corporate device management, so Macbooks are a fairly typical compromise for companies that need to give developers a Unix environment but still want to maintain the level of control over endpoints you'd get with Windows.

I'm sure there's desire out there for a pure-Rust close-to-POSIX userspace you can put on top of Linux, but I don't exactly see a whole lot of progress toward that. At minimum, you'd need coreutils, findutils, diffutils, tar, grep, sed, compression libraries, crypto libraries comparable to OpenSSL or GnuTLS, an editor, a POSIX shell, an init system, a service manager, system-wide DNS and DHCP. Going beyond pure POSIX to a more practical server distro would probably include a PGP implementation, a package manager, sudo or equivalent, ssh, iproute2, probably many other things I'm forgetting.

uutils seems to give you findutils and coreutils. ripgrep is not a drop-in replacement for grep. I'm sure there are other "rewrite it in Rust" projects out there I don't know about, but low-level system utilities aren't exactly the gloryland most developers are interested in rewriting and POSIX includes a lot of stuff.

Frankly, I don't think this is a realistic terrain for any single organization. GNU itself took over a decade to get a practical distro and that was only because other developers provided a kernel, package managers, boot loaders, crypto, and what not. And the GNU project is not really a single organization. It encompasses many people from many different employers. Systems software in general wasn't as fragmented. If you were working in the space at all and cared about being cross-platform and collaborating across organizations in the 70s and 80s, you were using C. Everything else was proprietary. If we're ever going to get something comparable but in Rust, it's not going to be from a single Github org like this.

giancarlostoro
0 replies
5d2h

I'm not familiar with the situation on MacOS but how bad is the compatibility issue? Similarly, on Windows what's the situation with WSL or VMs or even Cygwin? Is performance the issue?

If I remember correctly, Mac only has GNU utilities from before they all switched to GPL v3. Which was decades ago now.

duped
0 replies
5d2h

The canonical GNU packages are pretty arcane and difficult to build/understand (*), so alternatives with modern tooling/languages should be welcome.

But also, things like BusyBox and ToyBox are very popular alternatives which contain non-standard ports of only a subset of coreutils. A complete and mostly compatible set of coreutils would be more popular than either, but they also have some different constraints that Rust builds make difficult (eg: binary size).

* inb4 "it's just configure/make/install, what do you mean": consider if you want coreutils without glibc, cross-compiled, or want to bootstrap your environment without any of them. The GNU ecosystem is "easy" to build/use within the GNU ecosystem, which is not unopinionated about what that looks like and how it works.

dspillett
0 replies
5d7h

> so it feels like a "compatibility" project for MacOS and Windows

I get a similar feeling. This is more attractive than ports like cygwin as a solution because if you have this in every place you can be more sure your environments match. With ports there will be more delay getting changes/fixes into different environments, at least in theory, than with a solution that is cross-platform as a core goal. The other main option, running Linux in a VM (directly or via WSL2), adds an extra layer of friction.

A cross-platform solution is more likely to match bug-for-bug in different environments, which can be as important as matching feature-for-feature: if something is going to fail out there it will fail the same way in your dev/test environments.

Also some will want to use it from a licensing PoV. Many¹ find GPL related licences bothersome and use the GNU coreutils because there isn't (until this matches their requirements) an alternative. Similarly, the language might be an attraction to some, either ideologically or because they might want to dig into the source, though this is more of a factor for projects developing new features rather than trying to be drop-in replacements.

I expect the attraction to be relatively niche though; people won't start using it as a drop-in for the GNU tools en masse until, for instance, popular distributions use it (or a niche distribution using it becomes more popular for some other reason).

----

[1] I am not one of that many, but they are many so this is a notable consideration.

apatheticonion
0 replies
4d19h

I use them on my Windows machine from PowerShell to make using the Windows command line more tolerable.

You can find the GNU coreutils for Windows on SourceForge, but it always feels like spyware when I download something from that site; plus I don't know how old they are or if they are actively maintained.

With this rewrite, I like that I can download them from the repo's releases page. There is an issues page for problems and I can see the project is actively worked on.

Installing them as binaries is simple and, this goes for all Rust/Go projects, I actually know how to compile them (sorry, I know C/C++ is great, but between gcc||msvc||clang, missing deps, ./configure, make, and make install - I rarely have a pleasant experience compiling something in C/C++).

ParetoOptimal
0 replies
5d3h

If it were GPL licensed, I might try switching because I'm more interested in learning Rust than learning more C.

0x1ceb00da
0 replies
5d6h

Cross-compiling Rust programs is much easier, so if you end up on a niche system that doesn't come with coreutils, you can install your own version.

Decabytes
17 replies
5d7h

I don’t understand how they can rewrite the coreutils in Rust with a different license, especially if the intent is a like-for-like end product.

One of the things “open” game engines run into (e.g. OpenMW) is that if source code for the game engine they are reimplementing leaks, they avoid it at all costs, since it could “contaminate” the project and ruin their protection as a fair-use project.

I feel like this sets a bad precedent for the GPL, because if anyone can do that, then it weakens the power of copyleft licenses as a whole. But maybe I’m missing something, because I’m not a lawyer.

Aissen
15 replies
5d7h

Think of it like a blackbox reimplementation, not porting existing code to another language:

https://github.com/uutils/coreutils/blob/244693f50e224abf726...

nairboon
7 replies
5d7h

That warning was added 2 months ago.

Aissen
6 replies
5d7h

Maybe they got tired of people making unsubstantiated remarks that were never made about bsdutils, busybox or toybox? Just look at the level of the current HN discussion.

nairboon
5 replies
5d6h

Is this discussion about bsdutils? Why do you dismiss the whole legal discussion as 'unsubstantiated remarks'? It is certainly a valid issue for a non-GPL rewrite of GPL software.

Aissen
4 replies
5d5h

I dismiss it because I haven't seen substantiated remarks, i.e. proof that anyone actually looked at the source of coreutils to do the reimplementation.

pama
3 replies
5d1h

Nobody has to look at the code to violate the GPL though. An accidental identical implementation, or one spewed out by an LLM, counts as a simple translation and would largely place the new code under the GPL at a technical level. I don't see the point, really, of not simply adding Rust to enhance a fork of GNU binutils, except for the license change, and the license change is super hard to defend in the long-term future, when it will be easy to identify translations of the original code.

Aissen
1 replies
5d1h

Nobody has to look at the code to violate the GPL though

Accidental identical implementations of something trivial aren't covered by copyright, see the Oracle vs Google lawsuit.

LLM completions are something else entirely.

pama
0 replies
4d23h

Agreed on the accidental reimplementation of something trivial. The distinction of what is trivial will be harder to clarify in the future; however, language models in the future can also help identify the provenance of substantial parts of the code, and eventually it will be easier for GNU, if they care, to make a strong argument. I suspect GNU will not care until somebody plays foul, so GNU can wait for a long time, until that happens and until the tools are substantially better. I just find it a little sad that the effort is bifurcated into competing camps. Maybe eventually GNU will also start a Rust clone for parts of their binutils and make this effort obsolete at the technical level.

sokoloff
0 replies
4d19h

GPL is based on copyright law.

Independent creation is a defense against copyright infringement under US copyright law. (How could it not be?)

LaGrange
4 replies
5d2h

They have hundreds of contributors and they don't audit them. This is not a blackbox reimplementation.

Aissen
3 replies
5d1h

They have hundreds of contributors and they don't audit them

Thank you for copying the SCO FUD playbook, I hadn't heard that one in a while:

https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt#S...

LaGrange
2 replies
5d

See, the difference here is that SCO was scum, and in this case the people who re-implement GPL tools on permissive licenses are scum.

andrewshadura
1 replies
4d19h

Wow, that quickly got out of hand. Stop calling people who do things you don't like scum.

LaGrange
0 replies
4d14h

Stop acting like it’s because I “don’t like them.” I have fairly concrete reasons to call them scum. Rewriting GPL software in MIT is a form of being a scab.

cmrdporcupine
1 replies
5d7h

Can we truly believe that not a single one of these authors peeked over at the GNU coreutils source to get a handle on how something there worked?

Aissen
0 replies
5d7h

You can believe it or not, but please stop spreading FUD disguised as a question. If you have proof, you can post it, and then interested people can try to dissect it.

tyrion
0 replies
4d22h

This was brought up in one of the previous discussions on HN [1], and people found that this project does indeed seem to have copied the original coreutils: some names of variables/constants were taken from the original code [2]. To be clear, I am not implying that they are violating copyright (as someone else said, not doing a clean-room implementation does not necessarily imply violating the original license). However, I find it very sad that they replaced the license and are effectively damaging the GNU project. (It is also a bit sad to see your comment, which expresses a perfectly valid concern, down-voted.)

I wonder what is the official position of the GNU project about this though.

[1]: https://news.ycombinator.com/item?id=26398251

[2]: https://news.ycombinator.com/item?id=26398538

coldtea
14 replies
5d9h

The issue with all these efforts is whether they'll be sustained and maintained long term, or merely until the 1-2 maintainers lose interest.

GNU coreutils on the other hand have been going for decades.

eviks
10 replies
5d8h

Indeed, there are no examples of decades-old projects dying, so that risk doesn't exist

mijoharas
4 replies
5d8h

To be fair, the Lindy effect does imply that it's less likely for a decades old project to die soon than a newer one.

eviks
2 replies
5d7h

it's not fair, since the original comment said nothing of the sort, and this effect is just a theory

coldtea
1 replies
5d6h

The "original comment" basically made the Lindy effect argument, stopping short of naming it explicitly:

"The issue with all these efforts is whether they'll be sustained and maintained long term, or merely until the 1-2 maintainers lose interest. GNU coreutils on the other hand have been going for decades".

So there's that.

Many useful tools are "just theories", Occam's razor included.

eviks
0 replies
5d2h

Nope, "until the 1-2 maintainers lose interest" is a specific cause of project death, not a general observation that the risk is LOWER. Just like "a 40 years project dies because the 1-2 maintainers retire" is a cause of death for old projects would also be a similar risk, but just as useless for comparison.

No need to try to fit everything into some simplistic theory as though it's a Procrustean bed

Many useful tools are "just theories"

As are many harmful tools, and you haven't demonstrated that the Lindy law is useful

sph
0 replies
5d5h

To be fair, the Lindy effect does imply that it's less likely for a decades old project to die soon than a newer one.

True, but doesn't that imply its opposite as well? A decade-old project will probably die sooner rather than later, because 11+ year projects are rarer than 10-year ones.

The Lindy effect can only be observed in comparison to something else, not used to deduce how long a single project in and of itself will last. Which means coreutils will probably last longer than this one, because it's been around 33 years vs 10.

coldtea
3 replies
5d8h

It's almost as if existing longevity is a sign of project community/support structures/resilience [1], and what matters for such an assessment is not a knee-jerk pointing to the existence of counter-examples to show that non-zero risk exists (as if anybody said anything about the risk being zero), but the relative probability of a fresh && much less used project dying vs a widely used mature project that has already proven it can survive for a long period of time...

Who would have thought!

[1] https://en.wikipedia.org/wiki/Lindy_effect

eviks
2 replies
5d7h

It's almost as if you didn't read the first sentence of your own link, which states that this "is a theorized phenomenon", not some established law of practical software development. So instead of knee-jerk posting a "proof" in response to criticism of knee-jerk pointing to the existence of some risk, or pointing to some straw man of zero risk, you might actually realize that the main thing that matters for such an assessment is an actual assessment

coldtea
1 replies
5d6h

It's almost as if you let the whole point fly over your head, and just hung on to the first thing you read in the link that could serve as a hail-Mary attempt to build a semblance of an argument.

Yes, it's a "theorized phenomenon"; it's not a "law of practical software development nature". The whole point is its statistical relevance to assessing such a risk - not it being some kind of absolute law.

It's the second time in this argument thread that you point to the lack of absolute guarantees (as if anybody said that long-standing projects don't fail at all, or as if anybody said that the Lindy effect is some absolute law), failing to see that this doesn't mean that long-term vs short-term project survivorship rates are the same.

You know that outcomes can have probabilities attached to them too, not just absolute guarantees, and that it's the former that are the most common tool for assessing risks, right?

you might actually realize that the main thing that matters for such an assessment is an actual assessment

You might not actually realize that in practice any such actual assessment will be based on risk factors, statistical observations, and handy heuristics for longevity such as the Lindy effect.

Who would have thought, huh?

eviks
0 replies
5d2h

The whole point is its statistical relevance to assessing such a risk - not it being some kind of absolute law.

Someone letting the whole point fly over one's head again. You have no statistics! You're just using some empty law to support lazy thinking

to the lack of absolute guarantees

That's just your second straw man. I know these "laws", were they true, wouldn't be absolute. You just can't understand that you don't have any data to support your claim, so when I point that out, you mislead yourself into thinking I demand absolutes

OrderlyTiamat
0 replies
5d8h

There are more (much more) examples of young projects dying than old projects. This is called the Lindy effect: that which has survived tends to survive. Taleb first used the term Lindy effect but it has been noted before.

Taleb suggested that if something non-perishable has survived for a long time, its expected remaining survival is just as long; for example, if a book has been in print for 40 years, it is expected that it will remain in print for another 40 years.

The point is that projects, books, and other non-perishables don't have a life expectancy like biological organisms; they're actually more likely to live on if they've endured a long time.

cmrx64
2 replies
5d9h

uutils just passed its first decade and is going strong.

sph
1 replies
5d5h

coreutils is 33 years old and going strong.

https://en.wikipedia.org/wiki/GNU_Core_Utilities#History

This one might have been around for 10 years, but it's disingenuous to claim it is as extensive, feature-full and tested as the real thing.

dcsommer
0 replies
5d1h

Who claimed that?

Dinux
7 replies
5d8h

We switched to Rust about 4 years back for most of our robotics and embedded control systems. It has been a blessing to move away from C/C++ after 10 years. Sure, Rust has its problems and issues, especially when it comes to async and concurrency. Yes, it has a steep learning curve; yes, the compiler gets in the way often; but the number of _actual_ bugs (not design flaws) is probably fewer than 10 over 4 years. Every time I work on a C/C++ codebase I'm painfully reminded how easy it is to shoot yourself in the foot. I hope Rust coreutils and Rust in the kernel will eventually become the default

thesnide
3 replies
5d

I wonder how much of that is due to Rust being too young to have myriads of dubious code to copy from.

Perl is even more memory-safe than Rust, but the amount of crappy code is overwhelming...

steveklabnik
2 replies
5d

That's the thing about a compiler enforcing rules: you can't even get some kinds of dubious code to compile, so therefore, it will never meaningfully be copied.

Of course, that doesn't mean that all bugs are prevented, or that Rust code has no bugs, or that you can't write bad Rust code. But in the context of robotics and embedded control systems, Rust solves a lot of those "bad code" issues at compile time. And you're not using Perl in that context regardless.
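
A tiny illustration (hypothetical code, not from any real project) of dubious code that simply won't compile:

    fn main() {
        let buffer = vec![1, 2, 3];
        let taken = buffer; // ownership moves here
        // println!("{:?}", buffer); // error[E0382]: borrow of moved value
        // Uncommenting the line above is a compile error, so the
        // use-after-move can never be copied onward as a working snippet.
        println!("{:?}", taken);
    }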

thesnide
1 replies
4d23h

oh, now I'm wondering if Ada might be interesting for robotics.

As it is another language that is said to be 'compiler driven'.

steveklabnik
0 replies
4d23h

It is one of the domains it was created for, so I would assume that it is. I barely know Ada, however, and so can't really say.

PartiallyTyped
2 replies
5d2h

I genuinely can't get enough of rust tbh. It "just" works. Don't get me wrong, a few things could be better, e.g. compile times, but it's so much easier to work with.

klabb3
1 replies
5d2h

Please include the domain you’re working in and what you’re comparing against when making value statements like this. It can be helpful for others and the debate at large.

PartiallyTyped
0 replies
5d1h

I do software analysis, with some query engines, and backend. My comparisons are against java, python, and C, though I did try to get into C++.

I also contribute to rust-lang/clippy and other rust projects.

nairboon
3 replies
5d7h

This is interesting as it is. But just as a heads-up for those who get interested specifically because this project's license file says 'MIT':

It is going to be very difficult to justify this license for a project that is a 'Cross-platform Rust rewrite of the GNU coreutils'. Rewriting GPL-licensed GNU software while dropping the GPL license? To be on the safe side, one should assume that this coreutils rewrite is concealed GPL.

tmalsburg2
2 replies
5d7h

My understanding (IANAL) so far was that licenses apply to the code and binaries derived from the code. Why do you think that GPL also applies to a from-scratch rewrite? There are cases where a rewrite was done precisely to circumvent license restrictions.

nairboon
1 replies
5d6h

This is a question of copyright law [1], which also differs depending on the jurisdiction.

from-scratch rewrite

Rewriting is literally creating a derivative work. With 'from-scratch' you probably mean some form of 'clean-room implementation', which is specifically required to circumvent copyright protection. Performing such an implementation is quite laborious and needs to be properly documented. Whether an open-source project with more than 400 contributors can reasonably attest to this is somewhat questionable.

1: https://www.copyright.gov/title17/92chap1.html#101

tzs
0 replies
5d4h

Clean-room implementations are not required to avoid infringing when you make your own version of something else. They just make it easier to successfully defend if you are sued for infringement.

If your code does not copy any copyrightable expression from mine it is not a derivative work, so if I sue you (legitimately, not because I just hope a lawsuit will intimidate you into agreeing to my demands) it means I've found things in your work that I think are copies of copyrightable things from my work.

If you haven't done a clean-room implementation, your defense is probably going to rely on finding those things in other works besides mine and using one or more of these arguments: (1) these things are widely used and known, and you copied yours from somewhere other than my work, (2) the ones in my work are just copies from those other works, so I don't have any copyright on them for you to infringe, and (3) there are so few reasonable ways to do these things that everyone who writes them comes up with nearly identical solutions, and that's why yours are similar to mine.

If you have done a clean-room implementation and kept good records to prove that, your defense is that you never saw any of my copyrightable expression so cannot possibly have copied.

einpoklum
3 replies
5d2h

Has there really been no effort to modernize some/most/all of the GNU coreutils code - while keeping the language and the license?

Switching language to Rust and changing the license does not make for a drop-in replacement, I would say.

pie_flavor
1 replies
5d

Switching language to Rust and changing the license does not make for a drop-in replacement, I would say.

Why?

steveklabnik
0 replies
5d

Not your parent, but, to "replace" something means that it fills the same need you have as something else. The trick with talking about this in a generic context is that some people "need" some things and some do not. Your parent is saying "I do not like Rust and I prefer the GPL, and so this is not equivalent in my eyes." This can be true, while for a different person, simultaneously, "I like Rust and dislike the GPL, and so this is an equivalent in my eyes" can be true as well.

awestroke
0 replies
5d

GNU coreutils will never modernize.

Nobody is switching language and license of GNU coreutils. This is a greenfield project.

andrewstuart
2 replies
5d8h

With an MIT License - that would be good to have in the title/headline.

How certain is it that such utilities would be compatible including any weird edge cases?

cmrx64
1 replies
5d8h

that’s what the automated test suite is for! and the graph in there is looking pretty good.

Someone
0 replies
5d7h

And they (somewhat/largely) avoid the “a test can’t check for issues we don’t know about” problem by running the GNU CoreUtils test suite, not a suite they wrote themselves.

(Browsing https://github.com/coreutils/coreutils/tree/master/tests that test suite is written in shell and Perl)

taspeotis
1 replies
5d9h
nindalf
0 replies
5d8h

No, it's the same one, but they've made progress. The test suite that compares it to GNU had more passes than fails in 2022, and today passes outnumber fails by 2:1.

And it’s been a while since the last discussion.

ingen0s
1 replies
5d7h

Very cool! How long did this take?!

KolmogorovComp
0 replies
5d4h

First commit was 11 years ago.

znpy
0 replies
5d9h

uutils aims to be a drop-in replacement for the GNU utils. Differences with GNU are treated as bugs.

Now this is something I can get into.

I'm okay with having stuff rewritten in Rust, but I'm not into learning some snowflake ls/grep and having to go edit all the scripts written over the years.

w10-1
0 replies
4d23h

Great progress, but the last 10% often takes 50% of the time.

In this case, there may be little value in the last bits, but people are going to wait to use these until that last bit is done, for fear of what might happen.

Which triggers ranting...

Q: Why exactly are we stuck targeting cross-platform compatibility for every possible option in every possible tool?

A: Because there is no good way to check the impact of removing options or tools, or for migrating clients.

By this mechanism, "OS" and "core" features grow forever, and the convention-bound C ABI is, well, as close to forever as we get.

Rewriting in Rust does not help that one bit (nor is it supposed to).

Has no one written literate ABI interfaces that support static validation and backwards compatibility via shims and automatic migrations? Must scripting be such a drag on progress?

We do automatic migrations in databases all the time; we should be able to do it in code.

sntran
0 replies
4d23h

Coincidentally, I was just reading about how VSCode supports WASM in their web edition[1], and this was mentioned in their effort to implement the Terminal.

[1] https://code.visualstudio.com/blogs/2023/06/05/vscode-wasm-w...

moby_click
0 replies
5d5h

I am surprised that these are not called oreutils.

ary
0 replies
4d14h

People seem very focused on “should they or shouldn’t they, and why”, which somewhat perplexes me, given that there is at least one really good reason for doing this: to create Rosetta Stones for very common pieces of software. To really and truly know whether Rust should become the new C, we have to start seriously exercising it in the places where C reigns. That’s not a bake-off defined entirely by technological superiority, but also one of practical applicability, maintainability, and the ability to ship pervasively deployed solutions. This project is a great test of Rust.

RIIR isn’t merely a “sprinkle Rust magic because I believe” thing. The tech has clear potential and I personally think a version of it will take hold where C was once assumed. To know for sure we have to ship more software with it, and we need good comparisons. I’m quite thrilled to have projects like this pressing on.

andsoitis
0 replies
5d4h

Not that it should represent the Rubicon of when to rewrite code or not, but when you do, you trade one set of bugs for a new set of bugs: https://github.com/uutils/coreutils/issues