Connecting qualified would-be maintainers with projects looking for a maintainer is a tricky problem. Who here even knew PCRE2 was looking for a new maintainer?
I took over some fairly widely used Go projects, but only after they were archived. I had no idea they had been looking for someone to maintain them.
There's a bit of a catch-22 here:
- If a project is already well-maintained then no one really needs to contribute anything.
- If a project is poorly maintained due to lack of interest or time, then that also discourages contributions – the first thing I check before contributing is whether previous PRs are actually getting merged.
For larger projects where there's always something to do, like Exim, this usually isn't a big issue. But it matters more for smaller, more narrowly scoped projects like PCRE2. I'm not surprised he's having a harder time with PCRE2.
I worry about new maintainers who feel the need to leave some sort of a mark on projects by adding unnecessary features and dependencies. I get it, true maintenance of a stable project, where you only fix bugs and security issues, is not glamorous.
It's a tired meme but we really do need some concept of "finished" in our field, along with the necessary incentive structures to enable people to do the needed maintenance on finished software in perpetuity.
isn't the whole point of intellectual property law to align incentives?
it's no coincidence that corporations that own proprietary code don't have this problem.
has anybody considered that maybe Richard Stallman was wrong?
maybe it ISN'T a good idea to volunteer your time to write libraries that corporations will use to make billions, while begging for donations.
maybe, sometimes, libre licensing is a mistake specifically because it leaves maintainers with no reasonable avenue for compensation
If you are writing libraries that are being used by companies as part of proprietary software at all -- much less to make billions -- then you didn't pay attention to Richard Stallman.
Yes: which is why Richard Stallman and the Free Software Foundation specifically came up with a model which uses copyright law against proprietary software via the idea of "copyleft".
I think there are people out there who fundamentally believe in doing service for other people... as long as they aren't taken advantage of! Aligning this incentive by encoding this moral contract into a civil one is the goal of the FSF.
(Now, I won't say they nailed it... GPL2 failed to foresee and prevent DRM, and even GPL3 has issues with the new era of cloud hosting; but like, they did much better than anyone probably should have expected.)
Contrary to the title of this LWN post, PCRE2 is not "Free Software" and is actually licensed under BSD; the result is that, yes: a ton of companies use this library and they make billions.
Permissively licensed software is still free software. The BSD licenses are approved by the FSF as free software licenses. They're simply not copyleft.
Yes, but what I think the previous poster meant is that you can't use a permissive license and then blame RMS when you feel used by a corporation using it to make proprietary software.
edit: because RMS/FSF's position is not simply "all free software is equally good and you should spend your time building some under any license"
(FWIW, I was also trying to shift some of the blame, charitably, onto the article title for making the matter more confusing by invoking the term Free Software; but icouturi's correction of that little quip I added -- which was entirely ancillary to my overall argument, as you point out -- is, in fact, correct.)
It might be more accurate to say that ESR was wrong. It might be even more accurate to say that ESR regards the obvious deficiencies of the model he popularised to be features.
Proprietary programs have a different, interesting problem: they eventually disappear. In 1995, a couple of years before the original PCRE appeared, I was doing classic Mac OS GUI programming on a 680x0 machine, running Metrowerks CodeWarrior as my IDE, and relying on a bunch of tools that are now gone. The proprietary technology I used in those days is now almost universally extinct. I think only BBEdit still exists.
A couple of years later, I switched to Emacs and Linux, and they're still going strong a quarter century later. I hope to get another couple of decades out of VS Code (or a fork). I can deploy Linux apps to containers. And PCRE2 is still going strong. Oh, and I can still typeset math with LaTeX.
I think there is real value in software that is "done", with stable APIs and very conservative maintenance, which can remain in use for decades. That's a world I want to live in. Let me keep using proven technology where appropriate, and switch only when I find a good reason to switch.
I sometimes avoid letting my projects get too successful in order to minimize my support costs. But in general, if you want to earn money from software (open source or proprietary!), you're going to need to build an actual business. Using a proprietary license isn't magic. I can use a restrictive license, find no customers, and still earn no money. It's the easiest thing in the world.
If you want money for an open source project, you're still going to need to focus hard on the business part. The easiest way to do this is consulting. Your users will still capture 99.9% of the value from your software, but a successful open source project can still be turned into decent revenue—if you keep working at the business side, too.
Mostly, when I release open source, it's because I've created something useful, but I know that it would make a lousy startup for one reason or another. My employer is happy to go along. They see that a tool is useful internally, that we couldn't sell it to our customers without a massive pivot into a difficult market, and the tool isn't hugely useful to our direct competitors. So why not share it? Sometimes we get a useful PR! Even better, designing a tool to make sense as open source sometimes makes it more reusable internally.
True, but I was using Windows in '95, and it still exists. I even use the odd bit of software from that era (typically small command-line things). And I'm still using Word and Excel.
So I'm not sure your comparison holds water, in the sense that some companies keep developing a product forever, while others have a history of ending things all the time.
That's of course equally true for Free Software projects. Most have been abandoned. Most have been replaced over time.
Your point about business is spot on. If you want to make software your business, then you will spend most of your time on the business part, not the software part.
Consulting is one path to income. Unfortunately, consulting on proprietary software pays better than consulting on Free Software [1]. Equally, consulting on some large (free) product pays better than consulting on your own product [2]. Which of course is all fine. There is no reason your income and passion have to be related.
[1] obviously I'm talking generally. But for example SAP pays better than PostgreSQL.
[2] still a generalization, but the market for say PostgreSQL consulting dwarfs the market for say MyEditor consulting.
Surely that's a decision to be made by the author(s) of the code?
There is no objective "right" or "wrong" when it comes to libre.
I have written dozens of libre projects. I don't want them to be proprietary. I don't want to make money from them. If I did, I'd simply use a proprietary licence, no one forced me to go libre.
These are definitely questions worth considering.
I would argue they have a similar but worse problem. Someone at google creates an awesome product. They get promoted and leave the project. Someone else is assigned to maintain the product, which slowly gets worse over time either a) because the new maintainers are less skilled/driven or b) because programmers perceive themselves as being paid to write code, and it's fun, so they're going to change things even if nothing needs to be changed.
I've seen so much commercial software get worse over time. I'm not sure if I have the causes right, but there's definitely something wrong with the model. In contrast, I've found open source software to be far better for far longer. It might stop being maintained, but it almost never gets worse in my experience.
There are very few non-trivial projects that are truly "finished" in the sense of "will never need any changes". There are always bugs, the ecosystem is always changing (even for C), and for many projects, once in a while you do want some new features.
For example, a feature added last month is the new pcre2_set_max_pattern_compiled_length() function, which limits the size of compiled patterns. I assume that wasn't added for the craic but in response to a real-world use case. There are also plenty of bugfixes and smaller changes.
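Here's a minimal sketch of how a limit like that might be applied from application code, assuming pcre2_set_max_pattern_compiled_length() follows the same shape as the other pcre2_set_*() compile-context setters (that exact signature is my assumption; the rest is the standard PCRE2 8-bit API):

    /* Sketch: reject patterns whose compiled form would exceed a size cap.
       The limit-setting call's signature is assumed, modeled on the other
       pcre2_set_*() compile-context functions; everything else is the
       ordinary PCRE2 8-bit API. Build: cc demo.c $(pcre2-config --libs8) */
    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>
    #include <stdio.h>

    int main(void)
    {
        pcre2_compile_context *cctx = pcre2_compile_context_create(NULL);
        if (cctx == NULL) return 1;

        /* Assumed signature: (compile context, limit on compiled size). */
        pcre2_set_max_pattern_compiled_length(cctx, 64 * 1024);

        int errorcode;
        PCRE2_SIZE erroroffset;
        PCRE2_SPTR pattern = (PCRE2_SPTR)"(foo|bar)+[0-9]{2,4}";

        pcre2_code *re = pcre2_compile(pattern, PCRE2_ZERO_TERMINATED, 0,
                                       &errorcode, &erroroffset, cctx);
        if (re == NULL) {
            PCRE2_UCHAR msg[256];
            pcre2_get_error_message(errorcode, msg, sizeof(msg));
            fprintf(stderr, "compile failed at offset %zu: %s\n",
                    (size_t)erroroffset, msg);
        } else {
            puts("pattern compiled within the size limit");
            pcre2_code_free(re);
        }

        pcre2_compile_context_free(cctx);
        return 0;
    }

If a pattern would blow past the cap, pcre2_compile() should then fail with an error code you can report, rather than quietly consuming memory -- presumably the real-world use case behind the feature.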
"finished" as in "totally free from bugs" is one thing, "finished" as in "feature complete and passes all known test cases, artificial and real world, known at the time" is another. As an industry we need to push the second notion as something someone can build and then set down. building a bridge instead of a steam engine locomotive. a bridge needs some maintenance and upkeep, yes, but after it's built, the team that built it moves on to another project. To contrast, a steam engine locomotor is an ongoing engineering project, which requires constant upkeep to keep the train moving. A SaaS company's backend is a steam engine. The Unix util "ls" is a bridge.
If you read my second sentence above, I think we're in almost perfect agreement. Unless I'm misunderstanding you? My definition of "finished" includes provisions for bug fixes and important features.
What is or isn't necessary is for each developer to decide. Maintainers or communities that are too conservative can prevent innovation from happening, causing stagnation and loss of interest.
For example, I tried to implement a simple library/module system in the GNU bash shell. I was writing a lot of shell scripts and just wanted an easy, built-in way to load them from a standard conventional path. I didn't expect this feature to be controversial in any way. I went to their mailing lists to talk about it, and it culminated in other users describing it as "schizophrenic". I now view my decision to write bash scripts instead of using a proper language from the start as a huge mistake.
PCRE2 is specified through its implementation, which has so many edge cases and special flags that most people can't reason about what kinds of problems it could cause.
I really wish more people used PEG parsing. I wrote a library for it in Haxe that was surprisingly fast despite being interpreted: https://www.youtube.com/watch?v=CtNQvjyioGQ
Aren't there reimplementations of PCRE2? I think ripgrep has a pcre2 flag or something.
There’s a PCRE2 issue that Philip created last week, and which I submitted here, but it didn’t get much traction. https://news.ycombinator.com/item?id=40657607
This LWN article is helping to spread the word.
(I worked with Philip before he retired.)
Yeah, that's exactly the kind of stuff very few people are going to see. Even this HN post here is seen by relatively few people.
Also, about 90% of the people who respond will fall through. I'm sure people say "yes" with the best of intentions, but saying "yes" in a wave of enthusiasm is easy, and then spending a lot of hours on it ... not so much.
My favourite example is someone who said "yes, I'll help maintain", was added to the GitHub repo, made a new issue with a long plan on how to deal with the many open issues, and ... was never seen again. Never actually dealt with any of the open issues. I'm sure this was done with the best of intentions (and their profile said they're a student, so I don't want to judge harshly), but this was a rather marked example that made me laugh.
Seems like it would be a great feature on GitHub, or for a standalone site, if a critical mass of usage could be reached and the site were trustworthy (i.e. not trying to monetize the information somehow).