So the first step of this huge mess was: a social engineering attack. Attacking a tired, burnt-out open source project developer and peer pressuring him into giving more control of the repo to the attacker.
I do sometimes wonder if, by trying to be "nice" to users and trying to see the best intentions of commenters, many developers waste huge amounts of mental energy.
For context, I've really only worked on "fun" side projects, namely emulators and game remakes, where I've explicitly avoided any mention of donations or similar. Partly because it's intended to be a distraction from my job, not become part of it, and partly to avoid as many issues as possible around distributing said money within the project, and the possible copyright problems that such things often skirt.
And those sorts of projects tend to be extremely limited in progress not by total interest, but by skilled interest with the ability to contribute. I wish as many as 1% of the regular discussion participants in the community could contribute at that level.
There's a natural belief that unless someone is egregiously out of line, all discussion around a project from users is allowed, even encouraged.
But many people just can't seem to help themselves making "suggestions" (or, another reading, "demands"), and that can have a significant impact on the motivation of volunteers. I believe the vast majority come from good intentions, wanting a "better" result, but they are arguably mostly self-defeating in that I find the discussion around these things to be draining and demotivating. I want to have "fun" coding game remakes, not defusing drama in Discord threads.
But it's expected to have a "community" for all such projects, and not exclude non-direct contributors. Even writing it down here I'm worried I sound like I'm trying to curate a closed community of ego-boosting yes men.
But I sometimes wonder if allowing this sort of open community is better in the long run.
You don't need to wonder. Any long-term maintainer of even semi-popular open source projects will tell you that engaging with the peanut gallery is completely counterproductive.
Engage with people that have earned it in your eyes, whether by contributing to your project via code, assets, bug triage, writing a good and effortful bug report, whatever. Just ignore what the larger internet has to say about you and your creations.
What form should this ignoring take, in your eyes? I’ve never maintained an open source project but I’d imagine this is the hard part, how to politely decline the peanut gallery’s feedback. Do you disable GH issues? Leave them open and ignore them? Decline with some boilerplate language? How do you stop people from being mad that you’re ignoring them? (IME these types of people are likely to take things personally and start harassing you or complaining loudly in other forums…) These are the kind of things that stress me out just thinking about it, if a project of mine actually got popular.
Say no, politely, with your reasons. If they continue to argue: 1) unwatch the issue/block emails related to the issue (set up tooling for this); 2) if they continue to harass you, just block them; they are a waste of your precious time and energy.
And yes, they will go cry on reddit or HN or their own blogs and call you names. Ignore it. Be happy in the knowledge of the thousands or millions of people whose lives you have improved by your labor. Most of them won't bother to say thank you, but a few will; remember those. And be happy in the knowledge that the world is a better place because you exist in it, in however small a way.
The key is to realize that human psychology is by default not well equipped to deal with the internet. We tend to react much more strongly to criticism than praise. Recognize that in yourself, and gradually change it.
This is certainly a good idea for the individual who is running a FOSS project. But I think the projects with maintainers who bend over backwards for people are more often the ones that end up chosen and kept as dependencies when there are, for example, hundreds of compression libraries to choose from.
The distribution game is kind of stacked so that the maintainers most likely to burn out are also the most likely to end up important in the stack and in desperate need of help.
It makes sense that this could have been an easy mistake to make in the past. Responsive people seem more interested and engaged. But I wonder if it would be best seen as imprudent now? If highly responsive maintainers are susceptible to burnout, then maybe people should consider those other algorithms.
Especially considering the responsiveness here is responsiveness to the peanut gallery.
It would probably be good for the maintainers too. If being responsive is seen as a way to have a bigger impact, people might be inclined to become more responsive and burn out.
Don't even engage to begin with. Embrace elitism. Embrace the natural order. "The lion needn't be polite to the sheep." ;p Plus, they're probably Chinese intelligence agency bots.
Decline with some boilerplate language?
It works for corporate triage. If your project is big enough it might be your best bet. A relevant term is "help vampire". The original post describes how to spot them, and offers suggestions for addressing them.
I have a tag for my issue tracker that's called “maybelater” which is my polite version of “wontfix” (which I also have); I use it for stuff that might be reasonable, but I have no intention of (ever?) working on.
If you're working on open source as a hobby (as I do), you should make this very clear to your users. You work on stuff you want to, at a pace you find enjoyable. You don't take contributions you think will be hard to maintain. You make it clear you don't owe anyone anything. You also accept that, often, the best answer to an issue/contribution/disagreement is a fork.
One example of a useful technique:
https://serenityos.org/ apparently only makes source code available. There are no binary images of the OS to install
I think Andreas said this functions like a little test -- if you're not willing to build it from source, then you might not be a good contributor. Or you might not be seriously interested in the project's goals.
---
Likewise, my shell project provides source tarballs only, right now - https://www.oilshell.org/release/0.21.0/
It is packaged in a number of places, which I appreciate. That means some other people are willing to do some work.
And they provide good feedback.
I would like it to be more widely available, but yeah I definitely see that you need to "gate" peanut gallery feedback a bit, because it takes up a lot of time.
Of course, it's a tricky balance, because you also want feedback from casual users, to make the project better.
Based on my experience running open source projects for a couple of decades, not replying/engaging is the only way.
Don't reward incompetence with a response -- are they paying attention, have they done their homework? (paraphrasing Maria Popova)
You say, "Sorry, I don't have the bandwidth for this right now. I am closing this issue for now but it can be reopened if anyone else is sufficiently motivated to submit a patch."
How do you stop people from being mad that you’re ignoring them? (IME these types of people are likely to take things personally and start harassing you or complaining loudly in other forums…)
You don't (and can't) stop people from being unreasonable, mad, angry, or upset. That is their own choice. What you describe is bullying, plain and simple. Being nice to them because they might escalate later is counter-productive to your own mental health and gives them agency over your own time and energy. With bullies, the only way to win is to not play the game.
I think this is a wider social problem of the internet.
Most people want to be helpful to others. This works fine when it is within a limited number of real-life interactions with mostly reasonable people. It can get wrecked by one or two unreasonable people, but the more fundamental problem is that it does not scale well, and the internet takes it to a far greater scale.
It is one reason so many of us essentially work for free for FB and the like: we do things to help others (advice and information), and the side effect is creating content and data for FB.
Most people want to be helpful to others. This works fine when it is within a limited number of real-life interactions with mostly reasonable people.
I don't think it has anything to do with the internet (whose scale poses many problems, but not this one): anyone who has founded a product/startup can tell you that people will give you "advice" and "suggestions" all the time when you talk about your project, and "don't listen to random suggestions from people you've just met" is one of the first pieces of advice you're given when talking to other founders.
It doesn't even need to scale above a handful of people to be a nuisance that harms your project if you listen to them.
Even writing it down here I'm worried I sound like I'm trying to curate a closed community of ego-boosting yes men.
There's this weird trend that's been happening for some time now that tries to make non-fully-open groups look wrong, but honestly, has anything ever actually been done by such fully open groups? As far as I can tell, a "closed community" is a necessary (but not sufficient) condition for any kind of quality creative output (this includes individual projects as single-person closed groups). Having a core of creators surrounded by a distinct group of fans and secondary contributors is a natural organization that forms spontaneously.
I think Linux kernel development is quite open. But Linus is famous for telling people directly, in not-so-nice words, that they are not helping.
I don't think insults are the solution, but giving a clear no is a skill many people struggle with. And if some cannot cope with that, blocking individuals also works.
My understanding was that Linus is cooling his jets a bit?
Yes, he wants to stop using insults, which I think is good. But he still won't tolerate BS, or timewasting.
It is also a cultural thing; being from Southern Europe, it took me some time to get used to the Northern European approach of being direct without any ifs and buts.
Linux development is distributed (largely, between commercial entities), not really "open". If you aren't intimately familiar with the culture, you're likely to be ignored unless your issue also affects somebody who is.
My layperson’s opinion is that anything to do with gaming is particularly riddled with people that interact poorly with maintainers. Again, purely my opinion: gamer culture invites a particular sort of Dunning-Kruger-prone ‘power user’ type. Every gamer community carries with it a corpus of baseless, fictitious, technical information. “The developers didn’t do this because x”, “it’s ridiculous that they didn’t just y”, “They just aren’t listening to users!”
I’m not much of a gamer at all. But the once in a blue moon where I dip my toes in, it’s not that long before I’m met with a forum post or reddit comment making some technical assertion that sounds incredibly presumptuous if not downright unlikely.
You need to look no further than what was happening on the xz commit threads post disclosure; there were multiple people preaching as to how to do "real" software development.
I think what you said is true but the major reason is simpler: there are way more gamers than developers.
So if your project's audience is gamers, of course it's going to attract more people which also means lower quality community almost automatically.
The gamers seem to have particularly short fuses. It's like they skipped the socialisation phase of childhood development.
My favorite know-nothing gamer meme these days is "bad optimization". What they mean is that the performance is worse than what they would expect (?) on a given hardware configuration and game settings.
They haven't poked around with RenderDoc, they don't have the debug symbols to profile the CPU code, they have zero idea what kind of work the software needs to do, they simply know the lazy devs have failed to "optimize" their game.
The points you make aren't unreasonable.
It is necessary to establish clear boundaries of what can and can't be provided by the maintainers. If not done at an earlier stage of the project, the support burden becomes too much to bear at which point the maintainer transfers ownership, and the project suffers from catastrophic consequences such as the xz backdoor we're talking about here, or other cases where the project mostly stalls and serves as an ego-boosting platform for the new maintainer, as was the case with PhantomJS[6] before it was shut down.
This can also happen in your life, where a "friend" sees that you possess a certain skill, and then gradually tries to push an inordinate amount of their personal work related to this field onto you.
Personally, I think it's best to use an approach with extremely clear communication as to what the maintainer can and cannot provide. This can be seen, for example, in yt-dlp[1], where the consumer is clearly informed upfront that not providing detailed information as requested will lead them to block said consumer; or sqlite where their position regarding contributed patches[2] and support[3] is similarly made clear.
Having a shouty BDFL like Torvalds can also help improve code quality[4] and questionable contributions[5], though it is better that the shouty BDFL makes statements that are professional and do not show as much aggression; so for example, "Mauro, shut the fuck up"[7] would become "Mauro, your response is completely unbecoming for a Linux kernel maintainer, and is not in line with the promise of not breaking userspace."
[1] https://github.com/yt-dlp/yt-dlp/issues/new?assignees=&label...
[2] https://www.sqlite.org/copyright.html
[3] https://www.sqlite.org/support.html
[4] https://www.theregister.com/2024/01/29/linux_6_8_rc2/
[5] https://cse.umn.edu/cs/linux-incident
"Mauro, your response is completely unbecoming for a Linux kernel maintainer, and is not in line with the promise of not breaking userspace."
This is just PC corporate speak. Linus is an actual human being who says what he really means.
I'm assuming the apology[1] from Linus must have come off the back of complaints raised to the HR departments of companies with active contributions to the Linux kernel. Because this was Linus, this went over relatively smoothly for him, but for another person, this may not have been the case.
There is also no telling if the person is interacting in good faith and just doesn't know, in which case the aggression is a bit rude.
You can tell people to fuck off without saying that explicitly, and it also allows you to save face in case the situation isn't what you expected it to be.
[1] https://arstechnica.com/gadgets/2018/09/linus-torvalds-apolo...
That LKML episode always makes me feel pretty bad. Mauro Carvalho Chehab maintains the ZBar project now. I briefly worked with him when I sent some patches in for a feature I wanted. There were things I had to address that I hadn't even considered and he worked with me all the way, ended up learning quite a bit about DBus. He's an awesome maintainer and set the bar for me for what qualifies as a great experience in contributing to free and open source projects.
Usually, I try to use projects in languages I work with so if the issue is not important enough for me to make a PR, then it’s not important enough. One thing I wish were possible is to sponsor an issue. I don’t usually sponsor open source projects, but I would likely sponsor a lot of issues just to speed up resolution. I think this could move a lot of projects in the right direction. Depending on difficulty, the maintainer could define a budget to have it in the roadmap with a reasonable timeline defined by them, maybe measured in quarters.
As long as that money is held in escrow. Otherwise vaporware would make a killing.
You could make them supply a test case that unlocks the escrow.
This both sets a bar that the bug creator has to pass, filtering out some of the drive-by time wasters, and gives a reward to the developers for fixing it.
An issue is that the maintainer may have other priorities. By sponsoring an item you "force" them to look at that sooner.
I've been thinking about contributing to some open source projects, and starting a couple of things of my own, and the community aspect is a significant downside for me, and I say that as somebody who was quite active in several open source projects and things like the IETF years ago.
I'm not saying the community shouldn't be there, it's just that the barrier to entry is so low that people can "contribute" with very little effort. That's become a significant problem as Internet usage has grown (along with acceptance of shit-posting trolls as just being a cost of anything online), so the signal/noise ratio is awful and we end up in an easy slide into controversy and spite, with the good actors wanting to walk away.
I think this is why I don't do social media any more. I think it's why I'm not on the mailing lists I once was, or have interest in discord or a great many open community sites. It's just exhausting. HN, slashdot and some group chats in Signal with people I have known for many years is the closest I get to anything like that any more.
I've thought about doing some blogging without comments, with a means to contact me directly and privately another way. I've thought about making contributions to other projects with my only conversations being about my own commits, but that doesn't work for leading my own projects if I want to be "nice".
I think leading a project for me might involve _not_ letting others contribute to my version without clearing some arbitrary bar. No mailing list, no discord, no developer community. That doesn't feel optimal on some metrics, but it might be optimal for the metrics I care about.
I think this is why I don't do social media any more
Guess what you’re doing here, buddy.
IMO, it's not social media unless there's a social graph. HN has no way to follow or otherwise establish some relationship with specific users, so it's just a forum, not social media.
Classic phpBB-style forums had smaller populations; there might not have been a follow feature, but the relationships formed naturally. HN doesn’t really feel like one.
I don’t really know how I’d classify the differences specifically…
I'm always surprised at how much developers and maintainers will willingly put up with. For example, if you visit the Matrix feed for the Asahi Linux project, you'll see hordes of trolls and time-wasters that are regularly engaged by Asahi team members who are giving them the benefit of the doubt, when they really should be removing those posts without comment or acknowledgement.
I believe this masochistic behaviour stems from a combination of both general optimism and dedication to a particular cause. The more that developers are passionate about what they are doing, the more they are willing to engage with the peanut gallery.
I recently discovered a bug in the react-hook-form library.
The author there does the absolute minimum of interaction.
Issues that don't exactly follow the template or that lack a repro get closed immediately without any comment; actual bugs also see very limited comment or openness to discussion.
It felt a bit weird at first, but I do get the author, and the library is successful, maybe exactly because of this approach.
It made me reconsider some of the effort I spend in my open source projects with useless issues / comments.
Boomer hat: it's just toxic positivity, and the frustrating trend lately of assuming that everyone is equally skilled when writing software. Everyone's input is valid, or else you're just being negative and overly critical.
I'm not saying everyone needs to be Linus Torvalds circa 2012, but I do think more people need to be a bit less precious and sensitive, especially when receiving direct communication about their abilities.
You don't need ego-boosting yes men, you just need to work with more experienced folks, and that's okay. But the problem is that there is an army of people online who will take offense to that, and I don't know what the solution is. Best of luck.
Another old person opinion: the issue tracker of an open source project needs to be firmly anchored into the needs of the maintainers and not the users. The argument that bugs/issues need to be kept open forever unless they're fixed and the rejection of WONTFIX as a reason for closing them needs to die in a goddamn fucking fire. When the maintainer decides to close an issue because they're just not getting to it in this lifetime the users need to suck it up.
Thank you for participating in the open source community by sharing your opinion. Your contribution is valuable but not consistent with the direction we are currently going in at this time. Please don't let this stop you from further participation.
There are no stupid contributions, just stupid people.
There's a natural belief that unless someone is egregiously out of line, all discussion around a project from users is allowed, even encouraged.
I mean sure, but how do you intend to stop people? You can't stop them from setting up a forum somewhere.
This is why GitHub Discussions is useful though. A feature request in a ticket is something that needs to be handled. A feature request in a discussion is just some users talking about stuff they think would be cool.
Thanks for writing this! Your comment and the article have prompted me to make explicit the social contract for my Open-Source project: https://github.com/cljoly/.github/pull/4/files#diff-eca12c0a...
With a personal repo I maintain, I once had someone look me up on LinkedIn, and then file a support ticket with my employer demanding I reply to his issue on GitHub. He was asking for handholding following a clearly documented process and ignored all the warnings.
Needless to say, I did not assist him in any way and closed the issue as wontfix.
I've also had people demand that I expand the narrow scope of my project to cover a whole class of devices with completely different APIs, instead of the sole focus on the series from the same manufacturer of devices I own and use. Closed as wontfix as well with a clear statement that I will not be expanding the scope, but am more than happy to link to their project covering it.
Apologies in advance for the swear words, but this deserves swear words.
WHO FUCKING GRANDSTANDS ENDLESSLY ABOUT CYBERTHREATS AND THE NEED FOR BETTER SECURITY?
WHO FUCKING SITS ON BILLIONS IF NOT TRILLIONS OF DOLLARS IN FUNDING?
That would be the FUCKING GOVERNMENTS OF THE FIRST WORLD. Whose modern economies increasingly rest on open source to a degree they possibly don't understand.
Yes, I understand part of this comes from funding from another part of these governments. Oh well, cat and mouse. Once we wrung our hands over these governments having power over these things. Well, these days we are facing far more totalitarian state threats and totalitarian wannabes in the private sector.
Government save me, as the less totalitarian option!
Linux and BSD the operating systems should UNQUESTIONABLY be funded to the degree of multiple billions per year by, I'll just put it, "NATO". Here's the true value of open source in this model: it is the perfect public record, unlike untold other aspects of government output.
You must produce public vetted code in Linux/BSD.
This person should not have been "a person". This should have been 5 people, funded likely at least 10-50k/year for their roles.
The described "hit" by the intel services of course is frighteningly easy. But what is most horrid is that this poor person is being subject to very common abuse that comes from non-state-actor manipulation. The article says it:
"this is where our software comes from" -- this poor dude that is trying to help and being abused by state actors on one hand, and ridden thanklessly and for no monetary or fame benefit by massively deep pocketed corporations, governments, and billionaires.
For all of Linus's famous swearing abilities, he hasn't used them to address this publicly. He should be dressing down all corporations and governments that need Linux. There isn't a "going back" from Linux at this point to some closed-source option.
Linus and Linux have massive sway here. They could threaten to stop work on the kernels.
I guess fundamentally, this makes me angry because this is basically bullying the nerd in high school to get his homework/answers. The nerds need to realize their power here.
Other vulnerabilities snuck through the commit chain? I get that, usual spy stuff. They actually kind of messed up here. Usually it is within the "cordiality" of things, just quietly ride the overworked unpaid nerd to benefit. Here they inadvertently exposed the real truth of everything: the hidden contempt we treat these people with as a society.
Our no-longer-reasonable requestor also offers a suggestion. Notice there is no offer to actually help.
Help in maintainership how? Patches had already been made and were awaiting review and merging. This was up to the maintainer to do, and the requestor couldn't help with it.
Help in maintainership how?
Pay.
Huh? So anyone who asks/says something in OSS, should just throw money into the ring to have an opinion?
No. But if the maintainer is burning out and doesn't have free time available for it, paying them so they can take time off to actually work on the thing is a nice way of fixing issues.
I don’t understand this view of “If I give someone unsolicited money, they will do what I want”.
You have it backwards: The notion that open source developers can not or should not ask for monetary compensation for their work is what leads to their exhaustion and their project's demise.
Of course if the developers don't want to be paid, then that's that. But otherwise, there is a very heavy atmosphere in the open source community of excommunicating anyone who dares to ask for payment as heathens of the vilest order.
there is a very heavy atmosphere in the open source community of excommunicating anyone who dares to ask for payment as heathens of the vilest order
I fully agree that forcing payment or using dual licensing is unfortunately heavily frowned upon. But a voluntary Patreon/donation option is perfectly acceptable to the same anti-payment people.
Not everyone can take time off from their day job just because somebody paid them a nominal amount of money.
Besides, the maintainer in this case was already taking time off regularly, not to work on xz, but to get away entirely from any kind of programming work. Throwing money in his general direction probably wouldn't have helped with the burnout, unless you were offering to help him hire somebody.
The more money we give, the more viable it becomes for maintenance to become their day job. It's very likely that more money here would've mitigated the burnout. Aside from just being able to quit their actual job and focus on their passion project, it's acknowledgement that the world finds this work valuable. In many cases, burnout comes from a lack of recognition, or the sense that you've done all this work and nobody really cares.
No, maintaining such software should be a paid job. Not that that guarantees all, but it could be a step.
Who is the employer? I’m all for supporting open source and am a maintainer and contributor myself, but I don’t think blindly stating it should be a job is the solution.
That assumes the maintainer wants to be paid. There are plenty of us who maintain FLOSS projects who do it for other reasons, and any monetary exchange would burden us, since it might create pressure that this is now a job and you have to execute on tasks - there are enough headaches handling other things as it is.
Agreed - I already have a job where people come asking. If developing OSS came with obligations, I’d do something else instead.
That’s of course a fine approach for you. I think a big part of the problem in this case is not with the maintainer, but with the critical software that took a dependency on this hobby project (in the maintainer’s own words).
Agreed.
Pay is not the only thing regarding maintainership.
Time is another factor. It takes time to maintain software, improve the codebase, add features, etc. Then there are the other tasks such as answering questions, reviewing PRs, triaging bugs and feature requests, etc.
So getting more contributors, people to assist with bugs and bug investigations, etc. is arguably more important. Especially for projects developed by a single person, or a small number of people. That's the avenue that opened up this attack.
It is easy to get burned out implementing features that end up being more complex than expected, interacting with users that want different things from a project, and having a growing list of issues and PRs. That's the scenario that happened with xz, and is common with popular software that is maintained by a solo developer.
The other aspect to this is the direction the maintainer wants to take the project in. If another maintainer has a different direction in mind, that's going to cause tension.
Time is money
Time and money are not actually 100% fungible, but there is a lot of truth to it, especially given enough money.
Maintainers are human. They need to eat, to sleep, to visit the doctor, to rest when they get sick, to participate in activities that reduce stress and foster human relationships. Money makes all of that much easier.
So getting more contributors, people to assist with bugs and bug investigations, etc. is arguably more important.
Pay is how you get more contributors.
To me it would make more sense to add more eyeballs looking at what gets committed. For example, in this case who would you pay? The new (co-)maintainer was compromised and it would not help to pay him. Thus, in order for payments to help one would need to have some assurance that the person getting paid is not compromised. The easiest way to have some level of such assurance seems to be to pay one's own employees. This is of course not bulletproof, but certainly adds another layer to pulling something like this off.
At the same time, this attempt nicely illustrated that the chain is only as strong as the weakest link since, as I understand it, no part of the backdoor was committed to the git repository in cleartext. Instead, the part of the backdoor that was at least somewhat identifiable was only included in the tarballs that would be downloaded and used by Debian/Fedora when building the packages for these distributions, thus giving a very nice trade-off between the chance of someone detecting what was going on and the potential impact of the backdoor.
The new (co-)maintainer was compromised and it would not help to pay him
Depends upon your perspective.
Hacker: "Oh it was hilarious, you should have been there! They donated $1M to the project after I hacked the code, so I took the $1M too."
Collin gives his thoughts on both PRs and essentially says they just need more cross-arch testing with a focus on performance. The requester could absolutely help by doing some of those tests and providing the results back. Seems very straightforward to me what Collin needs help with.
If I were a chinese hacker trying to do something evil, why on earth would I use a chinese handler/username? Wouldn’t it be better to use an English/European name to gain (even more) trust from open source maintainers?
On the other hand, if I were a non-chinese hacker trying to do evil, then using a chinese handler does make more sense (China is evil, blah blah blah)
Something I've observed as an English speaking immigrant is that in general, the US/UK is very forgiving when it comes to foreigners trying to speak English. Where certain types of phrasing and harsh words would not be tolerated with first language English speakers, the benefit of the doubt is given to 2nd/3rd language speakers because they
1. Might not have the vocabulary to express themselves correctly
2. Might not understand the nuance and connotations that their word choices imply
I've seen my colleagues get frustrated in meetings while simultaneously having to assume good intent.
Choosing names like the bad actor(s) did gives them an advantage in this scenario.
The English used in the commit messages I’ve seen was pretty much perfect, but unfortunately the repo has been suspended now.
I agree. Either native English or very very good second language. Given the weird laziness of some parts of the attack I'd put my money on native English.
It sounds to me like you're trying to seek a correlation too eagerly. It's not obvious whether it's more implausible that he's a lazy Chinese programmer with very good English, or a lazy native English-speaking programmer who wants to pretend to be Chinese.
The former is more implausible because Chinese people with that level of English are quite rare, whereas English people able to create a fake Chinese username are not.
I feel like you're making three statistical mistakes at once.
- First, we're not discussing the English level of the general Chinese population, but specifically of the group that's also good at programming. That conditional probability is much higher.
- Even if we did speak of the general Chinese population, that is a lot of people so something rare in that group is true for many people still!
- It's not like everyone in the US has great English either. (Though open source maintainers are again probably more likely to.)
A mirror is here[1] for someone who wants to read the commit messages, though as I understand it, it is insufficient for reproducing the backdoor at compile time because it is missing the m4/ files required for this purpose.
If someone has the additional details to reproduce the backdoor, please let me know and I'll add these files in the repository.
Since we're reading tea leaves:
They also used a sock puppet with a seemingly German name (Hans Jansen). However "Hans" has not been a popular baby name in German speaking countries for many decades. You'd expect any "Hans" to be over 70 or 80 by now. Unless they are American, like Hans Niemann.
This could be a cultural oversight on the attacker's side, which points towards any culture that has a widespread belief that typical German names are Hans or Fritz. From my experience, that is especially true for English-speaking countries, mainly because of stereotypes displayed in Hollywood WWII movies. I wouldn't be surprised if the cultural perception of Germany was different in China.
Edit: some clarifications
They also used a sock puppet with a seemingly German name (Hans Jansen). However "Hans" has not been a popular baby name in German speaking countries for many decades. You'd expect any "Hans" to be over 70 or 80 by now. Unless they are American, like Hans Niemann.
Jansen is a Danish/Norwegian surname and Hans is still fairly popular in Denmark.
https://www.dst.dk/en/Statistik/emner/borgere/navne/navne-i-...
Interesting. Jansen is pretty common in Germany as well. How old would a typical Danish Hans be? From what I can see in [1], Hans hasn't been a popular baby name for decades in Denmark either.
Edit: Another country that seems to have retained popularity for "Hans" for a longer time than Germany is the Netherlands. According to [2], its popularity seems to have gone down significantly as well, with a further sharp dropoff in the 90s.
There are plenty of other names to choose from if you want to create a fake personality. It's curious they chose one that sticks out among the age cohorts you would expect participating on Github.
[1] https://www.dst.dk/en/Statistik/emner/borgere/navne/navne-ti...
Interesting.
Ironic.
> This could be a cultural oversight on the attacker's side,
100% this. I don’t know why this obvious angle is being ignored. The other puppet accounts had Indian and German names - it’s very clear they were trying to obscure their real origins.
The hacker's name "Jia Cheong Tan" actually includes phonics from different variants of Chinese used in China, Singapore and the Hong Kong region, so it is unlikely to be the name of a real person, and the name probably doesn't indicate anything. However, since the Hong Kong and Singapore variants are rarely used, the group behind the hacker probably has good knowledge of Chinese linguistics. Either way, the fact that the hacker's account shows GMT+8 and that the hacker's activity is mostly in Southeast Asia working hours (read GMT+8) would only make sense if the hacker originated from an Asian country. I doubt that the group of people behind the hack could consistently stay up late at night for 2+ years just to fake the timezone.
that the hacker's activity is mostly in Southeast Asia working hours (read GMT+8)
Opposite is true. His activity is focused around midnight in UTC+0800, with significant activity at 2AM in that timezone.
Seeing an Asian (any part of Asia) name wouldn’t give me any pause in the slightest. I don’t want to start anything here, but aren’t they disproportionately represented in the American software industry as compared to their percentage of the population?
What would raise my hackles would be someone with a name that codes western who had grammatical idiosyncrasies common to ESL Asians.
So, a couple of thoughts: this identity was likely chosen on the assumption that they wouldn't be caught, not that they would be. Thus, choosing a Chinese-sounding name to frame China or to go along with the narrative of China being the evil communist country (etc.) doesn't really make sense. Also, even if they did that, only westerners would be fooled; as others have pointed out, "Tan" is a dialect surname (Hokkien, maybe), pointing to someone in Southeast Asia, not mainland China.
A better assumption imo is they chose their identity first and foremost with the intent to garner trust first with Lasse Collin.
If I were a chinese hacker trying to do something evil, why on earth would I use a chinese handler/username?
1) Why not?
2) You'd likely want a relatively common-sounding name, I suppose. Chinese names are relatively common-sounding.
According to a comment on another thread it is not a mainland Chinese name, so that might still be chosen to mislead in a more subtle way. https://news.ycombinator.com/item?id=39869047
Western and Chinese are not the only possibilities; it could be Russian, or Indian, or pretty much anywhere else that engages in intelligence gathering operations.
At the end of the day this is such low hanging fruit that I don’t think that it’s even worth paying attention to. This was almost certainly a state actor and this detail would be blindingly obvious to all state actors capable of pulling this off. You could choose to try to dissect the 10 dimensional game of chess involved in picking fake names, or you could focus your attention on more meaningful heuristics.
I'm starting to feel that one of the lessons here is that individuals invited into trusted positions should be identifiable. Jia Tan is not a real person. We don't know who they are, so there is no way to hold them accountable.
While all of this is of course quite serious, we also need to remember that these types of things are actually fairly rare. Last major one was that JS event-stream thing, and that was in 2018 (5 and a half years ago).
I don't think this is really a structural problem requiring these kind of sweeping changes; it's just an occasional rare incident.
No, the problem is that we don’t know how common it is.
We only know about the ones found.
Or there isn't anything else to be found.
It's more or less impossible to prove the absence of these type of bugs, so you can always say "we only know about the ones found" because that will always be true.
Either way, you're going to have to do better than "this could perhaps possibly maybe be a more common problem" if you want such a huge sweeping change as maintainers of open source projects to "be identifiable". What does that even mean in practical terms? They upload their passports? To who? Who and how do they verify this? How do we prevent edited passports? What about privacy? etc. etc. etc.
All for something we don't even know is a problem.
As far as we know, they're rare...
I despise real name policies on social networks, but it's a very different thing to be anonymous while in a position of public trust.
"built a compression lib some people used" seems a stretch for "position of public trust".
How about some responsibility for the IBM devs who could have contributed to the library they decided to paste into sshd?
Reputation works with pseudonymous identities too, and those have security upsides (eg can't be as easily pressed into service of others by extortion or rubber hose). And of course privacy is a value in itself.
Think of it like politics: politicians are public figures who, by virtue of becoming a public figure, give up many of their rights to privacy. Why? Because they have power, and the public has a right to know who exercises that power.
For security-hardened distributions, I can easily imagine a security control where contributor identity must be public and publicly verifiable, and to reject code which cannot be reliably attributed. Don't like it? Contribute to software with less impact instead.
You can build multiple pseudonymous identities in parallel. If one is burned, it doesn't matter, you still have 9 others.
As much as I support the privacy of maintainers I do feel like this is necessary to raise the bar for these types of social engineering attacks.
However that would've likely done little to impede this attack if it is backed by organized crime or by a state as is being speculated. It is trivially easy for those types of actors to simply use a stolen identity or create an entirely plausible one out of thin air.
Such identity checks would also hurt honest contributors who want to hide their identity for other, legitimate reasons.
It is trivially easy for those types of actors to simply use a stolen identity or create an entirely plausible one out of thin air.
Can you provide a historical example of when something like that has happened?
It sounds like hyperbole to say that it is 'easy' (even for a state) to impersonate a real-life identity or create one for a professional developer with a multi-year work history.
Humans can always be compromised. This specific case can only be solved by improved tooling, workflows, visibility, runtime environments and so on.
I'm under no illusions that this is a totally new thought, but for me first with cryptocurrencies, then "AI", and now this, the fundamental issue that the biggest problems come back to is one of trust. Cryptocurrencies try to code around it, LLM boosters try to dazzle you into it, and the attacker here half-succeeded in laundering it. The most consequential (rightly or wrongly) technologists of our time are failing to properly think about trust. In this case, the failure was 100% understandable – I have all the sympathy for burnt-out and (almost always) unpaid open source developers. In each case, capital either lures people away from thinking about trust, or neglects and exploits people to the point that they're broken down.
The most consequential (rightly or wrongly) technologists of our time are failing to properly think about trust.
It is implicit in your statement that you know some "proper" way to think about trust. If it is the case, please do share with us your thoughts on trust.
I didn't read it that way. I read it as in "actually giving it thought" as opposed to just barely acknowledging it or even at all.
So, not so much a judgement of the quality of the thought but rather of its existence in the first place.
Yes, exactly this. I certainly don't claim to have any big answers, but at the same time I think these problems with trust were very clear very early on.
Trust is a lot easier if real-life identities and real-life consequences are involved.
Unfortunately the Open Source community has a long history of ties to anonymity culture, which is fundamentally untrustworthy.
A few minutes' reflection on "who can I trust" would lead one to discard the extremely egalitarian ethos built up around open source software. But casting out meritless ankle-biters will immediately lead to being attacked as an unreconstructed egotist who needs to be hung from the nearest Code Of Conduct.
I have never seen a code of conduct that would prevent a polite rejection of whatever new suggestion or help to a project.
I have never seen a Code of Conduct that emphasized that the overriding goal of all social interactions under the domain of the project is to deliver a quality product.
The conversation is about trust in software systems, not some code of conduct grievance you want to wedge in here.
The most consequential (rightly or wrongly) technologists of our time are failing to properly think about trust.
Because that's an insanely hard problem that's outside our area of expertise.
Consider how much money governments spend on all the red tape they add to increase trust. If there was naturally perfect objective alignment and trust I bet any infrastructure project would cost about 10% of what it does.
It is hard, but I think corporate open source users and entrepreneurial software developers have a moral responsibility to do more than I see being done in the areas that I listed. To be crystal clear, in this case where I don't think Lasse is at all culpable, the responsibility would be with the companies that profit from the work done on xz-utils.
I think the biggest problems come from single points of failure. This wouldn't be as problematic if _everyone_ didn't use the same libraries. I guess you could argue that's similar to trust.
In any case, I agree technologists aren't thinking about it because the only way to extract massive amounts of wealth is by centralization/monopoly. Hard problem though; the appropriate amount of redundancy is hard to know.
So I see software as a form of literacy. And we treat some literacy as “fun” (most books) yet privilege some literacy (law, a DMV policy manual).
Laws are highly regulated. The DMV policy manual is, like most government day-to-day policy, run through "managers" who report to an elected official (i.e. it can be very well written and considered, or made up on the fly after a bad newspaper headline).
Anyway I think I am saying there is likely to be a consolidation of FOSS - where we spend a fortune trying to bring it under some “approved and controlled” process.
We can call that trust.
I am not sure.
My takeaway from this is that people are still far too blasé about introducing hard dependencies and complexity, even after the left-pad incident as a warning. OpenSSH is a massive wall of code. Such complex systems are inherently untrustworthy to me, no matter what language they might be written in. Even with earnest devs there are still more opportunities for mistakes.
Agreed, which is why IMO ssh in production is a terrible idea that reveals even worse problems.
A server with ssh generally implies there is a shell, and cli tools, and an administration model that involves humans connecting to servers to manually update them in place like pets.
Having a full workstation-optimized distro like ubuntu or debian with hundreds of packages constantly shifting and updating as a critical production server is wildly high risk in terms of both reliability and security.
I understand this is how most sysadmins were taught but it is a 70s unix mainframe mindset from a time before security was a thing and everyone on the internet was a good actor.
Only a workstation or dev server should have tools for humans like ssh installed.
Production servers should be hardened immutable appliance kernels with read only root filesystems that verify and run signed containers, or run a tiny shim init system in a couple hundred lines that spawns a single application specific binary you trust.
No production systems I launch today even have xz installed, or a package manager, or a shell.
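To make the "tiny shim init" idea concrete, here is a minimal sketch in C (my own illustration; the "/app" path is hypothetical and real shims such as tini also forward signals): PID 1 launches one trusted binary, reaps orphaned children, and does nothing else.

```c
/* Minimal PID-1 sketch: exec one trusted binary, reap orphans, exit with it.
 * "/app" is a hypothetical path; no shell, no package manager involved. */
#include <errno.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t app = fork();
    if (app == 0) {
        char *argv[] = { "/app", (char *)0 };   /* the single application binary */
        execv(argv[0], argv);
        _exit(127);                             /* exec failed */
    }

    for (;;) {                                  /* as PID 1 we inherit all orphans */
        int status;
        pid_t pid = wait(&status);
        if (pid == app || (pid < 0 && errno == ECHILD))
            break;                              /* our app exited (or nothing left) */
    }
    return 0;
}
```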
Production servers should be hardened immutable appliance kernels with read only root filesystems that verify and run signed containers, or run a tiny shim init system in a couple hundred lines that spawns a single application specific binary you trust.
And then you need to debug something. What do?
Have good logging and virtual machines that can replay those logs offline?
One of many viable methods depending on the application, yes.
Debug it with a binary with debugging enabled on a debug dev system. Just like you would any other immutable firmware appliance.
You could also in some cases have a debug container you pull in on demand, say if the host OS is an appliance-style k8s runtime like TalosOS.
Except the debug environment isn't _quite_ like the production environment, is it? Now you've got it working on the debug environment but weirdly broken on prod still..
This wasn't a vulnerability in openssh though, this was from systemd, where I think your point in fact is stronger.
It’s not a vulnerability in systemd either, it’s a vulnerability in xz, which libsystemd depends on, and libsystemd is only integrated into OpenSSH in certain distributions, and only to avoid writing ~5 lines of C code to write the string “READY=1” to a socket specified in an environment variable.
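For the curious, here is a rough sketch of what those few lines could look like (my own illustration, not the actual OpenSSH patch): read NOTIFY_SOCKET and write "READY=1" to the datagram socket it names.

```c
/* Hypothetical illustration: systemd readiness notification without libsystemd.
 * NOTIFY_SOCKET names an AF_UNIX datagram socket; a leading '@' means the
 * abstract namespace. Errors are deliberately ignored in this sketch. */
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static void notify_ready(void) {
    const char *path = getenv("NOTIFY_SOCKET");
    if (!path || !*path) return;                     /* not started by systemd */

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    size_t len = strlen(path);
    if (len >= sizeof(addr.sun_path)) return;
    memcpy(addr.sun_path, path, len);
    if (addr.sun_path[0] == '@') addr.sun_path[0] = '\0';  /* abstract socket */

    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0) return;
    sendto(fd, "READY=1", 7, 0, (struct sockaddr *)&addr,
           (socklen_t)(offsetof(struct sockaddr_un, sun_path) + len));
    close(fd);
}

int main(void) { notify_ready(); return 0; }
```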
Systemd itself doesn’t even use lzma/xz… libsystemd does, and that’s a library meant to help other software integrate with systemd. It’s not really the same thing.
Awkward, but I heard very recently that open source is "not VC-backable, go away". Maybe it will change now, after the infrastructure pillars of the modern world lie in ruins in front of those many SaaS/AI/web3/cloud/whatever investors.
Why should VCs back oss infrastructure directly? It doesn't help the VCs and only creates perverse incentives for the oss projects.
The SaaS/cloud/etc. companies themselves should fund the projects they depend on. They actually know what those are and don't have to force their own monetization/growth models onto the projects.
The SaaS/cloud/etc. companies themselves should fund the projects they depend on.
There is a bit of a free rider problem it seems.
Noooooooo you can’t ask them to pay! You’re supposed to do this for the good of the community only! They ought to be able to take whatever you do and resell at their leisure, after all you made it open source!
And don't forget that GPL is bad, only BSD and MIT are the real free licenses because they let us take the code and sell it on without forcing us to contribute back.
The SaaS/cloud/etc. companies themselves should fund the projects they depend on.
If anything, the same "valuable" "community" members, many of which are such SaaS businesses and other freeloaders, seem to shout at projects trying to ensure their survival through formal ownership structures and license changes.
Sure, maybe VCs specifically shouldn't be where the money comes from, but this labour NEEDS to be paid, otherwise shit like this will keep happening. Dear god, I looked it up just now, but next week will be the 10th anniversary of Heartbleed being introduced to the world. A decade, and really little has changed.
Definitely. Not talking about the VCs specifically, but the investment chain. A typical free rider, directly or indirectly.
“Community desires more”
At which point if “the community” consists of one guy doing everything and a couple whiners making demands, fuck it.
Well, in this case it's even worse. The "community" here in all likelihood was overrun with sock puppets of a malicious state-level actor.
I also don’t really understand the notion of a community around a fucking data compression program. “Bro, do you even compress?”
Is there a community of ½" plumbing pipe enthusiasts?
As someone who has a community who follows my stuff and demands more....
I somewhat get it.
I became the central spot for this particular product. I get occasional emails from people saying they did this same thing, but on a smaller scale. I have scale, so I can do things much bigger than anyone else.
I have a million people using this, vs a startup who has ~100.
(Btw, I'm barely profitable, I was selling to frugal people...)
This line in light of what we know now makes me think that while OP has a point about open source, this thread and others like it definitely were sockpuppets trying to goad Collin into transferring maintainership quickly.
I by default distrust any statement that claims to speak on behalf of some community.
Would ssh servers with port knocking set up be safe from this backdoor?
I'm not sure I got it correctly, but it seems the RCE can only be performed after connecting to the ssh server; if the port is hidden behind a reasonable sequence of tcp/udp knocks, then it won't happen?
I've been using port knocking on ssh servers, and it definitely does not replace proper ssh configuration, but so far seems like a cheap extra layer of defense that might either prevent or give extra time to respond when these ssh vulnerabilities appear.
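For anyone unfamiliar with the mechanism, here is a rough sketch of the client side (server address, ports, and timing are made up for illustration; a real setup would use knockd or similar, with a firewall rule that opens the ssh port after the right sequence arrives from your source address):

```c
/* Hypothetical port-knock client: fire a TCP SYN or UDP datagram at each
 * port in a secret sequence; the knock daemon on the server then opens the
 * real ssh port for this source address. */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    const char *server = "203.0.113.10";                 /* example address  */
    struct { uint16_t port; int udp; } seq[] = {         /* example sequence */
        { 7000, 0 }, { 8001, 1 }, { 9002, 0 }, { 6003, 1 },
    };

    for (size_t i = 0; i < sizeof seq / sizeof seq[0]; i++) {
        int fd = socket(AF_INET, seq[i].udp ? SOCK_DGRAM : SOCK_STREAM, 0);
        if (fd < 0) continue;
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(seq[i].port) };
        inet_pton(AF_INET, server, &addr.sin_addr);

        if (seq[i].udp) {
            sendto(fd, "", 0, 0, (struct sockaddr *)&addr, sizeof addr);
        } else {
            fcntl(fd, F_SETFL, O_NONBLOCK);              /* send the SYN, don't wait */
            connect(fd, (struct sockaddr *)&addr, sizeof addr);
        }
        close(fd);
        usleep(200 * 1000);                              /* brief gap between knocks */
    }
    return 0;
}
```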
I do not use or recommend security through obscurity.
Such things can always be automatically discovered.
https://github.com/eliemoutran/KnockIt
Any effort spent on setups like this is likely better spent removing the need for having ssh at all by moving to immutable appliance distros be they specialized like homeassistant or general purpose like TalosOS.
Such things can always be automatically discovered.
Wouldn't fail2ban + a reasonably sized (10+ ports using a mix of udp and tcp) knock sequence prevent brute-force attacks like these?
security through obscurity
This is not obscurity. As someone said on HN earlier: good security is good armor, good port knocking is camouflage. They are both important.
Well, it appears that it would be as safe as any other vulnerable server behind port knocking. Although there's still the possibility that the backdoor is doing something we aren't aware of yet. So yeah, it's as "safe" as you consider port knocking to be.
That said, just make sure you aren't using the backdoored versions. I don't see how you can reasonably skip this step.
Considering how kind (naive) our open source project maintainers are, backdoors are probably already everywhere.
They're much harder to add if people use a build system from the last, idk, 20 years.
Didn't Jia Tan add a single '.' in a CMake script to disable Landlock?
That was some sort of theoretical security loosening, it didn't contribute to the backdoor.
I don't think this kind of backdoor is that widespread. But intentional vulnerabilities for sure are.
The Jigar Kumar account should be treated with extreme suspicion. It seems like it was part of a social engineering attack.
Given how all participants in the discussion were very focused on transferring ownership to the attacker I'd treat everyone in the entire e-mail thread with suspicion. I would not be surprised if Dennis Ens, the one who started the thread in the first place, was also a sock puppet of Jia Tan.
The word that springs to mind is “campaigning”. If the persons campaigning are nobodies as far as their contributions are concerned, whatever they are pushing for should be considered noise and put straight into the bin.
I don’t agree with speculating like this, but I also can’t help wondering how many participants in this thread are the same person.
At first I thought it was paranoid to suggest that people on the linked thread might be complicit in the attack but then I read this message:
> You ignore the many patches bit rotting away on this mailing list. Right now you choke your repo. Why wait until 5.4.0 to change maintainer? Why delay what your repo needs?
It genuinely looks like we’re seeing a demonstration of supply chain psyops in retrospect. Amazing how sophisticated and patient this attack was.
Worth noting: it happened in the open, archived for the world to see at any time! The attacker needed to be extra careful not to raise suspicion, both acutely and in the accumulated evidence of history. Of course, it's easy to say "hindsight is 20/20", but we can probably agree that in actual hindsight we easily see a lot of suspicious acts around the issue. An individual incident may be chance, but as a whole things become clear more easily.
So, I think there is a teachable moment here: when something seems just a tiny bit fishy, do a background check, investigate. You may not catch everything, but you probably won't miss everything either.
Now, looking at the level of sophistication and risk of sneaking people and code into open infrastructure, imagine the chances of getting caught infiltrating a large company writing closed source binary blobs for drivers, firmware, ...
There are not that many suspicious acts tbh. Randos complaining about unmaintained repos and unclosed issues is a constant in FOSS world.
Sometimes it's even true, some projects really die and stop actually addressing real issues.
But that's just an inherent downside of the "bazaar" model. I don't see how we can "treat maintainers better" without going full corporate / full "cathedral".
Also the authors of these messages never interacted on that mailing list before or after. They only turned up to put pressure on the original maintainer.
I think the idea that this was a HUMINT operation by a state-sponsored intelligence service is more likely.
The twitter thread here was interesting.
https://x.com/thegrugq/status/1774392858101039419
Raging about these being inconsiderate people, when they were likely fictional personalities that were part of a long con, seems a bit foolish to me.
> I think the idea that this was a HUMINT operation by a state-sponsored intelligence service is more likely.
It's not an either/or proposition. I definitely think it was state sponsored, AND one method used was social engineering a burned out maintainer.
It seems to me that people are very much exaggerating how "professional" this attack was. Yes, it doesn't look like the actions of a single bored teenager but I don't think the government of a country like the USA or China would deliberately permit their employees to get involved with crap like this. Any backdoor they try to insert would look exactly like an innocent bug. So my (uninformed) guess would be that this is done by criminals, something like a ransomware gang branching out a bit. Though North Korea sometimes sponsors activities that are indistinguishable from those of a criminal gang so it could come from there.
I'm just speculating, of course. I don't know anything really.
> I don't think the government of a country like the USA or China would deliberately permit their employees to get involved with crap like this.
Not the USA, but I can easily imagine this is China. Because, as of right now, this seems like the way China does business. The Chinese/the PLA fund a hacking complex of contractors, and when one gets caught, they simply deny involvement. [0],[1]
[0]: https://www.npr.org/2024/02/22/1233178131/leaked-document-tr...
[1]: https://www.washingtonpost.com/world/2024/02/21/china-hackin...
Serious question: what are your expectations for Linux and all these libraries/dependencies 20-40+ years from now, when most of the old-school or current devs are retired? How can anyone guarantee reliable continuity by good actors across all these dependencies?
Steve Jobs said that someone needs to be the keeper and maintainer of the vision. I think that’s the key to the making of anything great over the long run: there needs to be a Benevolent Dictator For Life who has the vision, competence, energy, passion, and caring to consistently iterate the thing well.
The best case scenario for a project that loses its BDFL(s) is to hold on maybe as decently as Apple has under Tim Cook. They haven’t done anything truly innovative since Jobs died in my opinion — they just figured out more ways to apply the combination of a multitouch display, compact computer, and sensors; but they’ve iterated well on everything they produce and have held on to their lead in terms of hardware, user experience, and ecosystem cohesion.
In short: someone very competent has to care a lot about the project for the right reasons, and continue driving it forward. If the old guard is rotating out, someone has to step up.
If no one steps up, then leeches of various types will attach themselves to the project and simultaneously milk it for value and kill it.
44% of active open source projects have only one maintainer. Out of all maintainers, 60% would describe themselves as unpaid hobbyists.
They already are BDFL for themselves or one other person. The problem is a chronic deficiency of maintainers, which is at the root of this attack.
Your comment implies there is a wide pool of people who want to do that, along with the mythical "someone".
Here is a little story:
There was an important job to be done and Everybody was sure that Somebody would do it.
Anybody could have done it, but Nobody did it.
Somebody got angry about that because it was Everybody’s job.
Everybody thought that Anybody could do it, but Nobody realized that Everybody wouldn’t do it.
It ended up that Everybody blamed Somebody when Nobody did what Anybody could have done.
- Charles R. Swindoll
> Your comment implies there is a wide pool of people who want to do that, along with the mythical "someone".
I don’t know where you got that? I didn’t say there are a lot of people who can or would step up. It’s the opposite: competent, motivated people are rare… and why would they step in to maintain someone else’s project?
I was implicitly hinting at this by saying you need a Steve Jobs. We all know how rare such people are.
Funny. I was just saying the same thing to one of my partners just 4 hours ago.
Culpability also must be laid at RedHat's feet for sanctioning the practice of side loading libraries into such a critical service's address space. Their drive to cellularize systems management has overtaken their common sense.
The idea that they could not be bothered to answer the call of the sole maintainer of a library used in such a critical path in his time of need is, well, astonishing.
1) Debian includes this downstream patch, also.
2) A potential explanation for "why now" is that systemd DID prevent these dependencies from loading automatically in a patch one month ago [0], and the patches to lzma enabling the backdoor merged a few days later, followed by (as we know) an immediate and somewhat heavy push to get distros to upgrade driven by sockpuppets. It could be a total coincidence, or it could be that the attacker jumped to pull the trigger before the window of vulnerability started closing on them
[0] https://github.com/systemd/systemd/pull/31550#issuecomment-1...
I think it's a bit cheap to blame systemd here, and systemd does not equate directly to Red Hat either.
I believe they were talking about sideloading a binary tarball instead of building and repackaging.
Choosing a distro is nothing but choosing where you place your trust.
I can understand Debian cutting corners here and there. But RH has little excuse with the money they make. Yet even a superficial analysis shows them to be less trustworthy than the anime avatars maintaining Gentoo or Arch.
Arch was doing the same..
Some random thoughts:
- every Fortune 500 company tracks exactly which FOSS code it includes in its ecosystem (usually via code scanning and fingerprinting; can’t remember the usual provider of such)
- this is essentially the software BOM (SBOM) that Biden signed off on a while back.
- this (made public) would give a real-time map of the dependencies of all organisations, and linking that to things like the above thread (“cry for help”) would be an interesting place for “intervention”: anything from plain old cash to “here are three interns doing two years in gov.uk. They will help for the next 2 years”
(It’s not a great idea, that last one, but we need to start somewhere. No way can government “pick winners”, but also no way can society just sit back and hope. And commercial incentives break this horribly.)
Essentially we are going to find at some point that we treat some developers like lawyers: this stuff is our society’s laws, rules, processes.
> “here are three interns doing two years in gov.uk. They will help for the next 2 years”
Having temp workers come and go into all kinds of various open source projects.. does that help? :)
Agreed it’s a decent way to create a “map” of vulnerable projects. But it’s over-fitting the MO of adversaries - plugging an arbitrary hole.
Why not play into the strengths of OSS instead of trying to replicate friction-heavy trust models from elsewhere?
Like, listen to ourselves. We are admitting here that open source software is not feasible to audit? Like did we just accept that an adversary can basically smuggle exploit payloads in plain sight?
Can we at least try to simplify and reduce the messiness of checked-in generated code, irreproducible builds, magical binary blobs, conditional compilation, “ifuncs”? Wtf? Granted I’m not a domain expert in this area, but I’m horrified by all the complexity for what I understand to be a compression library.
I would much rather play whack-a-mole with obscurity, which as a side effect is great for software quality overall.
>> Software developers are not fungible cogs that you can swap in and out at will.
I am thinking a lot about this. One of the issues is scale and proof. I suspect that I am interested in introducing gated ability to comment / participate in a community
Say, for example, GitHub introduces “gates”. The first might be to add a test to the test suite that generates a hash of the version number and test output. Then adding that number to your profile means GitHub trusts you have at least downloaded the code and run make test. It shows some level of commitment. (I suppose the zero level is logging in to GitHub and commenting on some maintainer's mental health.)
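To make that idea concrete, here is a rough Python sketch of such a hypothetical "gate" token; the hash inputs and the whole mechanism are one reading of the suggestion above, not anything GitHub actually offers:

    import hashlib
    import subprocess

    def commitment_token(repo_dir, version):
        # Run the project's test suite; actually building and running the
        # tests is the "proof of commitment" part.
        result = subprocess.run(["make", "test"], cwd=repo_dir,
                                capture_output=True, text=True)
        digest = hashlib.sha256()
        digest.update(version.encode())
        digest.update(result.stdout.encode())
        digest.update(result.stderr.encode())
        # A short token the user could paste into their profile.
        return digest.hexdigest()[:16]

    print(commitment_token(".", "5.6.1"))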
Are you suggesting to ask a state actor to demonstrate commitment before participating in the discussion? :)
I mean, the idea of commitment -> privilege isn’t bad. But it’s got nothing to offer in the story of xz.
What would that magic number achieve? Having basic coding ability can be checked by looking at the user's repos and contributions. But the primary attacker had pretty good skills, and the other commenters (the attacker's sockpuppets) could also demonstrate some skills.
The multi-entity (sock puppet or not hard to say) social engineering attack that was apparently two years in the making (?), is to me a much more salient story than "users are often mean." Although it's true that the expectation of mean users let the attack fit right in, I guess.
The "users are mean" story is something that we can all do something about, and, honestly, I prefer stories with morals that most of us can actually put into practice.
Ah, well, I guess my thought is that we need to figure out what to do about the "take over from a burned out maintainer and then inject malware" attack.
It's not obvious what to do about it to me either. Which is why I'm concerned to talk about it.
That this attack was run by someone who had been participating in the project for possibly years before striking is not what I would have expected, and it makes it much harder to defend against. Before, I was thinking it was about awareness: "don't just turn over the thing to someone new who just showed up, they might be an attacker." But it's not that easy in this case.
I suppose "try to get users to be less mean, by doing our part by being less mean individually" is arguably one piece of strenghtening defenses to this kind of attack, I guess, ok.
And this is why, for my build system coming out in two days, you need to pay to have me respond to bug reports.
That would stop a nation state or other nefarious actor because? This isn't spam that works in volume
It means I can ignore loud but non-paying users.
I also don't take outside contributions.
This thread is a microcosm of the interactions in Open Source projects. Consumers make demands (some polite, some not-so-polite) of one maintainer (rarely two) that does everything.
Make no mistake. This is the way it works.
It needs to change.
How?
Deny by default anything from a State that's hostile to American interests.
How would that help with the problem of people being rude to unpaid OSS maintainers?
Sadly, this is how it works in many large social systems, left unchecked. The “reasonable consumers” drain the kind contributors until they cannot do this anymore. The curse of kindness strikes in software and other industries alike.
This is all due to the ongoing attempt to convert FOSS into an obligation. The original idea behind FOSS was to code to scratch your itch and then leave it out there for anyone to find, modify and use. Instead we now treat FOSS code like commercial projects with performance goals and expectations.
Why? My guess is that this new culture is promoted to extract free labor from hobby developers. Take for example, Github. It has a bot to close stale issues after a given interval of inactivity. But what if the issue was valid and unresolved? What's the problem in leaving it open for someone interested to take it up? But what GH wants to promote is a culture where open issues are considered as bad performance on the maintainer's side. It indirectly prompts the maintainers to take open issues too seriously.
Why?
Well, also a lot of people get into FOSS for altruistic reasons. They want to pay it forward and make the world a better place.
For these people, applying good operations principles allows them to more efficiently do good -- just like it allows a commercial company to deliver more end-user value for lower cost.
Running an efficient operation in FOSS isn't necessarily bad. Sometimes it takes an obligation viewpoint to get there. It can be unhealthy, but it can also give us crazy levels of neat software, as we see proof of daily.
As a maintainer of a security-oriented open source library, the paranoia of "is this person trying to help or to exploit?" weighs down on me every time I have to read a PR (EDIT: even though the libraries are by no means as widely used), regardless of whether it's from a long-time contributor or someone new. I think accepting a slower pace of development is the only viable solution (as I'm not interested in making these libraries my full-time job), but that comes with the same feelings reflected in the article listed.
If there was a simple way of advertising the need for help to a community of experts that I/we could trust, I'd take that any day.
> As a maintainer of a security-oriented open source library, the paranoia of "is this person trying to help or to exploit?"
That's an excellent mindset when reviewing code, whether security-related or not. But especially for security. How could this be wrong? What are the corner cases? How could anybody break this? What do we need to test? That kind of scrutiny is crucial for keeping the quality of your code base high, no matter who posts the PR.
This is another interesting link about the issue:
https://lcamtuf.substack.com/p/technologist-vs-spy-the-xz-ba...
It mentions something not being reported: the issue is with a Linux-specific patch to OpenSSH. If the patch did not exist, there would be no issue.
This is not an OpenSSH issue but a Linux issue.
> It mentions something not being reported
This has been reported and discussed from the start, even in the initial report. Also it's not a Linux specific patch or issue, rather a patch used by some Linux distros for tighter systemd integration.
Could we somehow search for such takeovers done in the past?
Stop making software open source, start making it closed source and charge for licenses.
That works! Unfortunately it's bad for open source.
> This is the way it works. It needs to change.
Well, it’s on you to set your boundaries. "Feel free to submit a PR" works wonders.
”Progress will not happen until there is new maintainer. … The current maintainer lost interest or doesn’t care to maintain anymore. It is sad to see for a repo like this.”
While this might be a sockpuppet deliberately manipulating the maintainer, the tone of this post is all too common, and the real thing that pisses me off is actually that last sentence.
The use of the words "sad", "shame" and "pity" has been weaponized by a generation of people to dump negative emotions onto overworked/understaffed people in issue trackers.
If you're using those words to express an opinion in an issue tracker, you're probably being a fucking asshole.
This should make us all reconsider any criticisms we've levied towards Linus Torvalds, Richard Stallman, or any other gruff open source project lead
I agree that the maintainer doesn’t really owe anything to anyone. Including the so-called consumers on the issue tracker. Furthermore the consumers have no real power over the maintainer; since they aren’t doing anything for them they in turn have nothing to withhold from the maintainer. And they can’t bother the maintainer unless they decide to turn truly vile and evil by doxing and harassing the maintainer. Thus the complaints of these consumers can be completely tuned out.
It follows then that the maintainer can just log off. Because either you believe that the maintainer has some minimal obligation to the community (like a quarterly update about maintainership status or making a notice in the readme if you decide to abandon the project) or you don’t.
But sometimes in these discussions things get muddied because people claim that
1. The maintainer doesn’t owe anything to anyone whatever
2. But she’s a nice gal, which means she will triage some bugs, fix some bugs, and explain to a few of the issue reporters that they haven’t encountered a bug, they just haven’t installed the program so that PowerShell can find it, and so on.
So what’s the moral angle here? We’ve already established (1). But what ought the maintainer do for herself? That’s also a moral problem. Are you willing to say that the maintainer ought to log off if they are experiencing burnout? (Again: this is about the moral obligations that the maintainer has to herself only.) Or does that trample on her rights?
And if you are unwilling to say that the maintainer ought to log off, what does that make the maintainer? Are you going to claim that they have (a) started an altruistic hobby by themselves (b) got too caught up in it and (c) are now a victim of circumstance/outside forces because they have no moral obligation to log off? Considering how much power the maintainer has (and how the consumers have none), how is the maintainer a victim of circumstance when the whole enterprise was created by them and can be terminated at will by them?
Even if other users (or possibly, sock puppets of the attacker) had not been complaining, the original maintainer probably would have begun to trust the attacker after enough real contributions.
Not an issue of OpenSSH, but some GNU/Linux distros patching and linking OpenSSH against xz.
Saying that it's a microcosm for "open source" is a bit hyperbolic. What would be more accurate is saying that it's a microcosm of "there is a certain set of users to be ignored" when writing open source software. One has to learn to develop a thick skin and guilt free ignoring of those who don't contribute anything to a project such as constructive suggestions, bug reports, or actual code.
In the end you're only pressured as much as you allow yourself to be pressured.
"I don't feel like it, if it's important to you then feel free to fork".
That's really all that's needed. "I don't feel like it" is all the justification you need.
Some guy just made a compression tool, because some people like doing that kind of thing, or because it was useful for him. He didn't ask to be made "critical infrastructure" or to be responsible for the security of sshd or to have some business depend on it, or anything like that. No one even asked him.
This ignores the very fact that peer pressure works and puts the entire blame on the victim. No, people react differently when pressured vs when not pressured. That's the entire reason why peer pressure works.
I didn't blame anyone; I just made some observations on how this type of thing can be avoided, partly based on my own experience doing volunteer work for the last 20 years (as open source maintainer and as scout leader), and being subject to the same pressures at times.
I hate this trend of shouting "victim blaming!" once someone tries to explain things or analyse anything. Not everything needs to be a value judgement. "X happened, and Y could have prevented it" is not a judgement.
Well, when you say "the victim could just not have succumbed to the pressure" I don't see how that doesn't blame the victim. I understand it's not your intention, but peer pressure works exactly because it gets around people's "wait, I don't actually want to do this" defense, and to say "just don't do it, nobody is physically forcing you to" ignores that fact.
If I were in the maintainer's shoes, and was feeling ambivalent about handing over maintenance to a fairly unknown person, this kind of social attack would definitely push me over the edge, exactly as it was planned to.
Anything can be viewed as a judgement statement if you paraphrase or stretch things enough. I don't think your paraphrasings are a fair representation of what I actually said.
But you can insert "I'm not trying to blame anyone, but here are some suggestions to modify cultural norms so these things are less likely to happen in the future" if you want. Or you can just assume good faith and take that as implied unless demonstrated otherwise.
But what's the advice here? "When people are trying to peer pressure you, don't accept?"
More or less, yes. Also see my other post: https://news.ycombinator.com/item?id=39883598
Also: as far as I'm concerned there is no "peer pressure" here because these people aren't "peers". They're just some random people who, as near as I can tell, have done fuck all. There is not even an attempt to help out. Not even the question on how to help out. These people are supposed to be the maintainer's peers? Yeah nah. They're just shouty entitled internet nobodies that have not even attempted to contribute anything constructive or signal any willingness to do so (not even "I have been using the patch in production for half a year without problems", which would actually be a small but useful way to help out).
When it comes to these types of things you need to accept that you can't change every person in the world; you can only change yourself. If you cycle a lot you better learn to anticipate assholes doing asshole things. Is that fair? No. But it beats being run over and getting hospitalized, or worse.
That’s what I find incoherent about it.
1. The consumers are little ants that have no real power over anybody, certainly not to help the project itself
2. On the other hand they are powerful enough to peer pressure the only person who has the power to drive the project forward
Yes? Block, move on. It's the internet, you aren't forced to interact with anyone. If you are, get a better source code hosting platform.
I think their paraphrasing was a fair representation. You want to have it both ways. You said some incendiary victim-blaming thing, but now you're backpedaling with a "no no, you misunderstood".
Giving out advice for concrete steps someone can take to prevent a problem isn't victim blaming. Unless we're all content with people just sitting and whining all the time instead of actually doing anything to help themselves. And let's be real, we're talking about open source contributions, not getting mugged for wearing the wrong clothing here.
Peer pressure happens when someone like a teenager wants or has to be around some other peers (teenagers) but has to follow the whims of the peers in order to continue to be around them or to not be harassed by them.
The peanut gallery of non-contributors are only peers in the sense that they pretend to speak on behalf of some OSS community. And the fact that they are spokespersons is by default suspect. The attacker is a peer in the sense of being a contributor. So is he a peer pressurer? Again we come back to the teenager who has to follow the whims of his peers in order to be included. The maintainer is already inside of his own playground. So the pressure to be part of the “community” is really the incredibly abstract thing that the peanut gallery was referring to: you ought to do so-and-so in order to be whatever I think of in my head as an OSS maintainer.
This can be rejected out of hand if you really believe that maintainers don’t owe anyone anything (because of free labor).
But this gets incoherent if you want to assert both of these things:
1. There is no social contract for OSS maintainers: they can toss their PC out of the window and go on a five-year pilgrimage without telling anyone
2. There is some community which has power over the maintainer to peer pressure them
If you really want to double down on (1), the “cure” is what the OP suggested: say no and walk away.
Many maintainers want to please their users, and be helpful (which is admirable, and more power to them), which means #2 applies. Sure, the maintainer is entitled to say "fuck you, I want to sit on my project and you can fork it if you want", but he was, presumably, trying to be helpful and succumbed to pressure.
I don't think the maintainer is at fault to any degree here. Sure, this could have been avoided if the maintainer refused to be pressured and kept sitting on the project and letting it die, but it's not his fault that he didn't do that, and I wouldn't want that to be the default for maintainers either.
All the pro-social benefits with a side-dish of the nuclear option. That’s coherent I have to admit.
In that case one can limit one’s interactions to other invested parties, i.e. contributors. Granted then you are still interacting with the attacker but you’re spared from the peanut gallery.
In real life volunteering you don’t get random drive-by input from outsiders. The input (and whatever peer pressure) is only from other invested parties.
I’m having a hard time understanding moral arguments. “Fault” and “blame”. Everyone is condemning the peanut gallery for complaining about passing on the maintainer stick to someone else. Yes, including people who say that he could have just “not got peer pressured”. To be clear it’s not about the maintainer having “fault” or the peanut gallery/the attacker being wrong. Both can have “fault” in different ways. Like, clearly the attacker is the one who did something bad. Now there’s only a question of what other people could have done differently.
The maintainer could have done something different but he didn’t and that’s not his fault. It seems that we all agree that he had a live option. You just want to not associate it with “fault”.
In another comment[1] I asked what moral obligation a maintainer has to herself. Only to herself.[2] Focusing on that angle seems more fruitful than talking about “fault” in the abstract since that just leads to back and forths about whether people should protect their wallets better or whether or not people should just stop pickpocketing people.
The goal of this subthread seems to be about how maintainers might protect themselves (for their own sake) from this kind of thing. Laying out the options that are in their hands (and not just how the world around them should become better) seems pertinent to the issue.
[1] https://news.ycombinator.com/item?id=39882721
[2] Like asking about whether someone has a moral obligation to eat healthy. It’s not about other people.
If you think peer pressure is most relevant to teenagers, you really need to sit down and rethink peer pressure.
It is certainly the most obvious and kind of archetypal kind of peer pressure, which doesn’t mean that it’s exclusive to teenagers.
"I don't feel like it, if it's important to you then feel free to fork".
That's really all that's needed.
that's much harder than it sounds.
having someone fork your project can give you the feeling of losing control over the project as potentially all your users might go with the fork. that fear is often strong enough to push yourself to do things that will avoid a fork.
it's a desire for harmony and a fear of conflict
You're right of course, but there are cultural norms and expectations at play here, which I feel need to be modified somewhat.
And that doesn't really change that things really are that simple, kind of. How do you stop smoking? By not lighting up any cigarettes. Of course it's not that simple, but ... it also kind of is.
> there are cultural norms and expectations at play here, which I feel need to be modified somewhat
this is an important observation.
a change of culture is really what is needed here, because it implies that not only maintainers change their behavior, but everyone involved, and the question is not so much what any individual can do for themselves, but how we can help others with that change and spread it
It's impossible for everyone to change their behaviour, because that's too many people. Some people are just assholes and don't care. Or they're idiots and don't understand. Nothing we can do about that. Most people are alright, but with 8 billion people on the planet even a small percentage being an asshole means a lot of assholes.
For the most part, the people who already care (you, me, most others commenting here) are not really the problem.
So ultimately the only thing you can do as a maintainer is set your own boundaries for yourself. And the original message in that thread ("there is no activity, what are your plans?") is a fair question, although the follow-ups are not.
All of this is true for a lot of the internet by the way. The best thing of the internet is that anyone on the planet can talk to anyone else on the planet. When I was a kid I did some ham radio with scouting, and you could talk to people from places like Russia and the US. Wow!
The worst thing of the internet is that anyone on the planet can talk to anyone else on the planet. You're constantly exposed to a seemingly never-ending stream of assholes and idiots, and a single asshole can ruin the day of dozens, hundreds, or even thousands of people every day. Maybe this is just 1% of people, but 1% of 8 billion is 80 million.
i would not say it's impossible, but for sure it's an uphill battle. and it might take a few centuries for people to change, just like it took a few centuries before we abolished slavery and we are still working on racism, gender, different orientations, etc.
being nice to each other is just one of the many things that we as humans need to work on. and we are working on it, and hopefully some day we will be able to achieve it.
I'm reminded of this recent post about the Redis fork to be maintained by Drew Devault: https://andrewkelley.me/post/redis-renamed-to-redict.html
(he's been rude/mean in the past)
xz should be pretty much finished as well; major overhauls like the "ifunc" feature to inject alternate function implementations are not really justified. Beware of busybodies and this whole "the community demands vibrant evolution of the project" thing. xz does its job already!
And being rude/mean is not a plus, it's a minus, but it does seem to correlate strongly with leading a high-quality project. Even if it's not the best way, this person has the guts to say no in unequivocal terms. You might just have to accept the downside of a rude/mean maintainer as a common unwanted side-effect of an effective maintainer. (Linus Torvalds also comes to mind of course.)
It's important to note that rudeness is somewhat up for interpretation. Rejecting a patch because it's full of bugs might be considered rude to some people, but not to me. There are definitely times where Drew rubs me the wrong way, but I know he's good at what he does, and I know he's just as fed up as I am about the nonsense. He doesn't suffer fools, and that can seem rude and mean if, well, you're a fool. (To be clear, he's been totally out of line before, but I know he's been working hard to change that, and I think he's been successful.)
I've made absolutely stupid 'contributions' to open source projects in the past, not out of malice but pure ignorance.
If someone had slapped me down for being an idiot I'd have most definitely found something else to spend my free time on.
Today, I'm not super great at coding but can hunt down segfaults like a truffle pig and submit bug reports (with code) that demonstrates the exact issue if it's something I can't figure out on my own.
This fear doesn’t harmonize with the mantra about how no one owes anyone anything. If the code is OSS then you can’t lose control over the project since you have no inherent control over it to begin with.
That's the issue right there. Why would you care?
Clearly, the maintainer is invested in having a "community". Why? They must expect something positive coming out of it, so they are invested in having that community, which then means they have something to lose if that community moves somewhere else and they are left behind.
That's what enables this social exploit. A takeaway can be not to get so invested in having a community. These are users of your software, and their "utility" is in helping find bugs, perhaps suggest improvements or even provide patches, making the thing you as the original author made better. But that can clearly become a burden.
When it was created, it was a different time. There was a sense of community around open source, much more tightly knit. And the more socially minded you are, the more vulnerable you are to these kinds of attacks.
2007 wasn't that long ago, and these type of maintainership issues aren't new – they were a thing when I was starting out in the early 2000s as well. What changed are the stakes, and also the amount of effort bad actors are willing to spend to mine their cryptoblahblah or whatever.
And sure, I understand why people feel a responsibility. And it's fine to take this responsibility too. I'm just saying: there is no need to.
The entire point of this Free Software/Open Source is to give people the freedom to do whatever you want with some piece of software, without having to be beholden to the original author. That's pretty much the entire point.
Anyone in the world can be a maintainer for xz. By forking it and applying useful patches.
2007 was 17 years ago, and it was another world. This time 17 years ago, the iPhone had not been released. Facebook was still new and hot, the first big operation to be unashamedly built on PHP (!). GitHub didn't exist, opensource happened on mailing lists, Sourceforge, private Subversion repos.
It was definitely another world. I do agree that maintainership issues were already there, but I think they were smaller in number. Between the explosion of projects and the explosion of users, they are now on a different scale.
MySpace, Yahoo! Mail, eBay and probably other large scale operations were all built on PHP long before Facebook and I don’t think any of them were particularly ashamed about it.
PHP was a popular stack at that time? Wikipedia and WordPress existed and were very popular and unashamedly PHP.
What PHP lacked and Ruby had was a Rails-like framework; the team from 37signals singlehandedly shifted the momentum away from PHP with their work on Rails.
> 2007 wasn't that long ago
It was almost 20 years ago, and no, I generally agree with the person you're responding to that OSS has changed significantly in these regards since the early-to-middle 2000s.
I'm not even disagreeing with the rest of your take, just poking at this idea that time hasn't passed and changed things. Some days I look around our industry and feel like it's nowhere near the one I was working in before.
This is incredibly naive. Anyone that thinks that pressure doesn’t work is exactly who I’d personally put top of my list to try to social engineer. Everyone is human. Nobody has infinite strength against persistent pestering. Everyone is capable of finding oneself in a scenario where they feel unsolicited responsibility. All you’re saying here is that you haven’t personally experienced it.
The GP did not state that pressure does not work. They stated how they think one should think about people trying to apply pressure, or about contributions. They named the industry's self-inflicted problems.
Peer pressure worked excellently against me as a teenager who just wanted to fit in and have friends. It works much less well against me now, when I get unsolicited calls and contacts via phone and the Internet.
I didn't say that; I just said you can maintain these projects with different attitudes.
"I feel a huge responsibility to please every user who reports a problem, shortcoming, or asks for a feature, which I need to address ASAP" is one attitude.
"I work on it whenever I feel like it, and if you don't like that then I don't care" is another.
Those are on the extreme end and for most people it's somewhere in-between.
It's very common for people doing volunteer work to lean too much towards the first. They have trouble saying "no" and bite off more than they can chew, even without direct pressure. Learning when to say "no" and setting boundaries for yourself is absolutely a vital skill for any kind of volunteer work – without it, sooner or later you will burn out.
When I was a scout leader burnout due to this was a major cause of attrition. We made it very clear and very explicit there was no pressure for anyone to do anything they didn't want to, and that there was no shame in not wanting to do something just because you didn't feel like it. But it still happened, because people still feel this pressure, even when it doesn't actually exist.
The maintainer did mention mental health issues. It can be hard to say no in the best of times, let alone when your own mind is trying to screw with you.
That's brave and honest on his part. I would never admit that in public personally, and it shows his level of honesty; unfortunately it can make you the victim of even more predators. Just say "no" and move on with your own agenda on your own personal projects. It's none of anyone's business, but one may share at times; just realize that others may try to take advantage. That's just my take.
I'm curious: do you maintain a popular open source project?
The reason I ask is maybe the people who do are (in general) self-selected from the subgroup of people who don't just say "sod off" when someone is rude or inconsiderate.
If that was their stance in life, they would likely not put up with maintaining a popular open source project for very long.
This is harsh but true, but it's not simple. People are different. Some shrug off things naturally, others try to be empathetic almost to a fault. I've learned to be the former, but when I was younger I was much more the latter. I try not to act like it's easy because it took me a while to develop that skill. And it is a skill, not something built into me; I still feel the urge, but I've built up the skill of saying "no".
You don't even have to say, "I don't feel like it." You don't have to say anything.
The tiredness or other problems of the developer seem like an easy narrative, but did anything actually happen that wouldn't in any understaffed open source project? A contributor shows up and does work for 2 years; I feel most projects would have given the person full project developer status by then.
It's not like organizations writing proprietary software are magically immune to sleeper agents either. Social engineering is not a software or tech problem in general. Trust is required to get anything done, and can also be abused to hell and back by a sufficiently motivated actor.
But important software needs to be identified and proportionally more scrutinized by multiple independent parties, that's the lesson. Identification is the hard part. You can't easily determine that half of the world relies on this particular piece of software, or that it enables access to desirable targets.
This is why OSS can be more secure. How much software has the build scripts, the code, all of it, locked away and hidden behind proprietary software? Instead of lots of eyes, just two devs?
Yes, this almost succeeded... but can you imagine how many scenarios where someone such as Andres Freund would have found irregularities, but then.. what? Just had to report it to some webpage's contact page? Without being able to even dig further?
Would he have known what was happening, or would it have just ended up as an oddity, with no source code, and with the binary purposefully obscured, and so on?
Or... even worse, it's reported directly to the 2 guy team, and the guy who put the back door in... takes the bug report?!
From where I sit the sleeper/mole problem exists in companies, and can be far harder to detect.
I see people saying "this almost succeeded". It didn't almost succeed, it succeeded entirely. It made it to production, for a long time (? I'm not sure how long it was in repos).
Success is making it to some machine, and then it's "how long it's successful for". Who knows how many other such vulnerabilities there are, and the openness of the source didn't protect us as quickly as we thought it would.
Did it? At least in Debian, where it was found in the first place, it only existed in sid/unstable and I think perhaps in testing. It never made it into stable which is the only thing you should be running on a production system.
I think in Fedora it was also only in a testing or RC of the next stable version but I'm not sure. Also unsure about others. I have seen several distros putting out advisories (Gentoo, Arch, OpenSuse and more) but I didn't go into the specifics.
I don't know specifics either, but Debian stable has years-old packages, so I'm not sure if it's a good indicator of how widespread this got. If it's in the latest Ubuntu release, I'd say it's pretty successful.
I don't personally know anyone who would run any Debian release that's not stable on production systems. On your desktop? Fine. I do it myself. On a server, with ports open to the public Internet that's a big nope.
There's nothing wrong with Debian stable having years-old packages either. In any case:
https://www.debian.org/releases/
That's just over 1 year. Sure, packages themselves will be older than that since they take some time soaking frozen in testing before they become part of stable but that's generally a good thing.
In any case, if there's pressing need for some newer packages, it's always possible to use apt pinning to pull those on top of a stable base.
I didn't say there's anything wrong, I meant that choosing the OS distribution with the oldest packages isn't an indication for how widespread the distribution of this package was.
I was initially responding to
I never said it didn't make it to production. I was asking if that's the case, and giving one data point.
It apparently didn't make it into Ubuntu LTS either, so there's another one.
If you have information that proves or disproves either point, please present it.
Fortunately, it didn’t make it into an Ubuntu release: https://discourse.ubuntu.com/t/xz-liblzma-security-update/43...
Though it was frighteningly close, 24.04 is scheduled to be released on April 25.
Wow, ok, that's close. I wonder how many have actually made it.
My impression is that it did not get into latest Ubuntu, they were in the process of trying though.
The end goal seems to have been to get it into the LTS releases of RHEL and Ubuntu. Since this was so valuable, it's probably something that would have been kept around for when other methods fail. I doubt it would be used before the really valuable systems are compromised (RHEL and Ubuntu in cloud and governments).
I think there are huge factors that push things both for and against open source here.
Yes, you get more eyes and people like Andres Freund.
However, if this had been a mole in a company, he wouldn't be able to hide behind a possibly anonymous fake persona and (likely) be immune from any consequences/fallout from this attack. It would be harder to gain entry in the first place, he would have needed a real identity. Background checks may not be a big hurdle, but they're at least something more than signing up for a GitHub account. He also wouldn't have had his posse of anonymous sock puppet accounts to add pressure to the original maintainer.
I also think that just being able to ramp up on liblzma and its dependencies to undertake this effort in the first place is a huge head start vs. trying to execute the same attack on a closed source corporate product.
At the same time, there are probably lower hanging fruits to attack if you really do get a mole inside a company, in addition to there being fewer eyes/opportunities for the attacks to be discovered as you pointed out.
I honestly don't know how all of these factors add up. I expect the argument opposite to yours to be raised (again) in the wake of this incident. I'd personally be hesitant to raise this argument (not that it matters here on HN).
Organisations shield moles and infiltrators very effectively. They are rarely alone. It may be harder to get in as an outsider, but once in, they have protections a lone wolf does not enjoy.
You can save a lot of time reading John le Carré novels and take a short summary like this one by Adam Curtis [0].
If you watched the recent Oppenheimer film you'll know the name Klaus Fuchs. But what about Guy Burgess, Kim Philby and Anthony Blunt? If MI5, MI6 and GCHQ are, almost by tradition, stacked to the rafters with defectors and spies, enclaves of enemies within, and enemies within enclaves of enemies... how does anyone expect a commercial company, motivated by money and with a perimeter as weak as the "job market", to do better?
Trust does not have an organisational solution.
[0] https://www.bbc.co.uk/blogs/adamcurtis/entries/3662a707-0af9...
In a corporate proprietary code base this is REALLY easy. Just commit a bug. Happens every day, everywhere. Just that normally these are mistakes. You can easily mask a deliberately inserted exploitable bug as a mistake. Make the code a bit convoluted, leave out a crucial corner case from your test, slip this in in a moment when your code reviewers have time pressure and/or are stressed somehow, and even if you are caught, the plausible deniability is convincing. How many eyes will see it after the fact? None.
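As a made-up illustration of how innocent such a deliberately planted bug can look (this has nothing to do with the actual xz payload), consider a path helper whose test suite only covers the happy path:

    import posixpath

    def safe_join(base, user_path):
        """Join a user-supplied path onto a base directory, rejecting escapes."""
        candidate = posixpath.normpath(posixpath.join(base, user_path))
        # Looks like a traversal check, but "/srv/data-secret/x" also passes
        # startswith("/srv/data"); the missing trailing separator is exactly
        # the kind of corner case a reviewer under time pressure won't spot.
        if not candidate.startswith(base):
            raise ValueError("path escapes base directory")
        return candidate

    # The accompanying test only covers the happy path, so CI stays green.
    assert safe_join("/srv/data", "reports/a.txt") == "/srv/data/reports/a.txt"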
And yet industrial espionage in companies is quite common, by both competition and, if you’re big and important enough, state (yours or enemy). The means are different, though.
Lots of companies hire remote workers sight unseen, and not all require proof of identity, and where they do, that proof can possibly be easily faked by someone willing to break the law.
Larger open source projects (e.g. Debian Developers) also have real-identity verification through a chain of trust.
GitHub doesn't allow more than one free account per person. Companies might not look for employees sharing an IP address as much as GitHub does.
Depending on the software, it might not be that hard to be a customer (or even pretend to be an employee of a known customer, if they aren't validating incoming requests claiming to be from customers enough) and add pressure for features that will provide cover for a backdoor, or even for a product that isn't a commercial success to be handed over to an external developer.
You should check the thread posted yesterday: the LastPass guy who raised a PR for a Go binding for xz, but was otherwise unrelated to this fiasco, already faced a bit of questioning about their motivations from their employer, based on a user reporting them through a contact form.
Moreover, many companies already have information from background checks, and in certain countries, they also have the tax identification number of the employee which can pretty much identify who put in the backdoor.
It’s pretty intellectually dishonest to call this a big win for open source. It’s completely and utterly bonkers that this was noticed. Unthinkably low odds. There are certainly other backdoors out there like this, already in production.
If this happened on a proprietary software project, …I have verified proof of identity documents for everyone on my team. These can of course be faked or fraudulently obtained, but it heightens the barrier to entry and leaves more of a paper trail. That’s not something that we have in this scenario. There’s way more give and take between open and closed source than open source purists make out. They just like the idea of being able to see source code.
True, but organizations tend not to accept online-only identities.
The attacker really did useful work for 2 years, before taking over the project and injecting malware? I wasn't clear on the timeline.
That is pretty hard to defend against. I wonder if they were planning this all along, or if they wound up with a greedy plan later, or what.
Enabled by customers who don’t pay or donate.
Are these customers or additional attackers who have never posted before and will never post again?
Tired maintainers have no way to distinguish one from the other, that's the problem.
Even if we say "no payment, no customer," it won't prevent determined attackers from paying significant amounts of laundered money in order to be treated as customers.
Also, one thing that is easy to come by for this type of work is money. I mean in the cases where someone is injecting backdoors or vulnerabilities. Maybe not for individuals or criminal groups, but once agencies and corporations get involved, the sums are trivial...
On the other hand, accepting significant amounts of money causes overhead (accounting, taxation). Plus it reinforces the psychological obligation and it's not fun anymore. Thus many maintainers avoid it.
Occam's razor says that those commentators were not in on it: that kind of pressure is common on all projects, and there is no need to think that they were part of an attack.
In fact, Occam’s razor says that the malicious code was injected by a compromised account and not a malicious actor who spent years steadily getting into position to attack?
I don’t think that’s a good application of Occam. Does it really seem parsimonious to think that someone who shows up with no prior history or subsequent activity is really just a random open source user who cares deeply about new maintainers for a low-level library they’re otherwise silent about?
But the point of the article, which matches my own observations, is that comments pouring guilt onto maintainers is commonplace. The depth of feeling it invokes in the maintainer is likely orders of magnitude more than the depth of caring on the part of the commentator.
The normal commentator who complains is probably not malicious, probably not aware of the pain they might cause, is probably just not even thinking of that angle.
Yes, I think that’s what made this attack so effective: that kind of abuse is normalized in much of the tech world so it’s very easy to miss that in this targeted case it was coming from accounts with no prior involvement in the community. I like the open feel we’ve had for the last 3 decades but I do think this will likely mean a lot of projects becoming less open, which is warranted but going to suck for people trying to start a career.
I often debate whether I should go into the hacking world: best case I get bug bounties, worst case I get rich and contribute to immoral actions.
I think it's far easier to make $3,000,000 as a hacker than as a worker/entrepreneur.
It's way easier to find flaws/bugs than to do the entire Capitalism thing correctly.
Then I see that half of these major attacks required social engineering... Maybe being a hacker is significantly easier. I only need to fool one person, there are a lot of people, and merit isn't exactly how everyone got to their position.
Anyway, point being: people are amazed by hacking, but they shouldn't be; it's relatively easy if you're a mere 10-year programmer. Most of us pick relatively moral work, so the number of attacks is small. It's also why we really need to treat security on computers as no stronger than your home's front door lock. There are too many attack vectors to be perfectly safe.
+1
Between roughly 1999 and 2013 I was primarily a test engineer for networking switches/routers/telephony products. I found bugs for a living and wrote them up, and in the process I found plenty of security vulnerabilities and wrote them up. Security bugs aren't really that different from other bugs. Yet for some reason we lionize people who find security bugs.
Most security issues are simply quality issues. But by calling them security issues we shift the focus away from the software producer creating shit code to an attacker doing something bad.
The case with xz is a little different, because we have someone who intentionally added bad code and tried to hide it. But for unintentional bugs that rise to security vulnerabilities it's 99% a QA problem.
QA is focused on testing a product or system against how it should work. Security analysis tests against how it shouldn't. The second of those is a much larger search space. Both are important.
The argument around blame shifting is apt. The same case has been made for the usage of the term 'bug' (aka an externality). It's 2024, we don't have moths crawling into relays on our computers. We have implementation faults, invalid designs, unsound architecture, inaccurate documentation, ambiguous requirements, and a myriad of other ways to express how software may be defective.
Using those terms hurts, and may even invoke some level of concern from those outside of the engineering org - this is a great reason to embrace them.
It would be pretty poor QA who only tested happy paths. Any competent quality analyst will be expected to test for failure states, boundary conditions and so on.
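A toy illustration of that difference, using a hypothetical input parser; the first assert is the "how it should work" path, the loop probes "how it shouldn't":

    def parse_port(value):
        """Parse a TCP/UDP port number from untrusted user input."""
        port = int(value)
        if not 1 <= port <= 65535:
            raise ValueError("port out of range: %d" % port)
        return port

    # QA / happy path: how it should work.
    assert parse_port("443") == 443

    # Security / boundary analysis: how it shouldn't work.
    for bad in ["0", "65536", "-1", "22; rm -rf /", ""]:
        try:
            parse_port(bad)
        except ValueError:
            pass
        else:
            raise AssertionError("accepted bad input: %r" % bad)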
Well before that: letting a central lib and project be maintained by a single tired, burned-out project dev (see xkcd 2347).
Here’s the rub: peer pressure counts if done by peers. Users who subscribed to a mailing list only to campaign for a governance change are not peers. Longtime contributors who have earned street cred, can be.
(I guess in the case of xz, Jia Tan has earned the cred before going rogue; the one-off “maintainer needs to be replaced” campaigners, however, haven’t.)
Here's the deliciousness:
Let's take at face value that it was the Chinese, and that China is communist. I mean, "Kumar" and "Tan"? Maybe it wasn't, but it doesn't matter for my purposes:
They took an overworked peon of the capitalist enemy that provides a ...
... do I even need to expound? well it's fun ...
... collectively and idealistically produced common operating system "for the people"
... that is exploited and neglected by the rich and powerful, to the degree that not just computers but society overall operates on this operating system, and the profits from that are hoovered up by the powerful.
The communists attacked the exploited proletariat to get to the enemy. And they are forcing the enemy capitalists to either pay the proletariat properly (they won't) or continue to be vulnerable to the growing communist power in the far east.
That's what this distills so well, within a political/ideological conflict that has now spanned 100 years: capitalism vs communism.
To wit: a properly functioning capitalist system, rewarding market value for produced value, would pass compensation to this poor soul and incentivize others to help him. Barring that, a properly functioning government would recognize the public good here and provide support to the core infrastructure software that enables other private enterprise to produce tax revenue.
THOSE AREN'T HAPPENING, so the Communists can attack this with impunity and in perpetuity.
Here's the thing, folks: we've been living in what they used to call "uninteresting times". Post-WWII Pax Americana (even the Cold War was basically peace) has been cranking for 80 years now.
People, that is coming to an end:
- Russia / China are destabilizing demographically and simultaneously becoming militant and totalitarian
- Global Warming will ramp up the pressure on populations, food shortages, production
- The US will likely retreat to a more regional focus as production is onshored, regionalized (or at least centralized to our hemisphere)
There's going to be more state conflict, the Ukraine war is just the beginning. The world stakes are rising.