It's not clear if the author was hired to do this pentest or is a guerrilla/good Samaritan. If it is indeed the latter, I wonder how they are so brazen about it. Does chattr.ai have a responsible disclosure policy?
In my eyes, people should be free to pentest whatever they want as long as there is no intent to cause harm and any findings are reported. Sadly, many companies will freak out and get the law involved, even if you are a good Samaritan.
Pretty clear to me: "it was searching for exposed Firebase credentials on any of the hundreds of recent AI startups." They were running a script to scan hundreds of startups.
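In broad strokes, a scan like that can be tiny. Here's a minimal sketch of the idea in Python (the target list, regex, and database probe are my own illustrative assumptions, not the author's actual script):

    import re
    import requests  # third-party: pip install requests

    # Hypothetical target list; the post implies hundreds of startup domains.
    TARGETS = ["example-ai-startup.com"]

    # Firebase web configs expose the project ID under predictable keys.
    # This pattern is an illustrative approximation, not the author's scanner.
    PROJECT_RE = re.compile(r'projectId["\']?\s*[:=]\s*["\']([\w-]+)["\']')

    for domain in TARGETS:
        try:
            # Checking only the landing page for brevity; real configs often
            # live in a bundled JS file instead.
            html = requests.get(f"https://{domain}", timeout=10).text
        except requests.RequestException:
            continue
        for project in set(PROJECT_RE.findall(html)):
            # A world-readable Realtime Database answers unauthenticated reads
            # at its REST endpoint; shallow=true keeps the response small.
            url = f"https://{project}-default-rtdb.firebaseio.com/.json?shallow=true"
            if requests.get(url, timeout=10).status_code == 200:
                print(f"{domain}: project '{project}' has a publicly readable database")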
Yeah, but that also ends with that company being shamed a lot of the time
What is wrong with shaming when it's warranted?
It’s an ineffective tool if your goal is change.
Shame is absolutely a valuable tool for change. Without it society would not function since many of our 'rules' are self-enforced.
Nope, shame is ineffective as a tool for change. More often, people shut down or ignore you when you attempt to shame them than actually make the change you want. Besides, it's frequently just about vengeance anyway. Shame is really hatred of the other, for the most part.
As a tool for oppression however, yes it's quite effective.
There are different types of shame. Shame related to a decision situation (endogenous) and shame not related to a decision situation (exogenous). In the endogenous case the shame is said to be a 'pro-social' emotion.
This is backed by studies.
"Using three different emotion inductions and two different dependent measures, we repeatedly found that endogenous shame motivates prosocial behavior. After imagining shame with a scenario, proself participants acted more prosocially toward the audience in a social dilemma game (Experiment 1). This finding was replicated when participants recalled a shame event (Experiment 2). Moreover, when experiencing shame after a failure on performance tasks, proself participants also acted prosocially toward the audience in the lab (Experiment 3). Finally, Experiment 4 showed that this effect could be generalized beyond social dilemmas to helping tendencies in everyday situations. Therefore, it seems safe to conclude that shame can be seen as a moral emotion motivating prosocial behavior." [1]
You can also contrast 'humiliation' shame with 'moral shame', with moral shame being prosocial. This is also backed by studies.
"Our data show that the common conception of shame as a universally maladaptive emotion does not capture fully the diversity of motivations with which it is connected. Shame that arises from a tarnished social image is indeed associated with avoidance, anger, cover-up, and victim blame, and is likely to have negative effects on intergroup relations. However, shame that arises in response to violations of the ingroup’s valued moral essence is strongly associated with a positive pattern of responses and is likely to have positive effects on intergroup relations."[2]
[1] de Hooge, I. E., Breugelmans, S. M., & Zeelenberg, M. (2008). Not so ugly after all: When shame acts as a commitment device. Journal of Personality and Social Psychology, 95(4), 933–943.
[2] Allpress, J. A., Brown, R., Giner-Sorolla, R., Deonna, J. A., & Teroni, F. (2014). Two faces of group-based shame: Moral shame and image shame differentially predict positive and negative orientations to ingroup wrongdoing. Personality and Social Psychology Bulletin, 40(10), 1270–1284.
Would you care to summarize what "related to a decision situation" means for those of us who don't have access to those articles?
Just a guess, but I imagine it's the difference between "I'm ashamed I can't make enough money to save anything" vs. "I'm ashamed I blew all my savings on crypto". One is shame about your situation (which is likely outside your desires and control anyway), the other is shame about your decision (which you likely had better control over).
There’s a reason your citations are nearly a decade old at best; the science has changed.
A 2021 meta-analysis showed that “shame correlates negatively with self-esteem” with a large effect size. [0] So unless the goal of your shame is to actively harm the people involved, then no, shame is not an effective tool for behavior change, given the damage it causes.
You may be thinking of “guilt” rather than shame:
[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8768475/
[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3328863/
The comment above lacks essential nuance and is overly confident.
The comment above lacks contributory value and is also (ironically) overly confident.
Shame isn't always for oppression, although it certainly can be - it's also a pretty useful tool to impose reasonable rules that allow you to live peacefully among your neighbors.
That's not shame, that's guilt. Shame is existential, guilt is situational. The cost of shame is too high for whatever value it may bring.
Shame as a tool of change does not work on the person being shamed in the moment. It works on that person in the future, hopefully, because they will be afraid of being shamed again, and it works on changing the behavior of other people, because they don't want to get shamed either.
Thus as a tool of oppression, as you pointed out, it works great. But it also works as a tool for enforcing otherwise non-enforced social rules - until, of course, you meet someone shameless, or someone who at least feels they can effectively argue against the shaming.
Shame can't fight lawyers and handcuffs.
How so?
Because everyone makes mistakes. If you antagonize someone, they are less likely to care about you and more likely to feel obligated to protect their own interests.
Using plain text passwords goes well beyond a simple “mistake” in my book. It is negligent.
This is absolutely true at the scope of personal relationships. Not at all when it comes to companies, which have a different set of incentives.
Security is at a point where shame is required. You deserve to feel shame if you have an unjustifiable security posture like plain text passwords. The time for politely asking directors to do their job has passed. This is even the government's take at this point. Do it right or stop doing it at all.
Is it? Oftentimes hacks like this drive people out of business.
Like what Apple tells developers about App Store rejections: running to the press never helps.
Except, of course, in reality we know that it ABSOLUTELY DOES. In fact, it has often been the ONLY thing that has helped.
With humans. With companies it's pretty effective - especially if the post hits the front page.
Ask Troy Hunt: https://www.troyhunt.com/the-effectiveness-of-publicly-shami...
Says some pests
---
Shaming for businesses and politicians should be encouraged, not just warranted.
Product recalls are a form of corporate shaming, but public discourse about companies or politicians should be encouraged, and shaming them should always be warranted.
The issue is that it is often impossible to distinguish between a white hat and a black hat hacking your live systems. It can trigger expensive incident response and be disruptive to the business. Ethically, I think it crosses a line when you are wasting resources like this, live-hacking systems. There is usually a pretty clear and obvious point where you can stop, not trigger IR, and notify the companies. Not saying that was the case here, but I have been doing cybersecurity assessment work for 17+ years. Even when you have permission, sometimes the juice isn't worth the squeeze to keep going, as you have often already proven the thing you needed to or found the critical defect. There is a balance to white hat activities and using good sense to not waste resources.
The potential downside of stopping once you find a critical defect is that the company may not take it seriously unless you go just a bit further and show what you can do with the defect. In this case, showing that it gives you access to the admin dashboard.
Those who are tasked - and are being paid(!) - to "[do] a cybersecurity assessment" will typically be given a brief.
For those who aren't tasked - or being paid(!) - to do this stuff, things are much less clear. There's no defined target, no defined finish line, no flag you have been requested to capture.
(I don't work in cybersecurity now, but <cough> I did get root on the school network way back when, and man, that took some explaining..)
I agree with everything you wrote except this sentence. There is no ethical obligation not to waste a company's time.
Plain text passwords, seriously. At that point, I'm not sure what similarity remains with any other engineering profession. The plain text passwords are beyond any rhyme or reason... and then they're returned to the end-user client. If anything, I'd consider it malicious negligence - in the EU the leak would be a GDPR issue as well.
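For contrast, doing it properly is a handful of lines in any stack. A minimal sketch with the bcrypt library (illustrative only, obviously not their code):

    import bcrypt  # third-party: pip install bcrypt

    def store_password(plaintext: str) -> bytes:
        # gensalt() bakes a per-password salt and work factor into the hash,
        # so nothing reversible ever reaches the database.
        return bcrypt.hashpw(plaintext.encode(), bcrypt.gensalt())

    def verify_password(plaintext: str, stored_hash: bytes) -> bool:
        # Re-hashes the candidate and compares; the plaintext is never stored
        # and certainly never returned to a client.
        return bcrypt.checkpw(plaintext.encode(), stored_hash)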
Don't worry, it was only a couple passwords for their admin accounts.
Do you feel the same about physical security? It's fine for people to walk around your building, peek in the windows, maybe pick the lock on the door, maybe even take a little walk inside, as long as they don't steal anything?
Weird, I don't feel nearly as touchy about some ones and zeros on a computer as I do my physical body's safety, without which I would not exist.
OK, make the comparison more direct, then. Say you have a filing cabinet with all of your important and/or embarrassing documents in it. Are you OK with houseguests giving the handle a little wiggle when they come over, to check if it's locked? What about the neighborhood kids?
A closer analogy would be your friendly neighbour warning you that you left your garage door open. And yes I would appreciate him telling me.
Still missing something - the garage would have to be on your private property, not visible from public property, and the only way he could check for you is if he entered your property and tried to get into your garage.
On the contrary, I would say that this is a garage you rent in a public space. The internet is open and I can make requests to any server. If you don't want your system to answer me, make sure it does not. If I am in front of an ATM on a public street, it doesn't give me money without authorization. Make sure your server does the same.
Streets are generally open. My house is on a public street - that doesn't entitle anyone to attempt to operate my garage door, let alone exploit a security vulnerability in its software to gain access. That's just trespassing.
See my reply above.
What if he says that he has discovered that if he stands on one foot in the street in front of your house, holds anyone's garage door opener above his head, and clicks it 25 times at precisely 9:01am while shining a laser pointer at the top of the door, your garage door will open.
All in all, you will still be thankful he found out and warned you about it before someone malicious does.
Would I be upset at him? No. Would I want to have been told? Yes. Would I think he's a little weird? Yes. Would I want him to keep doing weird shit and letting me know if he finds any other similar issues? Yes.
The closer analogy would be your friendly neighbour warning you that he determined your garage door code was easily guessable after he spent 45 minutes entering different codes.
If I left my filing cabinet on the pavement outside my house, I ought to expect this to happen, and I would thank a good Samaritan for telling me if I left it open.
But you would leave it on the pavement, right? A little honeypot for nosey punks.
This analogy is more akin to exposing your database to the public internet with no credentials or weak credentials. Thinking about it just like the company in the blog post did... Oh, and the filing cabinet is out on the street corner, as the other commenter mentioned.
As someone else mentioned this would be more akin to a security officer of some sort waking me up and letting me know I left my front door open. I'd sure as hell be shaken but they were doing their job and I'd be thankful for that.
Communes exist. The internet is supposed to be a giant commune of researchers watching each others backs.
Would you drive over a group of people with a bus? Would you do it in GTA?
There is a big difference between the digital world and the physical one. Many actions, e.g. stealing, are very different in these two worlds and have very different implications.
If I owned a bunch of vending machines, and someone came to me and said "Hey, I found out that if you put a credit card in the dollar bill slot, it gives out free soda and empties all its coins through the return slot," I would a.) be pleased to have been informed and b.) not be upset that they did this.
If a neighbor came to me and said, "Hey, your mailbox that's located at the end of your long dirt driveway is protected by a wafer lock that can be opened by simply slapping the side of the mailbox in a funny way," I would maybe wonder why they were slapping my mailbox but I would be grateful that they told me and I would want them to continue doing whatever weird shit they were doing (so long as it wasn't causing damage).
When you put property in a public (or practically public) space, there's an expectation that it will not be treated as though it is on private property. There's a big difference between someone jiggling the door to your home (where you physically reside) and jiggling the lock on a mall gumball machine or the handle on a commercial fire exit.
The web is insecure enough as it is, I just want to do my part to make it that little bit safer :)
I salute you for it. Take caution though.
The bad guys don't play by the rules, so the rules only hinder the good guys from helping. I think Internet security would be in a better position if we had legislation to protect good Samaritan pentesters. Even more so if they were appropriately rewarded.
Why, you’d never catch a black hat hacker again. The authorities would just be reeling in one Good Samaritan after another!
There is a big difference between discovering a vulnerability that allows you to forge tokens and immediately reporting it versus dumping terabytes of data on the darknet for sale.
Unfortunately, door 1 is maybe a $200 bounty and weeks or months of back and forth (if the corp doesn't have a clear bounty program), whereas door 2 has infinite upside. Honestly, it might make sense for a gov group to run a standardized bounty program for exploits with notable financial/privacy impact.
The solution is to have fines in place for security failures and award them to the discoverers.
This is an awesome idea. The next time a glibc CVE comes out, every company in the world pays a fine, whether they are impacted or not! Hey - you could even file 1000s of frivolous CVEs (which is already common) that you know would affect your competition! (Which is how that would pan out.)
What a wonderful idea. I'm sure our noble politicians will ignore their donors this time and craft legislation that puts large companies under constant threat of more fines. This could never be weaponized against small businesses that pose competition to the bigger fish.
Giving corps even more excuse not to run proper bug bounties,
or care even less about shipping secure code?
Pass.
I don't know. I think you could perhaps align incentives such that any bounty claimed via the government program is competitive and public, and companies are ranked by the number and severity of bounties. Then companies would have an incentive to run their own bounty program, where they'd have a chance of controlling the narrative a bit.
How do you propose such a law would work?
From one Paul to another, best of luck! For the goal of improving overall web security, widespread shame doesn't work. My hunch is that we need to be more prideful about having verifiably robust security practices. Kind of like getting corporations to realize that the data is more valuable if you can prove that nobody can breach it.
Thank you, the kindness goes a long way!
Does this bug work across all applications that use Firebase? Or just those that didn't push the update with security?
Everybody has that goal until they get a knock on their door at 6am: https://github.com/disclose/research-threats
Either way, it is a fascinating write-up. Hopefully it will be a cautionary tale for other businesses and companies out there, and will inspire them to lock down this credentialing issue. I've noticed a similar blasé attitude when implementing SSO; the devil is in the details, as they say.
Sometimes these events provoke regulators to take a closer look at the company.
https://www.ftc.gov/news-events/news/press-releases/2023/11/...
Lack of proper regulations, engineering standards, and tangible fines means that the only democracy that exists is the people themselves taking action. The corps being hacked have plenty of malicious intent; perhaps focus on that.