
I pwned half of America's fast food chains simultaneously

cedws
67 replies
15h56m

It's not clear if the author was hired to do this pentest or is a guerrilla/good Samaritan. If it is indeed the latter, I wonder how they can be so brazen about it. Does chattr.ai have a responsible disclosure policy?

In my eyes, people should be free to pentest whatever they like as long as there is no intent to cause harm and any findings are reported. Sadly, many companies will freak out and get the law involved, even if you are a good Samaritan.

KomoD
29 replies
15h52m

It's not clear if the author was hired to do this pentest or is a guerrilla/good Samaritan

Pretty clear to me: "it was searching for exposed Firebase credentials on any of the hundreds of recent AI startups." They were running a script to scan hundreds of startups.

Sadly, many companies will freak out and get the law involved, even if you are a good Samaritan.

Yeah, but that also ends with that company being shamed a lot of the time

pests
22 replies
15h31m

What is wrong with shaming when it's warranted?

redcobra762
20 replies
15h15m

It’s an ineffective tool if your goal is change.

Eisenstein
11 replies
10h43m

Shame is absolutely a valuable tool for change. Without it society would not function since many of our 'rules' are self-enforced.

redcobra762
9 replies
10h39m

Nope, shame is ineffective as a tool for change. More often people shut down or ignore you if you attempt to shame them than actually make the change you want. Besides, it's frequently just about vengeance anyway. Shame is really hatred of the other, for the most part.

As a tool for oppression however, yes it's quite effective.

Eisenstein
3 replies
9h47m

There are different types of shame: shame related to a decision situation (endogenous) and shame not related to a decision situation (exogenous). In the endogenous case, the shame is said to be a 'pro-social' emotion.

This is backed by studies.

"Using three different emotion inductions and two different dependent measures, we repeatedly found that endogenous shame motivates prosocial behavior. After imagining shame with a scenario, proself participants acted more prosocially toward the audience in a social dilemma game (Experiment 1). This finding was replicated when participants recalled a shame event (Experiment 2). Moreover, when experiencing shame after a failure on performance tasks, proself participants also acted prosocially toward the audience in the lab (Experiment 3). Finally, Experiment 4 showed that this effect could be generalized beyond social dilemmas to helping tendencies in everyday situations. Therefore, it seems safe to conclude that shame can be seen as a moral emotion motivating prosocial behavior." [1]

You can also contrast 'humiliation' shame with 'moral shame', with moral shame being prosocial. This is also backed by studies.

"Our data show that the common conception of shame as a universally maladaptive emotion does not capture fully the diversity of motivations with which it is connected. Shame that arises from a tarnished social image is indeed associated with avoidance, anger, cover-up, and victim blame, and is likely to have negative effects on intergroup relations. However, shame that arises in response to violations of the ingroup’s valued moral essence is strongly associated with a positive pattern of responses and is likely to have positive effects on intergroup relations."[2]

[1] de Hooge, I. E., Breugelmans, S. M., & Zeelenberg, M. (2008). Not so ugly after all: When shame acts as a commitment device. Journal of Personality and Social Psychology, 95(4), 933–943.

[2] Allpress, J. A., Brown, R., Giner-Sorolla, R., Deonna, J. A., & Teroni, F. (2014). Two Faces of Group-Based Shame: Moral Shame and Image Shame Differentially Predict Positive and Negative Orientations to Ingroup Wrongdoing. Personality and Social Psychology Bulletin, 40(10), 1270–1284.

xpe
1 replies
2h24m

Would you care to summarize what "related to a decision situation" means for those of us who don't have access to those articles?

dataflow
0 replies
1h0m

Just a guess, but I imagine it's the difference between "I'm ashamed I can't make enough money to save anything" vs. "I'm ashamed I blew all my savings on crypto". One is shame about your situation (which is likely outside your control), the other is shame about your decision (which you likely had better control over).

redcobra762
0 replies
1h5m

There’s a reason your citations are nearly a decade old at best; the science has changed.

A 2021 meta-analysis showed that "shame correlates negatively with self-esteem" with a large effect size. [0] So unless the goal of your shame is to actively harm the people involved, then no, shame is not an effective tool for behavior change, given the damage it causes.

You may be thinking of “guilt” rather than shame:

In sum, shame and guilt refer to related but distinct negative “self-conscious” emotions. Although both are unpleasant, shame is the more painful self-focused emotion linked to hiding or escaping. Guilt, in contrast, focuses on the behavior and is linked to making amends. [1]

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8768475/

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3328863/

xpe
1 replies
2h27m

The comment above lacks essential nuance and is overly confident.

redcobra762
0 replies
1h17m

The comment above lacks contributory value and is also (ironically) overly confident.

boredtofears
1 replies
44m

Shame isn't always for oppression, although it certainly can be - it's also a pretty useful tool to impose reasonable rules that allow you to live peacefully among your neighbors.

redcobra762
0 replies
16m

That's not shame, that's guilt. Shame is existential, guilt is situational. The cost of shame is too high for whatever value it may bring.

bryanrasmussen
0 replies
9h19m

More often people shut down or ignore you if you attempt to shame them than actually make the change you want.

shame as a tool of change does not work on the person being shamed at the time, it works on that person for the future hopefully as they will be afraid to be shamed again and it works on changing the behavior of other peoples because they don't want to get shamed either.

Thus as a tool of oppression, as you pointed out, it works great. But it also works as a tool for enforcing otherwise non-enforced social rules - until, of course, you meet someone who is shameless, or who at least feels they can effectively argue against the shaming.

direwolf20
0 replies
9h50m

Shame can't fight lawyers and handcuffs.

pests
3 replies
12h2m

How so?

pastage
2 replies
11h43m

Because everyone makes mistakes; if you antagonize someone, they are less likely to care about you and will feel more obligated to protect their own.

janalsncm
0 replies
10h56m

Using plain text passwords goes well beyond a simple “mistake” in my book. It is negligent.

bruh2
0 replies
2h16m

This is absolutely true at the scope of personal relationships. Not at all when it comes to companies, which have a different set of incentives.

zelon88
0 replies
1h39m

Security is at a point where shame is required. You deserve to feel shame if you have an unjustifiable security posture like plain text passwords. The time for politely asking directors to do their job has passed. This is even the government's take at this point. Do it right or stop doing it at all.

sleepybrett
0 replies
58m

Is it? Oftentimes, hacks like this drive people out of business.

boxed
0 replies
3h48m

Like what Apple says about App Store rejections:

Running to the press never helps.

Except of course, in reality, we know that it ABSOLUTELY DOES. In fact, it has often been the ONLY thing that has helped.

BeetleB
0 replies
2h20m

With humans. With companies it's pretty effective - especially if the post hits front page.

Ask Troy Hunt: https://www.troyhunt.com/the-effectiveness-of-publicly-shami...

samstave
0 replies
58m

What is wrong with shaming when it's warranted?

Says some pests

---

Shaming for businesses and politicians should be encouraged, not just warranted.

Product Recalls are a form of corporate shaming, but public discourse about companies or politicians should be encouraged, and shaming them should always be warranted.

bitexploder
3 replies
1h57m

The issue is that it is often impossible to distinguish a white hat from a black hat hacking your live systems. It can trigger expensive incident response and be disruptive to the business. Ethically, I think it crosses a line when you are wasting resources like this, live hacking systems. There is usually a pretty clear and obvious point where you can stop, not trigger IR, and notify the companies. Not saying that was the case here, but I have been doing cybersecurity assessment work for 17+ years. Even when you have permission, sometimes the juice isn't worth the squeeze to keep going, as you often have proven the thing you needed to or found the critical defect. There is a balance to white hat activities and using good sense to not waste resources.

troupe
0 replies
1h48m

The potential downside of stopping once you find a critical defect is that the company may not take it seriously unless you go just a bit further and show what you can do with the defect. In this case, showing that it gives you access to the admin dashboard.

logifail
0 replies
1h6m

There is usually a pretty clear and obvious point where you can stop [..] sometimes the juice isn't worth the squeeze to keep going as you often have proven the thing you needed to or found the critical defect

Those who are tasked - and are being paid(!) - to "[do] a cybersecurity assessment" will typically be given a brief.

For those who aren't tasked - or being paid(!) - to do this stuff, things are much less clear. There's no defined target, no defined finish line, no flag you have been requested to capture.

(I don't work in cybersecurity now, but <cough> I did get root on the school network way back when, and man, that took some explaining..)

anticorporate
0 replies
1h1m

Ethically, I think it crosses a line when you are wasting resources like this, live hacking systems.

I agree with everything you wrote except this sentence. There is no ethical obligation not to waste a company's time.

xxs
1 replies
11h30m

Plain text passwords, seriously. At that point, I'm not sure there's any similarity left with any other engineering profession. The plain text passwords are beyond any rhyme or reason... and then they're returned to the end-user client. If anything, I'd consider it malicious negligence - in the EU the leak would be a GDPR issue as well.

MrBruh
0 replies
11h9m

Don't worry, it was only a couple passwords for their admin accounts.

jjeaff
17 replies
13h23m

Do you feel the same about physical security? It's fine for people to walk around your building, peek in the windows, maybe pick the lock on the door, maybe even take a little walk inside, as long as they don't steal anything?

afterburner
13 replies
13h9m

Weird, I don't feel nearly as touchy about some ones and zeros on a computer as I do about my physical body's safety, without which I would not exist.

idiotsecant
12 replies
12h41m

OK, make the comparison more direct, then. Say you have a filing cabinet with all of your important and/or embarrassing documents in it. Are you OK with houseguests giving the handle a little wiggle when they come over to check if it's locked? What about the neighborhood kids?

prmoustache
8 replies
12h9m

A closer analogy would be your friendly neighbour warning you that you left your garage door open. And yes I would appreciate him telling me.

oh_sigh
3 replies
1h51m

Still missing something - the garage would have to be on your private property, not visible from public property, and the only way he could check for you is if he entered your property and tried to get into your garage.

albuic
1 replies
1h7m

On the contrary, I would say that this is a garage you rent in a public space. The internet is open, and I can make requests to any server. If you don't want your system to answer me, make sure it does not. If I am in front of an ATM on a public street, it doesn't give me money without authorization. Make sure your server does the same.

freejazz
0 replies
9m

Streets are generally open. My house is on a public street - that doesn't entitle anyone to attempt to operate my garage door, let alone exploit a security vulnerability in its software to gain access. That's just trespassing.

prmoustache
0 replies
1h12m

See my reply above.

troupe
2 replies
1h44m

What if he says that he has discovered that if he stands on one foot in the street in front of your house, holds anyone's garage door opener above his head, and clicks it 25 times at precisely 9:01am while shining a laser pointer at the top of the door, your garage door will open.

prmoustache
0 replies
1h13m

All in all, you will still be thankful he found out and warned you about it before someone malicious did.

bastawhiz
0 replies
58m

Would I be upset at him? No. Would I want to have been told? Yes. Would I think he's a little weird? Yes. Would I want him to keep doing weird shit and letting me know if he finds any other similar issues? Yes.

freejazz
0 replies
23m

The closer analogy would be your friendly neighbour warning you that he determined your garage door code was easily guessable after he spent 45 minutes entering different codes.

wuiheerfoj
1 replies
12h31m

If I left my filing cabinet on the pavement outside my house, I ought to expect it to happen, and would thank a good samaritan telling me if I left it open

pineaux
0 replies
5h7m

But you would leave it on the pavement, right? A little honeypot for nosy punks.

jpc0
0 replies
9h36m

This analogy is more akin to exposing your database to the public internet with no credentials or weak credentials, thinking about it just like the company in the blog post did... Oh, and the filing cabinet is out on the street corner, as the other commenter mentioned.

As someone else mentioned this would be more akin to a security officer of some sort waking me up and letting me know I left my front door open. I'd sure as hell be shaken but they were doing their job and I'd be thankful for that.

z3phyr
0 replies
56m

Communes exist. The internet is supposed to be a giant commune of researchers watching each others backs.

yard2010
0 replies
8h6m

Would you drive over a group of people with a bus? Would you do it in GTA?

There is a big difference between the digital world and the physical one. Many actions, e.g. stealing, are very different in these two worlds and have very different implications.

bastawhiz
0 replies
50m

If I owned a bunch of vending machines, and someone came to me and said "Hey, I found out that if you put a credit card in the dollar bill slot, it gives out free soda and empties all its coins through the return slot," I would a.) be pleased to have been informed and b.) not be upset that they did this.

If a neighbor came to me and said, "Hey, your mailbox that's located at the end of your long dirt driveway is protected by a wafer lock that can be opened by simply slapping the side of the mailbox in a funny way," I would maybe wonder why they were slapping my mailbox but I would be grateful that they told me and I would want them to continue doing whatever weird shit they were doing (so long as it wasn't causing damage).

When you put property in a public (or practically public) space, there's an expectation that it will not be treated as though it is on private property. There's a big difference between someone jiggling the door to your home (where you physically reside) and jiggling the lock on a mall gumball machine or the handle on a commercial fire exit.

MrBruh
16 replies
15h50m

Good Samaritan

The web is insecure enough as it is, I just want to do my part to make it that little bit safer :)

cedws
10 replies
15h43m

I salute you for it. Take caution though.

The bad guys don't play by the rules, so the rules only hinder the good guys from helping. I think Internet security would be in a better position if we had legislation to protect good-Samaritan pentesters. Even more so if they were appropriately rewarded.

nkrisc
7 replies
15h38m

Why, you’d never catch a black hat hacker again. The authorities would just be reeling in one Good Samaritan after another!

cedws
6 replies
15h31m

There is a big difference between discovering a vulnerability that allows you to forge tokens and immediately reporting it versus dumping terabytes of data on the darknet for sale.

eyegor
5 replies
12h50m

Unfortunately, door 1 is maybe a $200 bounty and weeks or months of back-and-forth (if the corp doesn't have a clear bounty program), whereas door 2 has infinite upside. Honestly, it might make sense for a government group to run a standardized bounty program for exploits with notable financial/privacy impact.

Eisenstein
2 replies
10h41m

The solution is to have fines in place for security lapses and to award them to the discoverers.

jacobsenscott
0 replies
1h23m

This is an awesome idea. The next time a glibc CVE comes out, every company in the world pays a fine, whether they are impacted or not! Hey - you could even file 1000s of frivolous CVEs (which is already common) that you know would affect your competition! (which is how that would pan out)

Geisterde
0 replies
8h52m

What a wonderful idea. I'm sure our noble politicians will ignore their donors this time and craft legislation that puts large companies under constant threat of more fines. This could never be weaponized against small businesses that pose competition to the bigger fish.

DANmode
1 replies
11h43m

Giving corps even more excuse not to run proper bug bounties,

or care even less about shipping secure code?

Pass.

zaphar
0 replies
3h41m

I don't know. I think you could perhaps align incentives such that any bounty claimed via the government program is competitive, public, and companies are ranked by the number and severity of bounties. Then the company would have an incentive to run a bounty program where they had a chance of controlling the narrative a bit.

freejazz
1 replies
26m

How do you propose such a law would work?

cedws
0 replies
2m

  1. White hat submits a "Notice of Vulnerability Testing" document to target company (copy also sent to government body) including their information, what systems will be tested, and in what time window
  2. Company is required to acknowledge the notice within X hours and grant permission or respond with a reason that the test cannot take place
  3. White hat performs testing according to the plan
  4. White hat discloses any findings to the company (keeping government body in the loop)
  5. Company patches systems and may reward white hat at their discretion
  6. Government body determines if fines should be applied and may also reward white hat at their discretion
Something like that.

pharrington
1 replies
12h25m

From one Paul to another, best of luck! For the goal of improving overall web security, widespread shame doesn't work. My hunch is that we need to be more prideful about having verifiably robust security practices. Kind of like getting corporations to realize that the data is more valuable if you can prove that nobody can breach it.

MrBruh
0 replies
11h53m

Thank you, the kindness goes a long way!

nigamanth
0 replies
15h3m

Does this bug work across all applications that use Firebase? Or just those that didn't push the update with security?

mmsc
0 replies
15h34m

Everybody has that goal until they get a knock on their door at 6am: https://github.com/disclose/research-threats

mise_en_place
0 replies
27m

Either way, it is a fascinating write-up. Hopefully it will be a cautionary tale for other businesses and companies out there, and will inspire them to lock down this credentialing issue. I've noticed a similar blasé attitude when implementing SSO; the devil is in the details, as they say.

ericalexander0
0 replies
15h40m

Sometimes these events provoke regulators to take a closer look at the company.

https://www.ftc.gov/news-events/news/press-releases/2023/11/...

devwastaken
0 replies
40m

Lack of proper regulations, engineering standards, and tangible fines means that the only democracy that exists is the people themselves taking action. The corps being hacked have plenty of malicious intent, perhaps focus on that.

quickthrower2
47 replies
16h10m

Firebase is a shitshow. I say this as someone who really tried to like it and sadly built a project for a client using it.

Other than this security vuln, the issues vs. just using postgres are:

* It is more work! Despite Firebase being a backend as a service, it is much less code to just write a simple API backend for your thing, both in time to do it and time to learn how to do it. Think of Firebase as being on the abstraction level of Sinatra or Express, and you may as well just use those. Things like Firebase and Parse etc. are more complicated. For the same reason, it is more complicated to walk to work with just your arms and no legs (even though there are fewer limbs to deal with, and no backend!).

* Relational is king. Not being able to do joins really sucks. Yes, you need to make async calls in a loop. NoSQL is premature optimisation.

* Lots of Googlization. This means lots of weird, hard-to-discover clickops configuration steps to get anything working. Probably why this security flaw existed(?).

* The emulator is flaky, so for local dev you need another cloud DB, and yes, all that Googlized, RSI-inducing clickops setup.

* I reckon it is slower than Postgres at the scale of a starting project. Traditional architectures are blazing fast on modern hardware and internet. Like playing a 90s game on your laptop.

* Apparently as you scale it gets pretty pricey.

The main thing is: it actually slows you down! The whole premise is this should speed you up.
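
The join point can be sketched with a hypothetical orders/customers schema (none of this is from the original post); in Postgres it is one query and one round trip, while in a document store you fetch and stitch in application code:

```sql
-- One query, one round trip (hypothetical orders/customers schema):
SELECT o.id, o.total, c.name
FROM orders AS o
JOIN customers AS c ON c.id = o.customer_id
WHERE o.created_at > now() - interval '7 days';

-- With Firestore there is no JOIN: you query the matching orders,
-- then issue a separate async get() per customer_id in a loop.
```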

refulgentis
18 replies
16h8m

Supabase is the iPhone to Firebase's Palm V -- highly recommend, if you're a fellow millennial like me who grew up on mobile, and things like "much less code to just write a simple API backend for your thing" sound like 6 months and paying another engineer.

EDIT:

loud buzzer

Careful, Icarus: "permissions can be setup to allow global read-writes" is a "vuln" of every system.

p.s. Any comment on why her blog has you guys "remembering Chattr" then getting a seedy Firebase pwner GUI, and yours has you diligently looking through .ai TLDs?

n2d4
11 replies
16h1m

I think Supabase is much better than Firebase, but I find its security model worse; Firebase was very clearly designed with this in mind, while Supabase is just a Postgres DB with RLS as an afterthought.

One particular thing that annoys me with SB is that, by default or when you create a table with SQL, tables are publicly accessible, which is very bad! (Firebase defaults to no access in production mode.)

Sai_
6 replies
15h40m

I don't believe tables are readable by default even if you haven't defined any RLS policies for that table. I'm building something on SB right now and have been burned more than once because I thought that the absence of a policy meant open access to everyone.

n2d4
5 replies
15h34m

I just checked, and newly created tables without RLS are accessible to anyone: After running `CREATE TABLE x` in my SQL client (which succeeds with no warning), if I go back to the table UI on Supabase it says "WARNING: You are allowing anonymous access to your table". (It's good that there's a warning in the official interface, at least, but what if I use my own SQL client? What if my ORM is creating tables?)

Your confusion probably stems from how you can have RLS disabled, or RLS enabled with no policies. If you have RLS enabled with no policies, the access is restricted. But if RLS is disabled (or never enabled!), then your table is blasted to the entire internet.

This confusion kind of proves my point; if DB access from untrusted clients were baked into SQL since birth, RLS would probably be enabled by default.
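
The three situations described here can be sketched in plain Postgres DDL (table and column names are hypothetical; `auth.uid()` is Supabase's helper returning the caller's user id):

```sql
-- RLS never enabled: through Supabase's client API (anon key),
-- this table is readable and writable by anyone.
CREATE TABLE x (id bigint PRIMARY KEY, owner uuid, secret text);

-- RLS enabled with no policies: all client access is denied.
ALTER TABLE x ENABLE ROW LEVEL SECURITY;

-- RLS enabled plus an explicit policy: only what you allow.
CREATE POLICY x_owner_select ON x
  FOR SELECT USING (owner = auth.uid());
```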

nop_slide
2 replies
13h58m

I don't understand the "RLS is disabled" warning thing. I also have that warning on a project where I migrated to Supabase via a SQL dump/restore from another PG instance.

I’m using supabase as “just Postgres” at the moment and the only access to the data comes from a server I control.

Could you explain how my data is being “blasted to the internet”?

Genuinely concerned if I’m grossly overlooking something.

n2d4
1 replies
13h1m

If you don't use the client library (and never expose the anon key), you're most likely fine. If you do (even if just for Supabase Auth or so), your data is exposed and you need to enable RLS on all affected tables ASAP, or an attacker can access the entire database, similar to what OP did with Firebase.

nop_slide
0 replies
4h22m

Gotcha, yeah I’m not using the client lib at all. Good to know.

refulgentis
1 replies
15h15m

The "when I create a table via SQL statements at shell it does what I say" isn't a vulnerability, I don't think.

The comment chain went long enough that I got confused and thought I was missing something, so I started a brand new account, brand new project, brand new table: RLS is enabled by default, has a big "Recommended" highlighted next to it, it is checked, the entire section is highlighted, and there is documentation right below it. Source: https://imgur.com/a/X9oJ2i9

It's enabled by default, quite forcefully so

but I'm not a Postgres admin, maybe there's a stronger way you know of to enforce it, so you can prevent the footgun of CREATE TABLE?

n2d4
0 replies
15h5m

I mean, I don't disagree, but what I'm saying is that SQL/Postgres (hence also Supabase) was not designed for databases accessed by untrusted clients; instead, it's an afterthought, and it shows.

Whether it's a "vulnerability" or by design is another question, but it's definitely a footgun (particularly for new Supabase users that use an ORM like Prisma, which has its own UI and creates tables by itself).

The solution might just be to not let untrusted clients access your DB.

anoncareer0212
2 replies
15h59m

In what way do you perceive it to be an afterthought?

It's front-and-center constantly, and has _all_ access disabled by default on tables every time I use it.

n2d4
1 replies
15h50m

It only has access disabled if you enable RLS on that table. If you do `CREATE TABLE`, or don't check the checkbox in the UI (TBF it's big and green and has a warning that's hard to miss), then access is public.

I guess my main concern is that it's hard to set up RLS correctly using SQL. Because it's two separate statements, if your `CREATE TABLE` succeeds but the `CREATE POLICY` does not, you're also exposed. And it is more annoying than it should be to test the rules (Firebase has a dedicated tool for that).

I now just use Supabase to host a normal Postgres that only my backend connects to. That works well.
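
One way to blunt the two-statement footgun is to wrap the DDL in a single transaction, so the table never exists without its policy. A sketch with a hypothetical table (`auth.uid()` is Supabase's helper for the caller's user id):

```sql
BEGIN;

CREATE TABLE bookmarks (
  id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  owner uuid NOT NULL,
  url text NOT NULL
);

ALTER TABLE bookmarks ENABLE ROW LEVEL SECURITY;

CREATE POLICY bookmarks_owner ON bookmarks
  FOR ALL USING (owner = auth.uid());

COMMIT;
-- If any statement fails, the whole transaction rolls back,
-- so no unprotected table is left behind.
```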

bravura
0 replies
14h3m

I built a supabase app the past two days, and I agree.

I did find it a footgun that creating a table through SQL was not private by default. (Why doesn't Supabase apply RLS by default to tables created through SQL?)

Serverless also turned out to be more trouble than it was worth. In particular:

* Doing DB business logic in JS is gross.

* It's tricky to secure a table to be semi-public. e.g. you have a bookmark site and you don't want users to browse all URLs, just the ones they have bookmarked. The best solution appears to be disabling foreign-keys until transactions are done and then having a complicated policy.

* It's a pain to set up a CLI client that interacts with the DB. I think you have to copy-paste the access AND refresh tokens to it. I couldn't figure out a way to create my own API tokens.

A backend is nice, because it is private by default.

asciimike
0 replies
13h56m

Firebase was very clearly designed with this in mind

Yes and no ;)

The original release of the Realtime Database didn't have security rules (though they were thought of at the time), and they were added in late 2013/early 2014 (IIRC). At that point, in the name of "easier getting started experience (don't force users to learn a custom DSL)", the default rules were `read: true, write: true`. As you might expect, it resulted in a high potential for this type of thing, and sophisticated customers cared _a lot_ about this.

This changed at some point post-acquisition (probably 2016?) when the tradeoff between developer experience and customer security switched over to `false/false` (or something slightly more secure than `true/true`).
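
For illustration, the old wide-open default in Realtime Database security-rules JSON:

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

versus a locked-down per-user layout (a hypothetical sketch, not any particular app's actual rules):

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```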

Firebase Security Rules were upgraded and added to Firebase (Cloud) Storage and Firestore, with both integrations being first class integrations, as _the whole point_ of those products was secure client-side access directly to the database from day 1.

The tricky part of any system in this space was designing something that's simple enough to learn, highly performant, and also sufficiently flexible to answer the question "allow authentication based on $PHASE_OF_THE_MOON == WAXING_GIBBOUS" or some other sufficiently arbitrary enterprise parameter. Most BaaS products at the time optimized for the former, then the latter, and mostly not the flexibility; however, over time, it turns out that sufficiently large customers really only care about the last one! Looks like Firebase solved this recently with auth "blocking functions" (https://firebase.google.com/docs/auth/extend-with-blocking-f...), which is sort of similar to Lambda Authorizers (https://docs.aws.amazon.com/apigateway/latest/developerguide...), which I believe is a pretty good way of solving this problem.

Disclosure: Firebase PM from long ago

zilti
0 replies
6h26m

At that point, you might as well just use PostgREST.

rezonant
0 replies
16h4m

sounds like 6 months and paying another engineer.

If you take this approach, it's "pay now or pay later".

-- Fellow millennial

quickthrower2
0 replies
9h45m

I'll happily take 6 months' pay to knock up a quick Node API. :-) Just need to find a beach first.

What I found is that you are right: FB is easier for the Millennial, Gen Z, Boomer, or whatever IF everything you need can be done with rules and schema.

As soon as you need to write functions (because rules are not sophisticated enough, or are too slow/expensive, or you want to know why the thing got denied), you are writing backend code.

It is actually easier to write the same code in a NextJS template - there is less to learn, fewer docs to read. Then chuck it on Vercel, which will deploy and devops it for you. So you have all the devops done for you, like Firebase would, and you have spent less time. And if you are talking to Postgres instead of Firebase from the backend, it is actually easier IMO. A line to connect to pg. A line to issue a query.

Guess this is just my opinion, but it is less code, less environment-variable farting around, no downloading a weird .json with all the credentials. If I were so inclined, I would write a blog post showing how many fewer lines of code are needed, how much less understanding is needed; and with the managed infra/DB offered by Vercel etc. you are still serverless, etc.

hot_gril
0 replies
15h49m

"permissions can be setup to allow global read-writes" is a "vuln" of every system

Question is how much effort that is. It's scarily easy on Firebase, idk about Supabase.

codesnik
0 replies
14h4m

Used it and can't really recommend it. RLS policies slow down even the simplest queries 1000x sometimes, and Postgres's current EXPLAIN ANALYZE isn't much help. Testing an app on it is still a pain. The default migration engine is one-way. Baked-in database backups are close to useless. I mean, I managed to solve a lot of those issues for myself, but it still felt like I was reinventing bicycles instead of doing actual work, and I still had a subpar experience.

MrBruh
0 replies
16h5m

loud buzzer

Sorry, but supabase has a similar issue.

Another blog post going over that has been or will be made by Eva (referenced on the site)

eddiewithzato
7 replies
14h36m

All roads lead back to RDBMS, it's amazing how this piece of theory just works.

randomdata
2 replies
13h46m

In my experience, the roads lead back to SQL. It deviates from the relational model. It may even be that SQL was successful because it deviated from the relational model. Perhaps the theory doesn't just work?

int_19h
1 replies
10h4m

Roads lead back to SQL because it became a de facto industry standard for "relation-like" stuff.

Can you give an example of a query that cannot be expressed well in relational algebra, but can be in SQL because it deviates from that?

randomdata
0 replies
2h1m

> Roads lead back to SQL because it became a de facto industry standard for "relation-like" stuff.

But what was in question is why SQL is the standard. Did it take that position because of its deviation? If so, that would suggest the theory doesn't just work. Without actually profiling, I suspect that the deviation allows some real-world optimizations to take place, enabling SQL databases to be faster than something with strict adherence to the theory. That would be a good reason why you might have to choose SQL over a strict alternative.

> Can you give an example of a query that cannot be expressed well in relational algebra

Seems not. CloudFlare blocked the submission, complaining that I was submitting a SQL query, which it thinks is a security concern for some reason...

In lieu, just think about what a relation is and how SQL is not relational. Even some of the simplest select queries you can imagine can demonstrate your request.

SkyMarshal
1 replies
14h12m

> this piece of theory

Key words right there. The relational model is a timeless mathematical model for data that gains both logical consistency and adaptability as a result. It has and will continue to stand the test of time.

quickthrower2
0 replies
9h58m

And in practice it has a superpower: agility. The pointy haired boss wants your OLTP to be an OLAP, and you can kind of hack it. You want to put the user's birthday on the settings page this quarter? Sure. Even if that is in another table. You can even make it efficient.

whaleofatw2022
0 replies
13h53m

I mean FFS I can get a process to write more rows/sec to AuroraPG than Dynamo with needed semantics, with less code and lower IOP cost

giantg2
0 replies
14h26m

I'm coming to this conclusion as well.

Something like DynamoDB can be great for simple data. I liked the idea of Graphql (technically the API query and not the database). Both of them turn into hot garbage once you get into complex data, especially if it's being aggregated from multiple sources. Or maybe the systems I work with just implemented them poorly.

resolutebat
6 replies
15h50m

Firebase's whole premise is seamless syncing between locally cached data and your backend. If you "just use Postgres", life is simpler until your user goes offline/runs out of mobile data/whatever, and then they're immediately screwed.

joshspankit
2 replies
15h38m

This is the exact use-case I want to optimize for. Offline-first with robust and seamless syncing. Firebase keeps promising it but I would love to find more transparent tools that work better on mobile + web.

quickthrower2
0 replies
14h37m

I feel this needs a framework (not a library) to take care of it all. Abstract away the webbyness. Something like Elm, with a type that represents data, and behind the scenes it does all the ServiceWorker and syncing crap for you.

ochiba
0 replies
14h31m

It's worth noting that Firebase doesn't have a true offline-first architecture, but rather cloud-first: by default, queries run against the cloud and the results of those specific queries are temporarily cached on the client. Firestore will try to reach the server first before falling back to the local cache, which can result in a subpar UX on a patchy network connection. It does also provide store-and-forward of updates from client to server. But it's not a true offline-first architecture, since it does not preemptively sync a database to the local user device for offline-by-default access.

Regarding Postgres, that is where tools like PowerSync (disclosure: co-founder) and ElectricSQL are useful, which are both sync layers for Postgres for offline-first architecture.

lukevp
0 replies
15h34m

Most apps are online by default these days and don't even gracefully degrade without internet. Firebase does have the offline DB, but it has a ton of other features too, and I wouldn't say the offline DB is its only selling point.

btown
0 replies
14h40m

https://supabase.com/blog/react-native-offline-first-waterme... may be of interest. https://supabase.com/blog/postgres-crdt seems to be abandoned but would be the next logical step beyond this.

endisneigh
4 replies
15h54m

I see this kind of post all of the time. If you’re using relational data with a key value store you’re doing it wrong. You can do anything you can do with a relational database with a key value store, but there are trade offs since now you have to heavily denormalize for performance and figure out how to keep things reasonably consistent.

Firebase is not an alternative to Postgres alone. You need an actual API server. The value of Firebase is you don’t need that, nor do you need to worry about ops, authentication, queues or other things.

The issue the OP found could have been easily fixed by simply reading the docs, but that seems to be a rare activity these days.
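
A minimal Python sketch of the denormalization trade-off described above, with plain dicts standing in for a document/key-value store (keys and field names are made up for illustration): the customer's name is copied into every order so a read is one lookup, but a rename must now touch every copy, and keeping them consistent is the application's problem.

```python
# Documents in a key-value store: the user's name is denormalized
# (copied) into each order so an order can be rendered with one lookup.
store = {
    "user:1": {"name": "Alice"},
    "order:100": {"user_id": 1, "user_name": "Alice", "total": 9.50},
    "order:101": {"user_id": 1, "user_name": "Alice", "total": 4.25},
}

def rename_user(user_id, new_name):
    # No joins: every copy must be updated by hand; miss one and the
    # data is silently inconsistent.
    store[f"user:{user_id}"]["name"] = new_name
    for key, doc in store.items():
        if key.startswith("order:") and doc["user_id"] == user_id:
            doc["user_name"] = new_name

rename_user(1, "Alicia")
print(store["order:101"]["user_name"])  # Alicia
```

In a relational database the name would live in one row and a join would do this for free; in a KV store the fan-out update (and its failure modes) is on you.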

quickthrower2
3 replies
15h46m

There is no such thing as “relational data” here. There is the data I need to store to implement my app. No matter how I shaped it, it was suboptimal. Where it might shine is a subsystem like chat with just messages. Oh just got a flashback about Firebase rules. That alone is a time sink where you could have got the project done in Rails already :-)

The hard work of using Firebase's APIs and libraries, and reading its docs (which are detailed but badly organized), is more than the delta of not needing a backend. And for a non-trivial app you will end up using functions: in fact, if you want a guarantee that your user has a name then you will need to write a function. And that is… a backend, like writing an app.route statement.

endisneigh
1 replies
15h38m

From this post I can tell you’re not really understanding how Firebase is supposed to be used, which is fine. For you it’s better to use the traditional approach with database and app server.

And yes, there is such a thing as relational data. If you do not believe this then you really shouldn’t use Firebase (or dynamodb for that matter).

quickthrower2
0 replies
9h52m

I know I am holding it wrong etc! But I really tried in earnest, as a fanboy of firebase, for quite a long time. The problems I had were with basic things. You have companies, a company can have many users, users might belong to more than one company (hello Slack...) and then there can be relationships between users.

Putting aside the problem between chair & keyboard.

Another difference is that if you make a mistake in your relational schema, you can SQL your way out of it - add an extra join or group by. And you can also fairly easily migrate your way out of it to a new schema with the right structure.

This requires actual code with firebase, and a lot of patience, and probably a lot more downtime. So you need more of a waterfall approach, I would suggest, to design a schema ahead of time, and know all of your requirements. NoSQL document-oriented schemas just aren't flexible (unless the DB supports something like materialized views to help you get out of it)
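
The companies/users shape described above is the textbook many-to-many case, where "SQL-ing your way out" is just an extra join or GROUP BY. A minimal sketch using Python's stdlib sqlite3 (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE users     (id INTEGER PRIMARY KEY, name TEXT);
    -- Join table: a user can belong to many companies (hello Slack)
    CREATE TABLE memberships (
        user_id    INTEGER REFERENCES users(id),
        company_id INTEGER REFERENCES companies(id),
        PRIMARY KEY (user_id, company_id)
    );
    INSERT INTO companies VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO users VALUES (1, 'Alice');
    INSERT INTO memberships VALUES (1, 1), (1, 2);
""")

# A question you didn't anticipate at design time is one join away:
rows = conn.execute("""
    SELECT u.name, COUNT(m.company_id)
    FROM users u JOIN memberships m ON m.user_id = u.id
    GROUP BY u.id
""").fetchall()
print(rows)  # [('Alice', 2)]
```

In a document store the same membership data would typically be embedded in either the user or the company documents, and answering the "other direction" later means a migration rather than a query.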

hot_gril
0 replies
15h38m

While NodeJS+Postgres is my go-to, I think it's harder than you're making it sound. Firebase would probably be easier for someone who's new to this altogether or somewhere in between.

There's still nothing that holds your hand through a proper client-server interface, good relational schema design, and all the glue in between. Partially because nobody agrees on what those are.

ufmace
2 replies
13h42m

Maybe I'm like a Luddite or something, but I feel like I keep hearing about Firebase but still have no idea what it really is or why/how I would use it in a project. I'm just sitting here on my own building projects with mostly Postgresql DBs, once in a while MySQL, and not suffering massive security breaches. Thanks I suppose for giving me a data point that I'm most likely not missing anything.

ElFitz
1 replies
13h10m

It’s a hands-off database and auth service, initially intended to be directly accessed by thick clients, with little to no backend logic (although they have since added FaaS).

When mobile apps started out, most had little to no online features.

As the mobile apps market grew, more and more of these apps started requiring account persistence, sharing content with other users, real-time online interactions, etc.

That's when Backend as a Service became a thing (eg Parse), targeting developers with little to no server-side experience. And that's when Firebase popped up.

ufmace
0 replies
2h23m

Ahhh Backend As A Service. I guess that makes sense. Not something I could see myself ever using, but I suppose I can see how somebody might use it if they don't know how to write and run their own backends or don't have authority to spin one up.

Guess I'm a little lucky in that I can spin up personal backend services just for kicks, and even though DayJob is pretty corporate and locked down, I can still spin up a new backend on my own with not much oversight as long as it doesn't touch certain sensitive things.

Thanks for a brief and clear description - it's surprising how few people seem able to write one, and how many official corporate sites bury what their service actually does behind 10 pages of marketing fluff and stock photos.

laurieg
2 replies
15h24m

The flakey firebase local emulator is the bane of my existence, and poorly documented to boot.

On top of the Googlized clickops, there's the whole Firebase vs Google cloud situation, where you end up having to drop down to "real" google cloud for certain specific features. The docs appear to be detailed but you often end up with more questions than answers.

If you are ever thinking about using firebase, give Supabase a try. The emulator works well, the dashboard is there for prototyping but you can just write SQL to clearly define your database and migrations. Since it's just postgres you have a clear route to leave Supabase if you should ever want to.

habosa
1 replies
13h56m

Just curious, what’s flakey about it?

I’m not at Google anymore but I was a core contributor to the Firebase emulators project when I was. I can think of many flaws with the emulators but flakey is a new one to me

quickthrower2
0 replies
10h1m

It often just crashed with an error. Now I am a Windows user, so YMMV, and this might be the reason. In some places the behaviour was slightly different and I had to work around that. I don't recall the specifics. And as for the idea of a test suite that starts the emulator, runs the tests and gives a result, reliably... well, I gave up on that.

hot_gril
0 replies
15h47m

Firebase is a whole platform with auth, file storage, functions, etc besides just its DB feature, but maybe this wasn't always the case. Anyway, yes, I don't look past Postgres unless I have a very specific reason.

TheAceOfHearts
0 replies
13h49m

When I was evaluating Firebase a few years back, the thing that most annoyed me was that their frontend library wasn't open source. Google just shipped an obfuscated and minified JS library. The lack of source mixed with their terrible docs made it a non-starter for me.

I remember having some issue, and thought: well, it's JS, let me just check the source like I normally would! Only to find out that you couldn't browse the full client source code anywhere. At that point my only option was to reverse engineer the minified source which just seemed silly and like a waste of time.

Firebase's moat has nothing to do with their frontend library, which anyone could reverse engineer with a little bit of time. And yet they still kept it closed source. I don't know if anything has changed since then, but that was the primary reason why I lost interest in the service.

Aurornis
47 replies
14h17m

Timeline (DD/MM)

06/01 - Vulnerability Discovered

09/01 - Write-up completed & Emailed to them

10/01 - Vulnerability patched

Note those dates are DAY-MONTH. At least they patched it within a single day.

I find it funny that the author found a massive vulnerability but chose to wait a couple days to report it so they could finish a nice write-up.

Reminds me of my experience with HackerOne: We had some participants who would find a small vulnerability, but then sit on it for months while they tried to find a way to turn it into a larger vulnerability to claim a higher prize.

Then when they finally gave up on further escalation and submitted it, they'd get angry when we informed them that we had already patched it (and therefore would not pay them). The incentives in infosec are weird.

LeoPanthera
14 replies
13h28m

In what year was this? January 10 is tomorrow, even on the east coast, at the time of writing this comment.

nerevarthelame
8 replies
13h18m

Someone living beyond the US's east coast? Impossible!

LeoPanthera
7 replies
13h10m

I don't think it was an unreasonable assumption given that the article talks specifically about American fast food chains.

kelnos
5 replies
12h16m

I guess three clues:

* They were just trolling Firebase accounts for anything left open, and the first hit was a company that works with a bunch of American fast food chains. That doesn't require OP to live in the US.

* They specified "America's fast food chains"; someone living in the US probably wouldn't qualify it with "America's".

* They used a $DAY/$MONTH date format, which is uncommon in the US.

tempestn
3 replies
10h12m

* If they are in America, they're a time traveller.

InCityDreams
1 replies
8h45m

"[T]hey are", and "they're", in the same sentence. I don't know...I don't know....

tempestn
0 replies
7h46m

The first 'are' is emphasized.

bryanrasmussen
0 replies
9h27m

they discovered the vulnerability by reading about it on HN and then going back with the posted write up, classic lazy time travelers.

Gabrys1
0 replies
10h3m

* They specified "America's fast food chains"; someone living in the US probably wouldn't qualify it with "America's".

I call that US-centrism. Quite annoying to non-Americans living in the States.

bdcravens
0 replies
12h14m

The way the dates were written should be an indication that they aren't in the US.

dailykoder
2 replies
12h21m

That's what I was thinking too, not because it's not already 10th January in Europe, but because I doubt you can expect a 'thank you' in <8 hours. So I assume this might have been 2023?

andreareina
1 replies
12h16m

It's 2024-01-10 07:11 in France

dailykoder
0 replies
11h20m

Duh, my head was still not awake. I wanted to write 'it's not even 8 am in Europe'.

sexy_seedbox
0 replies
12h30m

Not everyone lives in the US of A. Half the day is over already in East Asia.

MrBruh
0 replies
13h24m

New Zealand GMT+13 Moment

busterarm
5 replies
10h16m

    The incentives in infosec are weird.
Full disclosure is the only honest way to operate. For everyone involved.

Much smarter folks than me have been saying it for decades.

jampekka
4 replies
9h53m

Why should you be honest and open with companies? They for sure aren't with you.

busterarm
3 replies
9h46m

It's not about companies. It's about their customers.

Do you even know what Full Disclosure is?

jampekka
2 replies
7h14m

Why should the researchers or other vulnerability spotters care about the company's customers? The companies don't care further than what they can profit from the customers.

Yes, I know what full disclosure is. Companies don't do full disclosure about anything. Full disclosure is better than not disclosing publicly. But monetizing the vulnerability is akin to what companies do.

I find it utterly bizarre that it's totally OK and even lauded that companies are selfish profit maximizing machines that DGAF, but individuals should pamper them like babies.

busterarm
1 replies
3h21m

Full disclosure isn't something for _companies_ to do. It's what _researchers_ do. Full disclosure isn't compatible with the monetization incentives offered by companies. You're publishing in public and immediately.

I think you clearly do not understand what full disclosure is.

jampekka
0 replies
3h2m

My understanding of Full Disclosure is that researchers publish the vulnerability (and potentially exploit) publicly without coordinating with the software vendor. This contrasts with Coordinated Disclosure (sometimes "Responsible disclosure" in corporate propaganda) or No Disclosure (and potentially e.g. selling the exploit).

I admittedly used disclosure in a bit different sense for companies in that companies typically don't give out any (truthful) information they have if they aren't required by law. And they lie when profitable.

The symmetric action from a researcher is to sell the exploit to the highest bidder. Of course if the researcher wants to do other disclosures, that's fine too. But what I don't like is the double standard that researchers are scolded for being "unethical" but companies, by design, not caring about ethics at all is just fine and the way it should be.

biosboiii
5 replies
11h26m

When you turn actual, creative and exhausting work (vulnerability research) into some kind of high stakes gig job you deserve this problem.

I am not against bug hunting by any means, but if you want me to act like I care about your product and not about my money, pay me monthly.

LMYahooTFY
1 replies
11h6m

How do you measure productivity? How do you budget for a bug hunting department?

advael
0 replies
10h43m

Measuring productivity in a useful way is pretty close to impossible in a vast swath of jobs, though people make a killing (and make everyone involved considerably more miserable) pretending otherwise

The reason most people have converged on a preference for salaried work is that most jobs don't actually need consistency to be useful, but most people do need consistent pay to focus on a job

Aurornis
1 replies
4h5m

When you turn actual, creative and exhausting work (vulnerability research) into some kind of high stakes gig job you deserve this problem.

You don’t make HackerOne your primary source of security testing. It’s a fun thing you do in addition to your formal security work internally.

The reason people do it is because so many people expect or even demand payment and public recognition for submitting security issues they found. Just look at how many comments in this thread are insisting that they pay the author various amounts of money. The blog post even has a line about how they have not provided recognition (despite being posted exactly on the day it was fixed, giving the company almost no time to actually do so).

HackerOne style programs provide a way to formalize this, publicize the rules (e.g we pay $25K for privilege escalation or something) and give recognition to people finding the bugs.

Pentesters like it not only because they get paid, but now they can point to their record on a public website.

This isn’t a “gig economy bad” situation.

j0hnyl
0 replies
1h9m

Furthermore, companies that don't already have very mature security programs will not benefit from bug bounties. I've run a bug bounty program before on H1, and it was a nightmare. No one reads the scope, and 99 out of 100 of the reports you're inundated with are really trashy. Managing such a program is a full-time job for one or more people, especially at a big company.

marcod
0 replies
10h51m

Most vulnerability reports I see at work come from security researchers in Pakistan and India.

I have never found out if this is a side gig, a full-time job, or a hobby for people.

laserbeam
4 replies
11h8m

Yeah... Is it ok to do a public writeup on the same date the vuln was patched without an acknowledgement from the client? I would have scheduled this blog post at least a week later.

marcod
2 replies
10h50m

Once they changed the credentials and no longer share them, this particular issue should be gone, no?

laserbeam
1 replies
10h38m

Maybe... But bashing the client on the day they patched because they haven't communicated is somewhat shaky. Bashing them a week later is totally cool in my books.

tgsovlerkhgsel
0 replies
10h17m

What "client"? This looks like a researcher reporting a bug for free (or maybe through a bug bounty program). They have zero obligation and the vendor is not a "client".

jampekka
0 replies
9h50m

What client? They haven't even answered the guy's email.

MrBruh
2 replies
13h51m

I feel I should clarify: the writeup was not the blog but rather the vulnerability disclosure report (PDF) I sent to them directly.

MrBruh
0 replies
9h19m

To clarify the dates, the vulnerability was discovered on a Saturday (Friday evening) their time. It was reported on Tuesday (Monday their time)

The only email listed on their site was for the sales team which would not be checked on a weekend.

Aurornis
0 replies
4h9m

Yes, I understand, but that’s my point: In my experience, the detailed write-ups that external pentesters sent us could have been replaced by a 1-2 paragraph email for our engineers to read and fix ASAP.

dumpsterdiver
1 replies
10h10m

In cases where a small vulnerability is successfully turned into a larger vulnerability, everyone wins, right?

Considering that there is “more than one way to skin a cat”, it is not a given that vulnerabilities further along the chain will be resolved by closing the initial vector.

When a chain of vulnerabilities is reported it might become clear that not only does the initial attack vector need to be closed, but additional work needs to be done in other areas because there are other ways to reach that code which was called further along the attack chain.

Aurornis
0 replies
4h4m

In cases where a small vulnerability is successfully turned into a larger vulnerability, everyone wins, right?

Nope! The two vulnerabilities are usually one and the same. The person is just trying to find a clever way to access additional data to make their payout larger.

From the customer perspective, getting the initial vulnerability fixed ASAP is the best outcome.

When they start delaying things to explore creative ways to make their payout larger, everything goes unfixed longer.

UberFly
1 replies
13h53m

"No contact or thanks has been received back so far, I will amend this comment if/when they do so :)"

They couldn't even be bothered to send a proper thank you.

sailfast
0 replies
12h29m

To be fair... that's today. Guessing something might be in the works but it's 1AM Eastern Time in the US.

xivzgrev
0 replies
11h24m

thanks for the clarification - I also read this as it took them a MONTH to fix the vulnerability.

risyachka
0 replies
11h47m

Companies can always better align incentives by paying more and not try to downplay vulnerabilities.

purplebandit
0 replies
12h20m

because writing up a detailed report takes 30 seconds

mikeodds
0 replies
13h23m

Very much agree the incentives aren't fully aligned.

From a bug hunters perspective, certain issues are often underpaid or marked as non-issues (and then subsequently fixed without paying out) so it’s in their interest to find a chain of issues or explore to show real impact.

Then from the programme's perspective, you have to contend with GPT-generated reports for complete non-issues, so I can also understand why they might be quick to dismiss anything without hard evidence of impact rather than a "potentially could be used to".

jacobsenscott
0 replies
59m

The incentives in infosec are weird.

Well - only the amateur infosec world where you try and force someone to be your client after you do the work, and then get butthurt when they don't become your client.

In the professional infosec world the clients choose to hire you first.

davedx
0 replies
9h27m

Maybe also include in your quote that they didn't thank him for reporting it

bocytron
0 replies
9h49m

I find it funny that the author found a massive vulnerability but chose to wait a couple days to report it so they could finish a nice write-up.

Maybe it's because the write-up was well written that they could patch in a day?

Beldin
0 replies
9h25m

I find it funny that the author found a massive vulnerability but chose to wait a couple days to report it so they could finish a nice write-up.

That's what you'd expect: finding != understanding, and you need some understanding before you can submit a sensible, actionable report to the vulnerable party. And then you need to write it up in a way that will be understood by the recipient. Going from initial finding to submitting a detailed report in a few days is excellent turn-around time.

lulznews
18 replies
9h12m

With an upbeat pling my console alerted me that my script had finished running

Forget the pwn, how do I do this?

Also, HN used to think this was cool; now there are 20 posts blaming the hacker…

orpheansodality
3 replies
8h22m

I've appended `; tput bel` to the end of long-running scripts to get the same effect.

Fun fact: the `bell` control character is part of the ASCII standard (and before that the Baudot telegraph encoding!) and was originally there to ring a literal bell on the recipient's telegraph or teletype machine, presumably to get their attention that they had an incoming message.

To keep backwards compatibility, today's terminal emulators trigger the system alert sound instead.
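
The same BEL character can be emitted from any language, not just via `tput`; a minimal Python sketch (the terminal emulator, not Python, decides how BEL is presented - most map it to the system alert sound):

```python
import sys

BEL = "\a"  # ASCII 0x07, the BEL control character
assert ord(BEL) == 7

# Appending BEL to a completion message rings the terminal's bell,
# the same effect as `tput bel` or `echo -e "\a"` in a shell.
sys.stdout.write("long task finished" + BEL + "\n")
```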

pge
0 replies
5h9m

The Apple II+ still had a ‘bell’ key on the keyboard (I can’t think of a more recent computer that had that)

mnw21cam
0 replies
1h55m

I always used to just have 'echo "^G"' instead (where ^G is typed as CTRL-V CTRL-G).

austin-cheney
0 replies
7h50m

In Java and JavaScript it’s just:

    \u0007
It’s handy to put in your shell code that takes a few seconds, or more, to complete.

FrustratedMonky
2 replies
4h19m

Yeah, what happened to the "Hacker" in Hacker News. (responding to people blaming the 'hacker', not the sites).

This guy just grabbed publicly available information, and by 'public' I mean put out onto the web un-protected, just put out there. If you can just basically browse to something, is it really his fault for finding it.

It's like if I have a front door on my house, and just inside the front hallway I have a huge naked picture of my wife. If I leave the door open, can I get mad at pedestrians walking by for seeing the picture? Maybe they walk up to ring the doorbell just to get a closer look - walking up to the door, but not going in, is allowed.

m0rissette
1 replies
4h8m

I think according to the law and related suits, accessing publicly available URLs without authorization is still technically prosecutable - I'm not a CFAA expert but I'd double-check there

FrustratedMonky
0 replies
3h40m

I think you are correct, that the law says that.

I think the law is pretty wrong.

It means I can break law by just accidentally browsing to something. Can be breaking the law just by seeing it, before knowing I'm doing something wrong.

Basically, just see something and be guilty before being able to look away.

williamdclt
1 replies
7h55m

on macOS I just add `; say done` to my command. If I didn't think of doing it before starting the command (which is most of the time), I just type it and press enter while the command is running; it gets buffered and executed right after the command finishes (be careful that it's not an interactive program that you're executing though, or it might take your "say done" as interactive input)

zopa
0 replies
7h40m

You can also do Ctrl-Z to pause the running process, and then `%1; say done` (or whatever) to restart the first queued job and then run the new command. Avoids the interactive issue

weinzierl
1 replies
3h10m

Debian (and derivatives like Ubuntu) come with a handy shell alias called `alert`.

It is meant to be used after a command or a chain of commands to give feedback about success or failure. The alias by itself doesn't issue a ping, but can easily be amended to do so.

What worked for me is to add an invocation of `paplay`. Actually it is two different invocations, one sound for success and another one for failure.

In addition to that I also send an ASCII 0x07. I have both `tput bel` and `echo -e "\a"` in my alias, but don't remember why. Probably one of them is enough. I do this because I have my terminal emulator set to visual bell, and that causes the tab to change color when the command is finished, so I can immediately see it even if I am in another tab.

weinzierl
0 replies
1h9m

That being said, there might be an easier way by configuring the desktop environment to make a noise on notification, but that is a route I did not want to go down.

qznc
0 replies
4h0m

My fish shell shows a desktop notification if some other window has the focus and the command ran longer than 10 seconds.

Fish config: https://github.com/qznc/dot/blob/master/config/fish/config.f...

Notification script: https://github.com/qznc/dot/blob/master/bin/notify_long_runn...

I stole it from some zsh solution originally.

palmfacehn
0 replies
9h7m
noir_lord
0 replies
8h15m

    #!/usr/bin/env zsh

    (mpg123 /path/to/processing3.mp3 > /dev/null 2>&1)
processing3.mp3 is the "task completed" sound from Star Trek,

then it's just `./foobar.sh && boc` or `./foobar.sh; boc` as appropriate.

mmsc
0 replies
7h40m

Also, HN used to think this was cool now there are 20 posts blaming the hacker…

I'm not sure whether it's HN thinking this is uncool (it is cool!) or it's HN taking the unfortunately realistic position that this type of stuff only gets the reporters into trouble, after seeing it happen time and time again. People doing cool stuff get in trouble, and it's sad to watch.

johndough
0 replies
8h35m

On Kubuntu, you can use paplay to play short audio files. Change the path to an audio file of your choosing.

    sudo apt install pulseaudio-utils
    ./some_script ; paplay /usr/share/sounds/freedesktop/stereo/complete.oga

ginko
0 replies
7h49m

I have this in my .bashrc

  beep() {
    if [ $? -eq 0 ]
    then
      file=/usr/share/sounds/purple/receive.wav
      ret='true'
    else
      file=/usr/share/sounds/purple/alert.wav
      ret='false'
    fi

    (aplay $file 2>/dev/null >/dev/null &);
    $ret
  }
Can be called like this:

  $ command ; beep
Depending on the return value it'll give a different alert. It preserves the return value so you can still chain other dependent commands after it.

This depends on the libpurple sounds being where they are (works in Ubuntu at least)

binwiederhier
0 replies
5h21m

I suggest using ntfy [1] for this. It's open source and self-hostable. It lets you push notifications to your phone like this:

    ./myscript.sh; curl -d "Script done" ntfy.sh/mytopic
Disclaimer: I am the maintainer of ntfy.

[1] https://ntfy.sh/ + https://github.com/binwiederhier/ntfy

mellosouls
16 replies
17h28m

No contact or thanks has been received back so far :)

MrBruh
6 replies
17h21m

I wasn't expecting a bug bounty, but not even a 'thank you' does hurt my soul :(

PopePompus
3 replies
17h14m

Well, they're incompetent - is it a big surprise that they have poor manners too?

ryandrake
2 replies
16h40m

Yea, and if they were actually breached and there were victims, the first thing they would do is issue a press release telling the world "We Take Security Very Seriously."

ethbr1
1 replies
16h11m

Is it legally differentiated if they respond to the reporter?

Or is there some weird loophole of "We didn't take action because of your message. We just happened to patch the same vulnerability after you mentioned it. We are not aware of any penetrations, because we didn't notice your message"?

hnfong
0 replies
13m

Is it legally differentiated if they respond to the reporter?

Nobody knows.

But between taking an unknown legal risk, vs being seen as ungrateful, the choice for legal is quite clear.

wt__
0 replies
1h1m

If it's any consolation, people these days frequently ignore (or read but don't bother acknowledging) pretty much any email that was intended to be helpful, not just security disclosures.

sjfjsjdjwvwvc
0 replies
16h44m

Often when pointing out how people fell victim to a con they won’t thank the person who tells them about the con but rather attack them. Basically they can’t admit to being so stupid as to have bought into a con. On some level you can be happy they didn’t come after you or something.

I totally understand how you feel though.

j-bos
5 replies
17h10m

It's kind of wild that when businesses lose control of people's personal info, they get no punishment. And when someone saves them from losing people's personal info, they give no thanks.

Seems well-funded companies are immune from data liability or responsibility.

forward1
3 replies
1h48m

"Wild" is the unreasonable expectation your data is "personal" after sharing it with a third party, under a terms of service agreement no less.

ziddoap
2 replies
1h5m

terms of service agreement

Are those the documents, often dozens of pages of barely understandable legalese word salad, that we've conditioned nearly everyone to click past?

While I certainly agree that people share way too much data, I personally think hiding behind "it's in the terms of service agreement" is getting quite tired, when they are designed in such a way that you are encouraged to skip past them, and worded in such a way that a lay-person doesn't have a chance of understanding what the ramifications of agreeing are.

Not to mention that, quite often, you don't really have a choice in the matter if you want to have a relatively normal life (e.g. being forced to agree to the terms of service of some random service to submit an application to a job, and not having a job isn't an option).

forward1
1 replies
52m

What makes you think you're entitled to anything, let alone a "normal" life, in this world? No one forces you to live in and participate in society, but if you choose to, it's at your own risk.

ziddoap
0 replies
47m

This reply seems rather… unrelated to my comment. But perhaps it'd be a fun philosophical debate at some other time.

smegger001
0 replies
16h27m

Honestly, at this point is there anyone whose PII hasn't been leaked in a major company's/organization's data breach?

Wikipedia also has a list of major data breaches: https://en.wikipedia.org/wiki/List_of_data_breaches

Narkov
2 replies
17h4m

To be fair, it looks like it was only patched in the last 24 hours so not totally unreasonable...yet.

zilti
0 replies
8h17m

If you have time to fix it, you have time to say thanks.

internetter
0 replies
1h59m

it is 11:26:45 EST. Ready. Go.

"Hi, we have fixed the issue you reported to us. Thank you so much. We are willing to offer a reward of <x> dollars to you, because you have protected our customers. Please reach out with a payment address or any other questions you might have. Thanks again, Tim from <Large corporation>"

and... stop timer. 11:27:38

was that so hard?

j-bos
16 replies
17h6m

If this had been exploited and the job applicants to Target, Subway, Dunkin et al. had bank/credit fraud committed in their names, would the big companies be liable for not performing due diligence on chattr.ai? To be clear, I'm asking from a legal standpoint, not a practical one.

n2d4
10 replies
16h19m

For more crucial PII (such as SSN, health data, payment info, etc), vendors are generally required to have certifications from a third-party auditor (such as SOC2). If the big companies fail to check that, then yes, they can be made liable.

adrr
9 replies
12h32m

No rules or laws that require it. The closest requirement would be PCI around credit cards, but you need lots of volume to be required to do an audit. HIPAA just requires you to do risk analysis and implement risk management. SOX is up to the auditor; when I was CTO at a public company, they were fine with me signing an attestation of all the things we had implemented. Same with banks: no explicit requirement in either GLBA or FDIC rules. Core bank systems are so old that none of that data is even encrypted, and neither is network traffic. Stuff is still in COBOL.

The forcing function would be cyber-insurance policies, which typically want to see audit results if you have multi-million-dollar policy limits.

supafastcoder
3 replies
7h39m

No rules or laws that require it

It will just be the FTC knocking on your door…

malfist
0 replies
4h0m

The FTC will absolutely not knock on your door if you expose users' SSNs to the internet.

jacobr1
0 replies
3h13m

Or the SEC, because once the breach/incident is public, the share price drops, and failure to have disclosed those factors prior constitutes "securities fraud." Increasingly this is the default method of corporate regulation.

adrr
0 replies
2m

The FTC will come knocking at your door even if you do pass an external audit and have SOC 2/SOC 1/ISO certification. Equifax is an example.

zelon88
1 replies
1h51m

There are state laws that this runs afoul of. https://www.mass.gov/regulations/201-CMR-1700-standards-for-...

adrr
0 replies
5m

https://www.mass.gov/doc/201-cmr-17-standards-for-the-protec...

It's very basic. There are no best-practices clauses; it's all "reasonable" clauses. Also no requirement for an external audit.

throwawaaarrgh
1 replies
7h18m

HIPAA requires that no entity involved leak any PHI or penalties will be applied; you absolutely have to do more than "do risk analysis/management".

adrr
0 replies
13m
l33t7332273
0 replies
12h5m

I think the fact that this is true and well known(amongst those that could abuse it) is evidence that infosec, by and large, is overemphasized.

callalex
1 replies
10h47m

Someone applying to work at Taco Bell or Subway couldn’t afford a lawyer even if they worked for a full year and saved every penny.

amenghra
0 replies
8h3m

That's why the existence of class action suits is a good thing (imho). It balances power to some extent. The unfortunate reality is that only the lawyers make money in such cases.

qznc
0 replies
3h54m

There is probably at least one European citizen in there, so GDPR applies.

mihaaly
0 replies
7h24m

I assume the responsibility falls more on chattr.ai, except where the companies were using it as a tool and configuring the service themselves. It comes down to the contractual circumstances: 'the provided tool works well' vs. 'using the provided tool well', I guess.

MrBruh
0 replies
16h55m

Probably yeah, although it just comes down to whether they get sued or not, I guess.

NOTE: I am not a legal professional, just making my guess.

simonebrunozzi
11 replies
15h43m

No contact or thanks has been received back so far

WTF.

troupe
5 replies
1h40m

I'm curious if the best monetary approach for a white hat hacker would be to show them the problem, give them time to fix it, and then give them an option to pay a consulting fee for the discovery in exchange for NOT publishing the exploit (after it has been fixed). The idea being that showing what you have found on other sites has marketing value for a white hat hacker, but had the company hired you to discover the flaw, you wouldn't be publishing it.

jacobsenscott
2 replies
1h7m

The best approach is not to do it. Demanding money from someone that didn't hire you is never ethical - just childish. Would you like it if I showed up at your house, mowed your lawn, and then started banging on your door demanding $100 for mowing your lawn?

Also, what marketing value - if you're just pwning random web sites rather than getting hired to test a site's security you aren't in any market.

z3phyr
1 replies
50m

The grass in the lawn may not be that dangerous to other people. However, if your house is emitting radiation and a hero breaks in to clean it up for the sake of the other people you serve (because the town doesn't need to wait for you to hire someone), the hero deserves a reward and the owner of the house deserves punishment.

hnfong
0 replies
23m

Unless the "hero" is law enforcement or some other government agent with a warrant, he will likely have broken a bunch of laws by breaking into a person's house uninvited, and not very likely rewarded.

That's modern society for ya.

hnfong
0 replies
28m

give them an option to pay a consulting fee for the discovery in exchange for NOT publishing the exploit (after it has been fixed)

I'm not making any moral judgments, but purely from a legal perspective this sounds dangerously like blackmail. If anyone decides to take this path, be sure you understand the risks involved.

KomoD
0 replies
7m

and then give them an option to pay a consulting fee for the discovery in exchange for NOT publishing the exploit (after it has been fixed).

So... you're suggesting blackmail?

paxys
1 replies
12h27m

It has been less than a day, relax

ConSeannery
0 replies
4m

It was reported to them in September of last year

KomoD
1 replies
15h29m

It's pretty common in my experience, especially from larger companies

Recently I reported an issue to a company valued at >$10bil; the issues were quietly fixed, not a single response back, not even a "thank you".

philsnow
0 replies
14h39m

Some companies intentionally Gray Rock security reports, because they neither want to attract attention by giving bounties, nor do they want attention for not giving bounties. If they just say nothing, the researcher usually just leaves them alone.

One could speculate that these companies want to pretend that infosec isn't a problem for them, and if they ignore the "problem", it will go away.

dataengineer56
0 replies
9h30m

They fixed the bug, which is the important thing. Corresponding back to the hacker probably involves the legal dept, and it's probably safer to not respond at all.

intern4tional
11 replies
17h16m

This isn’t owning fast food chains; rather compromising some AI startup that has some of them as a customer.

Title is misleading.

MrBruh
3 replies
17h0m

It exposed PII of the managers & employees of ~half of the most popular fast food companies.

Personally I feel the title is justified but I understand and respect your viewpoint.

Also keep in mind that trying to clarify as such would also make the title much longer than I desired.

intern4tional
1 replies
14h38m

Title: I pwned Chattr.ai via Firebase misconfiguration

That’s what you should call it. It explains to readers what’s going on without over sensationalism.

That isn’t too long either.

namdnay
0 replies
9h8m

that's a bit unfair, I think it's pretty important that it has real world consequences. nobody knows what Chattr is and who their users are

borissk
0 replies
15h7m

Aren't you afraid one of the companies involved may file a complaint with the FBI or police and get you arrested?

isatty
2 replies
16h45m

I think it’s incomplete. The startup needs to be named and shamed on the title.

intern4tional
0 replies
14h35m

I don’t disagree with this either, I just didn’t think of it when I put my response in.

Naming and shaming does work.

giaour
0 replies
16h22m

The article is not shy about naming the startup (chattr.ai)

thaumasiotes
1 replies
16h55m

This isn’t owning fast food chains; rather compromising some AI startup that has some of them as a customer.

By this argument, getting access by phishing a company employee also wouldn't count as an attack on the company.

intern4tional
0 replies
14h40m

No, as a company employee is directly tied to, and the responsibility of, the company.

These companies are responsible for their employees behavior and data but they are not responsible for nor legally liable for (in most cases, some exceptions apply) the actions of a third party that they have retained to help with hiring.

In fact the contract they have with said third party likely absolves them of any liability.

The title should be: I owned an AI startup via Firebase misconfiguration.

You can even name the startup if you want. That’s not flashy though and this person wants marketing.

mellosouls
1 replies
17h10m

TBF your proposed title is less snappy.

intern4tional
0 replies
14h36m

Of course, but that's good in most cases, as then you don't get an overreaction.

The right people will read it (Chattr.ai's customers) and respond. Right now everyone looks at it and some CISO will overreact and make everyone go check their Firebase configurations, which may well be a non-value add.

pstuart
10 replies
14h58m

I was looking at jobs for my son at Safeway supermarkets and lazily put https://www.safeway.com/jobs in the browser.

That redirects to https://www.careersatsafeway.com/desktop/home -- which is very much not about jobs at safeway -- appears to be an Indonesian gambling/gaming site.

Safeway.com has zero email contacts published and expects communication to be via phone call or chatbot. I found their domain admin email and sent them info with no response, and no change to their site behavior.

This makes me think that they might be ripe for more monkey business but that's not my thing. Oh well.

iamhamm
3 replies
5h13m

Hi Albertsons/Safeway VP of Security Engineering here. Thank you for disclosing this. I’ll have it fixed along with the fact our VDP submission link is missing from the Safeway site. Here it is for future reference https://albertsons.responsibledisclosure.com/hc/en-us

pstuart
0 replies
1h24m

Hi! I was wondering if it would get noticed here ;-)

But as noted elsewhere, it's still not fixed.

And the link you shared is a good thing but is that going to be easy to find to someone who sees an issue with your websites? I'd recommend putting a link here: https://www.safeway.com/help/contactus

forward1
0 replies
1h56m

It's definitely not fixed: the (likely malicious?) redirect still happens for me now. How embarrassing (for you).

alright2565
0 replies
3h1m

Please also add a security.txt file so that it is not necessary to navigate through a labyrinthine site to get this information.

https://datatracker.ietf.org/doc/html/rfc9116
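For anyone setting one up: RFC 9116 requires only the `Contact` and `Expires` fields; everything else is optional. A minimal sketch, with placeholder values:

```
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Preferred-Languages: en
Policy: https://example.com/security-policy
```

The `example.com` addresses are of course placeholders, not Safeway's actual contact points.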

0xDEADFED5
3 replies
13h36m

what the hell, i see the same thing. it's crazy to me when large companies don't even have an option for: in case of dumpster fire, send an email here.

pstuart
2 replies
12h42m

Technically (or on any other basis) it's not my problem, but it bothers me because I'm weird.

I was tempted to find their CTO on LinkedIn and post a message there, along with the fact that there was no reply to my outreach nor a proper channel to make one.

I think the only thing in their defense is that they must get a lot of angry customer messages and they just don't want to deal with that.

namdnay
1 replies
9h10m

I very much doubt it's got anything to do with their CTO - the management of a corporate website is usually jealously guarded by marketing/corporate communications

pstuart
0 replies
1h28m

Yes, the CTO hopefully has nothing to do with lower level operations like that. But if they get a public burn they're going to issue a decree that will be addressed.

RamblingCTO
1 replies
8h5m

Seems to be fixed: This request was blocked by our security service

PopAlongKid
0 replies
3h25m

Not fixed where I am.

zharknado
9 replies
12h12m

From Eva’s post:

we didnt know much about firebase at the time so we simply tried to find a tool to see if it was vulnerable to something obvious and we found firepwn, which seemed nice for a GUI tool, so we simply entered the details of chattr's firebase

Genuinely curious (I’ve no infosec experience), wouldn’t there be a risk that a tool like this could phone home and log everything you find while doing research?
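One way to sidestep that risk is to skip third-party tools for the first check; a single unauthenticated read is easy to do yourself. A hedged sketch (the `/.json` path is Firebase's documented Realtime Database REST endpoint, but the database URL below is a placeholder; only probe projects you are authorized to test):

```python
import urllib.error
import urllib.request


def classify_status(code: int) -> str:
    """Map an HTTP status code to a rough access verdict."""
    if code == 200:
        return "open"      # request succeeded: data is world-readable
    if code in (401, 403):
        return "locked"    # security rules denied the request
    return "unknown"


def probe_realtime_db(db_url: str) -> str:
    """Send one unauthenticated, read-only request to a Realtime
    Database REST endpoint and classify the response."""
    url = db_url.rstrip("/") + "/.json?shallow=true"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as exc:
        return classify_status(exc.code)
```

Usage would look like `probe_realtime_db("https://your-project-default-rtdb.firebaseio.com")` (a hypothetical URL). Nothing here phones home, which is the point.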

bushbaba
5 replies
11h33m

Yes, but that might also be caught by infosec users of said tool who have things similar to “littlesnitch” alerting them to the outbound API call attempt.

genewitch
4 replies
10h57m

There used to be Windows GUIs for forcing new connections to ask, but I haven't seen anything like that in a while. I can't recall the name of the one I used to use, but it scored perfectly on ShieldsUP! - oh, Zone Alarm.

Little Snitch IIRC is macOS only, but it sounds lovely for this sort of thing.

hexadec
0 replies
1h26m

You can set this with Windows' default firewall. Setting to strict mode with no whitelist causes a UAC alert every time a process attempts communication.

flexagoon
0 replies
48m

There's a very good relatively new open-source GUI firewall app like this called Portmaster:

https://safing.io/

It's available for Windows and Linux

callalex
0 replies
10h46m

The generic term is “outbound firewall”.

Hrun0
0 replies
1h9m

You are looking for simplewall

Geisterde
2 replies
8h47m

That would be referred to as a honeypot. Sometimes administrators will set up their own honeypots to see the type of threats they are facing.

sunbum
1 replies
8h39m

No, a honeypot is intentionally insecure infrastructure set up to see who attacks it and how. A backdoored pentesting tool is a backdoored pentesting tool.

Geisterde
0 replies
7h51m

I'm not saying the pentesting tool is a honeypot, but thanks for asking.

speps
8 replies
16h41m

Who's to say they're the first to discover this? They're the first to discover it and do something to fix it.

I thought there was a US law now where breaches like this have to be reported?

RamblingCTO
2 replies
8h8m

In the EU this would hurt so bad they probably would've needed to close shop.

supermatt
1 replies
7h33m

That's complete FUD. GDPR fines are proportional to the size of the business and the scope of the violation. There are companies that have had data breaches, failed to report them, and still only been fined ~300 EUR. There are others still who have been fined nothing, subject to compliance.

xxs
0 replies
7h24m

As for size: the companies are large. The data processor - not so much.

MrBruh
2 replies
16h30m

I thought there was a US law now where breaches like this have to be reported?

Yes.

Will they report it?

Probably not (unless forced imo).

smegger001
1 replies
16h23m

I seem to recall a case of hackers anonymously reporting a data breach when the company they hacked refused to pay up and didn't report it as required by law.

n2d4
0 replies
16h15m

Yes, ALPHV/Blackcat blackmailed MeridianLink by hacking them and then filing a SEC whistleblower complaint [1]. As always, Matt Levine has a wonderful article on it: https://archive.ph/Yffbh

[1] https://www.burr.com/cyber-security-law-blog/ALPHV-extort-Me...

itsdrewmiller
1 replies
14h43m

You're probably thinking of recent SEC regulations requiring disclosure for public companies - https://www.sec.gov/news/statement/gerding-cybersecurity-dis...

Chattr is a private company - https://www.crunchbase.com/organization/chatrr

namdnay
0 replies
9h14m

The clients are public companies, and in the contracts they've signed with Chattr there will definitely be a clause that Chattr has to disclose everything to their clients, so that they themselves can disclose it to the markets.

MattDaEskimo
8 replies
16h50m

Full permissions for a user is blatant negligence.

For anyone who's never used Firebase before this is as simple as a single piece of logic that appears basically as:

if authUserID is UserDirectoryID

That simple.
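For concreteness, that check maps almost directly onto a Firebase security ruleset. A minimal Realtime Database sketch (the `users` path and `$uid` layout are illustrative, not Chattr's actual schema) that scopes each record to its authenticated owner:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

The `$uid` wildcard binds to the path segment, so the expression only grants access when the signed-in user's ID matches the record being touched.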

akersten
7 replies
16h42m

I've never used Firebase before. But are you saying that, in its default configuration, anyone who registers a Firebase account has R/W access to any Firebase database as long as the database owner forgot to put that line in there somewhere?

That seems like an insane design...

curtisf
4 replies
16h30m

No, the default is no access to anything. You have to write rules that allow access to each record in the database.

It sounds like the rule that they wrote only checked that the requester _is logged in_, because they assumed that visitors couldn't create their own accounts.

lolinder
2 replies
14h49m

Which, even if that assumption were true, is still bonkers, because from what I see in the article they had no partitioning between tenants or permissions checks for different user roles. So even if they hadn't accidentally allowed creating new accounts, any account on any one of their existing customers had full access to every row in the database.

MrBruh
1 replies
14h35m

any account on any one of their existing customers had full access to every row in the database.

Correct. :/

meandmycode
0 replies
12h4m

It's mind-blowing to me, as someone who's built a SaaS and then talked to customers and ultimately their CTOs and CDOs, that KFC and co ended up using such a service. Either they would isolate the level of data exposed to the service and trust them on their contract (and then ruin them in court), or they would require some kind of compliance like SOC 2, which should at least mean the solution was pen tested; any pen tester worth anything will immediately find Firebase is part of the solution and immediately test its access rules.

The fact that the company/CEO/CTO seems to just get away with this is depressing, because why should anyone else bother? It's not good business sense to invest in security if there are no serious repercussions.

hot_gril
0 replies
16h21m

Yeah, the whole design of Firebase is that the client interacts directly with Firebase, not via your server. Which makes sense for auth since you don't want to be handling that manually, but the database? That makes me uneasy.

throwaway0665
0 replies
16h14m

I've seen many many firebase projects with rules disabling access only if "auth != null" instead of implementing some kind of even rudimentary access controls. It's a very dangerous habit that seems to come straight from the firebase docs[1]:

When the user requesting access isn't signed in, the auth variable is null. You can leverage this in your rules if, for example, you want to limit read access to authenticated users — auth != null. However, we generally recommend limiting write access further.

[1]: https://firebase.google.com/docs/rules/rules-and-auth
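For concreteness, the `auth != null` anti-pattern described above looks like this as a complete Realtime Database ruleset; any signed-in user, including one who self-registers (as in this incident), can read and write everything:

```json
{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}
```

This is a sketch of the dangerous pattern, not Chattr's actual ruleset, which wasn't published.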

n2d4
0 replies
16h31m

When you create the database, you're asked whether you want to give everyone access (development mode), or whether no one gets it (production mode). If you choose the development mode, it will automatically disable that access after a certain timestamp, so you don't forget to update it before shipping. This of course doesn't stop people who don't care about security from just manually giving out public R/W, or extending the timestamp.

sampli
6 replies
13h41m

If you view this page in Safari, it’s just a text document

black3r
2 replies
7h45m

Since this is a post about security, this is your daily reminder to update your browser to stay safe on the internet. Up-to-date versions of Safari support AVIF images, and there have been multiple RCE vulnerabilities with known exploits fixed last year in Safari...

hospitalJail
1 replies
24m

iphones are the scariest device to do anything important on.

I had a moment of total freakout when I realized the person across from me at lunch had an iPhone on the table. Actually he had an Android, and we continued talking like no big deal.

To be clear, we were talking about a 10-100M dollar problem, this wasnt small potatoes.

Too many exploits, I can't imagine having anything of value on an iphone.

novagameco
0 replies
5m

I had a moment of total freakout when I realized the person across from me at lunch had an iPhone on the table

Why?

MrBruh
1 replies
13h20m

It is using the AVIF format (for images) for a 2x compression bonus over PNG while still maintaining higher quality than JPG.

If you can't view the images then you are likely using an outdated browser; all current versions of browsers support it (afaik) except Internet Explorer.[0]

...And if you are using Internet Explorer, then god help you.

[0] https://caniuse.com/avif

novagameco
0 replies
6m

I'm on Edge 120 (released a month ago) and can't see it

frakkingcylons
0 replies
12h3m

I'm not seeing it that way on Safari 16.1 on mac.

andrecarini
6 replies
12h3m

How much would this leak go for in the darknet?

crooksey
5 replies
7h34m

Deciding to sell this on the darknet is a life-changing decision: white to black overnight, and I imagine not really something most would contemplate. Payment in BTC, probably from an already compromised address, so loads of factors. Probably an easy + quick 2 BTC though.

mathverse
2 replies
2h9m

This is an easy and obvious exploit, so an attacker would need to extract the data from all sources ASAP. High risk of getting caught and ending up in jail, to be honest, for a measly 2 BTC. Not worth it for anyone in the US or even Europe.

gretch
1 replies
28m

Not worth it for anyone in the US or even Europe.

Lots of crimes are not "worth it". And yet criminals do it anyway, because criminals (like most humans) are not perfectly rational.

There are routinely reports of people trying to rob a gas station with a loaded gun: a $200 haul if everything goes perfectly. It doesn't, and now they have 10 years in jail...

leoqa
0 replies
6m

One crime is skilled, one crime is not. The skilled person has more options to earn 2BTC than the unskilled person does to earn $200.

If you’re a felon and unskilled you’re as desperate as it gets in America.

novagameco
0 replies
10m

Yeah it's like the difference between buying a handgun to go to the range and buying one to rob a liquor store

make3
0 replies
41m

I feel like, for a pro-level security person, 2 BTC is not worth stressing for approximately five years that your whole career could be taken down at any time, as security people absolutely cannot get jobs if they have a criminal record.

8organicbits
6 replies
9h51m

I would have stopped once I confirmed the leaked keys were valid. Looking at what types of data you had access to wasn't required. Downloading plaintext passwords of other people is probably too far. Impacted users may need to be notified about a breach. If needed, create an account of your own and target only that.

If there was a pentester agreement, safe harbor, or other protection that's different. Be careful out there.

Topfi
4 replies
9h17m

Looking at what types of data you had access to wasn't required. Downloading plaintext passwords of other people is probably too far. Impacted users may need to be notified about a breach. If needed, create an account of your own and target only that.

I'd argue that it was absolutely necessary to gauge the severity of this misconfiguration and furthermore, that Chattr.ai must contact every affected user, not MrBruh.

Their configuration allowed anyone to create an account and access plaintext passwords. There is no telling whether and how many outside of this disclosure have previously accessed this information and may intend to use it. This was negligence of the highest order, and it shouldn't be on the one finding and reporting this issue to rectify it.

zilti
2 replies
8h46m

That is not just negligence; that is stupidity of such an order of magnitude that the responsible people should never again be allowed to work on a software project.

8organicbits
1 replies
7h38m

Every company I've worked for, and every pentest contract I've done has found plaintext passwords or credentials stored somewhere they shouldn't. It's unfortunately very common.

Topfi
0 replies
4h20m

Customer credentials as in this example? I'll be totally frank, I'm having some trouble reconciling that with Article 34 of the GDPR and 1798.150 of the CCPA. Do none of these organizations have EU/CA customers or is the approach they take to laws the same as the one they employ for database security?

8organicbits
0 replies
7h40m

absolutely necessary to gauge the severity of this misconfiguration

Possibly. But what's the legal basis that allows random external parties to make that determination? Report the leaked credential, and let the company assess impact.

The problem is that pivoting to accessing user passwords may cause the companies to spend money notifying customers and harm their reputation. If they want to pursue legal action, those are clear damages.

Chattr.ai must contact every affected user, not MrBruh.

Agreed, a pentester directly contacting impacted users would increase the risk legal gets involved.

There is no telling whether and how many outside of this disclosure have previously accessed this information

Typically the company would review logs to determine that.

denysvitali
0 replies
9h47m

I would argue that looking at the type of data you're dealing with is actually a very important part of assessing the impact, but looking at the data itself is beyond that part.

Knowing that they store passwords in plaintext is a security issue on top of the R/W credentials.

digitcatphd
5 replies
41m

This is extremely annoying. Instead of fucking with other people’s companies why not build your own?

You pwned them? What are you twelve? All you did was commit a felony and post it online.

monsieurgaufre
1 replies
34m

Pretty sure that poking around for holes/exploits is part of the definition of what is a hacker. They notified the relevant organization as well. Not sure why you take that stance.

digitcatphd
0 replies
10m

And then posted it online? If his intentions were good he wouldn’t post their name.

kipukun
0 replies
38m

I'll bite -- he discovered a vulnerability in a large company and responsibly disclosed it to them. How is that a felony? Why would you post a felony online?

causal
0 replies
33m

How did the author "fuck with" the company beyond discovering a vulnerability and helping them fix it?

Kuraj
0 replies
37m

If I understand correctly, the article was published only after the vulnerability was patched. That sounds OK to me.

pierat
4 replies
15h34m

And folks, this is why you sell your exploits to the highest bidder.

Being "good" and giving companies free work is a HORRIBLE idea. They're never gonna pay, or even thank you. If they're not willing to treat security researchers properly, I see no reason to return the favor.

Remember security groups: if your company wont pay, there are others that will.

mapster
2 replies
4h9m

Did you not see the part where applicants' info was exposed? Making a few bucks by selling their data to <whoever> is 10000x worse than the Chattr dev not securing the files.

qznc
0 replies
3h50m

Could have submitted it to https://haveibeenpwned.com/

Chances are some blackhat already discovered this data and sold it.

pierat
0 replies
1h49m

Selling exploits (the words explaining how to) is a 1st amendment protected act.

Actually downloading the data from a hack and selling it is expressly illegal.

Now if the person/group you're selling to expresses intent to take illegal actions as a result, you have a duty not to sell. So, don't ask, and don't tell!

The real solution: companies should all allow for bug bounties, good-faith reporting, and proper compensation for reported issues. But as long as they don't, another group WILL pay.

poisonborz
0 replies
10h21m

Sadly this is the right direction. With time, companies will learn, but we can all be afraid of what world they will push for to solve this (it will be less "put more resources into proper opsec" and more "browser attestation").

lwhi
4 replies
8h23m

It seems crazy that no thanks or recognition has been given.

Is this because doing so might be seen as an admission of liability, and could be used in any legal cases that are brought?

didntcheck
3 replies
7h46m

To give the benefit of the doubt, it appears he only contacted them less than 48 hours ago. Their first priority should correctly be to fix the problem. They could be discussing a bug bounty right now and just haven't finalized the email yet

pge
1 replies
5h3m

American readers may not have noticed that the dates are in European DD/MM format, so they thought disclosure was Sept 1 rather than Jan 9.

MrDunham
0 replies
4h16m

I 100% saw it as MM/DD and was wondering why it took them three months to write up the vulnerability and a month to patch it.

Thanks for the clarification

zopa
0 replies
7h33m

“Thanks for coming to us with this, we’re looking at it right away” wouldn’t take a lot of time or commit then to anything

counterpartyrsk
4 replies
11h41m

This is the most perfect blog post. ZERO fluff, straight to the point. Win.

az226
3 replies
10h18m

Except it is almost perfect — it would have been perfect had he been thanked and rewarded. Of course that is not on him, but felt so disappointed reading that at the end.

brojo_
1 replies
9h51m

It's been less than 24hours. I don't think any company works at that speed.

davedx
0 replies
9h25m

Oh come ON. Sending an email acknowledging the report and saying "Thanks so much for reporting this - we will look into it ASAP" takes about 10 seconds

jb1991
0 replies
9h55m

"Perfect" is referring to the blog post, not the outcome.

KTibow
4 replies
17h7m

The timeline omits when the article was put online

MrBruh
1 replies
16h26m

It was posted earlier today (NZ Time). If they do end up reaching out though, I will amend that part with a revised statement :)

samstave
0 replies
51m

You could ostensibly make a great tool from this data for those seeking employment....

Make a tool which will look at the list of all the franchises within radius of person, and have it auto submit applications to all of them simultaneously...

not2b
0 replies
17h1m

That can be easily deduced.

boomboomsubban
0 replies
16h41m

According to the Wayback Machine, it first appeared January 10 2024. http://web.archive.org/web/20240000000000*/https://mrbruh.co...

theonething
3 replies
10h40m

If you grab the list of admin users from /orgs/0/users, you can splice a new entry into it giving you full access to their Administrator dashboard.

I'm not clear on this. Splice a new entry into what? The list of admin users? And then do what with it?

urbandw311er
1 replies
9h53m

I read this as worse - splice being a client side JavaScript function to add items to arrays. My concern here is whether the “is admin user” perms checks were done solely on the client side and not enforced on the API endpoint!

missblit
0 replies
1h15m

He's using the word as meaning "insert", not the JS function. He is saying he inserted a new row into a database to get admin access to a dashboard.

"splice" means to join two things as if by weaving them together. If used as "splice into" or "splice in" there is a sense of breaking something apart, inserting something into the gap, and joining it back together.

This all makes a bit more sense if you look up the etymology which was about ropes (despite splicing being about uniting, it's closely related to the word 'split').

iamflimflam1
0 replies
10h31m

Once he had access to Firebase (the database) he was able to add an entry to the list of admin users. With that done he could login as an admin user to the website and access the administrator dashboard.
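For illustration, a misconfigured Realtime Database exposes a REST surface where any path becomes readable and writable just by appending `.json` to its URL. A rough sketch in Python, with an entirely made-up project URL and guessed field names (not Chattr's actual schema):

```python
import json
import urllib.request

BASE = "https://example-project.firebaseio.com"  # hypothetical project URL

def node_url(path):
    # On an open database, these URLs answer unauthenticated GET/PATCH requests.
    return f"{BASE}/{path.strip('/')}.json"

def admin_entry(uid, email):
    # A payload shaped like an existing admin row; the field names are guesses.
    return {uid: {"email": email, "role": "admin"}}

# A PATCH merges the new entry in alongside the real admins ("splicing"):
# body = json.dumps(admin_entry("attacker-uid", "attacker@example.com")).encode()
# urllib.request.urlopen(urllib.request.Request(
#     node_url("orgs/0/users"), data=body, method="PATCH"))
```

With locked-down rules, the very same requests return "Permission denied" instead.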

hot_gril
3 replies
16h45m

Article gets to the point very quickly, nice.

MrBruh
2 replies
16h29m

Much appreciated, I am always open for further feedback too! (If there are ways I can improve my writing)

listenallyall
0 replies
11h2m

Article was good, and your instincts proved correct -- but if you want some truthful feedback, your headline is clickbait. You pwned a single vendor that happens to work with some fast food restaurants, you did not find a vulnerability within the restaurant companies themselves. "I pwned an applicant management system" is a lot less compelling than the headline you used.

howon92
0 replies
12h6m

My feedback: keep the same style :)

Sparkyte
3 replies
8h56m

Ethical hacking is a good thing.

Nice to see someone doing good.

refulgentis
2 replies
3h8m

They reworded things since yesterday:

Before, one collaborator had them in a chat sneering about chattr, checking their Javascript, then getting a GUI pwn tool for firebase.

i.e. a targeted attack with malice, followed up by a blog post wildly exaggerating what happened, with a disclosure policy of 'we emailed them once and they fixed and didn't email us back so we'll just publish'

Only spelling this out because it's important to point out the significant gaps between white hat culture and these actions, not only for the authors, but for people who are inspired and want to practice it

internetter
1 replies
2h3m

Before, one collaborator had them in a chat sneering about chattr

"Wow this thing looks crappy"

checking their Javascript

"I wonder if it is crappy"

then getting a GUI pwn tool for firebase.

"Huh, it seems crappy. Let's just check to be sure"

with a disclosure policy of 'we emailed them once and they fixed ...'

"Well, this thing is really crappy. we don't want to harm people. Let's tell them about how crappy it is to avoid harm"

and didn't email us back so we'll just publish'

"They fixed the thing, nobody will be harmed. We still think it's crappy so let's talk about it"

Why wouldn't they go public at this point? They've gotten nothing else out of it, and since the issue has been fixed there is zero harm to customers. Do you propose they go like

"Hey company, we found this really embarrassing thing you did. I see you fixed it now, so can we talk about it"

silence

"Oh well the company didn't say anything so we won't talk about it. So sad"

In what world do we not hold companies accountable? In what world do we blame the people who find these issues for free?

refulgentis
0 replies
20m

Note I'm not claiming disclosure is bad, but rather, this is a copy of a copy of a copy of a copy of a copy of a copy of how professionals handle these situations, to the point there's nothing left except the "1) pick a target 2) email them 3) write a blog post when fixed" parts.

habosa
2 replies
13h52m

Sad that in 2024 people continue to set their Firebase security rules to be wide open. Back in maybe 2015-2019 that was excusable because that was the default but now it’s just lazy.

Don’t expose your database / api / blob storage bucket / etc to the public! It’s not that hard to do it right, or at least “right enough” that you can’t get owned by someone scanning a whole TLD.
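Realtime Database rules are themselves a small JSON document, and the dangerous configuration is literally `.read`/`.write` set to `true` at the top level. As a sketch of how checkable that is, here is a toy linter for your own rules file (the `.read`/`.write` key names follow the documented rules format):

```python
import json

def is_wide_open(rules_json):
    """Flag a rules document whose top level grants unrestricted access.

    `.read`/`.write` may be booleans or rule expressions such as
    "auth != null"; only a literal `true` is unconditionally open.
    """
    rules = json.loads(rules_json).get("rules", {})
    return rules.get(".read") is True or rules.get(".write") is True
```

Something like this could run in CI so a "temporarily open for debugging" rules file never ships.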

thatwasunusual
0 replies
1h38m

Sad that in 2024 people continue to set their Firebase security rules to be wide open. [...] Don’t expose your database / api / blob storage bucket / etc to the public!

What is additionally sad is that your comment - in 2024 - is being downvoted.

araes
0 replies
50m

Partially, this seems like an issue with Firebase, where the defaults are possibly set on something that is not sane from most professionals' perspective.

Having slightly tried Firebase, I can also say that the Google cloud tool environment was really confusing the last time I tried using it. Just this enormous maze of switches, and dials, and widgets, like a lot of the popular IDEs.

If the defaults are not set on something sane, and I, a personally evaluated competent tech user with some background in security (fed work) can barely find the settings, then most normal humans with limited grasp of those issues probably won't even know to look.

unoti
1 replies
1h33m

I worked with Firebase for a while, lured in because of how easy it was to do certain things. It makes certain kinds of operations essentially zero effort, such as getting realtime updates on the frontend when something changes. But it also creates a huge amount of effort that is trivial with other frameworks, such as creating a huge effort for security. I found that what I gained in convenience, I lost by needing to do so much work continuously battling with security rules. I left it behind and never looked back, and it made me much more cheerful about the work that I needed to do to establish and maintain more conventional backend data systems.

robertlagrant
0 replies
1h14m

I made an app in Firebase once and did it so that people could collaborate but they used per-session IDs that were linked to their real IDs behind the scenes, so people couldn't spot trends of activity over time.

I found it a little tricky to start with while getting familiar with the rules, but it worked really well after I got the hang of it.

thekombustor
1 replies
13h41m

At the time of writing, accessing the link returns a bunch of prometheus metrics... interesting.

MrBruh
0 replies
13h19m

Shouldn't be anymore, it was a "pushing to production" moment. I wanted analytics since my site was getting flooded with traffic.

mmsc
1 replies
15h47m

Stepping aside for a moment and thinking about the scope of this, I think it’s a good example of why technological diversity is something to long for. If Chattr can be pwned like this so easily, they likely have many much more serious issues which in turn will affect half of America’s fast food chains.

Aachen
0 replies
34m

I've heard it told that's why BIND and unbound exist alongside each other

hazebooth
1 replies
16h50m

i love the picture of your cat on the home page :)

MrBruh
0 replies
16h25m

That's my lovely cat, Jingles. She is getting a bit old so I thought I would immortalize her on the homepage of my site.

bomewish
1 replies
5h17m

They need to pay this guy 100k. And fire someone.

mapster
0 replies
4h19m

They can’t, because most of the people who would do the firing would also be held accountable and fired

SoftTalker
1 replies
15h50m

At this point I would not apply for a job if the employer used a third party online service. Seek out employers who do their own hiring and talk to candidates face-to-face.

If they steer you to one of these third party services, send your resume by snail mail directly to the HR director with a cover letter highlighting all the data breaches such as this one, LinkedIn, Indeed, etc. You'll stand out as someone who pays attention.

alwa
0 replies
14h41m

Not to be pessimistic, but consider the applicant pool MrBruh targets here. One wonders how widely people with the sort of research skills and communication habits you describe are represented in the population applying for a fry cook position at a Checkers franchise. Or even amongst the franchisees themselves...

And for that matter, how that kind of initiative would be received by your potential future manager at the drive-thru.

I feel like I sound a little patronizing, but my broader point is it’s not other people’s job to be responsible for this kind of data security, especially in a relationship so imbalanced as that between a job seeker and the potential employer who offers only one pathway to gainful employment.

As to the remedy you propose, I’m reminded of the inimitable @patio11’s Seeing Like A Bank [0], where he points out that banks (like other firms) use techniques like the paper letter you described as subtle shibboleths to distinguish likely sophisticated customers from the rank and file.

[0] https://www.bitsaboutmoney.com/archive/seeing-like-a-bank/

Erratic6576
1 replies
4h8m

So the hacker worked for free?

iinnPP
0 replies
2h1m

Yes, the way the incentives are aligned ensures nobody ever goes to jail and the little guy pays all the bills.

946789987649
1 replies
36m

If they're already using firebase, can anyone think why they are storing passwords? Firebase Authentication is incredibly easy and quick to setup and use (less than a day for someone new to it), which means you have no need to worry about passwords.
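Agreed on delegating auth where possible. And for the case where a system does store passwords itself, the baseline is a salted, slow hash rather than plaintext; a minimal stdlib sketch (production systems usually reach for bcrypt or argon2 instead):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # PBKDF2-HMAC-SHA256; store (salt, iterations, digest) together.
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking where the digests diverge.
    return hmac.compare_digest(candidate, digest)
```

Even this much means a leaked database dump doesn't hand out working credentials directly.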

jzig
0 replies
31m

offshore workers

ysofunny
0 replies
15h43m

then again, the people in potential harm's way seem to be the poor sods trying to get hired by these companies for a meager hourly wage

I don't see how this "p0wns" the companies themselves

yieldcrv
0 replies
15h33m

does this count as authorized access under CFAA?

I’m curious what the limits are

tmaly
0 replies
15h49m

Dude should have gotten some free chicken for his efforts.

tech_ken
0 replies
1h38m

Lol. Lmao even. Great writeup

mihaaly
0 replies
7h23m

"move fast and break things" - Mark Elliot Zuckerberg

miek
0 replies
16h20m

Well done, well written, great tact. Luckily we have HN to fill the gap on the missing kudos. What an unprofessional firm (chattr)

lxe
0 replies
14m

This is my problem with the whole architecture of FE -> DB. Without a middle server layer, things like token storage, authentication, and other things become really easy to screw up.
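A toy sketch of that middle layer, with invented names: the authorization check lives in server code the client cannot edit, instead of in frontend JavaScript next to the data it guards:

```python
# Stand-in for real server-side session verification (e.g. a JWT check).
SESSIONS = {"tok-admin": {"uid": "u1", "role": "admin"},
            "tok-member": {"uid": "u2", "role": "member"}}

def verify_token(token):
    return SESSIONS.get(token)

def handle_get_admin_users(token, db):
    # The browser only ever talks to this endpoint, never the database,
    # so tampering with client-side state cannot widen access.
    user = verify_token(token)
    if user is None or user.get("role") != "admin":
        return 403, {"error": "forbidden"}
    return 200, db.get("orgs/0/users", {})
```

The same check written client-side is only a suggestion; written here, it is enforced.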

bikamonki
0 replies
8h38m

You are a good human. Seems they had not tweaked the database rules correctly, maybe even left the default setup! That means you could have executed this:

firebase.database().ref('/').set('All your data is gone')

Better yet, download the whole DB and then:

firebase.database().ref('/').set('I have all your data, pay me to get it back')

ashu1461
0 replies
15h50m

Firebase is like a half-baked product which lures in people who are just starting out. It helps build products which can quickly go to market, but once you start to scale, a lot of their products like Firestore and Firebase Auth are missing basic features

alalbertson
0 replies
16h45m

No contact or thanks, despite potentially being saved from a lawsuit.

1-6
0 replies
3h51m

We need whitehat awards, and this person should get one.