
Thanksgiving 2023 security incident

marcinzm
115 replies
21h49m

> we were (for the second time) the victim of a compromise of Okta’s systems

I'm curious if they're rethinking being on Okta.

BytesAndGears
98 replies
21h31m

My company will only give us new laptops that are preinstalled with Okta’s management system.

I am grandfathered into an old MacBook that has absolutely no management software on it, from the “Early Days” when there was no IT and we just got brand-new, untouched laptops.

They offered me an upgrade to an M1/M2 pro, but I refused, saying that I wasn’t willing to use Okta’s login system if I have my own personal passwords or keys anywhere on my work computer.

Since that would hugely disrupt my work, I can’t upgrade. Maybe I can use incidents like this to justify my beliefs to the IT department…

verve_rat
36 replies
21h17m

Why do you need personal passwords on your laptop to do your work? I'm not understanding this.

BytesAndGears
23 replies
21h7m

Fair question, but I use a lot of things that are varying degrees of helpful for my work:

* personal ChatGPT and Copilot subscriptions, since the company doesn’t pay for these

* Trello account for keeping track of my todo list (following up with people, running deploys)

* Obsidian for keeping notes, as a personal knowledge base (things like technologies and reminders)

* Apple account for music, copy/paste, sharing photos from my travels with coworkers, syncing docs related to my work visa and taxes

* Personal Slack login for communicating with my partner in our private server

* personal GitHub account credentials for syncing my private dotfiles repo with my neovim config. Basically can’t work without my dotfiles, but I could theoretically email these to myself or something, to prevent this one.

And sure, I could be stubborn and not use any of this, but I’d be way less productive and kinda miserable.

MichaelZuo
10 replies
20h58m

Why don't you just do those on a second, personal, laptop?

Does your workplace restrict you from bringing it in?

jen729w
5 replies
20h39m

I’ve been in the same situation. With two laptops you lose the ability to, say, send email directly to your task system.

It’s really easy to say ‘don’t use your personal stuff at work’, but when work is some locked-down behemoth whose view of productivity software is ‘just use Office’, and you’re really trying to be better at your job, using your own tools can be the only solution.

And in my situation, yeah, they didn’t want you bringing things in. I worked in a secure area.

pests
4 replies
20h20m

Then you need to let the employer see your lack of productivity when you are limited by the locked-down system.

Finding solutions to work around the systems, on your own time and dime, only hurts in the long run.

They think everything is fine. Nothing will ever get fixed. Voice these concerns.

jen729w
2 replies
15h20m

I’m not being snarky, but have you ever worked for a company the size of, say, HP?

The tools are the tools. There’s nothing me or my boss or theirs can do about it.

They just don’t care. But I care, because if nothing else it’s my reputation.

(HP used purely for size comparison. I’ve never worked there.)

MichaelZuo
1 replies
11h43m

Then why not leave if there's no prospects of change in the near future and if you really care?

jen729w
0 replies
7h17m

Because I was working with friends, on a project that was interesting, mentally valuable, and in the national interest.

Just because the org that hires you is a shambles doesn’t mean you give up and quit. Thank fuck we don’t all think like that.

And, again, reputation. I have a stellar reputation because I stick it out, and I care. I’ve worked with people who quit because ‘it’s shit here’. Nobody will ever work with them again.

comex
0 replies
16h33m

Hurts who? If you’ve worked around it then it doesn’t hurt you, at least not too badly. It still hurts the company as a whole. But is that your problem?

You might feel a sense of social obligation or solidarity with the company. I usually do. But if I was placed in a dehumanizing situation like that – forced to work inefficiently due to overly rigid policies that assume everyone’s needs are the same – well, whether I worked around it or not, my empathy for the company would be at a nadir whenever I thought about it.

Xeyz0r
1 replies
20h29m

Why carry a second laptop when you can log in wherever you need to on your work laptop? It's easier for me to store all my passwords in a password manager and log in to the websites I need from my work laptop.

DANmode
0 replies
18h30m

If raw ease of use dictates your tech decisions, you're eventually gonna have a bad time.

BytesAndGears
1 replies
20h55m

Convenience, I suppose… and that doesn’t solve all of the issues (eg Copilot)

I’m fine with it because I know there’s no management software on this laptop, but yeah it’s a totally different story if I had to use a newer one with SSO and management software

MichaelZuo
0 replies
11h42m

Part of the comp. package is presumably paying for you to endure some level of inconvenience at the company's request. Such as isolating work and personal things on separate systems.

At least that's how it works in the vast majority of companies.

sneak
3 replies
20h15m

> personal Trello account for keeping track of my todo list.

> personal Obsidian for keeping meeting notes, and recording conversations as a personal knowledge-base

I'm not a lawyer, but I'm pretty sure these could subject a lot of your other personal data to potential subpoena should your employer get sued by a sufficiently determined attacker.

Don't cross the streams.

marksomnian
1 replies
20h2m

Also it's a violation of Obsidian's license:

> Obsidian is free for personal and non-profit use. However, if you use Obsidian for work-related activities that generate revenue in a company with two or more people, you must purchase a commercial license for each user. Non-profit organizations are exempt from this requirement.

https://obsidian.md/license

tuckerman
0 replies
18h43m

Perhaps they pay for a commercial license?

> Q3. Can I buy a license for myself, or do I have to ask my company to buy it for me?

> Yes, you can buy a license for yourself; just put your name in the company field. You can use such a license to work for any company.

https://help.obsidian.md/Licenses+and+payment/Commercial+lic...

BytesAndGears
0 replies
20h11m

Ooh actually that’s the most compelling reason I’ve heard. I think I might actually split out those accounts with this reasoning.

_boffin_
3 replies
20h16m

Let me get this straight… you’re taking privileged company information and transferring it to personal…

I’m now understanding how people get sued when going from company to company.

BytesAndGears
2 replies
20h2m

Certainly nothing privileged! More just reminders about “follow up with person x” and that kind of thing.

foobarian
0 replies
18h53m

We are probably all little people here, and nothing like this would ever happen, but say you were high profile enough like that Google self-driving guy that Uber poached, and there was a lawsuit - anything you did on that computer would be up for grabs. All your personal projects, documentation, DMs.. it would be super messy. I’m pretty sure companies like having this kind of situation because it gives them legal ammo in the rare case where there is an action.

_boffin_
0 replies
19h45m

Let me ask you one follow up question: if you and the company had a disagreement of sorts and they examined your activity, would you believe they find nothing that they’d deem privileged?

kccqzy
1 replies
20h7m

You use so many personal accounts for work that it's unfathomable to me. If some hacker manages to hack into your accounts and finds so much valuable information about your work, your employer is going to be mightily pissed about it. Imagine you are working on an upcoming product launch and the attacker uses your personal account to leak the launch. Or imagine they just decide to leak your company's internal source code. Or imagine they simply use the technical information in your personal accounts to steal user data (even Cloudflare says they worry about this: "Our aim was to prevent the attacker from using the technical information about the operations of our network as a way to get back in").

You are making your work take on an extraordinary risk in hiring you.

BytesAndGears
0 replies
19h14m

It’s a fair concern, but there really isn’t so much there. If they compromise my trello account then they know I have some meetings coming up with people, and that I’m starting on ticket 927109 and planning to deploy ticket 901223 on Tuesday. Just referencing items in our ticket management system with very sparse details.

My notes are text files on the computer, so we’d have problems regardless if they got that. But maybe I should’ve left it out of the list above in that case… nothing else seems very damning.

But you do raise a valid concern, and it’s worth reevaluating!

hinkley
0 replies
20h30m

* Jetbrains

* Stack Overflow

* Job Search sites

I don't remember if Jetbrains needs a password to get to personal licenses, but they definitely do to use their bug database. I suspect they're not the only one.

Letting other people blow off steam can be an act of self-preservation. Insisting that people only ever do 100% work things at work or on work hardware slightly raises your low-but-never-zero chances of being murdered by coworkers. Or less ironically, hilariously intense bridge-burning activities.

Also most of this conversation is happening during work hours so I think we can infer that grandparent is being a little hypocritical.

DANmode
0 replies
18h31m

This all makes perfect sense - but just seems like your employer is too large to be effective, they're not offering you the right tools/you're not demanding them, and you're in an abusive relationship with them - probably because they pay you well enough.

deathanatos
11 replies
20h34m

The parent's view does seem a bit extreme, but there is always some overlap. Whatever HR system you have is going to be in a weird area of personal/employee overlap, as it'll need to have a password that your personal life has access to. (Tax documents, pay stubs, benefits stuff, etc. all impact the "personal" side of one's life. E.g., I need to store the year's W-2 in my personal archives.)

Also, people just do things for convenience. (Although I tend to pipe these passwords over an SSH connection, so that they're not resident on the work laptop. Though there is a good argument to be had about me permitting my work laptop SSH access to my personal laptop. From a technical standpoint, my employer could hack/compromise my personal laptop. From a legal and trust standpoint, I presume they won't.)
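For concreteness, a minimal Python sketch of the "pipe the secret over SSH" idea described above: the secret is fetched from a personal machine on demand and only ever lives in the work process's memory. The "personal-laptop" host alias and the use of the `pass` password manager are assumptions for illustration, not necessarily what the commenter runs.

```python
# Sketch: fetch a secret from a personal machine over SSH so it is never
# written to the work laptop's disk. The host alias and `pass` entry name are
# assumptions; any password manager with a CLI would work the same way.
import subprocess

def fetch_secret(entry: str) -> str:
    """Run the password manager on the remote personal machine and capture stdout."""
    result = subprocess.run(
        ["ssh", "personal-laptop", "pass", "show", entry],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    secret = fetch_secret("github.com/personal")  # hypothetical entry name
    print(f"fetched a {len(secret)}-character secret (kept in memory only)")
```

As the commenter notes, the trade-off is that the work laptop then holds an SSH key that can reach the personal machine, which is its own trust decision.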

sneak
6 replies
20h13m

It's not extreme at all, it's the bare minimum that professionals do.

Absolutely none of my personal stuff ever touches a corporate machine. Ever. I wouldn't even log in to the W2 downloading app as an employee from the work machine.

Granting work ssh keys access to your personal machine is crazy; if your work machine gets compromised, they steal your entire personal system's home directory too. Why would you unnecessarily expand the blast radius of a compromise like this?

lmm
2 replies
17h34m

What's the realistic threat model here? Someone hacks your company and during their exploitation window they're going to focus on... keylogging/MITMing random devs (likely far more paranoid/observant than the average computer user) so that they can get access to their personal machines via some artisan crafted attack to maybe make a fraudulent transfer from one person's bank account? In what world is that a low-hanging fruit to go after?

sneak
1 replies
17h17m

Devs in small companies often have a ton of access to systems and almost certainly aren’t heavily scrutinized about random novel binaries (being devs), so those are some of the first machines you’d target in an org.

You wouldn’t keylog “random devs”, you’d keylog all of the ones doing ops.

lmm
0 replies
16h59m

Would someone making a serious, targeted attack on the company focus on ops staff, and maybe go to the trouble of keylogging them? Sure. But those are precisely the attackers who wouldn't get distracted (and risk detection) going after those staff's personal machines.

cqqxo4zV46cp
2 replies
18h56m

I love these sorts of comments. Could you please just be more direct and call GP “not a professional” for not working in the way that you do? It’s so unnecessarily passive-aggressive.

bpt3
1 replies
17h58m

You are really, really, really sensitive about this. I wonder why?

GP said nothing of the sort.

Symbiote
0 replies
10h47m

GP wrote "it's the bare minimum that professionals do".

pbhjpbhj
3 replies
19h59m

> From a technical standpoint, my employer could hack/compromise my personal laptop. From a legal and trust standpoint, I presume they won't.

You trust all personnel with access to your employer's network?

What's more surprising is that they trust you to set up ad hoc SSH connections to arbitrary endpoints; unless you're the person in charge of network security?

Would anyone notice if you, or an intruder, dumped terabytes of data over that connection?

I don't work in IT but this just doesn't feel right to me.

cqqxo4zV46cp
1 replies
18h57m

Honestly it sounds like you’re sheltered due to working in a certain sort of organisation and have had no exposure to the myriad ways in which organisations tend to be run. You’re acting like this is a big surprise, but it’s not.

pbhjpbhj
0 replies
6h25m

Fair comment, probably true.

bsimpson
0 replies
18h55m

I've used a corp laptop to SCP data onto a non-corp device. Technically both devices were corporately owned, but nobody logging the packets would have known that.

yjftsjthsd-h
34 replies
21h16m

> if I have my own personal passwords or keys anywhere on my work computer.

Well... don't do that? Why would you ever have personal anything on a work computer?

Symbiote
31 replies
21h6m

A Github account, for one possible example.

_boffin_
12 replies
20h19m

Doesn’t matter. No personal stuff on company devices.

I just don’t understand any rationale for doing otherwise.

babypuncher
5 replies
19h39m

So you just don't listen to music at work?

_boffin_
3 replies
19h32m

Do you have a phone and headphones?

2devnull
2 replies
18h50m

If you use your phone at work, doesn’t that then become discoverable in the legal sense?

DANmode
1 replies
18h33m

Having a cable or radiowaves coming out of your bag or pocket is considered "use", in nonsecured areas?

dghlsakjg
0 replies
16h17m

That’s very much up to the judge to decide…

brobinson
0 replies
17h35m

There's a huge difference between using your personal Spotify account at work and using your personal Github account at work.

cqqxo4zV46cp
2 replies
19h0m

I have personal stuff on my work machine. I don’t need to say any more than that because in your eyes it’s inherently unjustifiable.

So, would you care to more explicitly tell me what you think about my intelligence or ability to behave rationally compared to you? Or is there potentially some room for nuance here?

_boffin_
1 replies
18h9m

Or, if you take it another way... I currently don't understand any rationale that has been presented that would allow me that frame of thought, but I'm always wanting to learn more.

aenis
0 replies
6h42m

I'd propose the following line of reasoning:

- people tend to use company devices for private stuff, even when it's explicitly prohibited

- draconian policing leads to employee dissatisfaction; you won't be able to fire that great engineer you spent 3 months hiring because he logged in to Spotify running within Chrome, and if you can (and do), soon you will be unable to hire top talent.

Thus, even with those policies in place, end-user devices still need to be considered untrusted. Specifically, assume they can be keylogged and remotely accessed by attackers.

Hence, (a) anything sensitive should involve transaction-level validation, not just end-user authentication, (b) for logging in and out, as well as for confirming sensitive operations, proper MFA needs to be in place (physical key + token on a mobile device, for instance), and (c) apply lightweight, reasonable restrictions to dramatically reduce the chances of device compromise (e.g., no downloading of third-party apps or binaries, but do whitelist things like Skype or Spotify, force strong passwords for devices, etc.).

This means reasonable personal use is perfectly fine, employees are happy, and you are safer than if you assumed local devices are clean.

Symbiote
2 replies
18h12m

How about a PhD student working on open-source software?

A more senior academic?

_boffin_
1 replies
17h44m

> How about a PhD student working on open-source software?

- Is the open-source software something that the company is sponsoring?

- If not, do you have permission to use company equipment for personal use?

> A more senior academic?

?

Do you do the above? If so, do you have a personal laptop? If yes, why utilize company property instead of personal, unless given permission to do so?

Symbiote
0 replies
11h13m

I'm not an academic, but I have worked with a lot of academics and I think most of them would have no concerns about accessing personal data on their work computer. I thought a university would be an example of a very 'friendly' employer.

An example university policy [1]

> 11.5 reasonable personal use of College IT resources is permitted provided such use does not disrupt the conduct of College business or other users. Recreational use of the Halls of Residence network is also permitted, subject to these conditions;

We have a similar policy where I work. I have a personal laptop, but I don't take it to work. I am signed in to my personal GMail account on my work computer, along with many other accounts — like this HN account. If work needed to look at an employee's computer, we'd have someone from IT + someone from HR overseeing the process, and wouldn't look at anything clearly private, e.g. a personal email account. Doing otherwise would be a breach of the GDPR.

[1] https://www.imperial.ac.uk/admin-services/ict/self-service/c...

yjftsjthsd-h
10 replies
19h46m

Okay, I'll bite; what about a GitHub account? You don't generally own code you write for an employer, so why would you be accessing personal repos from a company machine? (Likewise, there's generally no good reason for the company to have access to personal repos, so those security domains should never overlap.)

bsimpson
7 replies
19h1m

Even among engineers, most people don't think like a security engineer.

I'm sure there are plenty of people who have access to their company's private repos through their personal GitHub accounts.

icedchai
6 replies
18h26m

At every company I've worked for in the past 12+ years, this has been the rule, not the exception. They invite your personal GitHub account to corporate repos.

0cf8612b2e1e
4 replies
17h54m

I have read a couple of horror stories where it then becomes impossible to separate the account once you leave the employer.

No thanks. New account per job.

icedchai
2 replies
17h50m

Sounds like a misunderstanding. They just remove you from the org. (And if they don't, it's not your problem.)

0cf8612b2e1e
1 replies
17h46m

But being part of an organization, don’t they have admin control over your account? Could delete all of your repos, reset your keys, access private repos, etc.

Even if a tiny risk, it seems silly just to bolster the GH activity graph.

icedchai
0 replies
17h30m

No. They have control over your membership in their org and over which of their repos you can access, not over your repos. Note that a GitHub account can be a member of multiple orgs.

Symbiote
0 replies
11h8m

If the organization has public repositories, and the ex-employee has issues/PRs in those repositories, then I think they will continue to get notifications about followups to those issues.

Involvement with private repositories is removed as soon as the organization removes the employee, or the employee removes themselves.

I think the horror stories could only happen if the individual's account has been used for generating many API keys or similar, but there are other reasons not to rely on that sort of thing.

tekla
0 replies
16h51m

So?

Symbiote
1 replies
18h15m

My Github profile is part of my CV: it shows the projects I've worked on, and those organizations to which I have commit access. Some of those projects are likely to continue even if I change jobs.

I think this is fairly common for people who work on open source projects.

cowsandmilk
0 replies
16h23m

On the other hand, I’ve known engineers who were harassed on their GitHub accounts because they stopped working on a project when the company transferred them internally. Some people take you no longer corresponding on a GitHub issue extremely personally. Being able to abandon a “work identity” and move on is useful.

pizzalife
4 replies
20h38m

Well, why? It just seems risky. Everything you make on your work laptop / during work hours is typically owned by your employer. If your employer is paying you to contribute to OSS, don't use your personal github account. Just don't ever mix personal and company accounts on company hardware.

electroly
2 replies
20h4m

Note that if you do make a second account, at least one of them must be a paid account. A single person cannot have multiple free accounts and GitHub does not care if it's because one is for work; it's in the TOS.

computerfriend
0 replies
11h46m

I have blissfully been unaware of this. Have even linked free accounts with the account switcher. I'd say this is fairly unenforced.

2devnull
0 replies
18h52m

There is no way around that restriction either.

da768
0 replies
19h3m

The typical work contract also extends to outside work hours and personal devices.

deathanatos
1 replies
20h38m

This is why I use a separate Github account for work?

(& then just rotate the credentials on it when you part ways with the employer.)

Some of my co-workers even do a Github account per employment.

_boffin_
0 replies
19h43m

That’s me—a GitHub account per employer with employee email.

2devnull
1 replies
18h54m

HR forms require personal information and do not allow anyone to access from anything but a corporate device.

DANmode
0 replies
18h35m

Open smartphone to the relevant information, type the government ID data they're asking for into corporate machine, end.

samcat116
22 replies
21h13m

> new laptops that are preinstalled with Okta’s management system

Okta doesn't make device management software; that's made by companies like Jamf. Okta can integrate with them, but Okta isn't what manages your laptop at all.

> I wasn’t willing to use Okta’s login system if I have my own personal passwords or keys anywhere on my work computer.

Do not do this, it's not a personal device.

michaelt
20 replies
19h47m

> Do not do this, it's not a personal device.

You think nobody's logged into their personal spotify on their work computer? All those guys wearing headphones in the office have brought in CDs to play in their laptop CD drives?

And that business traveller away from their partner and kids for a week+ isn't going to video call them? Or watch some netflix in their hotel room in the evening?

That's so unrealistic, you could write IT security policy for a Fortune 100 company :)

rthomas6
5 replies
18h45m

Not until just now I didn't. Do they not have a smartphone? A personal laptop? I'm waiting for something to build as I'm typing this right now. On a separate computer. I would never go on Hacker News on my work computer.

Why would I use a device for personal things when they can MITM everything I do on it? Privacy is too important to me to give it away like that. I'm sure all traffic on the corporate network is logged. Why open myself up to grounds for termination if my company hits hard times and wants to lay people off?

michaelt
4 replies
18h1m

If you're sitting in the office waiting for something to build, and you get out your phone to go on HN I'm sorry to say that is probably not the sort of professionalism that's going to afford you much protection from layoffs.

rthomas6
2 replies
17h41m

Probably so, but at least my company can't MITM and log all my traffic.

ethbr1
1 replies
16h45m

Agreed. The presumption should be that anything on a work computer is visible to, logged, and retained by your employer.

It was a public case, but the essentially unanimous Supreme Court opinion in City of Ontario v. Quon [0, 2010] shows what expectations of privacy you should have on any work devices -- none.

[0] https://en.m.wikipedia.org/wiki/City_of_Ontario_v._Quon

Symbiote
0 replies
10h58m

Unsurprisingly, the EU has a different idea about employees' privacy when using a work computer.

Reasonable or limited private use of a work computer remains private.

https://edps.europa.eu/data-protection/data-protection/refer...

fsociety
0 replies
16h53m

This comes off as passive aggressive and misinformed. I agree with not putting personal things on work devices as much as possible.

Been in the industry for a while now, no one cares if you pull out your phone. Generally, people treat others like adults not children.

dylan604
4 replies
15h3m

> All those guys wearing headphones in the office have brought in CDs to play in their laptop CD drives?

I've worked for large media companies where this is exactly the only way to have music available. The production network was blocked from accessing the www. To ensure content wasn't pirated, the original media had to be used. No CD-Rs were allowed. Personal devices were kept in lockers outside the restricted areas, so no streaming from them either.

Email was from a remote session. If you were emailed an attachment necessary for production work, there was an approved workflow to scan the data and then make it available to the production network.

So, while you were trying to be sarcastic, there are networks set up exactly the way you thought was too outlandish to exist.

dalyons
1 replies
13h40m

In the last 10 years? Outside of govt? That sounds horrifically inefficient for 2024

dylan604
0 replies
13h27m

This is the default knee-jerk reaction, but I didn't have an issue with it. I'm not addicted to my device, so leaving it in a locker was perfectly fine with me. It was actually kind of refreshing not to walk up to a co-worker and find them doomscrolling a social platform.

Symbiote
1 replies
10h53m

Presumably the company then takes on the task of passing personal messages from outside to their staff, e.g. if a school phones to say a child is sick.

dylan604
0 replies
3h3m

You're free to check your device as necessary. The personal-device rule is for things that have cameras and storage, so a smart watch that can receive messages would be fine. We seem to think that the only way to communicate with someone is via a personal device, but in the corporate world there is always a corp phone on the employee's desk. You could provide that number to whatever contacts you wanted, so this lame concept of passing notes is just so unimaginative on your part that it feels like someone grasping at straws.

midasuni
2 replies
18h49m

I’ve had company devices for over 20 years. I’m currently on the way back to my hotel.

I refuse to carry more than one phone or one laptop, and I sure ain’t bringing a personal device into a country I wouldn’t go to on vacation.

autoexec
0 replies
17h52m

Totally agree on travel. If I'm getting on a plane for work I don't want to bring my own devices. I can't even trust that my own country won't steal/copy my devices at the border.

DANmode
0 replies
18h36m

> I refuse to carry more than one phone or one laptop

Footgun, but maybe tolerable with your chosen threat model.

xmcqdpt2
0 replies
5h45m

All your examples are blocked on my employer's network. So yeah.

prg20
0 replies
2h59m

> You think nobody's logged into their personal spotify on their work computer?

I don't think anyone thinks that. No one also thinks logging into a personal account on a device owned by someone else gives you any claim of ownership over it.

The computer belongs to the company. You will do what the company says you need to with their computer.

da768
0 replies
19h6m

Nothing a smartphone can't do.

Even companies with these policies preinstall Spotify on work computers.

DANmode
0 replies
18h38m

Hiring people who don't understand technology to build your technology: my path to the Fortune 100 List.

AeroNotix
0 replies
14h48m

I use two laptops.

0cf8612b2e1e
0 replies
18h37m

Parent didn’t say nobody used the device for personal actions, only that they refused to do so. Which is the only reasonable stance. Especially for well paid engineers who can trivially afford a dedicated device.

derivagral
0 replies
3h10m

> Do not do this, it's not a personal device.

Agreed, but I've known many devs in my career who mixed personal stuff into work hardware. Maybe it's just Spotify/Pandora, maybe some HR thing where they needed their personal Gmail to make it easier.

This included "senior" and other levels; it isn't just people right out of college.

prg20
0 replies
3h2m

IT departments do not care for the musings of self-proclaimed neckbeards. The arrogance of such users who refuse to conform to security policy is a well-known risk and usually grounds for swift disciplinary action.

ocdtrekkie
0 replies
17h34m

I wouldn't use Okta at work, but as a network administrator, I also wouldn't allow your improperly managed laptop to talk to business resources (and I'd demand that anyone overruling me sign a written statement to that effect, to exempt me from responsibility for it). Wild that you work somewhere that lets you get away with that.

giancarlostoro
0 replies
26m

At a former employer, the company wanted everyone's machine to have full hard-drive encryption; thankfully we were on Linux and the vendor they wanted to use was not compatible. Even so, I always encrypt my installs on laptops, but not so much workstations (I didn't install that one, and reinstalling would be a mess), since at that point the attacker is physically in the office building and we have more serious problems.

twisteriffic
8 replies
18h24m

This wasn't really an additional failure at Okta. This was credentials lost during the original Okta compromise that Cloudflare failed to rotate out.

Okta deserves criticism for their failure, but this feels like Cloudflare punching down to shift blame for a miss on their part.

bigbluedots
4 replies
16h24m

> They did this by using one access token and three service account credentials that had been taken, and that we failed to rotate, after the Okta compromise of October 2023

It's fair to "punch down" IMO, as that's how the credentials were originally compromised. I'd agree with you if CF were trying to minimize their own mistake, but that doesn't seem to be what is happening here.

BeefWellington
3 replies
13h46m

If a breach is disclosed and some time later your systems are compromised because you didn't bother to take appropriate action in response to that, it's not "fair" to punch down, or even reasonable to do so.

cheeze
2 replies
13h21m

Okta was painfully negligent, with CF going as far as posting "recommendations for Okta" because it was their only way to get through to them.

I don't love CF, but IMO Okta deserves to be punched down on.

longcat
0 replies
11h52m

In both situations, Okta and Cloudflare, a generic or system account was compromised. Cloudflare would have had to upload or provide a session token or secret to Okta's support system.

BeefWellington
0 replies
12h50m

Sure but for how long and in what contexts?

Is it really reasonable to come out and say your company utterly failed a pretty basic security practice when faced with a compromise but that it was really some other company's problem originally?

Of course it's not. It's still your company's failure. Own it.

sophacles
0 replies
12h47m

How is Cloudflare ($384M in revenue, Q3 2023) punching down at Okta ($584M in revenue, Q3 2023) by stating exactly what happened?

If anything, Okta is the bigger company (by revenue, by employee count), and they were founded a year earlier.

jamiesonbecker
0 replies
18h2m

Agreed, but Okta's still a $14B company.

cowsandmilk
0 replies
16h30m

This wasn’t a new compromise, but there were still two Okta compromises that impacted CloudFlare

January 2022: https://blog.cloudflare.com/cloudflare-investigation-of-the-...

October 2023: https://blog.cloudflare.com/how-cloudflare-mitigated-yet-ano...

Icathian
6 replies
21h43m

The challenge being, who else could possibly handle Cloudflare's requirements? I imagine the next step is to build their own, and that's obviously not an easy pill to swallow.

whalesalad
4 replies
21h37m

They already run their own zero trust infrastructure for customers, kinda surprised they are not dogfooding it. https://www.cloudflare.com/plans/zero-trust-services/

tomschlick
0 replies
21h33m

They are, but they don't have management for user accounts, 2FA, etc. You set up a connection to something like Okta, Google Apps, O365, or SAML to be your persistent user DB, and Cloudflare just enforces it.

I wouldn't be surprised if they are working on first party IAM user support though.

mikey_p
0 replies
21h34m

Did you read the article?

They are using zero trust and explained that it's why the scope of the security incident was extremely limited.

margalabargala
0 replies
21h20m

There are good reasons not to dogfood critical services like that; it can make recovering from unexpected issues much harder if you introduce mutual dependencies.

For example, if Slack devops team were to exclusively communicate over Slack, then a Slack outage would be much harder to resolve because the team trying to fix it would be unable to communicate.

jgrahamc
0 replies
21h34m

We use our Zero Trust stuff extensively. In fact, we built it for ourselves initially.

amluto
0 replies
21h38m

Why not? Cloudflare already operates a system that can help customers to require SSO for access to their services — why not try to capture more of that vertical by becoming an IdP?

sevg
34 replies
21h49m

> Even though we believed, and later confirmed, the attacker had limited access, we undertook a comprehensive effort to rotate every production credential (more than 5,000 individual credentials), physically segment test and staging systems, performed forensic triages on 4,893 systems, reimaged and rebooted every machine in our global network including all the systems the threat actor accessed and all Atlassian products (Jira, Confluence, and Bitbucket).

> The threat actor also attempted to access a console server in our new, and not yet in production, data center in São Paulo. All attempts to gain access were unsuccessful. To ensure these systems are 100% secure, equipment in the Brazil data center was returned to the manufacturers. The manufacturers’ forensic teams examined all of our systems to ensure that no access or persistence was gained. Nothing was found, but we replaced the hardware anyway.

They didn't have to go this far. It would have been really easy not to. But they did and I think that's worthy of kudos.
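Rotating more than 5,000 credentials quickly is only practical when every secret is inventoried in a central store with an owner and a rotate/redeploy procedure attached. Purely as an illustration of that shape (not Cloudflare's actual tooling), here is a sketch of a bulk-rotation driver; the `store` and `deployer` objects and their `list_credentials`/`rotate`/`redeploy` methods are hypothetical.

```python
# Hypothetical bulk credential-rotation driver: enumerate, rotate, redeploy,
# and record failures for follow-up. The store/deployer interfaces are
# assumptions, not a real secret-manager API.
from dataclasses import dataclass

@dataclass
class Credential:
    name: str    # e.g. "jira/service-account" (hypothetical naming scheme)
    owner: str   # team responsible for the consuming service
    rotated: bool = False

def rotate_all(store, deployer, report_path: str = "rotation_report.csv") -> list:
    failures = []
    for cred in store.list_credentials():            # assumed to return list[Credential]
        try:
            new_secret = store.rotate(cred.name)      # assumed: mint + persist a new value
            deployer.redeploy(cred.name, new_secret)  # assumed: push to consuming services
            cred.rotated = True
        except Exception as exc:                      # keep going; stragglers get reported
            failures.append((cred.name, cred.owner, str(exc)))
    with open(report_path, "w") as fh:
        fh.write("name,owner,error\n")
        for name, owner, error in failures:
            fh.write(f"{name},{owner},{error}\n")
    return failures
```

The hard part is not this loop but knowing the inventory is complete; the one access token and three service account credentials missed after the October compromise are exactly the kind of stragglers such a report is meant to surface.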

barkingcat
23 replies
21h18m

I think they did have to go that far, though.

Getting in at the "ground floor" of a new datacentre build is pretty much the ultimate exploit. Imagine getting in at the centre of a new Meet-Me room (https://en.wikipedia.org/wiki/Meet-me_room) and having persistent access to key switches there.

Cloudflare datacentres tend to be at the hub of insane amounts of data traffic. The fact that the attacker knew how valuable a "pre-production" data centre is means that Cloudflare probably realized themselves that it would be 100% game over if someone managed to get a foothold there before the regular security systems are set up. It would be a company-ending event if someone managed to install themselves inside a data centre while it was being built/brought up.

Also remember, at the beginning of data centre builds, all switches/equipment have default/blank root passwords (admin/admin), and all switch/equipment firmware is old and full of exploits (you either go into each one and update the firmware one by one, or hook them up to automation for fleet-wide patching). Imagine that this exploit took place before automation services had a chance to patch all the firmware... that's a "return all devices to make sure the manufacturer ships us something new" event.

nolok
7 replies
20h39m

> It would be a company-ending event

Given they got out of cloudbleed without any real damage let alone lasting damage, I disagree.

(I don't disagree with your point about how bad of a problem this would be, I'm just insisting that security failure is not taken seriously at all by anyone)

chx
5 replies
20h6m

Presuming taviso is not exaggerating (and why would he?), CF's reply to cloudbleed was... not quite nice.

https://twitter.com/taviso/status/1566077115992133634

> True story: After cloudbleed, cloudflare literally lobbied the FTC to investigate me and question the legality of openly discussing security research. How come they're not lobbying their DC friends to investigate the legality KF?

For those not familiar with the history, this tweet started the cloudbleed disclosure to Cloudflare:

https://twitter.com/taviso/status/832744397800214528

> Could someone from cloudflare security urgently contact me.

This followed: https://blog.cloudflare.com/incident-report-on-memory-leak-c...

eastdakota
2 replies
13h45m

This came up before and it was super confusing to me because I had no idea what it was referring to but I also believe Tavis isn’t one to make something up. So I took some time to investigate.

Turned out, no one on our management, legal, communications, or public policy team had any idea what he was talking about. Eventually I figured out that a non-executive former member of our engineering team was dating someone who worked as a fairly junior staffer at the FTC. On the employee's personal time, they mentioned to the person they were dating that they were frustrated by how the disclosure took place. I believe the employee's frustration was because we and Project Zero had agreed on a disclosure timeline and then they unilaterally shortened it because an embargo with a reporter got messed up.

There was never anything that Cloudflare or any executive raised with the FTC. And the FTC never took or even considered taking any action. The junior FTC staffer may have said something to Tavis or our employee may have said something about telling the staffer they were dating, but that was the extent of it.

I understand Tavis’s perspective, and agree it was inappropriate of the former Cloudflare employee, but this was two people not in any position of leadership at either Cloudflare or the FTC talking very much out of school.

nolok
1 replies
9h33m

> we and Project Zero had agreed on a disclosure timeline and then they unilaterally shortened it because an embargo with a reporter got messed up

This is not what happened at all. What happened is that after the initial discovery, the Project Zero team realized it was much worse than expected AND the Cloudflare team he was syncing with for the disclosure started ghosting him, and yet Project Zero still kept to the full timeline.

If you, working there and having done research, can get it this wrong while it's super easy to find the event log in the open, it doesn't give a very good vibe about the attitude inside Cloudflare regarding what happened and fair disclosure.

Full even log on project zero is here : https://bugs.chromium.org/p/project-zero/issues/detail?id=11...

> The examples we're finding are so bad, I cancelled some weekend plans to go into the office on Sunday to help build some tools to cleanup. I've informed cloudflare what I'm working on. I'm finding private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings. We're talking full https requests, client IP addresses, full responses, cookies, passwords, keys, data, everything.

Meanwhile, communication with Cloudflare went from this:

> I had a call with Cloudflare, they reassured me they're planning on complete transparency and believe they can have a customer notification ready this week.

> I'm satisfied cloudflare are committed to doing the right thing, they've explained their current plan for disclosure and their rationale.

To this:

> Update from Cloudflare, they're confident they can get their notification ready by EOD Tuesday (Today) or early Wednesday.

> Cloudflare told me that they couldn't make Tuesday due to more data they found that needs to be purged.

> They then told me Wednesday, but in a later reply started saying Thursday.

> I asked for a draft of their announcement, but they seemed evasive about it and clearly didn't want to do that. I'm really hoping they're not planning to downplay this. If the date keeps extending, they'll reach our "7-day" policy for actively exploited attacks. https://security.googleblog.com/2013/05/disclosure-timeline-...

> If an acceptable notification is not released on Thursday, we'll decide how we want to proceed.

> I had a call with cloudflare, and explained that I was baffled why they were not sharing their notification with me.

> They gave several excuses that didn't make sense, then asked to speak to me on the phone to explain. They assured me it was on the way and they just needed my PGP key. I provided it to them, then heard no further response.

> Cloudflare did finally send me a draft. It contains an excellent postmortem, but severely downplays the risk to customers. They've left it too late to negotiate on the content of the notification.

So it was not Project Zero but Cloudflare that moved the disclosure timeline around, and did so without keeping Project Zero in the loop, about an active, in-the-wild exploit.

chx
0 replies
8h46m

> If you, working there

For context: you are answering to the co-founder & CEO of Cloudflare.

hughesjj
0 replies
16h20m

Yeah, Cloudflare is pretty sketchy too, IMO. They present as transparent but they've had some actions over the years that signal otherwise. Heck, pretty much every performance blog post they hype up buries the caveats, kinda reminiscent of Intel always using their own custom C++ compiler for benchmarks. Not technically lying, but definitely omitting some context.

cowsandmilk
0 replies
16h4m

I love this quote:

> However, Server-Side Excludes are rarely used and only activated for malicious IP addresses.

So… you’re celebrating that you only had buffer overruns for malicious IP addresses?

Hrundi
0 replies
20h30m

I don't remember any companies that ended thanks to cloudbleed, but I'd be happy to be proven wrong

tgsovlerkhgsel
3 replies
20h52m

> It would be a company-ending event if someone managed to install themselves inside a data centre while it was being built/brought up.

It wouldn't. Most people like to assume the impact of breaches to be what it should be, not what it actually is.

Look at the 1-year stock chart of Okta and, without looking up the actual date, tell me when the breach happened/was disclosed.

mschuster91
2 replies
19h43m

> Look at the 1-year stock chart of Okta and, without looking up the actual date, tell me when the breach happened/was disclosed.

The problem with this is that while security-minded people know what Okta is and why staying the fuck away from handing over your crown jewels to a SaaS company is warranted, C-level execs don't care. They only care about their golf-course or backroom-deal friends and about releasing PR statements full of buzzwords like "zero trust", "AI-based monitoring", and whatever.

The stock markets don't care either, they only look at the financial data, and as long as there still are enough gullible fools signing up, they don't care and stonk goes up.

cqqxo4zV46cp
1 replies
18h51m

Yes, that’s literally the point being made. The point is that it isn’t a company-ending event. You are going on an unrelated rant about how those darn dumb executives aren’t as smart as God’s gift to earth, engineers.

mschuster91
0 replies
18h8m

The thing is, some events should be company ending. Something like Okta shouldn't even exist in a halfway competent world in the first place - given how many Fortune 500 companies, even governments use it, it's just a too fucking juicy target for nation states both friendly and hostile.

Instead, even the "self correcting" mechanisms of the "free market" obviously didn't work out, as the free market doesn't value technical merit, it only values financial bullshittery.

And the end result will be that once the war with China or Russia inevitably breaks out, virtually all major Western companies and governments will be out cold for weeks once Okta and Azure's AD go down, because that is where any adversary will hit first to deal immense damage.

meowface
3 replies
19h25m

> Getting in at the "ground floor" of a new datacentre build is pretty much the ultimate exploit.

I can just imagine the attackers licking their lips when they first breached the data center.

Good reminder to use "Full (Strict)" SSL in Cloudflare. Then even if they do get compromised, your reverse-proxied traffic still won't be readable. (Of course other things you might use Cloudflare for could be vectors, though.)

ownagefool
2 replies
19h2m

Cloudflare is essentially a massive mitm proxy. If you manage to pwn a key, you have access to traffic.

I'm sure they're better at this than me, but iPXE & TFTP are plaintext, so it wouldn't be shocking if something in the bootstrap process was plaintext.

At the very least you need to tell the server what to trust.

meowface
1 replies
18h58m

> If you manage to pwn a key, you have access to traffic.

That's why I mentioned "Full (Strict)" SSL. If you configure this in Cloudflare then the entire user <-> Cloudflare <-> origin path is encrypted and attackers can't snoop on the plaintext even if they have access. They'll get some metadata, but every ISP in the world gets that at all times anyway.
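For reference, a hedged Python sketch of checking and enforcing that setting through Cloudflare's zone-settings API; the endpoint path and the "strict" value are believed to match the public API, but verify against the current Cloudflare docs, and the `CF_API_TOKEN` environment variable name is just an assumption of this sketch.

```python
# Sketch: ensure a zone uses "Full (Strict)" SSL ("strict" in the API).
# Endpoint believed to be the standard zone settings API; confirm in the docs.
import os
import requests

API = "https://api.cloudflare.com/client/v4"
HEADERS = {"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"}  # assumed env var

def ensure_full_strict(zone_id: str) -> None:
    url = f"{API}/zones/{zone_id}/settings/ssl"
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    current = resp.json()["result"]["value"]
    if current != "strict":  # "strict" corresponds to "Full (Strict)" in the dashboard
        patch = requests.patch(url, headers=HEADERS, json={"value": "strict"})
        patch.raise_for_status()
        print(f"SSL mode changed from {current!r} to 'strict'")
```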

is39
0 replies
9h7m

While both the client and origin network connections are encrypted with "Full (Strict)" SSL mode, the Cloudflare proxy in the middle decrypts client traffic and then encrypts it towards the server (and vice versa). It does have access to the plaintext, which is how various mitigations work. So it is indeed a MITM proxy, by design.

swyx
2 replies
20h54m

do manufacturers share some of the cost of this kind of security related return or is this a straight up "pay twice for the same thing" financial hit?

eastdakota
1 replies
20h48m

We have very good relations with our network vendors (in this case, Cisco, Juniper, and Arista). The CEOs of all of them 1) immediately got on a call with me late on a weekend; 2) happily RMAed the boxes at no cost; and 3) lent us their most senior forensics engineers to help with our investigation. Hat tip to all of them for first class customer service.

swyx
0 replies
20h40m

Shows how much they value you as a partner, and I'm sure they appreciate your overall business.

Thanks Matthew! Love the transparency and dedication to security as always. It really sucks to have this be continuing fallout from Okta's breach. I wish large-scale key rotation were more easily automatable (or at least, as a fallback, there should be a way to track key age client-side, so that old keys stick out like a sore thumb). I guess in the absence of industry-standard key rotation APIs, someday you might be able to "throw AI at it".

sneak
2 replies
20h30m

> Imagine getting in at the centre of a new Meet-Me room and having persistent access to key switches there.

This wouldn't get you much. We already assume the network is insecure. This is why TLS is a thing (and mTLS for those who are serious).
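To make the mTLS point concrete, a minimal server-side sketch with Python's ssl module; the certificate and CA paths are placeholders. With CERT_REQUIRED, a passive observer on a compromised switch sees only ciphertext, and an active one can't even finish the handshake without a client certificate signed by your CA.

```python
# Minimal mTLS server sketch: require the *client* to present a certificate
# signed by our internal CA. File paths are placeholders for illustration.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # server identity
ctx.load_verify_locations(cafile="internal-ca.pem")               # CA that signs client certs
ctx.verify_mode = ssl.CERT_REQUIRED                               # the "m" in mTLS

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()  # handshake fails without a valid client cert
        print("authenticated client:", conn.getpeercert().get("subject"), "from", addr)
```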

sophacles
0 replies
19h30m

I suspect "we" is a much smaller group than you imagine. I've gotten pcaps from customers as recently as this year that include unencrypted financial transaction data. These were captured on a router, not an end host, so the traffic was going across the client's network raw.

noizejoy
0 replies
20h8m

> We already assume the network is insecure.

Maybe naively, I wish this assumption became universal.

vasco
0 replies
21h13m

What I think they meant is that customers would keep paying them. And they are right; one just has to look at Okta, SolarWinds, and other providers that have been owned, did not do half of this, and somehow are still in business. Everyone whistles to the side and pretends they shouldn't switch vendors, rotate all creds, or cycle hardware, because it saves lots of work and this stuff falls under "reasonable oopsie" to the general public, when in fact there should be rules about what to do in the event of a breach that are much stricter. So they do some partial actions to "show work" in case of lawsuits and keep going. The old engineers leave, new ones come in, and now you have systems that are potentially owned for years to come.

It takes some honesty and good values by someone in the decision-making chain to go ahead with such a comprehensive plan. This is sad because it should be table stakes, as you say correctly, but having seen many other cases, I think although they did "the expected", it's definitely above and beyond what peers have done.

tptacek
2 replies
21h4m

This is why old secops/corpsec security hands are so religious about tabletop exercises, and what's so great about BadThingsDaily† on Twitter. Being prepared to do this kind of credential rotation takes discipline and preparation and, to be frank, most teams don't make that investment, including a lot of really smart, well-resourced ones.

If Cloudflare is in a position where their security team can make a call to rotate every secret and reimage every machine, and then that happens in some reasonable amount of time, that's pretty impressive.

https://twitter.com/badthingsdaily?lang=en

akira2501
1 replies
20h41m

It'd be more impressive if they actually got all the credentials.

It's good that you think you can absorb a complicated security task, but it's useless if you have no way to test or verify the action.

swyx
0 replies
20h21m

Yes, but this is a nice #2. Not many Fortune 500s would 1) even know they were breached and 2) if they were breached, have the breach be so contained.

readyplayernull
2 replies
21h32m

> The manufacturers’ forensic teams examined all of our systems to ensure that no access or persistence was gained. Nothing was found, but we replaced the hardware anyway.

Aha, the old replace-your-trusted-hardware trick.

zitterbewegung
0 replies
20h57m

Manufacturers have had hardware security vulnerabilities to the point that the on-device firmware couldn't be trusted even after being reflashed, so they said to get new hardware; it's not a bad strategy.

AzzyHN
0 replies
19h44m

In a corporate environment, standard procedure when an employee's computer gets infected is to re-image it. Even if it was a stupid virus that was immediately caught, the potential risk of undetected malware running amok is just too high.

Now imagine, instead of Steve from HR's laptop, it's one of Cloudflare's servers.

syncsynchalt
0 replies
21h23m

Honestly I wish we'd had an excuse/reason to do an org-wide prod creds refresh like this at some places I've been.

You find some scary things when you go looking for how exactly some written-by-greybeard script is authenticating against your started-in-1990s datastore.

schainks
0 replies
20h7m

Having seen the small number of DEFCON talks that I've seen, I would have absolutely gone that far.

orenlindsey
0 replies
19h50m

Cloudflare is showing how to correctly respond to attacks. Other companies should take note.

ldoughty
0 replies
19h36m

The nuclear response to compromise should be the standard business practice. It should be exceptional to deviate from it.

If you assume that they only accessed what you can prove they accessed, you've left a hole for them to live in. It should require a quorum of people to say you DON'T need to do this.

Of course, this is the ideal world. I'm glad my group is afforded the time to implement features with no direct monetary or user benefit.

jrockway
34 replies
21h45m

Which "nation state" do we think this was?

meowface
20 replies
21h39m

For these kinds of attacks it's nearly always China, Russia, US, or sometimes Iran. 95% chance it's either China or Russia, here.

2OEH8eoCRo0
18 replies
19h48m

When has it been the US?

toyg
12 replies
19h41m

Stuxnet?

2OEH8eoCRo0
11 replies
18h50m

Stuxnet targeted the uranium enrichment facility at Natanz run by the Iranian government.

When does the US attack private enterprise?

0xy
5 replies
17h36m

The NSA spied on French private companies according to Wikileaks docs from 2015. [1]

There's many such cases. They're well known for spying on Siemens as well. With allies like the United States, who needs enemies?

[1] https://www.spiegel.de/politik/ausland/wikileaks-enthuellung...

dmix
2 replies
17h31m

And the NSA worked with Canada to penetrate a Brazilian oil company, which Snowden leaked.

There were also inferences that they penetrated Huawei.

2OEH8eoCRo0
1 replies
16h25m

Petrobras is state-owned.

I might be willing to give you Huawei if you cite a source. They're a gray area (by design) due to China's strategy of military-civil fusion.

https://en.wikipedia.org/wiki/Military-civil_fusion

vikramkr
0 replies
10h53m

Dude, you're replying to a comment that's replying to a comment with a source for what you're asking. I'm not sure why you want it to be the case that the US's cyber warfare capabilities are worse than competing nations', but Snowden et al. made it pretty clear that we're even invading the privacy of our allies and our own citizens. America is going to be fine; we're perfectly capable of hacking foreign private enterprises to protect our interests.

2OEH8eoCRo0
1 replies
16h26m

I don't know German but nothing on that translated page says anything about hacking or attacking.

0xy
0 replies
15h3m

Allow me to translate: "According to the new revelations, however, contracts for French companies have apparently been intercepted by US secret services for years"

Given hacking means unauthorized access to data, can you explain how intercepting confidential documents in an unauthorized manner could not possibly meet the definition?

Additionally, we know much more detail on Siemens, including the planting of malicious code, which absolutely meets any definition of hacking. [1]

[1] https://www.reuters.com/article/idUSBREA0P0DE/

askvictor
3 replies
18h28m

When it suits them (i.e. when there is data to be gained). But it's more often done through the courts, and when it needs to be a covert op, I'm guessing they'd get their buddies in friendly countries to do the dirty work.

2OEH8eoCRo0
2 replies
17h47m

Well how about some evidence then?

jrockway
0 replies
16h13m

I mean, did we already forget about Ed Snowden and "SSL added and removed here :-)"?

askvictor
0 replies
14h39m

I mean there's https://www.schneier.com/blog/archives/2022/06/on-the-subver... for one. And you can look for instances of warrant canaries to see where else they've used the existing legal system.

As for covert ops, well, they're covert. I don't have any evidence (hence I said "I'm guessing") but that's how I understand secretive agencies do things. If you look at all of the agencies involved in Stuxnet, you'd get the idea that allied countries' secret services tend to work together (or for each other) to some degree when it suits them.

hughesjj
0 replies
16h12m

Linus Torvalds claims the NSA reached out to him with a backdoor

Also remember the Google sniffing?

https://www.theregister.com/2013/11/07/google_engineers_slam...

AzzyHN
4 replies
19h42m

We do a lot of hacking

2OEH8eoCRo0
3 replies
18h49m

I'm sure we do. I don't agree that we attack private civilian enterprise.

vikramkr
2 replies
11h10m

If the USA does, I don't understand why you expect we'd know about it. Also, the US totally does and we do know about it: the NSA buys zero-days. It's not exactly a secret, lol.

2OEH8eoCRo0
1 replies
6h55m

The same way Cloudflare can report on foreign state hackers, other countries can discover and report on ours, no?

vikramkr
0 replies
1h51m

Right, and they probably do, and they don't know it was us, just like Cloudflare doesn't know what country it was. For all we know, that Cloudflare attack was the US. I don't know why China/Russia/Iran/NK would be able to carry out the Cloudflare attack without Cloudflare being able to pin down exactly who did it, while the US is supposedly so incompetent that we would be immediately identified and called out?

toyg
0 replies
19h39m

Their response program being called "Code Red" is likely a hint.

jedahan
6 replies
21h31m

The writeup contains indicators, including IP addresses, and the location of those addresses. In this case, the IP address associated with the threat actor is currently located in Bucharest, Romania.

tomschlick
4 replies
21h23m

No nation state is going to use IPs from their own country if they don't want to be caught. They will use multiple layers of rented VPSes, paid for with fake identities.

jrockway
3 replies
21h19m

Yeah. I've dealt with definitely-not-nation-states before, and their pattern was to sign up for free/cheap CI services (CircleCI, GitHub Actions, that sort of thing) and launch their attacks from there. The VPS thing also sounds very, very plausible to me. I figured there was a long tail, but until I started looking up every network that was attacking us, I really had no idea how deep the long tail goes. I now feel like half the world's side hustle is to rent a server that they never update and host a couple of small business websites there.

bredren
2 replies
19h43m

I now feel like half the world's side hustle is to rent a server that they never update and host a couple of small business websites there.

Do you mean people are offering build / host services for small biz, and leaving their servers in such a state they can be owned and used as jump points for intrusion?

The reason I ask is that long-hosted small business websites are sometimes established with the intent to legitimize some future unrelated traffic.

outworlder
1 replies
18h35m

Do you mean people are offering build / host services for small biz, and leaving their servers in such a state they can be owned and used as jump points for intrusion?

Probably not what's happening.

I tried to build a cloud CI service a while ago. By their nature, you _have to_ allow arbitrary commands to be run. And you also have to allow outbound connectivity. So you don't need to 'own' anything in order to be dangerous. Jobs don't run with heightened privileges, but that's of little help if the target is external.

It is pretty difficult to reliably secure them against being used as a source of attacks as there's a lot you can do that will mimic legitimate traffic. Sure, you can block connections to things like IRC and you can throttle or flag some suspicious traffic. You can't really prevent HTTPS requests from going out. Heck, even SSH is pretty much required if you are allowing access to git.

Generally speaking, a build service provider will try to harden their own services and sandbox anything that is run in order to protect themselves from being compromised. Most providers won't want to be known as a major source of malicious activity, so there's some effort there. AWS and other large providers have more resources and will easily ban your ass, but that doesn't matter if it happens after a successful attack was launched.
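
To make the "throttle or flag some suspicious traffic" point concrete, here is a minimal sketch of the kind of egress monitoring a CI provider might run on a build host. It assumes the psutil package, and the port lists are made-up examples rather than anyone's real policy; it flags established outbound connections to classic abuse ports while accepting that HTTPS and SSH have to stay open.

```python
# Minimal sketch (psutil assumed; port lists are illustrative, not a real
# provider's policy): flag outbound connections from build jobs to ports a
# build has little reason to touch, e.g. IRC, while leaving 443/22 alone.
import psutil

ALLOWED_REMOTE_PORTS = {80, 443, 22}        # what builds legitimately need
SUSPICIOUS_REMOTE_PORTS = {25, 6667, 6697}  # SMTP and classic IRC ports

for conn in psutil.net_connections(kind="tcp"):
    if not conn.raddr or conn.status != psutil.CONN_ESTABLISHED:
        continue
    port = conn.raddr.port
    if port in SUSPICIOUS_REMOTE_PORTS:
        print(f"suspicious: pid={conn.pid} -> {conn.raddr.ip}:{port}")
    elif port not in ALLOWED_REMOTE_PORTS:
        print(f"review: pid={conn.pid} -> {conn.raddr.ip}:{port}")
```

Of course, as noted above, anything tunnelled over 443 sails right past a check like this, which is exactly why it can only flag, not prevent.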

jrockway
0 replies
18h7m

That's exactly right. CI providers are good anonymizers for unsophisticated attackers because they provide an extra layer of obfuscation. But if they were doing something significantly harmful, I'd obviously be talking to those providers and asking for their own logs as part of the investigation, and then it would clearly link back to the actual culprits. So that was one popular technique to use to circumvent IP bans after abusing our service.

The whole hosting provider thing was another type of problem. I would always look at who owned the IPs that malicious sign-ups were coming from, and found a lot of ASNs owned by companies like "hosturwebsite4u.or.uk" and things like that. Those I assumed were just forgotten-about Linux boxes that the attackers used to anonymize through.

Ultimately, this was all to get a "free trial" of our cloud service, which did let you run arbitrary code. We eventually had a fairly large number of ASNs that would get a message like "contact sales for a free trial" instead of just auto-approving. That was the end of this particular brand of scammers. (They did contact sales, though! Sales was not convinced they were a legitimate customer, so didn't give them a free trial. Very fun times ;)

I should really write up the whole experience. I learned so much about crypto mining and 2020-era script-kiddie-ing in a very short period of time. My two favorite tangents were 1) I eventually wrote some automation to kill free trials that were using 100% CPU for more than 12 hours or something like that, and so they just made their miner run at 87% CPU. 2) They tried to LD_PRELOAD some code that prevented their process from showing up in the process table, but didn't realize that our tools were statically linked and that they were running in an unprivileged container, so the technique doubly didn't work. But, good old `ps` and `top` are linked against glibc, so they probably fooled a lot of people this way. They also left their code for the libc stub around, and I enjoyed reading it.
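
To make the CPU-kill automation concrete, here is a minimal sketch (psutil assumed; the threshold, window, and sampling interval are made-up knobs, not the actual tooling) of a fixed-threshold check, which is exactly the kind of rule a miner pinned at 87% slips under.

```python
# Minimal sketch (not the actual automation): flag processes whose CPU usage
# stays above a fixed threshold for a long window. A miner throttled to just
# below CPU_THRESHOLD never gets flagged, which is how the 87% trick works.
import time
import psutil

CPU_THRESHOLD = 90.0        # percent; the evasion is to idle just below this
WINDOW_SECONDS = 12 * 3600  # how long usage must persist before flagging
SAMPLE_INTERVAL = 60        # seconds between samples

high_since = {}  # pid -> timestamp when it first crossed the threshold

while True:
    now = time.time()
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            cpu = proc.cpu_percent(interval=None)  # usage since the last call
        except psutil.NoSuchProcess:
            continue
        pid = proc.info["pid"]
        if cpu >= CPU_THRESHOLD:
            high_since.setdefault(pid, now)
            if now - high_since[pid] >= WINDOW_SECONDS:
                print(f"flagging pid={pid} ({proc.info['name']}) at {cpu:.0f}% CPU")
        else:
            high_since.pop(pid, None)
    time.sleep(SAMPLE_INTERVAL)
```

A heuristic based on total CPU-seconds consumed per trial, rather than a single instantaneous threshold, is harder to sit just underneath.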

CubsFan1060
0 replies
20h49m
lijok
3 replies
20h42m

Which nation state has good enough employment protection laws that they can take weekends off while doing recon on a top value target?

toyg
0 replies
19h37m

Might be a coincidence. A certain nation-state is currently engaged in all-out war; the intruder might have been summoned to another, more urgent task.

papertokyo
0 replies
17h31m

I assume the break is to have less chance of their activities being discovered and/or connected.

icepat
0 replies
20h14m

Yes, they must have been a member of the Norwegian Foreningen Svartehattehackere. They are a very strong union.

wubbert
0 replies
19h4m

Israel.

godzillabrennus
0 replies
21h39m

China.

BytesAndGears
25 replies
21h38m

Writeups and actions like this from cloudflare are exactly why I trust them with my data and my business.

Yes, they aren’t perfect. They do some things that I disagree with.

But overall they prove themselves worthy of my trust, specifically because of the engineering mindset that the company shares, and how serious they take things like this.

Thank you for the blog post!

nimbius
21 replies
18h48m

Then, the advertisement worked.

- Insist that you have better integrity than your competitors

- Share a few operational investigations after your latest security event

What Cloudflare doesn't do is provide their SOC risk analysis as a PCI DSS payment card processor. Cloudflare doesn't explain why they ignored/failed to identify the elevated accounts or how those accounts became compromised to begin with. They just explain remediation without accountability.

They mention a third-party audit was conducted, but that's not because they care about you. It's because PCI DSS mandates that when an organization of any level experiences a data breach or cyber-attack that compromises payment card information, it needs to pass a yearly on-premises audit to ensure PCI compliance. If they didn't, major credit houses would stop processing their payments.

tptacek
7 replies
13h2m

None of this would have had anything to do with PCI (nobody gives a shit about PCI; the worst shops in the world, the proprietors of the largest breaches, have had no trouble getting PCI certified and keeping certification after their breaches). At much smaller company sizes than this, insurance requires you to retain forensics/incident response firms. There's a variety of ways they could do that cheaply. They brought in the IR firm with the best reputation in the industry (now that Google owns Mandiant), at what I'm assuming are nosebleed rates, because they want to be perceived as taking this as seriously as they seem to be.

It's a very good writeup, as these things go. Cloudflare is huge. An ancillary system of theirs got popped as a sequela to the Okta breach. They reimaged every machine in their fleet and burned all their secrets. People are going to find ways to snipe at them, because that's fun to do, but none of those people are likely to handle an incident like this better.

I am not a Cloudflare customer (technically, I am a competitor). But my estimation of them went up (I won't say by how much) after reading this writeup.

mardifoufs
5 replies
12h28m

Yeah, at best PCI is somewhat hard to get at first, but after that it's basically only good, or less shady, corporations that bother keeping up compliance or making sure they follow the guidelines at every step. Shady/troubled operators don't, and to an extent don't really have to be afraid of losing said certification unless they just go fully rogue.

tptacek
4 replies
11h56m

It's not hard to get at first, either. It's the archetypical checklist audit.

mardifoufs
2 replies
10h29m

Ah, I think I'm just not used to those then. I hated the whole checklist busywork that we had to do even though we were barely related to the sales infra. But yeah, it was a bit like SOC 2 in that regard. Is there any certification that isn't just checklist "auditing"? One that involves actual monitoring or something? Not sure if that's even possible.

yardstick
0 replies
8h57m

PCI DSS does require periodic review of lots of elements, and I believe daily log reviews (which, let's face it, no one does outside of very big firms with dedicated security teams and fancy SIEM tools).

JakeTheAndroid
0 replies
46m

PCI is the most checklist framework around. SOC 2 can be a checklist audit, depending on how much effort your internal compliance team puts into it. I've never had SOC 2 be really a checklist in the way PCI is. SOC 2 requires you to design and write your own controls and scope in or out different aspects of the business. SOC 2 does include monitoring and stuff like that.

The difference really is point in time vs period over time audits. PCI is a point in time audit, SOC 2 is a period over time audit. So for SOC 2 you do need monitoring controls, and then they test that control over the entire period (often 6-12 months). So you are monitoring the control effectiveness over a longer period of time with SOC 2. And even PCI has some period over time controls you need to demonstrate.

From the outside all compliance will seem like checkboxes to most people once controls are established. Because really the goal for most of the business is to make sure the control they interact with doesn't break, and the compliance team will likely give a list of things that the business can't afford to have broken. Which does seem like a checklist similar to PCI. But really, only PCI is straight up a checklist, as you don't really get to decide your controls.

dijit
0 replies
4h45m

PCI DSS Level 1 is difficult to get and keep.

But everyone here is missing the point of it: it's not to make sure you never get breached, it's to ensure forensics exist that cannot be tampered with.

Separation of concerns keeps any single party within the company from doing anything fraudulent, and keeps an attacker from covering their tracks.

It's not intended to be any kind of security by itself, beyond limiting the damage an employee can do. Bad code exists and PCI will do nothing to prevent this, because that's not the purpose of the compliance.

yardstick
0 replies
8h55m

nobody gives a shit about PCI; the worst shops in the world, the proprietors of the largest breaches, have had no trouble getting PCI certified and keeping certification after their breaches

IMO it's more a risk/reward trade-off. I know some companies are paying relative peanuts in non-compliance fines rather than spending money on some semblance of security, which they may still not be compliant with, and have to pay the fines anyway…

eastdakota
4 replies
18h21m

I was the one who made the call to bring in CrowdStrike. It had zero to do with PCI DSS or any other compliance obligation. It was 1) to bring in a team with deep experience with a broad set of breaches; and 2) to make sure our team didn't miss anything. The CrowdStrike team were first class, and it was good to confirm they didn't find anything significant our team hadn't already. And, for the sake of clarity, no breached system touched customer credit card or traffic information.

elitistphoenix
3 replies
15h38m

Was the self-hosted environment running an AV like the CrowdStrike agent? Or was it running a different AV, and that's why you chose CrowdStrike, as someone different?

I guess there's no need for specific names. I'm just using that as an example.

tptacek
2 replies
13h0m

What's an AV going to do about the fact that Okta got popped?

de-moray
1 replies
5h37m

Perhaps the parent commenter was referring to the section in the report stating that the IOCs indicated the attackers used the known third-party command-and-control framework named Sliver. There are multiple public YARA signatures for Sliver.

tptacek
0 replies
1h0m

Ahh, that makes sense. Thanks!

autoexec
2 replies
18h3m

Cloudflare doesn't explain why they ignored/failed to identify the elevated accounts or how those accounts became compromised to begin with. They just explain remediation without accountability.

They did, and they admitted that it was their fault. I have to give them credit for that much.

They did this by using one access token and three service account credentials that had been taken, and that we failed to rotate, after the Okta compromise of October 2023...The one service token and three accounts were not rotated because mistakenly it was believed they were unused. This was incorrect and was how the threat actor first got into our systems and gained persistence to our Atlassian products. Note that this was in no way an error on the part of AWS, Moveworks or Smartsheet. These were merely credentials which we failed to rotate.

tru3_power
1 replies
14h24m

The fact that they got their internal source/all bug reports is so bad. Literally every known and unknown vuln in their source is now up for grabs.

kristjansson
0 replies
13h58m

I mean, per TFA they didn't get all the source and all the bugs; they accessed only a few dozen Jira tickets and a bit over a hundred repos.

mynameisvlad
1 replies
15h22m

I’m sorry did we read the same write-up?

Like, I get cynicism, but they very clearly explained the lead-up to the accounts being compromised and the mistakes that caused that. They took full accountability for it. Which is frankly more than most companies dealing with security incidents manage. This entire write-up is more than most companies' obligations or responses.

michaelt
0 replies
3h19m

> This entire write-up is more than most companies' obligations or responses.

The thing is: What standard of security would you expect of someone who was decrypting 1/3rd of your internet traffic?

I would say "better than most companies" is too low a bar. Hell, I'm not sure any organisation could be secure enough to be trusted with that.

ziddoap
0 replies
14h14m

It's because PCI DSS mandates that when an organization of any level experiences a data breach or cyber-attack that compromises payment card information

No payment card information was compromised.

bimguy
0 replies
14h38m

Nimbius, you sound like you work for a Cloudflare competitor.

No competitors were mentioned in Cloudflare's article, and they explained what kind of information was breached, nothing to do with payment/card info... so I doubt you even read past the first few paragraphs/conclusion.

JakeTheAndroid
0 replies
17h26m

I am not sure where you're getting your information on requirements for PCI service providers. There isn't anything inside of PCI DSS that requires some sort of SOC report to be generated and distributed to customers. And Cloudflare does make their PCI AoC available to customers.

They clearly defined the scope of impact, and demonstrated that none of this impacts systems in scope for PCI. There was no breach to change management inside of BitBucket, and none of the edge servers processing cardholder data were impacted. They will have plenty of artifacts to demonstrate that by bringing in an external firm. So I am really not clear why you're bringing up PCI at all here. They made it clear no cardholder data was impacted so your perspective on the required "on-site" audits is moot.

Cloudflare operates two entirely different scopes for PCI; The first being as a Merchant where you the customer pays for the services. This is a very small scope of systems. The second is as a Service Provider that processes cards over the network. The network works such that it is not feasible to exfiltrate card data from the network. There are many reasons as to why this is, but they demonstrate year over year that this is not something that is reasonably possible. You can review their PCI AoC and get the details (albeit limited) to understand this better. Or you could get their SOC 2 Type 2 Report which will cover many aspects of the edge networks control environment with much better testing details. After reading that you can then come back to the blog and see that clearly no PCI scoped systems were impacted in a way that would require any on-prem audit to occur.

And they are not a card network. They are a PCI Service Provider because cards transit over their network. They are not at risk of being unable to process payments or transactions for their Merchant scope even if there are issues with their Service Provider scope. Because, again, these are two separate PCI audits that are done, testing two different sets of systems and controls.

And, as an aside, Cloudflare effectively always has on-prem PCI audits occur. Because the PCI QSA's need to physically visit Cloudflare datacenters to demonstrate not only the software side of compliance, but the datacenters deployed globally.

overstay8930
0 replies
20h19m

We're one of their larger enterprise customers, and stuff like this makes it easy to get renewals approved; keeping engineers in the loop makes it such an easy sell.

encom
0 replies
3h33m

Better hope you stay on their good side, and don't say anything their CEO doesn't approve of.

Xeyz0r
0 replies
20h42m

Nobody is perfect, but Cloudflare indeed inspires confidence, especially thanks to cases like this where they don't hesitate to talk about the issue and how they resolved it. It's precisely these descriptions of such situations that demonstrate their ability to handle challenges like this.

kccqzy
15 replies
19h52m

Analyzing the wiki pages they accessed, bug database issues, and source code repositories, it appears they were looking for information about the architecture, security, and management of our global network; no doubt with an eye on gaining a deeper foothold.

For a nation state actor, the easiest way to accomplish that is to send one of their loyal citizens to become an employee of the target company and then have the person send back "information about the architecture, security, and management" of the target company.

Fun (but possibly apocryphal) fact: more than a decade ago, at a social gathering of SREs at Google, several admitted to being on the payroll of some national intelligence bureaus.

toyg
7 replies
19h45m

Not if such citizens are sanctioned. Code Red. Hint hint.

eep_social
3 replies
17h43m

we redirected the efforts of a large part of the Cloudflare technical staff (inside and outside the security team) to work on a single project dubbed “Code Red”.

Code red is a standard term in emergency response that means smoke/fire. In general, in order to “redirect” that much effort one must do some paperwork to prove the urgency and immediacy of the threat.

The MO screams China to me, but I wouldn't read anything into the name "code red", which would have been selected before they identified the specific threat actor anyway.

eastdakota
1 replies
11h0m

The name has nothing to do with where we believe the attacker came from. We borrowed it from Google. At Google they have a procedure where, in an emergency, they can declare a Code Yellow or Code Red — depending on the severity. When it happens, it becomes the top engineering priority and whoever is leading it can pull any engineer off to work on the emergency. Those may not be the exact details of Google's system but it's the gist that we ran with. We'd had an outage of some of our services earlier in the Fall that prompted us to first borrow Google's idea. Since our logo is orange, we created "Code Orange" to mitigate the mistakes we'd made that led to that outage. Then this happened and we realized we needed something that was a higher level of emergency than Code Orange, so we created Code Red. At some point we'll write up how we thought of the rules and exit criteria around these, but I think they'll become a part of how we deal with emergencies that come up going forward.

hn_go_brrrrr
0 replies
10h13m

Yeah, that's a pretty accurate description of the color code system. There's some additional nuance to it, but a code red is an immediate existential threat to the business.

4gotunameagain
0 replies
10h44m

The MO screams China to me

How exactly?

Nothing out of the ordinary is exposed, just regular infiltration, investigation, and an attempt to move laterally.

elashri
2 replies
19h30m

I think this was probably named after the famous Code Red worm [1], not a reference to China.

[1] https://en.wikipedia.org/wiki/Code_Red_(computer_worm)

duskwuff
0 replies
14h8m

Or after the flavor of Mountain Dew which was that worm's namesake. Not all names have to make sense. :)

curiousgal
0 replies
18h53m

It's the tech scene on the Internet, everything is a reference to the CCP! /s

neilv
4 replies
12h47m

Fun (but possibly apocryphal) fact: more than a decade ago in a social gathering of SREs at Google, several admitted to being on the payroll of some national intelligence bureaus.

They had government engagements with Google's consent, and all those various engagements could be disclosed to each other?

If not, what kind of drugs were flowing at this social gathering, to cause such an orgy of bad OPSEC?

slowbdotro
3 replies
10h21m

Knowing Google employees, it's coke. Lots of coke.

neilv
2 replies
9h13m

At last, an explanation for their fratbro interviews.

aitchnyu
1 replies
8h22m

Could you explain? Never been exposed to fraternities or Google.

dijit
0 replies
4h41m

Ritualistic hazing of newcomers is a large part of American "frat" culture.

What the parent is likely referring to is that things like "wearing the noogler hat", or the hoops you jump through in interviews (that have nothing to do with the job), are similar in spirit to some university fraternities' admission processes, or are depicted as such in media.

owlstuffing
0 replies
3h46m

For a nation state actor, the easiest way to accomplish that is to send one of their loyal citizens to become an employee of the target company

Precisely. Particularly in the case of US businesses. Why bother picking a lock when you have both the key and permission?

_kb
0 replies
9h43m

Payroll? You guys are getting paid?

Australians get the 'opportunity' to be part of that sort of espionage as a base-level condition of citizenship [0].

As an upside, I guess it helps with encouraging good practices around zero trust processes and systems dev.

[0]: https://en.wikipedia.org/wiki/Mass_surveillance_in_Australia...

belltaco
8 replies
21h39m

Great write up.

Over the next day, the threat actor viewed 120 code repositories (out of a total of 11,904 repositories

They accessed 36 Jira tickets (out of a total of 2,059,357 tickets) and 202 wiki pages (out of a total of 14,099 pages).

Is it just me, or do 12K git repos and 2 million Jira tickets sound like a crazy lot? 15K wiki pages is not that high, though.

Since the Smartsheet service account had administrative access to Atlassian Jira, the threat actor was able to install the Sliver Adversary Emulation Framework, which is a widely used tool and framework that red teams and attackers use to enable “C2” (command and control), connectivity gaining persistent and stealthy access to a computer on which it is installed. Sliver was installed using the ScriptRunner for Jira plugin.

This allowed them continuous access to the Atlassian server, and they used this to attempt lateral movement. With this access the Threat Actor attempted to gain access to a non-production console server in our São Paulo, Brazil data center due to a non-enforced ACL.

Ouch. Full access to a server OS is always scary.

spenczar5
1 replies
21h30m

12k git repos can happen if the team uses GitHub Enterprise with forking internally.

It can also happen in franken-build systems which encourage decoupling by making separate repos: one repo that defines a service's API, containing just proto (for example), a second repo that contains generated client code, a third with generated server code, a fourth for implementation, a fifth which supplies integration test harnesses, etc…

Sound insane? It is! But it's also how an awful lot of stuff worked at AWS, just as an example.

sesm
0 replies
20h39m

I can relate to this; it seems that code hosting providers push their users into having more repos with their CI limitations. I've noticed that with GitHub Actions, and I assume Atlassian does the same.

Aeolun
1 replies
21h33m

Is it just me or 12K git repos and 2 million JIRA tickets sound like a crazy lot. 15K wiki pages is not that high though.

I think my org has on the order of 3 repositories per dev? They seem to have 3200 employees, with what I assume to be a slightly higher rate of devs, so you’d expect around 6-7 thousand?

2M Jira tickets is probably easily achieved if you create tickets using any automated process.

trollied
0 replies
20h53m

They might create a JIRA ticket for each customer support interaction. Would make sense.

ummonk
0 replies
21h31m

The number of repositories sounds really high. The number of tickets doesn't.

pkkim
0 replies
20h26m

At least 25% of that is a single Jira project consisting of formulaic tickets to capture routine changes in production, and a large portion of that is created by automated systems. There may be other such projects too.

Source: former Cloudflare employee

keltraine
0 replies
17h51m

Blog updated:

They accessed 36 Jira tickets (out of a total of 2,059,357 tickets) and 202 wiki pages (out of a total of 194,100 pages)

OJFord
0 replies
21h15m

They probably have thousands of devrel/app engineer type example/demo/test repos alone - it doesn't say they're active.

2M tickets - in my 4.5y at my present company we've probably averaged about 10 engineers and totalled 4.5k tickets. Cloudflare has been around longer, has many more engineers, might use it for HR, IT, etc. too, might have processes like every ticket on close opens a new one for the reporter to test, etc. It sounds like the right sort of order of magnitude to me.

Aeolun
8 replies
21h39m

Reading this 2 months after the fact feels a bit late, but I guess it’s better for your stock price if these revelations happen with remediation already in hand?

Since they didn’t really have reason to believe my data was accessed, maybe that’s ok. I know from firsthand experience how hard rotating all your credentials across the whole org is.

Cthulhu_
6 replies
21h35m

The final security report was only released yesterday, and the amount of work they did to make sure all of their systems were secure after the incident was A Lot; two months is pretty quick for a project of that scale IMO.

Aeolun
5 replies
21h31m

Yes, but if after two months they’d found out that customer data had been compromised, that would be a little late for me to do anything about it.

eastdakota
2 replies
20h55m

Had customer data been impacted we would have disclosed it immediately.

htrp
0 replies
19h41m

^ eastdakota is part of the cloudflare mgmt team (CEO)

Aeolun
0 replies
18h17m

I trust that. Something about having someone consistently show up to explain gives me a lot of faith in the company.

My concern was mostly around the situation where you believe it had not been, but it had.

nemothekid
1 replies
21h21m

What do you expect them to do? It sounds like you are complaining that they weren't able to instantly ascertain if customer data had been compromised.

Aeolun
0 replies
18h26m

No, I’m saying that if they’d found out after the fact that it had, it could have been bad for me.

What I'd expect is a note on the 24th that they'd kicked a threat actor off their Jira system, and that they had no reason to believe the rest of their systems were compromised, but that they were taking action to prevent it from happening again and starting a full investigation.

I get that the business might not want to do that if they are not certain there is any cause for alarm. Uncertainty might be even worse for some.

lavezzi
0 replies
19h19m

Since December 18, new SEC rules require companies to report “material” cybersecurity incidents on a Form 8-K within four business days of their materiality determination.

The rule does not set any specific timeline between the incident and the materiality determination, but the materiality determination should be made without 'unreasonable delay'.

sebmellen
7 replies
22h4m

The most surprising part of this is that Cloudflare uses BitBucket.

infecto
2 replies
21h34m

Maybe, but maybe not. I don't like Bitbucket, but there are a number of large companies that worry about using services owned by competitors in one of their verticals.

kccqzy
1 replies
19h55m

Bitbucket doesn't have to be a service. It can be an old-fashioned downloaded software that you install on your own machines. Not everything is SaaS.

infecto
0 replies
19h32m

Not sure what you mean? If you are alluding to the OP that said it was surprising... I don't think he found it surprising that they use Bitbucket over Mercurial. I think it's safe to assume he meant Bitbucket over GitHub.

In the git universe there is a pretty short list of services, self-hosted or hosted, that you would probably use as an entity as large as Cloudflare.

toyg
0 replies
19h42m

Integrates with Jira and the rest of Atlassian's stuff, and it's just another git server at the end of the day.

gempir
0 replies
8h26m

A lot of very big companies use Bitbucket; it's just a lot more cost-effective than GitLab/GitHub.

aitchnyu
0 replies
8h16m

I wonder how powerful ScriptRunner for Jira is. They have the security certifications, but I can't tell how sandboxed it is.

Cthulhu_
0 replies
21h36m

How so? It integrates well with the other Atlassian products they use.

fierro
7 replies
20h16m

The one service token and three accounts were not rotated because mistakenly it was believed they were unused.

This is odd to me - unused credentials should probably be deleted, not rotated.

pbhjpbhj
5 replies
19h51m

This smells weird, surely? I'd be looking at who chose not to rotate those particular credentials.

1: "what are these accounts?"

2: "oh they're unused, they don't even appear in the logs"

1: "we should rotate them"

2: "no, let's keep those rando accounts with the old credentials, the ones we think might be compromised ... y' know, for reasons"

?

pphysch
4 replies
17h48m

More likely: "no one has any idea what these old credentials do, so let's not touch them and potentially break everything"

sodality2
2 replies
15h17m

Sounds like the perfect time to revoke the credentials and find out what uses them, so we can find why they weren't registered as credentials in use. Personally I'd rather do that, have a team ready, and break production for x minutes in order to properly register auth keys.

I'd definitely consider a "silent" credential - a credential not registered centrally - to be a huge red flag. Either it could get stolen, or break and no one knows how to regenerate it. And it's pretty easy as devs to quickly generate an auth key that ends up being used permanently, without any documentation.
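
As a minimal sketch of what "find the silent credentials" can look like in practice (assuming the tokens are AWS IAM access keys and boto3 is available; Cloudflare's actual inventory tooling isn't public, and the 90-day cutoff is arbitrary): list every key and flag the ones with no recent last-used date, so they get revoked deliberately instead of lingering with old credentials.

```python
# Minimal sketch (boto3/IAM assumed; cutoff is arbitrary): surface access keys
# that look unused so they can be revoked on purpose rather than forgotten.
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
stale_after = timedelta(days=90)
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            used = last["AccessKeyLastUsed"].get("LastUsedDate")
            if used is None or now - used > stale_after:
                print(f"revocation candidate: {user['UserName']} "
                      f"{key['AccessKeyId']} (last used: {used})")
```

The same idea applies to any credential store that records a last-used timestamp; the point is that "believed unused" becomes a query you can run rather than a guess.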

pphysch
1 replies
1h23m

Personally I'd rather do that, have a team ready, and break production for x minutes in order to properly register auth keys.

Sure, but you aren't going to do all that when your team is juggling N other priorities. At least, it will be very difficult getting mgmt and others on board. Unless it's explicitly in the context of a recent breach.

sodality2
0 replies
1h19m

Very true. Ideally the culture would be that we’re experiencing some pain now to avoid more later, so we should do it - I’d hope management was on the same page. Real world, unfortunately, often differs.

fierro
0 replies
10h28m

this is more plausible to me

mparnisari
0 replies
13h41m

Agreed. This whole post reads as "I'm the victim", but they don't admit to the one mistake that snowballed.

OJFord
6 replies
21h24m

The one service token and three accounts were not rotated because mistakenly it was believed they were unused.

Eh? So why weren't they revoked entirely? I'm sure something's just unsaid there, or lost in communication or something, but as written that doesn't really make sense to me?

phyzome
1 replies
19h33m

Betting they have a new item in their compromise runbook. :-)

OJFord
0 replies
18h20m

No, I don't think so. I do think something's just difficult to say because of what they can't say, or they just neglected to say it / didn't word it well, or something. I.e., a bug in the writing, not the post mortem itself.

Because if you take it exactly as it's written, it's just too weird. I'm not a security expert with something to teach Cloudflare about "err, maybe don't leave secrets lying around that aren't actually needed for anything"; that's not news to many people, and they surely have many actual security people, for whom that would not even be a fizzbuzz interview question, reviewing any kind of secret storage or revocation policy/procedure. And also the mentioned third-party audit.

htrp
1 replies
19h42m

blameless post mortem most likely

Great call out too

Note that this was in no way an error on the part of AWS, Moveworks or Smartsheet. These were merely credentials which we failed to rotate.

OJFord
0 replies
18h28m

It can still be blameless though? The 'because' makes it sound like that's a correct reason to leave it; that the only error was thinking they were unused. (i.e. that it's fine to leave them if unused, only a problem if they're used)

i.e. instead of 'because they were mistakenly thought to be unused' you can say 'because they were mistakenly thought to be ok to leave as unused' (or something less awkward depending on exactly what the scenario was) and there's no more blame there? And if you really want to emphasise blamelessness you can say how your processes and training failed to sufficiently encourage least privilege, etc.

stepupmakeup
0 replies
20h49m

Rotating could have been manual and the person in charge wanted to save time. Stress could be a factor too.

crdrost
0 replies
18h24m

I would assume that "believed" is not meant to be interpreted in an active personal sense but in a passive configuration sense.

That is, I'd expect there was a flag in a database somewhere saying that those service accounts were "abandoned" or "cleaned up" or some other non-active status, but that this assertion was incorrect. Then they probably rotated all the passwords for active accounts, but skipped the inactive ones.

Speaking purely about PKI and certificate revocation, because that's the only similar context that I really know about, there is generally a difference between allowing certificates to expire, vs allowing them to be marked as "no longer used", vs fully revoking them: a certificate authority needs to do absolutely nothing in the first case, can choose to either do nothing or revoke in the second case, and must actively maintain and broadcast that revocation list for the third case. When someone says "hey I accidentally clobbered that private key can I please have a new cert for this new key," you generally don't add the old cert to the revocation list because why would you.
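
To make the difference concrete, here is a minimal sketch of the third case using Python's `cryptography` package (the CA key file, CA name, and serial number are placeholders): revoking means building, signing, and re-publishing a CRL that relying parties then have to fetch, which is exactly the ongoing work the first two cases avoid.

```python
# Minimal sketch (placeholders throughout): active revocation means the CA
# signs and publishes a fresh CRL; letting a cert expire requires none of this.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509.oid import NameOID

with open("ca_key.pem", "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=None)

now = datetime.datetime.utcnow()
revoked_cert = (
    x509.RevokedCertificateBuilder()
    .serial_number(0x1234ABCD)                 # serial of the cert being revoked
    .revocation_date(now)
    .build()
)
crl = (
    x509.CertificateRevocationListBuilder()
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example CA")]))
    .last_update(now)
    .next_update(now + datetime.timedelta(days=7))  # clients expect a fresh CRL
    .add_revoked_certificate(revoked_cert)
    .sign(private_key=ca_key, algorithm=hashes.SHA256())
)
with open("crl.pem", "wb") as f:
    f.write(crl.public_bytes(serialization.Encoding.PEM))
```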

wepple
5 replies
18h48m

They mention Zero Trust, yet you can gain access to applications with just a single bearer token?

Am I missing something here?

There’s no machine cert used? AuthN tokens aren’t cryptographically bound?

This doesn't meet my definition of ZT; it seems more like "we don't have a VPN".

asmor
2 replies
18h40m

These were service accounts used by third parties to provide Jira integrations, not user accounts.

Bluecobra
1 replies
13h26m

If they are using Active Directory, wouldn’t a service account be no different than a regular employee account? Both a Jira service account and the CEO of Cloudflare are still Domain Users in AD. Granted, a service account should be way more locked down and have the least amount of access possible.

asmor
0 replies
10h51m
prg20
0 replies
3h25m

You're not. The article makes no sense. They claim robust security controls but apparently lacked a proper accounting of service accounts with external access, especially ones with admin access to freakin' Jira.

Bluecobra
0 replies
13h31m

Yeah, it seems odd to me that their internal wiki, code repo, and Jira are exposed directly to the internet and arbitrary IPs can connect to them. Atlassian has had a rash of vulnerabilities recently; who knows how many undiscovered ones still exist.

If they had a VPN in place secured with machine certs, that would be yet another layer for an attacker to defeat.

mmaunder
5 replies
18h28m

The thing about a data breach is that once the data is out there - source code in this case - it's out there for good and you have absolutely no control over who gets it. You can do as much post-incident hardening as you want, and talk about it as much as you want, but the thing you're trying to protect against, and blogging about how good you're getting at preventing, has already happened. Can't unscramble those eggs.

burnished
2 replies
18h22m

What's your point?

mmaunder
1 replies
18h11m

That this is messaged and received as a net win. It’s not.

malwrar
0 replies
17h59m

Are they just supposed to be invincible? The next best thing is an incident response with this level of quality and transparency. That's definitely a win in my book; I want to know the provider of a core part of my infra is able to competently and maturely respond to a security incident, and this post strongly communicates that.

arp242
0 replies
18h12m

The source code next year is not the same as source code this year.

The customer data next year is not the same as the customer data this year.

BandButcher
0 replies
15h19m

Agreed, to me this is a big deal for CF, especially coupled with Confluence documentation, which most likely includes future plans and designs, org charts, meeting minutes... You could also find other easter eggs in any legacy code; almost all companies have undocumented backdoors.

Obviously a customer data breach would be worse, but this is really no bueno.

lopkeny12ko
2 replies
11h30m

The manufacturers’ forensic teams examined all of our systems to ensure that no access or persistence was gained. Nothing was found, but we replaced the hardware anyway.

This seems incredibly wasteful.

Replacing an entire datacenter is effectively tossing tens of millions of dollars of compute hardware.

rjzzleep
0 replies
11h23m

It is, but for most of these components there is no other choice, since there is no way to guarantee that nothing was changed. lvrick would say that's why you want to attest everything.

Anyway, I really hope that the hardware isn't just tossed into the recycling, but provided to schools and other places that could put it to good use.

perlgeek
0 replies
11h22m

The sentence before...

To ensure these systems are 100% secure, equipment in the Brazil data center was returned to the manufacturers.

It doesn't say all equipment, and that would have been very helpful. But if it's just two or three access devices sitting on the border, it's not so bad.

Also, the manufacturer likely just sold the hardware to a different customer; it sounds like it was pretty new and unused anyway. Just flash the firmware and you're good.

londons_explore
2 replies
21h17m

Am I the only one who just sees a totally blank page?

Viewing the HTML shows it's got an empty body tag, and a single script in the <head> with a URL of https://static.cloudflareinsights.com/beacon.min.js/v84a3a40...

overstay8930
0 replies
20h14m

Happened to me on my iPhone too

chankstein38
0 replies
20h56m

No, that's also what I see. I'm not sure why you're getting downvoted.

EDIT: re-opened the link a few minutes later and now I see the post

jshier
1 replies
22h5m

Fascinating and thorough analysis! I guess if you think an account is unused, just delete it!

phyzome
0 replies
19h30m

Probably safer to rotate the credentials and then schedule it for deletion later. Then if you discover it wasn't unused after all, you have an easier recovery... :-)

wowmuchhack
0 replies
17h12m

Such a beautiful report and beautiful ownage.

Whenever some shitty Australian telco gets owned, people are angry and call them incompetent idiots; it's nice to see Cloudflare get owned in style, with much more class and expertise.

Like the rest of the HN crowd, this incident has only increased my trust in Cloudflare.

this_steve_j
0 replies
8h39m

This is an excellent report, and congratulations are due to the security teams at CS for a quick detection, response and investigation.

It also highlights the need for a faster move across the entire industry away from long-lived service account credentials (access tokens) and toward federated workload identity systems like OpenID Connect in the software supply chain.

These tokens too often provide elevated privileges in devops tools while bypassing MFA, and in many cases are rotated yearly. GitHub [1], GitLab, and AZDO now support OIDC, so update your service connections now!

Note: I'm not familiar with this incident and don't know whether that is precisely what happened here or if OIDC would have prevented the attack.

Devsecops and Zero Trust are often-abused buzzwords, but the principles are mature and can significantly reduce blast radius.

[1] https://docs.github.com/en/actions/deployment/security-harde...
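
As a rough illustration of what the relying-party side of that looks like (the JWKS URL, audience, repo, and branch below are assumptions based on GitHub's documented OIDC claims, and PyJWT is assumed; this is not taken from the incident): the service verifies a short-lived, signed workflow token and its claims instead of trusting a long-lived stored secret.

```python
# Minimal sketch (PyJWT assumed; URL, audience, repo, and branch are
# placeholders): accept a short-lived OIDC token bound to a specific workflow
# instead of a long-lived service-account credential.
import jwt
from jwt import PyJWKClient

ISSUER = "https://token.actions.githubusercontent.com"
JWKS_URL = f"{ISSUER}/.well-known/jwks"

def verify_workflow_token(token: str, expected_audience: str) -> dict:
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=expected_audience,
        issuer=ISSUER,
    )
    # Bind access to "this repo, this branch" rather than "whoever holds a key".
    if claims.get("repository") != "my-org/my-repo":   # placeholder repo
        raise PermissionError("token not issued for the expected repository")
    if claims.get("ref") != "refs/heads/main":         # placeholder branch
        raise PermissionError("token not issued for the expected branch")
    return claims
```

Even if such a token leaks, it expires quickly and is scoped to one workflow run, which is a very different failure mode from a service token that sits unrotated for months.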

prg20
0 replies
3h33m

Then, from November 27, we redirected the efforts of a large part of the Cloudflare technical staff (inside and outside the security team) to work on a single project dubbed “Code Red”.

Why didn't they start this effort BEFORE there was an incident?

we undertook a comprehensive effort to rotate every production credential (more than 5,000 individual credentials

Bearer credentials should already be rotated on a regular basis. Why did they wait until an incident to do this?

To ensure these systems are 100% secure

Nothing is 100% secure. Not being able to see and acknowledge that is a huge red flag.

Nothing was found, but we replaced the hardware anyway.

Well that is just plain stupid and wasteful.

We also looked for software packages that hadn’t been updated

Why weren't you looking for that prior to the incident?

we were (for the second time) the victim of a compromise of Okta’s systems which resulted in a threat actor gaining access to a set of credentials.

And yet they continue using Okta. The jokes just write themselves.

The one service token and three accounts were not rotated because mistakenly it was believed they were unused.

Wait, wait, wait. You KNEW the accounts with remote access to your systems were UNUSED and yet they stayed active? Hahahahaha.

The wiki searches and pages accessed suggest the threat actor was very interested in all aspects of access to our systems: password resets, remote access, configuration, our use of Salt, but they did not target customer data or customer configurations.

Totally makes sense. I'm sure the attacker was just a connoisseur of credentials and definitely did not want to target customer data.

orenlindsey
0 replies
19h48m

Cloudflare being compromised would be enormous. Something between 5 and 25% of all sites use CF in some fashion. An attacker could literally hold the internet hostage.

muzso
0 replies
19h54m

The threat actor searched the wiki for things like remote access, secret, client-secret, openconnect, cloudflared, and token. They accessed 36 Jira tickets (out of a total of 2,059,357 tickets) and 202 wiki pages (out of a total of 14,099 pages).

In Atlassian's Confluence, even the built-in Apache Lucene search engine can leak sensitive information, and this kind of access (the attacker viewing the info) can be very hard to track or identify. They don't have to open a Confluence page if the sensitive information is already shown on the search results page.
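
As a minimal illustration of hunting for that kind of exposure proactively (the paths and regexes below are illustrative, not a real secret scanner): grep a Confluence space export for token-shaped strings, since anything that matches can also surface in a search-results snippet without the page ever being opened.

```python
# Minimal sketch (paths and patterns are illustrative): scan an exported
# Confluence space for secret-looking strings before the search index does.
import pathlib
import re

PATTERNS = {
    "bearer token": re.compile(r"(?i)authorization:\s*bearer\s+\S+"),
    "secret assignment": re.compile(r"(?i)(client[_-]?secret|api[_-]?key|token)\s*[:=]\s*\S{12,}"),
}

for path in pathlib.Path("confluence-export").rglob("*.html"):
    text = path.read_text(errors="ignore")
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            print(f"{path}: possible {label}: {match.group(0)[:60]}")
```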

londons_explore
0 replies
20h59m

So after the Okta incident they rotated the leaked credentials...

But I think they should have put honeypots on them, and then waited to see what attackers did. Honeypots discourage the attackers from continuing for fear of being discovered too.
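
A minimal sketch of what that could look like (standard library only; the token value, port, and alerting are placeholders, and a real deployment would page someone rather than just log): keep the leaked bearer token "working" against a tripwire endpoint so any later use gets noticed instead of silently rejected.

```python
# Minimal honeypot sketch (placeholders throughout): a decoy endpoint that
# accepts the old, leaked token and raises an alert whenever it is presented.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

LEAKED_TOKEN = "Bearer example-leaked-token"  # the credential you rotated out

logging.basicConfig(level=logging.INFO)

class Tripwire(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Authorization") == LEAKED_TOKEN:
            # In a real setup this would page the on-call, not just log.
            logging.warning("honeypot token used from %s for %s",
                            self.client_address[0], self.path)
        self.send_response(403)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Tripwire).serve_forever()
```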

joshbetz
0 replies
15h56m

What makes them think it's a nation state?

jbverschoor
0 replies
18h14m

It's almost Valentine's Day.

j-rom
0 replies
15h46m

To ensure these systems are 100% secure, equipment in the Brazil data center was returned to the manufacturers. The manufacturers’ forensic teams examined all of our systems to ensure that no access or persistence was gained. Nothing was found, but we replaced the hardware anyway.

The thoroughness is pretty amazing

htrp
0 replies
19h45m

They did this by using one access token and three service account credentials that had been taken, and that we failed to rotate, after the Okta compromise of October 2023. All threat actor access and connections were terminated on November 24 and CrowdStrike has confirmed that the last evidence of threat activity was on November 24 at 10:44.

Okta hitting everywhere

culopatin
0 replies
16h10m

I'm growing more and more annoyed at Cloudflare and their stupid "are you a human" crap.