Given the level of impact that this incident caused, I am surprised that the remediations did not go deeper. They ensured that the same problem could not happen again in the same way, but that's all. So some equivalent glitch somewhere down the road could lead to a similar result (or worse; not all customers might have the same "robust and resilient architectural approach to managing risk of outage or failure").
Examples of things they could have done to systematically guard against inappropriate service termination / deletion in the future:
1. When terminating a service, temporarily place it in a state where the service is unavailable but all data is retained and can be restored at the push of a button. Discard the data only after a few days. This gives the customer an opportunity to report the problem. (A sketch follows this list.)
2. Audit all deletion workflows for all services (they only mention having reviewed GCVE). Ensure that customers are notified in advance whenever any service is terminated, even if "the deletion was triggered as a result of a parameter being left blank by Google operators using the internal tool".
3. Add manual review for any termination of a service that is in active use, above a certain size.
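To make item 1 concrete, here is a minimal sketch of what a grace-period termination state could look like; the state names and the seven-day retention window are my own assumptions, not anything from Google's writeup:

    from datetime import datetime, timedelta, timezone
    from enum import Enum

    RETENTION = timedelta(days=7)  # assumed grace period before data is discarded

    class ServiceState(Enum):
        ACTIVE = "active"
        PENDING_DELETE = "pending_delete"  # unavailable, but fully restorable
        PURGED = "purged"

    class Service:
        def __init__(self, name: str):
            self.name = name
            self.state = ServiceState.ACTIVE
            self.delete_requested_at: datetime | None = None

        def terminate(self) -> None:
            # Step 1: make the service unavailable, keep all data.
            self.state = ServiceState.PENDING_DELETE
            self.delete_requested_at = datetime.now(timezone.utc)

        def restore(self) -> None:
            # "Push of a button": undo the termination within the grace period.
            if self.state != ServiceState.PENDING_DELETE:
                raise RuntimeError("nothing to restore")
            self.state = ServiceState.ACTIVE
            self.delete_requested_at = None

        def purge_if_expired(self) -> None:
            # Step 2: only a periodic job may discard data, and only after RETENTION.
            if (self.state == ServiceState.PENDING_DELETE
                    and datetime.now(timezone.utc) - self.delete_requested_at > RETENTION):
                self.state = ServiceState.PURGED  # actual data deletion happens here

The point isn't the exact shape, just that "terminate" and "discard data" are different operations with time between them.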
Absent these broader measures, I don't find this postmortem to be in the slightest bit reassuring. Given the are-you-f*ing-kidding-me nature of the incident, I would have expected any sensible provider who takes the slightest pride in their service, or even is merely interested in protecting their reputation, to visibly go over the top in ensuring nothing like this could happen again. Instead, they've done the bare minimum. That says something bad about the culture at Google Cloud.
This is so obviously "enterprise software 101" that it is telling Google is operating in 2024 without it.
Ever since my new-hire grad days, the idea of immediately deleting data that is no longer needed has been out of the question.
Soft deletes in databases, with a column you mark as deleted. Moving/renaming data on disk until you're super duper sure it can actually go (and maybe still letting the backup remain). Etc.
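As a toy illustration of the mark-a-column approach (table and column names are just placeholders):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE customers (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            deleted_at TEXT  -- NULL means the row is live
        )
    """)
    conn.execute("INSERT INTO customers (name) VALUES ('acme')")

    # "Delete" = mark, don't remove.
    conn.execute(
        "UPDATE customers SET deleted_at = datetime('now') WHERE name = ?",
        ("acme",),
    )

    # Normal reads filter on the flag; the data is still there if you need it back.
    live = conn.execute(
        "SELECT * FROM customers WHERE deleted_at IS NULL"
    ).fetchall()

    # The real hard delete can happen much later, once you're super duper sure.
    conn.execute(
        "DELETE FROM customers WHERE deleted_at < datetime('now', '-90 days')"
    )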
There are many voices in the industry arguing against soft deletes. Mostly coming from a very Chesterton's Fence perspective.
For some examples https://www.metabase.com/learn/analytics/data-model-mistakes...
https://www.cultured.systems/2024/04/24/Soft-delete/
https://brandur.org/soft-deletion
Many more can easily be found.
For the use case we're discussing here, of terminating an entire service, the soft delete would typically be needed only at some high level, such as on the access list for the service. The impact on performance, etc. should be minimal.
Precisely: before you delete a customer account, you disable its access to the system. This is a scream test.
Once some time has passed and you've done your due diligence, you can contemplate actually deleting the customer data and account.
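A minimal sketch of that scream test, with invented names; the essential point is only that revoking access and deleting data are separate steps with a waiting period in between:

    from datetime import datetime, timedelta, timezone

    SCREAM_WINDOW = timedelta(days=30)  # assumed waiting period

    def start_offboarding(account: dict) -> None:
        # Phase 1: cut access only. All data stays exactly where it is.
        account["access_enabled"] = False
        account["offboarding_started"] = datetime.now(timezone.utc)

    def can_hard_delete(account: dict) -> bool:
        # Phase 2 is allowed only after the window passes with nobody screaming.
        started = account.get("offboarding_started")
        if started is None or account.get("access_enabled", True):
            return False
        return datetime.now(timezone.utc) - started > SCREAM_WINDOW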
I think the reason someone wouldn't want to do this is that it costs Google money to keep it active at any level.
OK, but those examples you gave all boil down to the following:
1. you might accidentally access soft-deleted data and/or the data model is more complicated
2. data protection
3. you'll never need it
to which I say
1. you'll make all kinds of mistakes if you don't understand the data model, and it's really not that hard to tuck those details away inside data access code/SPs/etc. that the rest of your app doesn't need to care about (see the sketch below)
2. you can still delete the data later on, and indeed that may be preferable as deleting under load can cause performance (e.g. locking) issues
3. at least one of those links says they never used it, then gives an example of when soft-deleted data was used to help recover an account (albeit by creating a new record as a copy, but only because they'd never tried an undelete before and were worried about breaking something; sensible, but not exactly making the point they wanted to make)
So I'm gonna say I don't get it; sure it's not a panacea, yes there are alternatives, but in my opinion neither is it an anti-pattern. It's just one of dozens of trade-offs made when designing a system.
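To make point (1) above concrete, "tucking it away" can be as little as one data-access class that applies the filter, so callers never see the flag. A sketch, not any particular framework's API:

    import sqlite3

    class CustomerRepo:
        """All soft-delete bookkeeping lives here; callers never see deleted_at."""

        def __init__(self, conn: sqlite3.Connection):
            self.conn = conn

        def find_by_name(self, name: str):
            # The soft-delete filter is applied in exactly one place.
            return self.conn.execute(
                "SELECT id, name FROM customers "
                "WHERE name = ? AND deleted_at IS NULL",
                (name,),
            ).fetchone()

        def delete(self, customer_id: int) -> None:
            self.conn.execute(
                "UPDATE customers SET deleted_at = datetime('now') WHERE id = ?",
                (customer_id,),
            )

        def undelete(self, customer_id: int) -> None:
            self.conn.execute(
                "UPDATE customers SET deleted_at = NULL WHERE id = ?",
                (customer_id,),
            )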
My impression of GCP generally is that they've got some very smart people working on some very impressive advanced features and all the standard boring stuff nobody wants to do is done to the absolute bare minimum required to check the spec sheet. For all its bizarre modern enterprise-ness, I don't think Google ever really grew out of its early academic lab habits.
I know a bunch of way-too-smart PhD types who worked at GOOG exclusively in R&D roles that, they earnestly bragged to me, were not revenue generating.
GDPR compliance precludes such an approach.
It sounds like the problem is that the deletion was configured with an internal tool that bypassed all those kinds of protections -- that went straight to the actual delete. Including warnings to the customer, etc.
Which is bizarre. Even internal tools used by reps shouldn't be performing hard deletes.
And then I'd also love to know how the heck a default value to expire in a year ever made it past code review. I think that's the biggest howler of all. How did one person ever think there should be a default like that, and how did someone else see it and say yeah that sounds good?
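For what it's worth, the boring guard against that class of howler is a required parameter with no default, so a blank value fails loudly instead of silently scheduling a deletion. A sketch with invented parameter names:

    from datetime import timedelta

    class ValidationError(Exception):
        pass

    def parse_retention(raw: str | None) -> timedelta:
        # No default: a blank field is an error, never "delete in one year".
        if raw is None or raw.strip() == "":
            raise ValidationError("retention period is required; refusing to guess")
        try:
            days = int(raw)
        except ValueError:
            raise ValidationError("retention period must be a whole number of days")
        if days <= 0:
            raise ValidationError("retention period must be positive")
        return timedelta(days=days)

Whether it's a CLI flag, a form field, or an RPC parameter, the principle is the same: missing input is a refusal, not a deletion date.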
It's a joke that they're not doing these things. How can you be a giant cloud provider and not think of putting safeguards around data deletion? I guess that realistically they thought of it many times but never implemented it because it costs money.
It's probably because implementing such safeguards wouldn't help anyone's promo packet.
I really dislike that most of our major cloud infrastructure is provided by big tech rather than, e.g., infrastructure vendors. I trust Equinix a lot more than Google because that's all they do.
I work in GCP and have seen a lot of OKRs about improving reliability. So implementing something like this would help someone's promo packet.
It is funny Google has internal memegen but not ideagen. Ideate away your problems, guys.
This is exactly the kind of work that would get SREs promoted.
Understandable, however public clouds are a huge mix of both hardware and software, and it takes deep proficiency at both to pull it off. Equinix are definitely in the hardware and routing business; it may be tough for them to move further up the stack.
Hardware always gets commoditized to the max (sad but true).
As a customer of Equinix Cloud... No thank you. Infrastructure vendors are terrible software engineers.
Replacing actual deletion with deletion flags may lead to other fun bugs like "Google Cloud fails to delete customer data, running afoul of EU rules". I suspect Google would err on the side of accidental deletions rather than accidental non-deletions, at least in the EU.
A deletion flag is acceptable under EU rules. For example, flags are an accepted way of dealing with deletion requests for data that also exists in backups, provided that the restore process also honors such flags.
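In other words, the restore path has to re-check the tombstones before resurrecting anything. A sketch with invented record shapes:

    def restore_from_backup(backup_rows: list[dict], tombstones: set[int]) -> list[dict]:
        """Re-apply deletion requests that arrived after the backup was taken.

        `tombstones` is the set of record ids for which erasure was requested;
        restoring a backup must not resurrect them.
        """
        return [row for row in backup_rows if row["id"] not in tombstones]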
I highly doubt this was the reason. Google has similar deletion protection for other resources; e.g., GCP projects are soft-deleted for 30 days before being nuked.
I certainly hope not, because that would be incredibly stupid. Customers understand the significance of different kinds of risk. This story got an incredible amount of attention among the community of people who choose between different cloud services. A story about how Google had failed to delete data on time would not have gotten nearly as much attention.
But let us suppose for a moment that Google has no concern for their reputation, only for their legal liability. Under EU privacy rules, there might be some liability for failing to delete data on schedule -- although I strongly suspect that the kind of "this was an unavoidable one-off mistake" justifications that we see in this article would convince a court to reduce that liability.
But what liability would they face for the deletion? This was a hedge fund managing billions of dollars. Fortunately, they had off-site backups to restore their data. If they hadn't, and it had been impossible to restore the data, how much liability could Google have faced?
Surely, even the lawyers in charge of minimizing liability would agree: it is better to fail by keeping customers' accounts than to fail by deleting them.
Not really how it works. GDPR protects individuals and allows them to request deletion from the data owner, who then needs to respond to any request within 60(?) days. Google has nothing to do with that beyond having to make sure their infra is secure. There are even provisions for dealing with personal data in backups.
EU law has nothing to do with this.
Hard agree. They were clearly more interested in making clear that there's not a systemic problem in how GCP's operators manage the platform, which reads strongly and alarmingly as though there is a systemic problem in how GCP's operators manage the platform. The absence of the common-sense measures you outline from their postmortem just tells me that they aren't doing anything to fix it.
“There’s no systemic problem.”
Meanwhile, the operators were allowed to leave a parameter blank and the default was to set a deletion time bomb.
Not systemic my butt! That’s a process failure, and every process failure like this is a systemic problem because the system shouldn’t allow a stupid error like this.
If you're arguing that that was the systemic problem, then it's been fully fixed, as the manual operation was removed and so validation can no longer be bypassed.
I think you glossed over the importance of the term process failure.
The idea is that this one particular form missing the appropriate care is indicative of a wider lack of discipline amongst the engineers building it.
Definitionally, you cannot solve a process problem by fixing a specific bug.
"we removed the system that can enable a process failure" fixes the process failure. I didn't misunderstand anything.
I’m completely baffled by Google’s “postmortem” myself. Not only is it obviously insufficient to anyone who has operated online services, as you point out, but the conclusions are full of hubris. I.e., this was a one-time incident, it won’t happen again, we’re very sorry, but we’re awesome and continue to be awesome. This doesn’t seem to help Google Cloud’s face-in-palm moment.
It looks like they could read the SRE book by Google. BTW available for free at https://sre.google/sre-book/table-of-contents/
A bit chaotic (a mix of short essays) and simplistic (assuming one kind of approach or design), but definitely still worth a read. No exaggeration to state it was category defining.
Can you imagine if there was no backup? Would Google be on the hook to cover the +/- 200 billion in losses?
This is why the smart people at Berkshire Hathaway don't offer Cyber Insurance: https://youtu.be/INztpkzUaDw?t=5418
I’d be very surprised if there wasn’t legalese in the contract/ToS about liability limitations etc. Would maybe expect it to be more than infrastructure costs for a big company custom contract, but probably not unlimited/as high as that, because it seems like such a blatant legal risk…
Disclaimer: Am Googler who knows nothing real about this. This is rampant speculation on my part.
I wouldn’t be surprised if VMware support is getting deprecated in GCP so they just don’t care - waiting for all customers to move off of it
My point is that if they had this problem in their VMware support, they might have a similar problem in one of their other services. But they didn't check (or at least they didn't claim credit for having checked, which likely means they didn't check).
That sounds reasonable. Perhaps they felt that a larger change to process would be riskier overall.
No it would probably be even worse from Google’s perspective: more expensive.
I would add one more -
4. Add an option to auto-backup all the data from the account to an outside backup service of the user's choice (see the sketch after this comment).
This would help not just with these kinds of accidents, but also with any kind of data corruption/availability issue.
I would pay for this even for my personal gmail account.
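Something like a scheduled export to a destination the customer (not the provider) controls. A hand-wavy sketch; every name here is invented, and a real version would use the provider's actual export APIs:

    from datetime import datetime, timezone

    def nightly_export(read_all_objects, write_to_external, manifest_path="manifest.txt"):
        """Copy every object to a destination the customer (not the provider) owns.

        `read_all_objects` yields (name, bytes) pairs from the primary account;
        `write_to_external` stores them with the customer's chosen backup service.
        Both are injected so the destination can be any third party.
        """
        names = []
        for name, blob in read_all_objects():
            write_to_external(name, blob)
            names.append(name)
        # A manifest makes it easy to verify the export actually completed.
        stamp = datetime.now(timezone.utc).isoformat()
        write_to_external(manifest_path, ("\n".join([stamp] + names)).encode())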
most of this complaint is explicitly answered in the article. must have been TL...
FWIW, you're solving the bug by fiat, and that doesn't work. Surely analogs to all those protections are already in place. But a firm and obvious requirement of a software system that is capable of deleting data is the ability to delete data. And if it can do that, you can write a bug that short-circuits any architectural protection you put in place. Which is the definition of a bug.
Basically I don't see this as helpful. This is just a form of the "I would never have written this bug" postmortem response. And yeah, you would. We all would. And do.
Could it have been a VMware expiration setting somewhere, and thus VMware itself deleted the customer’s tenant? If so, then Google wouldn’t have a way to prove it won’t happen again, except by always setting the expiration flag to “never” instead of leaving it blank.