Removing data transfer fees when moving off Google Cloud

jdon
32 replies
23h23m

Looks like it's a response to the recent cloud services market investigation by the CMA [1].

Which highlighted "Egress fees harm competition by creating barriers to switching and multi-cloud leading to cloud service providers entrenching their position" [2].

It's also interesting that they are calling out problems with software licensing, as that is another thing the CMA is investigating in their cloud market review.

[1] https://www.gov.uk/cma-cases/cloud-services-market-investiga...

[2] https://assets.publishing.service.gov.uk/media/652e958b69726...

blibble
26 replies
22h53m

I read through a couple of these responses to the CMA by MS, Google and AWS and their smaller competitors

as expected the hyperscalers refuse to acknowledge that the free ingress and expensive egress is a lock-in mechanism, and their smaller competitors complain bitterly about this

the hyperscalers say they have to charge egress fees to pay for the costs of building their networks, but for some reason that doesn't apply to ingress (which they're silent on)

if they want to play this game then the CMA should simply make them charge the same for ingress and egress

that way they can "fund their network costs" without issue, and if they want to make them both free then that's their decision

amluto
11 replies
22h0m

the hyperscalers say they have to charge egress fees to pay for the costs of building their networks, but for some reason that doesn't apply to ingress (which they're silent on)

This doesn’t pass the red face test IMO. The hyperscaler networks are indeed very expensive, but that’s because they need to provide non-blocking or near non-blocking performance within the availability zone, and the clouds don’t charge for this service.

The Internet egress part ought to be straightforward on top of this: plug as much bandwidth worth of connections into the aforementioned extremely fancy internal network. Configure routes accordingly.

It’s worth noting that the big clouds will sell you private links to your own facilities, and they charge for these links (which makes sense), but then they charge you for the traffic you send from inside the cloud to these links, which is absurd since they don’t charge for comparable traffic from inside the cloud to other systems in the same AZ.

jeffbee
10 replies
21h48m

they need to provide non-blocking or near non-blocking performance within the availability zone

I see you've never tried GCP.

amluto
9 replies
21h45m

I have, but not for a use case where this matters.

FWIW, Google has been working on these fancy nonblocking networks for a very long time. They’re very proud of them. Maybe they don’t actually use them for GCP, but Google definitely cares about network performance for their own purposes.

jeffbee
8 replies
21h33m

The whole concept of blocking is inapplicable to packet-switched networks. The whole time I was there I never heard anyone describe any of their several different types of networks as non-blocking. Indeed, the fact that they are centrally-controlled SDNs, where the control plane can tell any participant to stop talking to any other participant, seems to be logically the opposite of "non-blocking", if that circuit-switching term were applicable.

Your message seems to imply that these datacenter networks experience very little loss, and this is observably far from reality. In GCP you will observe levels of frame drops that a corporate datacenter architect would consider catastrophic.

dekhn
6 replies
21h12m

Blocking is a common concept in packet switched networks; for example, a packet switch with a full crossbar can be called "non-blocking". A switch is either going to queue or discard packets, and at the rates we're discussing, there is not enough buffer space, so typically if a switch gets overloaded it's going to drop low priority packets. Obviously many things have changed, there are ethernet pause frames and admission control and SDN management of routes, but we still very much use the term "blocking" in packet switched networks.

What google decided long ago is that for their traffic patterns, it makes the most sense to adopt clos-like topologies (with packet switching in most cases), and not attempt to make a fully non-blocking single crossbar switch (it's too expensive for the port counts). More hops, but no blocking.

Scaling that got very difficult and so now many systems at Google use physical mirrors to establish circuit-like paths for packet-like flows.

GCP is effectively an application that runs on top of Google's infrastructure (I believe you already worked there and are likely to know how it works) that adds all sorts of extra details to the networking stack. For some time the network ran as a user-space Java application that had no end of performance problems.
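
As a rough back-of-the-envelope illustration of the crossbar vs. Clos point above (my own sketch with made-up switch sizes, not anyone's real design): with k-port switches, a single crossbar tops out at k hosts, while a two-tier non-blocking leaf-spine fabric reaches k^2/2 hosts at full line rate.

    # Sizing sketch: non-blocking 2-tier leaf-spine (folded Clos) vs a single
    # crossbar, both built from k-port switches. Illustrative only.

    def crossbar_hosts(k: int) -> int:
        # A single crossbar can only connect as many hosts as it has ports.
        return k

    def leaf_spine_hosts(k: int) -> int:
        # Non-blocking leaf-spine: each leaf gives k/2 ports to hosts and k/2
        # uplinks (one per spine); each k-port spine then supports k leaves.
        return k * (k // 2)   # = k^2 / 2

    for k in (32, 64, 128):
        print(f"{k}-port switches: crossbar={crossbar_hosts(k)}, "
              f"leaf-spine={leaf_spine_hosts(k)} hosts at full line rate")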

jeffbee
5 replies
20h55m

The whole term smacks of Bellhead thinking. With ethernet you put a frame on the wire and hope.

amluto
3 replies
19h3m

The term “non-blocking” may well originate with circuit switching, but Ethernet switches have referred to the behavior of supporting full line rate between any combination of inputs and outputs as “non-blocking” for a long time. (I wouldn’t call myself an expert on switching, but I learned IOS before there was a thing called iOS, and I think this usage predates me by a decent amount.)

With ethernet you put a frame on the wire and hope.

This is not really true. With Ethernet, applications and network stacks (the usual kind — see below) ought to do their best to control congestion, and, subject to congestion control, they put frames on the wire and hope. But network operators read specs and choose and configure hardware to achieve a given level of performance, and they expect their hardware to perform as specified.

But increasingly you can get guaranteed performance on Ethernet even outside a fully non-blocking context or even performance exceeding merely “non-blocking”. You are fairly likely to have been on an airplane with controls over Ethernet. Well, at least something with a strong resemblance to Ethernet:

https://en.m.wikipedia.org/wiki/Avionics_Full-Duplex_Switche...

There are increasing efforts to operate safety critical industrial systems over Ethernet. I recall seeing a system to allow electrical stations to reliably open relays controlled over Ethernet. Those frames are not at all fire-and-hope — unless there is an actual failure, they arrive, and the networks are carefully arranged so that they will still arrive even if any single piece of hardware along the way fails completely.

Here’s a rather less safety critical example of better-than-transmit-and-hope performance over genuine Ethernet:

https://en.m.wikipedia.org/wiki/Audio_Video_Bridging

(Although I find AVB rather bizarre. Unless you need extremely tight latency control, Dirac seems just fine, and Dirac doesn’t need any of the fancy switch features that AVB wants. Audio has both low bandwidth and quite loose latency requirements compared to the speed of modern networks.)

jeffbee
2 replies
17h50m

With Ethernet, applications and network stacks (the usual kind — see below) ought to do their best to control congestion

Exactly. Network endpoints infer things about how to behave optimally. Then they put their frame on the wire and hope. The things that make it possible to use those networks at high load ratios are in the smart endpoints: pacing, entropy, flow rate control, etc. It has nothing at all to do with the network itself. The network is not gold plated, it's very basic.

amluto
1 replies
17h4m

high load ratios

A ratio has a numerator and a denominator. I can run some fancy software and get (data rate / nominal network bandwidth) to be some value. But the denominator is a function of the network. I can get a few tens of 10Gbps links all connected together with a non-gold-plated nonblocking switch that’s fairly inexpensive (and even cheaper in the secondary market!), and each node can get, say, 50% of 10Gbps out as long as no node gets more than, say, 50% of 10Gbps in. That’s the load ratio.

Or I can build a non-gold-plated network where each rack is like this and each rack has a 100Gbps uplink to a rather more expensive big switch for the whole pile of racks, and it works until I run out of ports on that big switch, as long as each rack doesn’t try to exceed the load ratio times 100Gbps in aggregate. Maybe this is okay for some use case, but maybe not. Netflix would certainly not be happy with this for their content streaming nodes.

But this all starts to break down a bit with really large networks, because no one makes switches with enough ports. So you can build a gold plated network that actually gets each node its full 10Gbps, or you can choose not to. Regardless, this isn’t cheap at AWS scale. (But it may well be cheap at AWS scale, amortized per node.)
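
To put illustrative numbers on that rack example (my own figures, not any provider's real design): a rack of 10Gbps nodes behind a single 100Gbps uplink is oversubscribed in proportion to the node count, and that oversubscription is exactly the cap on the sustainable cross-rack load ratio.

    # Oversubscription math for the rack-with-uplink design described above.
    # All figures are made up for illustration.

    def max_load_ratio(nodes_per_rack: int, node_gbps: float, uplink_gbps: float) -> float:
        # Fraction of each node's link usable for traffic leaving the rack
        # before the shared uplink saturates.
        rack_demand = nodes_per_rack * node_gbps
        return min(1.0, uplink_gbps / rack_demand)

    # 40 nodes x 10 Gbps behind one 100 Gbps uplink -> 4:1 oversubscribed,
    # so each node sustains at most 25% of its 10 Gbps across racks.
    print(f"{max_load_ratio(40, 10, 100):.0%}")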

And my point is that egress does not add meaningful cost here. A 10Gbps egress link costs exactly as much, in internal network terms, as a compute or object storage node with a 10Gbps link. For an egress node, it costs whatever 10Gbps of egress transit or peering costs plus the amortized internal network cost, and a compute node costs whatever the cost of the node, space, power, cooling, maintenance etc costs, plus that internal network link.

So I think there is no legitimate justification for charging drastically more for egress than for internal bandwidth, especially if the cost is blamed on the network cost within the cloud. And the costs of actual egress in a non-hyperscaler facility aren’t particularly secret, and they are vastly less than what the major clouds charge.

jeffbee
0 replies
16h34m

I think one thing that's different about GCP is that Google itself is such an overwhelming tenant. If you read their SDN papers, their long-distance networks are operated near 100% nameplate capacity. For Google, I don't think squeezing customer flows is free.

dekhn
0 replies
20h50m

That's an exceptional simplification of modern network approaches.

if the world was bellhead, ATM would have won.

danpalmer
0 replies
12h8m

I think what the parent comment is implying is that at Google a lot of work has been put into making network calls feel as close to local function calls as possible, to the point that it’s possible to write synchronous code around those calls. There are a ton of caveats to this, but Google is probably one of the best in the world at doing this. There is some truly crazy stuff going on to make RPC latency so low.

charcircuit
7 replies
22h42m

but for some reason that doesn't apply to ingress (which they're silent on)

The industry standard for peering is paying the 95th percentile of egress or ingress depending on whichever is greater. Ingress is free for these clouds because egress > ingress overall.
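
For anyone unfamiliar with that billing model, here's a minimal sketch (my own toy numbers; real contracts differ in the details): sample traffic in each direction every 5 minutes for the month, throw away the top 5% of samples per direction, and bill on whichever direction's remaining peak is higher.

    # Toy 95th-percentile ("burstable") billing calculation.
    import math

    def p95_mbps(samples_mbps: list[float]) -> float:
        # Discard the highest 5% of samples and take the next one down.
        ordered = sorted(samples_mbps)
        return ordered[math.ceil(0.95 * len(ordered)) - 1]

    def monthly_charge(ingress_mbps: list[float], egress_mbps: list[float],
                       usd_per_mbps: float) -> float:
        # Bill whichever direction has the higher 95th percentile.
        return max(p95_mbps(ingress_mbps), p95_mbps(egress_mbps)) * usd_per_mbps

    # Egress peaks dominate, so egress sets the bill and ingress rides for free.
    ingress = [100.0] * 8000 + [300.0] * 640   # ~30 days of 5-minute samples
    egress = [400.0] * 8000 + [900.0] * 640
    print(monthly_charge(ingress, egress, usd_per_mbps=0.30))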

blibble
3 replies
22h17m

I accept there's some level of cost, but the prices are so high it's hard to describe it as anything other than gouging to prevent competition

toast0
2 replies
21h53m

My personal feeling is they're moving costs around so that egress has a big margin and other items have a smaller or potentially negative margin.

I've seen this at other providers. We did a competitive pricing exercise at my last company, and our overall cost went down, but the mechanism was that per-host costs went down significantly and egress costs went up significantly, and the per-host cost decrease outweighed the egress cost increase.

It still doesn't make sense to charge for ingress, because everybody knows that should be free, unless you're a residential ISP.

lokar
0 replies
18h57m

Many companies in many industries do this. It’s often simply not practical or sensible to price every SKU “fairly” on some value or cost basis. Your margin varies from item to item (including negative) and the whole bundle works out.

ikiris
0 replies
11h29m

y'all vastly underestimate how much it costs to run a CDN as large as that at scale.

bandwidth from cogent at whatever colo you can cross connect to them is wildly cheaper than "i need bandwidth to everywhere" bandwidth.

logifail
2 replies
21h51m

The industry standard

Pfft...

for peering is paying the 95th percentile of egress or ingress depending on whichever is greater. Ingress is free for these clouds because egress > ingress overall

How about customers pay for actual usage, rather than some [fake] averaged-across-all-customers usage?

scottlamb
1 replies
19h47m

Why would the cloud provider charge for usage that doesn't actually cost them money? Unless usage patterns drastically change industry-wide, the ingress really doesn't matter to them. The egress does.

It seems entirely reasonable to look more skeptically at cloud providers' exact charges vs cost for egress, particularly when high egress fees might contribute to lock-in, and when the public price sheet vs the preferred customer pricing might differ radically. But asking them to totally restructure the charges, inventing a charge for ingress when their actual total ingress cost is zero and, short of major industry-wide usage pattern changes, will remain zero? Why would you do that?

makomk
0 replies
17h55m

Yeah, it's long been common for more traditional hosting providers to limit egress traffic and charge overage fees for going over that but have no similar limit on incoming traffic for basically the same reason: their network-wide traffic patterns mean that egress is what costs them money and ingress is effectively free on top of that.

mschuster91
2 replies
20h17m

the hyperscalers say they have to charge egress fees to pay for the costs of building their networks, but for some reason that doesn't apply to ingress (which they're silent on)

because ingress traffic volume is a fraction (in my experience, in website hosting of a well known household brand, barely 1%!) of egress traffic volume, and most peering connections are 1:1 in ingress/egress bandwidth so the egress bandwidth cost sets the price.

NBJack
1 replies
19h33m

This also holds true on the flip side for most consumer internet speeds (at least in the US).

Many people have x00 Mbps or even x Gbps downstream, but most have no more than x0 Mbps upstream. Their ability to pull traffic from websites is, in some cases, 50x their ability to push information out. Going beyond that (greater uploads) often costs significantly more.

Whether or not these two are actually related isn't clear to me, but it is interesting.

womod
0 replies
15h10m

That's due to the DOCSIS standard for cable modems. They specced out more channels for downstream than upstream because of the limited bandwidth of copper and consumer priorities. With fiber there's an order of magnitude more bandwidth available, so the uneven split is much less (if at all?) common with the big backhaul lines between datacenters. For consumer fiber you'll usually get symmetric but for the most part it doesn't make sense as the vast majority of consumers just don't make use of their upstream bandwidth.

TulliusCicero
2 replies
20h38m

"Egress can be no more expensive than ingress" seems like a reasonable rule.

qwertox
1 replies
19h48m

But not during normal operation when egress to consumers is in higher demand than ingress. This would raise ingress prices unnecessarily.

close04
0 replies
9h46m

As far as the lock-in topic is concerned, the expensive egress is only relevant when you want to leave your provider, not during daily operations with your consumers. Free ingress but prohibitively expensive egress is literally only there to make it as easy as possible for customers to come to the cloud provider and as hard as possible to leave. Coming to a provider or leaving them should cost the same, or else it's just a trap.

This is like Comcast giving you the initial installation for free but charging you an arm, a leg, and a year of your life to allow you to leave because "the process is costly". Both practices are scummy and abusive towards the customer.

jiveturkey
2 replies
23h0m

I don't see it that way at all. Those egregious egress charges still apply while you are actively using GCP, encouraging you to put everything in GCP, and eschew multicloud.

This seems to me more of a "try us out for free" play. Bring your big data here, if you end up not liking it, we won't penalize you for taking your data out. Given that GCP is running at a very distant 3rd, they need to make plays like this.

oh_sigh
1 replies
20h58m

It could also be a method to put pressure on AWS to get rid of their egress fees.

If it doesn't work - GCP looks a little better compared to AWS

If it does work, AWS users will have an easier time extricating themselves from the platform, and possibly going to GCP.

donalhunt
0 replies
19h38m

This. I see this as purely an attempt to influence AWS in lowering the barriers of migrating between clouds. But also makes sense for those who want the option to test the waters with no downside.

profsummergig
1 replies
18h44m

It's also smart Game Theory.

If they remove the fees, then competition might be pressured to do so (as a marketing response). Thus making it easier for people to switch to Google.

willsmith72
0 replies
18h22m

and google has way less to lose than amazon

loosescrews
8 replies
23h43m

Certain legacy providers leverage their on-premises software monopolies to create cloud monopolies, using restrictive licensing practices that lock in customers and warp competition.

I like to see them publicly call out Microsoft and Oracle.

SteveNuts
6 replies
23h27m

I think the "Legacy" and "licensing" portions are specifically calling out MS and Oracle, but they very sneakily are calling AWS out too on the fact that egress makes it insanely expensive to leave their platform.

I've worked at some orgs where to either move their data out of S3 would cost $20k, or to even delete it would cost thousands in API calls.

solatic
2 replies
20h51m

to even delete it would cost thousands in API calls

raises eyebrow

It's a well-known trick (proposed by AWS Support as well) to set S3 lifecycle rules to empty buckets with too many objects to cycle through via List calls. Doesn't cost anything.
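
For reference, a minimal boto3 sketch of that lifecycle trick (the bucket name and expiry window are placeholders); per the thread, lifecycle expiration deletes don't incur per-request charges:

    # Empty a huge S3 bucket via a lifecycle expiration rule instead of paging
    # through List/Delete API calls. Bucket name is a placeholder.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-huge-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-everything",
                    "Filter": {"Prefix": ""},   # match every object in the bucket
                    "Status": "Enabled",
                    "Expiration": {"Days": 1},  # delete current versions after a day
                    # Versioned buckets would also want NoncurrentVersionExpiration
                    # and AbortIncompleteMultipartUpload rules.
                }
            ]
        },
    )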

SteveNuts
1 replies
20h29m

There's still a cost for transitions even if it's not list calls. That might be cheaper; I haven't looked in a while.

QuinnyPig
0 replies
17h32m

Deletes are free in S3, whether done via API or lifecycle.

zinekeller
1 replies
22h47m

You don't need the "I think" for Microsoft: while not mentioned directly, the links on "unfair legacy software" point to Microsoft.

pyuser583
0 replies
16h9m

Oracle is what popped into my mind.

dspillett
0 replies
20h55m

> but they very sneakily are calling AWS out too

That, and any of the smaller “clouds” with egress/delete fees that need to be considered when leaving. Seems massively disingenuous given that until this announcement they also had such fees (“look, those people try to rip you off just like we were doing until five minutes ago!”) but that is pretty standard for marketing materials.

This makes them a better option as the first cloud provider to try, other things being equal, because leaving (back to on-prem or to another provider) is easier. I assume they are trying to remove a little of the huge distance between them and the two leading players by reducing concerns that might add on-boarding friction.

blagie
0 replies
17h11m

I don't. This is very much the pot calling the kettle black. Of the cloud providers, Google has the least pleasant business tactics. I would gladly do business with AWS, or with Azure, but never with Google.

Google should have made the same announcement without the snarky mean-spirited bitterness.

andrewstuart
6 replies
18h55m

The cynical might wonder if this is a precursor to Google closing down its cloud.

They can't really shut Google Cloud down and still be charging people to exit.

And if they had suddenly seen the light on egress fees then surely they would have cut egress fees everywhere..... the fact that it's only on account closure is kinda suspicious.

mgfist
1 replies
15h12m

Shutting down a $30B+ business would be such a ridiculous, yet Google-esque, decision. It would also be a huge shame for consumers - the public cloud is already an oligopoly.

indrora
0 replies
8h14m

There's always Oracle (/s)

joeldo
1 replies
15h12m

Closing down within a year of it finally becoming profitable... That would be a plot twist!

justinclift
0 replies
11h9m

But also not out of the question for Google, who don't seem at all rational about what they kill vs let live.

xeonmc
0 replies
17h48m

another Google product bites the dust

Sebb767
0 replies
15h37m

They can't really shut Google Cloud down and still be charging people to exit.

I'm not quite sure about this. Obviously they don't do it now, but I wouldn't have put it beyond them.

seatac76
5 replies
23h39m

Getting ahead of antitrust I see. Big tech has gotten so big that we will now see this peace meal stuff being done to appease regulators to stave off any major action.

kaonwarb
2 replies
23h31m

Google Cloud has ~10% market share; I don't think this is an antitrust avoidance play. More likely, it's removing a concern companies might have with bringing workloads onto a relatively smaller player. Especially one that has a history of discontinuing products.

aaomidi
0 replies
22h25m

The issue isn't as much market share as it is how sticky picking a cloud platform is. At that point the market share doesn't really matter, especially if your name is Google.

SteveNuts
0 replies
23h24m

Especially one that has a history of discontinuing products.

Let's just hope this isn't step 1 for their plan to do exactly that.

trhway
0 replies
23h32m

don't think so. Google is a distant 3rd in the cloud, so hardly a subject for antitrust. It's more likely that, having failed hard on the goals set for the Cloud division about 3 years ago (it was rumored that they either achieve those goals or it's "or else" for the Cloud division), they are starting the blame game, whining about licensing, unfair competition, etc., whereas they have only themselves to blame. They would never bend over for the enterprise customer the way, say, AWS would, who for example developed an MS SQL (Transact SQL) interface to their own db - that is how you deal with the competition and software lock-in, instead of whining (it reminds me of that phrase from Babel: "that is why Benja is the king, while we are sitting on the cemetery fence"). I remember, for example, how AWS was hunting down laid-off Sybase (where T-SQL comes from) engineers, whereas at Google being an experienced enterprise software developer is viewed more like a disadvantage and results in a meager offer (several of my acquaintances had similar experiences), so GCP losing the enterprise game doesn't look that surprising to me. And now that enterprise customers are starting to add huge AI-related workloads, GCP is hardly ever mentioned.

pimlottc
0 replies
22h5m

peace meal

*piecemeal

gnfargbl
5 replies
22h45m

So on the one hand Google now accepts that egress fees are outrageous. Great! On the other hand, they're only reducing (removing) the fees when you leave them.

If this move were really about acting in customers' best interests, they would reduce the fees for everyone. Doing this only for departing customers feels performative.

Patrick_Devine
2 replies
20h58m

Google is part of Cloudflare's Bandwidth Alliance [1] which is removing most egress fees. Google still charges for egress, but it's half of what AWS is charging. We moved everything off of S3 anyway and have been using Cloudflare's R2 storage along with Google Compute Engine instances. R2 can be a little flaky, but the cost savings for us more than makes up for it.

[1] https://www.cloudflare.com/bandwidth-alliance/

timenova
1 replies
20h13m

Can you give a few more details on the flaky nature of R2?

Does it lose files? Fails to write but gives an error? Fails to read sometimes? Silently fails?

whitepoplar
0 replies
13h10m

I'd like to know this as well!

vineyardmike
1 replies
21h25m

Unless they’re planning to kill GCP entirely. Then everyone is a departing customer /s

But seriously, it seems like a fair-ish compromise. The 60 day limit is tough though. As long as you’re continuing your usage and not using offsite backups, ingress/egress probably isn’t too problematic. It’s only problematic when you try to suddenly egress all data you’ve ever stored, which you probably wouldn’t do unless you’re migrating away.

gnfargbl
0 replies
21h16m

ingress/egress isn’t probably too problematic

It's problematic to me. I run a system partly on GCP and partly on another provider. I'd like to move some of the stuff that's on the other provider over to GCP VMs, but I am prevented from doing so solely by the GCP egress fees.

My general complaint is that high egress fees prevent users from developing hybrid cloud solutions which mix components from different cloud vendors. Instead, you are forced to choose a platform, and then you're locked into it. That seems textbook anti-competitive. We shouldn't be grateful for the opportunity to switch vendors, we should be angry that "choosing a single cloud vendor" is a thing at all.

ShakataGaNai
4 replies
23h46m

That's a cool thing for them to do, an interesting business choice, but cool. It certainly helps the companies feel a little better about vendor lock-in, which is terribly plentiful in the cloud.

The TLDR is that when you tell them you want to cancel, you have 60 days to do so and during that time you'll have no egress fees. Makes sense.

Biggest problem though is... if you have a substantial amount of data or need to do a complex and seamless transition - this probably won't work for you. I would hate to be on the DevOps team that's told to move a complicated and data-heavy application when they only have 60 days to do so. Also the bulk of the data movement is, in my experience, one of the first steps of migration - not the last.

My hope is that, if nothing else, this will spur similar behaviors in other large cloud providers ::cough::aws::cough::.

kentonv
2 replies
22h57m

Sounds like they're trying to technically address EU regulators' concerns without providing any real value to customers.

In real life it's probably extremely unusual for any company to altogether cancel their Google Cloud contract. More likely is the scenario where you move the bulk of your cloud usage to a new provider, but still have various straggler infrastructure on the old one, which is not worth the effort to clean up. Or, you go to a multi-cloud strategy so you want to move half your data off Google but keep the other half around. Google's egress fees are still standing in the way of these cases.

thrtythreeforty
1 replies
22h53m

Yep, for a sufficiently large account, this strikes me as an offer they know will never be taken. "Migrating" typically means "stopping $XXX,XXX spend per month" not "completely ceasing use of all GCE services."

They know this and this is mainly marketing, I think.

op00to
0 replies
22h19m

You migrate use cases, not entire environments!

diggan
0 replies
23h20m

I would hate to be on the DevOps team that's told to move a complicated and data-heavy application when they only have 60 days to do so. Also the bulk of the data movement is, in my experience, one of the first steps of migration - not the last.

Couldn't you wait to tell Google until you've more or less figured out the logistics? Then you tell Google that you're leaving, egress fees get disabled, and you initiate the move. Then you have 60 days to complete it.

But yeah, if the move takes 30 days because you have a ton of data, and you figure out after the move is complete that you missed 10%, you only have 30 days to figure out how to get that out too.

QuinnyPig
3 replies
17h27m

Hi. I fix AWS bills for a living and also shitpost a lot.

This is a smart play that costs them basically nothing. Remember that egressing data costs customers at worst 3x the monthly cost of storing it. Nobody is avoiding leaving because of the egress fee.

What this does do is assuage the “lock-in” fear common in cloud-reluctant customers, while presenting them as forward thinking.

I’ve never heard of a cloud migration where the data egress wasn’t at least an order of magnitude less than the engineering cost of the migration itself.

xer
0 replies
8h15m

I share the view that nobody avoids an exit or migration because of egress fees. In fact, for online migrations the period of replicating data between providers might go on for months.

But all cloud providers leverage the principle of data locality, or data gravity, which states that compute benefits from being close to the stored data. If a customer moves the data elsewhere, it follows that the compute will soon leave too.

danielvaughn
0 replies
16h48m

Ok that's reassuring. The way it was worded made me think that GCP was headed to the Google graveyard, as wild as that would be.

btown
0 replies
16h4m

also shitpost a lot

The parent poster https://twitter.com/QuinnyPig is one of my favorite Twitter accounts. Every day he gives me renewed hope that, no matter how much I wish I had more time to devote to developer experience for people using the internal tools and APIs I design on the startup side of things... at least it'll be better than the DX and customer service provided by the biggest players providing infrastructure to our entire industry :)

6nf
3 replies
8h48m

RIP Google Cloud

calling it now

flanked-evergl
2 replies
8h32m

Let's hope not, most of their business will move to Azure, and is that ever a dumpster fire.

reciprocity
0 replies
2h2m

Describing Azure as a dumpster fire doesn't strike you as hyperbolic? Could you elaborate?

bob1029
0 replies
7h42m

I think Azure is great if you are willing to do it Microsoft's way and are focused on B2B markets where factors like compliance matter in a way that can actually disrupt your sales cycle.

Put differently, if you can get over the principled nerd stuff like "everything must be open" and break out that wallet, you can stand to make a lot of money without as much headache as the guy who decided to roll his own IdP.

Getting "cost sensitive" on Azure is how you lose track of the rabbit. The whole point in my view is to trade money for reduced complexity and time. It lets you focus on solving the really hard business problems that others simply can't seem to find the time for.

otterley
2 replies
18h3m

The fine print:

https://cloud.google.com/exit-cloud

* Free data transfers related to Google Cloud Exit are available on Premium Tier Network Service Tier

* Only data residing in Google Cloud data storage and data management products are covered

* You must report any changes to your migration timeline set out in your request form to the Google Cloud Support team

* You must submit your free data transfer request prior to the termination of your Google Cloud agreement

* Google Cloud reserves the right to audit movement of customers' data away from Google Cloud for compliance with program terms and conditions

robertlagrant
0 replies
17h59m

That all sounds pretty reasonable.

cobertos
0 replies
17h34m

I clicked the link and you have to _apply_ and potentially be admitted just to get a free exit? So there's no guarantee??

Is requiring a Google committee review/application process a new trend with Google products? I recently was denied on another application through Google for API access to get one business's GMB reviews, and it's frustrating because there's no recourse. Google is so opaque now.

londons_explore
2 replies
20h41m

The process seems a bit cumbersome...

Why not:

1. Migrate your data out.

2. Close your account.

3. You will automatically be refunded all network egress fees incurred in the final 60 days, capped at the number of gigabytes you had stored in our products in the preceding 60 days.
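
A sketch of what that refund rule could look like (my own formalization of the proposal above, not anything Google actually offers):

    # Refund egress from the final 60 days, capped by the volume stored in the
    # preceding 60 days. Figures are illustrative.

    def exit_refund_usd(egress_gb_final_60d: float,
                        stored_gb_prior_60d: float,
                        egress_usd_per_gb: float) -> float:
        refundable_gb = min(egress_gb_final_60d, stored_gb_prior_60d)
        return refundable_gb * egress_usd_per_gb

    # 35 TB egressed while leaving, but only 30 TB was stored -> refund caps at 30 TB.
    print(exit_refund_usd(35_000, 30_000, 0.12))   # 3600.0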

arccy
0 replies
19h44m

your suggestion sounds more cumbersome (and probably refunds less)

Kwpolska
0 replies
10h45m

The process requires people to be aware of the offer and contact support. A few months down the road, nobody will remember this exists, so most migrations won't actually make use of it.

andersa
2 replies
23h44m

I bet there's some new regulation being implemented here that they just happened to forget to mention in the post.

robertlagrant
1 replies
17h30m

Why's that?

andersa
0 replies
8h16m

Large cloud providers don't do things out of the good in their hearts, but they sure like to pretend so whenever they are forced to improve something.

pwarner
1 replies
20h51m

Egress fees are way too high. And it doesn't keep anyone in cloud, if you want to move out, you do it. In fact, I think the fear of egress costs keeps more people OUT of cloud than it keeps people in. This is a smart move that won't cost them anything and may increase their business.

londons_explore
0 replies
20h45m

I know plenty of companies who can't afford the AWS egress fees to get their data out of AWS, and it's far cheaper to just pay storage for one more month and kick the can down the road.

asylteltine
1 replies
18h29m

I’m betting we will see GCP added to the list of killed by Google in 2025.

willsmith72
0 replies
18h16m

i'll take the other side of that bet. the cloud industry is a huge money-maker and not going anywhere

anyoneamous
1 replies
16h41m

I love that Google are willing to take such a big swing at Microsoft in the text of this announcement - I just wish AWS wasn't so badly shackled by its marketing and PR people these days.

plantain
0 replies
14h21m

I thought it was Oracle they were swinging at.

andrewstuart
1 replies
18h3m

This whole story is very strange.

All over the Internet people are slapping google on the back for..... pretty much nothing. In fact much of the praise seems to be worded as though Google had dropped all egress fees.

Google continues to charge the nothing-short-of-highway-robbery 12 cents per gigabyte, unless you are in Australia, in which case it's 19 cents per gigabyte. This is astoundingly bad value.

Why are people hailing Google's "free to exit" in such glowing terms? Even here in the comments people are cheering for Google.

Worth noting at this point that Cloudflare R2 charges 0 cents per gigabyte egress.
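
To put the quoted rates in perspective (the 50 TB figure is just illustrative):

    # What the per-gigabyte egress rates quoted above mean for a 50 TB move.
    tb = 50
    gb = tb * 1024

    for label, usd_per_gb in [("GCP (most regions)", 0.12),
                              ("GCP (Australia)", 0.19),
                              ("Cloudflare R2", 0.0)]:
        print(f"{label}: ${gb * usd_per_gb:,.0f} to egress {tb} TB")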

justinclift
0 replies
11h8m

Hmmm, I'm not seeing many positive comments here about Google. Maybe I started reading a lot later than you? :)

Mortiffer
1 replies
23h30m

But I guess this won't impact Google Cloud Storage egress fees.

CobrastanJorji
0 replies
19h51m

That is exactly what this impacts, if you're leaving Google Cloud.

xnx
0 replies
23h1m

When you don't have many customers [compared to AWS], there's much less to lose by eliminating these fees. Also makes AWS look bad by comparison. Smart move.

skywhopper
0 replies
23h18m

At face value this is a good thing, but I gotta say it's pretty rich for Google to try to diss other cloud vendors for leveraging their effective monopolies, given its moves in search and now browsers. Sorry, but GCP doesn't get a pass on Google's other behavior just because it's currently a loser in the cloud infra market.

hiroshi3110
0 replies
21h30m

Good move, but if you have data in GCS colder than Nearline, it may still cost you.

hipadev23
0 replies
20h4m

Wait so if I close my GCP account I get free egress, if I stay I don't?

dilyevsky
0 replies
15h28m

In Europe they will not be able to charge egress at all starting next year, I think. I wonder if that's a related development.

danpalmer
0 replies
17h19m

Looking forward to "egress fees for leaving GCP" being added to Killed By Google (in a personal capacity, unrelated to my work).

bredren
0 replies
11h33m

Certain legacy providers leverage their on-premises software monopolies to create cloud monopolies, using restrictive licensing practices that lock in customers and warp competition.

This is a fine issue in its own right. But the way it is laced up here muddies the waters around the actual news.

Google was doing the wrong thing and is changing that while not really taking responsibility for doing the wrong thing.

But to make that less obvious, this other concern is brought into the story, creating a high ground on a separate topic.

The strategic reframing of corporate communications is tiring.

asmor
0 replies
9h57m

The audacity of ranting about some cloud provider leveraging their licensing and then dropping a link to Azure in the next sentence, I love it, thanks Google.

ado__dev
0 replies
22h38m

Good move.

I still feel that the egress fees charged by the big 3 are way too high.

AtNightWeCode
0 replies
16h56m

Egress bandwidth costs have been discounted for years. Including GCP. Bandwidth Alliance... Like 10 years ago traffic cost was a real problem with the cloud. Today there are other areas that are expensive.