Looks like it's a response to the recent cloud services market investigation by the CMA [1].
That investigation highlighted that "Egress fees harm competition by creating barriers to switching and multi-cloud leading to cloud service providers entrenching their position" [2].
It's also interesting that they are calling out problems with software licensing, as that is another thing the CMA is investigating in their cloud market review.
[1] https://www.gov.uk/cma-cases/cloud-services-market-investiga...
[2] https://assets.publishing.service.gov.uk/media/652e958b69726...
I read through a couple of these responses to the CMA by MS, Google and AWS and their smaller competitors
as expected the hyperscalers refuse to acknowledge that the free ingress and expensive egress is a lock-in mechanism, and their smaller competitors complain bitterly about this
the hyperscalers say they have to charge egress fees to pay for the costs of building their networks, but for some reason that logic doesn't apply to ingress (which they're silent on)
if they want to play this game then the CMA should simply make them charge the same for ingress and egress
that way they can "fund their network costs" without issue, and if they want to make them both free then that's their decision
This doesn’t pass the red face test IMO. The hyperscaler networks are indeed very expensive, but that’s because they need to provide non-blocking or near non-blocking performance within the availability zone, and the clouds don’t charge for this service.
The Internet egress part ought to be straightforward on top of this: plug as much bandwidth worth of connections into the aforementioned extremely fancy internal network. Configure routes accordingly.
It’s worth noting that the big clouds will sell you private links to your own facilities, and they charge for these links (which makes sense), but then they charge you for the traffic you send from inside the cloud to these links, which is absurd since they don’t charge for comparable traffic from inside the cloud to other systems in the same AZ.
I see you've never tried GCP.
I have, but not for a use case where this matters.
FWIW, Google has been working on these fancy nonblocking networks for a very long time. They’re very proud of them. Maybe they don’t actually use them for GCP, but Google definitely cares about network performance for their own purposes.
The whole concept of blocking is inapplicable to packet-switched networks. The whole time I was there I never heard anyone describe any of their several different types of networks as non-blocking. Indeed, the fact that they are centrally-controlled SDNs, where the control plane can tell any participant to stop talking to any other participant, seems to be logically the opposite of "non-blocking", if that circuit-switching term were applicable.
Your message seems to imply that these datacenter networks experience very little loss, and this is observably far from reality. In GCP you will observe levels of frame drops that a corporate datacenter architect would consider catastrophic.
Blocking is a common concept in packet-switched networks; for example, a packet switch with a full crossbar can be called "non-blocking". A switch is either going to queue or discard packets, and at the rates we're discussing, there is not enough buffer space, so typically if a switch gets overloaded it's going to drop low-priority packets. Obviously many things have changed — there are Ethernet pause frames and admission control and SDN management of routes — but we still very much use the term "blocking" in packet-switched networks.
What Google decided long ago is that for their traffic patterns, it makes the most sense to adopt Clos-like topologies (with packet switching in most cases), and not attempt to make a fully non-blocking single crossbar switch (it's too expensive for the port counts). More hops, but no blocking.
Scaling that got very difficult and so now many systems at Google use physical mirrors to establish circuit-like paths for packet-like flows.
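To make the Clos point concrete, here's a rough sketch (switch port count is made up, not any real hardware): a two-tier leaf-spine fabric built from fixed-radix switches stays non-blocking while supporting far more hosts than a single crossbar, at the cost of extra hops.

    # Illustrative only: scaling a non-blocking fabric past a single crossbar
    # by using a two-tier Clos (leaf-spine). Port count is assumed.

    def single_crossbar_hosts(radix):
        # one non-blocking switch: at most one host per port
        return radix

    def leaf_spine_hosts(radix):
        # each leaf splits its ports 50/50 between hosts and spine uplinks,
        # so host-facing bandwidth equals uplink bandwidth (still non-blocking)
        hosts_per_leaf = radix // 2
        spines = radix // 2      # one uplink from every leaf to every spine
        leaves = radix           # each spine port connects a distinct leaf
        return leaves * hosts_per_leaf

    radix = 64  # assumed ports per switch
    print(single_crossbar_hosts(radix))  # 64 hosts
    print(leaf_spine_hosts(radix))       # 2048 hosts: more hops, but no blocking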
GCP is effectively an application that runs on top of Google's infrastructure (I believe you worked there already, so you likely know how it works) that adds all sorts of extra details to the networking stack. For some time the network layer ran as a user-space Java application that had no end of performance problems.
The whole word smacks of Bellhead thinking. With Ethernet you put a frame on the wire and hope.
The term “non-blocking” may well originate with circuit switching, but Ethernet switches have referred to the behavior of supporting full line rate between any combination of inputs and outputs as “non-blocking” for a long time. (I wouldn’t call myself an expert on switching, but I learned IOS before there was a thing called iOS, and I think this usage predates me by a decent amount.)
This is not really true. With Ethernet, applications and network stacks (the usual kind — see below) ought to do their best to control congestion, and, subject to congestion control, they put frames on the wire and hope. But network operators read specs and choose and configure hardware to achieve a given level of performance, and they expect their hardware to perform as specified.
But increasingly you can get guaranteed performance on Ethernet even outside a fully non-blocking context or even performance exceeding merely “non-blocking”. You are fairly likely to have been on an airplane with controls over Ethernet. Well, at least something with a strong resemblance to Ethernet:
https://en.m.wikipedia.org/wiki/Avionics_Full-Duplex_Switche...
There are increasing efforts to operate safety critical industrial systems over Ethernet. I recall seeing a system to allow electrical stations to reliably open relays controlled over Ethernet. Those frames are not at all fire-and-hope — unless there is an actual failure, they arrive, and the networks are carefully arranged so that they will still arrive even if any single piece of hardware along the way fails completely.
Here’s a rather less safety critical example of better-than-transmit-and-hope performance over genuine Ethernet:
https://en.m.wikipedia.org/wiki/Audio_Video_Bridging
(Although I find AVB rather bizarre. Unless you need extremely tight latency control, Dirac seems just fine, and Dirac doesn’t need any of the fancy switch features that AVB wants. Audio has both low bandwidth and quite loose latency requirements compared to the speed of modern networks.)
Exactly. Network endpoints infer things about how to behave optimally. Then they put their frame on the wire and hope. The things that make it possible to use those networks at high load ratios are in the smart endpoints: pacing, entropy, flow rate control, etc. It has nothing at all to do with the network itself. The network is not gold plated, it's very basic.
A ratio has a numerator and a denominator. I can run some fancy software and get (data rate / nominal network bandwidth) to be some value. But the denominator is a function of the network. I can get a few tens of 10Gbps links all connected together with a non-gold-plated nonblocking switch that’s fairly inexpensive (and even cheaper in the secondary market!), and each node can get, say, 50% of 10Gbps out as long as no node gets more than, say, 50% of 10Gbps in. That’s the load ratio.
Or I can build a non-gold-plated network where each rack is like this and each rack has a 100Gbps uplink to a rather more expensive big switch for the whole pile of racks, and it works until I run out of ports on that big switch, as long as each rack doesn’t try to exceed the load ratio times 100Gbps in aggregate. Maybe this is okay for some use case, but maybe not. Netflix would certainly not be happy with this for their content streaming nodes.
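Back-of-envelope for that rack setup (node count per rack is made up; the link speeds are the ones above):

    # 10 Gbps per node, one 100 Gbps uplink per rack; node count assumed.
    nodes_per_rack = 40
    node_link_gbps = 10
    rack_uplink_gbps = 100

    worst_case_demand_gbps = nodes_per_rack * node_link_gbps       # 400 Gbps
    oversubscription = worst_case_demand_gbps / rack_uplink_gbps   # 4:1
    load_ratio = rack_uplink_gbps / worst_case_demand_gbps         # 0.25 of line rate per node, off-rack

    print(oversubscription, load_ratio)  # 4.0 0.25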
But this all starts to break down a bit with really large networks, because no one makes switches with enough ports. So you can build a gold-plated network that actually gets each node its full 10Gbps, or you can choose not to. Regardless, this isn't cheap in absolute terms at AWS scale. (But amortized per node, it may well be cheap even at AWS scale.)
And my point is that egress does not add meaningful cost here. A 10Gbps egress link costs exactly as much, in internal network terms, as a compute or object storage node with a 10Gbps link. For an egress node, it costs whatever 10Gbps of egress transit or peering costs plus the amortized internal network cost, and a compute node costs whatever the cost of the node, space, power, cooling, maintenance etc costs, plus that internal network link.
So I think there is no legitimate justification for charging drastically more for egress than for internal bandwidth, especially if the cost is blamed on the network cost within the cloud. And the costs of actual egress in a non-hyperscaler facility aren’t particularly secret, and they are vastly less than what the major clouds charge.
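For a rough sense of the gap (both prices below are assumptions for illustration, not quotes from any provider):

    # All prices assumed; substitute real quotes before drawing conclusions.
    transit_usd_per_mbps_month = 0.50   # assumed committed IP transit rate
    cloud_egress_usd_per_gb = 0.09      # assumed hyperscaler list price

    seconds_per_month = 30 * 24 * 3600
    gb_per_mbps_month = 1e6 * seconds_per_month / 8 / 1e9   # ~324 GB if the link is kept full

    transit_usd_per_gb = transit_usd_per_mbps_month / gb_per_mbps_month
    print(f"~${transit_usd_per_gb:.4f}/GB transit vs ~${cloud_egress_usd_per_gb}/GB list price")
    # a gap of roughly 50x under these assumptions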
I think one thing that's different about GCP is that Google itself is such an overwhelming tenant. If you read their SDN papers, their long-distance networks are operated near 100% nameplate capacity. For Google, I don't think squeezing customer flows is free.
That's an extreme simplification of modern network approaches.
if the world were Bellhead, ATM would have won.
I think what the parent comment is implying is that at Google a lot of work has been put into making network calls feel as close to local function calls as possible, to the point that it’s possible to write synchronous code around those calls. There are a ton of caveats to this, but Google is probably one of the best in the world at doing this. There is some truly crazy stuff going on to make RPC latency so low.
The industry standard for peering is paying the 95th percentile of egress or ingress depending on whichever is greater. Ingress is free for these clouds because egress > ingress overall.
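For anyone unfamiliar, a minimal sketch of that billing model (5-minute samples, top 5% discarded, pay on whichever direction's 95th percentile is higher; sample values are made up):

    # Minimal sketch of 95th-percentile ("burstable") billing.
    def p95(samples_mbps):
        ordered = sorted(samples_mbps)
        cutoff = int(len(ordered) * 0.95) - 1   # highest sample left after discarding the top 5%
        return ordered[max(cutoff, 0)]

    def billable_mbps(ingress_samples, egress_samples):
        # bill on whichever direction peaks higher at the 95th percentile
        return max(p95(ingress_samples), p95(egress_samples))

    ingress = [40, 55, 60, 50, 70] * 100        # Mbps, 5-minute averages (made up)
    egress  = [800, 950, 1200, 900, 1100] * 100
    print(billable_mbps(ingress, egress))       # egress sets the bill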
I accept there's some level of cost, but the prices are so high it's hard to describe it as anything other than gouging to prevent competition
My personal feeling is they're moving costs around so that egress has a big margin and other items have a smaller or potentially negative margin.
I've seen this at other providers. We did a competitive pricing exercise at my last company, and our overall cost went down, but the mechanism was that per-host costs went down significantly and egress costs went up significantly, and the per-host cost decrease outweighed the egress cost increase.
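A toy version of that repricing, with made-up numbers, just to show the shape of it:

    # Hypothetical numbers only.
    hosts, egress_tb = 200, 50

    old_bill = hosts * 400 + egress_tb * 20   # $400/host/month, $20/TB egress
    new_bill = hosts * 300 + egress_tb * 80   # hosts get cheaper, egress gets 4x pricier

    print(old_bill, new_bill)  # 81000 vs 64000: the overall bill still drops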
It still doesn't make sense to charge for ingress, because everybody knows that should be free, unless you're a residential ISP.
Many companies in many industries do this. It's often simply not practical or sensible to price every SKU "fairly" on some value or cost basis. Your margin varies from item to item (including negative) and the whole bundle works out.
y'all vastly underestimate how much it costs to run a CDN as large as that at scale.
bandwidth from Cogent at whatever colo you can cross connect to them is wildly cheaper than "I need bandwidth to everywhere" bandwidth.
Pfft...
How about customers pay for actual usage, rather than some [fake] averaged-across-all-customers usage?
Why would the cloud provider charge for usage that doesn't actually cost them money? Unless usage patterns drastically change industry-wide, the ingress really doesn't matter to them. The egress does.
It seems entirely reasonable to look more skeptically at cloud providers' exact charges vs cost for egress, particularly when high egress fees might contribute to lock-in, and when the public price sheet vs the preferred customer pricing might differ radically. But asking them to totally restructure the charges, inventing a charge for ingress when their actual total ingress cost is zero and, short of major industry-wide usage pattern changes, will remain zero? Why would you do that?
Yeah, it's long been common for more traditional hosting providers to limit egress traffic and charge overage fees for exceeding that limit, while having no similar cap on incoming traffic, for basically the same reason: their network-wide traffic patterns mean that egress is what costs them money and ingress is effectively free on top of that.
because ingress traffic volume is a fraction (in my experience, in website hosting of a well known household brand, barely 1%!) of egress traffic volume, and most peering connections are 1:1 in ingress/egress bandwidth so the egress bandwidth cost sets the price.
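Putting made-up numbers on that: if you size symmetric peering/transit ports for the egress peak, ingress rides along in capacity you've already paid for.

    # Illustrative only: ports are symmetric and sized by egress.
    import math

    egress_peak_gbps = 260
    ingress_peak_gbps = egress_peak_gbps * 0.01   # ~1% of egress, as above
    port_gbps = 100                               # symmetric 100G ports

    ports_needed = math.ceil(egress_peak_gbps / port_gbps)      # 3, dictated entirely by egress
    ingress_headroom_gbps = ports_needed * port_gbps - ingress_peak_gbps
    print(ports_needed, ingress_headroom_gbps)    # ingress never comes close to adding a port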
This also holds true on the flip side for most consumer internet speeds (at least in the US).
Many people have x00 Mbps or even x Gbps downstream, but most have no more than x0 Mbps upstream. Their ability to pull traffic from websites is in some cases 50X greater than their ability to push information out. Going beyond that (greater uploads) often costs significantly more.
Whether or not these two are actually related isn't clear to me, but it is interesting.
That's due to the DOCSIS standard for cable modems. They specced out more channels for downstream than upstream because of the limited bandwidth of copper and consumer priorities. With fiber there's an order of magnitude more bandwidth available, so the uneven split is much less (if at all?) common with the big backhaul lines between datacenters. For consumer fiber you'll usually get symmetric but for the most part it doesn't make sense as the vast majority of consumers just don't make use of their upstream bandwidth.
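Rough illustration of the DOCSIS asymmetry (channel counts and per-channel rates are approximate assumptions; real plants vary):

    # Approximate DOCSIS 3.0 figures, assumed for illustration.
    down_channels, up_channels = 32, 8    # a common modem bonding config
    down_mbps_per_channel = 38            # ~6 MHz channel at 256-QAM
    up_mbps_per_channel = 27              # ~6.4 MHz channel at 64-QAM

    print(down_channels * down_mbps_per_channel,   # ~1216 Mbps down
          up_channels * up_mbps_per_channel)       # ~216 Mbps up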
"Egress can be no more expensive than ingress" seems like a reasonable rule.
But not during normal operation when egress to consumers is in higher demand than ingress. This would raise ingress prices unnecessarily.
As far as the lock-in topic is concerned, the expensive egress is only relevant when you want to leave your provider, not during daily operations with your consumers. Free ingress but prohibitively expensive egress is there to make it as easy as possible for customers to come to the cloud provider and as hard as possible to leave. Coming to a provider or leaving them should cost the same, or else it's just a trap.
This is like Comcast giving you the initial installation for free but charging you an arm, a leg, and a year of your life to allow you to leave because "the process is costly". Both practices are scummy and abusive towards the customer.
I don't see it that way at all. Those egregious egress charges still apply while you are actively using GCP, encouraging you to put everything in GCP, and eschew multicloud.
This seems to me more of a "try us out for free" play. Bring your big data here, if you end up not liking it, we won't penalize you for taking your data out. Given that GCP is running at a very distant 3rd, they need to make plays like this.
It could also be a method to put pressure on AWS to get rid of their egress fees.
If it doesn't work, GCP looks a little better compared to AWS.
If it does work, AWS users will have an easier time extricating themselves from the platform, and possibly going to GCP.
This. I see this as purely an attempt to influence AWS into lowering the barriers of migrating between clouds. But it also makes sense for those who want the option to test the waters with no downside.
It's also smart game theory.
If they remove the fees, then competition might be pressured to do so (as a marketing response). Thus making it easier for people to switch to Google.
and Google has way less to lose than Amazon