
Slashing data transfer costs in AWS

andrewstuart
132 replies
1d7h

An alternative to sophisticated cloud cost minimization systems is... don't use the cloud. Host it yourself. Or use Cloudflare, which charges 0 cents per gigabyte in egress fees. Or just rent cloud servers from one of the many much cheaper VPS hosting services and don't use all the expensive and complex cloud services, all designed to lock you in and drain cash from your wallet at 9 or 12 or 17 cents per gigabyte.

Seriously, if you’re at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.
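
For a rough sense of what those per-gigabyte rates add up to, here's a minimal sketch (illustrative arithmetic only; the 9/12/17 cent figures are the ones quoted above, and real cloud pricing is tiered and region-dependent):

```python
# Illustrative only: monthly egress cost at the per-GB rates mentioned above.
# Real pricing is tiered and varies by region and provider.
monthly_egress_tb = 50
egress_gb = monthly_egress_tb * 1024  # decimal vs binary TB barely matters here

for rate_per_gb in (0.09, 0.12, 0.17):
    cost = egress_gb * rate_per_gb
    print(f"{monthly_egress_tb} TB/month at ${rate_per_gb:.2f}/GB -> ${cost:,.0f}/month")

# The same 50 TB through a provider with no per-GB egress fee costs $0 in transfer charges.
```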

overstay8930
63 replies
1d2h

If you're at the point where you're doing sophisticated cloud cost analysis, you're doing the cloud right, because that kind of analysis is completely impossible anywhere else.

I swear the people who say go on premise have no idea how much it costs to hire someone who will not treat their datacenter like a home lab. Even Apple iCloud is in AWS and GCP because of how economical it is. If you think you have to go back on prem, either you suck at the cloud or you just don't give a shit about reliability (start pricing DDoS protection at anything higher than 10G and tell me the cloud is more expensive).

We spend 100k+ on AWS bandwidth and it's still cheaper than our DIA circuits because we don't have to pay network engineers to manage 3 different AZs.

dzikimarian
27 replies
1d1h

Apparently we've been doing the impossible for over 12 years now. Who knew?

Some people act like it's some kind of black magic. It's not. We have some customers in our DC and some on AWS, for various reasons. AWS isn't less problematic, and it's about 10x more expensive. Both on-prem and cloud require people familiar with them, and cloud engineers are in no way cheaper.

The only meaningful problem is that on-prem requires some upfront cost and time. That can be mitigated by leasing and other means, but it can indeed be an issue for small businesses.

falserum
13 replies
1d

Cloud clearly makes sense for:

- small businesses with at least some reliability expectations and little to no IT expertise

- huge workload requirement volatility

- having someone else to blame

- the solution is already working in the cloud, with teams very comfortable there and perceiving on-prem as the “enemy” (analogy: forcing devs to rewrite stuff from Haskell to Java)

- the extra cost is a small budget line for you

On the other hand, it does not make sense to go cloud if you are sufficiently big and already have an on-prem solution and expertise in house. (Extreme case: Google does not use AWS for its main load; this upper threshold, I wager, is a couple of orders of magnitude smaller.)

coredog64
6 replies
1d

Amazon retail runs on AWS, and I think we can agree that Amazon retail is reasonably described as “large scale”

no_wizard
2 replies
1d

This is a quirk of the business that is Amazon and AWS: they started by selling excess compute and expertise, and given that Amazon was built API-first internally, it was almost natural.

It’s in no way the norm for a smaller business.

rented_mule
1 replies
23h29m

This matches the public (i.e., non-Amazon) speculation I was hearing around the launch of S3 and later EC2. But not what I was hearing internally when I worked at Amazon. I was there when S3, the first AWS service, and EC2 were launched. I was working on what I believe to be the first Amazon (non-AWS) application that used S3 for storage. Getting that approved was not easy - all the same skepticism existed internally as externally (cost, availability, durability, security, etc.).

The story I was hearing internally was that it was too costly to scale infrastructure the way Amazon had been doing it, it was fragile, and the expertise wasn't keeping up with growth. So, set the bar a lot higher, and build infra that is big enough and flexible enough to be everybody's infra, and then Amazon's applications (e.g., retail) could run on AWS' excess capacity. Literally the opposite of what external folks were guessing. I believe they were completely physically separate data centers - even the physical location of AWS data centers was on a need-to-know basis internally (the internal lore was "under a mountain in Virginia" - this was years before Regions and Availability Zones). And any bugs in AWS could be worked out with outside usage before moving Amazon's applications onto it.

Also, Amazon needed the elasticity of AWS because of the nature of their retail business. At the time that the initial AWS services were being developed, a massive chunk of Amazon's traffic came during the holiday season. IIRC, something like half of the year's traffic and revenue, possibly more, came in November/December each year. That meant a lot of capacity was sitting idle most of the year. Selling that excess capacity would mean shutting AWS down every holiday season.

For a time, there was an internal mailing list that wasn't yet locked down that contained reports on S3 bandwidth usage. The growth rate was shockingly high. I would guess that within a year or two of release, S3 was using a few (at least) orders of magnitude more bandwidth than everything else at Amazon combined.

no_wizard
0 replies
15h48m

In broad strokes, the main point I was making still stands though: AWS was deliberately built to back the demanding scale of Amazon. It was a bet on the future and on the Amazon model as much as it was a product or service, and that did mean they built up expertise and hardware and sold it as a product nonetheless.

This still isn't the norm for most businesses, even big ones.

SOLAR_FIELDS
2 replies
1d

Is that really a fair comparison though? AWS is a very weird argument to make, because you could say that AWS is kind of “on premise” for Amazon's purposes. Internally, Amazon.com does not pay retail pricing or get the same level of support as third-party end users. A better example would be looking at Jet.com/Walmart and asking if it runs on AWS.

overstay8930
1 replies
1d

Nobody who is big is paying retail prices; that's why saying "on premise is cheaper" is total copium.

As soon as you start factoring in discounts (i.e. bandwidth is nearly free at some point), the math of being on premise completely falls apart, to the point that you are paying more for licensing and support for the hardware than you are for the entire lifecycle of your infrastructure in the cloud. It's just that bad to do it yourself.

rstuart4133
0 replies
20h31m

> licensing and support for the hardware

Sorry, but who pays for licensing and support of the hardware? I've never done that. You buy a Dell server or whatever, you pay for the 5- or 7-year warranty up front, you put it in a rack, and you literally never touch it again until it's EOL'ed. If something breaks, Dell touches it, not you. That typically costs around $500/year, albeit paid upfront.

But you usually don't do that. Instead you rent dedicated metal from someone. The cost is on the order of $1000/year including some storage, with no more to pay unless you exceed 100TB/year. A similarly configured AWS EC2 instance is $13,000/year, plus bandwidth, plus whatever other services you get sucked into. And you will get sucked in, because if you ask AWS about any problem (like, say, monitoring why your bills are so high), the answer is invariably "use this paid service of ours".

You're kidding yourself if you think using AWS is cheaper than the alternatives. Those discounts you speak of start from an absurdly high baseline. I'm sure there are lots of reasons to use AWS or a similar cloud service, but unless you only need a lot of grunt for at most a few months, price isn't one of them.
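
To put rough numbers on that, here's a back-of-the-envelope sketch using only the ballpark figures in this comment (all assumptions for illustration; actual quotes vary widely):

```python
# Back-of-the-envelope annual comparison, using the ballpark figures above.
# All numbers are assumptions for illustration, not real quotes.
dell_warranty_per_year = 500        # owned server: 5-7 year warranty, amortized
rented_dedicated_per_year = 1_000   # rented dedicated metal, incl. some storage
aws_ec2_per_year = 13_000           # similarly configured EC2, before bandwidth

ratio = aws_ec2_per_year / rented_dedicated_per_year
five_year_delta = (aws_ec2_per_year - rented_dedicated_per_year) * 5

print(f"Owned Dell box support: ~${dell_warranty_per_year}/yr (hardware purchase not included)")
print(f"EC2 vs rented dedicated: ~{ratio:.0f}x per year")
print(f"Difference over five years: ~${five_year_delta:,}")
```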

ipaddr
3 replies
20h19m

Cloud doesn't make sense for a small business. A VPS would. If you are spending less than $100,000, you probably don't need it for your 10,000 million or fewer daily visitors.

Turing_Machine
2 replies
20h12m

If you're spending less than $100,000, you almost certainly aren't spending enough to pay salary + benefits for a sysadmin.

ipaddr
0 replies
17h49m

Why would you need additional people to manage a VPS? The person managing Amazon can very easily use cPanel, because it is much simpler.

BackBlast
0 replies
34m

Why does everyone always jump to this fiction?

You probably aren't paying a "cloud engineer" just to fiddle with cloud config full time at <$100k spend. Why would you suddenly need a full-time sysop to run on-prem at the same capability level? If you do 10 hours of "cloud engineering" a month to support an application, the equivalent on-prem work is probably in the same ballpark: 5-30 hours. Yes, it can be lower than cloud. No, it's NOT suddenly 160 hours every month. Yes, it does mean you need someone who can wear the extra hat; cloud engineering skills are literally no different in this regard.

Internet sites ran on basement and closet servers for years, and actual server rooms for larger stuff. The joke was that tripping over power cords and backhoes digging up internet lines caused most outages. It's never been easier to run on-prem than today, and it costs a fraction of even budget VPS providers like Hetzner.

For certain classes of applications, it's awesome. It's not everyone's cup of tea, I'll grant. But if you are inclined to play with hardware (or have someone in the org who is), and an extra 2-4 hours of downtime a year isn't that big of a deal (depending on the general utility and network available at your chosen site), you can save tons of money.

dzikimarian
0 replies
1d

Depends on how small the small business is, but I would probably go with a VPS.

MaxBarraclough
0 replies
22h42m

> Extreme case: Google does not use AWS for its main load; this upper threshold, I wager, is a couple of orders of magnitude smaller

Not a good example in my opinion, as Google is also a major provider of cloud services. AWS isn't the only game in town.

I think Disney+ is a more interesting case. A quick google search turns up some articles saying their video streaming is powered by external CDNs. As I understand it, Netflix take the opposite approach and deliver all video data from their own Open Connect CDN, although they use AWS for other workloads (presumably including things like authentication, their recommendation engine, etc).

chongli
8 replies
1d

> Both on-prem and cloud require people familiar with them, and cloud engineers are in no way cheaper.

I think the real story is a bit sordid: office politics. On-prem and cloud are different skillsets. Companies that have been around for a while can end up with both on-prem and cloud experts who end up competing with each other, often on separate teams. Throw in some slick consultants from Amazon who are able to bend the ear of the VP and you've got a real problem. From what I've seen it doesn't end well for the on-prem team!

overstay8930
4 replies
1d

Cloud engineers can do the job of 4-5 on-prem people. Our AWS devs don't need to be BGP or ZFS experts, they just need to be AWS experts.

rrrix1
1 replies
22h57m

Hilariously ironic: with a sufficiently large cloud footprint, things like BGP (and other internetworking protocols) and OpenZFS become required skillsets. I have firsthand experience of this. :)

FireBeyond
0 replies
21h50m

Yeah, there was an amusement there. I've definitely had to understand BGP to configure cloud VPC setups.

"They just have to be an AWS expert".

Right. They just have to be experts in: EC2, S3, Aurora, DynamoDB, RDS, Lambda, VPC, LightSail, Athena, EMR, RedShift, MQ, SQS, SNS, ECR, ECS, EKS, ElastiCache, CloudWatch, CloudTrail, IAM, Cognito, and a few more. No big deal.

dzikimarian
1 replies
1d

Well, our on-prem team doesn't need an AWS pricing calculation and optimization expert, so there's that :-)

owenmarshall
0 replies
16h24m

AWS pricing and optimization is just capacity planning, which doesn’t go away if you run on prem - it just looks different, with longer time horizons & financial implications.

“Will my data center run out of floor space & I need to expand?” (years+)

“Will I have enough cooling & power to support the new racks we need?” (6 months+)

“When do I need to get the server order out to ensure we meet our capacity needs?” (6+ weeks)

Every one of those is a capital expenditure, so line them up with the annual budget cycle - and be sure to keep enough spare capacity to be responsive to last-minute asks.

Don’t think my intent is to romanticize the cloud, either. It’s not better, nor worse, just a different way to manage things.

Of course, if your company is sufficiently small, do whatever you know and can do quickly - customer acquisition will be more important than debating the cost of infra in AWS versus a colo'd server or two in some racks somewhere. But the complexity doesn't go away if you go to the cloud, OR if you are all on-prem. TINSTAAFL.

hirako2000
2 replies
1d

I concur. Many VPs let their ears be bent so easily you've got to wonder: do they also get invited to some private dinner by those so-called slick consultants, who pay the bill in a rush and leave after "forgetting" a thick envelope on the table?

chongli
1 replies
22h46m

It’s a story as old as time itself. IBM has been doing it at least as far back as the 60’s. Fancy consultants who know the tech and also know how to sell and make themselves seem way smarter than the VP’s reports. Do one slick presentation and the VP is asking his team “why didn’t you guys come up with this stuff?”

Next thing you know these multi-million-dollar contracts are signed and the existing teams are just shaking their heads. The smart ones have already put out their resumes and started interviewing elsewhere.

hirako2000
0 replies
5h56m

I'm not one of those smart ones. So many things I was seeing for years make much more sense now.

It's an old story but I guarantee you many don't know about it.

tqi
1 replies
1d

These types of debates always seem to go this way - one person saying one option works way better for them so the other option must be crazy, followed by another person saying the opposite. I'm not an infra guy, but my guess is the reality is that both are choosing the right option for themselves, because there is no one objectively "best" option. Just tradeoffs.

I also think that if a company is unhappy with their current setup and thinks that switching from cloud to on-prem (or vice versa) is the answer, they're probably delusional, because "the fault, dear Brutus, is not in our stars, but in ourselves."

dzikimarian
0 replies
11h6m

That's completely not my point. I'm not saying that using the cloud is crazy (we use it sometimes). I'm saying that the opinion that on-prem is much harder than AWS is, IMO, wrong.

Requires some planning ahead of time - yes. Harder - not really.

no_wizard
1 replies
1d

In many organizations the biggest appeal of services like AWS or GCP isn't simply cost; it's that the provider is approved, and therefore all of its services are approved, and I no longer have to justify spinning up more compute or leveraging one of the more bespoke services like SQS (or whatever the equivalent is). It's all just there, ready to be used.

It may not be true for you and where you work, but this is a very real thing in a lot of organizations, where development teams want a quicker turnaround on booting up the services they need as the product evolves, and some control over how those services interact with each other. It (sometimes to negative effect) opens up more architectural possibilities to solve problems.

dzikimarian
0 replies
1d

I'm not saying using AWS is universally bad. It depends on your needs.

What I'm saying is that some people are trying to portray rolling your own k8s or on-prem as equivalent to rolling your own crypto - better left to the chosen ones with years of training in a secret monastery. This is BS :-)

holoduke
12 replies
1d1h

My small business once spent $50k per month on AWS. We brought that back to $800 for a similar setup at Hetzner. I find that a significant number.

overstay8930
7 replies
1d

"Yea this small VPS provider with 1% of the features is just as good as AWS to us" yea that's because you aren't using features as basic as AWS Nitro Enclaves and you are years behind even basic cloud security.

Hetzner is for running homelabs and basic compute, not businesses. That's why EU companies constantly ignore EU rulings on the US-EU Privacy Shield: there's just no alternative to American cloud providers yet.

fabian2k
5 replies
1d

People have been running real businesses before the cloud providers existed. Of course you can run a business with rented dedicated servers.

overstay8930
4 replies
1d

Of course you can run your business on it; you will just suffer, because there is practically zero automation to it. I guess it's fine if you are a small business, but we are very obviously not talking about mom-and-pop shops who need a web and email server.

carlhjerpe
3 replies
1d

Incorrect - there's automation for on-prem and smaller providers as well; just because you're unable to do it doesn't mean it can't be done.

How do you think AWS or Cloudflare is built?

overstay8930
2 replies
1d

You're arguing semantics; you are on massive copium if you think that automation is useful to anyone. Go ahead and set up cryptographic attestation (or just try to interface with a TPM, ffs) for your apps on Hetzner to decrypt customer data and see how impossible it is.

holoduke
0 replies
5h17m

Sorry, but what kind of wubble wobble talk is this? Hardly anyone needs the stuff you refer to.

carlhjerpe
0 replies
23h47m

It's useful to a lot of people; not everyone has the same requirements. I'm running Secure Boot and LUKS encryption on my laptop. If there's a TPM2 interface in whatever you're running on, you can use the same thing. Add an immutable Linux distro and Kubernetes and you'll cover quite a few use cases. $BIGCLOUD makes things easier, but you also pay top dollar for it. AWS funds Amazon.

holoduke
0 replies
10h32m

My business is running on Hetzner. And many others as well.

glintik
2 replies
1d1h

Reliability differs, right?

simplyinfinity
1 replies
1d

I have Hetzner VMs going on 500+ DAYS without a restart. I've been a Hetzner customer for about 8 years now. AWS has been down 10x more often than Hetzner has. And even when Hetzner has issues, they are localized to a single DC at worst; most often a single host is down. When AWS has issues... the whole internet has issues. Or their central region goes out, which can also affect their other regions.

For the small/medium business infra I manage here in Bulgaria, the same thing would cost 5-10 times as much on AWS just for the compute; throw in 1 TB of bandwidth... and it makes no financial sense.

On Hetzner I pay 40 euros for 2 VMs, dedicated IPs, daily backups, 100 GB of external SSD storage, and a firewall.

mad182
0 replies
22h17m

I have had more than 10 servers on Hetzner (some dedicated, some VPS) for 5+ years, with the same experience. Once, one of the dedicated servers had some hardware issues, and an hour later the drives were moved to another box and it was running again. Other than that time, I've had downtime only through my own fault.

Pretty sure over these years AWS has experienced a lot more issues overall.

icedchai
0 replies
1d

What's your Hetzner setup look like? 98% cost savings makes me curious...

qvrjuec
10 replies
1d2h

> someone who will not treat their datacenter like a home lab

What does this mean? They steal company resources for themselves, or just configure things incompetently?

ttul
8 replies
1d1h

Incompetence. Take my friend’s company for instance. They were frustrated paying $60K/mo to Amazon so their brilliant sysadmin bought $600K of servers and moved them into a cheap colo.

Over Christmas, everything died, and the brilliant sysadmin was on holiday. Nobody could get things going again for many days and so their entire SaaS business was failing. They lost a lot of business and trust as a result.

The sysadmin is now gone and they are back on AWS.

lijok
5 replies
1d1h

No key-person risk management -> no risk register -> no management. Your friend's company will fail regardless of any poor sysadmin decision-making. They need to hire competent management ASAP.

overstay8930
2 replies
1d

This is basically the logic of people who say the cloud is too expensive: you have to ignore so many things to make being on premise logical. Basically you are lying to yourself if you think you can run a datacenter cheaper and better than Amazon or Microsoft can, because if you can, you are just making huge sacrifices somewhere (usually time, which is why reddit sysadmins complain about how much work they have while defending being on-premise, because they couldn't possibly be wrong).

project2501a
0 replies
1d

you must be management, cuz

1. you think it's the sysadmin's fault

2. there are no competent sysadmins out there

123pie123
0 replies
22h40m

> Basically you are lying to yourself if you think you can run a datacenter cheaper and better than Amazon or Microsoft

What magical things do they have that every single reasonably sized enterprise doesn't? It should be extremely easy for a small enterprise to beat any of the main clouds* - they make a crap ton of profit from you.

*assuming your needs are reasonably static and you're not MASSIVELY bursting your infrastructure up and down

rzzzt
0 replies
1d

Faint ISO 27001 sounds in the background

mdale
0 replies
1d1h

With cloud and SaaS services you are paying to reduce your key-person risk profile.

You're forming a larger dependency on a team lead and a custom system that is now a liability, as new people coming into the organization don't want to adopt an abandoned, poorly understood project.

jjav
0 replies
23h39m

> The sysadmin is now gone and they are back on AWS.

This story has nothing to do with AWS or on-prem.

It's a story about incompetent management allowing a single human point of failure. If they don't change that, they'll have the same problem wherever they go.

OrvalWintermute
0 replies
1d

> brilliant sysadmin was on holiday

> entire SaaS business

[ Unmentioned - Single Point of Failure Service dependent on a single admin ]

If you are fully accounting for vacation, training, sleep, etc., then you need a minimum of 5 admins for mission-critical services. Now, you can engineer around this to reduce your staffing requirement, but I wouldn't ever recommend going under 2, because accidents happen.

This business seemed to be one below that, without the engineering, and I would point to the management, not the brilliant admin, as the problem.

overstay8930
0 replies
1d

Non-scalable incompetence or basically pretending that the datacenter will never go down. Any high schooler with an iPhone can set up and maintain a datacenter full of servers.

But if you want something reliable that I can get by spending 30 seconds writing some Terraform, on-prem it will take an entire infra team to set up and maintain, not to mention an entire procurement process and a new supply chain to integrate, just for a basic multi-AZ setup (probably without things like backups and still without basic features the cloud gives you automatically).

echelon
8 replies
1d1h

You act like these problems are especially hard.

Active-active, five nines, fault tolerance. Hard stuff. But managing on-prem is no harder.

This is what we're paid for.

overstay8930
4 replies
1d

"this is what we're paid for" Nope, it's what YOU'RE paid for.

I am paid to relax on my holidays, because I know my team and I don't have to drive to a colo to swap out a failing line card. I realized time is worth money, and people quit jobs that take up too much of their time. I can A/B test (something on-prem guys NEVER get the luxury of doing), so outages just don't happen at all (fingers crossed).

I have rarely met someone happy with their on-prem DC deployments, but after I moved to the AWS world it's just crazy how backwards it is to be anywhere but the cloud.

jjav
0 replies
23h35m

> backwards it is to be anywhere but the cloud

It's almost as if you feel the cloud is something magically different; it's actually just servers in racks owned by someone else.

You can own the same thing if you want and do everything exactly the same.

(See e.g. Oxide)

icehawk
0 replies
23h7m

Anyone serious wouldn't "drive to the colo to swap out a failing line card"; they keep excess capacity and spares in the colo, and have the facility's on-site personnel do the replacement.

Honestly, it just sounds like the environment you describe has larger organizational issues not related to on-prem vs cloud.

echelon
0 replies
16h3m

That feels really grumpy.

Compare something like rocketry or chemical engineering with running an on-prem DC. I don't see what the complaining is about. It's still a luxury compared to what other professions have to deal with.

123pie123
0 replies
23h19m

there's so much to unpack here!!

kevin_nisbet
0 replies
1d1h

Managing on-prem hardware may not necessarily be hard, but it can be extremely time consuming. To me, the nice thing about dropping a bunch of hardware in a colo is you get to take a lot of shortcuts and take risks that you cannot buy from the public cloud providers.

I worked for a company that went the colo route (and would do it again), and it gave immense cost savings compared to public cloud, by taking on risks that you can't take elsewhere. As a raw startup, before they started investing in having folks take care of the infra, it was just some servers and some unmanaged desktop switches. That gave the company the breathing room to survive, as the business model probably didn't work without it. But it also earned them a reputation for unreliable service.

I've also built the five nines infra at telcos, and yes you can do it with average engineers, but it's going to be time consuming, slow, and expensive in costs and labor. To allow 26 seconds of unplanned outage a month, you're going to be testing every firmware update for every piece of equipment on an ongoing basis, and practicing every operation and change as best as possible. And you need the scale that you get that 26s by having most outages only impact a subset of your customer base, otherwise you're going to blow that outage budget fast.
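
As a sanity check on that 26-seconds figure, the arithmetic is just the downtime fraction times the period (a minimal sketch, assuming a 30-day month):

```python
# Unplanned-downtime budget implied by an availability target.
def downtime_budget_seconds(availability: float, period_seconds: int) -> float:
    return period_seconds * (1 - availability)

MONTH = 30 * 24 * 3600   # a 30-day month, in seconds
YEAR = 365 * 24 * 3600

print(f"99.999% over a month: {downtime_budget_seconds(0.99999, MONTH):.0f} s")       # ~26 s
print(f"99.999% over a year:  {downtime_budget_seconds(0.99999, YEAR) / 60:.1f} min")  # ~5.3 min
print(f"99.9%   over a month: {downtime_budget_seconds(0.999, MONTH) / 60:.0f} min")   # ~43 min
```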

hobs
0 replies
1d1h

Managing on-prem is definitely harder, because you lose the economies of scale: all the management problems are ones you have to pay for yourself, and if you don't have scale then you will be significantly overpaying to get the same quality, reliability, or responsiveness.

Most people are not paid to manage infra, they are paid to talk to customers, ship features, fix bugs, and other "core business" items; just like most businesses don't build roads, they pay taxes and utilize them because the cost of doing it themselves for their preferred traffic patterns would be much more than they could justify (for now.)

acdha
0 replies
22h46m

On-prem is massively harder if you can’t cut corners on security or reliability. Just things like testing & upgrading firmware, doing real DR testing (I know multiple places which spent lots of time and money doing annual failover tests, but went down hard every time they had a true failure due to something they’d missed), handling things like boot signing or secure logging, etc. all take up multiple FTEs worth of time, or are a checkbox from a platform which handles that for you.

tiffanyh
0 replies
21h25m

> on premise have no idea how much it costs to hire someone

I haven't yet met anyone at a company that heavily uses the cloud that doesn't still have the same number of salaried infra people as an on-prem shop.

snug
0 replies
23h4m

AWS and GCP are giving companies like Apple huge discounts, which is how someone can say something like, "Even Apple iCloud is in AWS and GCP because of how economical it is".

There is too much nuance to say one is better than the other. In some cases using an IaaS is more economical; in other cases it's not.

For Apple, it is also true[0] to say "Even Apple is running their own datacenters because of how economical it is".

0 - https://dgtlinfra.com/apple-data-center-locations/#:~:text=a....

maeln
44 replies
1d6h

> Seriously, if you're at the point that you're doing sophisticated analysis of cloud costs, consider dropping the cloud.

Which would mean that you lose part of the reason to use the cloud in the first place... A lot of orgs move to cloud-based hosting because it enables them to go way further in FinOps / cost control (amongst many other things).

This can make a lot of sense depending on your infra: if you have some fluctuation in your needs (storage, compute, etc.), a cloud-based solution can be a great fit.

At the end of the day, it is just a tool. I worked in places where I SSH'd into the prod bare-metal server to update our software, manage the firewall, check the storage, and so on, all manually. And I worked in places where we were using a cloud provider for most of our hosting needs. I also handled the transition from one to the other. All I can say is: it's a tool. "The cloud" is no better or worse than a bare-metal server or a VPS. It really depends on your use case. You should just do your due diligence, evaluate the reasons why one would fit you better than the other, and reevaluate them from time to time as your environment changes.

This whole "cloud bad" is just childish.

ownagefool
29 replies
1d6h

> A lot of orgs move to cloud-based hosting because it enables them to go way further in FinOps / cost control

I think a lot of orgs move to cloud simply because it's popular and Gartner told them so.

But taking a step away from that, it's really about self-service. When the alternative is logging a ticket for someone to manually misconfigure a VM and then fail to send you the login credentials, then your delivery is slow.

When you're chasing revenue, going slow means you're leaving money on the table. When you're a big bureaucratic org, it means your middle managers can't claim to have delivered a whole bunch of shit. Nobody likes being held up, but that's what infrastructure teams historically do.

sofixa
21 replies
1d6h

> I think a lot of orgs move to cloud simply because it's popular and Gartner told them so.

Nah, I think it's mostly about the second part of your comment. Everyone hates waiting for months to get a VM or a database or a firewall rule because the infrastructure/DBA teams are stuck ten years in the past and take pride in their artisanal infrastructure building.

So moving to the cloud eliminates a useless layer of time wasting.

steveBK123
13 replies
1d4h

If your on-prem team can't spin up a VM same day, then firing them is probably higher ROI than "going to cloud". Further, a lot of the shops "going to cloud" because their infra team is slow then hide the cloud behind that same infra team.

A prior 200+ dev shop went from automated on-prem VM builds happening within hours of raising a ticket, to cloud, where there was a Slack channel to nag and beg for an EC2 instance, which could take a day to a week. This was not a temporary state of affairs either; it was allowed to run like this for 2+ years.

Oh, and worth mentioning, the CTO there LOVED him some Gartner.

marcus0x62
6 replies
1d3h

I've never seen an IT team that couldn't spin up a VM in minutes. I have seen a bunch of teams that weren't allowed to because of ludicrous "change control" practices. Fire the managers that create this state of affairs, not the devops folks, regardless of whether you "go cloud" or not.

sofixa
2 replies
1d2h

I've met multiple customers where the time to get a VM was weeks to months. (To be fair, I'm at a vendor that proposes IaC tooling and general workflows and practices to move away from old-school ClickOps, ticket-based provisioning, so of course we'd get those types of orgs.)

And more often than not, it had nothing to do with managers, but with individual contributors resisting change because they were set in their ways and were potentially afraid for their jobs. Same applies for firewall changes btw.

steveBK123
1 replies
1d2h

I think a lot of the HN crowd hangs out at FAANG/FAANG-adjacent or at least young/lean shops, and has no idea how insane it is out there.

I was at a shop that provisions AWS resources via written email requests and ClickOps, treated fairly similarly to datacenter procurement. Teams don't have access to the AWS console and cannot spin up/down, stop, or delete resources.

A year later I found out that all the stuff they provisioned wasn't set up as reserved instances. We weren't even asked. So we paid hourly rates for stuff running 24/7/365.

This was apparently the norm in the org. You have to know reserved instances exist and ask for them... and you may eventually be granted the discount later. I only realized what they had done when they quoted me rates and I was cross-checking against ec2instances.info. I can guarantee you less than 20% of my org (it's not a tech shop) is aware this difference exists, let alone that ec2instances.info exists for cross-reference.

No big deal, just paying 2x for no reason on already overpriced resources!

shermantanktop
0 replies
1d1h

I went from that type of world (cell carrier) to a FAANG type company and it was shocking. The baseline trust that engineers were given by default was refreshing and actually a bit scary.

I’m not sure my former coworkers would have done well in an environment with so few constraints. Many of them had grown accustomed to (and been rewarded and praised for) only taking actions that would survive bureaucratic process, or fly underneath it.

steveBK123
1 replies
1d3h

Teams are what they DO, not what they CAN DO.

marcus0x62
0 replies
23h53m

Ok, but I’m not sure what that has to do with what I posted.

ownagefool
0 replies
8h3m

The problem is the strong players are less likely to stick around, so you often do end up with folks who can't do the work in minutes - though, the work is usually slightly more than clicking the "give me the vm" button.

photonthug
3 replies
1d3h

Despite years of friendly-sounding DevOps philosophy, there are times when devs and ops are fundamentally going to be in conflict. It's sort of a proxy war between devs, who understandably dislike red tape, and management, who loves it, with DevOps caught in the middle and on the hook both for rapid delivery of infrastructure and for some semblance of governance.

An org with actual governance in place really can't deliver infra rapidly, regardless of whether the underlying stuff is cloud or on-prem, because whatever form governance takes in practice, it tends to be distributed, i.e. everyone wants to be consulted on everything but they also want their own responsibility/accountability to be diluted. Bureaucracy 101...

Devs only see ops taking too long to deliver, but ops is generally frozen waiting on infosec, management approving new costs, data stewards approving new copies across ends, architects who haven't yet considered/approved whatever outlandish new toys the junior devs have requested, etc., etc.

It depends on exactly what you're building, but with a competent ops team, cloud vs on-prem shouldn't change that much. Setting aside the org-level externalities mentioned above, developer preference for certain AWS APIs or complex services is the next major issue for declouding. From the ops perspective, cloud vs on-prem is largely going to be the same toolkit anyway (Helm, Terraform, Ansible, whatever).

steveBK123
1 replies
1d2h

Yes, of course management is often the problem.

I think it helps when people actually take a step back and understand where the money that pays their salary comes from. Often times people are so ensconced in their tech bureaucracy they think they are the tail that wags the dog. Sometimes the people that are the most hops from the money are the least aware of this dynamic. Bureaucracies create an internal logic of their own.

If I am writing some internal software for a firm that makes money selling widgets, and I decide that what we really need is a 3-year rewrite of my app for reasons, I am probably not helping in the sale or the production of widgets. If another team is provisioning hardware for me to write the software on, and it now takes 2 weeks to provision virtual hardware that could take seconds, then they are also not helping in the sale or the production of widgets.

These are the kind of orgs that someone may one day walk into, blast 30% of the staff, and find no impact on widget production, and obvious 30% savings on widget costs...

photonthug
0 replies
1d1h

> If another team is provisioning hardware for me to write the software on, and it now takes 2 weeks to provision virtual hardware that could take seconds, then they are also not helping in the sale or the production of widgets.

Well, in this example, the ops team slowing down pointless dev work by not quickly delivering the platform that work would happen on is effectively engaged in cost savings for the org. The org is not paying for the platform, which helps them because the project might be canceled anyway, plus the slow movement of the org may give them time to organize and declare their real priorities. Also, due to the slowdown, the dev and the ops team are potentially more available to fix bugs or whatnot in actual widget production. It's easy to think that "big ships take a while to turn" is some kind of major bug or at least an inefficiency, but there are also reasons orgs evolve in that direction, and times when it's adaptive.

Often times people are so ensconced in their tech bureaucracy they think they are the tail that wags the dog.

Part of my point is that, in general, departments develop internal momentum and resist all interfacing/integration with other departments until or unless that situation is forced. Structurally, at a lot of orgs of a certain size, that integration point is the ops/devops/cloud/platform team (whatever you call it). Most people probably can't imagine being held responsible for lateness on work that they are also powerless to approve, but for these kinds of teams the situation is almost routine. In that sense, simply because they are an integration point, it's almost their job to absorb blame for/from all other departments. If you're lucky, management that has a clue can see this happening, introduce better processes, and clarify responsibilities.

Summarizing all that complexity and trying to reduce it to some specific technical decision like cloud vs on-prem is usually missing the point. Slow infra delivery could be technical incompetence or technology choices, but in my experience it's much more likely a problem with governance / general org maturity, so the right fix needs to come from leadership with some strong stable vision of how interdepartmental cooperation & collaboration is supposed to happen.

ownagefool
0 replies
8h6m

Whilst often true in practice, this doesn't have to be true.

The reality is, a lot of these orgs have likely already discovered devops, pipelines, deployment strategies, observability, and compliance as code.

There's basically little in compliance that can't be automated with patterns and platforms, but in most of these organizations a delivery team's interface with the org is their non-technical delivery manager, who folds like a beach chair when they're told no by the random infosec bod who's afraid of automation.

I've cracked this nut a few times though. It requires you to be stubborn, talk back, and have the gravitas and understanding to be taken seriously. I.e., yelling "that's dumb" doesn't work, but asking them for a list of what they'd check, and presenting an automated solution to their group, where they can't just yell no, might.

ownagefool
0 replies
8h36m

They probably should be fired, but it's actually complicated, because these orgs tend to be staffed with departments that believe this is the way things should be done; best case, the replacement needs to compromise with them, worst case they are like-minded and you just get more of the same.

acdha
0 replies
1d2h

> If your on-prem team can't spin up a VM same day, then firing them is probably higher ROI than "going to cloud".

I haven’t seen this be due to one set of incompetents since the turn of the century. What I have seen is this caused by politics, change management politics, and shortsighted budgetary practices (better to spend thousands of dollars per day on developers going idle or building bizarre things than spend tens on infrastructure!).

In such cases, the only times where firing someone would help would be if they were the C-level people who created and sustained that inefficient system.

nikau
2 replies
1d4h

It also allows management to hide bad decisions and poor planning.

Project is a dud? just nuke the cloud project and no more charges for it.

Project is poorly architected and running like a dog? throw more resources at it.

Both of the above are harder to hide when you have to order equipment for on prem.

yau8edq12i
0 replies
1d2h

> Project is a dud? just nuke the cloud project and no more charges for it.

How is that a negative? Not every project is going to be successful. That's just a basic fact of life. That you don't have to deal with the sunk cost fallacy and just pull the plug is a good thing.

> Project is poorly architected and running like a dog? throw more resources at it.

Another positive...?! You can continue to serve your clients and maintain a revenue stream while you work on a better architecture, instead of failing completely. And once you need fewer resources, you can easily scale down.

ownagefool
0 replies
1d4h

If you're running an internal cloud, you can likely absorb that.

I think it comes down to a couple of things:

- Small orgs don't have the resources to run internal clouds, nor should they be doing so. This limits the pipeline of available candidates.

- Large orgs promote the wrong people to management, and they make decisions based on their mental model of the world that was developed 20 years ago. They're filled with people who don't understand the difference between cloud and virtualization.

- Large consultancies make more money by throwing raw numbers at the problem rather than smart automation. i.e. it's easier for IBM to bill T&M and a whole project wrapper to patch the server than automate it.

- Finance & HR teams want you to bend to their ways of working rather than the opposite.

As for the rest, many of them are simply in ops because they're less skilled software developers, or they're now being asked to assure security, and that scares them, so they try to lock everything down.

sgarland
0 replies
1d4h

If only that were the case.

Even when everything is in IaC and 100% cloud-native, I’ve still seen dev teams bypass the approved methods because ClickOps is easier.

sebazzz
0 replies
1d

That time wasting comes back in one form or another. For instance, at my workplace they enable *all* the Azure Application Gateway rules, even those rules that Microsoft says not to enable - causing even simple OpenID redirects from Azure AD (Microsoft login) to the application to get captured by AAG and fail.

jjav
0 replies
23h29m

> waiting for months to get a VM or a database or a firewall rule because the infrastructure/DBA teams are stuck...

You still have to go through your devops (or equivalent) team to make any network configuration/permission changes. Whether that change is implemented by a local firewall rule or some AWS configuration change is not very important.

It's not like you're going to have developers changing AWS access permissions directly. Maybe in a few-employee startup, but in any regulated and audited company, you must have separation of duties and an audited change control process.

MaKey
0 replies
1d4h

The layer will still be there, because those teams are now managing some cloud infrastructure central to the organization.

photonthug
4 replies
1d3h

> I think a lot of orgs move to cloud simply because it's popular

This can be rational and not just following the leader. In particular, many devs might think that working at an org that does on-prem is bad for their career, and they might be right. So from an org's POV, you can't hire good engineers if you're perceived as a dinosaur. That alone might be enough to send you towards the cloud, even if the price by itself makes no sense.

Radim
3 replies
1d2h

I've experienced the opposite too: orgs looking down on cloud-only devs.

The idea being that devs who lean on the cloud excessively do so to mask their lack of fundamentals, which will cause costly fuck-ups no matter what technology they use, cloud or on-prem.

Maybe directionally similar idea to hiring ex-Googlers? Some orgs also don't like those. Specific mindset, specific toolbox.

shermantanktop
2 replies
1d1h

It is absolutely true that some devs have the AWS product set as the tech toolkit they know best.

Whatever their fundamental skills are, the most important way they add value is by optimizing things like Lambda startup time or EC2 CPU utilization. Does this allow them to mask deep problems with fundamentals? I guess it could, but that sounds a bit gatekeep-y to me.

photonthug
1 replies
1d

Sort of, but IDK. If you have specific needs, this might be a somewhat reasonable heuristic for hiring.

Devs who came up building software more or less from scratch really do have a different skillset than ones who stick to working in service-rich environments, because there's a significant difference between gluing services together vs building out those same services. For example, something like using a paginated API is quite a bit easier than designing/implementing one. A developer who is skilled and methodical about reading and understanding service-level documentation may not actually be able to step through debugging in a REPL, and vice versa. (Not to say that either kind of person cannot learn the other person's tricks, but as far as the differences in what they already know go, those can be pretty significant.)

Assuming someone only has one of these skillsets, the most valuable one totally depends on the situation. On the one hand it's pretty cool that service-familiarity tends to be language-agnostic, but it's less cool when your S3-API expert barely understands the basics of tooling in the new language.

shermantanktop
0 replies
22h6m

A paginated API is a great example. For me, I learned C from K&R, producing a.out files that would segfault and leave a core file in $HOME. If I wanted a list structure, I had to build it out of resizable arrays of pointers, etc.

I ended up years later at AWS, and while I was there I built internet-facing paginated APIs over resources which had a variety of backing stores, each of which had some behavior I had to reason about.

So I don't doubt the difference between API builder and API user; I've been both. I think it's less about what you are doing and more about how you do it (with curiosity about how things work, vs. as an incurious gluer).

That said, looking at the code inside MySQL is highly instructive for the curious; AWS doesn’t provide that warts-and-all visibility into their implementations, which cuts off the learning journey through the stack.

ok123456
0 replies
1d3h

There's also now regulatory capture. Your bare-metal or VPS solution won't be FedRAMP approved, even though there are fewer moving parts to secure.

mx_03
0 replies
1d2h

I have worked in places where adding a new server is a bureaucratic nightmare at best.

Granted, I don't think that's the norm, but hosting your webserver yourself is also not as user-friendly as AWS.

People always forget that.

andrewstuart
10 replies
1d6h

> This whole "cloud bad" is just childish.

Not childish... it's a growing line of thought in the IT community, which has bought the cloud sell unquestioningly for 20 years.

robertlagrant
8 replies
1d6h

But it's the same mindset as "cloud good", which was also a growing line of thought once. Mantras aren't useful; tradeoff analysis is useful.

mcny
6 replies
1d6h

Mantras are good for orgs that are not mature enough to do actual analysis. A lead developer recently left where I work, and while higher pay was likely the biggest factor in the move, I suspect the real reason he left is that the higher-ups simply don't listen when he says things like "you can't just lift full virtual machines onto Azure", and refuse any rewrite/redesign while complaining about high Azure spend.

steveBK123
4 replies
1d4h

AWS is infamous in financial services for this though.

First they give you a ton of credits, assign you internal resources to help.

Then they encourage you to simply "lift and shift" your workloads onto EC2/EBS/EFS/etc. It's 100% compatible with your current system, you can roll back, etc. This takes two years, and then you notice your AWS bill is 10x your old infra.

Then they say: of course, that's because you need to rewrite it all to serverless/microservices/etc., the whole bespoke AWS-branded alphabet soup of services. Now you are fully entrapped and cannot roll back to your own infra, let alone move to another cloud provider, without another rewrite.

A lot of big financial firms are 5+ years into this. Several have rolled back for certain use cases due to cost, especially anything with a lot of data transfer because yeah.. performant storage in the cloud & egress are expensive, duh.

robertlagrant
3 replies
1d3h

You can still use standard stuff like Kubernetes, even if you go microservices. I don't think it's that bad.

I'd say the cloud lets you do a few things, but the way I ultimately think of it is that it lets you spend opex instead of capex. If that means, though, that your opex will end up higher than your capex, then it would be silly to go with it.

The other thing is in theory your reliability should be higher, but, again, that will depend on your individual situation, and how much reliability matters to you.

steveBK123
2 replies
1d3h

You CAN, but of course that's not what AWS steers you to.

Once your org has gotten to that step, it's been so steered by AWS staff that it's hard to imagine it suddenly finding sense and building with open-standard stuff. Very few AWS shops I have encountered avoid the siren call of various AWS-only or AWS-specific services, which they then become heavily ingrained in.

Generally I do think it's mostly about transforming CapEx to OpEx, with the rest of the stuff being noise.

rescbr
1 replies
1d

I was one of those AWS people that worked with Financial Services customers.

We (at least my team) were always pushing for a minimum of modernization when architecting migration projects - even a simple move to containers, managed by whatever orchestrator they want to run. It helps enormously on costs, if only by taking so many overhead and overprovisioned VMs off the roster of migration candidates.

More often than not, the customer will refuse that and opt for lift + shift. It's either too hard, they don't have the resources or time, etc.

robertlagrant
0 replies
23h42m

Yep, I did a (tiny by your scale) lift-and-shift-plus - basically took a thing that ran as a regular process with a database attached, containerised it (and pen tested it, but that's another story), and plugged it into a cloud-managed database. It worked great.

robertlagrant
0 replies
1d5h

Well, then they can flip a coin for which mantra to follow. If you pick "cloud bad", you'll also get stories about companies that refused to go to the cloud when it made sense to.

whstl
0 replies
1d4h

The GP said "[if X happens] consider dropping the cloud". Which is totally different from a mantra.

There is virtually nobody saying "cloud bad" without nuance.

mlhpdx
0 replies
1d3h

Oh how I wish it had been so. The cloud has been a hard sell all along. Also, 20 years ago S3 and EC2 didn’t exist, so maybe it’s been a little less time than that.

calvinmorrison
1 replies
1d6h

The cloud is better and worse than bare metal. It depends on the use case.

AWS is Kafkaesque though

Ringz
0 replies
1d2h

I have had the same experience. And all this, even though Amazon has granted us really generous and free annual plans and professional advice (all inclusive for a non-profit GmbH).

whstl
0 replies
1d4h

There is a massive difference between "[if X happens] consider dropping the cloud" and "cloud bad".

cdchn
8 replies
1d7h

Which VPS services don't charge you at all for bandwidth?

dijit
3 replies
1d6h

You'd be hard-pressed to find ones that do (before a reasonable limit):

tilaa.com

vultr.com

hetzner.com

linode.com

JoshuaRogers
1 replies
1d1h

While they might not charge you directly as a line item, you still get charged: Linode in the above list is what I use. I get a fixed cap of bandwidth each month. Anything beyond that is charged. So, you don't get charged IF you stay below the initial cap.

cdchn
0 replies
3h12m

Exactly this. They're all charging for bandwidth.

If anything, you could say that AWS is the closest to actually having unlimited bandwidth (or at least half-parity there), since they don't charge you for incoming data, whereas other VPS providers charge you for data both ways.

Really, which one has more or less expensive bandwidth comes down to the shape of your data usage.

Sebb767
0 replies
1d1h

> before a reasonable limit

Which is another way of saying you buy a lump sum of bandwidth included with your VM.

The issue is not paying for bandwidth; the issue is the insane pricing.

champtar
0 replies
1d6h

OVH (except for APAC data centers); I think Scaleway also has bandwidth included.

antonvs
4 replies
1d6h

> Host it yourself.

If you're actually using the features of cloud - i.e. managed services - then this involves building out an IT department with skills that many companies just don't have. And the cost of that will easily negate or exceed any savings, in many cases.

That's a big reason that the choice of cloud became such a no brainer for companies.

sgarland
3 replies
1d4h

Next you’ll tell me that full-stack is a lie, and devs don’t actually know how to run a DB.

xboxnolifes
1 replies
22h9m

Full stack is a lie because devs don't build their own hardware.

sgarland
0 replies
15h36m

I don’t think it’s that ridiculous (the next obvious goalpost being producing your own silicon), but since microservices means your DB is purely for your service, it stands to reason that you should also then know how to configure, backup, maintain, and tune the DB.

This is of course a wildly unrealistic ask, which is why I think the idea of merging jobs to save money is stupid. Let people who like frontend do frontend. Let people who like DBs do DBs. Let people who tolerate YAML do DevOps.

blantonl
0 replies
1d3h

Maybe I don't want to manage a db, I just want to run one.

crabbone
1 replies
1d5h

I don't think "host yourself" in this instance would've helped. I think AWS in this instance is operating at a loss. Author found a loophole in AWS pricing and that's why it's so cheap. Doing it on their own would've been more expensive.

Now, as to why the AWS pricing is the way it is... we can only guess, but it's likely to promote one service over another.

amluto
0 replies
1d2h

AWS is surely operating at a loss for this particular case (zero S3 charge).

But there is no way that cross-AZ traffic costs AWS anywhere near what they charge for it.

barkingcat
1 replies
1d2h

Also, if you really need to transfer hundreds of TBs of data from South Africa to the US, put it on a pallet of disks and send it via bulk shipping.

Then load the data into the datacentre and just pay for the last sync/delta.

lijok
0 replies
1d1h

This makes no sense.

So what do I do if I'm at the point where I'm doing sophisticated analysis of on-prem costs? Do I move to the cloud?

belter
0 replies
1d3h

Those VPS hosting services are a solution for a startup, but not for running your internet banking, airline company, or public SaaS product. Plus, their infinite egress bandwidth and data transfer claims are only true as long as you don't... use infinite egress bandwidth and data transfer.

api
0 replies
1d6h

There are also many bare metal and VPS providers that charge radically less for bandwidth or even charge by size of pipe rather than transfer.

Cloud bandwidth pricing is… well the best analogies are very inappropriate for this site.

Turing_Machine
0 replies
20h17m

The tipping point for self-hosting would be the point at which paying for full-time, 24/7 on-call sysadmins (salary + benefits) is less than your cloud hosting bill.

That's just not gonna happen for a lot of services.

Jean-Papoulos
0 replies
1d3h

> Seriously, if you're at the point that you're doing sophisticated analysis of cloud costs, consider dropping the cloud.

The blog post's solution is relatively simple to put in place; if you're already locked in to AWS, dropping it will cost quite a lot, and this might be a great middle ground in some cases.

develatio
22 replies
1d6h

I'll share my trick :)

Lightsail instances can be used to "proxy" data from other AWS resources (e.g. EC2 instances or S3 buckets). Each Lightsail instance has a certain amount of data transfer included in its price ($3.5 instance has 1TB, $5 instance has 2TB, $10 instance has 3TB, $20 instance has 4TB, $40 instance has 5TB). The best value (dollar per transferred data) is the $10 instance, which gives you 3TB of traffic.

Using the data provided by the post:

3TB worth of traffic from an EC2 would cost $276.48 (us-east-1). 3TB worth of traffic from an S3 bucket would cost $69.

Note: one downside of using Lightsail instances is that both ingress and egress traffic counts as "traffic".
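
For illustration only, here's roughly what such a pass-through proxy could look like on the Lightsail box - a minimal sketch assuming boto3 is installed and the instance has credentials with s3:GetObject on the (placeholder) bucket; note the terms-of-service caveat raised in the replies below:

    # Minimal sketch: stream S3 objects out through the Lightsail instance's included transfer.
    # BUCKET, region, and port are placeholders, not anything from the comment above.
    import boto3
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    BUCKET = "my-bucket"
    s3 = boto3.client("s3", region_name="us-east-1")

    class S3Proxy(BaseHTTPRequestHandler):
        def do_GET(self):
            key = self.path.lstrip("/")
            obj = s3.get_object(Bucket=BUCKET, Key=key)
            self.send_response(200)
            self.send_header("Content-Length", str(obj["ContentLength"]))
            self.end_headers()
            # Stream in 1 MiB chunks so large objects never sit fully in memory.
            for chunk in obj["Body"].iter_chunks(chunk_size=1024 * 1024):
                self.wfile.write(chunk)

    if __name__ == "__main__":
        ThreadingHTTPServer(("0.0.0.0", 8080), S3Proxy).serve_forever()

Both the S3-to-Lightsail hop and the Lightsail-to-internet hop draw on the instance's bundled transfer allowance, which is the whole point of the trick (and, per the note above, the ingress leg counts against it too).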

jonatron
15 replies
1d6h

https://aws.amazon.com/service-terms/

51.3. You may not use Amazon Lightsail in a manner intended to avoid incurring data fees from other Services (e.g., proxying network traffic from Services to the public internet or other destinations or excessive data processing through load balancing or content delivery network (CDN) Services as described in the technical documentation), and if you do, we may throttle or suspend your data services or suspend your account.
sangnoir
8 replies
20h11m

You may not use Amazon Lightsail in a manner intended to avoid incurring data fees from other Services

This requires proving the user's intent, which is not obvious except in the most blatant of cases (i.e. using Lightsail as a bent-pipe by writing the exact bytes you're reading). If it is a "CSV to Parquet translation layer", how would AWS possibly prove it's anything other than what it claims to be? You'd be paying a few more cents for compute, but that's the price of plausible deniability.

greyface-
4 replies
19h54m

This requires proving the user's intent

Companies are permitted to deny service to anyone at any time for any (non-protected) reason. They typically don't have to justify service terminations to a court of law. Who would they be required to prove user intent to, and why?

cstrahan
3 replies
15h19m

I don't think you parsed their message correctly. It's not about litigation.

Re-posting a bit of the service terms for easy reference:

51.3. You may not use Amazon Lightsail in a manner intended to avoid incurring data fees from other Services [...]

As you point out, they may terminate your service without any justification in a court of law. So how do they go about terminating the offenders? Well, one trivial way (from a technology and/or policy perspective): terminate everyone's service! If you blindly terminate everyone's service, that will certainly prevent anyone abusing LightSail.

But that's, uh, not good for business. So they probably want to terminate the service of only those people actually abusing it. But how do you do that?

You'd have to look at each account's usage and do something to determine if that traffic is or isn't a means of avoiding data fees from other services. In other words, you'd have to determine the intent of that traffic. Or, put yet another way: "this requires proving the user's intent".

If doing so were as trivial as detecting any traffic between Lightsail and the other services, they'd just prevent such connections in the first place. So how can AWS tell if some traffic between services is legitimate or not? The unspoken premise of the person you're replying to is that it probably isn't feasible for AWS to catch each and every person abusing Lightsail in this way, with the conclusion being that you can (in practice) probably get away with it unnoticed.

greyface-
2 replies
14h37m

We disagree on the definition of "prove". I would not object to the claim if it had used "determine" or "detect" instead of "prove".

That said, detection is easy. Look for users who spin up a Lightsail instance and use close to 100% of its bandwidth quota before spinning it down. Sort by number of such instances, and tell all users above some cutoff that in your sole discretion you believe they have violated your TOS, and are terminating their service. Doing so is completely legally defensible.
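
Purely as an illustration of the kind of heuristic being described - the record fields and thresholds below are invented for the sketch, not anything AWS has documented:

    # Flag accounts that repeatedly run short-lived Lightsail instances to near 100% of quota.
    from collections import Counter

    def suspicious_accounts(records, min_util=0.95, max_lifetime_hours=72, cutoff=5):
        # records: iterable of dicts like
        #   {"account": "123", "quota_gb": 3072, "used_gb": 3050, "lifetime_hours": 20}
        hits = Counter(
            r["account"]
            for r in records
            if r["used_gb"] >= min_util * r["quota_gb"]
            and r["lifetime_hours"] <= max_lifetime_hours
        )
        return [acct for acct, n in hits.items() if n >= cutoff]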

usr1106
0 replies
11h10m

I always assumed that your free quota is proportional to the time you pay. Even the price is not the advertised fixed $3.5; you pay less in months with 30 days than in months with 31 days.

I have not checked my cost and usage reports every time I have had some experimental instance for a shorter time, so I am not sure. Just from the general knowledge that AWS is always counting every fraction of a peanut. But as the submission shows, exceptions to the rule can exist.

sangnoir
0 replies
13h2m

GP here - feel free to replace "prove" with determine because that's what I meant. My point was that it is really hard for Amazon to detect data exfiltration when it's disguised as some other run-of-the-mill service. Amazon can cancel anyone's service at anytime, but they can't afford to piss off legitimate customers with capricious, undeserved bans due to false positives. Regardless of where AWS draws the line to separate abuse from legit usage, it will always be possible to skirt underneath it. The crux of my argument is that AWS will tolerate false negatives over false positives.

manquer
2 replies
19h5m

This is not a court. Amazon does not have to prove anything to anyone.

There is going to be a program that will have rules to detect patterns in customer traffic and automatically block when those patterns are tripped.

At best you could complain in the forums and maybe if you are lucky a sympathetic community manager may look into your use case.

cstrahan
1 replies
15h14m

Amazon does not have to prove anything to anyone.

True. And no one has said that they must prove anything to anyone.

Amazon wants to make money, so they probably don't want to terminate the service of people who are acting in good faith. But that's just another way of saying that they probably want to determine with some certainty that someone is not acting in good faith before terminating their service.

So it's not that Amazon needs to prove anything to anyone. But they do want to prove something to themselves.

manquer
0 replies
12h0m

In this case they are actually losing money, not gaining, by allowing this kind of abuse: the bandwidth usage costs money, and there is potential lost billing from other services that now goes unbilled.

The Lightsail-style billing model works the same way shared vs. leased lines work: if everyone fully used their max allocation, it wouldn't be possible to offer the service at that price point. They can offer 2TB or 4TB for that price because the usage modelling of the target users supports it.

No company wants a customer to bypass their usage and pricing ToS even if they are not actively enforcing it; it is lost revenue and/or brings in customers you don't really want.

develatio
2 replies
1d6h

I had a suspicion that this was against AWS's terms, but I never bothered to look if that was actually the case. Thank you for the heads up!

slashdev
1 replies
1d5h

It’s mostly in there to scare people into not doing it. AFAIK they’ve never taken action on that.

Of course if you abuse it, you’re asking for trouble.

mdasen
0 replies
23h18m

As someone who has dealt with users who use a system in an unintended way, you don't go looking for those people and you don't build something to enforce a policy like this. When you're running services for lots of customers, you often don't know a lot of what's going on in the system and how people are using it. Then something seems weird or something is causing a problem and you want to deal with it - and you want the language out there so that you can deal with it.

In Amazon's case, their bandwidth pricing isn't really defendable. It's just crap. However, sometimes you're trying to offer something reasonable, but need to make sure that a customer doesn't end up abusing something. For example, Chia is a cryptocurrency that will basically wear through SSDs (it's a proof-of-space system). There aren't explicit limits on how frequently you can write to a disk from most hosting providers, but Chia goes beyond what normal usage would do to a disk. Chia farmers would rather burn someone else's SSD that they're renting than their own. But no one at most hosting providers was probably looking at how frequently people were writing before noticing "hey, why are the disks failing faster than we'd expect?"

They probably haven't taken action on it because they probably haven't noticed it being a problem. But if you're a whale of a customer and suddenly your data transfer charges drop off a cliff, someone might end up looking into that and seeing what's going on.

Hamuko
1 replies
1d5h

At least AWS is fully aware of how premium their normal data transfer pricing is and that one might want to optimise those costs.

vidarh
0 replies
19h2m

It's extreme enough that I never willingly serve data directly from AWS without a caching proxy elsewhere in front unless the egress is tiny.

It takes very low hit rates before it pays for itself several times over, including management overheads.

Sometimes you can justify a complete replica outside AWS (one of the things I will gladly pay AWS for is durability)

mmh0000
0 replies
1d2h

Yeah, but "service terms" are just recommendations that should often be ignored.

rfoo
2 replies
1d5h

Here's another one:

You can download 1TB of data for free from AWS each month, as Cloudfront has a free tier [1] with 1TB monthly egress included. Point it to S3 or whatever HTTP server you want and voila.

[1] It used to be 50GB per month for the first 12 months. It was changed to 1TB free forever shortly after Cloudflare posted https://blog.cloudflare.com/aws-egregious-egress

overstay8930
1 replies
1d2h

That "shortly" was 2 years, that Cloudflare post had nothing to do with it, Amazon barely considers them a competitor to begin with.

rfoo
0 replies
11h21m

Sigh, I should have quoted both sides, here it is: https://aws.amazon.com/blogs/aws/aws-free-tier-data-transfer...

The Cloudflare rant was posted in July 2021; the new CloudFront free tier was there in Nov 2021. I consider AWS changing pricing like this within 4 months pretty fast.

Where does your 2 years number come from?

andruby
1 replies
1d4h

Nice!

Nitpick: $5 for 2TB is better than $10 for 3TB.

develatio
0 replies
1d2h

Ooohhh!! It is, indeed!

intelVISA
0 replies
20h37m

Nice trick, but you are playing with fire due to AWS's terms.

quickthrower2
15 replies
1d6h

This is a loophole. Hitting some loss leader at AWS, but if everyone only buys the $1 hotdog and nothing else then the $1 hotdog gets removed.

api
11 replies
1d6h

It’s not a loss leader. Cloud bandwidth pricing is almost pure profit.

martinald
9 replies
1d4h

It's absolutely amazing that so many devs don't realise this. They seem to think that bandwidth should cost a few cents a gigabyte, when in reality it is virtually free. Perhaps the 7c/GB charge was reasonable when AWS came out 15 years ago, but networking has got orders of magnitude cheaper and faster in the intervening time period.

What's more, now that 1 gigabit+ home connections are available, it should be obvious to anyone doing the math that it can't cost that much; otherwise a 200GB CoD install would be costing the ISP $20.
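
A quick back-of-the-envelope check of that claim, assuming AWS-style egress rates of roughly $0.07-0.09/GB (the figures mentioned elsewhere in this thread):

    # What a single 200 GB game download would cost an ISP at cloud egress rates.
    install_gb = 200
    for rate in (0.07, 0.09):
        print(f"{install_gb} GB at ${rate}/GB = ${install_gb * rate:.2f}")
    # -> roughly $14-18 per install, in the ballpark of the $20 figure above.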

unclebucknasty
4 replies
1d3h

it is virtually free

The infrastructure comes at some cost though, right? And there must be some cap on the bandwidth / throughput that a given infrastructure can handle.

So, given these, does it make sense to price bandwidth as a throttle?

martinald
3 replies
1d2h

That's why I said 'virtually'.

Hurricane Electric does 40gig/sec IP transit for $2k/month.

Assuming you used 50% of the capacity of that link that's about 1/200th (I think, numbers are so small) of the cost of AWS for bandwidth.
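
Working that out explicitly (assuming the $2k/month, 40 Gbps link above, 50% average utilisation over a 30-day month, and AWS's ~$0.09/GB list egress rate):

    # Rough per-GB cost of flat-rate transit vs. AWS list egress.
    gbps_used = 40 * 0.5                        # 50% average utilisation
    gb_per_month = gbps_used / 8 * 86400 * 30   # ~6.5 million GB per month
    transit_per_gb = 2000 / gb_per_month        # ~$0.0003/GB
    print(f"${transit_per_gb:.5f}/GB -> AWS list price is ~{0.09 / transit_per_gb:.0f}x higher")
    # -> a couple hundred times cheaper, roughly consistent with the 1/200th estimate above.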

Aissen
1 replies
1d

And at scale, AWS does not pay the HE price. So add another factor 3 to 10 there.

api
0 replies
1d

Yep. Big cloud bandwidth is a 200X markup from list price. It's ludicrous.

It serves two purposes for them. One is obviously a nice profit center. The other is that free ingress but expensive egress causes data to flow in but not out, creating a center of gravity and a form of lock in.

unclebucknasty
0 replies
4h32m

That's why I said 'virtually'.

I hear you, and that is an egregious margin. Just wondering if part of their bandwidth pricing calculation is driven by a goal of constraining their infrastructure costs (or other considerations beyond profit). I'm actually wondering this exactly because it is so egregious.

There is of course a thing wherein if something is free people mindlessly use it. If all AWS customers did this with bandwidth, I wonder how it would impact total usage and AWS's subsequent infrastructure considerations.

I'm no fan of their pricing and I'm sure there's an unhealthy dose of greed in there. Your phrasing just prompted me to consider what other factors might also be involved. And, if part of the rationale is actually to influence customer behavior with disincentives, then by definition there would have to be some pain involved.

akira2501
1 replies
22h16m

when in reality it is virtually free

They're not paying for bandwidth, but their connections are not asymmetric, so they need to balance egress and ingress or they will incur fees or dropped traffic.

The pricing is there to maintain this balance. Since they're obviously egress heavy, it makes sense for them to charge for egress, and make ingress free.

People think AWS is using costs to "tax" you; what they're really doing is using them to control the shape and size of their traffic.

api
0 replies
4h19m

If this is true then how do so many other companies not charge this way? VPS companies that charge radically less and bare metal / colocation hosts that charge flat rates are all profitable and their networks work fine.

Add to that the fact that people often explicitly choose these smaller providers because they have cheap bandwidth, meaning they're going to be a magnet for high bandwidth users like DIY CDNs, streaming, game servers, TURN servers, video conferencing relays, etc.

I find it hard to believe that AWS or GCP are getting core Internet bandwidth on worse terms than much smaller companies like Vultr, Hivelocity, Datapacket, or OVH.

I call BS.

mad182
0 replies
12h27m

Yep. I got over 250TB/month traffic (150TB+ outbound) on Hetzner, and I don't pay anything additional for that, just ~$800/month for 11 servers.

At 7c/GB that would be over $10,500 just for the traffic alone, and probably about the same for the processing power.

api
0 replies
1d

I feel like an entire generation of devs have been weirdly brainwashed by cloud to believe that a ton of things need to be very complex and expensive.

Of course it’s also a zero interest rate phenomenon. We are exiting a >10 year era when the name of the game was simply to grow and anything in your way could be dealt with by just throwing money at it. Nobody cared about cost as long as growth numbers went up.

quickthrower2
0 replies
14h42m

Ok, “loss” is a relative word here… a loss compared to what they could have got from you.

Somehow AWS has to rip you off, so if there is a non-rip-off gateway to the rip-off - if you can use the non-rip-off to avoid another rip-off - they will close the “loophole”.

danielklnstein
2 replies
1d6h

I'm not sure how this could be removed - the fundamentals behind it are basic building blocks of S3.

Maybe raising the cost of transient storage? e.g. If you have to pay for a minimum of a day's storage - but even if that was the case this would still be cost-effective, and at any rate it seems very unnatural for AWS to charge on such granularity.

+ I would guess that S3 is orders of magnitude more profitable for AWS than cross-AZ charges, so I'm not sure they'd consider it a loss-leader.

kevincox
1 replies
1d

It would be fairly easy to change the pricing policy. GCP did something similar for cross-region https://cloud.google.com/storage/pricing-announce#network. This is pretty severe because it seems to affect all reads. However I can imagine an alternate implementation where the source AZ is tracked when data is written and egress fees are charged when the data is read (as if the data was always stored in the source AZ). This could even be done in more complex ways, such as only charging the first time data is read in another AZ: once you've read it once it is free, as if it is now cached in that new AZ forever. Another option would just be raising the minimum storage duration so that it basically costs all or most of what the data transfer would.

It would definitely piss a lot of people off as it is adding to their bill, but it could likely be done in a way that makes exploiting this for just data transfer not worth it without adding huge costs to most "real" use cases.

danielklnstein
0 replies
23h7m

Yeah, I see what you mean - that'd indeed render this method ineffective. Like you said I'm sure this would bother a lot of customers, but it's not a completely unrealistic overhaul of S3 pricing.

That being said, that'd be sort of "mean" of AWS to do - the data is already replicated across AZs whether you pay for it or not because of how S3 works.

jakozaur
7 replies
1d7h

S3 is a nice trick. More tricks:

1. Ask for discounts if you are a big AWS customer (e.g., spend $1M+/year). At some point, the discounts were huge for inter-AZ transfers.

2. Put things in one AZ. Running the DB in the "b" zone and your only server in "a" is even worse than just standardizing on one zone (a quick way to check an instance's placement is sketched after this list).

3. When using multiple AZs, do load-aware AZ balancing.
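
As referenced in point 2, a minimal sketch (assuming it runs on an EC2 instance with IMDSv2 reachable) of checking which AZ an instance actually landed in, so deploy tooling can verify the app and its DB are co-located:

    # Query the instance metadata service (IMDSv2) for this instance's availability zone.
    import urllib.request

    def instance_az() -> str:
        token_req = urllib.request.Request(
            "http://169.254.169.254/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
        )
        token = urllib.request.urlopen(token_req, timeout=2).read().decode()
        az_req = urllib.request.Request(
            "http://169.254.169.254/latest/meta-data/placement/availability-zone",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(az_req, timeout=2).read().decode()

    print(instance_az())  # e.g. "us-east-1a"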

throwaway167
3 replies
1d6h

Running DB in "b" zone and your only server in "a"

There must be use cases for this, but I lack imagination. Cost? But not cost?

sokoloff
0 replies
9h18m

I can't see a reason to do this intentionally within a single account, but use cases with multiple accounts should be aware that the AZ named us-east-1a in Account 1 is not necessarily the same physical AZ as the one named us-east-1a in Account 2.

https://docs.aws.amazon.com/ram/latest/userguide/working-wit...
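
A minimal sketch of how to see that mapping for a given account, assuming boto3 and configured credentials - the zone IDs (use1-az1 and so on) are the stable identifiers that line up across accounts, while the letter suffixes are shuffled per account:

    # Print this account's AZ-name -> AZ-ID mapping for us-east-1.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    for az in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(az["ZoneName"], "->", az["ZoneId"])  # e.g. us-east-1a -> use1-az6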

jakozaur
0 replies
1d4h

Defaults. Either as code or via ClickOps.

Many companies run servers without considering AZs. Then you can get the "best" of both worlds:

1. Your service is down if either AZ gets hiccups.

2. You pay network charges and a latency cost.

QuadmasterXLII
0 replies
1d4h

Cost, unlimited cost- but no cost.

pibefision
2 replies
1d7h

4. Activate the S3 Intelligent-Tiering storage class?

endgame
0 replies
1d5h

You've got to be careful of the automation charge with Intelligent Tiering.

https://discourse.nixos.org/t/the-nixos-foundations-call-to-...

danielklnstein
0 replies
1d7h

This is great for saving on S3 storage costs!

But in the context of data transfer costs, this would actually increase the costs, because there's a small surcharge for Intelligent Tiering - and the only relevant storage class for sidestepping data transfer costs is standard storage (because it's the only one with free download), so Intelligent Tiering won't provide value.

ishitatsuyuki
6 replies
1d6h

GCP patched a similar loophole [1] in 2023 presumably because some of their customers were abusing it. I'd expect AWS to do the same if this becomes widespread enough.

[1]: https://cloud.google.com/storage/pricing-announce#network

cedws
3 replies
1d2h

I know of a way to get data out of GCP for free, although I haven't tried it in practice. Wonder if I could find a buyer for this info ;)

greyface-
1 replies
20h6m

A guess: tunnel through 169.254.169.254 DNS server?

cedws
0 replies
17h3m

It's a good guess, but what I have in mind would have high bandwidth.

BonoboIO
0 replies
1d1h

I got one for Azure, where it would be nearly free to egress data from Azure to any other cloud provider or the internet.

It works, but I have no use case for it. 100TB of egress to the internet costs about $7,000... I think I could do it for $20-$50.

rfoo
0 replies
1d5h

Unlikely. The "loophole" GCP patched was that you can use GCS to transfer data between regions on the same continent for free. This is already non-free on AWS. What OP mentioned is that transferring data between availability zones *in the same region* also costs $0.02 per GB and can be worked around.

Cthulhu_
0 replies
1d2h

This doesn't feel like a loophole though, it feels like they have optimized S3 and intend your EC2 instances to use S3 as storage. But maybe not as transfer window, that is, they expect you to put and leave your data on there.

mlhpdx
5 replies
1d3h

I’ve been deploying 3x AZs in 3x Regions for a while now (years). The backing store is regional S3 buckets (keeping data in the local compliance region) and DDB with replication (opaque indexing and global data), with Lambda or Sfn for compute. So far data transfer hasn’t risen to the level of even a tertiary concern (at scale). Perhaps because I don’t have video, Docker or “AI” in play?

hipadev23
4 replies
1d1h

I’m guessing either you don’t have much data or your infra is already so absurd that, yeah, the transfer costs are irrelevant by comparison.

mlhpdx
3 replies
21h59m

Not using VPCs (no need without instances/containers/RDS) means most of the “absurd” costs go away. It’s cheap by any standard.

hipadev23
2 replies
20h46m

VPCs don’t cost anything.

mlhpdx
1 replies
19h50m

Sorry, that was indeed nonspecific, you’re right. The add-on features for VPCs are commingled with the concept for me since they almost always go hand in hand. Internet gateways, transit gateways, EIPs, service endpoints, etc., and their fixed costs. Yuck.

icedchai
0 replies
15h27m

All that stuff definitely adds up. I'm familiar with some low-traffic projects with high "security" requirements that have so much overhead due to those sorts of add-ons. All the overhead winds up costing more than the actual compute + bandwidth running the site.

andersa
5 replies
1d6h

I don't understand how AWS can keep ripping people off with these absurd data transfer fees, when there is Cloudflare R2 just right over there offering a 100 times better deal.

tnolet
0 replies
1d5h

we just built a new feature for our pretty bandwidth heavy SaaS on R2. Works pretty damn good with indeed massive savings. We just use the AWS-SDK (Node.js) and use the R2 endpoint.

perryizgr8
0 replies
1d1h

When all my VMs and containers are hosted in AWS, and S3 has rock-solid support no matter what language, framework, or setup I use, it becomes really tough to ask the team to use another vendor for object storage. If something goes wrong with R2 (data loss, slow transfer, etc.) I will get blamed (or at least asked for help). If S3 loses data or performs slowly in some case, people will just figure we're somehow using it wrong. And they will figure out how to make it better. Nobody gets blamed. And to be honest, data transfer fees are negligible if your business is creating any sort of value. You don't need to optimise them.

karlkatzke
0 replies
1d3h

Data has "gravity" -- as in, it holds you down to where your data is, and you have to spend money to move it just like you have to spend money to escape gravity.

fabian2k
0 replies
1d6h

R2 is still pretty new. I don't know how well it works in practice in terms of performance and availability. And of course durability, which is difficult if not impossible to judge. S3 has a much longer history and track record, so it has the advantage here. And if all your stuff is inside AWS already there are advantages to keeping the data closer. Depending on how the data is used, egress might also not always be such a major cost.

But yes, the moment you actually produce significant amounts of egress traffic it gets absurdly expensive. And I would expect competitors like R2 to gain ground if they can provide reasonably competitive reliability and performance.

akira2501
0 replies
22h22m

I trust Cloudflare far less than AWS. Once my data is in AWS, all applications in the same region as the data can use the data without paying anything in transfer costs.

Also, the prices he quotes are list prices; if you are a customer and you pre-purchase your bandwidth under an agreement, it gets _significantly_ less expensive.

jonatron
3 replies
1d6h

If you're a heavy bandwidth user it's worth looking at Leaseweb, PhoenixNAP, Hetzner, OVH, and others who have ridiculously cheaper bandwidth pricing.

I remember a bizarre situation where the AWS sales guys wouldn't budge on bandwidth pricing even though the company wouldn't be viable at the standard prices.

declan_roberts
2 replies
1d1h

That’s very unusual, I think. Transfer costs seem to be something most people can negotiate.

jonatron
1 replies
23h51m

I hadn't really thought about it much, but from googling it looks like there's a discount programme for a committed spend of around $1M/year. For a small company, that's a lot of money, and it was an unusually large amount of bandwidth for the size of the company. I suppose it makes sense now I know they're interested in companies spending that sort of money.

rescbr
0 replies
23h41m

The trick is to use CloudFront if possible, even if not caching, just passing through requests.

Standard discounts start at 10 TB, which is not that much.

If not using HTTP, then it's a non-starter.

xbar
2 replies
22h20m

After my account started getting bills this month for pennies for which there was no obvious accounting, I slashed my AWS costs by 100%.

I'm back to managing my own systems. So much cheaper and less chance of nonlinear bills.

rospaya
0 replies
8h56m

Probably free tier expiration for some small change. With me it was AWS KMS.

danielklnstein
0 replies
21h53m

In case you ever decide to return to AWS, its Cost Explorer is far from perfect but it can show you where your expenses are coming from, especially if your costs are pennies. In the last re:invent they even released daily granularity when grouping by resources (https://aws.amazon.com/blogs/aws-cloud-financial-management/...).

nodeshift
2 replies
22h28m

Someone in the thread said that if you're 'at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.'

We've built https://nodeshift.com/ with the idea that the cloud is affordable by default without any additional optimization: you focus on your app with no concerns about costs or anything else.

akira2501
1 replies
22h25m

Cost analysis has helped me build great infrastructure on AWS. The costs are communicating to you what is and is not efficient for AWS to do on your behalf. By analyzing the costs and working to reduce them, you also incidentally increase efficiency and, in some cases such as this one, workload durability.

nodeshift
0 replies
23m

Cost analysis should of course form the foundation of everything you build, regardless of whether it's SaaS tooling or infrastructure. But surely it's easier to do a cost assessment and optimization exercise on something that is fundamentally more affordable than AWS and doesn't carry such high margins? That's why we have built a platform that creates all the value at a low cost.

lucidguppy
2 replies
1d5h

This feels like the tech equivalent of tax avoidance.

If too many people do this - AWS will "just close the loophole".

There's not one AWS - there are probably dozens if not hundreds of AWSes - each with their own KPIs. One wants to reduce your spend - but not tell you how to really reduce your spend.

If you make something complex enough (AWS) - it will be impossible for customers to optimize in any one factor - as everything is complected together.

karlkatzke
1 replies
1d3h

This isn't a loophole. This is by design. AWS wants you to use specific services in specific ways, so they make it really cheap to do so. Using an endpoint for S3 is one of the ways they want you to use the S3 service.

Another example is using CloudFront. AWS wants you to use CloudFront, so they make CloudFront cheaper than other types of data egress.

lucidguppy
0 replies
1d2h

If they wanted you to behave in specific ways logically - wouldn't their documentation be less ambiguous?

https://www.lastweekinaws.com/blog/aws-cross-az-data-transfe...

vlovich123
1 replies
1d1h

it’s almost as if S3 doesn’t charge you anything for transient storage? This is very unlike AWS, and I’m not sure how to explain this. I suspected that maybe the S3 free tier was hiding away costs, but - again, shockingly - my S3 storage free tier was totally unaffected by the experiment, none of it was consumed (as opposed to the requests free tier, which was 100% consumed).

It’s also possible their billing system can’t detect transient storage usage. Request billing would work differently from how billed storage is tracked. It depends on how billing is implemented, but that would be my guess. That may change in the future.

adrianmonk
0 replies
19h1m

Maybe some sampling mechanism comes along and takes a snapshot once per hour.

Suppose you store the data there for 6 minutes. Then there's a 90% probability that the sampler misses it entirely and you pay $0. But there's a 10% probability that the sampler does catch it. Then you pay for a whole hour even though you used a fraction of that.

Over many events, it averages out close to actual usage[1]. In 9 out of 10 cases, you pay 0X actual usage. In 1 out of 10 cases, you pay 10X actual usage. (But you can't complain because you did agree to 1-hour increments.)

---

[1] Assuming no correlation between your timing and the sampler's timing. If you can evade the sampler by guessing when it runs and carefully timing your access, then you can save a few pennies at the risk of a ban.
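
The expected value works out exactly as described - a quick check with the numbers from the example above:

    # Data lives 6 minutes; a hypothetical sampler snapshots once per hour.
    p_caught = 6 / 60                              # 0.1 chance the snapshot lands inside the window
    billed_hours = p_caught * 1.0 + (1 - p_caught) * 0.0
    print(billed_hours)                            # 0.1 hour = 6 minutes, i.e. equal to actual usage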

lijok
1 replies
1d1h

There are tons of these tricks you can use to cut costs and get resources for free. It's smart, but not reliable. It's the same type of hacking that leads to crypto mining on github actions via OSS repos.

Treat this as an interesting hacking exercise, but do not deploy a solution like this to production (or at least get your account manager's blessing first), lest you risk waking up to a terminated AWS account.

huslage
0 replies
1d1h

I have used this and other techniques for years and never gotten shut down. Passing through S3 is also generally more efficient for distributing data to multiple sources than running some sync process.

issafram
1 replies
22h58m

I've been looking for a place to store files for backup. Already keeping a local copy on NAS, but I want another one to be remote. Would you guys recommend S3? Wouldn't be using any other services.

spieden
0 replies
13h46m

I use S3 with the DEEP_ARCHIVE storage class for disaster recovery. Costs go up if you have many thousands of files, so be careful there. Hopefully I will never need to access the objects, and it's the cheapest I could find.
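
For reference, a minimal sketch of uploading straight into that storage class with boto3 (the bucket and file names here are placeholders):

    # Upload a backup archive directly to the S3 Glacier Deep Archive storage class.
    import boto3

    s3 = boto3.client("s3")
    s3.upload_file(
        "backup.tar.zst",                 # local archive (placeholder name)
        "my-backup-bucket",               # placeholder bucket
        "2024/backup.tar.zst",
        ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
    )

Bundling many small files into one archive before uploading also helps with the per-object costs mentioned above.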

esafak
1 replies
1d1h

How do people economically run multi-region databases for low latency?

declan_roberts
0 replies
1d1h

Data transfer costs are extremely easy to negotiate with Amazon.

dangoodmanUT
1 replies
1d5h

I've not seen any evidence that multi-AZ is more resilient. There's no history of an entire AZ going down that doesn't affect the entire region, at least that I can find on the internet within 15 minutes of googling.

playingalong
0 replies
1d2h

Do you mean S3 or all services?

If all services, then things like all or most of a single AZ being borked happen fairly often.

arpinum
1 replies
1d4h

Another trick is to use ECR. You can transfer 5TB out to the internet each month for free. The container image must be public, but you can encrypt the contents. Useful when storing media archives in Glacier.

declan_roberts
0 replies
1d1h

Sneaky idea! I love it!

TheNewsIsHere
1 replies
1d3h

This may be arguably nitpicking, but the following statement from TFA isn’t exactly the case:

Moreover, uploading to S3 - in any storage class - is also free!

Depending upon the storage class, the number of API calls your software makes, and the capacity used, you may incur charges. This is very easy to inadvertently do when uploading large volumes of archival data directly to the S3 Glacier tiers. You absolutely will pay if you end up using millions of API calls to upload tens of millions of objects comprising tens of terabytes or more.

danielklnstein
0 replies
1d3h

Thanks for the feedback! I don't think it's nitpicking, you're right that it's misleadingly phrased - in fact, the only S3 costs I observed weren't storage at all, but rather the API calls.

I updated the phrasing.

sebazzz
0 replies
1d

Offtopic but related: Has anyone noticed transient AWS routing issues as of late?

On three or four occasions over the last three months I've noticed that I got served a completely different SSL certificate than the domain I was visiting - a certificate for a domain that often could not be reached publicly, probably pointing to some organization's internal OTA environment. On all occasions both the URL I wanted to visit and the DNS of the site I was actually served were located in AWS. Then, less than a minute later, the issue was resolved.

I first thought it must be on my side - my DNS server malfunctioning or something - but the served sites could not be accessed publicly anyway, and I had the issue on two separate networks with two separate clients and separate DNS servers. I've had it with polar.co's internal environment, Bank of Ireland (iirc), multiple times with download.mozilla.org, and on a few other occasions.

I contacted AWS on Twitter about it, but just got some generic, pointless response that I should file an incident - but I'm just some random user, not an AWS client. Somehow I could not make that clear to the AWS support person on the other side of Twitter.

salawat
0 replies
20h21m

Or... Build your own cloud and transfer data to your heart's content for free (minus power).

rco8786
0 replies
20h46m

There's going to be a huge market for consultants to unwind people's cloud stacks and go back to simpler on-prem/colo (or Heroku-like) deployments in the coming years.

playingalong
0 replies
1d2h

This trades cost for latency. Which is not a big deal for some use cases, but may be a real deal-breaker for some others.

gumballindie
0 replies
1d4h

I reduced them to 0 by not using AWS. This simple trick lets you install and configure dedicated servers that work just fine. Most of your auto scaling needs can be solved using a CDN. But by the time you reach such needs you'd have hired competent engineers to properly architect things - it will be cheaper than using amazon anyway.

glenngillen
0 replies
1d5h

This is clever. And as I understand it, one of the tricks WarpStream (https://www.warpstream.com) use to reduce the costs of operating a Kafka cluster.

explain
0 replies
1d6h

Paying for bandwidth is crazy.

emmanueloga_
0 replies
18h29m

For those suggesting VPSs instead of cloud based solutions, how do you deal with high availability? Even for a small business you may need it to stay up at all times. With a VPS this is harder to accomplish.

Do you set up the same infrastructure on two or more VPS instances and then load balance? (say, [1]). Feels like a bit of an... artisanal solution, compared to using something like AWS ECS.

1: https://www.hetzner.com/cloud/load-balancer

boiler_up800
0 replies
1d3h

S3 storage costs are charged per GB-month, so 1 TB * $0.023 per GB / 730 hrs per month… should be about 3 cents if the data was left in the bucket for an hour. [1]

However, it sounds like it was deleted almost right away. In that case the charge might be $0.03 / 60 if the data was around for a minute. Normally I would expect AWS to round this up to $0.01.

The TimedByteStorage value from the cost and usage report would be the ultimate determinant here.

[1] https://handbook.vantage.sh/aws/services/s3-pricing/
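
Spelling that arithmetic out (assuming the standard-storage rate of $0.023 per GB-month cited above):

    # Prorated S3 standard storage cost for 1 TB.
    per_month = 1024 * 0.023           # ~$23.55 per TB-month
    per_hour = per_month / 730         # ~$0.032, i.e. roughly 3 cents per hour
    per_minute = per_hour / 60         # ~$0.0005 if the object only lives a minute
    print(round(per_month, 2), round(per_hour, 4), round(per_minute, 6))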

TruthWillHurt
0 replies
1d6h

True meaning of Trustless Environment.

Havoc
0 replies
1d6h

It’s unfortunate that such shenanigans are even necessary

DeathArrow
0 replies
1d6h

Can you do the same on Azure?