
Amazon S3 will no longer charge for several HTTP error codes

tyingq
0 replies
22h20m

As an HN reader it added valuable context for me that I was unaware of.

spacebanana7
0 replies
23h18m

To be fair, it is quite a remarkable customer service story & relevant to the article.

jsheard
0 replies
23h10m

I did, but I think it's worth stressing that this didn't actually have the quick two week turnaround you might assume if you first heard about it from the billing horror story posted on here recently. It's been known about forever and only became a priority when it turned into a PR issue.

j-pb
0 replies
23h20m

It gave me pleasure.

Quinner
0 replies
23h17m

Adds historical context as to the duration of an extremely long-lasting problem?

cbsmith
4 replies
23h23m

In fairness, the issue was attended to within weeks after it recently got attention.

recursive
3 replies
22h25m

So, another way of saying this is, it took more than a decade to get attention?

surgical_fire
0 replies
22h23m

something something customer obsession

pavel_lishin
0 replies
21h21m

We prefer to call it "eventual consistency".

cbsmith
0 replies
6h4m

Realistically, Amazon didn't have the scale/resources to mitigate/manage this problem for customers back in the day. It also wasn't a target like it is now. Even a decade ago, it was a comparatively small problem that was no doubt simpler to address on a case-by-case basis.

Being responsive isn't about having infinite resources. It's about prioritization. I doubt at the time this was anywhere near the top of the list for them to fix even for the person who tweeted years ago.

treve
3 replies
23h14m

In the same timespan, Microsoft released Windows 1.0 all the way up to XP

DaiPlusPlus
2 replies
22h14m

Why have things stagnated?

ranger_danger
0 replies
21h33m

People put up with it.

SSLy
0 replies
19h39m

actual innovation has just moved to different problems, now that the desktop OS has been mostly "solved" – until SteamOS showed a fresher way forward, that is.

CSMastermind
1 replies
22h47m

Ahh I see the problem. The steps to get it resolved were not to tell the team about it.

The steps were to raise a big enough fuss that it would undermine customer trust if the team didn't fix it.

swyx
0 replies
22h19m

twitter support tier is the highest of all!

lapcat
11 replies
22h43m

AWS is full of dark patterns. You can sign up for the so-called "free" tier and then all too easily, unwittingly enable something that suddenly charges you hundreds of dollars before you know it (you find out from the bill at the end of the month), even if you're not doing anything with the service except looking around. AWS doesn't give free tier members any warning that a configuration change is going to cost them, and their terms are also very confusing. For example, PostgreSQL is advertised as free, but "Aurora PostgreSQL" is quite costly.

ranger_danger
4 replies
21h36m

AWS doesn't give any warning

It does if you ask it to. You can get billing alerts if current costs are projected to go over a threshold.
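Mechanically, such an alert is just a linear projection of month-to-date spend against a user-chosen threshold. A minimal sketch of that logic (my own illustration; AWS's actual forecasting is certainly more sophisticated):

```python
import calendar
from datetime import date

def projected_month_end_spend(spend_to_date: float, today: date) -> float:
    """Naive linear forecast: assume the current daily run rate continues."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spend_to_date / today.day * days_in_month

def should_alert(spend_to_date: float, today: date, threshold: float) -> bool:
    """Fire when the projected month-end spend exceeds the threshold."""
    return projected_month_end_spend(spend_to_date, today) > threshold

# $50 spent by the 10th of a 30-day month projects to $150 at month end,
# so a $100 threshold fires.
print(should_alert(50.0, date(2024, 4, 10), 100.0))  # True
```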

OptionOfT
1 replies
21h24m

But it's kinda the same as a trial where you have to put in a credit card number.

If they auto-charge once the trial is over, I don't like them. That is a dark pattern.

Equally, with this, AWS could very well ask you, the user, what you'd like to do if you surpass the free tier: charge you, or turn it all off?

On top of that, they could instate default thresholds so that you, the person who just started a free trial, don't get bill shock when you forget to turn off that $200/h machine.

paulddraper
0 replies
20h15m

Almost all trials tell you how much they'll charge you.

lapcat
0 replies
21h20m

My threshold is $0. I was never expecting to get billed on the free tier.

hughesjj
0 replies
21h15m

You have to manually set this up though.

There's so much UX for a prospective new AWS dev that could be improved. Say, a one-click "do you want billing alerts with that?" template, or a "do you want to lock down expensive non-free stuff?" option to set some soft limits to zero out of the gate (necessitating a self-serve support case to unblock).

It's frustrating. It's been this way for over a decade, yet you'll still see new customers cutting themselves on the nuances of the free tier. I get that AWS is 'enterprise real deal do what I say', but I don't think that means you should completely ignore the customer story of new developers just getting their feet wet. It's an area of customer obsession the business has regrettably lacked, and if you go by the continuous stories of people messing it up on HN/Twitter/Reddit/etc, it only becomes more glaring how little the new guys are being taken care of.

akira2501
2 replies
21h17m

unwittingly enable something that suddenly charges you hundreds of dollars before you know it

The default is to have current and estimated monthly cost displayed on your root console as soon as you login. You will also get email alerts when you hit 50% and then 80% of your "free tier quota" in a month.

even if you're not doing anything with the service except looking around.

I'm not aware of any services which will cost you money unless you actively enable them and create an object within its class. Many services, such as S3, will attempt to force you into a "secured" configuration that avoids common traps by default.

For example, PostgreSQL is advertised as free, but "Aurora PostgreSQL" is quite costly.

There's a rubric to the way AWS talks about its internal services that is somewhat impenetrable at first. It's not too hard to figure out, though, if you take the time to read through their rather large set of documentation. That's the real price you must pay to successfully use the "free tier."

Anyways... PostgreSQL is an open-source project. Amazon RDS is a managed service that can run instances of it. Amazon Aurora is a different service that provides its own engine that is _compatible_ with MySQL and PostgreSQL.

To know why you'd use one or the other, the shibboleth is "FAQ," so search for "AWS Aurora FAQ" and carefully read the whole page before you enable the service.

simonw
1 replies
21h4m

"It's not too hard to figure out, though, if you take the time to read through their rather large set of documentation"

I don't even know where I would start with that. https://docs.aws.amazon.com/ lists documentation for 305 different products!

deanCommie
0 replies
16h18m

Well, presumably, from the service you decided to use first?

_adamb
1 replies
20h57m

We have a slack channel, #aws-budget-alerts, where AWS sends a notification any time our forecasted spend reaches certain milestones or the actual spend reaches certain milestones.

It's a really easy app to set up!

sierra1011
0 replies
20h2m

Ooh, that sounds nice. Beats my current slew of emails around week 3 of the month. Got a link to any docs to get set up quickly?

alanfranz
0 replies
14h11m

Most cloud providers work this way somehow. Flexible, pay-as-you-go infra doesn't cope well with fixed pricing.

Fixed price cloud offerings exist for some services, but can end up with an apparently larger sticker price.

dmw_ng
11 replies
23h15m

Can't imagine a change like this would be made without some analysis... would love an internal view into a decision like this. I wonder if they already have log data to compute the financial loss from the change, or if they have sampling instrumentation fancy enough to write/deploy custom reports like this quickly.

In any case, two weeks seems like an impressive turnaround for such a large service, unless they'd been internally preparing to acknowledge the problem for longer.

pdimitar
3 replies
22h50m

Are you for real? Legitimately baffled by your comment.

How about the financial losses of customers that could be DDoS-ed into bankruptcy through no fault of their own? Keeping S3 bucket names secret is not always easy.

dmw_ng
1 replies
22h17m

I prefer your version: Barr replies to a tweet before gatecrashing the next S3 planning session. "A customer is hurting, folks!". The call immediately falls silent with only occasional gasps heard from stunned engineers, and the gentle weeping of a PM. I wonder if Amazon offers free therapy following an incident like this

pdimitar
0 replies
6h1m

Not billing you because a script kiddie ran a script on your S3 bucket is a good start of a therapy, I'd say. :)

xmprt
0 replies
22h38m

I was thinking this too. You're giving AWS a lot of credit if you think they're not going to do some kind of analysis about how much they were making (albeit illegitimately) from invalid responses. I'm just surprised that they either didn't do the analysis beforehand, or, if they did (as the parent commenter suspects), that they were able to get the report out so quickly.

mike_d
3 replies
20h27m

Can't imagine a change like this would be made without some analysis... would love an internal view into a decision like this

Sure, here you go: there was some buzz and negative press, so it gets picked up by the social media managers, who forward it to executive escalations, who loop in legal. Legal realizes that what they are doing is borderline fraud and sends it to the VP that oversees billing as a P0. It then gets handed down to a senior director who is responsible for fixing it within a week. Comms gets looped in to soft-announce it.

At no point does anyone look at log data or give a shit about any instrumentation. It is a business decision to limit liability to a lawsuit or BCP investigation. As a publicly traded company it is also extremely risky for them to book revenue that comes from fraudulent billing.

xmprt
1 replies
20h1m

I find this hard to believe because this issue was known for years.

amzn-throw
0 replies
16h15m

As someone that works at AWS (but not on S3), that's wrong in like eight different ways.

But the only way that matters is the core one - analysis, data, and instrumentation.

AWS does not make these kinds of decisions without a look at the metrics.

londons_explore
2 replies
22h12m

2 weeks seems like an impressive turnaround for such a large service

I assume they were lucky in that whatever system counts billable requests also has access to the response code, and therefore it's pretty easy to just say "if response == 403: return 0".

The fact that this is the case suggests they may do the work to fulfill the request before knowing the response code and doing billing, so there might be some loophole to get them to do lots of useful work for free...
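A toy model of the metering rule being described (purely illustrative; the price constant is a hypothetical figure, and nothing here reflects S3's real billing pipeline):

```python
REQUEST_PRICE = 0.0000004   # hypothetical per-request price in USD, for illustration
UNBILLED_STATUSES = {403}   # per the change, unauthorized requests now cost $0

def billable_amount(status_code: int) -> float:
    """Return the charge for one request, zeroing out unbilled error codes."""
    return 0.0 if status_code in UNBILLED_STATUSES else REQUEST_PRICE

# Five requests, two of which were 403s: only three are billed.
total = sum(billable_amount(s) for s in [200, 403, 403, 404, 200])
```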

hughesjj
0 replies
21h11m

Metering feeds into billing, and we're talking some truly epic levels of data volume. You can kind of see the granularity they're working with if you turn on CloudTrail.

dmw_ng
0 replies
22h6m

do lots of useful work for free

Have often wondered about this in terms of some of their control plane APIs. A read-only IAM key used as part of C&C infrastructure for a botnet might be interesting: you get a DNS/ClientHello signature to a legitimate and reputable service for free, while stuffing "DDoS this blog" e.g. in some tags of a free resource. Even better if the AWS account belonged to someone else.

But certainly, ability to serve an unlimited URL space from an account with only positive hits being billed seems ripe for abuse. Would guess there's already some ticket for a "top 404ers" internal report or similar

chadhutchins10
10 replies
22h52m

We've done it. Now let's re-engineer our apps to use error codes for 200 responses and get free S3 usage.

hunter2_
5 replies
22h25m

If I understand TFA, you'd need to find a way to get S3 (which offers no server-side script execution, only basic file delivery) to emit an error code (403 specifically) alongside a response of useful data. Good luck...

wavemode
1 replies
22h10m

Simple. Just encode all of your app's data and logic as a massive lookup table, each bit of which is represented by an object that either doesn't exist (a zero) or exists but denies access (a one).

When you read a sequential series of keys (404 403 403 404 = 0110) it will either tell you the data you were looking for or the next key name to begin reading from.
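Taken literally, the decoding half of that scheme is only a few lines (a joke sketch to illustrate the comment's 404/403 bit mapping; please don't actually build this):

```python
def decode_bits(status_codes):
    """Map each probe result to a bit: 404 (absent) -> 0, 403 (denied) -> 1."""
    mapping = {404: 0, 403: 1}
    return [mapping[code] for code in status_codes]

def bits_to_int(bits):
    """Interpret the bit list as a big-endian integer."""
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

# The example from the comment: 404 403 403 404 -> 0110 -> 6
print(bits_to_int(decode_bits([404, 403, 403, 404])))  # 6
```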

_flux
0 replies
10h43m

You can also perfectly parallelize those requests, making the operation highly efficient!

dgacmu
1 replies
22h13m

Well, you can probably send out one bit a time by updating your ACLs on a clock (with which your clients are also roughly synchronized) and distinguishing between 403 and 404.

It'd take an awful lot of time to get that data out, though.

ornornor
0 replies
22h4m

It'd take an awful lot of time to get that data out, though.

That’s what glacier is for!

adrianmonk
0 replies
19h29m

It said "never incur request or bandwidth charges". I assume this means you don't pay to compute the response or for the bandwidth to deliver it.

Seems you could compute the response, store it somewhere (memcached or something), and then return an error. Then have the caller make another call to retrieve the response. (To associate the two requests, have the caller generate a UUID and pass it on both calls.)

That doesn't make it entirely free, but it reduces the compute cost of every request to just reading from a cache.

(This does sound like a good way to get banned or something else nasty, so it's definitely not a recommendation.)
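The two-phase flow described above can be sketched with an in-memory dict standing in for memcached; the handler names and return conventions are my invention, not anything S3 actually supports:

```python
import uuid

RESPONSE_CACHE = {}  # stands in for memcached or some other shared cache

def handle_compute(request_id: str, payload: str) -> int:
    """Phase 1: do the expensive work, park the result, answer with an error."""
    RESPONSE_CACHE[request_id] = payload.upper()  # placeholder "computation"
    return 403  # hypothetically unbilled status; the result waits in the cache

def handle_fetch(request_id: str):
    """Phase 2: cheap cache read that hands back the parked result."""
    return RESPONSE_CACHE.pop(request_id, None)

# Caller generates a UUID, passes it on both calls to associate them.
rid = str(uuid.uuid4())
status = handle_compute(rid, "hello")
result = handle_fetch(rid)
```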

pavel_lishin
1 replies
21h18m

I don't know if this counts, but I had something that I would call parasitic happen to me once.

I administered a VBulletin forum, and naturally, we installed all sorts of gewgaws onto it, including an "arcade" where people could play games, share high scores, etc.

This arcade, somehow, came with its own built-in comment system, one where users could somehow register without registering for a proper VBulletin user account on our instance, and thus without admins being notified.

One day, we discovered this whole underbelly community that had apparently been thriving under our metaphorical floorboards, and promptly evicted them. In hindsight, I probably should have found some way to let them stick around, but several things had recently happened that hardened our stance toward any sort of unwanted users.

surfingdino
0 replies
21h33m

I worked on a team with similar cost optimisation gurus... They abused HTTP code conventions and somehow managed to wedge two REST frameworks into the Django app, which at one point had 1m+ users...

beeeeerp
10 replies
23h16m

Now please do this for NXDOMAIN on Route53. This can be a big problem with acquired domains.

SushiHippie
4 replies
22h50m

I just searched for this and this documentation entry came up:

https://docs.aws.amazon.com/whitepapers/latest/aws-best-prac...

I can't believe that their 'fix' is to set a wildcard DNS entry; this feels somewhat like a joke.

Does this mean that a NXDOMAIN response costs more than a successful response?

aeyes
3 replies
22h43m

It has the same cost as a successful response, which can quickly add up to a few hundred dollars per month with a couple of DNS enumeration scans.

Google Cloud and Azure also bill DNS like this. Unless you need some of the advanced features you really shouldn't host your DNS in the big cloud providers.

technion
1 replies
22h16m

Cloudflare don't bill like this, part of why I moved off route 53.

withinboredom
0 replies
21h54m

Neither does Digital Ocean, for that matter.

beeeeerp
0 replies
5h23m

That’s not entirely true - aliases to AWS resources are free (their suggested “workaround”).

paulddraper
3 replies
20h12m

That's not analogous.

beeeeerp
2 replies
16h59m

It’s still a huge problem for people that have purchased domains. I bought one that apparently used to be a BT tracker, and gets on the order of several hundred NXDOMAIN requests per second.

I understand it's still hitting Route53 infrastructure, but I'm not using it, and it's not commonplace to charge for NXDOMAIN responses. Because of this, I'm unable to host at AWS (prohibitively expensive for my use case).

It's worth mentioning that DNS infrastructure for things like this is very cheap (I used to self-host the DNS infrastructure for this domain for ~$2.5/mo), so the upcharge is even worse than what AWS is charging for bandwidth. If they brought it in line with actual costs, I wouldn't have as much of a problem.
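Back-of-envelope on what that traffic costs at list price (assuming roughly $0.40 per million standard Route 53 queries, which I believe is the published first-tier rate; check current pricing before relying on this):

```python
QUERIES_PER_SECOND = 300    # "several hundred NXDOMAIN requests per second"
PRICE_PER_MILLION = 0.40    # assumed Route 53 standard-query rate, USD

# Roughly 777.6 million queries over a 30-day month.
queries_per_month = QUERIES_PER_SECOND * 60 * 60 * 24 * 30
monthly_cost = queries_per_month / 1_000_000 * PRICE_PER_MILLION
print(f"{queries_per_month:,} queries/month -> ${monthly_cost:.2f}/month")
```

That lands in the "few hundred dollars per month" range the comment above mentions, versus ~$2.5/mo self-hosted.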

paulddraper
1 replies
16h17m

A book could be written about AWS "overcharged" services

beeeeerp
0 replies
13h49m

Totally agree with this. In their defense (not that I like it), obviously the market is willing to pay what they charge. It’s unfortunate that the other big cloud providers haven’t driven prices down that much.

mike_d
0 replies
20h41m

You should never actually use Route53 for your domains. Delegate a subdomain like cloud.yourcompany.net to R53 and use that.

moi2388
2 replies
11h56m

There needs to be a law that says every user must set a limit on any service or subscription, and costs then cannot surpass it until the user raises the budget. At the same time, there should be real-time cost analysis, a breakdown per service, and predicted costs per day.
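The hard cap being proposed is straightforward to express (a toy sketch of the enforcement logic; the hard part in practice is deciding what "turn it off" means for in-flight workloads):

```python
class SpendCap:
    """Reject any charge that would push total spend past a user-set budget."""

    def __init__(self, budget: float):
        self.budget = budget
        self.spent = 0.0

    def try_charge(self, amount: float) -> bool:
        """Bill only if the cap allows it; otherwise the service must stop."""
        if self.spent + amount > self.budget:
            return False
        self.spent += amount
        return True

    def raise_budget(self, new_budget: float):
        """Only an explicit user action can unlock further spending."""
        self.budget = new_budget

cap = SpendCap(budget=10.0)
ok = cap.try_charge(9.0)       # True: within budget
blocked = cap.try_charge(2.0)  # False: would exceed the cap
cap.raise_budget(20.0)
ok2 = cap.try_charge(2.0)      # True: user raised the budget
```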

usr1106
1 replies
11h43m

A law in which country?

Well, GDPR showed that a rather global impact is possible.

If you offer an open service on the internet you need to be prepared that users and misusers will cause costs.

However, if you block it from public access, you as a customer are not offering a public service. It's the cloud provider offering a public service, so it seems like a basic legal principle that it's the cloud provider who pays for misuse (attempts to access something that is not public). But of course big corporations are not known for fair contracts respecting the legitimate interests of the customer before legal action is on the horizon. I wonder what made AWS wake up here.

moi2388
0 replies
2h52m

Agreed. Don't know what made them wake up, but I did file a complaint about their free tier dark patterns, and the Luxembourg EU GDPR office got involved after my country's GDPR office tried first. Apparently they're busy with some bigger investigation, so that investigation might've spooked them (not over my complaint alone, I don't think).

dangoodmanUT
0 replies
22h31m

we canceled them successfully ig

cratermoon
0 replies
22h32m

From the previous story, "S3 requests without a specified region default to us-east-1 and are redirected as needed. And the bucket’s owner pays extra for that redirected request."

So will Amazon continue to charge for the redirected 403?

ceejayoz
0 replies
23h25m

For buckets configured with website hosting, applicable request and other charges will still apply when S3 returns a custom error document or for custom redirects.

I was wondering about that one.

Joel_Mckay
0 replies
23h20m

Bezos loss-leader product-manager pushes hook deeper into worm.

I fail to see this as progress, YMMV =3