Someone has been attempting to DDoS us for weeks and we do nothing

ThePhysicist
20 replies
9h24m

4 TB per month isn't really a DDoS attack, no? 4 TB per hour might qualify as a DDoS, but 4 TB per month is just 1.5 MB/second. 6 million requests per month is just about 2 requests per second. I'd say the fact they run a monolith service isn't really relevant at this scale, especially as I assume Cloudflare handles most of the requests by caching them at the CDN level.

tbarbugli
7 replies
9h11m

I think it's around 1TB a day, but indeed still very small.

tutfbhuf
5 replies
9h3m

Yes, you can rent a few-dollar VPS from e.g. Hetzner (since Germany is mentioned in the blog post) and run a few wget commands in parallel in a loop against their 200MB setup file to easily reach 1TB a day.
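
A minimal sketch of that kind of loop (the URL is hypothetical and the numbers are illustrative):

    # 8 parallel loops, each re-fetching a ~200MB file forever.
    # 1TB/day is only ~12 MB/s sustained, well within a cheap VPS's uplink.
    for i in $(seq 1 8); do
        while true; do
            wget -q -O /dev/null https://example.com/downloads/Setup.exe
        done &
    done
    wait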

For a company, this should definitely not be something to worry about. However, if I were able to single out individual IPs that are attacking me, I would simply block them, report them (use the abuse form of the hoster of the attacking IP), and call it a day. This way, you can at least hope that the hoster will do something about it, either by kicking the attacker off its platform or, if it is some kind of reflection attack, informing the victim so they can close the security loophole on their server and remove themselves from the botnet. If the attacks originate from a vast number of different IPs from Russia and China, consider geoblocking.

tamrix
4 replies
8h40m

The worst thing reporting an IP can do is increase that IP's score on Scamalytics.

CGNAT is becoming common on home internet. You can share an IP with up to 128 other people.

tutfbhuf
2 replies
8h3m

On Hetzner, you receive an abuse email with the directive to respond appropriately if your root server or VPS is involved in some kind of abuse-related issue. In larger companies this happens quite frequently. I'm not sure what would happen if you ignored such an email.

justsomehnguy
0 replies
7h54m

Just reply that you are sorry and that you are fixing/have fixed the issue. Until the next report.

Pi9h
0 replies
7h32m

Hetzner will usually give you 24 hours to respond to abuse reports. Failure to do so will lead to your server IP being locked.

supriyo-biswas
0 replies
8h3m

Scamalytics has nothing to do with a report to the tech-c or abuse-c email address in the WHOIS information.

praseodym
0 replies
7h1m

That would cost you $200-$400 per day when hosting at Netlify or Vercel, which can quickly impact the bottom line of a startup or small business.

buro9
6 replies
9h1m

Given that the vast majority of the web is WordPress sites (or Drupal if you're a larger org) that are on $5-20 per month multi-tenant hosts, have not installed caches, and speak directly to the database... the vast majority of sites on the internet can be knocked offline by merely doing 4 requests per second.

That sounds crazy, right? But yet that's where we are.

Context: I used to manage the DDoS protection at Cloudflare, as well as the WAF, firewall, and some customer-facing security projects. We frequently saw web scraping take customers offline, or trivial and very low-volume HTTP request rates take customers offline. In the early days we considered anything a DoS when it threatened our infra, but the threshold for customers is barely higher than a few concurrent and active users.

The big numbers always make headlines, but it's the small numbers which most people feel.

rrr_oh_man
2 replies
7h59m

FYI: Most decent WordPress hosters these days have caching, SSL, and minification out of the box.

My favourite WP hosters:

- kinsta.com (for scaling & multi-sites)

- raidboxes.io (amazing customer support, usually within 15min, even on a Sunday)

buro9
1 replies
7h45m

Decent WordPress hosters have always had this, but the majority of WordPress sites were (and probably still are) just droplets, Linode, Hetzner, etc.

gustavorg
0 replies
6h48m

Sir, I have my pompous site on Linode and feel offended and hurt.

maxk42
0 replies
5h23m

The numbers given in the article are 800k requests in five days for a static file, which is 1.85 requests per second and no database access. It's not even a DDoS by your own description here.

denton-scratch
0 replies
8h15m

have not installed caches

Drupal, in particular, is notorious for having multiple layers of caching out of the box. Of course, you can always add some extra caches...

Roark66
0 replies
6h29m

When I was setting up one of these $5 per month services (a simple PHP web shop, with Shopware if I remember correctly), I tested it with 10 concurrent users (5 req/s or so on average) and the $5.5 per month instance handled it just fine.

Yes, the instance had Docker and was in an auto-scaling group to be rebuilt if anything failed. There were 3 containers running with strict mem/CPU limits: an Nginx reverse proxy (in 128MB of RAM), a MariaDB server with a minimum of ~300MB of RAM and up to 512MB if available, and the PHP/web host with 512MB of RAM reserved. MariaDB was tuned and Shopware was tweaked, but that's about it. Everything ran fine on a 2-core, 1GB RAM instance ("ran", not "runs", because a year later the shop closed for other reasons). So the moral of this story is: sometimes a $5 VPS or instance is the correct answer.
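
A rough compose-file sketch of a setup like the one described - images and service names are illustrative, and the memory limits mirror the ones mentioned above:

    services:
      nginx:
        image: nginx:alpine
        ports:
          - "80:80"
        mem_limit: 128m     # reverse proxy kept small
      db:
        image: mariadb:10.11
        mem_limit: 512m     # MariaDB capped at 512MB
        environment:
          MARIADB_ROOT_PASSWORD: example
      shop:
        image: php:8.2-apache
        mem_limit: 512m     # PHP/web host with 512MB reserved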

saagarjha
2 replies
9h23m

I guess that depends on how irregular the traffic is.

ThePhysicist
1 replies
9h21m

Sure, if every request triggers a very complex database transaction or computation, but if I understand correctly this is a simple file download endpoint that's probably cacheable.

n4r9
0 replies
9h11m

I think OP means that the 6 million requests might not be evenly spread. They might only occur during 5 minutes of each day, for example. I don't know enough to know whether that's feasible.

croes
0 replies
8h54m

At least 8TB. 4TB is UK only.

MartijnHols
0 replies
7h21m

A Denial of Service attack could be mere KBs per a hour. If they trigger a heavy API endpoint on the target service, that may be enough to take the service down. The Distributed part just means multiple machines are attacking at the same time.

No part of a DDOS requires the throughput to be gigantic, although the big ones are typically the ones you will find in the news.

One possible aim of this attack is to either burn through the bandwidth quotum of the source servers, or to use so much bandwidth that it becomes unaffordable. This could be done very cheaply with just a single or few attacking machines. Most datacenters and hosting providers have bandwidth limits or start charging after a certain amount, and too often the company being attacked only finds out when they receive a bill they can't afford.

NKosmatos
19 replies
9h42m

Nice one :-)

“… => Thus, we build a monolith service for each app, which is easy to deploy and maintain. No Docker, no Kubernetes, no dependencies, no runtime environment - just a binary file that can be deployed on any newly created VPS. …”

iamcalledrob
11 replies
9h29m

This is such a fantastic benefit of Golang: spin up a VPS, apply some sensible defaults, cross-compile, then run your binary.

Compare this to deploying Python, Node, or PHP... needless complexity.

If only running (and keeping running) a database server could be this straightforward!
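
In practice the whole loop is a couple of commands; a sketch (binary name and host are made up):

    # build a static linux binary from any dev machine
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o myapp .

    # ship it and run it (hypothetical host)
    scp myapp deploy@vps.example.com:/usr/local/bin/myapp
    ssh deploy@vps.example.com 'systemctl restart myapp'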

binarymax
4 replies
9h21m

Nowadays you can bundle a Node app as a single binary file. It’s an underused feature; maybe it will catch on.

enva2712
1 replies
8h49m

I saw that Deno did this, but it's cool to see Node picked it up too. I wish there was an option to run TurboFan at build time to generate the instructions rather than shipping the entire engine, but I guess that would require static deps and no eval, which can’t really be checked statically with certainty.

binarymax
0 replies
1h32m

The engine is actually pretty small. Something like 50-100MB if memory serves (when I was using pkg)

dgellow
1 replies
7h44m

Could you share how that can be done? I spent some time this year trying to pack a Node tool into a single fat binary for a specific use case where we wanted a history of versioned executables - i.e. a build job that needs to run specific versions of the packed tool in a specific order determined by external factors.

I tried Vercel pkg, Vercel ncc, nexe, and a few other tools I can’t remember right now. They all had issues with Node v20, some dependencies, or seemed to no longer be maintained. I ended up relying on esbuild as a compromise to get a fat script containing all sources and dependencies; tarballed with some static files we rely upon, we at least get versioned, reproducible runs (modulo the Node env). Still not perfect; a single binary would be preferable.

binarymax
0 replies
4h34m

I’ve used pkg with success for some small apps - curious to know why it didn’t work for you.

Now you can use this native feature (not totally stable yet though) which I’ve been meaning to try https://nodejs.org/api/single-executable-applications.html
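
Per those docs, the flow on Linux is roughly the following sketch (the fuse constant below is copied from the docs at one point in time; check the linked page, since the feature is still experimental):

    # sea-config.json: { "main": "app.js", "output": "sea-prep.blob" }
    node --experimental-sea-config sea-config.json   # generate the blob
    cp $(command -v node) app                        # start from a copy of the node binary
    npx postject app NODE_SEA_BLOB sea-prep.blob \
        --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2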

zilti
1 replies
8h26m

Just pack up your whatever-else as an AppImage. Job done.

drowsspa
0 replies
7h48m

How does it deal with the undocumented system dependencies Python libraries often have?

sedatk
0 replies
9h7m

You can build native and self-contained binaries in C# too.
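
For the curious, a self-contained single file is one publish command away; a sketch (the runtime identifier is an example):

    # produces one self-contained executable for linux-x64
    dotnet publish -c Release -r linux-x64 --self-contained true \
        /p:PublishSingleFile=true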

rnewme
0 replies
6h32m

You can do the same with Python though, from the Nuitka compiler to LinkedIn's shiv or Twitter's pex (which follow PEP 441).

quietbritishjim
0 replies
9h12m

For Python, you could make a proper deployment binary using Nuitka [1] (in standalone mode – avoid onefile mode for this). I'm not pretending it's as easy as building a Go executable: you may have to do some manual hacking for more unusual packages, and I don't think you can cross-compile. I think a key element you're getting at is that Go executables have very few dependencies on OS packages, but with Python you only need the packages required for manylinux [2], which is not too onerous (although good luck finding that list if someone doesn't link it for you in a HN comment...).

[1] https://nuitka.net/

[2] https://peps.python.org/pep-0599/#the-manylinux2014-policy
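
A sketch of the standalone invocation (entry-point filename assumed):

    # emits app.dist/ containing the binary plus everything it imports
    python -m nuitka --standalone app.py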

neonsunset
0 replies
6h33m

How often is the deployment model “copy a single binary to a VPS via SSH and run it” even used nowadays?

And even then, you’d be much better served by a more expressive and less painful-to-use language like C#. Especially if the use is personal.

tuwtuwtuwtuw
6 replies
9h33m

I don't use TablePlus myself. Are they talking about their marketing website? If so, then obviously they wouldn't need to use Kubernetes. If they're talking about their application, then I wonder how there can be no dependencies - doesn't it store data, log things, etc.?

tux3
3 replies
9h22m

You can store data by connecting to a database, and you can store logs by either sending them to the system journal and having a daemon collect them, or sending them to whatever cloud you like using a logging library.

It's fine, really. Those database and logging services you can put in Docker if you like, but if you put them anywhere else it works just the same. A Postgres in k8s or a Postgres on a dedicated server is the same as far as the client is concerned.
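
For the journal route, the wiring is essentially free if you run the binary under systemd; a minimal unit sketch (names are illustrative):

    [Unit]
    Description=myapp monolith
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure
    User=myapp
    # stdout/stderr land in the journal by default; read with: journalctl -u myapp

    [Install]
    WantedBy=multi-user.target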

tuwtuwtuwtuw
2 replies
8h22m

But isn't that software you download and run in your own environment?

I'm mostly not following what is under a DDoS attack. Is it their web page mostly consisting of marketing material with static pages?

gnuvince
1 replies
7h15m

I'm mostly not following what is under a DDoS attack. Is it their web page mostly consisting of marketing material with static pages?

Yes.

tuwtuwtuwtuw
0 replies
1h48m

Okay, well then this was a waste of time.

ortichic
0 replies
9h17m

they talk about storing logs and separating databases, so good question

TheRoque
0 replies
9h10m

Yep, I'm wondering the same thing. It seems easy to brag about using only one binary if you don't need another service, e.g. a database.

headmelted
18 replies
9h16m

It’s great that this isn’t hurting them, but it leaves out a lot, which makes me a bit nervous about this being taken as advice.

They’re advocating deploying a binary as preferable to using docker, fair enough, but what about the host running the binary? One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

On that note, the monolith talked about here can be hosted on a single VPS; again, that’s great (and cheap!), but if it crashes or the hardware fails for any reason, that’s potentially substantial downtime.

The other worry I’d have is that tying everything into the monolith means losing any defence in depth in the application stack - if someone does breach your app through the frontend then they’ll be able to get right through to the backend data-store. This is one of the main reasons people put their data store behind an internal web service (so that you can security group it off in a private network away from the front-end to limit the attack surface to actions they would only have been able to perform through a web browser anyway).

llm_trw
8 replies
8h59m

They’re advocating deploying a binary as preferable to using docker, fair enough, but what about the host running the binary? One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

There is no universe in which _increasing your attack surface_ increases your security.

headmelted
4 replies
8h48m

I agree in principle but not in practice here.

If you’re using a typical Docker host, say CoreOS, following a standard production setup, then running your app as a container on top of that (using an already-hardened container that’s been audited), that whole stack has gone through a lot more review than your own custom-configured VPS. It also has several layers between the application and the host that would confine the application.

Docker would increase the attack surface, but a self-configured VPS would likely open a whole lot more windows and backdoors just by not being audited/reviewed.

zilti
3 replies
8h27m

You'd have to be utterly incompetent for a self-configured VPS to have more attack surface.

I have a FreeBSD server with three open ports: SSH with cert-login only, and http/https going to nginx. No extra ports or pages for potentially vulnerable config tools.
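
That kind of lockdown is a handful of sshd_config lines; a sketch (the same directives work on FreeBSD and Linux):

    # /etc/ssh/sshd_config - keys/certs only, no passwords, no root login
    PermitRootLogin no
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes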

oefrha
0 replies
7h47m

Given the huge number of wide-open production Mongo/ES/etc. instances dumped over the years, I'd wager that having heard of ufw puts you in the top 50% of people deploying shit.

llm_trw
0 replies
6h18m

This whole thread is incomprehensible to me.

I guess no one knows how to harden an OS anymore so we just put everything in a container someone else made and hope for the best.

headmelted
0 replies
6h9m

I don’t think we need to be calling people incompetent over a disagreement.

Are you suggesting that not opening the ports to any other services means they’re no longer a vulnerability concern?

That would be.. concerning.

rezonant
1 replies
8h26m

Considering the vast majority of exploits are at the application level (SQLi, XSS, etc), putting barriers between your various applications is a good thing to do. Sure, you could run 10 apps on 10+ VMs, but it's not cost efficient, and then you just have more servers to manage. If the choice is between run 10 "bare metal" apps on 1 VM or run 10 containers on 1 VM, I'll pick containers every time.

At that point, why are we making a distinction when we do run 1 app on one VM? Sure, containers have some overhead, but not enough for it to be a major concern for most apps, especially if you need more than 1 VM for the app anyway (horizontal scaling). The major attack vector added by containers is the possibility of container breakout, which is very real. But if you run that 1 app outside the container on that host, they don't have to break out of the container when they get RCE.

supriyo-biswas
0 replies
8h1m

The VM/container distinction is less relevant to this discussion than you might think; both Amazon ECS and fly.io run customer workloads in VMs (“microVMs” in their lingo).

spockz
0 replies
8h40m

On the other hand. If by using containers it has become more feasible for your employees to use something like AppArmor, the end result may be more secure than the situation where the binary just runs on the system without any protection.

enva2712
3 replies
8h58m

Ahh yes, security through obscurity - if we make it so complex we can’t understand it then no one else can either, right?

The important thing is making walls indestructible, not making more walls. Interfaces decrease performance and increase complexity

peanut-walrus
1 replies
8h52m

Literally the entire guiding principle for security architecture for the past decade or even more has been that "there is no such thing as an indestructible wall".

enva2712
0 replies
8h42m

I agree, perfection isn’t a realistic expectation. I also think effort spent building better defenses leads to fewer exploits over time than adding more of the same defenses. The marginal cost of bypassing a given defense is far lower than the initial cost to bypass a new defense

headmelted
0 replies
8h45m

Literally no-one said that.

(Some of) the reasons why you would do this are explained (I thought clearly) above. None of this is security through obscurity.

turboponyy
1 replies
6h50m

They’re advocating deploying a binary as preferable to using docker, fair enough, but what about the host running the binary? One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

There are tools that make "bare metal" configuration reproducible (to varying degrees), e.g. NixOS, Ansible, building Amazon AMI images.

headmelted
0 replies
6h7m

All of which would be better than what the post is advocating and I totally agree with this.

7bit
1 replies
8h47m

One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

This is false. Or do you think your host is secured by installing Docker? And when you scale, how do you get additional hosts configured?

The truth is, when you use Docker you need to ensure not only that your containers are secure, but also your host (the system running your containers). And when you scale up and need to deploy additional hosts, they need to be just as secure.

And if you're using infrastructure as code and configuration as code, it does not matter whether you are deploying a binary or Docker after configuring your system.

pheatherlite
0 replies
7h28m

Complexity is the criminal in any scenario. However, if we simply focus on a vanilla installation of Docker, the namespace isolation alone can be viewed as a step up from running directly on the OS. Of course, complexity means a vulnerability in the Docker stack exposes you to additional risk, whereas a systemd service running under a service account is likely to contain any 0-day better.

diarrhea
0 replies
4h38m

I never understood how one “breaches an app through the frontend”. SQLi messes with your data store, natively (no RCE). XSS messes with other users, laterally. But how does one reach from the frontend all the way through, liberally? Are people running JavaScript interpreters with shell access inside of their Go API services and call eval on user input? It’s just so far fetched, on a technical level.

lopkeny12ko
15 replies
7h9m

This reads as extremely self-congratulatory. A "billion requests per month" is only a few hundred requests per second, which is both trivial and not a "DDOS." Also, their site is behind a CDN (Cloudflare), so I'm extra confused on how they think they did something notable from a performance perspective here.

For example, there's no reasonable world where that 200 MB blob is not cached and served over CDN. I can't imagine someone would be so proud that their application server isn't reading 200 MB from disk and copying those bytes to the client on every download; it's just so obviously poor design.

mondomondo
8 replies
6h42m

But typically this task would be assumed by a FAANG company to require over a thousand developers? How can you claim it’s self-congratulatory when huge companies regularly can’t do this?

boredpudding
6 replies
6h38m

Because it's the company with a thousand developers (Cloudflare) who's dealing with this.

conductr
4 replies
6h27m

I think the self congratulatory tone is just them saying if you design things simply you can do a lot (load) with a lot less (people/complexity) and still have an app that’s resilient (ignore minor ddos issues).

It’s encouraging or reminding people that this style of architecture, which was once prevalent, is still an option and is still rather legit.

Or, that’s how I read it; however, I'm also biased, as I prefer this method of development too. I think what’s missing in the article is who they are and who the audience is. As in, some acknowledgement that most things are never going to see mass-scale usage and don’t need to be developed to the specs of a FAANG.

maxk42
3 replies
5h27m

Except that it wasn't their architecture which allowed them to weather this storm: it's Cloudflare's. And the storm is a whopping 1.85 requests per second, all coming from Europe which means this isn't even a DDoS to begin with.

aequitas
1 replies
5h6m

But they didn't activate the "Under Attack" mode in Cloudflare, so surely they must be doing it all by themselves.

sethammons
0 replies
3h3m

sarcasm is hard to read in text; missing /s

conductr
0 replies
1h25m

Probably mincing words, but I guess I consider Cloudflare part of the architecture and not part of their infrastructure, and it seems like you’re conflating the two?

I do get that the volume of data and requests they are handling hardly constitutes the claim of DDoS.

rvnx
0 replies
6h28m

Exactly, the same way that "the cloud" is just someone else's computer and responsibility.

Aurornis
0 replies
4h56m

But typically this take would be assumed by a FAANG company to require over a thousand developers?

Using a CDN has been common practice for many years at companies big and small. Many big CDNs even have free tiers.

Like how can claim it’s self congratulatory when huge companies regularly can’t do this?

You don’t need a FAANG company with thousands of developers to use a CDN. You can sign up for Cloudflare today and have your files distributed by CDN with some simple changes. You could have it done by end of day.

helsinkiandrew
3 replies
6h49m

If the 200MB files they are referring to are the TablePlus client-side app downloads (183MB for Windows) at https://tableplus.com/download, the files are indeed cached by CF:

    # curl -v https://files.tableplus.com/windows/5.9.2/TablePlusSetup.exe > /dev/null
    ...
    < cache-control: max-age=691200
    < cf-cache-status: HIT
    < age: 2980

hartator
1 replies
4h58m

So, they are serving virtually zero bits from their servers.

helsinkiandrew
0 replies
4h55m

To be fair that was only one of the files they mentioned - the rest could be going to their server on every hit.

thinkingemote
0 replies
1h17m

I thought Cloudflare would only cache web files like HTML and images, and not arbitrary files? This caused problems for some of their users, as seen previously on HN.

Or is that only with the free tier?

tgv
1 replies
6h45m

It's rarely a few hundred per second. Request density is not uniform over time. It can regularly reach 20x the average.

eru
0 replies
6h27m

20x of a few hundred a second is still only a few thousand a second.

vasco
11 replies
8h42m

Bragging about this has to rank up there as the worst idea in the world. If your whole argument is taunting would-be attackers with your wallet - saying you're more overprovisioned than the traffic they can send - you're just threatening them with a good time. At another time in my life I'd take this post as an invitation, even, especially because the numbers shared are super low.

I've had 3 situations where my place of work was under DoS attack. In all 3 cases I managed to identify an email address and reached out asking why they were doing it, and if they wanted to talk about our backend. In 1 case, the "attack" was a broken script by someone learning how to program; the other two were real attacks, and one of them stopped immediately once they knew we knew who they were, while the other actually wanted to chat and we emailed back and forth a bit.

99.99% of the time a DoS is someone who is bored. Talking to them tends to work.

Edit: there's some questions about the situations so I'll expand:

- The first was not a real attack, and they were making the network calls through their authenticated API key. This was the early days of a YC startup, so of course there was no rate limiting in place. In this case I exchanged 2 or 3 emails, and after they sent me their Python script I sent them back a patch, and they finished their scraping without bringing us down. Never heard from them again.

- The second was at a different company. We were being targeted to distribute email spam, because at the time we'd allow people to invite their colleagues as members of their account, and some people associated with casinos based out of Macau automated a way to spam for their casinos by putting the URL in the name of the account, which went out in the email notification. I contacted one of the admin emails of one of the casinos I found, and they stopped and disappeared. We also locked all their accounts and prevented further logins, and emailed them to reach out to support if they thought it was a mistake.

- The third one was more difficult: they weren't using any account, so all we had was network data. At some point on the second day, though, they changed how they were sending some of the calls and, by mistake or not, leaked their Telegram username. I installed Telegram and talked to them; they trolled me a little bit, but stopped very quickly and didn't start again. This one was very amusing to people in my company, because I had told them this approach would work, but a few of the big wigs didn't want me to do it (they didn't have any reason other than "obviously it won't work to just talk"). I just did it anyway.

To be clear, you shouldn't reach out with threats or with bragging about how good you are that you found them. My approach is one of genuine curiosity, and my literal first message to the Telegram person was:

"Hello, how is it going? I work at <companyname> and we're seeing a load of requests originating from your user here on telegram. Does this make any sense to you or do you think I might have the wrong person?"

That's it!

pdimitar
3 replies
8h34m

How did you deal with these three customers, if that's not a secret?

RE: your tidbit on "don't threaten bad actors with a good time", I disagree. If you can brag and demonstrate that their fleet of 10k machines can't touch a single service of yours and can't even make it pause, then I'd say that sends a pretty strong message to other potential bad actors.

Those bad actors have to be discouraged. I would get a very nice ego trip if I knew I could show the finger to a state actor, metaphorically speaking. But again, they should get the message that they are not as dangerous as they think they are.

Though I agree with other commenters that this traffic didn't seem as scary as the last 2-3 recorded attacks going through Cloudflare.

sspiff
1 replies
8h18m

You seem to fundamentally misunderstand the mindset of these attackers. This can just be about a challenge or a show of capabilities on the side of the attacker.

Telling them how big and strong you are will just trigger some of them to show you just how big and strong their botnet is. And there's always someone with a big enough botnet to bring you down.

pdimitar
0 replies
8h14m

Telling them how big and strong you are will just trigger some of them to show you just how big and strong their botnet is.

And? Let them keep feeding more data to Cloudflare so they deflect them even easier next time around.

Maybe I do misunderstand their motivation. But you seem to think they can knock down anyone they want. To me that's observably false; even the record-setting DDoS attack through (or on?) Cloudflare, something like 1.6 Tb/s, didn't seem to do much.

Let them flex. They provide us with valuable information for free. :)

lukan
0 replies
8h19m

"If you can brag and demonstrate that their fleet of 10k machines can't touch a single service of yours and can't even make it pause then I'd say that sends pretty strong message to other potential bad actors"

It is also pretty good PR.

YPPH
1 replies
8h35m

I generally agree, but:

99.99% of the time a DoS is someone who is bored. Talking to them tends to work.

This is an overstatement. A great number are extortionists or state-sponsored attackers. They're not interested in chit-chat, except to negotiate a price to stop the flood. Particularly not the ones commanding resources sufficient to make a dent in the operations of a substantial commercial entity.

rezonant
0 replies
8h32m

Blasting a site but sending no communication offering to make it stop doesn't sound like an extortionist.

JoshTriplett
1 replies
8h35m

the other actually wanted to chat and we emailed back and forth a bit

That sounds like a fascinating story. What did they want to chat about?

vasco
0 replies
8h12m

Added some more color to the original message in an edit.

simonmysun
0 replies
6h26m

In China 10 years ago, I heard stories that when someone got DDoSed and ransomed, they could pay to stop the attack (only a relatively small amount of money). When they got DDoSed and ransomed again by another attacker, they simply told the new attacker that they had already paid, and the attack stopped.

Seems that they had some kind of alliance.

pryce
0 replies
8h32m

This kind of proposal is one I hadn't ever really considered. I would be fascinated to know if others here have had similar experiences when seeking contact with malicious attackers.

creatonez
0 replies
8h31m

Feeding the trolls might be the worst idea in the world but it's certainly the best way to harden your infrastructure through extreme real-world testing :)

samyar
10 replies
9h14m

This is the first time I've heard the word "monolith".

What is it, and how can one learn about it?

doctor_eval
6 replies
9h10m

It just means that there is one big binary that does everything, instead of a bunch of microservices communicating over a fabric of some kind.

As someone who thinks microservices actually simplify a lot of things, especially in complex domains, the idea that a monolith is a choice makes me cringe a bit.

I mean they start out simple, but ...

kryptiskt
3 replies
8h38m

The idea of unnecessarily replacing nanosecond-scale function calls with network communication that is five orders of magnitude slower makes me shiver. Yeah, you can make a microservice that does one thing with a well-defined API, and it's nice and clean. But you might as well make a module that does one thing with a well-defined API, and it will be so much faster because it's right there in memory with you.

vintermann
0 replies
4h48m

Speed isn't the main argument as I see it, rather complexity is.

How much of each of your microservices is boilerplate code? How much is outright copy-pasted? Microservices can be an invitation to write lots of lines of code, so that management sees that you're "efficient".

objektif
0 replies
6h16m

I would love to hear a valid argument against it. Why microservices as opposed to, say, modules or libraries?

doctor_eval
0 replies
6h24m

If you’re replacing nanosecond function calls with microservices, you’re doing it wrong. It’s a specious argument.

In the domains in which I’ve worked, most services receive calls over the network, and go on to make database calls that also go over the network. So whether you do the routing inside or outside a monolith makes almost no difference to latency. And what’s more, with a front end like GraphQL, you can parallelise the work which reduces latency further.

Microservices have a lot of benefits relative to monoliths, but they aren’t a panacea any more than monoliths are. They’re a useful architecture for certain workloads and a poor fit for certain others.

But in my experience it’s quite a lot more difficult to maintain discipline over the long term with monolithic architectures, and that’s why I tend to prefer microservices attached to messaging architectures like NATS. YMMV, and that’s fine.

vintermann
0 replies
4h51m

It's a lot about trusting your infrastructure, your development process, your overall plan with respect to state and communication etc. If you need to make it more complicated later, can you? Probably. If you see you could have done something simpler, can you de-complicate it? Sounds harder to me in general.

robwg
0 replies
8h42m

Them be fighting words.

Tell some other orgs I've worked at that microservices are simple and they would laugh.

But yes it depends on the complexity of your domain/org.

pmontra
0 replies
9h4m

It's a well-established term. For example, this article by James Lewis and Martin Fowler from 10 years ago: https://martinfowler.com/articles/microservices.html

"To start explaining the microservice style it's useful to compare it to the monolithic style: a monolithic application built as a single unit. Enterprise Applications are often built in three main parts: a client-side user interface (consisting of HTML pages and javascript running in a browser on the user's machine) a database (consisting of many tables inserted into a common, and usually relational, database management system), and a server-side application. The server-side application will handle HTTP requests, execute domain logic, retrieve and update data from the database, and select and populate HTML views to be sent to the browser. This server-side application is a monolith - a single logical executable[2]. Any changes to the system involve building and deploying a new version of the server-side application."

keybored
0 replies
8h20m

It’s a kind of word which only makes sense as a negation to its antonym. Because if the antonym didn’t exist then it would just fade into the background as “normal”.

mastermedo
10 replies
9h32m

I might be out of touch with reality, but billions of requests per month sounds like peanuts. Is that considered a big ddos attack?

heythere22
5 replies
9h27m

You're correct. 1 billion requests per month is about 380 requests per second on average, which is not that high.

logtempo
3 replies
9h12m

it's 2.3/second not 380

avoid3d
2 replies
9h3m

How are you arriving at that number?

60 seconds per minute × 60 minutes per hour × 24 hours per day × 30 days per month ≈ 2.59 million seconds per month.

One billion requests over 2.59 million seconds is ~386 requests per second.

logtempo
0 replies
5h32m

oh, it's billions, my bad ahah

grodriguez100
0 replies
6h28m

Or 386K requests per second, depending on where these billions are coming from :-)

fragmede
0 replies
9h11m

assuming they're smeared evenly across the whole month, that is. E.g. if the majority of those requests kick off a job at midnight on the first of the month, it's a bit more to deal with.

drewdevault
1 replies
9h25m

It depends on a lot of factors. Generally people provision infrastructure according to its expected usage, and to overprovision is wasteful.

mastermedo
0 replies
9h7m

I see, that makes sense. And automatic provisioning can be costly in cases like DDoSing.

rezonant
0 replies
8h37m

Something like 300-400 API requests per second is not a heavy load for any reasonably designed API. Something like 300-400 static file requests per second is less than peanuts.

anonzzzies
0 replies
9h29m

Depends on who is paying for that. Self-hosted, that's not a problem, but 'serverless' is usually costly at those levels, especially for worthless traffic.

kopos
8 replies
9h18m

A bit ingenious to say we do nothing when you have CloudFlare in front of your servers. Cloudflare by itself can automatically detect and handle DDoS without explicitly activating the Under Attack mode.

Also Java jar files give you the same benefit.

vintermann
2 replies
9h16m

Also Java jar files give you the same benefit.

You have to explain that one a bit more.

RedShift1
1 replies
8h38m

You can build a jar that includes all dependencies (like statically compiling C code); then you can just run `java -jar myprogram.jar` and it will work as long as the Java runtime is the same major version or newer than the version you compiled for.
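
As a sketch, the classic Gradle recipe for such a "fat jar" looks roughly like this (the main class name is illustrative):

    // build.gradle.kts - bundle all runtime dependencies into one jar
    tasks.jar {
        manifest {
            attributes["Main-Class"] = "com.example.Main"
        }
        duplicatesStrategy = DuplicatesStrategy.EXCLUDE
        // unpack each dependency jar into the output jar
        from(configurations.runtimeClasspath.get().map { if (it.isDirectory) it else zipTree(it) })
    }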

diarrhea
0 replies
4h29m

That’s different from the runtime-free binaries produced by Rust and Go (the binaries ship with their own tiny runtime), though. Those are truly dependency-free, requiring only that you can execute ELF files.

cwillu
2 replies
9h10m

Ingenious doesn't mean what I think you think it means.

worddepress
0 replies
8h54m

It is ingenious to turn a mild attack into a 100+ point HN submission!

sethammons
0 replies
7h41m

I think they meant disingenuous

rezonant
0 replies
8h35m

Also, those requests for the 200MB setup aren't even hitting your servers unless you have disabled caching for some reason - not that it'd be that hard to serve it directly.

n4r9
0 replies
9h10m

bit ingenious

I think the word you're after is "disingenuous"

PreInternet01
8 replies
8h54m

I'd hardly call that a DDoS attack: from the description given, the extra 8TB-or-so of monthly traffic seems to fall under "annoyingly pointless abuse of services"...

As long as such abuse doesn't cause monetary or resource exhaustion concerns, it's quite OK to ignore it, but stories like "whelp, turns out that 80% of the capacity of our auto-scaling fleet is not doing anything useful" are depressingly common enough to at least keep an eye on things.

My annoyance with this kind of abuse revolves mostly around logging: a majority of logs just show the same set of hosts displaying the same pointless behavior over and over again. Again, not a huge issue if your log storage is cheap and plentiful (as it should be), but having some kind of way to automatically classify certain traffic as abusive and suppress routine handling of it is definitely a good idea.

It's also a lot harder than it sounds! I can't count the number of times I've added classification logic to my inbound SMTP server that should pick up on outright, silly abuse (of which there is a lot when dealing with email), only to have it triggered by some borderline-valid scenario as well.

Spending way too much time on going down successive rabbit holes is a great way not to get any real work done -- a great reason to outsource, or, if that's too much work as well or just too expensive, indeed just ignore the abuse, annoying though it is...

Borg3
4 replies
8h47m

Yes!! Great idea.. keep ignoring them, so they feel more encouraged to do more fishy things. That attitude made today's internet pretty much a swamp.

PreInternet01
3 replies
8h41m

The TL;DR of my comment is "I personally enjoy implementing automated solutions to relatively-low-volume abuse, but as long as it doesn't cause you any capacity concerns, I fully understand ignoring it, since it's hard"

Using that as a reason to assign me responsibility for the state of the internet seems... slight hyperbole?

Borg3
2 replies
8h14m

It wasn't directed at you as a person, but at the idea. Not sure if you've ever filed an abuse report, but they are mostly ignored. That's the problem. Everyone just waves their hand: it doesn't cause capacity issues, we can ignore it. Sure, until it's too late.

Maybe I am overly paranoid, but that old Russian maxim seems reasonable: fight when they come for a cent, because when they come to take a dollar it will be too late.

PreInternet01
1 replies
7h41m

So, funny story, a major reason why abuse reporting became pointless (unless done at the right level, i.e. when there is a direct and significant business relationship between the parties involved) is... abuse of the abuse reporting process!

Sometime around the dawn of this millennium, for example, 'consumer firewalls' at just about every OSI layer became a thing, and a lot of these had the great feature where they would automatically email WHOIS contacts for domains and IP blocks (plus all of their upstreams, for good measure) every time something bad happened, like receiving a single UDP packet on port 139.

Stuff like that, predictably, put a bit of a dent in the availability of useful technical contact information, and as much as I would like to go back to the "I have the beeper number of the guy who runs the national backbone" Internet, I'm afraid that Cloudflare is the best we can do these days, sorry.

Back to the topic at hand: "fight" on the 2024 Internet means refusing service to abusive parties as much as possible. That responsibility is best outsourced (see 'Cloudflare' above...), and a hard undertaking if you want to do it yourself without causing collateral damage (which, yes, Cloudflare also does, but at least you get someone to point at!).

Expecting to somehow get in touch (or worse, 'get even') with the myriad of bulletproof hosters (who simply don't care), admins of bug-ridden/misconfigured systems (who often don't even understand the issue) and assorted detritus is unproductive. And, as with any "the ideal amount of fraud is nonzero" discussion, that can be a hard pill to swallow, but a necessary one nonetheless.

Borg3
0 replies
15m

Huh, interesting story. Can you point to some sources about that? Which FW software vendors did it? I never heard about such a dumb feature. Really. If I have a rule that DROPs traffic, I do NOT care anymore what's really going on (unless it's a DoS).

smarx007
2 replies
7h38m

As long as such abuse doesn't cause monetary [...] concerns

8TB egress on AWS is $595 (taking into account 1TB free egress/mo), while 8TB egress on Hetzner starts at less than $10/mo. With DigitalOcean you'd pay $30 overage for 3TB on top of 5TB included in the s-4vcpu-8gb-intel, for example. 3TB overage is $15 with Linode. I think the article has a point.

PreInternet01
1 replies
7h31m

The article (the point of which is: 'just ignore this', which is sort-of the opposite of the conclusion you seem to have gotten to) specifically mentions that their egress is free via Cloudflare.

But, sure, if you have public-facing services on AWS that have the ability to send large amounts of data on demand, absolutely make sure that you limit access to those! (E.g. using a unique download token that is only available from a separate rate-limited and valid-source-checking service).

smarx007
0 replies
7h4m

What I was trying to say is that the article describes an architecture that takes cloud billing abuse attacks into account (they point out specifically that R2 is preferred to S3 due to egress cost structure) and this design is what partially allows them to ignore the light attack.

Most of the cloud architecture posts on HN either focus on how k8s/%your favourite new tool% is good for scale or detrimental to keeping complexity under control. And I think it's valuable for startups to consider cloud billing abuse attacks in addition to horizontal scaling concerns and complexity, which is what I referred to when I said the article has a point. As you wrote, rate limiting and extra checks could get the job done in a scalable deployment, so there is more than one way to keep cloud bill from an attack.

vintermann
4 replies
9h17m

we’ve simplified the deployment process as much as possible. We don’t use Docker, Kubernetes, or any containers, or need to setup the enviroment.

This sounds like a dream, both in the sense that it's wonderful and in that I'm not quite sure I believe it.

zilti
3 replies
8h19m

It is very easy. Why do so many people torture themselves with complex setups? Masochism?

ahoka
1 replies
5h55m

Some companies run more complex things than download buttons.

zilti
0 replies
5h10m

At least they like to think they are doing that.

cess11
0 replies
7h37m

When it's time for major shareholders and investors to 'exit', they don't want to market 'we did a simple setup'; they want to be able to communicate twenty-five buzzwords incomprehensible to everyone directly involved.

sameoldtune
3 replies
7h34m

Pet peeve of mine. “Billion requests per month” is about 385 rps, which can likely be handled by a single well-configured server. Certainly fewer than 10 servers. A single rogue bash script could cause that much traffic.

injuly
0 replies
6h36m

Assuming those requests are evenly distributed over time, yes. But in the event of an attack you would see a sudden surge in requests followed by a flatline, and still end up at 1B/month over 30 days.

groestl
0 replies
7h31m

can likely be handled by a single well-configured server.

A single core, actually, after the JVM JIT kicks in.

cjk2
0 replies
7h22m

Thanks to microservices, we need 45 Kubernetes nodes to handle our 1000 requests a second!

block_dagger
3 replies
9h8m

Reminds me of the quote from Nietzsche’s Genealogy of Morals: I’m strong enough to allow that.

keybored
1 replies
8h23m

I can learn to resist

anything but a flogged mare

cess11
0 replies
7h38m

Syphilis is one hell of a drug.

082349872349872
0 replies
9h0m

The latin equivalent: aquila non captat muscas ("eagles don't hunt flies")

Anyone have the cuneiform expression for 80/20?

pknerd
2 replies
8h39m

Off topic, but you guys have done solid SEO. Query anything related to SQL syntax and TablePlus will be in front of you.

rs_rs_rs_rs_rs
1 replies
8h37m

Using TablePlus and I would wager it's not the SEO but the quality of the tool.

pknerd
0 replies
7h54m

What does the quality of the tool have to do with it when I'm purely searching for a certain MySQL/SQL syntax? TablePlus has done an awesome job of writing brief articles about different SQL syntax and its usage in TablePlus.

oefrha
2 replies
8h34m

This is just a static marketing site for a desktop app. They don’t even have a discussion forum — feedback is handled by GitHub issues. Bragging about how simple their deployment is for a static marketing site and how it’s able to handle a static file being downloaded millions of times a day is super weird. And Cloudflare is doing all the mitigation work here (if that’s even needed for such a puny amount of traffic), not them.

If I were to be hit by such an "attack" myself I probably wouldn't even notice until Cloudflare sends me that monthly "X TB of data transferred, something close to 100% bandwidth saved" email.

I like the app btw, can recommend.

naiv
0 replies
6h58m

I like the app as well, but to me it also sounds like 'ChatGPT, create an unusual marketing post'. It all doesn't make sense, even less for people who have experience with real DDoS attacks.

mamcx
0 replies
54m

Bragging about how simple their deployment is

On the contrary, I wish more people did this: the more people who know that their overly complex infra is sub-optimal, the better!

tromp
1 replies
8h47m

our setup file is approximately 200MB

we keep things as minimal as possible

Wonder what's in that file that makes it need to be that large...

speedgoose
0 replies
6h19m

TablePlus supports quite a few databases. It adds up.

sethammons
1 replies
7h56m

That's not a noteworthy "attack"; that could be a single runaway bash script on someone's machine. 50MM requests per month "from the UK" averages out to under 20 requests per second. I would expect a single Go server to handle 250 times that request volume before optimizing much.

Their advice isn't bad per se, but their numbers are not a testament to it. I expect for my Go HTTP API services to handle 5k requests per second on a small to medium VPS when there is some DB activity and some JSON formatting without doing any optimizations. This is based on deploying dozens of similar services while working at a place that got multiple billions of requests per day, spiking to over 500k rps.

jameshart
0 replies
4h48m

If you’re getting that kind of traffic hammering your API with repeated requests, but it’s all from non-sketchy locations, don’t think ‘DDoS’, think ‘did we accidentally put an infinite retry loop in our client code?’

pheatherlite
1 replies
7h37m

Why do they need an app server at all? The website, at my initial glance, seems to be a brochure for the desktop product. Surely static pages and static assets would be even more resilient against a DDoS, since it's just a bog-standard webserver streaming out the static resources. Mount a memory-based fs and conventional disk latency concerns are mitigated, too.

eknkc
0 replies
7h30m

I think they have a licensing server which handles device authorization and auto updates.

dugmartin
1 replies
8h12m

It feels like they didn't learn the root lesson - move your 200MB setup file to a subdomain. You shouldn't host large assets like this on the same domain as your marketing/app site even if there is a CDN fronting it because an attacker can simply add a random query string to bust through the CDN cache and cause the cache miss to hit your box. The subdomain should be hosted on a different box and fronted possibly with a different CDN provider so that any large scale attack doesn't affect your marketing/app site (either due to your CDN provider or upstream network provider temporarily black holing you).

viraptor
0 replies
8h10m

an attacker can simply add a random query string to bust through the CDN cache

You can configure the cache so that they can't do it: https://developers.cloudflare.com/cache/troubleshooting/cach...

Without knowing their specific configuration, we don't have enough info to complain about that.

Also the separate domain doesn't really change much with CF in front. It could be nice for a few reasons, but it's not really bad. If you have a reasonable CDN already, there's really not much point getting a different one.
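
The same idea works at the origin, too; a rough nginx sketch (upstream name hypothetical):

    # cache large downloads by path only, so ?random=123 can't bust the cache
    proxy_cache_path /var/cache/nginx keys_zone=downloads:10m max_size=10g;

    location /downloads/ {
        proxy_cache downloads;
        proxy_cache_key $scheme$host$uri;   # deliberately ignores the query string
        proxy_cache_valid 200 1d;
        proxy_pass http://origin_backend;
    }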

d_burfoot
1 replies
5h5m

This content was very useful to me, as I am running a small service for a few clients that I worry might be taken down by a DDoS. The main takeaway seems to be "use a CDN", but if you are running a more complex service, what stops attackers from hitting endpoints that aren't CDN-cached? Is the strategy in that case simply to refuse the request very early in the process, to ensure the service doesn't waste much time processing it?
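
Refusing early is usually done with per-IP rate limiting in the reverse proxy, before the request ever reaches application code; a minimal nginx sketch (zone name and limits are illustrative, not a recommendation):

    # allow ~10 req/s per client IP on uncached endpoints, reject the rest cheaply
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;   # http context

    location /api/ {
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;               # rejected requests never hit the app
        proxy_pass http://app_backend;
    }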

nojvek
0 replies
3h55m

The other point here was that even billions of requests per month average out to only a few hundred requests per second.

That can easily be handled on a one-core server if you use an efficient web stack like Go/Rust/Node.js.

Tableplus is a simple marketing site that serves the binary via cdn.

Most of the time when a site goes down, it's because it is doing something very CPU-intensive either at the app layer or the database layer, e.g. an expensive query.

If queries are hitting indexes and the app is just doing simple auth, routing, and sending queries to the DB, it's hard to DDoS easily.

With things like Cloudflare Pages and Functions, someone could hit it with billions of requests/month and you'd still be in the standard $5/month tier.

They could download terabytes off CDN and you’d have $0 cost.

It’s pretty radical how much you can build on Cloudflare on their free tier.

bun_terminator
1 replies
8h51m

I guess this is an ad, so I'll bite: why is the Mac download button featured so centrally, when there are also downloads for other platforms? It's not like that's a usual default.

troupo
0 replies
7h45m

Their original product was Mac-only, and they added other platforms only recently

wigster
0 replies
7h7m

THATS not a DDos attack! when i were a lad...

vdddv
0 replies
9h22m

Is there any way to know who's behind a DDoS attack?

ur-whale
0 replies
9h6m

Public boasting as a mitigation strategy, that's got to be a new one.

Not entirely sure it's a wise approach given the deeply asymmetric infrastructure costs of DDoS attacks, especially if the attacker has access to a botnet.

[EDIT]:

in other words, there is a non-zero probability that the attacker, piqued by the boasting, might be able at the flick of a switch to increase the intensity of the attack by a factor of 1M.

tluyben2
0 replies
9h34m

Similar problem and similar-ish product [0]; we get DDoSed a lot and I don't know why. We had to turn on Cloudflare's Bot Fight Mode to stop it. That works very well, but what do you do if CF doesn't exist?

[0] https://flexlists.com

sylware
0 replies
6h53m

It is like computer viruses.

DDoS attacks do benefit some specific corps, for instance Cloudflare.

What's very important is to build DDoS-resistant infrastructure without them, to remove the incentive to shadow-hire hackers to DDoS and force infrastructures to move there and pay them.

There is too much suspicion in the digital world nowadays. Like current crypto is not mainly for shady ops and mafia? Really?

razodactyl
0 replies
9h22m

I like this a lot. Why? Because the attacks are directed at someone who isn't bothered, and the attackers waste their own resources. I've been a TablePlus user for near a decade now and enjoy the simple but highly compatible software they provide.

neya
0 replies
3h3m

This is the dumbest thing I've read on HN today.

"We do nothing..because we can."

This speaks volumes about your attitude towards security as a business. If I were your enterprise client, I wouldn't really be happy reading this.

memothon
0 replies
2h50m

A post like this seems kind of dangerous. Just asking someone to fire their cannon at you! Beware.

kbar13
0 replies
8h27m

Not really that interesting of a post. Billions of requests per month is like low hundreds of requests per second. A billion is a big number, but so is a month when it comes to request throughput. All the grandstanding about a monolith... for something that serves 2-3 requests per second and is a static marketing site... this is so overblown.

hntddt1
0 replies
8h41m

It's getting to a point where directly finding the person behind it is cheaper than fixing the bug. People nowadays don't pay respect to hard-working people anymore.

filleokus
0 replies
8h49m

Was hoping for something more swole-dog-worthy when reading the headline. Even though I agree with much of the advice, being behind Cloudflare is definitely not nothing.

Depending on the distribution of the traffic, they might have survived well on VPSs without Cloudflare anyway; it doesn't seem that large. It would be interesting to see more detailed stats on rps and how much (if any) Cloudflare stopped before it got to them.

Russian layer-7 DDoSes that I know of targeting Swedish companies have been large enough that major providers run into capacity problems and fall over (including Verizon, Azure Front Door, Cloudflare, GCP's load balancer). This strategy would absolutely not work against those volumes.

dsign
0 replies
6h29m

When using binaries, you can let Linux Systemctl handle the process

“Systemctl” instead of “systemd”? Hm, do I detect reticence to publicly admit the undeniable, vast superiority of systemd by confusingly using the name of the utility?

dewey
0 replies
6h34m

Almost sounds like a buggy update process of their app that they shipped.

ddorian43
0 replies
9h35m

A nice thing about modern cloud providers is their expensive bandwidth, so a new vector of attack is simply downloading large files that they host (except Cloudflare).

ckdarby
0 replies
6h44m

I literally laughed when they talked about language choices for a billion requests per month.

I've got Node.js Lambda code that is doing 388B requests/month, and only at this point have we even considered changing the language for performance, because the cost savings have a net positive ROI.

It took 5 years to get to this point.

b0x68
0 replies
4h45m

What does “heete” mean?

aoeusnth1
0 replies
1h8m

Why are billions of requests per month so exciting to the authors? That’s only a few hundred QPS, which a single-core application should be able to handle easily.

Wake me up when you have hundreds of millions of QPS of DOS load.

andrewmackrodt
0 replies
7h27m

The architecture of the app doesn't seem related to the "DDoS" attack they're describing. If it's only their setup file being downloaded, I imagine their backend isn't even touched, doubly so if they're using Cloudflare for caching.

_ache_
0 replies
8h9m

People have broken CI/CD, so we do memes. Not sure if 6M/month is a lot. Looks like not that much.

PaulHoule
0 replies
5h52m

Downloading a setup file is not the way to bring down a site. My experience in the HDD era was that people laugh at you when you do a lot of requests like that, but call the FBI on you (at least here in the States) if you insert a lot of random users into their database. (Each of those requires a transaction, and each transaction requires waiting for the disk to spin around, unless they had a nice battery-backed write cache.)

KingOfCoders
0 replies
8h24m

Doesn't look like a real DDoS attack to me with these traffic numbers (of course, No True Scotsman).

4 TB / 200 MB = 20,000 downloads.

CanaryLayout
0 replies
8h51m

Yeah, goroutines are great. Then add something like WebRTC to your project, which realistically tops out at 10,000 listeners, and people wonder why Twitter Spaces is so buggy...

AtNightWeCode
0 replies
8h9m

Those numbers in the screenshot from Cloudflare represent requests to Cloudflare, not requests to the origin. They include cache hits.