
I looked through attacks in my access logs

pferde
59 replies
1d1h

An interesting thing that I've noticed is that some of the attackers watch the Certificate Transparency logs for newly issued certificates to get their targets.

I've had several instances of a new server being up on a new IP address for over a week, with only a few random probing hits in access logs, but then, maybe an hour after I got a certificate from Let's Encrypt, it suddenly started getting hundreds of hits just like those listed in the article. After a few hours, it always dies down somewhat.

The take-away is, secure your new stuff as early as possible, ideally even before the service is exposed to the Internet.

KronisLV
31 replies
23h38m

The take-away is, secure your new stuff as early as possible, ideally even before the service is exposed to the Internet.

Honestly, it feels like you'll need at least something like basicauth in front of your stuff from the first minutes it's publicly exposed. Well, either that, or run your own CA and use self-signed certs (with mTLS) before switching over.

For example, some software still has initial install/setup screens where you create the admin user, connect to the DB, and so on, as opposed to specifying everything up front in environment variables, config files, or other more specialized secret-management solutions.

p_l
27 replies
23h19m

Generally I'd recommend not exposing anything unless you deployed the security for it.

Just SSH scanning can be a big issue.

psanford
26 replies
22h27m

A big issue how? If you block password auth, ssh scanning is a nonissue.

bradknowles
16 replies
21h13m

DDoS attack on your sshd?

psanford
10 replies
17h34m

Are we just speculating? SSH scanners are not sources of DDoS. Large companies have SSH bastions on the internet and do not worry about SSH DDoS. It's not really a thing that happens.

You don't need to freak out if you see a bunch of failed ssh auth attempts in your logs. Just turn off password based authentication and rest easy.

koito17
7 replies
17h8m

Agreed. Another thing you can do to drastically reduce the number of bots hitting your sshd is to listen on a port other than 22. In my experience, this cuts ~90% of the clutter in my logs. (Disclaimer: this may not be the case for you or anyone else)
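
Concretely, a minimal sshd_config sketch covering both suggestions (key-only auth, plus a non-default port purely to cut noise; reload sshd afterwards):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password
Port 2222   # any uncommon port; reduces log noise, not a security boundary
```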

TacticalCoder
5 replies
16h39m

Just to reduce the crap in the logs, and also because I can, I have my SSH servers (not saying what their IPs are) using a very effective measure: traffic is dropped from the entire world, except for the CIDR blocks (kept in ipsets) of the five ISPs, across three countries, that I could reasonably be on when I need to access the SSH servers.
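
The ipset/iptables shape of that, roughly (203.0.113.0/24 stands in for one ISP's block):

```sh
# create the allowlist set and add the ISP CIDR blocks you trust
ipset create ssh-allow hash:net
ipset add ssh-allow 203.0.113.0/24

# accept SSH only from the set, drop everything else
iptables -A INPUT -p tcp --dport 22 -m set --match-set ssh-allow src -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```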

And if I'm really, say, in China or Russia and really need to access one of my servers through SSH, I can use a jump host in one of the three countries that I allow.

So effectively: DROPping traffic from 98% of the planet.

Boom.

bradknowles
3 replies
15h51m

Deny by default, allow only those sources that are considered trustworthy. And frequently re-evaluate who and what should be considered trustworthy.

aborsy
2 replies
13h38m

Or close the ports, and install an agent that phones home, such as Tailscale or Twingate.

PLG88
0 replies
9h14m

This is the way: outbound-only connections, so you can stop all external unauthenticated attacks. I wrote a blog post two years back explaining zero-trust networking using Harry Potter analogies... what we are describing is making our resources 'invisible' to silly muggles - https://netfoundry.io/demystifying-the-magic-of-zero-trust-w...

Fnoord
0 replies
10h50m

Or just Wireguard itself.

Fnoord
0 replies
10h51m

I don't want to be able to auth whilst physically in authoritarian regimes. If I had to be physically there it'd be via burner devices.

amne
0 replies
7h59m

I used to have an iptables config that dropped everything on the SSH port by default, plus a DNS server that would allow my IP to connect to SSH when it was queried for a magic string. It helped that the DNS server actually managed a domain and saw real traffic, so you couldn't easily isolate my magic queries.
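
A rough sketch of that trick, assuming dnsmasq with query logging enabled (the hostname, log path, and ipset name are all placeholders, and the ipset must be created with timeout support, e.g. `ipset create ssh-allow hash:ip timeout 0`):

```sh
#!/bin/sh
# watch DNS queries and allowlist the source IP of the "magic" lookup;
# a matching iptables rule accepts SSH only from the ssh-allow set
tail -F /var/log/dnsmasq.log | while read -r line; do
  case "$line" in
    *"query[A] knock-s3cr3t.example.com from "*)
      ip=${line##* from }
      ipset add ssh-allow "$ip" timeout 3600 2>/dev/null
      ;;
  esac
done
```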

bradknowles
1 replies
15h52m

Until there is a new zero day on sshd.

You want to keep these things behind multiple locked doors, not just one.

For the servers themselves, you shouldn't be able to get to sshd unless you're coming from one of the approved bastion servers.

You shouldn't be able to get to one of the approved bastion servers unless you're coming from one of the approved trusted sources, you're on the approved user access list, and you're using your short-lived SSH certificate that was signed with a hardware key.

And all those approved sources should be managed by your corporate IT department, and appropriately locked down by the corporate MDM process.

And you might want to think about whether you should also be required to be on the corporate VPN. Or, to be using comparable technologies to access those approved sources.
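
For the short-lived certificate piece, a minimal ssh-keygen sketch (names and validity are placeholders; in practice the CA key would sit on a hardware token or behind a signing service):

```sh
# sign a user's public key, valid for 8 hours
ssh-keygen -s user_ca -I alice@example.com -n alice -V +8h id_ed25519.pub

# on the bastion, sshd_config trusts certificates from that CA:
#   TrustedUserCAKeys /etc/ssh/user_ca.pub
```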

op00to
0 replies
5h27m

For the servers themselves, you shouldn't be able to get to sshd unless you're coming from one of the approved bastion servers.

What if there's a zero-day in your bastion service, whatever that is?

indigodaddy
4 replies
20h53m

One option could be to lock the port down to only your jump/bastion server source IP.

bradknowles
3 replies
20h50m

That fails when you lose that IP address, or when you lose that server.

indigodaddy
1 replies
20h45m

I suppose if you don’t have console access, sure. But inconvenient at worst imv.

bradknowles
0 replies
20h43m

If you have a way around that bastion server, then at least you've got a backup. But then you also have to worry about the security of that backup.

PLG88
0 replies
9h13m

Yes, better to make your bastion 'dark' without being tied to an IP address. This is how we do it at my company with the open source tech we have developed - https://netfoundry.io/bastion-dark-mode/

p_l
4 replies
17h25m

Until a junior from another project enables password-based root logins, because the Juniper team that was on site to help them install a beta version of some software they collaborated on asked them to.

This was a few days after they had asked to redirect an entire subnet to their rack.

And yes, you still need to remember to disable password logins, or at least pick serious passwords if you need them. It also helps to have no root login over SSH, and normal user names that aren't the defaults for some distro...

psanford
3 replies
17h18m

It sounds like your organization has a bunch of problems. I'm sorry to hear that. But I don't think you can really blame those on ssh.

There are plenty of organizations and individuals that can competently run ssh directly on the internet (on port 22) with zero risk from "ssh scanners."

sumtechguy
1 replies
4h53m

Your security is only as good as the people running your system. Unfortunately not everyone has teams of the best of the best. Sometimes you get the junior dev assigned to things. They do not know any better and just do as they are told. It is the deputized-sheriff problem.

p_l
0 replies
3h5m

In that case it wasn't even the junior's fault - they were following experts from Juniper who were supposed to be past masters at installing that specific piece of crap (as someone who later accidentally became a developer of that piece of crap for a time, I feel I have the basis for the claim).

And those people told him the install system didn't support SSH keys (hindsight: it did) and got him to make root logins possible with passwords. Passwords that weren't particularly hard to guess, because their only expected and planned use was for the other team to log in for the first time and set their own, via the BMC, before the machines were exposed to the internet.

p_l
0 replies
17h8m

I am not blaming this on SSH (also, no longer in that org for many years).

I am just pointing out (as I have in a few other, off-site discussions) that one should not even think of exposing a port before finishing locking it down.

Because sometimes people forget, even experienced people (including myself), and sometimes that's enough (I think someone a few weeks ago submitted a story which involved getting pwned through an accidentally exposed Postgres?).

And there are enough people who get it wrong, for various reasons, that the lowest of low script kiddies can profit by buying ready-made extortion kits on Chinese forums, getting a single Windows VM to run them, and extorting money from gambling/gameserver sites. Not to mention all the fun stuff if you search for open VNC/RDP.

yorwba
1 replies
8h15m

Yes, if you follow the advice of "not exposing anything unless you deployed the security for it" of course you block password auth before exposing SSH to the internet.

Not everyone is following that advice. Just last week I taught a friend about using tmux for long-running sessions on their lab's GPU server, and during the conversation it transpired that everyone was always sshing in using the root password. Of course plugging that hole will require everyone from the CTO downward to learn about SSH keys and start using them, so I doubt anything will change without a serious incident.

op00to
0 replies
5h29m

Of course plugging that hole will require everyone from the CTO downward to learn about SSH keys and start using them

I had a similar issue 15 years ago, and tied the Linux boxes into Active Directory and authenticated via Kerberos. Worked nice, no SSH keys needed!

aunderscored
1 replies
21h37m

Unless an attack on the sshd itself is employed. Which is also possible.

psanford
0 replies
17h36m

Pre-auth sshd vulnerabilities are extremely rare and are not what ssh scanners are looking for.

Filligree
1 replies
21h12m

something like basicauth

I wish. I use basicauth to protect all my personal servers, the problem is Safari doesn't appear to store the password! I always have to re-authenticate when I open the page. Sometimes even three seconds later.

eru
0 replies
10h47m

Have you considered using a different browser?

woleium
0 replies
23h9m

or get your certificates using dns auth a week or so prior to exposing the service
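
e.g. with certbot's manual DNS-01 mode (the domain is a placeholder; the DNS-plugin variants automate the TXT record step):

```sh
certbot certonly --manual --preferred-challenges dns -d www.example.com
```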

elashri
8 replies
1d1h

It is also useful to rely more on wildcard certs, as it makes it difficult to determine from CT logs the specific subdomains to attack.

nullindividual
3 replies
23h52m

A compromised wildcard certificate has a much higher potential for abuse. The strong preference in IT security is a single-host or UCC (SAN) certificate.

Renewing a wildcard is also unfun when you have services which require a manual import.

dfc
2 replies
20h25m

Renewing any certificate that requires a manual import is not fun. Why are wildcard certs less fun to manually import than individual certificates?

nullindividual
1 replies
19h56m

Presumably one purchases a wildcard for multiple distinct systems.

dfc
0 replies
18h34m

Using them like that never occurred to me. I was thinking multiple sites on one host, or vanity hostnames: dfc.example.com / nullindividual.example.com, etc.

bombcar
2 replies
1d1h

There's really no reason to avoid wildcard certs for your domains, unless you have many subdomains managed by various business interests.

I use LE wildcard certs and they're great, you can use them internally.

couchand
1 replies
23h53m

It seems like the principle of least power would apply here. There's value in restricting capability to no more than strictly necessary. Consider the risk of a compromised some-small-obscure-system.corporate.com in the presence of a mission-critical-system.corporate.com when both are issued wildcard certs.

Wildcard certs are indeed a valuable tool, but there is no free lunch.

baobun
0 replies
20h1m

You'd usually put a reverse proxy exposing the services and terminating TLS with the wildcard cert.

The individual services can still have individual non-wildcard internal-only certs signed by an internal CA. These don't need to touch an external CA or appear in CT logs - only the reverse proxy/proxies should ever hit these, and can be configured to trust the internal CA (only) explicitly.
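
A minimal nginx sketch of that layout (names, paths, and the upstream address are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    # public-facing wildcard cert from the external CA
    ssl_certificate     /etc/ssl/wildcard.example.com.fullchain.pem;
    ssl_certificate_key /etc/ssl/wildcard.example.com.key;

    location / {
        # backend presents a cert signed by the internal CA
        proxy_pass                    https://10.0.0.12:8443;
        proxy_ssl_verify              on;
        proxy_ssl_trusted_certificate /etc/ssl/internal-ca.pem;
        proxy_ssl_name                backend.internal;
    }
}
```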

stefandesu
0 replies
12h7m

Yeah, I switched to wildcard certs at some point for this reason.

heywoodlh
4 replies
1d1h

Was looking into Certificate Transparency logs recently. Are there any convenient tools/methods for querying CT logs? i.e. search for domains within a timeframe

Cloudflare’s Merkle Town[0] is useful for getting overviews, but I haven’t found an easy way to query CT logs. ct-woodpecker[1] seems promising, too

[0] https://ct.cloudflare.com/

[1] https://github.com/letsencrypt/ct-woodpecker

simonw
1 replies
1d1h

Steampipe have a fun SQLite extension that lets you query them via SQL: https://til.simonwillison.net/sqlite/steampipe#user-content-...

It uses an API provided by https://crt.sh/
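
For quick ad-hoc lookups you can also hit the crt.sh JSON endpoint directly (example.com is a placeholder; the endpoint is best-effort and rate-limited):

```sh
curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u
```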

high_priest
0 replies
23h43m

Querying crt.sh helped me identify a dev service I was supposed to take down but had forgotten about. Nice alternative use case :D

supriyo-biswas
3 replies
1d1h

These aren't attackers - they're usually services like urlscan.io and others who crawl the web for malware by monitoring CT logs.

joshspankit
2 replies
1d1h

The thread is specifically talking about logs of attacks

fulafel
1 replies
6h47m

The message they responded to was not, it was:

"I've had several instances of a new server being up on a new IP address for over a week, with only a few random probing hits in access logs, but then, maybe an hour after I got a certificate from Let's Encrypt, it suddenly started getting hundreds of hits"

pferde
0 replies
5h40m

What I wrote was ".. hundreds of hits just like those listed in the article ...", and the article listed attacks.

pkulak
3 replies
16h51m

I host so many services, but I gave up totally on exposing them to the internet. Modern VPNs are just too good. It lets me sleep at night. Some of my stuff is, for example, photo hosting and backup. Just nope all the way.

justaj
2 replies
16h34m

If you're the only one accessing those services, then why use a VPN instead of binding them to localhost on the server and forwarding that port to your client machine's localhost over SSH?
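
i.e. something along these lines (hostname and port are placeholders, assuming the service listens on 127.0.0.1:8080 on the server):

```sh
# forward the client's local port 8080 to the service bound to localhost on the server
ssh -N -L 8080:127.0.0.1:8080 user@server.example.com
# then browse http://localhost:8080 on the client
```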

pkulak
1 replies
16h21m

I didn’t understand any of that, sorry. Haha

A VPN lets me access my stuff from my phone while out of the house, for example.

justaj
0 replies
8h2m

This might clear up a few things then: https://web.archive.org/web/20220522192804/https://www.dbsys...

Original article which doesn't contain the first graphic: https://www.xmodulo.com/access-linux-server-behind-nat-rever...

throwbadubadu
0 replies
9h41m

The take-away is, secure your new stuff as early as possible, ideally even before the service is exposed to the Internet.

What? Ideally..before? Seriously? It is 2024.. and this was true even decades ago, absolutely mandatory.

(Still remembering the dev who discovered file sharing happening on his exposed Mongo instance (yes, that!! :D) with no password, only hours after putting it up.. "but how could they know the host, it is secret!!" :D ).

nemothekid
0 replies
14h49m

Fun anecdote - I wrote a new load balancer for our services to direct traffic to an ECS cluster. The services are exposed by domain name (e.g. api-tools.mycompany.com), and the load balancer was designed to issue certificates via Let's Encrypt for any host that came in.

I had planned to make the move over the next day, but I moved a single service over to make sure everything was working. The next day, as I'm testing moving traffic over, I find that I've been rate limited by Let's Encrypt for a week. I check the database and I had provisioned dozens of certificates for vpn.api-tools.mycompany.com, phpmyadmin.api-tools.mycompany.com, and so on down the list of anything you can think of.

There was no security issue, but it was very annoying that I had to delay the rollout by a week and add a whitelist feature.
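
Not the original implementation, but a sketch of the whitelist idea using Go's autocert package (the hostname and cache path are placeholders):

```go
package main

import (
	"crypto/tls"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	m := &autocert.Manager{
		Prompt: autocert.AcceptTOS,
		Cache:  autocert.DirCache("/var/cache/certs"),
		// Only issue certificates for hosts we actually serve,
		// instead of whatever SNI/Host a scanner happens to send.
		HostPolicy: autocert.HostWhitelist("api-tools.mycompany.com"),
	}

	srv := &http.Server{
		Addr: ":443",
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("ok"))
		}),
		TLSConfig: &tls.Config{GetCertificate: m.GetCertificate},
	}
	srv.ListenAndServeTLS("", "") // certs come from autocert, not files
}
```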

maccam912
0 replies
1d1h

Same! As soon as a new cert is registered for a new subdomain, I get a small burst of traffic. It threw me off at first; I assumed I had some tool running that was scanning it.

kafrofrite
0 replies
7h36m

I work as a security engineer and, yes, the CT logs are extremely useful not only for identifying new targets the moment you get a certificate but also for identifying patterns in naming your infra (e.g., dev-* etc.).

A good starting point for hardening your servers is CIS Hardening Guides and the relevant scripts.

feitingen
0 replies
13h19m

I'm still getting crawlers looking for an old printer I got a Let's Encrypt certificate for.

SamuelAdams
21 replies
1d2h

What are some realistic, self hosted mitigation strategies for defending against these attacks?

ggpsv
5 replies
1d2h

Don't expose your services publicly unless it is necessary. If you're self-hosting services that are meant to be accessed only by you then consider accessing them exclusively over a VPN like Wireguard (Tailscale is nice) and firewall everything else.

yjftsjthsd-h
4 replies
1d2h

Or, depending on practical constraints, even HTTP basic auth via an nginx/Apache reverse proxy.
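
e.g. a minimal nginx sketch (paths and the upstream port are placeholders):

```nginx
# create the credentials file once: htpasswd -c /etc/nginx/.htpasswd alice
location / {
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass           http://127.0.0.1:3000;
}
```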

ggpsv
3 replies
1d2h

Yes, though I'd say _and_ not _or_. Just because you're using a VPN it doesn't mean you should drop all forms of authentication.

Aachen
1 replies
1d1h

I am not aware of a vulnerability in popular web server software that has affected a basic auth login screen in over a decade. Assuming you use a proper password and don't typo the domain and end up on someone who wants to specifically phish you through typosquatting, it's about as solid as SSH or WireGuard.

They serve different use cases, but I wouldn't say that a VPN is strictly better than HTTP auth or vice versa. Recommending doubling up for a self-hosted little something, rather than a big target like 4chan or Gmail, is overkill.

ggpsv
0 replies
1d

What I'm saying is that security is about layers. I agree with you, a VPN and HTTP auth is not apples to apples where one is better than the other.

yjftsjthsd-h
0 replies
1d2h

Well, I did mean or; sometimes just sticking httpd in front of the application with a user:pass over https is fine, and also much easier if the client can't run a VPN client or doesn't want to.

mfashby
4 replies
1d2h

Updates as the other commenter says. Also isolation technology like docker containers, chroots, bsd jails, protections that systemd offers, or virtual machines. While not perfect, it means that the attackers must have the ability to chain exploits in order to break out of the compromised application to the rest of the host system.

ggpsv
3 replies
1d2h

Docker is great, but it is easy to shoot yourself in the foot if you use it for convenience but don't actually understand it.

A common mistake is to unknowingly publish Docker ports on all interfaces (e.g. `5432:5432`), which makes your Docker container available to everyone. It is common to see this in Docker tutorials or pre-made Docker Compose files. Coupled with UFW, it may give you a false sense of security, because Docker manages its own iptables rules.
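
For illustration, the difference in a Compose file (Postgres is just an example service):

```yaml
services:
  db:
    image: postgres:16
    ports:
      # risky: published on all interfaces, and Docker's own iptables rules bypass UFW
      # - "5432:5432"
      # safer: reachable only from the host itself
      - "127.0.0.1:5432:5432"
```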

elashri
1 replies
1d1h

I make a habit of not exposing ports and just using a reverse proxy for the container. Of course, you will need a bridged network between the reverse proxy container and the target container, but that's fine. I'm sure there are more clever ways around that.

ggpsv
0 replies
1d1h

I prefer to run the webserver using systemd on the host so publishing the container port to 127.0.0.1 is enough for me.

mfashby
0 replies
20h57m

Yes I've made this mistake with docker and UFW before :( Such a footgun.

azeemba
2 replies
1d2h

Update your software frequently.

klysm
1 replies
1d2h

But not too frequently

Tijdreiziger
0 replies
20h34m

I’d rather update it too frequently and potentially bork something, than not update it frequently enough and potentially get pwned.

wetbaby
1 replies
1d1h

Nord Meshnet, ZeroTier, Cloudflare Tunnels.

Instead of exposing your applications externally, you create a private network that uses UDP hole punching.

This isn't completely self-hosted, as you need some server to auth / broadcast connection details with. Self-hosting might be possible on ZeroTier, but I'm not familiar enough to say for sure.

tamimio
0 replies
19h40m

Yeah, you can self-host ZeroTier; there's even a quick Docker setup for it.

achairapart
1 replies
1d2h

Also, be sure your web server is not serving dotfiles/dotfolders (apart from a few legitimate exceptions, e.g. .well-known).
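
In nginx, a common pattern for this looks roughly like the following (adjust to taste):

```nginx
# deny dotfiles and dotfolders, but keep /.well-known/ (ACME, security.txt, etc.)
location ~ /\.(?!well-known/) {
    deny all;
}
```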

Aachen
0 replies
1d1h

Why? Code is open source and I don't check config changes into .git. Do you mean like .htpasswd which is already default disallowed (on web servers that make use of it in the first place; I think Nginx doesn't block it by default but also doesn't use it so it wouldn't grant any access)?

ufmace
0 replies
22h28m

IMO, you shouldn't do anything special. They're all very low-skill automated attacks. Just design and deploy stuff well, do the basic stuff correctly before you make anything publicly accessible, and don't worry about the noise. If you're doing things properly, none of it will work. Whatever fancy thing anyone suggests to try to reduce the noise likely won't slow down any actual determined human attacker much, and will just cause you more hassle to deploy and maintain.

lazyeye
0 replies
1d

Tailscale is ridiculously easy to install and get running.

layer8
0 replies
20h1m

Use packages from a distribution like Debian and run unattended-upgrades or equivalent for the security-updates repository. They usually fix newly reported vulnerabilities in less than a day.
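
On Debian/Ubuntu, that's roughly:

```sh
apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades
```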

snowwrestler
20 replies
1d1h

Back when I started managing self-hosted sites, I would look through access logs as well. We even had an IDS for a while that would aggregate the data and flag incoming attack attempts for us.

Eventually I stopped proactively reviewing logs and stopped paying for the IDS. It was a waste of time and a distraction.

It's not hard to find really useful content that summarizes common vulnerabilities and attacks, and just use that to guide your server management. There are a ton of best practices guides for any common web server technology. Just executing these best practices to 100% will put you way ahead of almost all attackers.

And then the next best use of your time and resources is to prioritize the fastest possible patching cadence, since the vast majority of attacks target disclosed vulnerabilities.

Where logs are super helpful is in diagnosing problems after they happen. We used log analysis software to store and search logs and this was helpful 2-3 times to help find (and therefore address) the root cause of attacks that succeeded. (In every case it turned out to be a known vulnerability that we had been too slow to patch.)

gnyman
12 replies
23h50m

In every case it turned out to be a known vulnerability that we had been too slow to patch.

Yes. This is why relying on "patching" is bound to fail at some point. Maybe it's a 0-day, or maybe the attackers are just quicker.

The solution to this is defence in depth, and it's very easy for most services, especially when self-hosting personal things. A few things most people can do:

Put up a firewall in front or put it behind VPN/tailscale.

Hide it in a subfolder. The automated attacks will go for /phpmyadmin/; putting it in /mawer/phpmyadmin/ means 99.9% of attackers won't find it. (This is sometimes called security by obscurity, and people correctly say you should not rely on it, but as an additional layer it's very useful.)

Sandbox the app, and isolate the server. If the attackers get in, make it hard or impossible for them to get anywhere else.

Keep logs; they allow you to check if and how you got attacked, whether the attack succeeded, and so on.

Depending on the service, pick one or more of these. Add more as necessary.

The key thing is that you should not rely on any ONE defence, be it keeping it patched or firewalled, because they will all fail at some point.

citizenpaul
5 replies
22h20m

security by obscurity

I detest this phrase right up there with "fake it till you make it". All security is by definition obscurity. Just a meaningless platitude that rhymes.

I suppose "un-formalized, un-proven security practices will eventually be broken" or "counting on hackers not to do any investigation of your system will get you hacked" doesn't roll off the tongue, though.

rablackburn
4 replies
17h58m

All security is by definition obscurity.

Can you expound on this? For example, if I add a 30 second lockout after failed authentication attempts I don’t see how that comes under any non-tortured definition of “obscurity”.

hanszarkov
1 replies
17h20m

yes, also add cryptographic complexity, don't think that qualifies either.

citizenpaul
0 replies
13h35m

Cryptographic complexity is math if you ask me?

A hammer is not construction, it's a tool used by construction?

citizenpaul
1 replies
13h36m

The lockout doesn't exist without the authentication system in the first place, which exists to keep people from knowing the information. It's a subset of something that is needed for obscuring the information in the system from anyone who should not see it.

I really feel dirty trying to justify this level of nerdy pedantry. I'm sure you can poke some holes in my off-the-hip internet comment if you really want to; I'm not trying to be academic. I mostly fueled this comment with my distaste for that other platitude.

leononame
0 replies
11h51m

I think you miss the point of security through obscurity. It's not about keeping the information itself obscure (in your example login information), but rather the method. For example, your password hashing mechanism. If you have a strong password hash function, you don't need to obscure which hash function you use (otherwise, open source software couldn't even exist in certain areas). However, if your security relies on you obfuscating your broken, home-made hash function that only hashes the first three letters of the password, you're not really secure. Security through obscurity is an attempt at securing an otherwise unsecure system by hiding or disguising the implementation.

That being said, obscuring parts of an otherwise secure system is fine as an additional layer, especially if you just want to deter script kiddies that always hammer the same endpoints

jmb99
4 replies
21h28m

This is sometimes called security by obscurity and people correctly say you should not rely on it, but as an additional layer it's very useful.

Anyone who thinks “security by obscurity” is useless should try reverse engineering some properly obfuscated executables (or even code). Obscurity is absolutely useful; definitely not a complete solution by itself, but a very useful component to a security solution.

bradknowles
2 replies
21h8m

The term "security by obscurity" or "security through obscurity" implies that ONLY obscurity is being used to provide the security in question. Like leaving a totally unsecured server on your network with root access available with no password, and telnet or sshd open on a high numbered port instead of the regular ones.

Obscurity is a useful tool to be added on top of real security, and can help reduce the random baseline doorknob jiggling attacks, where people are just scanning the standard ports. But obscurity by itself is not enough to provide any real security beyond that.

bruce511
1 replies
13h0m

You are completely correct. For a level 3 person.

Level 1 : I don't know what I'm doing, so I'll invent stuff only I know in the expectation that'll be enough. This person is told (with good reason) that security by obscurity is no security at all.

Level 2. They got the above message, so do everything right. Setups, firewalls, permissions, and so on. They are proud of their expertise and lecture level 1s all day long.

Level 3. Understand that all the fundamentals need to be done right. Add additional obscurity on top of that, because it doesn't hurt and can filter out some useless traffic. (These folks should also lecture level 1s with the simplified message, but can explain the benefits to level 2s.)

The problem with HN threads like these is that I don't know who's giving the advice. Level 1 2 or 3. Equally readers could be any level. Which might be dangerous if they are level 1.

IF you ARE level 1, learn the correct way to secure things first. THEN feel free to add obscurity onto that if you like.

hnfong
0 replies
4h54m

I dunno, it feels to me that people are too ready to tell "white half-truths" to simplify the message, and assume everyone else is too dense to understand nuance...

Why can't we just outright say to everyone: "learn the correct way to secure things first. THEN feel free to add obscurity onto that if you like"? There's probably some way to convert that message into a catchy phrase, instead of just demonizing security by obscurity, yet having a small sect of elites in the know who break their own "rules"...

FWIW, this (rant) applies to a broader scope as well, beyond "security by obscurity".

Gibbon1
0 replies
20h44m

Reminds me of a friend who does security stuff. He says never store or transmit keys in a useful form. Bonus: using munged keys will trigger alarms.

PLG88
0 replies
9h8m

"and if at all possible keep them off the public internet", this is the way. I would recommend going beyond a VPN to implement zero trust networking which does outbound-only connections so that its impossible to be subject to external network attacks. Tailscale does part of that, other exist, such as the open source project I work on - https://github.com/openziti

dandrew5
6 replies
1d

And then the next best use of your time and resources is to prioritize the fastest possible patching cadence, since the vast majority of attacks target disclosed vulnerabilities.

Just curious, do you leverage any tools to decide when to patch or is it time-interval based? We currently attempt[0] to update our packages quarterly but it would be nice to have a tool alert us of known vulnerabilities so we can take action on them immediately.

[0] "Attempt" meaning we can't always upgrade immediately if the latest version contains difficult-to-implement breaking changes or if it's a X.0.0 release that we don't yet trust

eyegor
3 replies
23h55m

Besides pointing pentester tools like metasploit at yourself, there are some nice scanners out there. Examples in no particular order:

https://github.com/quay/clair

https://github.com/anchore/grype/

https://github.com/eliasgranderubio/dagda/

https://github.com/aquasecurity/trivy

So then you set up something like a cron job to scan everything for you and email the results once a week or whatever if you don't want to monitor things actively.
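
For example, a rough weekly cron sketch with trivy (the image name and email address are placeholders, and it assumes a working local mailer):

```sh
#!/bin/sh
# /etc/cron.weekly/vuln-scan
trivy image --severity HIGH,CRITICAL registry.example.com/myapp:latest 2>&1 \
  | mail -s "weekly vulnerability scan" ops@example.com
```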

dandrew5
1 replies
21h47m

Thanks for these. I should have clarified, I'm more interested in something that will alert me when newly-discovered vulnerabilities surface. The systems I maintain are protected [enough] today but a new hack could drop that doesn't make mainstream media and I may not hear about it. We have annual security audits but it would be nice to patch things immediately. Aside from subscribing to a security forum/discord/slack, I'm wondering what other methods folks are employing to solve this.

bradknowles
0 replies
21h3m

Keep your pentesting tools up-to-date. Run them against yourself on every single deployment, if you can. Don't just run them quarterly because that's all that your PCI-DSS requirements say you have to do.

Integrate security code scanning tools into your CI/CD process. Tools like Dry Run Security, or something comparable.

There's much more, but that has to do with how to run your CI/CD systems and how to do your deployments in general, and less to do with security aspects thereof.

BLKNSLVR
0 replies
16h28m

I just set up GVM / OpenVAS[0] to work out where I need to put in some maintenance work; how does that rate as worthwhile in comparison to those you've listed above? (which I will also look into)

(not a fan of the effort Greenbone have gone to for hiding their community edition and promoting their commercial products)

[0]: https://greenbone.github.io/docs/latest/22.4/container/index...

snowwrestler
0 replies
15h39m

I’m out of the self-hosting game now, but back in the day we just tried to keep up on security announcements.

layer8
0 replies
20h12m

The simplest is to only use packages from a distribution like Debian and run unattended-upgrades or equivalent for the security-updates repository. They usually fix vulnerabilities in less than a day.

1B05H1N
20 replies
1d2h

I work in application/product security and have managed WAFs for multi-billion dollar companies for many many years.

Move DNS to Cloudflare and put a few WAF rules on your site (managed challenge if bot score is less than 2 / attack score == x). I doubt you'll even pay anything, and it will resolve a lot of your problems. Just test it before moving it to production, please (maybe set up a test domain). Remember, a WAF is not a be-all and end-all; it's more of a band-aid. If your app isn't hardened to handle attacks, no amount of advanced WAF/bot protection will save it.
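
As a sketch, the expression for such a custom rule (with Managed Challenge as the action) might look something like the following; the thresholds are illustrative, and availability of these fields depends on your Cloudflare plan:

```
(cf.bot_management.score lt 2) or (cf.waf.score lt 20)
```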

Message/email me if you need help.

418tpot
15 replies
1d1h

Yes, just what the internet needs: even more websites centralized behind Cloudflare. Why do we even bother with TLS anymore if we're going to give them unencrypted access to practically all of our internet traffic?

Hacker news is so funny, they complain about the amount of power we've allowed Google, Amazon, and Microsoft to have, and then go right around and recommend putting everything behind Cloudflare.

Once Cloudflare starts using attestation to block anyone not on Chrome/iOS Safari it'll be too late to do anything about it.

NicoJuicy
5 replies
20h48m

Once Cloudflare starts using attestation to block anyone not on Chrome/iOS Safari it'll be too late to do anything about it.

That's just plain bs...

Eg

1) they have customers and their customers want protection, with minimal downsides.

2) Cloudflare is the only one with support for Tor. I'm 100% sure you didn't know that.

What "examples" do you have to blame them for something they aren't doing? Based on what?

I'm getting tired of people blaming Cloudflare for providing a service that no one else can provide for free to small website owners => DDOS protection.

dang
2 replies
12h46m

Could you please stop breaking the site guidelines? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

You're of course welcome to make your substantive points thoughtfully while staying within the rules.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

NicoJuicy
1 replies
1h4m

You're correct.

I went back over my last comments and they've been snarky lately.

Not an excuse; a lot is going on, and I've been overworked and short on patience lately.

That shouldn't reflect in my comments and I'll pay more attention to it.

Have a good week.

dang
0 replies
34m

Appreciated!

pid1wow
1 replies
20h6m

What do you mean? On Tor I get a Cloudflare block just from clicking 2 links on the front page of HN:

http://forums.accessroot.com/index.php?showtopic=4361&st=0

Please wait while your request is being verified...

I can't remember any day I didn't get a Cloudflare block. Even on bare IP sometimes. WAFs are security theater.

NicoJuicy
0 replies
6h38m

Site admins can enable onion routing: https://developers.cloudflare.com/network/onion-routing/

Which circumvents the bad reputation of certain exit nodes:

Due to the behavior of some individuals using the Tor network (spammers, distributors of malware, attackers), the IP addresses of Tor exit nodes may earn a bad reputation, elevating their Cloudflare threat score.
solumunus
3 replies
1d1h

Hacker news is so funny, they complain about the amount of power we've allowed Google, Amazon, and Microsoft to have, and then go right around and recommend putting everything behind Cloudflare.

It’s almost as if those saying contradictory things are actually different people despite being on the same website. But it can’t be that, surely? Truly a perplexing phenomenon that I hope someone can one day explain.

418tpot
2 replies
1d1h

Fair, although I know quite a few people who hold both of these opinions simultaneously, because I've met them in person. It's only after I point out their hypocrisy that they even realize what a danger Cloudflare poses to the free and open internet.

I suspect it's because hating on Google is in vogue, and so is recommending Cloudflare.

jopsen
0 replies
20h36m

Given how Cloudflare works I imagine that there are alternative services offering the same thing.

Probably not as cheap. AWS can put a WAF and CDN in front of your site too.

And migrating from one service to another isn't much more work than moving DNS records.

Just saying, it's not the same level of vendor lockin as using dynamodb or whatever.

BLKNSLVR
0 replies
16h36m

I'm going to try to provide / justify my potentially hypocritical viewpoint:

I use Cloudflare (free tier) in front of the very few and almost entirely unused websites that I run. I believe that the service they provide is useful for protecting the IP addresses of the servers on which the content is hosted, whilst also providing some amount of protection from malicious traffic.

I also agree that centralisation of services is a big problem for the future of the internet.

My position is that, whilst there seem to be increasing voices / examples of Cloudflare('s potential in) acting against the nebulous notion of the "spirit of the internet", for me they certainly haven't reached the "evil" stage. I'm also of the understanding that it's Cloudflare customers who choose to block access from Tor or VPS IP address ranges and/or add CAPTCHAs or other bothersome verification. True, Cloudflare enables it and makes it possible, but the administrators of the website that you're trying to visit have made the choice to make it more difficult for you to access their content; not Cloudflare themselves.

I would prefer there to be similar-scale alternatives to Cloudflare as a kind of a middle-ground decentralisation of centralisation. I'm sure there are alternatives, but I'm not yet motivated enough to even consider starting the research process.

If Cloudflare start selling visitor analytics to data brokers, however, very fast goodbye.

chaxor
1 replies
21h30m

Agreed

We should be suggesting self hosted and decentralized solutions to website hosting and file hosting.

On that note, does anyone have any secure methods of serving a file from your computer to anyone with a phone/computer that doesn't require them to download/install something new? Just a password or something? Magic-wormhole almost seems great, but it requires the client to install wormhole (on a computer, not a phone), and then type specific commands along with the password.

Is there a simple `iroh serve myfile.file` from server and then client goes to https://some.domain.iroh/a086c07f862bbe839c928fce8749 and types in a password/ticket you give them?

That would be wonderful.

albuic
0 replies
9h55m

Sharedrop or p2p sharing site like this one.

redcobra762
0 replies
15h46m

It’s kind of an absurd notion to think the Internet would just allow Cloudflare to make any kind of unilateral decisions like what you suggest.

esafak
0 replies
23h10m

You criticize but don't offer suggestions. What do you use instead of Cloudflare?

dang
0 replies
12h43m

Can you please not post in the flamewar style? It's not what this site is for, and destroys what it is for.

You're welcome to make your substantive points thoughtfully but it needs to be within the rules. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

ozim
0 replies
23h45m

Putting WAF on app and calling it a day is indeed putting lipstick on a pig.

I can imagine that it might be needed if some company for some reason has to run stuff that's not really up to date, but yeah, it is just a band-aid.

cloudking
0 replies
4h31m

This works well for standard WordPress sites, throw in GuardGiant and Sucuri plugins for extra layers.

asabla
0 replies
1d1h

Usually I only manage internal-facing applications these days, which greatly reduces the attack surface compared to public ones.

But since you seem to have a lot of knowledge in this area: have you managed solutions that also include infrastructure in Azure combined with Cloudflare?

And if so, any suggestions on things people usually miss, apart from the usual OWASP stuff and whatnot?

CharlesW
0 replies
1d1h

I was unfamiliar with this, so for anyone who's in a similar position: https://blog.cloudflare.com/waf-for-everyone/

The Free Managed Ruleset appears to be deployed by default, and Cloudflare keeps a changelog here: https://developers.cloudflare.com/waf/change-log

evantbyrne
15 replies
1d1h

The elephant in the room is that–at least in my experience–a lot of these attacks come from hostile nation states. This is going to be controversial, but one may find it useful to block entire IP ranges of problematic states that you cannot do business with. I was able to block 100% of probes to one of my new services by doing this.

redcobra762
5 replies
15h48m

Isn’t blocking regions just going to block the legitimate traffic, and push determined adversaries to use proxies?

oblio
4 replies
15h33m

Isn’t blocking regions just going to block the legitimate traffic

I imagine blocking North Korea, Iran, etc will probably impact 0.5% of traffic and 0.001% of revenue for most sites.

and push determined adversaries to use proxies?

That might mean you're left with 10% of the previous attackers so it could be worth it.

redcobra762
2 replies
14h27m

Is it? Which of these "attackers" are you most concerned with? I hesitate to even call them "attackers." They're jiggling your front door knob, at worst.

evantbyrne
1 replies
13h49m

This is actually a decent analogy because a person sneaking around the neighborhood trying to open doors should be considered a threat. Nothing good happens when someone like that gets inside.

redcobra762
0 replies
13h47m

Eh, yeah, but at the same time, can you jiggle doorknobs from halfway around the world, and is it so overwhelmingly common that within minutes of every door being built, dozens of people come by just to jiggle the knob?

It's just so unbelievably common and so frequently harmless that it's hard to take all that seriously. But you're right, it is a threat, I won't deny that.

rvba
0 replies
3h56m

Can you even sell stuff to North Korea or Iran? Aren't they under some embargo?

A friend told me that someone in his very big company sold some random stuff to North Korea, and now 1) they have mandatory training not to sell there, and 2) they have to go through mandatory training on non-proliferation of nuclear weapons.

nerdponx
2 replies
1d

It's not an elephant, it's just that many people aren't willing to block legitimate users from those regions.

evantbyrne
1 replies
21h30m

American websites don't typically have customers dialing in from North Korea. Just saying that IP blocking is something more businesses should consider. Traffic can also be routed to different subdomains for businesses that need to provide a subset of services to a region.

tjbiddle
0 replies
16h28m

Most admins blocking regions aren't so selective; rather than blocking North Korea, they'll block everything outside the USA.

As an American living abroad, it's not uncommon for me to not be able to access a website I need (or want) to, and I need to pop onto my VPN.

macintux
2 replies
1d

15+ years ago I was reviewing server logs for small, local businesses we hosted and came to the conclusion we should just block all APNIC IP addresses.

acherion
1 replies
20h52m

APNIC includes Australia, New Zealand, Japan, Taiwan and Singapore – did you block addresses from there too?

macintux
0 replies
20h30m

Given the length of time since then, I have no recollections beyond making the decision

wooque
1 replies
6h21m

That would mean blocking USA and Netherlands :)

Most of the probing of my server comes from the USA, with the Netherlands a distant second. Probably most of it comes from AWS and other data centers.

evantbyrne
0 replies
3h32m

Do what you gotta do haha

op00to
0 replies
5h25m

If only nation states had some way of purchasing servers in another country and launching their attacks from there.

warkanlock
12 replies
1d2h

If anyone is receiving these types of logs on AWS, please do yourself a favor and place AWS WAF in front of your VPC.

It's not expensive and can significantly help you, saving you from many headaches in situations like this. While it might not block everything that arrives at your service, it can be a great help!

anamexis
5 replies
1d2h

This is a good suggestion, but be careful with the default rulesets. We turned on AWS WAF (in our case, the motivation was just SOC 2 compliance).

There were a few overzealous rules that subtly broke parts of our app.

There were request body rules that did things like block requests that contained "localhost" in the request body. There was also a rule that blocked requests without a User-Agent header, which we were not previously requiring on API requests, so we broke our entire API for a few users until we figured that out.

everfrustrated
3 replies
1d2h

In my experience WAFs are not something that one should ever "just turn on".

Complete due diligence is required to fully understand and realise the impact of the rules and should be tested like any software change by going through a testing phase.

Ideally software teams should be fully trained and be responsible for their lifecycle.

marcosdumay
0 replies
1d1h

Just to add: testing alone will never work well enough for something like this.

This is one of the cases where you must understand what you are doing. There's no technique for doing it mindlessly.

macNchz
0 replies
1d2h

Yes you need to be familiar with the rulesets being applied, and prepared to closely monitor what is being blocked. Ideally I think I’d roll it out one ruleset at a time to limit the number of potential issues being introduced at once.

Had a fun one after turning on the AWS WAF with some default rules–a small number of users reported they couldn’t upload new logo images anymore. Turned out some Adobe product was adding XML metadata to the image files, which the WAF picked up and blocked.

arter4
0 replies
1d1h

Agree. There are so many ways WAF rules can unintentionally block legitimate traffic. From very long URLs (is that a DoS attempt?), to special characters in a POST with a file upload (is that = part of a SQL injection attempt or is that just part of a base64 encoded file?) and so on.

nullindividual
0 replies
23h48m

I use the Azure equivalent of the AWS WAF but I have no direct experience with AWS WAF. Azure WAF leverages the OWASP ruleset[0] and many of those rules throw false-positives, SQL-related rules being one of the top offenders.

As you note, it requires adjustment due to overzealous rules. OWASP has Paranoia Levels[1] which allow you to be more targeted.

[0] https://github.com/coreruleset/coreruleset

[1] https://coreruleset.org/20211028/working-with-paranoia-level...

bogota
2 replies
1d1h

It is not that easy; using the AWS WAF with default rules for our application led to many valid requests and IPs being blocked. You need to know what is being blocked and verify at first, or you will in some cases be losing customers.

CubsFan1060
1 replies
1d1h

Your best plan is to start with all the rules in count mode. Let that sit for a while and analyze anything that was counted. As you feel good about it, slowly start to move things into block.

mango7283
0 replies
12h10m

Problem is when you get owned during that window and get slammed for not blocking sooner.

Then you block sooner and get slammed for blocking too soon.

Repeat for 1000 web services.

remram
0 replies
22h37m

WAF tends to ban widely, sometimes for dubious reasons. For example, researchers at my university study Twitter data, and the mere fact of following links from a small random sample of tweets means that our university's IPs are blocked by most WAF.

mac-chaffee
0 replies
20h21m

In actuality, WAFs hurt more than help. They give a false sense of security since they are so easily bypassable, plus they have a significant performance cost and a significant chance of blocking legitimate traffic: https://www.macchaffee.com/blog/2023/wafs/

fabian2k
0 replies
1d1h

None of the attacks listed in the post would be an issue for any kind of modern web application. Why should I add a WAF for this?

simpaticoder
11 replies
1d1h

Thank you! I've been self-hosting for about a year running a 400-line http/s server of my own design, and it's remarkable all the attacker traffic my 3 open ports (22, 80, 443) get, although I've never taken the time to analyze what the attackers are actually trying to do! This post fills in a LOT of blanks.

Would be cool to do the same thing for the weird stuff I see in /var/log/auth.log!

It's crazy that attackers would bother with me since the code is entirely open source and there is no server-side state. The best outcome for an attacker would be root access on a $5/mo VPS, and perhaps some (temporary) defacement of the domain. A domain no-one visits!

epcoa
6 replies
1d1h

These are all automated bots. No one is “bothering”. You open the 3 most well known ports you’re going to get connections. They don’t know what you’re running nor do they care.

simpaticoder
5 replies
1d1h

By "bothering with me" I mean "add my IP to the long list of IPs they are scanning".

By the way, I find it annoying that my logs get filled with this kind of trash. It has the perverse effect of making me long for something like Google Analytics since they rarely if ever bother running a javascript runtime.

epcoa
3 replies
1d1h

That long list isn’t curated, it’s every publicly routable IPv4 address. It really does not take long to run some canned probes against 3.7 billion addresses. Making your service IPv6 only tends to cut down on this traffic. You’re anthropomorphizing a script on some botnets.

iramiller
1 replies
19h37m

IPv6 only? If you have a DNS record for it, you are still not making it very difficult for scripts to find you.

epcoa
0 replies
18h7m

If you have a DNS record for it, you are still not making it very difficult for scripts to find you.

If you put your ssh server or something on an uncommon subdomain how will these scripts find it?

If you are on @ or some common name sure, otherwise no.

ericpauley
0 replies
23h22m

This isn’t entirely true. Many scanners do preference specific IP ranges such as cloud providers. Cloud IPs receive substantially more scanning traffic than darknet IPs or even random corporate IPs.

ozim
0 replies
23h55m

No one is maintaining a list they just scan all. Scanning every IPv4 there is on a single port takes minutes.

icameron
1 replies
1d

Consider blocking port 22 except for a whitelist with your own IP. My ISP rarely changes my IP in practice, and when they do I can log into the hosting web admin panel and update the rule.
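
e.g. with ufw, where 203.0.113.7 stands in for your own IP:

```sh
ufw default deny incoming
ufw allow from 203.0.113.7 to any port 22 proto tcp
```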

petee
0 replies
1d

Just accept key-only logins, everything else becomes noise.

I also limit concurrent connections, which significantly reduces data usage during aggressive attacks

dotancohen
1 replies
1d1h

Access to your VPS is a great way to launch attacks on other machines, and adds another layer covering their tracks. Not to mention hosting malware to be downloaded elsewhere, and even a crypto miner.

mianos
0 replies
22h24m

I set up a honeypot once and logged the passwords of created accounts. I then used 'last' to find the incoming IP.

I then used ssh to try to connect to the originator (from an external box). I went back 5 jumps until I got to a Windows Server box on a well-known hosting service that I could not get into.

Lots of what looked like print servers, and what looked like Linux machines connected to devices. Maybe just the exploit at the time.

patja
11 replies
1d2h

Fail2ban doesn't help when the attacker/abuser rotates their IP address constantly. I now look for aberrations in a few http headers they often neglect to spoof in their attempt to act like an honest human.

boringuser2
7 replies
1d2h

These attackers don't have inexhaustible resources.

quesera
5 replies
1d2h

I've recently logged credential stuffing attacks coming from over 100K distinct IPv4 addresses to a small backend API.

Aachen
2 replies
1d1h

This interests me a lot as we do security but don't run a big service ourselves and so don't have data on what motivated attackers' behavior is exactly.

How many active users (to an order of magnitude; no need for precise numbers of course) does this service have? 100k IPs sounds pretty costly to burn, so I'm curious how important one needs to be before that's considered worth it.

And could you say what type of IP addresses those were? Did it look residential, such as from compromised computers (botnet), do they rent lots of IP addresses temporarily from aws/netcup/alibaba, or is it a mix such that neither category has the overwhelming majority?

If it's all server IP ranges for a service where end users normally log in, you could apply entirely different rate limits to those than to residential IPs, for example. Hence I'm wondering how these cred stuffing attacks are set up

quesera
1 replies
1d1h

It was a mix. Lots of apparent botnets (residential service scattered across many providers and networks), but also healthy chunks of colo/SaaS netblocks as well.

And of course a ton of open proxies.

Aachen
0 replies
18h12m

Thanks!

vntok
1 replies
1d1h

Rate-limit after x failed logins on either source IP, username and password. Just provide a realworld sidechannel escape hatch for legitimate users (ex: phone or email). Barely anyone will actually use it.

quesera
0 replies
1d1h

We had to go further. The attacks were from arbitrary IPs, and were working through a huge list of leaked username/password combinations.

The attacker even went to the trouble of spoofing some custom headers in our API client.

Our eventual solution was to a) attack our own auth credentials first, identify any users with leaked creds (from other services) and force a password reset for them. b) disallow users from setting common leaked passwords. c) make the auth checking request as low-cost as possible, and scalable separately from the main application. d) when an attack request is detected, bypass the relatively-expensive real cred check but return the same failure response (including timing) as a real failure. e) build a secondary requirement in the auth flow that can be transparently enabled when under high volume attack.

This works, so far. It sheds the volume to the application, and has low-to-zero impact on legit users. This took a couple weeks away from feature development though!

logifail
0 replies
1d2h

Indeed, but neither do you.

Be very careful that fail2ban isn't actually exhausting your own resources faster than you believe you're exhausting the attackers' resources...

https://www.google.com/search?q=fail2ban+resource+usage

mianos
0 replies
22h19m

But they don't. Most of the SSH attacks come from the same 10 addresses at China Telecom. Sometimes they don't change for months. Maybe it is some state-sponsored attack organisation, or maybe just carrier-grade NAT.

lofaszvanitt
0 replies
1d

When will people forget about fail2ban already? It's an old, cumbersome, useless tool.

finnjohnsen2
0 replies
1d2h

fail2ban has calmed attacks (ssh dictionary mostly) from 100-1000s per day to about a dozen, on my private rpi thing. I assume most attackers are looking for low hanging fruit with little (cheap) effort.

mkoryak
9 replies
1d2h

Sometimes I think it might be fun to setup an express server that correctly responds to one of these attacks just so I can waste someones time.

But doing that would also waste my time.

azinman2
4 replies
1d2h

It’s all automated; not you’re not really wasting an actual persons time.

pnw
3 replies
1d1h

It does waste their time if your honeypot is constantly responding with legitimate looking but fake credentials. Presumably the hacker is going to try to use them?

It’s the same idea used by anti-spam activists back in the day with software that would flood spam website forms with fake but realistic looking info, so the real data would be buried in the noise.

runeb
1 replies
16h6m

That part is also automated

sumtechguy
0 replies
4h39m

From what I understand, the automated part is sometimes a first pass, just to see if you are there and have something. They will then wait some period of time and come back later with the real attack, sort of like the war dialer from the movie WarGames: basically try everything, get a list of interesting targets, then go at them. Some are fully automated, though, and just try whatever exploits they have right then and there.

8organicbits
0 replies
8h0m

Exactly. Poison the attacker's list of credentials with bogus values. This decreases their efficiency, slowing them down.

cyberlurker
1 replies
1d1h
emmanueloga_
0 replies
1d

Ah, tarpit refers to a system that purposely slows down answers, while honeypot is a system that _looks like_ it's delivering the goods but it is just a trap.

I'm sure they mostly refer to the same thing, though.

--

https://en.wikipedia.org/wiki/Honeypot_(computing)

klysm
0 replies
1d2h

Honeypots are good fun

gnyman
0 replies
23h39m

It's mostly wasted time but I feel it's slightly more beneficial than playing video games (which I also do) so I do it for fun sometimes. [1] [2]

[1] https://infosec.exchange/@gnyman/109318464878274206

[2] https://nyman.re/super-simple-ssh-tarpit/
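
The general shape of such a tarpit is tiny. A sketch (port and delay are arbitrary, and this isn't necessarily what [2] does): SSH clients wait for the server's version banner, and the protocol allows the server to send other lines first, so you can dribble out junk forever:

    import socket
    import threading
    import time

    def tarpit(conn):
        try:
            while True:
                conn.sendall(b"x\r\n")   # anything not starting with "SSH-" keeps them waiting
                time.sleep(10)
        except OSError:
            conn.close()                 # the scanner finally gave up

    def main(port=2222):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=tarpit, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        main()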

nothis
8 replies
1d2h

Hyper-naive take: Couldn't nearly all of these attacks be blocked by a white-list approach, essentially hiding every file or directory from the internet except a very controlled list of paths and escaping all text sent so it can't contain code?

I somehow always imagine these types of hacks to be more clever, like, I dunno, sending harmless-looking stuff that causes the program receiving it to crash and send some instructions into unprotected parts of RAM or whatever. This all looks like "echo ; /bin/cat /etc/passwd" and somehow the server just spitting it out. Is that really the state of web security?

quesera
1 replies
1d1h

Couldn't nearly all of these attacks be blocked by a white-list approach, essentially hiding every file or directory from the internet except a very controlled list of paths and escaping all text sent so it can't contain code?

This is basically how things work.

For convenience, instead of itemizing each filename, the webserver root is a subdirectory and anything underneath is fair game; the webserver can also use the OS "chroot" facility to enforce this restriction. What you are seeing are ancient exploitation strings from 30 years ago that haven't worked on any serious webserver since that time, but a) keeping the test in the attacker's lib is essentially free, and b) there are some unserious webservers, typically in cheap consumer hardware.

Webservers pass plain text to the app server. It is the app server/framework's responsibility to understand the source of the request body and present it to the application in a clear way, possibly escaped. But the app needs to process this and sometimes, through poor coding practices, fails to respect the untrusted nature of the data. This again is more typical in historical systems and low-cost consumer products where software is not a marketing advantage.
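
The confinement part, written out by hand, is roughly this (a sketch of a hypothetical static-file handler, not any particular server):

    from pathlib import Path

    WEBROOT = Path("/var/www/html").resolve()   # assumed document root

    def resolve_request_path(url_path):
        """Map a URL path onto the filesystem, refusing anything outside the webroot."""
        candidate = (WEBROOT / url_path.lstrip("/")).resolve()
        # "/cgi-bin/../../../etc/passwd" resolves to /etc/passwd and fails this check
        if candidate != WEBROOT and WEBROOT not in candidate.parents:
            return None
        return candidate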

InitialBP
0 replies
13h55m

ancient exploitation strings from 30 years ago that haven't worked on any serious webserver since that time

Unfortunately, there are plenty of serious (business-critical) servers that _ARE_ vulnerable to these types of attacks. I find and remediate things like this all the time. One very common example I've seen of the `.env` issue is Django servers that are exposed to the internet with debug=True. There are probably thousands, if not tens of thousands, of servers leaking credentials this way on the internet right now.

Beyond that, companies often have internal systems that do not meet the same security standards that external systems require, and sometimes those systems get shifted around, maybe it's moved to a new subnet, maybe a third-party needs access and the CIDR range gets fat fingered in the firewall. Regardless - now that "internal system" is exposed to the internet with all the dangerous configuration.

Zetobal
1 replies
1d2h

Security through obscurity is like a ninja tiptoeing in a room full of laser beams; make one loud move and you'll reveal that your entire protocol hinges on no one sneezing!

akerl_
0 replies
1d1h

How is strictly controlling exposed server resources to only URIs you’ve confirmed should be exposed an example of “security through obscurity”?

scherlock
0 replies
1d2h

Yup, 99.999% are script kiddies running bots that look for unsecured servers or indicators for known exploits.

dartos
0 replies
1d2h

You’re probably right, but consider that not every person is even aware of the security risks of running servers.

Someone might be trying to play with self hosting or a contractor at a company did a bad job and accidentally exposed stuff they shouldn’t.

This attacker is likely just trawling lots of IPs hoping for low-hanging fruit that can be exploited with simple/well-known attacks.

Thorrez
0 replies
1d1h

This all looks like "echo ; /bin/cat /etc/passwd" and somehow the server just spitting it out. Is that really the state of web security?

It's attempting to exploit a vulnerability in bash that was discovered and fixed in 2014:

https://en.wikipedia.org/wiki/Shellshock_(software_bug)

InitialBP
0 replies
13h32m

Bit of a rambly reply:

There are different types of web security vulnerabilities, and the attacks you see from automated scanners are likely to be far less sophisticated than targeted web attacks. Specifically, these scanners are going to spam out widespread and common CVEs that might grant privileged access to the server or dump credentials in some fashion.

The more sophisticated attack you described is essentially an overflow, and most modern web servers are written in memory-safe languages, making it very unlikely to see that type of attack on the web. More often it's the underlying OS, servers, or communication stacks (Bluetooth, TCP, nginx, etc.) that have these types of vulnerabilities, since they are often written in low-level, non-memory-safe languages like C and C++.

Attacks that exploit the HTTP and HTTPS protocols are a little more interesting. Request smuggling lets you trick certain load balancers and webservers by sending an HTTP request "smuggled" inside another HTTP request.

Here is a blog post by James Kettle about some request smuggling vulnerabilities and the impact they can have: https://portswigger.net/research/http2

There's really a lifetime's worth of knowledge on web security and the type of stuff you see in scans is just trying to hit the low hanging fruit. Portswigger has loads of free challenges and information about different web security topics.

https://portswigger.net/web-security/all-topics

sph
7 replies
1d1h

SSH is constantly hammered by bots as well:

    % ssh example.com
    Last failed login: Sun Jan 28 16:59:35 UTC 2024 from 180.101.88.233 on ssh:notty
    There were 5385 failed login attempts since the last successful login.
    Last login: Sat Jan 27 13:33:30 2024 from xxx.xxx.xxx.xxx
5.3k failed attempts in ~30 hours. I know, I should be setting up fail2ban.

vldb7
0 replies
1d

I’m not a fan of fail2ban. A simple but quite effective approach is permitting remote login only from certain IP ranges. I know that it looks like a bad trade for self-hosted web apps, but it is very easy to set up on many cloud providers.

Also, I normally set up a jump host first: a smallest-size instance that only runs SSH, while everything else doesn't open its SSH port to the outside at all. One nice effect is having to search just one auth log if something about SSH looks concerning.

nerdponx
0 replies
1d

I set my SSH port to something with a high number that is not used by any other known service. Drive-by attacks dropped to 0.

miyuru
0 replies
1d1h

Most of my servers are IPv6 only and there are no failed ssh attempts on those. I install fail2ban just in case and firewall the IPv4 address, since I don't SSH via IPv4.

idoubtit
0 replies
1d1h

fail2ban is not very performant and it will only reduce the number of attempts. An alternative is to add an nftables rule (or iptables, or whatever firewall). Something like:

    table inet filter {
      chain input {
        type filter hook input priority 0; policy accept;
        # the rate limit has to come before the accept, or it never matches
        tcp dport 22 ct state new limit rate over 2/minute drop
        tcp dport 22 accept comment "Accept SSH"
      }
    }
But even with rate limiting, the logs are still polluted by auth attempts. Changing the port does little. The only solution we found was to configure port knocking (with direct access from a few whitelisted IPs).

emmanueloga_
0 replies
1d

In case you are using AWS, I learned that you can close port 22 on your EC2 instances, and connect trough the Systems Manager (SSM):

https://cloudonaut.io/connect-to-your-ec2-instance-using-ssh...

SoftTalker
0 replies
1d1h

fail2ban will reduce your log noise but it's another thing to manage, you can end up locking yourself out also, and if you're using good passwords (or better, public key auth) it's not really providing any additional safety.

Ideally you don't need ssh open to the whole world anyway, and can restrict it to a certain subnet or set of addresses. Then your attacks will drop to nearly zero.

NavinF
0 replies
1d1h

I should be setting up fail2ban.

No, that does nothing if you followed best practices and disabled password login. fail2ban is just another Denial of Service risk that has the added bonus of bloating your firewall table and slowing down all your new connections

velcrovan
6 replies
1d3h

I make a point of running fail2ban on my servers and will even add custom jails to catch attacks specific to the types of functionality I may be exposing on the site(s) hosted on them. But it’s been a long time since I checked whether fail2ban’s other defaults are still comprehensive enough to block the most common attacks. I guess I’ll bookmark this link for when I get around to doing that.

Palomides
5 replies
1d2h

if your systems have any of these easily targeted vulnerabilities exposed, fail2ban won't save you

velcrovan
2 replies
1d2h

fair point, the main reason I use fail2ban is to limit traffic from malicious activity rather than letting the attempts run rampant and unchecked.

geraldhh
0 replies
23h28m

caring about pointless attacks is more work and carries more risk

creeble
0 replies
1d

If it makes you feel good, do it. It can also cut down on log noise a bit, for when you’re really looking for something.

But in general, I’ve given up on caring about the routine “attacks” listed in all the logs. If you have good security, they don’t matter. And if you don’t, they don’t matter either.

jbverschoor
1 replies
22h35m

In addition, you can have nginx filters to check for simple patterns (php on a non-php site? -> instant ban). Too many 404s? -> instant ban.

BLKNSLVR
0 replies
17h0m

I have recently started using my own sledgehammer-subtle approach of detecting (what I refer to as) Uninvited Activity on any port not offering a service, and straight-up banning the source IP (indefinitely at the moment) from accessing any actual service ports.

Over the few months I've had it running I've needed to progressively create failsafes for IP addresses that I know are trustworthy so I don't lock myself out. I've also started tiering the importance of blocking based on different sets of ports which are being probed. I've also discovered that there's a significant amount of Uninvited Activity coming from "security" companies in their pro-active scanning of the entire IPv4 space - which I don't trust at all and ban with prejudice.

It's messy and I need to update and add many explanations, but it's on Github if anyone wants a laugh: https://github.com/UninvitedActivity/UninvitedActivity

(I'm aware of various limitations and footguns inherent in this un-subtle approach but, as another commenter elsewhere alluded to, "it makes me feel better". I also think that a fair bit of processing volume can be taken off IDSes if a heap of "known garbage" traffic is blocked up front - it's all about tiers).
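
The core of the approach fits in a few lines. A stripped-down sketch (not the linked project; the decoy port and the ban action are placeholders):

    import socket

    DECOY_PORT = 2323        # placeholder: any port with no real service behind it
    banned = set()

    def ban(ip):
        # placeholder for the real action, e.g. adding the IP to a firewall set
        print("banning", ip)

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen()
    while True:
        conn, (ip, _port) = srv.accept()
        conn.close()                     # we only wanted to see who knocked
        if ip not in banned:
            banned.add(ip)
            ban(ip)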

Erratic6576
5 replies
1d

I once got a message in my logs “you have been powned” :/

ijhuygft776
4 replies
21h37m

what did you do about it

Erratic6576
3 replies
21h17m

I don’t remember well. I guess I worried and stopped reading logs.

Aah. I diverted my domain to Cloudflare, allowing traffic in from only one country.

So instead of exposing my IP publicly through my registrar, I set the firewall to only allow traffic in from Cloudflare.

I would have loved to install PfSense as well but it was out of my budget

ijhuygft776
2 replies
19h50m

so, you, are, still, hacked.

Erratic6576
0 replies
4h59m

Impossible. My 3 servers are hackproof now. 100 % safe

8organicbits
0 replies
7h50m

I can visit http://example.com/i_hacked_you and it will be in the logs but no hacking has occurred.

iboisvert
4 replies
1d

As someone who knows very little about security, this is really interesting, thanks! A question though: how would one know if there has been a breach? These examples look relatively easy to detect, but I guess there would be more complex cases?

tamimio
0 replies
19h43m

IOCs, or indicators of compromise. But if you know little, it is always advisable to hire someone to go through it on demand or periodically, as there's no one trick to rule them all.

takemine
0 replies
14h10m

You can use honeypots that bait hackers. I am running a non-intrusive one where you put baits on your servers or laptop; when hackers see them, they'll try to use them.

macintux
0 replies
1d

I also know very little, but something that struck me upon reading your question: if a breach is successful, the logs can't be relied upon for detection/analysis if they're on the same server. It's important to ship them elsewhere.

kevindamm
0 replies
1d

This is why some people run a honeypot in their network... and even those won't necessarily catch everything if the honeypot only mimics services that the attacker isn't probing for. You can set up tripwires on access and egress of sensitive data but that's only part of the surface area (and if the system gets attacked those tripwires could be disabled, if the attacker either knows what to look for or has a plan for a side channel for exfiltrating data).

Really the only good answer is defense in depth: keep looking for any indicators of odd behavior, wall off unrelated systems entirely from each other, and keep the DMZ and public-facing bits as simple as possible.

emmanueloga_
4 replies
1d

WAF seems to be an essential piece for any website with even a little bit of visibility / traffic on the net. Some questions:

* Comparison of AWS WAF vs Cloudflare vs Others?

* Many services like EC2 charge for data transfer [1], so how much of your monthly/yearly hosting cost goes toward fending off scans like these? Does AWS count traffic blocked by the WAF toward the transfer limits?

--

1: https://aws.amazon.com/ec2/pricing/on-demand/

nickjj
2 replies
20h34m

If you're on AWS the AWS WAF is pretty low cost. You can expect to pay less than $10 / month and still get an ok amount of value on a decently popular site.

The problem is you have to manually configure a lot, the rate-limiting aspect is way worse than Cloudflare's, and while the AWS WAF can geolocate an IP address and block by country, it does not send the country code back to you in a header, whereas Cloudflare does. The last one stings because it's super handy to have an accurate country code attached to each request, especially when it's something you don't have to think about or spend I/O time on by calling out to a 3rd-party service in the background to backfill that data later.

emmanueloga_
1 replies
19h11m

This is helpful! I found some CDK libraries that allow connecting a load balancer or CloudFront to the WAF with a few lines of code. I'll give it a try! [1] [2]

--

1: https://github.com/awslabs/aws-solutions-constructs/tree/mai...

2: https://constructs.dev/search?q=waf&cdk=aws-cdk&cdkver=2&lan...

nickjj
0 replies
17h34m

Yep, that's one of the values of the WAF, it can be associated with your ALB which means you can match rules on headers, cookies, etc. after the traffic has been decrypted.

rfmoz
0 replies
23h47m

Fastly WAF - Signal Sciences, Distil Networks - Imperva, Akamai - StackPath: there are a few companies that started out alone in this ecosystem and found their evolution path with the major CDN networks, which had themselves failed to provide a good internal alternative service.

Human Security - PerimeterX, HAProxy WAF, and DataDome are other players aimed at different target audiences.

If you have good control over the exposed app, maybe you only need a WAF in the sense of stopping stuff outside your infra. The SQL injections, weird URLs trying to make their way to /etc/passwd, and related things look like relics of the past and mostly just make noise nowadays. The real issue is when someone hits you at a rate impossible to manage with your resources, or when it costs you more than the securing layer would.

aborsy
3 replies
22h3m

I’m looking for a secure authentication/proxy application to put in front of the webservers that have to be exposed to the internet. The application will authenticate the user with SSO or hardware key, before forwarding the traffic to the right internal address.

Cloudflare Tunnels are great, but CF doesn’t allow TLS passthrough: CF man-in-the-middles and decrypts the traffic.

So far I have looked into Teleport, authelia, authentik, and keycloak, perhaps combined with Traefik.

Any feedback on the level of security of these tools for being exposed to the public internet?

FuriouslyAdrift
1 replies
21h52m

Granted we are a Microsoft-based shop, but Microsoft Entra Application Proxy has worked out great for exposing our internal web based apps to the outside for mobile/home workers.

https://learn.microsoft.com/en-us/entra/identity/app-proxy/a...

aborsy
0 replies
21h24m

This seems to be very similar to the Cloudflare tunnels. They are reverse proxies in the cloud, with TLS termination. The traffic is terminated and scanned in the cloud.

https://learn.microsoft.com/en-us/entra/identity/app-proxy/a...

A version with end to end encryption will be great!

lukax
0 replies
22h0m

If your auth provider supports OAuth 2 / OIDC, you should check out oauth2-proxy. It's just one binary or a container sidecar.

https://github.com/oauth2-proxy/oauth2-proxy

woodruffw
2 replies
1d2h

The author says they aren’t a security person, so to correct a minor thing: the first examples are credential and configuration discovery, not directory traversal. The latter, to the best of my knowledge, is reserved for techniques where the attacker “escapes” the webroot or otherwise convinces the server to serve things outside of its normal directories.

asynchronous
0 replies
19h35m

It’s technically both if they’re performing an include on a file that wasn’t supposed to be hosted, e.g. “/../../etc/passwd”.

3abiton
0 replies
10h17m

Thanks for the clarification!

rsolva
2 replies
1d1h

If someone has experience with CrowdSec (a kind of crowdsourced fail2ban), I would like to hear your opinion!

espe
0 replies
2h10m

cuts out like 80-90% of the automated scans in our cases.

creeble
0 replies
1d

Don’t know CrowdSec but have used abuseipdb.com a bit to reduce log noise.

azinman2
2 replies
1d2h

Why protect the IPs of those trying to attack you?

bshacklett
1 replies
1d2h

Most of them are probably bots running without the knowledge of the IP owner. There’s little benefit to sharing those IPs with anyone other than the provider who owns them.

BLKNSLVR
0 replies
6h5m

Is there a potential benefit in wider sharing of malicious-traffic IP addresses, in that if more sites blocked these known-bad IP addresses there'd be a higher chance that a victim would notice something is wrong because their online services have started blocking them?

It feels to me like we're too polite, so we're letting the infected walk amongst us. It might not be their fault, but I'd be guessing it'd also be better for the victim to find out sooner rather than later if they're pwned.

I suppose the slippery slope / end game of this would be to concentrate all intentional malicious traffic onto VPNs and proxies and Tor and the like, with those becoming useless due to being blocked.

I block any IP address that's probed any of my ports they have no business probing. See https://news.ycombinator.com/item?id=39171782

yawaramin
1 replies
22h17m

Does anyone else have the experience that almost every single one of the 'attacks' is using HTTP and not following a 301 redirect to HTTPS? I have my internet-facing web server set up to redirect all HTTP -> HTTPS and this thwarts almost every 'attack'. I have to say, they're not very smart about it.

gary_0
0 replies
20h1m

I do that as well, but nothing gets through anyways because I'm not running Wordpress or a version of Apache from 2012. I also have nginx set up to reject connections without the expected 'Host' header, and that quickly bounces a few attackers. But many still end up getting all the way to a 404 or 400 for their attack URL.

urbandw311er
1 replies
23h7m

As a thought experiment - if there was public money to back this, would it be making us all safer to run a series of honeypot servers that automatically start DDOSing the various C&C servers that attempt to compromise them?

PeterisP
0 replies
21h2m

Such a response would generally be a crime; existing cybercrime legislation generally does not have any clauses permitting retaliation as "self defence", and also very often you'd be DDoSing an innocent third party, another victim whose compromised device is abused to route traffic, and also affecting their neighbors on the same network.

dangus
1 replies
1d1h

I would be interested in reading something similar but focused on less common ports and services, like game servers that run on high ports and more typically use UDP.

I wonder if that’s more or less of a “safe” situation.

jbverschoor
0 replies
22h32m

You could and should always monitor protocol errors (any protocol).

burgerquizz
1 replies
1d1h

I am wondering if there are any reports (on large-scale systems) about what hackers are looking for on servers?

CharlesW
0 replies
1d1h

Cloudflare's WAF managed rulesets changelog might be helpful for this: https://developers.cloudflare.com/waf/change-log

For example, they did an emergency update on 1/22 in response to CVE-2023-22527.

zetalemur
0 replies
21h1m

Interesting. Came to similar conclusions when analyzing my (httpd) access logs for https://turmfalke.httpd.app/demo.html ... but so far nothing really out of the ordinary.

wkjagt
0 replies
7h45m

There are quite a few examples in there of “the bad guy” trying to find files (like .env) that are accidentally left somewhere. From my PHP days I remember that indeed Apache/PHP just serves up any file from anywhere as a static file if it isn’t PHP. My memory is pretty vague on this and I don’t remember if this behaviour is configurable but I guess it must be. Having done mostly Ruby on Rails since, it feels so strange now that a web server can be some kind of a file browser into your code base. Am I remembering this correctly? Are there other languages/web servers that work like this?

tgrzinic
0 replies
8h2m

Nice article.

Btw, how do you make sure you are not hacked, or that some frivolous attacker didn't access something valuable?

tamimio
0 replies
19h49m

Yeah, same thing on the websites I have; even the personal one gets tens of these attacks on average, and again, it seems they are after .env and other PHP-related directories too. A big portion of these attacks comes from the Tor network as well.

takemine
0 replies
17h42m

Nice analysis! You should protect your infra to avoid this kind of scanning:

- Disable password login for SSH, use keys instead.

- Limit access to known IPs (with a managed vpn)

- Use Cloudflare: Their WAF is really good

- Forward logs to another service that can analyze them (Datadog is nice)

Shameless plug: I started a small honeypot service [1] if anyone needs it as a last resort to catch hackers in your servers. Feedback appreciated!

[1] https://hackersbait.com

qaq
0 replies
22h28m

Tangentially, the sad thing about the whole cybersec space is that well-resourced APTs, like say APT-29 (Cozy Bear), have enough resources to actually run labs where they deploy all the top endpoint solutions and validate their offensive tools against them.

pid1wow
0 replies
20h12m

directory traversal

The correct term is directory enumeration. Traversal usually means something about ../../

oars
0 replies
17h26m

Great post, thanks for sharing.

m0rissette
0 replies
23h27m

Man people don’t use shadow anymore and expose creds through /etc/passwd; craziness

foofie
0 replies
1d2h

I also check the access logs collected by my self-hosted services, and I think there's a detail that's conspicuously absent from this analysis: the bulk of these malicious requests are made by regular people running plain old security scanners that are readily available from sites such as GitHub. These are largely unsophisticated attacks that consist of instances of one of these projects just hammering a server without paying any attention to responses, or even to whether they have been throttled or not.

Some attacks don't even target the IP and instead monitor a domain and its subdomains, and periodically run the same scans from the exact same range of IP addresses.

For example, on a previous job we had a recurring scan made over and over again from a single static IP address located in Turkey that our team started to refer to it as "the Turkish guy", and our incident response started featuring a preliminary step to identify weird request patterns that was basically checking if it was the Turkish guy toying with our services.

elendee
0 replies
17h55m

autoGPT thanks you for this lucid writeup

daneel_w
0 replies
19h59m

Curious enumerations of the most common items. There's definitely a topical bias. By far the most common attack attempt I see on all the various webhosts I administer is WordPress-oriented (despite WordPress not being present on any of the hosts), which doesn't even get an honorable mention by the author. Perhaps he hosts WordPress content and didn't discern attacks from legitimate traffic.

cantSpellSober
0 replies
1d2h

the user agents for these attacks mention Mozlila/5.0

The linked write up on why is interesting, finds attacks for many of the same files, and recommends blocking the user agent.

SebFender
0 replies
23h19m

Interesting, well prepared and presented - a necessary read for many.

Adachi91
0 replies
14h11m

I always find it fun to respond in strange ways to "malicious" requests on all my webservers, an idea derived from a DEF CON talk [0] that I watched a while ago. There's a lot of great fun to be had in honeypots and other things of that nature.

[0] https://www.youtube.com/watch?v=4OztMJ4EL1s