
What You Get After Running an SSH Honeypot for 30 Days

BLKNSLVR
76 replies
11h45m

I self-host a (non-critical) mail server and a few other things, and occasionally look at live firewall logs, watching the constant flow of illegitimate traffic hitting random ports all over the place: some of it hits legitimate service ports, but the rest just probes basically anything and everything. I decided to set up a series of scripts that detect activity on ports that aren't open (and therefore have no legitimate reason to receive traffic) and block those IP addresses from the service ports since the traffic source isn't to be trusted.
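The actual scripts are in the repo linked below; as a minimal sketch of the idea, here is the detection half in Python. The log line format is a hypothetical iptables-style `SRC=... DPT=...` line, and `OPEN_PORTS` is illustrative; real firewall log formats vary.

```python
import re
import ipaddress

# Ports we actually serve; traffic to anything else is uninvited.
OPEN_PORTS = {25, 80, 443}

# Hypothetical iptables-style log line; adjust the pattern to your firewall.
LOG_RE = re.compile(r"SRC=(?P<src>[\d.]+) .* DPT=(?P<dpt>\d+)")

def uninvited_sources(log_lines):
    """Return the set of source IPs that probed ports we don't serve."""
    offenders = set()
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        if int(m.group("dpt")) not in OPEN_PORTS:
            offenders.add(ipaddress.ip_address(m.group("src")))
    return offenders
```

The resulting set would then be fed into an `iptables`/`nft` blocklist by a separate step, which is where the real scripts do their work.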

Something that came out of analysis of the blocked IP addresses was that I discovered a few untrustworthy /24 networks belonging to a bunch of "internet security companies" whose core business seems to depend on flooding the entire IPv4 space with daily scans. Blocking these Internet scanner networks significantly reduced the uninvited activity on my open service ports. And by significantly I mean easily over 50% of unwanted traffic is blocked.

Network lists and various scripts to achieve my setup can be found here: https://github.com/UninvitedActivity/UninvitedActivity

Internet Scanner lists are here: https://github.com/UninvitedActivity/UninvitedActivity/tree/...

Large networks that seem responsible for more than their fair share of uninvited activity are listed here: https://github.com/UninvitedActivity/UninvitedActivity/tree/...

I'm semi-aware of the futility of blocking IP addresses and networks. I do believe, however, that it can significantly reduce the load on the next layers of security that require computation for pattern matching etc.

Be aware: there are footguns to be found here.

TacticalCoder
41 replies
5h53m

One thing I do is blocklist entire countries' and regional ISPs' CIDR blocks. Believe it or not: straight to firewall DROP.

China, North Korea, so many African countries whose only traffic is from scammers, tiny islands in the Pacific that are used for nothing but scamming...

Straight to DROP.

And I do not care about the whining.

nequo
17 replies
5h42m

I assume you don’t host anything that could be useful to the 1.5 to 2 billion people that you’re blocking.

luma
16 replies
5h33m

Or they host a business site that doesn't do business in those countries and so nothing of value is lost to them. For example, it's literally illegal for me to accept payments from .ru, so why bother wasting their time and my bandwidth?

ajsnigrutin
15 replies
4h31m

I live in the EU, and a bunch of American sites just block the whole EU due to GDPR laws.

Then someone in the US uses my email by accident to subscribe to some newsletter (not the first time; I also get personal emails for that person, since our addresses differ by just one letter, and I'm guessing it's someone old, considering the emails I get). I try to click "unsubscribe", and it just redirects me to a "<site> is unavailable in the EU, blah blah" page, without unsubscribing.

I make sure to report that site to every goddamn spam list possible.

DEADMINCE
12 replies
3h23m

a bunch of American sites just block the whole EU due to GDPR laws.

Which is incredibly reasonable. If the EU didn't try to claim EU law applies globally, those sites might still be up.

DEADMINCE
7 replies
3h2m

That situation is quite different. The US is using its significant power and weight to coerce those non-US banks into compliance with FATCA. Those banks don't have to comply, but if they want to do business with the US and US companies, then they don't have much of a choice.

It's not like they just made a law and now insisted it applies globally, which is what the EU did.

belk
2 replies
2h46m

It's effectively the same: small banks just shove you out of the building and refuse to open a bank account for you if FATCA applies to you. Their compliance consists of simply not accepting US taxpayers.

This is a real issue that leaves US citizens only able to open accounts at bigger banks (with shittier services, but enough budget to hire a FATCA compliance department).

DEADMINCE
1 replies
2h24m

it's effectively the same

Nope. Not even close.

Practically the GDPR law has no teeth at all because its claim of extraterritorial jurisdiction is nothing but nonsense.

FATCA applies because the US has a carrot or stick to enforce it.

Also, the US law as written is entirely reasonable and doesn't try to claim the law applies to US citizens anywhere in the world.

shkkmo
0 replies
19m

US law as written is entirely reasonable and doesn't try to claim the law applies to US citizens anywhere in the world.

It absolutely does.

The USA has laws that govern what its own citizens do abroad. For example, you aren't allowed to have sex with minors or pay bribes while abroad.

The USA also recently passed a law that allows it to prosecute foreign officials who solicit bribes from USA entities. https://www.ropesgray.com/en/insights/alerts/2023/12/us-cong...

mratsim
1 replies
2h48m

Why is it different?

People don't have to comply with the GDPR, but if they want to serve EU folks, then they don't have a choice.

DEADMINCE
0 replies
2h24m

The EU claims their law applies globally, regardless of whether people set foot in or do business in the EU. According to the EU, an EU citizen just needs to visit a site and the law applies, regardless of where the site is hosted.

According to the EU, the GDPR applies to some small shop owner in China with a website that harvests all the data it can, even though he isn't advertising in the EU, courting EU citizens in any way, doing any business with the EU, etc.

echoangle
1 replies
2h49m

Isn’t it actually exactly the same? The website doesn’t have to comply (and many don’t), but if they want to do business in the EU, they have to. How is that different?

DEADMINCE
0 replies
2h26m

No, it's not remotely the same.

The US is using the fact that people want to do business with them to coerce compliance, and as written the law only applies to US persons.

The EU claims the GDPR applies globally, regardless of whether people want to do business with the EU, or ever set foot in the EU. It's amusing nonsense.

3836293648
1 replies
2h37m

What? No

Claiming jurisdiction by server location is the stupidest thing ever if you're trying to have any kind of customer protection laws. You have to go by customer location.

However, the claim that they have jurisdiction over EU citizens abroad is very questionable.

DEADMINCE
0 replies
2h21m

Claiming jurisdiction by server location is the stupidest thing ever if you're trying to have any kind of customer protection laws. You have to go by customer location.

I disagree, because that's impossible. That's why the EU's attempt is largely a joke. Literally: it seems to get mocked a lot whenever I try reading up on the credibility and practicality of what they claim.

However, the claim that they have jurisdiction over EU citizens abroad is very questionable.

It's the claim that they have jurisdiction over non-EU citizens and businesses in their own countries which is so laughable.

rapind
1 replies
3h58m

IMO replying unsubscribe should always work for marketing emails and if it doesn’t then I flag the email as spam. Nope, I’m not going to visit that tracked / info gathering unsubscribe link.

dheera
0 replies
17m

I only use unsubscribe links from things I voluntarily and willingly subscribed to.

If I was involuntarily subscribed to something, or subscribed because of an inconspicuous "subscribe me" checkbox that I probably didn't notice, including from a legit business that I purchased an item from, it's getting reported as spam in Gmail.

mmsc
9 replies
3h18m

Had a travel insurer do this, and when I was in hospital in Asia I couldn't start a claim and the hospital nearly kicked me out. I'm sure the sysadmins thought it was a great way to reduce hacking attempts by blocking Asia.

boredtofears
5 replies
2h46m

That’s awful but why is the onus on random sys admins around the world to deal with this correctly and not the government hosting the problem entities?

krsdcbl
1 replies
15m

If the government in question is supportive of said problem entities, they won't "deal" with it.

If the government in question has free rein to regulate said traffic, it's an avenue for repression and censorship.

Otherwise it's a legal matter to seek action against such entities, which is already how it works

(... but I'm afraid we're actually mostly talking about "scenario 1 entities" here, which makes it futile to seek action from the very offices that already play a role in making it harder to use existing legal means)

bobthepanda
0 replies
7m

And it’s not like we will invade countries to stop spam calls, although China is probably the closest to getting to that stage given that the scam centers in Myanmar seem to be a deciding factor in who they throw their support behind: https://www.theguardian.com/world/2024/jan/31/myanmar-hands-...

kjkjadksj
0 replies
2h6m

Government needs lobbying to act

belk
0 replies
2h41m

That's like asking why we don't expect burglars not to burgle. They won't, but that doesn't mean walling off a whole neighborhood is the solution either.

AJayWalker
0 replies
2h40m

I would say because it’s their job to serve their customers, even if they’re abroad? Especially for a travel insurance company.

lopkeny12ko
0 replies
4m

Ironic that GP commenter said "I do not care about the whining" about regional IP blocks and the first reply is just someone whining about it.

dahart
0 replies
47m

If there’s one single business that I might expect to honor traffic from foreign countries, it would be the travel industry. I can suddenly envision using a VPN routed through Asia to check that a travel agent’s site is accessible before purchasing.

O5vYtytb
0 replies
48m

That's so remarkably stupid for travel insurance, it's unbelievable.

grishka
5 replies
1h51m

As a Russian, I hate it when people do this. It's extremely annoying when you just click some random interesting-looking link from HN or Reddit or Twitter only to be greeted by a 403 or a connection timeout. Then you turn your VPN on, and magically, it loads just fine.

mistrial9
2 replies
1h42m

People here are not thinking in whole systems. Roads have a dual purpose: there is security AND there is trade. A world without trade is a poor world, and that includes the intellectual arts, civilian institutions cooperating, and common issues like climate.

The voices here that say "I block everyone, don't bother me with your whining"... it is a security practice, OK. But security is not the whole story of civilizations; obstinate thinking leads to ignorance, not evolution.

To stay on-topic: the subject is SSH, an administrative and secured form of access, so yes, security applies.

grishka
1 replies
1h22m

Of course one can obfuscate and secure their own SSH access as much or as little as they want. Run sshd on a different port, require port knocking, ban IPs after failed login attempts, all that kind of stuff.

I'm, however, specifically talking about public-facing services like HTTP(S), which also get blocked with this "I'll just indiscriminately blacklist IPs belonging to countries I don't like" approach.

phsau
0 replies
1h9m

Malicious traffic is not limited to SSH and comes from the same usual suspects. Automated attacks against web applications are constant. I wouldn't say it's indiscriminate; it's practical.

__turbobrew__
0 replies
1h17m

For many services, the expected value of letting people from Russia access their service is negative. The reality is that Russia contributes a large portion of hacking attempts while providing very little to no revenue for the service. At the end of the day it is just business, and sometimes letting countries access your service is bad for the bottom line.

NicoJuicy
0 replies
27m

Had a reddit clone. The amount of Russian spam coming in was nuts.

Blocking the Russian language blocked all the spam. And since the site didn't have Russian users, it was an easy choice to make.

ajsnigrutin
2 replies
4h27m

Personal page.. sure.

Business? You're a pain to many people and don't care.

I live in the EU, and many US pages just block the whole EU due to GDPR laws... then someone (by mistake) subscribes me to their newsletter, and the "unsubscribe" link leads to "this page is unavailable in the EU"? I'll goddamn make sure your domain ends up on every goddamn possible antispam filter I can find.

cdelsolar
0 replies
4h2m

Why? Are they spam pages?

DEADMINCE
0 replies
3h22m

I'll goddamn make sure your domain ends up on every goddamn possible antispam filter I can find.

Honestly, individuals can't really do much to change the reputation of a domain.

Maybe petition your representative to adjust the GDPR so they don't claim it applies globally?

tiahura
1 replies
5h17m

The Biden administration needs to explain why they allow ISPs to import data from these countries.

hahajk
0 replies
4h35m

I'm not sure I understand what you're suggesting. Are you saying that the US govt should make it illegal for people in its borders to communicate with people in those countries?

DEADMINCE
1 replies
3h24m

That's very computationally inefficient.

aforwardslash
0 replies
2h18m

You can trivially maintain a blocklist spanning the whole IPv4 space by using a bitmap.
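The arithmetic works out to one bit per address: 2^32 bits = 512 MiB for the entire IPv4 space, with O(1) membership tests. A minimal sketch (whether a flat 512 MiB allocation counts as "trivial" depends on your box):

```python
import ipaddress

class IPv4Bitmap:
    """One bit per IPv4 address. The full space needs 2**32 bits = 512 MiB:
    a big but flat allocation with constant-time add/lookup."""

    def __init__(self, total_addresses=2**32):
        self.bits = bytearray(total_addresses // 8)  # zero-initialized

    def _index(self, ip):
        n = int(ipaddress.IPv4Address(ip))
        return n >> 3, 1 << (n & 7)  # byte offset, bit mask within the byte

    def add(self, ip):
        byte, mask = self._index(ip)
        self.bits[byte] |= mask

    def __contains__(self, ip):
        byte, mask = self._index(ip)
        return bool(self.bits[byte] & mask)
```

For sparse blocklists a set of integers or a radix trie (as used by actual routing tables) is far more memory-efficient; the bitmap only wins when the list is huge and lookups must be branch-free.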

pgraf
14 replies
10h58m

Just be aware that with your strategy, “blocking 50% of unwanted traffic” means blocking non-attack traffic, as these Internet security companies are mostly legitimate. The automated attack traffic that you actually want to block is in the other half and will frequently change IPs.

BLKNSLVR
7 replies
10h20m

these Internet security companies are mostly legitimate

This is both subjective and highly dependent upon the scope of services being run. My setup would probably create progressively more hassle than it saves as you move up the scale from small business to large business. For the setup I have, I quite specifically want to block their traffic.

I'm possibly overly militant about this, but they keep databases of the results of their scans, and their business is selling this information to ... whoever's buying. I don't want my IP addresses, open ports, services or any other details they're able to gather to be in these databases over which I have no control and didn't authorise.

To steal an oft-used analogy, they're taking snapshots of all the houses on all the streets and identifying the doors, windows, gates, and having a peek inside, and recording all the results in a database.

I believe all of them are illegitimate. They 'do' because they can, and it's profitable. "Making the internet safer" is not their raison d'être.

Happy for anyone else to form their own opinion, but this is my current stance.

appstorelottery
6 replies
9h0m

Would be cool to have a "don't scan me bro" list of IP's that engage in this that we could share - is there such a thing?

BLKNSLVR
3 replies
8h38m

The problem is that becomes a concentrator of IPs behind which privacy conscious individuals exist, which probably has higher value to "whoever's buying". It's a conundrum.

yesbabyyes
2 replies
7h41m

It sounds like what GP is suggesting is to collect ips of all the scanners, and share the list of ips among ourselves, so we can collectively route their traffic to /dev/null.

kjkjadksj
0 replies
2h3m

Why not also sell the scans of the scanners to the scanners' customers and make a little pocket change?

BLKNSLVR
0 replies
7h27m

aaaaah, that makes sense. See the links in my original post.

dataflow
0 replies
8h22m

You're being sarcastic, right? We did this for telephone numbers and saw how it turned out...

wl
1 replies
4h56m

My experience is that after blocking Censys, unwanted traffic on non-standard ports from other IP blocks has basically gone to zero. It appears to me that some bad actors are using Censys scans for targeting.

rolph
0 replies
3h44m

I get similar results.

nubinetwork
1 replies
9h19m

these Internet security companies are mostly legitimate

Act like a bot, get treated like a bot.

Just be aware that with your strategy “blocking 50% of unwanted traffic” means blocking non-attack traffic

You don't block them forever, just enough for them to move on to someone else.

slt2021
0 replies
1h57m

They don't move on to someone else; they scan the entire internet on a regular basis, just like Google crawls web pages.

moffkalast
0 replies
9h12m

Lol legitimate. As legitimate as door to door salesmen. OP just put up a proverbial "no soliciting" sign.

chipdart
0 replies
8h44m

(...) as these Internet security companies are mostly legitimate.

Note that you're basing your assertion about the motivation of random third parties exclusively on the fact that they exist and are actively scanning for vulnerabilities.

cranberryturkey
9 replies
10h22m

Just install fail2ban.

WhackyIdeas
3 replies
8h5m

For SSH, changing to a random port number resulted in zero connection attempts from bots for months on end. It seems bots just never bother scanning the full 65535 port range.

dizhn
2 replies
7h51m

For most of my VMs there's no SSH running; I use WireGuard to connect to a private IP. I haven't done this on the bare metal yet, but I might. Though, barring exploits like the one we had recently, nobody is getting into a server secured with either strong passwords or certificates. Fail2ban, in my eyes, is a log cleaner. It's not useful for much else.

cranberryturkey
1 replies
5h48m

it bans the bad ips, isn't that worth running?

thfuran
0 replies
47m

But what does that actually accomplish?

speleding
2 replies
5h36m

A server with fail2ban can be DOSed by sending traffic with spoofed IP addresses, making it unavailable to the spoofed IP addresses (which could be your IP, or the IP of legitimate users).

That is typically a bigger problem than polluting your logs with failed login attempts.

CreatedAccount
1 replies
5h8m

What would spoofing the IP of a packet when the underlying protocol requires a two-way handshake accomplish?

ajsnigrutin
0 replies
4h25m

With CGNAT, a prepaid sim card and some effort, you can make them block a whole legit ISP in a few days without spoofing anything.

hypeatei
1 replies
3h20m

fail2ban is another layer which is susceptible to abuse and vulnerabilities. It might keep noise out of your logs but at a huge cost. I'd rather just change the SSH port to something non-standard and write it down.

gnuser
0 replies
36m

Add port knocking to that, and that's how I do it. nftables ftw.

k8sToGo
3 replies
11h18m

Have you considered using crowdsec?

teruakohatu
1 replies
8h34m

Are there any downsides to crowdsec?

snorremd
0 replies
7h42m

You end up sharing signals (IPs) to their crowd-sourced bad IP databases, but only get 3 free IP lists on the free plan. To get some of the bigger IP lists you need an enterprise plan at $2500 a month.

Essentially they use the free customers to build the lists that drive their enterprise sales, which is fair enough as you get to use their free dashboard and open source software. But to me it seems they're really only targeting enterprise customers as a business.

BLKNSLVR
0 replies
11h6m

I set it up in a fairly superficial way, and there are only a handful (two or three) rules that can be applied on the free tier, and I'm a tight-ass.

It's still running, but it doesn't seem to block much - but that might be because I didn't put enough time into "doing it properly".

shaky-carrousel
1 replies
7h15m

Good idea. What I do is disallow password login on my SSH server, and permanently ban any address that tries to log in using a password.
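The detection half of this policy can be sketched by grepping sshd's log output. The log wording below follows typical OpenSSH "Failed password" lines, but treat the exact pattern as an assumption; feeding the resulting IPs into a firewall set is left out:

```python
import re

# Typical OpenSSH log lines for password attempts look like:
#   "Failed password for root from 203.0.113.5 port 4242 ssh2"
#   "Failed password for invalid user admin from 203.0.113.5 port 4242 ssh2"
# Exact wording varies by version; verify against your own auth log.
PASSWORD_ATTEMPT = re.compile(
    r"Failed password for (?:invalid user )?\S+ from (?P<ip>[\d.]+)"
)

def password_offenders(log_lines):
    """IPs that tried password auth at least once: candidates for a permanent ban."""
    return {m.group("ip") for line in log_lines
            if (m := PASSWORD_ATTEMPT.search(line))}
```

With `PasswordAuthentication no` set, any IP this catches was by definition probing, so a permanent ban carries little false-positive risk beyond the CGNAT caveat raised elsewhere in the thread.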

BLKNSLVR
0 replies
6h59m

I use a bastion host on a VPS as the only source IP address allowed to ssh into my systems, so any attempts to connect to ssh (from any IP address other than the bastion) are both blocked and logged into "the list" to be blocked from connecting to any other service ports.

nilsherzig
1 replies
7h48m

Try running some of your blocked IPs through GreyNoise; they usually have some interesting information about them.

BLKNSLVR
0 replies
7h4m

Thanks for the tip. Looks like greynoise use ipinfo.io for IP metadata.

I use https://www.abuseipdb.com/ for any manual IP address checks, and https://hackertarget.com/as-ip-lookup/ for finding what ASN an IP address (range) is a member of. I'll check out greynoise and see what extra info may be provided.

tomxor
0 replies
3h17m

and block those IP addresses from the service ports since the traffic source isn't to be trusted

Don't get me wrong, I want to do the same, I run a lot of servers and see all the automated nonsense aimed at public servers. However, you should consider the fact that today blocking an IP is akin to blocking a street, a village or sometimes even a town. For ~better or~ worse we now live in the age of CGNAT.

If your threat model and use case means you only care about a known subset of users with static IPs who are lucky enough to not share IPs then fair enough; but if you are running services intended for wide spread consumption you are likely blocking legitimate users without even knowing it.

poikroequ
57 replies
12h40m

I once tried hosting a web server at home by exposing ports 80 and 443 to the Internet. Hours later I reviewed the logs, thousands of attempts to hack into my lil Linux server. It spooked me to say the least, so I switched to using cloudflare tunnels instead.

Exposing ports on the Internet is dangerous, especially SSH. You're much safer using a proxy or gateway of some sort, or better yet a VPN if it doesn't need to be publicly accessible.

waingake
19 replies
12h31m

Is it, if you've got `PasswordAuthentication` disabled, only allow public-key logins, and keep your system up to date? Honest question.

I self host my email ( docker-mailserver ) and host my personal website on an old laptop with a static IP. Have done for years now without issue.
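For reference, the key-only setup described above boils down to a few `sshd_config` directives (a sketch; validate with `sshd -t` before reloading):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no   # formerly ChallengeResponseAuthentication
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```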

Beijinger
7 replies
11h10m

"I self host my email "

Is this still possible? Are your emails getting delivered?

Downvoted. I don't know when the downvoter last tried to "host their own email". Yes, DMARC, DKIM and SPF. Good luck trying to get your email delivered to t-online or something.

https://forum.hestiacp.com/t/t-online-curious-story-about-th...

They may even check if your domain has an "imprint". I kid you not. I use my own domains too, but I piggyback with infomaniak.com

johnklos
1 replies
10h45m

Good luck trying to get your email delivered to t-online or something.

People who say it cannot (or should not) be done should not interrupt those who are doing it.

The dismissiveness is likely why you were downvoted, I'm guessing. Implying that because it's hard for you, nobody else could be doing it, isn't a good look.

Self hosting email isn't that hard, and there are many solutions for all sorts of self hosting issues. That's a topic for another discussion, though.

Beijinger
0 replies
10h38m

"Self hosting email isn't that hard." Self hosting is super easy; getting your emails delivered is hard. And I am not even talking about the spam folder here (see the t-online example).

Smart comment from reddit:

"The problem with selfhosting email, unlike selfhosting services like Jellyfin or Nextcloud, is that you rely on other people's servers to play ball with you, but they often don't. Or they play for a while and then suddenly decide not to without telling you. It's unpredictable and we selfhosters don't have enough control over that."

This describes it pretty well.

pja
0 replies
10h23m

Is this still possible? Are your emails getting delivered?

Mine are. Although it probably helps to have a static IP with a 25 year long clean history.

Are there very occasional glitches? Sure. But I've seen ISPs drop everything from GMail on the floor for no obvious reason. I've seen GMail drop GMail email before. Same for every other large email provider.

To date I haven't seen any reason strong enough to push me to switch to a centralised email host. That day may yet come of course.

hggh
0 replies
7h6m

Is this still possible? Are your emails getting delivered?

Yes and yes (if DMARC/DKIM/SPF configured correctly).

gsich
0 replies
7h50m

yes and yes.

Selfhost does not imply residential IP.

cherryteastain
0 replies
8h32m

I do it too and can deliver to Gmail/Office365 etc. addresses no problem.

A1kmm
0 replies
6h35m

I self-host my email, and have not really had problems delivering normal quantities of personal email (except a bit of pain for Microsoft to accept mail in the first place, but it can be sorted quickly) - as long as you do DMARC / DKIM / SPF.

I've never heard of t-online before or tried to send an email there to my knowledge... if one provider I've never heard of would refuse to accept my mail if I ever sent something to them, that's more of a them problem than a me problem - but it certainly isn't the norm for other providers.

Beijinger
5 replies
11h9m

"PasswordAuthentication disabled" not sure I can even do this on my shared BSD server. I have ssh access via pw and need it. Is this really dangerous?

sneak
1 replies
10h4m

Yes. Authenticating with passwords is obsolete and dangerous. Use keys and disable password auth.

tpoacher
0 replies
9h42m

And if you really like passwords, you could always enable both, too!

johnklos
0 replies
10h54m

It is, if for no other reason than you never know when some other user has a guessable password. You should switch everyone to ssh keys. It's a good excuse to learn :)

fragmede
0 replies
3h47m

How good is your password? If it's long, with special characters, it's fine. Install fail2ban. The problem with auth keys is that you can't get into the server if you don't have your laptop/phone/NFC device, e.g. because you got pickpocketed or mugged.

Scramblejams
0 replies
10h53m

Yes, it's risky to accept password auth if someone sharing the box with you has a poor password. They could do things like:

. Install a spam or brute force password bot, which could get the machine kicked off its internet connection (in addition to whatever havoc it causes first)

. DoS the server by filling up the disk or using too much RAM (are quotas enforced?)

. Exploit a local vuln to get root, if such exists on that box. (Is the kernel promptly patched and the box rebooted?)

. Explore other users' directories (are permissions locked down correctly across users?)

…and more thrilling possibilities!

Embrace key auth. Future you will thank you.

pkrotich
4 replies
12h22m

The keyword is diligently keeping your system up to date! That said you’ll still have exposure to zero day vulnerabilities and DOS attacks.

Fabricio20
1 replies
11h15m

But an attacker with one of the biggest vulnerabilities on earth (hell, ssh noauth 0day) would very likely use it against big cloud providers and infrastructure (isps and others) and not burn it on your home server! Keeping it reasonably up to date with your distro's cycle is probably enough for most people doing this home server thing.

So of course, as things always are with security this is a matter of risk assessment and understanding your attack surface, a server with only public key and maybe on a special port goes a very long way, add fail2ban on top and i'd say it's probably fine for quite a while.

But that does make me think... what if... a wormable noauth 0day like that on ssh or some other popular system... how fast could it replicate itself to form the biggest botnet.. how long would it take, to take over all visible linux servers on the internet (so that your little home box ends up being a target)?

I guess at that point you are limited by bandwidth, but since you can scale that with every compromised server... hope someone does the math on that one day!

rcxdude
0 replies
8h50m

IPv4 is only 4 billion addresses. It doesn't actually take very long to just try all of them. If you're running a service exposed to the internet and it has a published exploitable vulnerability, it's just a matter of time before it gets exploited. (That said, that time does give a little buffer for patching.)
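The point is easy to quantify with back-of-envelope arithmetic. The probe rates below are illustrative, not measured; fast single-machine scanners are known to manage millions of packets per second:

```python
ADDRESSES = 2**32  # ~4.3 billion IPv4 addresses

# Time for one probe of every IPv4 address at a few illustrative rates.
for rate_pps in (100_000, 1_000_000, 10_000_000):
    hours = ADDRESSES / rate_pps / 3600
    print(f"{rate_pps:>10,} probes/s -> {hours:7.2f} hours for a full sweep")
```

Even at the slowest rate shown, a full sweep fits comfortably inside a day, which is why "nobody will find my box" is not a defense.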

aadhavans
14 replies
12h36m

Out of curiosity, what are the ramifications of exposing ports 80 and 443? Can these ports even be 'hacked'?

It doesn't seem terribly unsafe to me, especially if you're serving static pages.

koito17
9 replies
12h28m

In my experience, most of the noise on my web server comes from bots with spoofed iPhone or Google Chrome user-agents. I see three kinds of traffic patterns.

1. bogus /wp-login.php requests, or endpoints of presumably insecure wordpress plugins. These bots are pretty dumb and do it non-stop, even if the server constantly responds with a 404

2. testing recent Apache vulnerabilities by POST-ing to something like /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh . Even if your web server clearly communicates that it's not Apache, the bots still insist on testing Apache vulnerabilities. They also occasionally test vulnerabilities that exist in ancient Nginx versions.

3. less common, but bots that exist to scrape something from the internet. I remember two years ago seeing a bot whose sole purpose was to document as many registered, valid domain names as possible (I found out about this since they linked a website explaining who they were in their user-agent string)

Overall, I would say the background noise of HTTP servers is tame compared to what you see for SMTP servers and, to some extent, SSH servers. I happen to also self-host e-mail; logs record failed login attempts about every second. They always pick a username like "admin" or "adm". There's also people who try using your SMTP server as a relay for spam.

hyperman1
4 replies
11h29m

I've added a /wp-login.php and friends that firewall-blocks the IP of the requester for a week. It greatly cuts down the bot noise.
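A minimal sketch of such a tripwire using only the Python standard library (this is not the commenter's actual script; the nftables set name `banned` and the one-week timeout are assumptions, and the set would need to exist already):

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Paths that no legitimate visitor of a non-WordPress site ever requests.
TRAP_PATHS = {"/wp-login.php", "/xmlrpc.php", "/wp-admin"}

def is_trap(path):
    """True if the request path (query string stripped) is a tripwire URL."""
    return path.split("?", 1)[0] in TRAP_PATHS

class TripwireHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if is_trap(self.path):
            ip = self.client_address[0]
            # Hypothetical nftables set created beforehand with:
            #   nft add set inet filter banned '{ type ipv4_addr; flags timeout; }'
            subprocess.run(["nft", "add", "element", "inet", "filter",
                            "banned", f"{{ {ip} timeout 7d }}"])
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello\n")

# To serve: HTTPServer(("", 8080), TripwireHandler).serve_forever()
```

As the reply below this comment points out, such a tripwire can be weaponized against your legitimate visitors, so weigh that before deploying anything like it.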

immibis
1 replies
11h23m

My competing site can have <img src="https://yourdomain/wp-login.php"> and customers won't be able to view your site after that. Thanks for the free customers!

sweetjuly
0 replies
8h11m

Yep :) The real trick is to not be vulnerable to known issues, and then mitigate post-compromise like crazy on the off chance you get patch gapped or (very unlikely) zero dayed.

Blocking IP addresses is extremely silly, especially in an IPv6 world where attackers can easily get access to gigantic numbers of addresses in hard-to-identify ways (there's no source of truth for what IPv6 range corresponds to one blockable "customer"; some get /56s, others get /48s, etc.). It's security theater which may well just break your service for real users.

Beijinger
1 replies
11h1m

Can you post the script?

Obviously I assume you don't run WP. I think Wordfence does something similar.

DEADMINCE
0 replies
2h51m

It's probably just an nginx fail2ban jail or something that looks for the wp pattern.

fpoling
1 replies
11h30m

For me, the biggest source of noise in the logs for a small site is referrer spam. At some point, like 12 years ago, I enabled webalizer stats with a public link to the stats page. Soon I had to deal with a massive amount of bot requests with an HTTP referrer pointing to porn and pharmacy ads. That has not stopped even after the public link was removed and the stats started using a public spam database; the spam is still there after 12 years.

tombrossman
0 replies
1h12m

Matomo (self-hosted analytics, used to be called Piwik) maintain a list of referrer spam domains. I use it as a filter list with GoAccess and haven't seen referrer spam for a long time. Worth a look. https://github.com/matomo-org/referrer-spam-list

aadhavans
0 replies
11h56m

Gotcha, thanks for the detailed response. I've seen the WordPress login attempts in my own web server logs, and that seems to be corroborated in your comment.

DEADMINCE
0 replies
2h52m

testing recent Apache vulnerabilities by POST-ing to something like /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh .

Are they really recent vulns though?

ozim
0 replies
8h47m

99.9999% of issues on 80/443 are in the apps running on the server, not the web server itself.

It is the applications you run behind the web server that get exploited.

So serving static pages is the safest thing you can do.

chipdart
0 replies
11h13m

Out of curiosity, what are the ramifications of exposing ports 80 and 443? Can these ports even be 'hacked'?

These are the ports usually employed to serve HTTP and HTTPS traffic, which means public-facing servers.

Having a server listening on those ports is the precondition for running web servers that offer specific types of services, some of which have known vulnerabilities that can be and are exploited.

ValtteriL
0 replies
10h8m

Ports can't be hacked but the application listening on them can ;)

You can have vulnerabilities on the server software and its configuration even if you are serving only static content. This should be unlikely if you use up-to-date battle-tested software like nginx without making crazy config changes.

If you serve dynamic content, that may also have vulnerabilities that hackers can exploit.

nurettin
6 replies
11h49m

Don't worry, they are usually Russian/Chinese IPs scanning for 5-year-old PHP exploits. I've been exposing ports to the internet for decades with no issues. Always disable SSH password auth and keep software relatively up to date. If you are very paranoid, make a VPS beacon and remotely tunnel ports from your lab to it. That way you only expose the beacon.
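A hedged sketch of the beacon setup as an ~/.ssh/config fragment on the lab machine (all names and ports hypothetical); run `ssh -N beacon` to hold the tunnel open:

```
# ~/.ssh/config on the lab machine
Host beacon
    HostName beacon.example.com
    User tunnel
    # publish the lab's local port 8080 on the beacon's loopback
    RemoteForward 127.0.0.1:8080 localhost:8080
    ExitOnForwardFailure yes
    ServerAliveInterval 30
```

Only the VPS is ever exposed to the internet; the lab dials out, so no inbound port on the home connection is needed at all.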

zelphirkalt
5 replies
11h38m

I wonder, what is the issue with authenticating by password? If you choose a password of, let's say, 64 random chars, shouldn't it be pretty safe? Or is there something in the password method itself that is inherently weak?

denton-scratch
0 replies
8h55m

Or is there something in the password method itself, that is inherently weak?

Your 64-character high-entropy password might be safe; other users on your system might baulk at memorising/typing in 64 random chars, and choose a less-secure password instead. With SSH keys, that can't happen.

cess11
0 replies
11h21m

Sure, they probably won't crack that, but there are other things to consider as well. An sshd on IPv4 port 22 that accepts password auth attracts attention, and you'll spend CPU cycles constantly checking credentials from the very large database dumps that float around. In my experience it leads to more log noise too; it seems many bots will discard your IP and stop pestering it if passwords aren't accepted.

So in practice you'll probably also use something like fail2ban, firewall rules that only allow connections from certain IP blocks, things like that.
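In practice the key-only part looks something like this sshd_config fragment (the commented restriction is a placeholder range):

```
# /etc/ssh/sshd_config (fragment): key-only auth, which also quiets bots
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
# optionally restrict logins to known networks (placeholder range)
# AllowUsers admin@203.0.113.0/24
```

Run `sshd -t` to validate before restarting the daemon, since a typo here can lock you out.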

a_dabbler
0 replies
8h30m

The first benefit is that some bots won't bother testing passwords, as the SSH error message tells them the server doesn't use password auth. The second benefit is that if your server is compromised, it's quite easy for a rootkit to hijack SSH and steal your password when you log in (and then abuse it on other servers where you use it). The same is not true with a key, which is much harder for a rootkit to abuse as long as you only use the key on your local machine (there are strong protections against SSH handshake MITM attacks afaik).

KAMSPioneer
0 replies
10h9m

There are still advantages to public key auth. Sibling comment mentioned resource use, but also consider ease of use: are you setting a random 64-character password on every machine that has SSH server installed? Would it not be easier to generate one ed25519 keypair, apply a reasonable passphrase (and/or use disk encryption), and then you have secure auth on all your machines without a password manager?

If you're _not_ setting unique 64-character passwords per server, then you should consider what happens if your super strong password is discovered -- an attacker would have access to all your boxes. Compromising a key is harder than compromising a password.

Hendrikto
0 replies
7h19m

Or is there something in the password method itself, that is inherently weak?

You have to send your password/hash. With PKC, your private key never leaves your device. It can even live on a separate security key. All you ever send are signed messages, never your key.

mikhmha
3 replies
12h32m

Yeah this is what keeps me away from self-hosting public facing stuff. To me its like opening a new pipe into your home that is open to the whole world. And I'm too carefree to get the settings down right. So I avoid it all with complete process isolation. Don't shit where you sleep!

sureglymop
2 replies
12h10m

But couldn't, you, within your home, separate it from everything else? I don't see how it's any more dangerous really.

mikhmha
0 replies
11h34m

I should clarify: I mean self-hosting public-facing applications that generate revenue, ones that involve some transaction of currency/value with the user. Once money is involved you become a target. I don't want anything that could be traced to my physical address. I told you I'm careless; I'll eventually slip up on installing the patches or configuring something right.

Public-facing like serving some static web pages or a blog, text content? Yeah, do it.

Nux
0 replies
10h49m

Obviously you need to know how and if you don't then it's always going to look very daunting.

chipdart
3 replies
11h46m

I once tried hosting a web server at home by exposing ports 80 and 443 to the Internet. Hours later I reviewed the logs, thousands of attempts to hack into my lil Linux server. It spooked me to say the least, so I switched to using cloudflare tunnels instead.

Isn't this hypothetical risk mitigated or outright eliminated by using stateless apps and periodically redeploying them in the spirit of cattle?

metadat
2 replies
11h38m

Depends. If they get into the stateless app and use that foothold to penetrate other stuff on your network, they might be able to establish an APT (advanced persistent threat).

chipdart
1 replies
8h42m

(...) they might be able to install an APT.

As you're periodically doing clean redeployments, that's not a concern, is it?

immibis
0 replies
50m

Clean deployments of your entire home network?

kristopolous
2 replies
12h5m

I've been doing it for 25 years. It's fine.

Hendrikto
1 replies
7h25m

”Works for me.“ does not really answer the question.

Having a 25 year history might be why your mail gets delivered, while many people trying to self-host have constant and unpredictable deliverability issues.

kristopolous
0 replies
4h35m

It's more an advocacy against security paranoia.

You will always get automated attacks, constantly. But they're almost all doing stuff like trying to exploit a 12 year old bug in Wordpress or IIS.

They're about as sophisticated as any other scammer on the net.

spc476
0 replies
10h59m

I checked the logs for May for one website I run---65% of failed requests were for PHP scripts (mostly Wordpress). I don't run PHP so I don't worry. The rest of the requests were bots that can't parse HTML [1] and other weird requests. I've been running a webserver, SMTP, SSH and DNS for over 25 years and only once had an issue due to an inside job [2] twenty years ago (hard to protect against those).

[1] https://boston.conman.org/2019/07/09.1

[2] https://boston.conman.org/2004/09/19.1

JackSlateur
0 replies
6h58m

Everything on the internet is doing exactly these "dangerous things", with the exact same means you have at your disposal.

Exposing a service is not dangerous.

It is the same as when you take the subway and many people ask you for money: they keep asking, but that will not lead them to your bank account.

So you have logs; this is not an issue, not something to be scared of or even care about.

Just ignore them, as they are worthless and part of the v4 internet.

INTPenis
0 replies
12h37m

I noticed earlier this year while deploying a CoreOS VPS with terraform that sometimes you'd get an interesting IP that would receive incoming HTTP requests for interesting domains such as theguardian.com. I of course destroyed and re-deployed the VPS several times so the interesting IPs are lost to me, but it might be worth running a HTTP honeypot as well as an SSH one.

DEADMINCE
0 replies
2h56m

The traffic doesn't matter if you are sure your setup is secure. Key auth only for SSH, a reverse proxy in front of your actual web server, and secured containers or VMs for each service. Throw in fail2ban or CrowdSec and that's more than enough for a little home Linux server.

danielovichdk
37 replies
12h12m

I am not sure why this should keep anyone from hosting their own servers and services.

I find it positive to know that whatever and whomever expose anything on the Internet someone will try to exploit it.

For 443 and 80, why the concern? Outsiders can try all they want, but if you are certain the software you use is secure, there will be no cigar.

I'd much rather have these things out in the open than hide them away with some vague notion that obscurity should help.

If something is difficult, do more of it. The same goes for understanding security.

tjoff
27 replies
11h46m

if you are certain the software you use is secure

The entirety of the problem is that you can't be certain the software you use is secure.

quaintdev
11 replies
10h57m

Come on, web servers like Nginx and Caddy are not secure? If they found a zero-day in these applications the whole Internet would go up in flames.

robertlagrant
9 replies
10h31m

The whole internet keeps patching those flaws as they are found. The problem with self-hosting is patching.

wruza
8 replies
9h43m

This is a non-problem since the invention of unattended updates. This whole subthread spreads uncertainty and doubt over simple things like nginx or ssh. Service providers don't patch their software by hand either.

20 years ago, when I was still young and naive, I took these concerns way too seriously: remapped ports, believed the pwn stories, set up fail2ban and port knocking, rotated logs. Later I realized it was all just FUD, even back then. You run on 22, 80 and 443 like a chad, use pw-based auth if you're lazy, ignore login attempts and logs in general, and never visit a server until it needs reconfiguration. Just say f* it. And nothing happens. They just work for years; the only difference is you not having tremors about it.

The only time a couple of my vpses were pwned in decades was a week after I gave a sudoer ssh key to some “specialist” that my company decided to offload some maintenance to.

What changed from back then is that software became easier to set up and config and less likely to do something stupid. Even your dog can run a vps with a bunch of services now.

denton-scratch
6 replies
9h12m

And nothing happens.

Good luck. Some people have different experiences.

wruza
5 replies
5h24m

Some people install every PHP plugin they can find. Recently I gave a coworker access to a GUI server and the next day he complained he couldn't install some Chinese malbloatadware on it. People have different experiences due to different paradigms. My message is about not being anxious, not about being clueless.

With open source and how code works in general, we are all in the same boat as the bigcorps and megacorps. And they receive the same updates at the same rate (maybe minutes faster, since they host the repos).

This quote, "you can't be certain the software you use is secure", is technically true but is similar to "you can't be certain you won't die buying groceries". A perfectly useless fear for your daily life.

tjoff
4 replies
4h42m

I get what you are saying, and if anything all the "attacks" in the logs should build you some confidence. Oh, so 98% of all attacks assume I haven't changed the root password? I must be ahead in the game then.

But the way you phrase it isn't really convincing, especially the singling out of ports 443 and 80, as the subthread about breaches hints. You might not need to be worried about nginx, but whatever you host on nginx might be a problem, and being "certain the software you use is secure" is also pretty darn useless as guidance.

wruza
3 replies
3h41m

How do you run software? Or if you are using managed hosting or a platform for running software, how exactly do they solve this “security strictly < 1, but have to run somehow” dilemma?

tjoff
2 replies
3h20m

For systems exposed on the internet?

  * Try to avoid it in the first place.
  * Do research, minimize risk and make whatever compromises you are willing/able to make
  * Isolate it
  * Maintain, update and monitor it
At no point am I certain the software is secure.

wruza
1 replies
2h32m

You seem to factor some notion of absolute security, which is obviously nonexistent in this world (p != 0 for any event, according to some models), into your internet-exposure formula, when "minimize risk, make whatever compromises, update" is sufficient (to me) and everything above that is just worrying too much without having control. I think that's where we fundamentally disagree.

tjoff
0 replies
1h48m

I really don't.

Be aware of your threat model and the risks associated.

ricardo81
0 replies
4h19m

pw-based auth

better off using key only logins and forgetting IMO

mr_mitm
0 replies
10h15m

Even OpenSSH almost got a fatal backdoor recently.

danielovichdk
9 replies
11h37m

Exactly. And to overcome this you, as a user of that software, have to be aware of that specific software.

Most people don't give a shit; they pull down or introduce dependencies and think "wow, that was easy and fast".

Of course there is secure software, otherwise we wouldn't be able to live as we do.

lazide
8 replies
4h39m

As history has shown repeatedly, there is no secure software - just software that folks have not yet discovered how to exploit widely and effectively.

hollerith
6 replies
4h38m

That gives the misleading impression that it is impossible to create and maintain a truly secure software system.

lazide
3 replies
4h37m

I have yet to find any such system - given enough time and exposure.

What makes you think such a thing is possible? In reality, not theoretically.

I also have yet to find an unpickable lock, given the same constraint. Locks still have utility.

But only fools protect something very valuable with just a lock.

hollerith
2 replies
4h10m

What makes you think such a thing is possible?

The main source of my confidence is extrapolation from the results of successful initiatives to improve security. Rust is one such initiative: at relatively low cost, it drastically improves the security of "systems software" (defined for our purposes as software in which the programmer needs more control over resources such as compute time and latency than is possible using automatic memory management). Another data point is how much Google managed to improve the security of desktop Linux with ChromeOS.

There's also the fact that even though Russia has enough money to employ many crackers, Starlink's web site continued operating as usual after Musk angered Russia by giving Starlink terminals to Ukraine -- and how little damage Russia has managed to do to Ukraine's computing infrastructure. (It is not credible to think that Russia has the ability to inflict devastating damage via cracking, but is reserving the capability for a more serious crisis: Russia considers the Ukrainian war to be extremely serious.)

Sufficiently well-funded organizations with sufficiently competent security experts can create and maintain a software-based system that is central to the organization's process for delivering on the organization's mission such that not even well-funded expert adversaries can use vulnerabilities in that system to prevent the organization from delivering on its mission.

lazide
1 replies
4h4m

‘Secure’ == unable to be compromised.

You seem to be saying ‘secure’ == ‘compromises are able to be fixed’.

Which doesn’t fit any definition of secure I’m aware of.

Every one of those things you mention has been compromised, and then fixed, at various times. Depending on specific definitions of course.

And that is what we see publicly. Typically figure on an order of magnitude more ‘stealth’ compromises.

For a compromise to be fixed, someone has to notice it. Exposing machines to the Internet increases attack surface dramatically. Allowing machines to talk to the Internet unmonitored and unrestricted increases their value to attackers dramatically.

Without careful monitoring, many of the resulting compromises will go undetected. And hence unfixed.

[https://www.cvedetails.com/vulnerability-list/vendor_id-1902...]

[https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...]

[https://purplesec.us/security-insights/space-x-starlink-dish...]

[https://www.pcmag.com/news/account-hacking-over-starlink-spa...]

hollerith
0 replies
3h57m

You made a universal statement, namely, "there is no secure software".

If you had written, "99% of software used in anger is insecure," or, "most leaders of most organizations don't realize how insecure the software is that their organizations depend on," or, "most exploits go undetected", I would not have objected.

kjkjadksj
1 replies
1h56m

Is that impression not accurate? Everything is possible to exploit imo. Its why the us government spends a mountain on cyber defense and offense.

oopsallmagic
0 replies
1h29m

Better pack it in then, y'all, we're done writing software. If it can't be absolutely 100% perfect all the time, then why even bother?

oopsallmagic
0 replies
1h30m

Then why bother? I'm sorry, but where did this meek, defeatist attitude come from? It pervades software now. Sure, you're right, I guess I could get hit by a bus today, but that won't stop me from crossing the street, because there are a lot of things I can do to minimize my risk, like looking both ways, listening, and crossing at a signal. Software is similar. "Nothing means anything, all is chaos" might poll well on Reddit, but it's not good engineering.

moffkalast
4 replies
9h4m

Haveibeenpwned paints a pretty good picture. Breaches, breaches everywhere. The average piece of software cannot be trusted with keeping any data secure for any notable amount of time.

It's funny that password managers and random generated single use passwords are so popular now, because the greatest risk to one's credentials isn't direct attacks, but having them leaked by someone's half assed backend. It gets even funnier when the service that gets breached has some arcane password security rules with two symbols or whatever, the ultimate hypocrisy.

withinboredom
1 replies
8h21m

A “breach” usually means they got access to the database, which is much different from getting access to the underlying server. We aren't talking about databases, we are talking about servers.

moffkalast
0 replies
5h41m

It really depends on the architecture. At least I think it's fairly common for people to have some sort of database proxy running beside the static server, so there isn't any direct public access and to do some caching, but once you're there it should be pretty wide open.

otherme123
0 replies
8h36m

Almost all stories you read about data leaks are some variation of "I installed XXX database and forgot to limit access" or even "and I wrongly supposed it wasn't listening to an internet exposed port". Breaches are just queries.

oopsallmagic
0 replies
1h28m

To be blunt, those breaches are the result of software written by people I wouldn't trust to bag my groceries. I've never had a database get leaked, because I'm not a hack, and I know how to do the bare minimum above professional negligence to secure internet-facing services. I wish I could say the same about most of the industry.

dotancohen
7 replies
10h46m

  > if you are certain the software you use is secure
This is the problem right here. You can be certain that the software you use has security issues.

lofaszvanitt
2 replies
7h28m

And who will fire a $10k+ exploit at your server so you could record it and resell it? In the early days, surfing shady sites with Internet Explorer, you could net a lot of interesting JS that exploited the browser.

dotancohen
1 replies
2h45m

My server is an attack vector for my 10k+ users, and all their contacts. A 1% ransomware infection rate could net them $1 million USD worst case, and potentially an order of magnitude more if one of my users is browsing from a work machine in their network.

Don't underestimate the security value of people hitting your servers, even if all you think you're serving is emojis.

lofaszvanitt
0 replies
1h42m

I'm not underestimating. All I'm saying is that if someone pays $10k or more for an exploit against ssh/nginx/whatever, nobody is going to pepper your server with it. They will sell it to a broker and pocket the money, end of story.

You will be targeted if your server seems to be the lowest-hanging fruit, or the most easily exploitable, or if the target is most easily reachable through your site. Otherwise no one will bother with your setup.

input_sh
2 replies
5h4m

The question isn't does the software I run have some sort of yet-undetected security issues, but am I a valuable enough of a target for someone to waste their yet-undetected exploits specifically targeting me?

If the answer's no, then your only job is to keep up with software updates.

lazide
1 replies
4h40m

If you’re exposing your software to the external internet, you’re potentially valuable enough to get a drive by.

input_sh
0 replies
3h44m

Assuming your software is fairly up to date and/or you haven't badly misconfigured it, they're not gonna do anything. There are a ton of routers and IoT devices that are a much easier catch than a machine run by someone that actually gave a thought or two about securing their server.

danielovichdk
0 replies
7h53m

Sure. And so what ? Should I stop using it ?

jsiepkes
16 replies
8h0m

If you have only public key authentication enabled with SSH I honestly don't understand why people bother with things like fail2ban. It just adds more moving parts with very little security gain.

The real risk is a zero-day in OpenSSH and fail2ban probably isn't going to protect you from that. In that case you are better served by putting another layer of defense in front of SSH like a VPN.

BrandoElFollito
9 replies
6h55m

fail2ban is the kind of pseudo-security applied just because someone's cousin mentioned it in his blog.

It provides zero security. If your endpoint uses default usernames you will be shot anyway, because attackers spread their attempts across many IPs. If your security is good, all you've added is something that will block your own legitimate connection when you are in the middle of nowhere and, shit, cannot access your <some service>.

zbentley
4 replies
5h48m

You're not wrong, but I'd say fail2ban still has value for junior operators seeking to reduce load and increase stability. If you don't know how to harden SSH, fail2ban offers a much friendlier way to reduce the volume of logspam, CPU burn, and network traffic. It's just a pity that it's understood/documented/pitched as something that substantially increases security.

BrandoElFollito
3 replies
3h14m

If you don't know how to harden SSH

then you do not open it to Internet. Otherwise you patch aggressively, you use ssh keys and not passwords and you move it to some random port to hide it a bit (it actually helps)

logspam

you can filter this out in your log management tool

CPU burn

if this is your concern, then you have a heap of issues you need to address. I have never seen a CPU perf hit because of such behaviour (there are cases where it happens, but this is due to a vulnerability of the service)

network traffic

the packet is here already, there is nothing to reduce

Karunamon
2 replies
2h40m

Moving ssh off of port 22 makes it a pain in the ass to work with. Ports are standardized for a reason.

Authentication attempts are a useful security signal; I don't want to filter them out. I want hosts running dictionary attacks to not be able to connect to my services in the first place. If you are running an SSH bot, then I don't want you on my website or anything else.

BrandoElFollito
1 replies
2h27m

Moving ssh off of port 22 makes it a pain in the ass to work with. Ports are standardized for a reason.

yes, they were standardized in the good ol' times :) If you have a limited number of people/services connecting then it is manageable. But of course YMMV.

Authentication attempts are a useful security signal; I don't want to filter them out. I want hosts running dictionary attacks to not be able to connect to my services in the first place. If you are running an SSH bot, then I don't want you on my website or anything else.

enumeration and brute force on SSH fail by design when using keys.

As for other services I do not see how this helps - you will block random IPs hoping that a vulnerable site is not taken over if they happen to get back. It is not common (at least in my monitoring of several honeypots in various locations) to have the same IP being particularly visible. Sure they are back sometimes but this is quite exceptional. Anyway - it is not worth the hassle, better have proper hardening.

throwitaway1123
0 replies
1h26m

yes, they were standardized in the ol' good times :) If you have a limited amount of people/services connecting then it is manageable. But of course YMMV.

Agreed. I've never found it difficult to manage this. I already tend to configure SSH hosts in my ~/.ssh/config file anyway so that I don't have to remember every IP and port combination for every host I have access to when I want to use SSH (or something that relies on the SSH protocol like rsync or scp).
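For example, an entry like this (hypothetical values) makes the nonstandard port invisible to ssh, scp and rsync alike:

```
# ~/.ssh/config (hypothetical host, address, and port)
Host myvps
    HostName 203.0.113.7
    Port 49731
    User deploy
    IdentityFile ~/.ssh/id_ed25519
```

After that, `ssh myvps` or `rsync -av data/ myvps:backup/` just works, with no port flags to remember.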

d-z-m
2 replies
5h51m

"security" is a term that has to be defined in relation to a threat model. If your threat model is an attacker with a static IP hammering your server, fail2ban does provide some security against that sort of attacker.

SahAssar
0 replies
47m

If your server is on the internet with a public SSH server then it is probably providing some sort of internet service. That internet service is almost always easier to DoS than your OpenSSH server. If you are not providing an internet service, then why is your SSH open to the internet?

BrandoElFollito
0 replies
3h16m

No it does not. If the packet is at your door it is too late already. Then either it does not matter in which case you do nothing, or it matters (DoS) and then you have other problems.

You are right that security works in the context of a threat model. There are however useless tools that give a false sense of "security" that do not fit in any reasonable model.

I have cases where I block whole ranges of IPs for "legal" reasons - it does not make sense but there you are, the ones who write the rules are not the ones who actually know the stuff.

mmsc
0 replies
3h13m

People don't believe it's possible for software to be secure, and need a secondary defense to "protect them".

jcynix
2 replies
7h37m

Fully agree. Limiting the networks which can access your server will help, e.g. limit access to just your local provider or your workplace and you'll see no attempts from Brazil, China, ... unless you are located there, of course ;-)

ajsnigrutin
1 replies
4h24m

It's all fun and games, until you travel outside of your country, and try to access stuff at home.

jcynix
0 replies
3h49m

That's manageable with a bit of preparation: when I'm travelling, I allow access from other networks, e.g. those from phone providers. Or add a web form where I activate the IP address with a cryptographically signed "token" which the server can verify and then add the IP address to the set of allowed ones.

Used one or the other every now and then in the last 10+ years and still have my attackable footprint small the rest of the time.
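A minimal sketch of the token idea using an HMAC rather than a full signature (secret and IP are placeholders; assumes the openssl CLI is available):

```shell
# Server side: mint a token binding the visitor's IP to a shared secret.
secret="not-the-real-secret"   # placeholder
ip="203.0.113.7"               # placeholder client address
token=$(printf '%s' "$ip" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')

# Later, verify by recomputing the HMAC and comparing, before adding the
# IP to the allowed set (e.g. an ipset or nftables set).
check=$(printf '%s' "$ip" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
if [ "$token" = "$check" ]; then
  echo "token valid"   # prints: token valid
fi
```

A real deployment would also bind an expiry timestamp into the HMAC input so tokens can't be replayed forever.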

Too
1 replies
6h33m

How do you protect your vpn?

d-z-m
0 replies
5h48m

use a vpn that does not advertise its presence, like wireguard.

mekster
0 replies
7h32m

Repetitive logs are something you appreciate reducing, and you don't have to waste unnecessary CPU cycles on them either.

laktak
15 replies
11h33m

What does `echo -e "\x6F\x6B"` do?

zh3
4 replies
10h59m

It prints "ok" and shows they got in (it relies just on a shell, nothing else).

lucianbr
3 replies
9h54m

Why not do 'echo "ok"'?

kynetic
2 replies
8h36m

As shown by someone having to ask what it does, it obscures what it does.

lucianbr
1 replies
4h24m

Doesn't seem terribly useful. I mean it only obscures that it prints "ok". If you're looking at the logs, you probably already figured out someone is attacking you, and if you didn't, seeing "echo ok" will not help you figure it out.

If the only thing the command does is "obscure what it does", then the only thing it obscures is "obscure what it does". I guess there's no requirement that whoever writes these scripts is a genius.

Retr0id
0 replies
3h30m

People writing malware generally don't want to deploy it on honeypots, because then they're handing their payload (and other tradecraft) directly to analysts.

So often the first stage is an attempt at honeypot detection, or more broadly, device fingerprinting.

A bad honeypot might not even run a real /bin/sh, and this detects that right off the bat.

ggambetta
3 replies
11h10m

If you say it 3 times in front of a mirror, it summons Stallman

moffkalast
1 replies
9h2m

With or without the swords?

withinboredom
0 replies
8h19m

Only one way to find out!

pompompurin
0 replies
7h29m

Haha

raverbashing
1 replies
10h9m

Maybe I should create a honeypot where cat, echo, sed, and curl/wget all drop random bytes in all commands they execute

Would be fun

thesnide
0 replies
9h16m

Better would be to just subtly change the output...

Like doing a +1 on every 7th byte. Bonus: do it only on every 7th printable char.

And you can even do A/B testing on the constant 7.
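A sketch of that mutation in portable awk (the +1-on-every-7th-printable-char variant; function name and sample string are made up):

```shell
# Bump every 7th printable character by one, leave everything else alone.
mangle() {
  awk 'BEGIN { for (i = 32; i < 127; i++) ord[sprintf("%c", i)] = i }
  {
    out = ""
    for (i = 1; i <= length($0); i++) {
      c = substr($0, i, 1)
      if (i % 7 == 0 && (c in ord)) c = sprintf("%c", ord[c] + 1)
      out = out c
    }
    print out
  }'
}
echo "attacker payload here" | mangle   # prints: attackfr paylpad herf
```

Subtle enough that a fetched second-stage payload would be silently corrupted while short probe commands often still look plausible.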

ynoxinul
0 replies
11h6m

This looks like a simple test to see if remote command execution works.

spc476
0 replies
10h57m

It echos "ok".

gpvos
0 replies
7h28m

Tests whether `echo` supports the `-e` option.

Mxrtxn
0 replies
11h3m

Prints out `ok`

frankohn
9 replies
10h35m

Some time ago I set up a server for a website and I was appalled, like many others, by the number of SSH connection attempts. I decided to open SSH only on a randomly chosen port number above 1024, and now I have essentially zero probing attempts. It is trivial, but for me it is a satisfying configuration.

usr1106
8 replies
9h49m

This was true in 2018. In recent years I get 100s, sometimes 1000s of login attempts a day on high addresses.

My servers are on AWS addresses. If someone searches for servers (as opposed to routers, phones etc.) AWS might be a preferred address range. No experience whether scan rates depend on the address used.

eps
5 replies
9h19m

It appears to be two-stage process.

There are open port scanners that just check what ports are open on which IPs, and there are separate ssh login brute-forcers. Once your machine gets picked up by the former, the latter will pile up.

I have two servers on adjacent IPs, both with ssh listening on a high port. One gets hammered with login attempts and the other does not.

gradschool
1 replies
7h2m

This might not matter for your setup, but I would have thought it's bad in general to have sshd listening on a high port because then any non-root user who finds a way to crash it can replace it with his own malicious ssh server on the same port.

20after4
0 replies
4h9m

That's a good point, though you could use some firewall rules to rewrite the port number so that the local daemon is listening on the normal port but accessible via an alternate high numbered port.
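A sketch of that rewrite with iptables (port number made up); sshd keeps its privileged bind on 22 while only the high port answers from outside:

```shell
# NAT redirect: packets arriving on the external high port are rewritten
# to local port 22 before they reach sshd (needs root).
iptables -t nat -A PREROUTING -p tcp --dport 49731 -j REDIRECT --to-ports 22
```

Note that after the REDIRECT, filter-table rules see the traffic as destined for port 22, so any accompanying DROP rules need to account for that.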

usr1106
0 replies
6h8m

Maybe that's the case. The machines where I am seeing a lot of ssh login attempts on high ports have been on the same IPv4 address for years. Some since 2018.

nonamesleft
0 replies
5h28m

A lot of these seem to use zmap (https://github.com/zmap/zmap) or masscan (https://github.com/robertdavidgraham/masscan) for the initial scan.

Often with default parameters, such as zmap setting the IP ID to 54321, a TCP initial window of 65535, and no SACK bit set; and masscan with no SACK bit either, a TCP initial window of 1024, and a TCP maximum segment size of 1460 (which is strange to put below the initial window size!), with older versions having a fixed src port of 61000 or 60000 from documentation examples and no MSS set. All of these are extremely uncommon in legitimate traffic and thus easily identified.

Even those so called "legitimate" scanners (emphasis on the "") seem to use these tools with little or no extra configuration.

With this setup the last time my high-port ssh (key-only) has got an attempt on it was 2023-07-26 (previous intruders get permanently firewalled).
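One way to act on those fingerprints is to drop them in iptables. A sketch (the u32 offset assumes an IPv4 header; verify against your own captures before deploying):

```
# Drop probes whose IPv4 ID field is zmap's default 54321 (0xD431).
# u32 "2&0xFFFF" loads bytes 2-5 of the IP header and masks off bytes 4-5, the ID.
iptables -A INPUT -p tcp -m u32 --u32 "2&0xFFFF=0xD431" -j DROP

# Older masscan builds used a fixed source port taken from the docs' examples.
iptables -A INPUT -p tcp -m multiport --sports 60000,61000 --syn -j DROP
```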

frankohn
0 replies
8h59m

Interesting to know. For the moment (several months now) I still have no login attempts, so that means my server didn't get picked up by any port scanner.

gsich
1 replies
7h54m

addresses == ports in your view?

usr1106
0 replies
6h14m

Yeah, sorry about the mistake. Too late to edit the comment :(

kristopolous
8 replies
12h7m

in the early 2000s I kept an anonymous ftp server open and would routinely get the latest cracked software delivered right to my hard drive. It was very convenient.

sattoshi
4 replies
12h4m

Cracked software can contain extra features. Especially when delivered in this way.

seanthemon
1 replies
11h11m

Ooo like that awesome techno music on startup, or maybe bee movie during install

Etheryte
0 replies
10h40m

I like the idea that someone embedded an entire movie as a malicious payload in an installer.

input_sh
1 replies
5h28m

In the early 2000s it was pretty much expected that each and every computer you encounter is full of viruses. That is, viruses on top of viruses that come by default from everyone running a cracked version of Windows XP.

welder
0 replies
2h53m

Most people on here didn't use Windows in the early 2000s, or ever.

lofaszvanitt
1 replies
7h29m

Oh, when you needed specific ftp clients, because most of them couldn't handle special characters needed to access the directory containing the LOOT :D.

cranberryturkey
0 replies
5h49m

serv-u and cuteftp baby!

throw_m239339
0 replies
6h43m

"H2O, try before you buy..."

mtekman
7 replies
11h36m

I have a utility that parses ssh failed attempts and creates iptables blocklists:

https://gitlab.com/mtekman/iptables-autobanner

For those just wanting the blocklist, here is a table of malicious IP addresses, with columns of: address, number of ports tried, number of usernames tried.

https://upaste.de/bgC

securethrowaway
2 replies
11h18m

I simply run fail2ban with a whole bunch of custom filters that will ban people very quickly. There's no need for anyone to request PHP or malformed URLs when PHP is not used, for example.

mtekman
1 replies
11h15m

I used to run fail2ban, but I found it (or at least its defaults) ineffective at discouraging further requests. With iptables, you can make the connection hang for a period and then drop it.

justsomehnguy
0 replies
10h40m

Defaults are set to reject. Just configure the jails or a global config.

sambazi
0 replies
7h38m

a lot of ppl thought this would be a good idea at some point

miah_
0 replies
7h41m

An iptables hashlimit rule can do the same. Your firewall rules get to be more readable and you don't end up relying on the security of a log parser.

The biggest win comes from just disabling password authentication in sshd though.
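A hashlimit rule along those lines might look like this (the limits and set name are illustrative):

```
# Per-source rate limit on new SSH connections: drop sources exceeding
# 3 new connections per minute, after an initial burst of 3.
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
  -m hashlimit --hashlimit-name ssh-limit --hashlimit-mode srcip \
  --hashlimit-above 3/minute --hashlimit-burst 3 -j DROP
```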

eps
0 replies
9h18m

upaste link is 404

Phelinofist
0 replies
2h25m

I run endlessh; I always giggle when I see some connection that lasts for 2 days.

microbass
5 replies
6h15m

A perfect example of why one should use SSH over a mesh network like Tailscale, and don't expose over the public internet. No attack surface means no attack.

stanac
4 replies
5h32m

I love TS just for this reason. All ports are locked and ssh-ing is possible only via TS. And for public facing web apps I open only 80 and 443.

Does anyone have any experience with CF tunnels on a free account? Is it actually free for smaller apps with less than 1TB of traffic per month? I was wondering about switching to a CF tunnel, which would mean I could also close ports 80 and 443 and block China (because I read somewhere that most DDoS attacks come from China-based botnets).

microbass
1 replies
4h53m

For some additional peace of mind, you could also use something like Authentik in front of your web apps, so you don't expose the apps themselves, only Authentik. You can then use the IDP of your choice within Authentik for authentication.

stanac
0 replies
2h33m

Thanks, I was thinking about a small but public project.

andylynch
1 replies
5h14m

Yes, CF tunnels are $0 for very small users. I have this, as do many others, as a reverse proxy for stuff like Home Assistant and it works great.

stanac
0 replies
2h34m

Thank you, I'll have to try them

pingec
3 replies
10h23m

A bit tangential but is there a service or self hosted solution that would take a list of IPs and then keep scanning them periodically and alert me if any new ports have suddenly open?

cranberryturkey
1 replies
10h22m

hmmm....you could do that with nmap script and a cronjob.
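The scan-and-diff part of that cronjob can be sketched as a comparison of sorted port lists (the filenames and the nmap invocation in the comment are assumptions):

```shell
#!/bin/sh
# Compare today's open-port list against a saved baseline and report new ports.
# In practice you'd generate ports_current.txt with something like:
#   nmap -p- -oG - target.example.com | grep -oE '[0-9]+/open' | cut -d/ -f1 | sort -n
printf '22\n80\n' | sort -n > ports_baseline.txt
printf '22\n80\n8080\n' | sort -n > ports_current.txt

# comm -13 prints lines only in the second file: ports open now but not before.
new_ports=$(comm -13 ports_baseline.txt ports_current.txt)
if [ -n "$new_ports" ]; then
  echo "ALERT: new open ports: $new_ports"
fi
```

Point the alert line at `mail` or a webhook and run it from cron.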

cranberryturkey
0 replies
3h35m

I just scanned my domain for all 65k ports and it took 20 seconds with a 10gbit pipe. i could scan yours for you and shoot you an email if a new port is discovered. Would charge you Like $100/year or something.

bluish29
0 replies
10h1m

I think Shodan could be useful in this regard

https://www.shodan.io/

noduerme
3 replies
11h10m

Good grief. A couple days ago I re-enabled password logins on a server that normally only accepts private keys, just to check something from a third location, and then forgot to turn it off. Two days later the server's logs were full of thousands of failed login attempts that started a few hours after I enabled passwords and then ramped up to dozens per minute.

Just because it didn't instantly say "Goodbye".

I checked IP locations on the biggest offending addresses; all were in China.

I don't know what to call the idiocy and amorality that leads people to scan port 22 for a living (or the stupidity that leads them to guess random passwords for random usernames that don't exist), but I suppose that for every gardener there are a billion ants.

p_l
0 replies
8h26m

There's a cottage industry of shitty mass-scanning attacks that continue onto getting root on badly setup fresh installs of various linux distros and drop a rootkit on them.

Some other common targets are websites to be reused for spam (hello, Wordpress!) or things like GitLab to be hijacked (again to drop a rootkit).

The rootkits are then usually used for DDoS extortion rackets (usually against game servers, including online gambling), spam (might be less big today than it used to be), or cryptocurrency mining (from my experience, mainly Monero).

One time it happened in a network I set up due to miscommunication and misunderstanding of how vendor's install scripts worked (by vendor technicians!). During investigation, we found out that this particular "kit" was sold cheaply on a chinese forum (used to be russian forums back in the day, eh), as complete package to run on Windows to attack linux hosts for DDoS botnet purposes.

mmcnl
0 replies
1h48m

I have SSH access to my server behind a VPN. Not opening port 22 makes life a lot easier.

beastman82
0 replies
5h36m

The name for it is "authoritarian government"

hugocbp
3 replies
5h12m

Amazing article!

It is actually amazing how fast and thoroughly the connection attempts begin as soon as you put anything online.

I've been playing around Hetzner and Coolify recently, and notice that, as soon as port 22 is opened, it is bombarded by those attempts. Several per second. It might be due to Hetzner IPs being reused, but happened to me every single time. Same with Postgres default port (those were the ones I've seen).

I have defaulted to use Terraform and bash to only open those ports in the Hetzner firewall (and more common ones like 3000 or 8000) to my own current ip. It does mean I'll get drift and need to reapply the Terraform code if I change ips, but seems to be at least one way to defend.

I fear that a lot of devs jumping into the "you only need a VPS" crowd on Twitter will end up with a huge attack surface on their apps and machines and most won't even know they are being targeted like that most of the time.

To this day I still find it hard to find a comprehensive security guide for those newer Linux fresh boxes (and the ones you find are all so very different with different suggestions). If anyone knows of a good one, please share with me!

fsmv
1 replies
4h58m

You just need to turn off password authentication so it's keys only. They can attempt logins all they want and never get in.

Also if you run ssh on a nonstandard port you get many fewer attempts. There are several groups that constantly scan all of ipv4 for open ports, if you use ipv6 they cannot scan that space anymore.

Optionally you can set up fail2ban but I find it's not a big deal.

ogud2025
0 replies
4h44m

I changed my SSH configuration to only listen on an IPv6 address 6 months ago and since then the number of SSH attacks has fallen from 1000+/day to less than 10/week.

e12e
0 replies
5h9m

I would recommend just using a VPN, like tailscale, for all non-public resources - rather than IP whitelisting.

Ed: including private web services like self-hosted gitlab not used for publishing public projects.

mianos
2 replies
7h7m

Over 90% of the ssh logins come from just a few China Telecom addresses. They just keep trying random ssh accounts over and over all day. I just geoblock China now. Maybe occasionally unblock it for a few minutes if the kids want to buy something from Shein. Then I honeypot the rest with the continuous ssh banner script.

m0rde
1 replies
2h18m

What's a continuous ssh banner script?

pompompurin
1 replies
7h55m

How did he expose his honeypots and make the bots aware of his existence?

themoonisachees
0 replies
7h47m

If your server has something that listens on port 22, you just have to wait for like 5 minutes

nilsherzig
1 replies
7h47m

Check out https://viz.greynoise.io/ especially the trends > anomalies tab is very interesting

jslakro
0 replies
6h23m

How do you use that information?

lithiumii
1 replies
9h26m

My new VPS got an SSH attempt within 5 minutes of my purchasing it. I'm now in the process of running a similar honeypot experiment.

cess11
0 replies
8h11m

If you push it you can scan the entirety of IPv4 in about five minutes.

eps
1 replies
9h25m

8181 root

In 30 days? That's a tad unrealistic.

Just checked and there are dozens root login attempts per minute on my colo'ed server in the EU. Virtually all from the Chinese and post-Soviet IP space. But mostly Chinese.

nubinetwork
0 replies
9h9m

I see ~1000 unique IP addresses hitting SSH every day.

FredPret
1 replies
4h25m

I simply block traffic from countries I do not do business in.

I used to see constant attempts to mess with Wordpress URLs, which I know is not legitimate because I don't run Wordpress.

Cutting out Russia & China basically removed this problem. I really hate locking up my tiny corner of the internet but I don't see another way.
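Country blocking along those lines is commonly done with ipset, so the firewall only needs one rule no matter how many CIDRs are in the list (the zone file name and its source are assumptions; per-country lists are available from e.g. ipdeny.com):

```
# Build a hash:net set from a per-country CIDR list.
ipset create block-cn hash:net -exist
while read -r net; do
  ipset add block-cn "$net" -exist
done < cn-aggregated.zone

# One iptables rule covers the whole set.
iptables -I INPUT -m set --match-set block-cn src -j DROP
```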

oopsallmagic
0 replies
1h25m

Waiting for the whatabout crew to show up asking what you'll do if the website for Joe's Barbecue and Grill needs to be accessible from Moscow.

throw156754228
0 replies
2h9m

My website backend APIs get repeated attempts at javascript prototype injection, all day, every day.

tanepiper
0 replies
6h53m

We run internal sites that are on the public facing web - the logs from Akamai are a daily list of mostly the same requests to find unsecured Wordpress and MySQL installs, .cgi and php files and paths like "..%C0%AF../..%C0%AF../..%C0%AF../..%C0%AF../..%C0%AF../..%C0%AF../etc/profile"

In 24 hours theres anywhere from 7000-9000 log events just from the CDN

slt2021
0 replies
1h55m

dont ever run publicly exposed production SSH. If there is vulnerability in your version of ssh, you risk getting pwned.

simonmysun
0 replies
7h25m

Coincidently, I recently visualized the scanners for fun by plotting them on a globe[1]. It gives a more comprehensive view of the locations and ASNs of the scanners. The demo data is generated from 1 day of logs.

[1]: https://github.com/simonmysun/where-are-the-scanners

Amazingly, there are no requests from my own provider's ASN. I believe this is because the VPS provider has quite a strict validation process, e.g. you have to upload a photo of yourself with your ID and your handwritten username, etc. I would suggest we consider the reputation or credibility of data centers so that the data centers have an incentive to ban such users. In my case, a lot of the requests were sent from Tencent or Alibaba data centers.

nisa
0 replies
7h9m

Somewhat related: due to a weak password, a mail server from a community I'm involved in sent out lots of spam. After analysing the log files I found over 1500 different IP addresses that had logged in to send spam, about 10 mails per address. ASNs and subnets were spread across the whole world. It seems these attacks are coordinated using vast botnets, and the use of a single ssh public key here seems to confirm this. I had similar experiences going after attacks on WordPress instances, where I also found attacks spread out across lots of hosts.

I'm wondering if it's possible to pin down those behind these attacks, there must be mistakes.

msephton
0 replies
4h3m

I wanted to read more about the interesting part!

jcynix
0 replies
7h41m

I've been running self-hosted servers for the last 25+ years without an incident, and it's less complicated than it might seem if you learn a bit about securing unix-based systems (ok, I already had 10+ years of server admin knowhow for various systems, but anyway, it's not rocket science ;-). Yes, an hour or so after you connect any machine to the Internet, you'll see attempts to "talk" to your server. So don't wait to set up basic security. But it has actually never been so easy to "just give it a try" (see below), with all the virtual offerings today. So here's a short/raw sketch of the basic things you'd need to do:

1. 25+ years ago I used http://easyfwgen.morizot.net/ to generate an iptables based local firewall. Still works fine (tweaking some things then and now) and allows only certain ports to be accessed at all. I just open email, ssh and a web server.

The generator is well documented and still works, although it would be nice to see an updated version to newer firewall software like pf.

2. server configs:

edit /etc/hosts.deny --> restrict all by default

  ALL: ALL
edit /etc/hosts.allow --> allow your service providers networks, e.g.

  sshd: .t-dialin.net
  sshd: .dip0.t-ipconnect.de
So you can connect to your machine for further setup, but not the whole world.

3. set up sshd:

edit /etc/ssh/sshd.config

  # allow key-based access only
  PasswordAuthentication no
Maybe change sshd's port (reduces log file entries) but don't forget to allow this port in your iptables setup and your /etc/hosts.allow

People have opinions on key-based access, I know. But my private and public keys are stored in various secure locations, including my phone (password safe), and I can access my server even from my Android phone or tablet via Termux.

4. set up email (I suggest postfix as an MTA):

configure restrictions in /etc/postfix/main.cf, e.g.

  # restrictions in the context of the RCPT TO command
  smtpd_recipient_restrictions =
        reject_invalid_hostname,
        reject_non_fqdn_hostname,
        reject_non_fqdn_sender,
        reject_non_fqdn_recipient,
        check_sender_access hash:/etc/postfix/sender_access,
        reject_unknown_sender_domain,
        reject_unknown_recipient_domain,
        permit_mynetworks,
        reject_unauth_destination,
        [...]

  # restrictions for clients connecting
  smtpd_client_restrictions =
        reject_unauth_destination,
        check_client_access hash:/etc/postfix/access_client,
        reject_unknown_client,
        reject_unauth_pipelining
This heavily reduces the amount of spam you'll see. I add greylisting too, as this even nowadays reduces even more unwanted traffic. Combine that with spamassassin if you like. This setup gives me maybe one spam per day reaching my inbox (actually the spam subfolder).

5. Learn by doing (not just reading stuff on the Internets ;-), that is, set up a machine, e.g.

If you'd like to experiment a bit, take a look at Hetzner's inexpensive cloud servers; these are easy to set up (incl. a virtual firewall in front of them) and take down after some experiments or a failure. You can do this in Hetzner's web interface, even if you misconfigure your server to be inaccessible. Cf.

https://docs.hetzner.com/cloud/servers/overview/

Tip: Hetzner's web interface allows you to pre-define an ssh key which they'll install automatically on your new machine (but they leave password login enabled, so change that asap).

Disclaimer: I'm just a happy customer, no other relation. And it might be as easy to do this with Digital Ocean, which have some nice tutorials too, for example on the set up of a web server:

https://www.digitalocean.com/community/tutorials/how-to-inst...

Last but not least, No Starch Press offers some nice books like "How Linux Works" or "The Linux Command Line" (if you're not sure about that) or even "Linux Firewalls: Attack Detection and Response" ...

You learn most by trying.

I'm now heading for the beach to enjoy some offline adventures and will answer questions later if needed.

ibbtown
0 replies
12h9m

Had my own server at university during my PhD. Most requests were attempts to download scientific papers from large journals, using absolute rather than relative URLs in the request.

gunapologist99
0 replies
2h37m

In conclusion, these commands represent a clear strategy to infiltrate, assess, and establish control over targeted systems.

Oh hello, ChatGPT. You seem to be everywhere these days.

figassis
0 replies
6h55m

Most of this nonsense disappeared when I adopted wireguard and later Tailscale.

e40
0 replies
3h59m

We use port knocking and haven’t had a single hack attempt in many years.
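For reference, a bare-bones single-knock sketch using iptables' recent module (the knock port and timeout are placeholders; real setups typically use knockd or multiple knock stages):

```
# A SYN to the secret knock port records the source address...
iptables -A INPUT -p tcp --dport 7000 -m recent --name KNOCK --set -j DROP
# ...and only recently-knocked sources may open SSH within 30 seconds.
iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```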

ciebie
0 replies
8h52m

What is a `lockr` command? Is it file system specific or something? Never seen anything like this. It probably should lock permissions on .ssh, but how?

chickenfish
0 replies
5h26m

I guess the compromised host was probably also using the same weak password while it brute-forced other hosts.

charles_f
0 replies
1h55m

I opened my personal server's 22 to the world because I screwed up my vpn config a couple weeks ago. I just had a look at the auth log and closed it again. It is non-stop.

braza
0 replies
28m

(Long shot) I really would like to use a spare machine to serve a Jupyter Notebook server over the web, but I have not found a single resource on how to block everyone except a single IP or something like that. Super annoying to pay some cloud provider for a resource that I already have.

bobbob1921
0 replies
2h4m

Not sure if op will see this, but with regard to his comments on MikroTik routers and frequently seeing in his honeypot logs, the command: /ip cloud print

He is correct: this is a MikroTik command. Although MikroTik ships this feature disabled by default, a lot of users make use of it, and running that command (if cloud DNS is enabled) will show the dynamic DNS entry of the device you're connected to. I.e. if cloud DNS is enabled, the output will be something like: Detected public ip: 34.2.82.3 DynDns: djwisyehd.clouddns.mikrotik.com (which will always be updated to the detected public IP address of the router)

So I assume the attackers run this command so that they can still reach the router in case it’s public IP address changes at some point. (And assuming that the device will still be accessible after any public IP address changes).

(or perhaps they run that command to see if the cloud DNS service is disabled, which is the default, in which case they will then enable it so that they will have a dynamic DNS entry for the device).

agilob
0 replies
11h49m

There's a project for running Honeypot as a Service: https://haas.nic.cz The data is public and you can register your router too

Tiberium
0 replies
11h35m

Interesting article, sadly due to my exposure to LLMs I couldn't help but notice that the parts about "oinasf" and sakura.sh are AI-edited at least. Kind of a weird choice considering that a lot of the article was clearly human-written.

RecycledEle
0 replies
1h17m

I am amazed we have not yet said "Hands off!" and coordinated physical interventions against the scum who attack our electronic brains.

Is it so hard to kick in the doors of those whose IP addresses are used to try to hack honeypots?

This lack of action is why I oppose all law enforcement. Until they do their jobs, they do not need to be paid.

ProllyInfamous
0 replies
11h49m

I somehow found myself in charge of a computer lab two decades ago... and idiotically set up admin controls via SSH.

The entire lab was down for almost a week [immediately hacked], and then I suddenly moved a few states away.