“However, you are still not convinced, so you call your manager to ensure that the email is legit. He confirms, so you transfer the money.”
I feel like it’s a HUGE (silly) assumption that you’d ask generically “did you send this email?” instead of something more specific like “do you REALLY want me to transfer you money like this?”, which would obviously confuse the manager, and the attack would likely be killed in that conversation.
This is an interesting attack vector but I am questioning how likely it is to succeed. The article paints a very specific and narrow window of events for this attack to really work. I don’t buy it, personally.
EDIT: I know phishing happens and works. I am not saying it doesn't. I just mean the people that fall for phishing don't need an attack this sophisticated to fall for one. In fact, the attacker probably narrows the chance of success by putting this much extra (very specific) effort into the attack. They are likely to just succeed with their typical phishing email.
Worked for a 10,000-person company, 50% engineers. There'd be several cases a year of someone using the company credit card to buy gift cards for "the CEO" or other senior execs whose details are available on LinkedIn or corporate sites, despite that exact case being the example in the anti-phishing training. So you'd be surprised.
I don't doubt phishing happens. I just think this specific scenario/technique is one that is probably extremely rare. The attacker likely wouldn't put this much extra effort/thought in when their basic attacks already work, like you're describing.
Security at my job pumps their numbers by pretending you fell for a phish if you click the link in their obvious phishing test emails. I clicked one to see how good of a job they did at the other end of the link trying to extract whatever they want from me, but there's nothing there! So lazy.
I got dinged for clicking "report as phishing": part of that process forwards the email to Microsoft Threat Intelligence in Outlook, so their systems said I had forwarded it and therefore fell for the phishing. Now I look for a particular header and put all of those messages in a "phishing" folder.
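For anyone who wants to do the same thing, the gist is just: match on the vendor's marker header, then move the message out of the inbox. Roughly something like this (a sketch using Python's imaplib; the server, credentials, header name and folder below are placeholders, since every phish-sim vendor stamps its own):

    # Sketch: file anything carrying the phish-sim vendor's marker header into a
    # "Phishing" folder. Server, credentials, header name and folder are placeholders.
    import imaplib

    IMAP_HOST = "imap.example.com"     # placeholder
    MARKER_HEADER = "X-Phish-Sim"      # placeholder: whatever header your vendor stamps on its tests

    with imaplib.IMAP4_SSL(IMAP_HOST) as mail:
        mail.login("me@example.com", "app-password")   # placeholder credentials
        mail.select("INBOX")
        # An empty value matches any message that has the header at all (RFC 3501 HEADER search).
        status, data = mail.search(None, f'(HEADER "{MARKER_HEADER}" "")')
        for num in data[0].split():
            mail.copy(num, "Phishing")                 # the folder has to exist already
            mail.store(num, "+FLAGS", "\\Deleted")     # then drop the original from INBOX
        mail.expunge()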
I run my organization's phish sims, and we had a similar issue one month. A bunch of people failed for downloading attachments. When I looked into it further, all the attachments were downloaded by the same Czech IP address. With some research, I found that it was an AVG IP address. The fix is very simple: the phish sim service has a place to exclude IP ranges, and any activity from those IPs is just ignored. I'm sure all phish sim services and software have this ability.
Now when I see a phish, I check to see where it is coming from. 97 percent of the time, it is a test. We're getting these tests often enough that I just assume that's what it is.
Which is fine, actually. If you see it and think "oh, IT is at it again" and delete it or report it, mission accomplished, because there is still that 3/100 chance it is real.
It only works on fake phishing.
So when you look at the sender of a suspicious email and it's not the phish sim service you just go ahead and open it? That doesn't sound like a problem with the phish sim.
It's certainly a problem with the phish sim if you're trying to teach people not to open random links and instead you're teaching people not to open phish sim emails.
In fact, it can be actively harmful if it creates a false sense of security.
Question: why is clicking on the (test) phishing email's link a "fail"? Isn't the whole contract between browsers and society that one can safely open any website they want (i.e. loading a webpage is safe), and that what you do on the actual site is the actually unsafe operation?
Asking because in the vast majority of cases, the phishing landing page has way more signals to recognize than the email headers.
Unfortunately not. If there is a 0-day vulnerability, or you're running an older browser version with a known issue that's since been patched, you may find yourself with remote code execution or a 0-click download. Or it could be another kind of exploit; maybe your email service is vulnerable to XSS attacks. Like operating systems, browsers can have security issues too. So trusting your browser to see whether a phish is really a phish is just unnecessary risk. I've worked with clients that have ended up with crypto lockers from clicking the link. Even from the IT side, I'm not going to increase the risk by opening a known phishing link to check how good it looks. If I am, it's going to be in a system that doesn't have active logins to other systems/sites and is easily disposed of and reset. Check out all the YouTubers getting their channels hacked via session stealing. Yes, they are falling for phishing attacks, but you really don't know what the attack vector is going to be. It might just be a fake login, or it could be much more sophisticated.
Thanks, that makes sense!
I got dinged once for using curl (in a VM) on the link to get the details to pass on when I reported it.
I once got dinged for forwarding an obvious gotcha email, without ever opening it, to our security team's phish notification address, as our employee handbook instructed. I learned my lesson.
I once got dinged for not reporting. I saw an email that was clearly an internal security campaign. I deleted it. I received an email a day or two later stating that I failed to take action on a phishing attempt. Damned if you do; damned if you don't.
For a while I had a Thunderbird filter to automate forwarding based on our provider's email header.
They disabled SMTP and the Gmail web client has no such ability to filter on arbitrary email headers.
You can set up a Google Apps Script automation to do this for you.
I did for e.g. KnowBe4, since all their test emails have the same header information. It made it quite easy to never see any of their attempts, though I did have to check every once in a while to see if I'd been signed up for any random learning, since it removed those emails as well.
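For illustration, what that automation roughly boils down to, sketched here with the Gmail API's Python client rather than Apps Script (the marker header and label ID are made up, and you'd need OAuth credentials with the gmail.modify scope already set up):

    # Sketch with the Gmail API Python client (google-api-python-client). Gmail
    # search can't match arbitrary headers, so fetch header metadata and check it
    # client-side, then relabel the message.
    from googleapiclient.discovery import build

    MARKER_HEADER = "X-PHISHTEST"     # placeholder for the vendor's marker header
    PHISHING_LABEL_ID = "Label_123"   # placeholder: look it up via users().labels().list()

    def sweep_phish_sims(creds):
        service = build("gmail", "v1", credentials=creds)
        resp = service.users().messages().list(userId="me", labelIds=["INBOX"]).execute()
        for ref in resp.get("messages", []):
            msg = service.users().messages().get(
                userId="me", id=ref["id"],
                format="metadata", metadataHeaders=[MARKER_HEADER],
            ).execute()
            headers = msg.get("payload", {}).get("headers", [])
            if any(h["name"].lower() == MARKER_HEADER.lower() for h in headers):
                # Tag it and pull it out of the inbox.
                service.users().messages().modify(
                    userId="me", id=ref["id"],
                    body={"addLabelIds": [PHISHING_LABEL_ID], "removeLabelIds": ["INBOX"]},
                ).execute()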
IIRC, the same company had locked down the allowed OAuth apps, so you would have needed an exception from security to run one.
I doubt they'd have granted an exception to stop getting annoyed by their own training.
Yeah, the links from Proofpoint are unique to you, so however you visit one you still get tracked.
It was when I was working at HP/HPE/DXC (I don't remember what it was at the time), I don't remember what they used.
Thank you!
- Browser 0-day vendor
You aren't wrong. I've got a heavily locked-down browser on an off-network device for working with questionable websites. While the vast majority of phishing sites aren't pushing malware, spearphishing is another story.
IT still might not want you to follow the link.
* Other users might instead have an incompetently secured browser on their work devices that they merely think is locked down. It is hard for IT to distinguish between you and them.
* If the URL is personalized, it tells the attacker that the address is active. This is probably pretty limited help to the attacker. But it might tell them whether your company's email addresses follow a particular format, right?
I just asked ChatGPT and it knows what email address format the company I work for follows, so I'm not sure this is of particular value.
It's useful, even if you aren't a scammer, but it's generally not hard info to get elsewhere.
It's good that I otherwise don't click on links in my browser during my day-to-day work. /s
Good thing browsers aren't able to display content from random unvetted third parties, in exchange for money, on any website you visit too :)
Adblock is a security measure at this point.
I feel truly sorry for whoever spends a browser 0-day giving RCE on me.
Many phishing simulation systems are not technically correct. Microsoft, Google and other 'security vendors' may inspect links in emails. That link inspection can sometimes be blamed on the end user. "You clicked the phishing link, now you have to take remedial security training!"
The only way to know for certain that a user fell for a phish during a simulated exercise is to make an HTML form that does an HTTP POST request containing the user's credentials (which only they could type in). If a user enters their username and password and clicks submit, then they fell for the phish; otherwise no one can say for sure who, or what software, issued the simple HTTP GET against that link.
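To make that concrete, here is a minimal sketch of the landing-page side (Flask, with made-up route and field names, purely for illustration): a bare GET only returns the page and proves nothing, and only a credential POST gets recorded as a failure.

    # Minimal Flask sketch of the distinction. A GET on the landing page proves
    # nothing (it may be Safe Links, an AV scanner, or curl); only a POST with
    # typed credentials is counted as a fail. Route and field names are made up.
    from flask import Flask, request

    app = Flask(__name__)

    LANDING_PAGE = """
    <form method="post" action="/login">
      <input name="username" placeholder="Username">
      <input name="password" type="password" placeholder="Password">
      <button type="submit">Sign in</button>
    </form>
    """

    @app.route("/login", methods=["GET", "POST"])
    def login():
        if request.method == "GET":
            # Could be a human, could be software pre-fetching the link: don't count it.
            return LANDING_PAGE
        # A form POST with credentials is the only thing recorded as a failure.
        user = request.form.get("username", "")
        app.logger.info("phish sim failure recorded for %s", user)
        return "This was a phishing simulation. Time for the training video."

    if __name__ == "__main__":
        app.run()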
Microsoft's Safe Links technology does not actually inspect the link until the user clicks on it. This is to avoid confirmation links, which some services use to confirm registration or as 2FA, being triggered by the security engine without user consent.
Our workplace Outlook phishing protection does, though. I was signing up to test one of our apps recently and my email was auto-confirmed in 5 seconds despite my never receiving it. Turns out it was caught in the phish filter, which automatically clicked the link to check it, so the above is not always true. Confirmed this with a few co-workers too.
We must use the same vendor, as I heard about that happening to my coworkers. I clicked "it's phishing you idiots" in Outlook and got a gold star. I find it funny because my organization doesn't even use email, so 100% of email I get is spam or phishing.
The dead giveaway on this email was that there was a Via: header that was like "phishingtestsforyourworkplace.com" or something.
I did that once for the same reason, and found myself sentenced to mandatory security retraining videos with no possibility of appeal.
While I’m not saying the specific scenario will work 100% of the time, it doesn’t need to: by the email getting forwarded at all, there is some element of trust in “my manager forwarded me this email and typed ‘complete this for me’”. If this CSS technique increases the attacker’s odds, then it’s an issue.
Or for your specific example, imagine the recipient is passing their manager in the hallway: “hey, can we chat about the Acme Corp email, I’ve got some questions about it”. Response: “sorry, super busy. It’s a fairly common ask, just get it done!”
Maybe it's just good to be aware.
Yes. It's more of the opposite. It's a well documented fact that the most obvious/ridiculous scams work the best, because they help select the most gullible potential victims.
https://www.microsoft.com/en-us/research/publication/why-do-...
This is only true for high throughput spam e-mails, such as those sent to literally every e-mail address in a large data breach. Corporate phishing attacks are much, much more advanced.
That doesn't mean those scams are actually commonly successful.
That analysis is from the perspective of the scammer. The scammer has limited time to write to each victim once the responses come back from the initial mass-email, so the scammer is better off if only the most gullible people reply. From the perspective of the person being attacked, the counterintuitive result based on selection bias goes away, and a more convincing scheme is more of a risk to you personally. (The assumption that scammers have limited time to write to each victim may itself become less true because of LLMs.)
You’re right. They wouldn’t ask any questions at all, and just send the money.
Agreed, the people falling for this would already fall for a much more basic phishing attempt. Thus, the attacker has no need to put this much extra effort/thought into it.
This doesn't even need to be a hypothetical. We know that attackers currently do not need to do this, because they don't. Darwin's law is very much in effect for scams of all types.
Still pretty cool trick though
This attack works like normal sales calls. Hit enough of them and you'll find someone that's new, or in a rush, or distracted or ancient or challenged or a Republican idiot, or, or.
That's why it's still in use today. It works, but takes a lot of "cold calling" via phishing to find targets.
One scenario where this might not be far-fetched is when such mails are sent to the accounts payable department of a large company. The people there are not going to call a line manager every time a payment request comes through email, especially if the dollar amount is small and didn't require pre-approval.
I remember even Google had fallen prey to such a scam where they were paying somebody even though no work was done. Admittedly, that case involved fictitious invoices. However, the principle remains the same.
My gut says this could be more effective. After all, the initial “phish” (the innocent-looking email the manager receives) isn’t fishy at all, and is unlikely to trigger any concern. Once the stakes are raised and the scam is revealed, the email has already been granted some amount of legitimacy.
Sure, it can easily fail (“did you really want me to wire money to Cyprus?”), just as any phishing email can. But by bypassing the initial phishing filters of the recipient’s awareness, I could see it having a higher success rate than a cold phish that leads immediately with the scam.
No evidence or knowledge either way, just a hunch.
I agree, but actually it's just a really bad example that takes the reader to the wrong place because it has the participants acting so irrationally.
The underlying issue is still there, they've just distracted from it by putting this in and having the reader go "hang on a second". They should have used a situation that was more believable, but also concentrated more on requests where the target likely wouldn't even seek confirmation.
I agree the example they give seems a bit unlikely especially since the subject line is not changing (though admittedly I do not have experience in this area).
However, something a little more subtle, such as swapping out a routing number from a legitimate to an illegitimate one, could be done, and that seems harder to catch, especially if the person who forwarded it to you is supposed to have verified it first.
A more trivial gambit is getting the target to log into an attacker-controlled site, leaking credentials or installing malware.
Also, office drones are probable targets. They won't want to waste important people's time asking for confirmation.
I agree that this seems so specific that, while it is very interesting from a technical perspective, it is also much less likely than most phishing.
I think that, in theory, it could allow for more sophisticated and targeted attacks, like changing the intended recipient of a money transfer. That would be much harder to detect.
Could be a link to some kind of portal.
You ask your boss if he sent the link to the portal, he confirms, and then they change the link to a phishing site.
It depends on the people, I guess. Some managers will be annoyed by such conversations when they have to approve payments like that multiple times a day, so employees might want to avoid them.
The attacker could even add something like this: "I am currently on a trip. If you are unsure, call me on my private mobile number..." and then respond with a faked voice. I think a good way of reaching targets would be a "double" forward: the sender assumes the role of an employee forwarding a manager's email to an administration-adjacent employee. That employee unsuspectingly forwards the seemingly harmless mail (which appears to have been forwarded from the manager) again, for a reason like birthday wishes or a sick notice. This makes it hard for the actual target to work out where the email originally came from.
Besides that, one can easily think up more creative ways to use this "feature", e.g. getting unsuspecting people to forward problematic content and then blackmailing them.