
Second factor SMS: Worse than its reputation

dools
137 replies
1d5h

A family friend of ours recently fell victim to a phishing attack perpetrated by an attacker who paid for Google Ads for a search term like "BANKNAME login". The site was an immaculate knock off, with a replay attack in the background. She entered her 2fa code from the app on her phone but the interface rejected the code and asked her for another one. In the background, this 2nd code was actually to authorise the addition of a new "pay anyone" payee, and with that her money was gone[0].

I have accounts with 2 banks, one uses SMS 2fa and the other uses an app which generates a token. I had thought that the app was by default a better choice because of the inherent lack of security in SMS as a protocol, BUT in the above attack the bank that sends the SMS would have been better because they send a different message when you're doing a transfer to a new payee than when you're logging in.

So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee. I haven't seen any bank with that level of 2fa yet, has anyone else?
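
For illustration, here is a minimal sketch (purely hypothetical, not any bank's actual scheme) of what a transaction-bound code could look like: the HMAC input includes the action type and its details alongside the time counter, so a code generated for a login is useless for authorising a new payee.

    import hmac, hashlib, struct, time

    def action_code(secret: bytes, action: str, details: str = "", period: int = 30) -> str:
        """Hypothetical transaction-bound OTP: the HMAC covers the time window
        plus the action type and its details, so a "login" code cannot be
        replayed to approve "add_payee". Truncation follows RFC 4226."""
        counter = struct.pack(">Q", int(time.time() // period))
        msg = counter + action.encode() + b"\x00" + details.encode()
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
        return f"{value % 1_000_000:06d}"

    seed = b"seed-shared-at-enrolment"
    print(action_code(seed, "login"))                                  # valid only for logging in
    print(action_code(seed, "add_payee", "BSB 062-000 ACC 12345678"))  # a different code entirely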

I guess perhaps passkeys make this obsolete anyway since they establish a local physical connection to a piece of hardware.

[0] Ron Howard voice: "she eventually got it back"

AegirLeet
34 replies
1d3h

Turns out ads aren't just annoying little acts of psychological terrorism that eat up a lot of bandwidth and computing power, they are also the #1 vector for spreading scams and malware on the web.

In other words: If you're trying to improve your security posture, installing an ad-blocker is one of the best things you can do. If you have less tech-savvy friends and relatives, I would strongly recommend setting up uBlock Origin for them.

slothtrop
29 replies
1d3h

Why isn't any vendor filling the market niche for "safe, non-intrusive ads"? Is it because it's not possible, or not worth the overhead, whether because of cost or because it has no effect on consumer behavior/blocking?

This seems like it ought to be low-hanging fruit. I would have less aversion to clicking on ads if I did not default to it being a security risk.

j5155
10 replies
1d2h

They are rare but do exist; see EthicalAds and Modrinth’s ad program

rsync
9 replies
1d1h

They are still third-party ad networks that require a browser to cross multiple domains, etc. etc. etc.

I am not ideologically opposed to advertisements but I do believe the only safe ads are first-party hosted, served from the same domain.

skrtskrt
7 replies
23h14m

most people publishing a website either cannot or do not care to host the ad server on the same domain, they just want to monetize the site.

things could get a lot better, but this self hosting suggestion in particular will never see wide adoption unless major hosting providers build it and host it for their customers. most people don't even bother to self-host/bundle stuff like their fonts and JS libraries unless they have a JS framework in the loop doing it for them.

JadeNB
3 replies
21h48m

most people publishing a website either cannot or do not care to host the ad server on the same domain, they just want to monetize the site.

That's sort of beside the point, though. The site owner's commitment to running ads is useless unless there are people to view them, and, as long as unsafe ads are ubiquitous, the only safe advice to give to people is that they should run ad blockers everywhere. It doesn't matter that that isn't what the site owner wants to happen.

skrtskrt
2 replies
20h6m

there are plenty of site owners that would voluntarily choose a more ethical ad hosting network if it was a good and easy option.

adding a pain-in-the-ass hurdle like "has to be hosted on the same domain" that 99.99% of people won't see the value of or understand is only going to hurt adoption of the better solutions.

JadeNB
1 replies
15h26m

adding a pain-in-the-ass hurdle like "has to be hosted on the same domain" that 99.99% of people won't see the value of or understand is only going to hurt adoption of the better solutions.

Right, but that's my point—this is not a situation where visitors have to hope that site owners will be responsive to their preferences; rather, visitors are in a position to enforce their preferences via ad blockers, so there's no incentive for them to compromise on matters that, however poorly appreciated or understood, genuinely can affect security.

skrtskrt
0 replies
6m

agree - but that gets to the larger point that mass adoption of anything like this has to be fairly frictionless.

We are barely getting a third of people to use adblockers - you'd have to squeeze the ad server industry a lot more to make them change. How to squeeze them? Get more people to use an adblocker that enforces serving from the same domain. How to get more people to use an adblocker? Make it frictionless, like enabled by default on browsers.

Then by squeezing them, they would be forced to respond by building tooling that makes it more frictionless to serve ads from the same domain, etc.

rsync
1 replies
14h48m

Who said anything about an ad server ?

An ad is a particularly sized JPEG that you place in your images directories… and then point to with an HTML tag.

Everything we tried to build for you was lost once you deviated from that level of complexity.

skrtskrt
0 replies
1h32m

one suggestion more arrogant, ridiculous, and in bad faith than the last

you're now implying everyone hosting a website should pound the pavement to sell their own ads - or use a static export from an ad network and build it into the website themselves? Sure maybe they should but they never will. Dream on.

Everything we tried to build for you

You are a speck of dust in the universe of computing. Get a grip.

bluGill
0 replies
21h37m

Back before the web, most places running ads handled it all in house: salesmen (mostly male), design, and so on. Large buyers (McDonald's) might hire an agency to talk to all the little newspapers, but even the little ones did this in house.

j5155
0 replies
16h57m

Modrinth’s ads are this way.

rurp
7 replies
1d1h

Intrusive ads are more profitable for the ad company, while the costs are largely borne by other parties. A strategy of privatizing the gains and socializing the costs is common in a lot of sleazy industries.

wang_li
6 replies
1d1h

There is zero reason for ad companies or ad networks to be covered by any safe harbor provisions of the law. They should have 100% criminal liability for every malvertisement they send to a user.

WorldMaker
3 replies
21h30m

Ads are a paid transaction and Ad Companies absolutely need to be held liable for the money that they take because of who they take it from voluntarily. Google should be ashamed at all the money they are making from scammers and criminals and other evils. They should have a terrible score at every agency remotely like the Better Business Bureau. They should be tarred and feathered in public opinion. The brand name should already be tarnished by all this Evil across too many years of negligence. Same goes for Meta/Facebook, though they do have some of the tarnish already, more than Google has managed to get to stick. (I think too many people still want to believe the "Do No Evil" lie and its lasting brand propaganda.) Other companies should be wary of working with Google because of that bad reputation. ("No, we won't be using GCP because Google does too much business with criminals.")

Yes, it is hard to scale Terms of Service enforcement. Yes it is a hard problem to solve finding bad actors at scale. That shouldn't be a free pass to just not do it at all. Especially when money is changing hands. If someone is paying you to be a bad actor they are either paying you to look the other way (called a "bribe" in most jurisdictions, and illegal in some of them) or you aren't doing due diligence before accepting bad money (called things like "laundering" and "embezzlement" at scale). "It's hard to scale" doesn't sound like a good excuse for financial crimes, last I checked with banking regulators; in fact scale makes the crime larger. Why should Google or Meta get a free pass in advertising because they don't want to put the work in and take the revenue hit?

gavinhoward
0 replies
16h58m

Google should be ashamed at all the money they are making from scammers and criminals and other evils.

Yes, and so should every person that works for them.

dataflow
0 replies
20h53m

Yes, it is hard to scale Terms of Service enforcement. Yes it is a hard problem to solve finding bad actors at scale. That shouldn't be a free pass to just not do it at all.

What evidence do you have that they are "not doing it at all"?

bigiain
0 replies
18h6m

Google should be ashamed

"Do no evil. Instead enable others to do evil profitably and take a cut off the top."

kevin_thibedeau
1 replies
19h9m

They don't. DMCA safe-harbor covers copyright violations. All it takes is a prosecutor willing to use the CFAA to hold businesses as accountable as people.

leereeves
0 replies
15h27m

DMCA safe-harbor covers copyright violations.

That's true, but Section 230 of the Communications Decency Act of 1996 provides broader (but not unlimited) immunity for user submitted content.

RandallBrown
6 replies
1d1h

I feel like this was one of the original selling points of Google's ads. They were pretty simple, unobtrusive, mostly text, ads.

JadeNB
3 replies
21h50m

I feel like this was one of the original selling points of Google's ads. They were pretty simple, unobtrusive, mostly text, ads.

One of the original factors in the rapid uptake of Chrome was believed to be that the ads for it were the first time an ad appeared on google.com.

rezonant
1 replies
18h55m

I believe they meant on https://google.com (the home page)

Thorrez
0 replies
11h21m

Oh, I didn't think of that. But I don't think that's really true either. There seem to have been ads on the homepage before Chrome:

Google News: https://web.archive.org/web/20021001073516/http://www.google...

Google Calendar: https://web.archive.org/web/20060831050142/http://www.google...

For comparison, Chrome: https://web.archive.org/web/20080904192205/http://www.google...

Now admittedly the Chrome one is a bit flashier. Although I haven't exhaustively gone through every homepage variant before Chrome, so it's possible there was something as flashy before Chrome as well.

Sylamore
1 replies
1d1h

The doubleclick acquisition was the end of that.

joquarky
0 replies
18h56m

Isn't that when the business majors took over?

HL33tibCe7
1 replies
1d1h

Google’s search ads have become explicitly more intrusive and less distinguishable from the real content over time, deliberately and knowingly.

It’s funny that while many parts of Google are making improvements to the web security ecosystem, they are completely ready to throw it all out of the window when it comes to making more money.

EasyMark
0 replies
17h42m

You mean you can’t see the “promoted content” label that is 1 px high?

terribleperson
0 replies
18h49m

It doesn't seem to be profitable, in part because the internet now consists of mega-sites and if your network doesn't serve ads on the mega-sites, no one is interested in your network.

Project Wonderful was a fantastic webcomic-focused ad network. From my perspective as a reader, being shown ads for other webcomics while I'm reading a webcomic was... a positive, really. A lot of webcomic artists ran Project Wonderful ads and nothing else. They shut down in part because of the rise of facebook.

dpkirchner
2 replies
1d3h

It's to the point that even the US government (even with all its faults and lobbying) recommends using an ad blocker for this reason.

darknavi
1 replies
1d2h

Very interesting! Could you link to that recommendation?

DowagerDave
0 replies
22h40m

TIP: I sold my senior mom on uBlock Origin because YT ads are so obnoxious. The added benefits are extra security and performance improvements. She was even able to understand that if something doesn't seem to be working right (like a banking site) "turn it off and try again".

bckr
21 replies
1d5h

Another lesson here is to bookmark/memorize the URL of your bank, and don’t trust search engines to take you to your bank

elric
4 replies
1d4h

This might not be sufficient anymore. Many online payments are rendered either on the shop's pages or on a third party payment provider, including 3DSecure implementations. These don't redirect to any sensible bank URLs.

Both of my banks use a payment flow which uses a hardware authenticator. But only one bank seems secure: it prompts for an amount and a reference and generates an OTP based on that. This is distinct from any other signing operations with the same authenticator. The other bank tells me to enter a 6 digit number (which is allegedly made up out of a part of the amount and a reference), but it is impossible to tell this apart from any other signing operation. It doesn't strike me as too hard to abuse that to either log in to my account, to sign another payment, or even to create a direct debit...

kevincox
2 replies
5h24m

I ran into this. I'm trying to set up an account on wise.com. The way they want me to set up my bank for direct deposit is to type my bank's password into their site! I asked support if there was any other way to do this (for example, the regular institution, branch, and account numbers) and they said no. But they reassured me that despite me typing the password into their site they don't have access to it! (Ok, it was actually a Plaid iframe, but still not my bank. Clickjacking would also be very easy to implement and there is no way for the average user to understand this.)

Then banks wonder why their customers get phished.

yencabulator
1 replies
1h59m

Plaid is a cancer of the payment system. It's amazing how they're trying to normalize entering your banking credentials into their third-party site.

kevincox
0 replies
1h49m

It's not even their site as far as the user can tell. It is a full-screen iframe. At least if it was their site a bank could say "plaid.com is fine". Still bad to make acceptable domains more than one but at least it isn't infinite.

EasyMark
0 replies
17h37m

I have a couple of bills to pay to the city, and the 3rd party payment processor they switched to a couple years back has a site that looks like it was made by a moderately talented 5th grade web developer. I actually called them to verify I had the exact URL correct, and also told them the page looked like it was made by complete amateurs and was so poorly done it was kind of scary.

michaelt
3 replies
1d3h

In the past I've heard people say the opposite - that if less computer savvy people are using google instead of URLs, it's a good thing.

The reasoning was it protects them against typosquatters and whitehouse.com situations. I guess when people were giving out that advice, google wasn't the way it is now.

HanClinto
2 replies
1d3h

Yeah -- this was good logic back in the day.

Now one has to scroll down -- sometimes several links -- before finding a link that isn't an ad.

Maybe this is where encouraging people to use the "I'm Feeling Lucky" button would help, because it should still go to the top non-ad-link?

michaelmrose
0 replies
23h40m

There was a time when the top result for "facebook" was a blog post, which got a deluge of comments from people complaining that they couldn't log onto their Facebook.

joveian
0 replies
20h28m

"Always use a bookmark" has always been the best advice. I'm fairly sure getting a bunch of typosquatting domains is standard practice now for major (particularly financial) sites so typing in the site from a reliable printed source for the first access is fine (particularly since you can be extra careful if you only do it once). For using shared computers, I'd still personally recommend typing from a reliable printed source.

For logins, a major advantage of having browsers save login info is to recognize legit sites because the login can be filled out (though it should be set to require a click on the login form and not just appear). Occasionally sites change in a way that breaks this, but usually it's a one-time switch to a subdomain and can be investigated more closely when it happens.

I think browsers should add a "site bookmark" feature that uses a well-known mechanism to allow all associated sites to be annotated in a way that shows up similar to how EV certificates used to work (but is entered by users). That would make it possible to recognize legitimate links into a site (as long as you annotate the correct site the first time), and there could be an option to be notified when leaving the annotated set of domains for particularly sensitive sites.

Currently the closest is bookmarking the home page, editing the URL to remove everything after the domain, checking that the edited URL is bookmarked (this is fragile since sites change the redirection quite a bit), and then holding the back button and going back to the linked page, although this might not work for additional domains (e.g. support sites are often on a subdomain).

Ideally, the site bookmarks would also annotate search results before they are clicked. While "remember to check if the site is legit" is not ideal, it is a far better situation than "no way to tell if the site is legit". This could also be used to add a standard OTP entry mechanism that binds to a site and gives a warning if the OTP is for a site you haven't given an OTP to before or stored login info for (and shows the site name when you enter the OTP).
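
As a rough illustration of checking whether a link stays inside such an annotated set (hypothetical domain names, and a naive suffix check rather than a proper public-suffix lookup):

    from urllib.parse import urlparse

    # Hypothetical set of domains the user has annotated as "my bank".
    ANNOTATED = {"examplebank.com", "secure.examplebank.com"}

    def in_annotated_set(url: str) -> bool:
        host = (urlparse(url).hostname or "").lower()
        # Exact match or a subdomain of an annotated domain counts as "inside".
        return any(host == d or host.endswith("." + d) for d in ANNOTATED)

    for link in ("https://secure.examplebank.com/login",
                 "https://examplebank.com.evil.example/login"):
        status = "inside annotated set" if in_annotated_set(link) else "WARNING: leaving annotated set"
        print(link, "->", status)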

freeopinion
2 replies
1d1h

I'd like to throw a little blame onto many namebrand websites.

Sites like Digital Ocean try to load dozens of third-party trackers for a single page. Their supposedly secure payment processing includes cross-site violations that are blocked by modern browsers.

When their credit card management pages fail to work with reasonable browser defaults or sane browser add-ons they immediately advise their users to strip out all security protections. You are supposed to just trust content coming from seemingly unrelated domains including multiple processors you may or may not have ever heard of. Paypal? Ok, plausible. Stripe? I guess, but both? Pendo? Sentry? Optimizely? Hexagon? Google Ads? Google Analytics? Six other different Paypal domains? Eight other Stripe domains? Multiple Typekit domains? TagManager? Square? The list keeps going.

Plenty of reasonable protections set off alarm bells left and right. The answer? Disable those protections. Train users to think they are the problem.

freeopinion
1 replies
23h29m

I'd be interested in anybody explaining why a welcome page needs assets from 17 other domains, including paypal and stripe.

https://cloud.digitalocean.com/welcome

Why are multiple payment processors included on this page that doesn't involve any payment?

Domenic_S
0 replies
22h33m

pptm.js is paypal's tag manager, for "Marketing Solutions". Gives the vendor shopper insights (clickthru, etc).

Stripe recommends putting stripe.js on every page to help detect fraud better (https://docs.stripe.com/js/including)

dools
2 replies
1d4h

Also, Google, as a search engine that is also the world's biggest advertising company, really should be able to manage not to sell ads to phishing scammers!

Ekaros
1 replies
1d4h

Maybe if a platform is large enough it should be criminally liable for phishing attacks. I see no reason why Google should not be responsible for vetting each and every link they advertise at the top of their search results.

walterbell
0 replies
1d4h

KY(C)A

alistairSH
2 replies
1d

Or skip the website and use their native app.

DowagerDave
1 replies
22h36m

then you can't block anything

EasyMark
0 replies
17h32m

Why do you need to block stuff on your bank? I do all my banking through their app tbh. If I had to block stuff from the bank then I’d switch banks.

rsync
1 replies
1d1h

Very difficult with most big banks.

In my experience with Bank of America and US Bank they bounce you around to several totally different top level domains as you navigate through the web-based banking.

These are third-party service providers that the banks contract for various pieces of their online infra… And it is a complete mess in terms of conditioning consumers to be phished.

twelve40
0 replies
1d1h

that's true and kind of a joke by now. BofA has at least two parallel bill pay systems (both seemingly white-labeled from someone else?) that keep redirecting through multiple domains; both are barely usable and take forever to load for basic tasks. Security definitely takes a back seat when fighting with their UIs to get anything done.

akira2501
0 replies
22h20m

If Google AdWords even allows a scammer to create an ad for a bank login page then we have a more fundamental problem.

hbn
7 replies
1d4h

an attacker who paid for Google Ads for a search term like "BANKNAME login"

I tried buying Google ads once out of curiosity because they gave me a free credit. It was crazy how many ridiculous stipulations and guidelines I had to work around before they'd accept my ad.

How are they that strict for me, but seemingly they'll sell to a phishing page that's impersonating a bank and targeting it to people searching for that bank?

Ozzie_osman
1 replies
1d4h

Because the impersonator is probably a lot more sophisticated at this than you or I, and it's likely that 999 impersonators were rejected and this is just the 1/1000 who found a way around it.

The system probably produces a lot of false positives AND negatives.

gorlilla
0 replies
23h43m

And even at those failure rates (no matter how anecdotal), economies of scale creep in so a couple billion failures/day still would result in nearly a billion successes per year. The machine never rests and is fueled by creative people from all walks of life from every possible place on earth.

EasyMark
1 replies
17h41m

Don’t they usually put in “legitimate” ads and info then swap out the content afterwards with the spamscam?

qingcharles
0 replies
17h28m

I have an ads account; I don't see them checking I haven't done a switcheroo on the landing page contents. I think I could easily put a JS redirect on the landing page, if nothing else worked.

They are reasonably strict about the keywords though -- I often go into a "verifying" stage when setting up the ads.

jrochkind1
0 replies
1d3h

I'd guess the phishers spent a lot of time and burned some google adwords accounts looking for what would get through any automated checks they had.

aftbit
0 replies
1d3h

I once tried to buy a domain which contained the word "Google" from Namecheap, but I was rejected with an error telling me that I needed to contact support and show that my use of the trademark was approved by Google. So instead I went to Google Domains and bought it from them with no issues.

UncleMeat
0 replies
1d3h

Criminals are incentivized to evade detection. And you only get to observe the successful criminals and none of the unsuccessful ones. This makes it appear like the criminals are getting through the filters trivially. What you don't see is the work they are putting in to get a successful phishing ad up there.

Not to excuse failures, but there isn't a "it is easy for them but hard for me" situation.

rlpb
6 replies
1d4h

So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee.

My understanding of EU regulation is that it effectively requires this by requiring the 2FA to validate not just the identity but also the transaction (such as an amount, or destination account).

Unfortunately it means that all banks use SMS. We did have card reader 2FA that also did this but it's falling out of use because users don't like having to carry a card reader around.

wiredfool
2 replies
1d4h

My EU bank uses an app for consumer accounts. It hasn’t used sms for a few years, except when setting the app up on a new phone/sim.

rlpb
1 replies
1d4h

except when setting the app up on a new phone/sim.

So it does when it needs to authenticate you :)

wiredfool
0 replies
22h46m

The difference is, it’s a pain, has happened twice in 5 years, and I know what triggered it, and it doesn’t happen with every 3d secure purchase or login.

So much less likely to get phished.

bux93
1 replies
1d4h

Yes, the Payment Services Directive requires "dynamic linking" to a specific amount and a specific payee in article 97, and the RTS in article 5 go on to say that the payer should be "made aware of the amount of the payment transaction and of the payee".

The most elegant implementation I saw of this was card readers with a 2D (colored) barcode scanner; the 2D barcode contained transaction details that the card reader would display on its screen. This was an effective control against MITM. But even I myself always misplaced the card reader.

So now, most confirmations are done using the banking app. Even if I use a credit card by filling in its details on a US website, I get a push notification on my phone to confirm the tx on my app.

The app asks for a password or uses biometrics, so that's 1FA, and the app is enrolled at some point, so the token on your phone (I presume in some secure storage) counts as the 'thing you have' for 2FA.

Enrolling the app nowadays usually entails scanning your ID card and a 'live selfie' (blink your eyes). And of course you get notified (via e-mail) that you just installed the app on some device.

namibj
0 replies
23h30m

I preferred the blinky bars; the reader for them is tiny, not locked to an account, battery lasts what feels like forever, and they're cheap enough that you can trivially eat a loss (from forgetting where it is or leaving it in a place where it disappears before you get a chance to collect it).

Maybe just stick an airtag to the back?

riffraff
0 replies
23h18m

Unfortunately it means that all banks use SMS

This is not true. I have used multiple financial services that use different codes for different purposes (Raiffeisen, K&H), or apps that receive a server-sent event and show the transaction for local approval (Wise, Fineco).

renewiltord
4 replies
1d2h

Just use apps. Apple protects.

renewiltord
2 replies
1d

Well, TIL

callalex
1 replies
1d

It's really incomprehensible; whatever revenue Apple is getting from running this protection racket, I mean ad network, must be minuscule compared to the potential damage they cause to their users and brand. But I guess that's next quarter's problem.

renewiltord
0 replies
23h56m

Their ad business is generally booming but the hit to the brand can’t be worth letting these in. OTOH maybe it’s either all in or don’t bother. There’s no way to staff reviews at the scale you need for an ads business to work.

TacticalCoder
4 replies
1d1h

The site was an immaculate knock off ...

Then I can picture a great way, locally, to screw these knockoffs big time.

Either the site is a great knockoff, visually similar (if not identical), or it won't fool people, right?

So what about this: the browser saves, locally, screenshots of the login pages you visit.

Then, when a new login is made, compare, visually, the page to what's saved and see if any saved pages are similar?

"Oops, the page www.banklng.com looks nearly identical to www.banking.com which you visited previously, they're probably trying to scam you!".

Eisenstein
1 replies
1d

Another step everyone will ignore because it isn't a problem for any particular person until it is.

TacticalCoder
0 replies
1d

Another step everyone will ignore ...

Well then enforce it, at the browser level.

recursive
0 replies
1d1h

When a measure becomes a target, it ceases to be a useful measure.

breischl
0 replies
22h43m

I feel like PassKeys and browser-integrated password managers both solve this problem better already. And yeah they're extra things to do, but so is this.

somehelpdeskguy
3 replies
1d4h

"...with a replay attack in the background."

Wouldn't this be MITM?

leni536
1 replies
1d2h

I'm not familiar with the nuances of terminology, but I would expect MITM to only apply when you (and your computer) actually attempt to connect to service A, and a malicious actor X intercepts that communication. Phishing is different in the sense that you connect to the phishing page directly, and it may or may not replay some of your inputs to the actual service it is phishing.

ReK_
0 replies
1d2h

I guess theoretically phishing could be considered MiTM, but the latter term generally implies the attack is fully transparent to the user, whereas phishing convinces the user to insert the malicious party themselves.

dools
0 replies
15h50m

I called this a "replay attack" because it sounds more like this:

"A replay attack in a network communications setting involves intercepting a successful authentication process—often using a valid session token that gives a particular user access to the network—and replaying that authentication to the network to gain access"

Even though this wasn't a session token, it was an authentication process and token, gathered from a fraudulent source and replayed to a valid source.

MITM is:

"A man in the middle (MITM) attack is a general term for when a perpetrator positions himself in a conversation between a user and an application—either to eavesdrop or to impersonate one of the parties, making it appear as if a normal exchange of information is underway."

So to me a MITM would be more like using a wifi access point to access the correct banking URL, but the service carrying the data was acting maliciously.

kardianos
3 replies
1d4h

We have it: FIDO U2F. You could even treat it like the new passwordless managers, with a computer/phone-specific store.

My gut? It actually works, and people didn't like that. Users and orgs like authentication slightly broken so they can work around systems.

armada651
0 replies
1d4h

My gut? It actually works, and people didn't like that. Users and orgs like authentication slightly broken so they can work around systems.

People like authentication systems that are secure enough to keep bad actors out, but not so secure that it keeps legitimate users out. It's got nothing to do with users wanting to break into a system.

aftbit
0 replies
1d3h

I like FIDO U2F as a second factor, although you always need a fallback of some kind in case you are stuck using a device without a USB port. I don't like it as a single factor, as most devices make it hard or impossible to back up your keys. Using Passkeys with Bitwarden is pretty interesting though, and appears to satisfy most of my concerns, as they're just stored in my password manager and move devices with me.

0xbadcafebee
0 replies
1d4h

It only works in a couple of situations and it's difficult to manage. When the site doesn't support it (which is almost all of them), when you don't have USB, when you lose or forget your YubiKey, when you don't have a phone with NFC or lose it, when you can't afford the device, or when it's difficult for the user to set up, it fails. Now you need a different factor to finish logging in, which is probably weaker, so attackers will try to degrade this first factor to force the second weaker one.

It's a nice-to-have but not even close to a universal solution.

Tepix
3 replies
1d3h

Kraken is a cryptocurrency exchange that utilizes (at least) two different TOTP codes, one for login and one for money transfers.

smeej
2 replies
1d

For a long time (still?) Kraken also refused to add SMS 2FA as an option due to its weak security.

I still don't see how that's worse than no 2FA at all, which was an option, but I appreciated that they were banging the "SMS 2FA isn't very secure" drum.

Negitivefrags
1 replies
22h35m

It’s worse in a lot of implementations because SMS is often used as part of a recovery flow in cases where you lose the first factor.

I find it more secure in some contexts to never give a company my phone number at all if possible, so that it simply can’t be used as any kind of authentication no matter what.

smeej
0 replies
58m

Yeah, I'd draw a hard line between "SMS 2FA is better than no 2FA" and "SMS should never become a single-factor recovery method."

I agree SMS should never be an option for single-factor recovery.

schmorptron
2 replies
1d3h

I'm paranoid enough at this point that I check that the Cert authority for my bank is the one I know it to have before I log in on the website.

TacticalCoder
1 replies
1d

What's your process? Where do you save the cert? I'd be interested in automating that.

schmorptron
0 replies
21h14m

Oh nah, I just check the lock icon in firefox, and it's a pretty unusual (and not publicly accessible) cert authority so I'd notice if it's a different one

ptero
2 replies
1d2h

Was this in the US or elsewhere, what was the amount and how long did it take to notice? Just curious.

In the US the bar to pull money out of an account is pretty low. Most banks would allow reasonably-sized transfers out with just routing and account numbers. I was stunned by this, but this is the reason utilities and stores can pull your money without you even talking to your bank. Just give them the info. And that information is not secret, it is printed on your every check.

The flip side is that for those "convenience" and service payments the money is easy to get back: banks, at least traditional ones, will bend over backwards to prevent being seen as enabling fraud.

dools
1 replies
15h54m

Was this in the US or elsewhere, what was the amount and how long did it take to notice? Just curious.

It was in Australia, amount was thousands of dollars, she noticed when she was asked to enter yet another code and all of a sudden it made her snap out of her "autopilot" and take notice and look at the URL and other details. So as soon as she realised that something was fishy, she logged into the correct site, then saw the money was gone.

In the US the bar to pull money out of an account is pretty low. Most banks would allow reasonably-sized transfers out with just routing and account numbers. I was stunned by this, but this is the reason utilities and stores can pull your money without you even talking to your bank. Just give them the info. And that information is not secret, it is printed on your every check. The flip side is that for those "convenience" and service payments the money is easy to get back: banks, at least traditional ones, will bend over backwards to prevent being seen as enabling fraud.

This was a "pay anyone" transfer. So money was being transferred to a bank by BSB/Account number in the background. The bank required a code when a new Payee is added, but the codes were not differentiated, so she was asked for a code to login, then told the code was wrong and asked for another code. In the background the real banking site to which her actions were being replaced had successfully logged in and had initiated a transfer to a new Payee. The real banking site asked the attackers for a code to add the new Payee, the fake banking site asked her for a new code to login.

The thing that really enabled the attack is that the same code generator was used for both codes, without any indication that a different action was being performed.

ptero
0 replies
15h14m

I understand the attack (I heard some high visibility telegram channels were recently compromised by a similar technique).

My point was that most bank transactions are actually reversible for a while after the money supposedly left the account.

brightball
2 replies
1d3h

I still don’t understand why banks just don’t use FIDO2/WebAuthn yet.

I’d much prefer to use a Yubikey over all other options at this point.

ReK_
1 replies
1d2h

Because banks are financial institutions and every decision they make is based in that. If the cost of insurance is less than the cost to actually secure the system, they will choose that every time.

Banks and payment processors have some of the worst technical debt. For example, a lot of transactions are processed using the ISO8583 standard, a binary bitmap-based protocol from the 80s. The way cryptography was bolted onto this was the minimum required to meet auditing standards: specific fields are encrypted but 99% of the message is left plaintext without even an HMAC.
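
For readers who haven't seen it, the "bitmap" part is literally a presence map. A rough sketch of decoding an ISO 8583 primary bitmap (the example value is made up) shows how little structure, let alone protection, the base layer provides:

    def present_fields(bitmap_hex: str) -> list[int]:
        """Decode a 64-bit ISO 8583 primary bitmap (16 hex chars).
        Bit i set means data element i is present; bit 1 set means a
        secondary bitmap follows. Nothing at this layer is encrypted
        or integrity-protected."""
        bits = int(bitmap_hex, 16)
        return [i for i in range(1, 65) if bits & (1 << (64 - i))]

    # Made-up example bitmap, not taken from a real message.
    print(present_fields("7230001000000000"))  # -> [2, 3, 4, 7, 11, 12, 28]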

timr
0 replies
20h16m

I don't work at a bank, but I do work in fintech, and this strikes me as excessively cynical. The reason banks are slow about this stuff is not necessarily because "it's cheaper" (though maybe it is), but because the complexity of any change is simply off the charts: money-related logic must work correctly, to a far higher standard than almost any tech company. It makes you conservative, in the same way that demanding 99.999% uptime is exponentially harder than demanding 99%, and makes moving quickly essentially impossible.

(Also, of course, they're probably working on COBOL stacks that were written in 1978.)

For a bank, pile on top of that mountains of (often conflicting) regulatory review, such that just about any change sounds the alarm for armies of nearby lawyers to swarm upon you and bury you in paper. All it takes is 0.1% of annoyed users filing complaints that they can't access their accounts, and you might well be looking at a steep fine, a class-action lawsuit, or worse.

BriggyDwiggs42
2 replies
1d4h

This is a great example of why a search engine shouldn’t overtly let people pay them to alter rankings lol.

account42
0 replies
1d3h

I am continually surprised that in a country as litigious as the US, companies can continue to sell advertising space and then just shrug when the buyer uses that space to defraud someone.

EasyMark
0 replies
17h31m

And why ads should be to the side at the very least and not mixed with search results

2Gkashmiri
2 replies
1d4h

How did entering a login and a 2fa code do the following things?

1. Login

2. Add payee

3. Create transaction

4. Verify transaction

This appears to be a banking issue where they do not try to minimize the attack surface.

Sure, people will try to game the system by phishing, but it's the responsibility of banks to actively make it harder

twelve40
0 replies
1d3h

Yep. A few simple steps, like an extra SMS (or email) code to add a recipient and an email notifying about the change, are not perfect but would make this harder to pull off. Not sure what a '"pay anyone" payee' is; I don't think it's a thing at my bank. They could try to scrape the account number though; I think in the States that may be enough to try to debit someone's account.

dylan604
0 replies
1d3h

I read it as the first 2fa code was used to login, then the system quickly attempted to add this new payee, which required a second 2fa code, so the phishing site quickly asked for another code, claiming the first was rejected.

whodev
1 replies
1d4h

This almost happened to my S/O. Luckily I had set up NextDNS to block newly registered domains along with a list of uncommon TLDs so the site got blocked.

TacticalCoder
0 replies
1d1h

Luckily I had set up NextDNS to block newly registered domains along with a list of uncommon TLDs so the site got blocked.

I go further: I generate tens of thousands of variants of all the "sensitive" websites we use (like banks and brokers).

All the "levenshtein edit distance = 1" and some of the LED = 2. All variation of TLDs, etc.

I blocklist most TLDs (now that most are facetious): the entire TLD. I blocklist many countries both at the TLD level and by blocking their entire IP blocks (using ipsets).

For example for "keytradebank.be", I generate stuff like:

    # Generated by typosquat.clj for keytradebank.be (9809 entries)
    0.0.0.0 keeytraebank.be
    0.0.0.0 kebtradebank.be
    0.0.0.0 kytradebani.be
    0.0.0.0 keytrxdebak.be
    0.0.0.0 kewytadebank.be
    0.0.0.0 keytgadbank.be
    0.0.0.0 aeytradeank.be
    0.0.0.0 keytradebsan.be
    0.0.0.0 keymradebnk.be
    0.0.0.0 kytradeb9nk.be
    0.0.0.0 ketrade-bank.be
    0.0.0.0 keytradbeban.be
    0.0.0.0 eytradebafk.be
    0.0.0.0 keytraebank.ee
    0.0.0.0 keytrad3bak.be
    0.0.0.0 keytradebzn.be
    ...
I don't care that most make no sense: I generate so many that those who could fool my wife are caught by my generator.
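
The script itself isn't shown above, but the idea is simple; a rough Python equivalent of the edit-distance-1 part (not the author's actual typosquat.clj, and varying only the first label, not the TLD) might look like:

    import string

    def edit1_variants(domain: str) -> set[str]:
        """All Levenshtein-distance-1 variants of a domain's first label."""
        label, _, rest = domain.partition(".")
        alphabet = string.ascii_lowercase + string.digits + "-"
        splits = [(label[:i], label[i:]) for i in range(len(label) + 1)]
        deletes    = {a + b[1:] for a, b in splits if b}
        transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
        replaces   = {a + c + b[1:] for a, b in splits if b for c in alphabet}
        inserts    = {a + c + b for a, b in splits for c in alphabet}
        variants = (deletes | transposes | replaces | inserts) - {label}
        return {v + "." + rest for v in variants if v and not v.startswith("-")}

    # Emit hosts-file entries like the excerpt above.
    for v in sorted(edit1_variants("keytradebank.be"))[:8]:
        print("0.0.0.0 " + v)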

I then force the browser to use the "corporate" DNS settings: DoH/DoT is forbidden in the browser, so it has to go through the LAN DNS. I can still use DoH/DoT upstream of that if I feel like it.

So any DNS request passes through the local DNS resolver (the firewall ensures that too).

My firewall also takes care of rejecting any DNS attempt to internationalized domain names (by inspecting packets on port 53 and dropping any that contains "xn--"). I don't care an iota about the legit (for some definition of legit) "pile of poo heart" websites.

My local DNS resolver has 600 000 entries blocked I think, something like that.

I then also use a DNS resolver blocking known malware/porn sites (CloudFlare's 1.1.1.3 for example).

So copycat phishing sites have to dodge my blocklist, the usual blocklists (which I also put in my DNS), then 1.1.1.3's blocklist.

P.S: some people go further and block everything by default, then whitelist the sites they use. But it's a bit annoying to do with all the CDNs that have to be whitelisted etc.

paholg
1 replies
1d2h

I'm curious if the different SMS message would have mattered in practice.

I for one don't ever read those messages, and Android at least will usually copy the code for you making them even easier to ignore.

dools
0 replies
15h48m

I read those messages. The ones from one of my banks that uses SMS and differentiates them say "your code to do BLAH is BLAH". I was actually saved from phishing once because my credit card company included the vendor and the amount in the transaction SMS, and it was for a different site and a much larger amount than what I thought I was spending.

joncrocks
1 replies
1d4h

the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee.

I think at least some UK banks will do this. When I've done it using a card + card reader, you select the option to choose which type of operation you're trying to do. And if you're just trying to login it just displays a rolling code, but for authorisation of particular events it will take the form of a challenge/response, i.e. you have to select the operation on the card reader + enter a code provided from the site. This should I think prevent _simple_ replay attacks.

I even think for some transactions such as transfers over a certain amount, you have to enter the amount into the reader as part of the code generation.

arp242
0 replies
1d3h

Yes, my AIB card reader works like this. When transferring money to an unknown account I also need to enter the amount and "sign" that with the card reader. For adding a new payee it's a challenge/response.

gcr
1 replies
23h32m

Your solution wouldn’t have prevented the attack you describe unless the user can immediately tell the difference between login 2FA codes and “new payee” 2FA codes and knows not to enter one code into the wrong form.

dools
0 replies
15h53m

Well, that's what I'm saying. When I get an SMS from one of my banks for example it says "your code to transfer X to Y is ABC" or "Your code to add a new payee is ABC". In this case she had a code generating app, but the codes were not different for login versus other high risk actions. The same is true for my other bank, which has a code which you use to login, and the same code generator, with no distinction, is used for example when you make a large BPay transaction.

Tuna-Fish
1 replies
1d3h

The way both my banks work is that I log into the bank, do something that requires confirmation, and then I need to go open my app to confirm it, and it shows all the details for what exactly I'm confirming in the app.

dsego
0 replies
1d1h

I believe it's the 3-D Secure protocol; my bank has it, usually with a push notification or a token.

EGreg
1 replies
1d3h

I never understood why these sms confirmations don’t tell you what you are actually confirming.

They should also tell you when some major change was made.

Seems so silly!

arp242
0 replies
1d3h

When I tried to pay on a website a while ago I kept getting "unknown error". Fast forward about an hour waiting in the helpdesk phone queue, and turns out you need to set up a special password for that. This is not an "unknown" error, it's a known error... Why can't it just show me? Sigh.

I wonder how many people they've needed to "help" with this. Yes, I know there's tons of old code in many banks, but they would have saved money if they had a single developer work on this full-time for a month or something. Support people may be cheaper than devs, but they're not free.

weberer
0 replies
1d3h

The Nordea ID app tells you if you're verifying a purchase and shows how much you'll pay.

solardev
0 replies
1d

FYI, passkeys do not require anything in hardware. You can connect them to software-only password managers like 1Password or Bitwarden.

Where they are nice though is that they are also tied to a specific origin (domain), so a phishing site can't ask for the real passkey. But I've never seen a passkey be a primary source of authentication, so they can always fool the user into falling back to some weaker auth (email reset or 2fa).
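
For the curious, that origin binding is also verified server-side during WebAuthn assertion checking; a heavily simplified sketch of just the origin check (real verification also validates the challenge, type, RP ID hash and signature, and the names here are hypothetical):

    import base64, json

    EXPECTED_ORIGIN = "https://www.banking.com"   # hypothetical relying party

    def origin_ok(client_data_json_b64url: str) -> bool:
        """The browser records the page origin in clientDataJSON; a response
        produced on a look-alike domain fails this check."""
        padded = client_data_json_b64url + "=" * (-len(client_data_json_b64url) % 4)
        client_data = json.loads(base64.urlsafe_b64decode(padded))
        return client_data.get("origin") == EXPECTED_ORIGIN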

shortrounddev2
0 replies
1d3h

I've noticed my browser has started recognizing URLs that look similar to legit URLs of bigger companies and then warns me that the site is likely a phishing site. Sometimes it gets false positives for URL shorteners (like goo.gl instead of google.com)

rexf
0 replies
21h57m

just this week, I clicked on the 1st search result ad for "amazon" in google search. It led me to a windows-themed "Virus detected" amazon clone. I'm not using Windows. I was able to close the tab, but it left a bad taste in my mouth for google search results.

(I know I could have just typed "amazon.com" and gone directly. But browser autocomplete makes it a tiny bit easier to use the omni-url bar and just type "amazon" than "amazon.com")

renonce
0 replies
1d3h

My bank app asks for different tokens for different operations. A code for login, a code for transfers (the code needs to be generated with the payee account number as input). So it’s not a problem of tokens vs SMS.

recursive
0 replies
1d1h

If the business model of your search engine is based on ads, your (search user) relationship with them is fundamentally adversarial. Ad blockers will get you some temporary respite, but they don't change that underlying nature.

This is an observation from a happy Kagi subscriber that doesn't use an ad blocker.

michaelmrose
0 replies
23h43m

I wish we could break people of the habit of searching for websites that they visit all the time and using search results to navigate to them.

Maybe a secure browser profile that blocks search engine usage and can only visit sites in bookmarks or a whitelist, so if you get a new bank and it's not on the common whitelist you have to explicitly add it to your bookmarks.

Use your Chrome Secure Profile(tm) for banking, and refuse to auto-complete payment info on the insecure side.

marcosdumay
0 replies
20h21m

Instead, the 2fa app should show you the action you are authenticating, just like the SMS version.

But actually, we have put way too much stuff on the (inherently transient) web. What solves your problem is permanent client-side storage. Your friend shouldn't reach the bank through a google search.

macrael
0 replies
21h11m

Passkeys or FIDO hardware tokens are the solution, as written up by Google ages ago: the credential is bound to the legitimate site's origin, so the browser won't use it for a phishing URL.

lolinder
0 replies
1d2h

It's worth reminding your loved ones that the FBI specifically recommend using an ad blocker in search engines to avoid exactly this kind of scam [0].

Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others.

[0] https://www.ic3.gov/Media/Y2022/PSA221221

ikekkdcjkfke
0 replies
1d

ADS ARE THE PRIMARY WAY ELDERS GET DEFRAUDED

idontwantthis
0 replies
1d1h

That’s fun to read about because my bank forces you to use their integrated 2fa which always rejects the first code you put in.

funmi
0 replies
1d1h

So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee. I haven't seen any bank with that level of 2fa yet, has anyone else?

HSBC actually has this. All of their country-specific apps allow you to generate a different security code depending on whether you want to login to the website, verify a transaction (e.g. transfer funds to payee), or re-authenticate (e.g. to change your personal info, like your phone number).

Here's a screenshot of what that looks like on their Australia app (similar screens in their US and UK apps): https://www.hsbc.com.au/content/dam/hsbc/au/images/ways-to-b...

They've had this for years. I'm not quite sure why this isn't a standard yet, or at least why it hasn't been adopted by other US banks.

dtx1
0 replies
1d2h

So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee. I haven't seen any bank with that level of 2fa yet, has anyone else?

My local German bank uses an app specifically for 2fa. When I log in I have to approve the login within the app and the website redirects automatically. It shows me whether I am approving a login or a transaction, with all the transaction details. Since I don't enter my second factor into the browser, a replay wouldn't be possible and it would be VERY obvious to spot the difference between approving a login and approving a transaction. German Sparkasse, for those that care.

aidenn0
0 replies
19h6m

I was saved from a phishing attack by using a password manager that refused to auto-fill my password since the hostname didn't match.

aareet
0 replies
1d4h

So really the ideal is not just having an app that generates a token but one that generates a specific type of token depending on what type of transaction you're performing and won't accept, for example, a login token when adding a new payee. I haven't seen any bank with that level of 2fa yet, has anyone else?

Some banks in India have a separate “transaction password” that’s required to operate on the account vs just login and view balances. It’s not a rotating token, but it’s somewhat close to what you’re suggesting.

kkfx
19 replies
1d7h

Modern auth, invented just to push the mobile + cloud model, is DISGUSTING. We have had smart cards for decades for various things, from payments to IDs. Why the hell not keep building readers into keyboards and laptop bodies, sell cheap desktop USB readers, and teach people to use them? Simply because with them there is no way to force mobile computing that allows some third party to snoop a bit on end users' lives.

I hope one day people will understand and IMPOSE an end to such crappy, unsafe practices.

worksonmine
16 replies
1d7h

Many people today don't even own a computer and do everything on their phones. Teaching the masses safe habits rather than convenient ones is a difficult problem; most don't care.

Hamuko
10 replies
1d7h

You can use Yubikeys, which are basically the modern and better version of "smart cards", on phones and tablets just fine. I have a Yubico Security Key on my keychain and I can use it on my iPhone with NFC or with my iPad using USB-C.

kkfx
7 replies
1d6h

You need to buy one, though. Meanwhile your bank (typically) already gives you a card you can use as-is to authenticate with them. Your country probably has some e-documents already, so no extras are needed to authenticate to public sector services and so on.

The point is offering something already usable and building people's habits around that. Afterwards we might add YubiKeys for generic services like Gmail and so on.

Hamuko
6 replies
1d6h

I have zero clue as to what you're talking about. And what card am I getting from my bank?

kkfx
2 replies
1d5h

A bank card to pay for stuff, which is a smart card, NFC capable, that you can use (as is common in various EU countries) to authenticate yourself to your internet banking.

Similarly, various countries offer eIDs (the ones I know of: Estonia, Belgium, Italy, Germany, France) which are NFC ISO 14443 A/B cards used to authenticate the citizen to various public services.

Many universities, and some high schools as well, offer an NFC badge which is a smart card and could be used to authenticate to the institution's website and so on.

All those examples have already been in use for years, but for limited activities and mostly without being advertised. It's just a matter of spreading them. In Italy, for instance, the national eID card (CIE) has for some years been used to access fiscal services, for example to send your filled-in tax forms or pay some taxes, while the national health service card has been used for much longer to buy tobacco from automatic vending machines (to prove you are over 18). France started the same last year with FranceConnect+, which, like the Italian and German systems, is part of the pan-European eIDAS framework for offering digital documents and services to all. Yet in most cases countries have invented absurd systems to AVOID using eIDAS with smart cards, even though we all have them. Only to push the "app" cloud+mobile model.

Hamuko
1 replies
1d4h

My Visa card definitely doesn't work for any online bank authentication in Finland. It's strictly for payments. For authentication, it's user ID + PIN with a paper two-factor, or user ID + phone authenticator. Some banks also have physical two-factor hardware.

kkfx
0 replies
1d2h

Well, in Germany, the Netherlands, and Belgium, Visa and Mastercard cards do work that way, so I imagine it's just a matter of choice on the bank's side. In Italy RSA tokens (a small key chain with an LCD display) were fairly common as another option, and some banks have addressed PSD2 article 5 with a captcha after the OTP for transactions (e.g. Unicredit); a few have chosen more complex OTPs with a camera reading a QR code, but those are simply too expensive to become widespread. In France, curiously, most banks still do not use a second factor, allowing login with just ridiculous "randomly sorted" virtual keyboards to defeat keylogging. I guess the world varies, but I'm also sure enough that Finland has some eIDAS eID document which can be used like bank cards.

hocuspocus
2 replies
1d6h

I assume your bank gives you a debit card. And many government IDs have NFC chips nowadays.

Hamuko
1 replies
1d4h

Pretty sure that neither my Visa credit/debit card nor my passport works for any kind of digital authentication. I think you can specifically get an ID that works as a smart card, but since you need not just the specific ID card but also a reader + faffing about, adoption is super low.

hocuspocus
0 replies
1d4h

Parent's point is that the hardware is perfectly able to identify you, but we choose not to.

In 2024 having a card reader is indeed not that great, but I still have the one given by my bank ~20 years ago, as it's a strong factor which I can use to set up weaker second factors (typically push notification to the mobile app, nowadays).

We could imagine several ways people link their real, physical government ID to a trusted device. Every smart phone has had a built-in security key for the past 5 years or so. Banks have to check your ID at some point due to KYC. We could kill multiple birds with one stone.

jltsiren
1 replies
1d6h

Unfortunately physical keys are becoming obsolete in many places, and people are no longer routinely carrying their keychains around.

hocuspocus
0 replies
1d4h

People carry their smartphone around though.

I keep my Yubikeys in a drawer at home and use my phone as day-to-day security key.

buccal
4 replies
1d7h

NFC/RFID in many mobile devices allows for interfacing smart cards very similarly to wired connection.

commandersaki
3 replies
1d7h

Yeah, but it's a tradeoff in usability: that's an accessory that needs to be provisioned and carried around.

kkfx
2 replies
1d6h

How many have a smartphone with a cover able to hold cards? How many have a wallet in their pockets? Where is the trade-off in usability? It means having a single PIN and a card to access various services, instead of passwords and copy-pasting OTPs or doing something similar with crappy, dysfunctional apps.

commandersaki
1 replies
1d6h

How many have a smartphone with a cover able to hold cards?

I use a wallet that holds cards, but that's not common or popular. And are you seriously suggesting that we insert this thing into our phones? That would probably mean dislodging the phone from its case, wallet or not, and aligning the card into a slot, not to mention how much space a slot would consume in a smartphone. You and maybe a very tiny cohort want this; the general public doesn't, especially for the marginal security benefit. Anyway, as others say, the modern equivalent is NFC, but again, getting everyone to buy and carry an accessory is asking too much. Modern smartphones already have modern security and in recent years have been exposing their security coprocessor to the OS.

kkfx
0 replies
1d5h

no need to "insert" most smart cards nowadays are NFC and most smartphones have a reader built-in in their battery so all you need is just flipping the "book cover" to allow reading, even without extracting it. On a desktop having a small usb flat reader or one built-in in the keyboard (common two decades ago in various setup, for contact based smart cards back then) or one aside the touchpad area in a laptop could provide the desktop part.

I use this routinely to declare my taxes, for instance, with a small desktop card reader (ReinerSCT CyberJack) configured as a "security device" in Firefox: put the card on the reader, open Firefox, go to the relevant website, click the eIDAS login, enter the national ID card PIN, and you're in. One PIN for all public-sector services, no apps needed, no regular password changes, and so on.

stavros
0 replies
1d7h

We do do that, it's WebAuthn/passkeys.

intelVISA
0 replies
1d4h

We have that with FIDO2, unfortunately there is too much $$$ to be made perpetuating the problem, propping up adjacent ecosystems like cloud and leaky auth apps.

elric
18 replies
1d4h

I've long suspected that companies which force SMS 2FA don't really care about security, they just want your phone number, and 2FA is a convenient bit of security theatre to make you give it to them.

Rygian
4 replies
1d3h

Or they were using 2FA by email until an auditor told them "that's not 2FA" at which point they realized that their middleware to send notifications supports SMS as well as email.

aftbit
3 replies
1d3h

I don't quite understand that. It's not like sending an SMS to my phone is any more secure or harder to access than sending an email to my phone. Additionally, many seem to want a "real phone number", not a VoIP number like Google Voice.

Meanwhile treasurydirect.gov still just uses a verification code via email. If it's good enough for the Treasury, it's probably good enough for a bank.

pwg
1 replies
1d

It's not like sending an SMS to my phone is any more secure or harder to access than sending an email to my phone.

It's not, but much like 'fax' hanging on in the medical environment because it has been labeled "secure" by the regulations, there is a line in some regulation rule somewhere that labels "SMS" as "secure" but does not label "email" as "secure", and because they do the minimum to meet the regulation, they go with "SMS" and go on about their day.

mjevans
0 replies
22h26m

Important distinction.

'Fax' is 'exempt', not secure. So of course many places take that relief valve, even though the service is a poor fit, poor quality, and horridly insecure. Even when it works right, the records aren't in a sealed envelope but just sitting in the output tray of some printer somewhere for anyone to see!

mewpmewp2
0 replies
1d2h

You can access e-mail from outside of your phone, but usually not SMS, unless it's synced with the cloud. If your e-mail gets hacked, then all of your e-mail-based 2FA everywhere would be useless.

rafram
3 replies
1d4h

That’s definitely part of it. Phone numbers are the new SSNs - unique identifiers that never change and connect you across services - except you also hand them out to everyone you meet. One might say it seems like a bad system!

dylan604
1 replies
1d3h

Since COVID, I've had 3 new numbers. I'm sure that's an edge case, but it happens. My second number came when I brought my own device to a pre-pay plan on a new carrier that said my number could not be ported. Then, when I upgraded phones, the pre-pay number was not eligible for carrying over to the new device.

I know I'm not the first person to be unable to port a number, so calling a phone number something that never changes is a bit skewed

rafram
0 replies
15m

Yeah, I didn’t mean that they never change in reality, but that they’re treated as if they never change. (Same with SSNs.) I can only imagine how many hundreds of services I’d lose access to if I lost my number. Hours and hours talking to customer service.

lolinder
0 replies
23h24m

the new SSNs - unique identifiers that never change and connect you across services

And just like SSNs both the "unique" and the "never change" are only true of the spherical cow version of the system. Phone numbers are actually substantially worse at being unique and unchanging, what with people in families sharing a phone or trading phone numbers, people forgetting to transfer the number when switching carriers, people intentionally switching numbers in an attempt to end spam calls... The number of ways to break the assumed invariants is actually quite high.

See Falsehoods Programmers Believe About Phone Numbers [0].

[0] https://github.com/google/libphonenumber/blob/master/FALSEHO...

nolist_policy
3 replies
1d3h

No, most companies actually want your phone number for spam prevention.

I think the contribution of Spammers to the decline of the Internet is underrated.

arp242
2 replies
1d2h

Only by those who never worked on these kinds of services. Running something like a webmail service is being flypaper for dickheads. As soon as you gain any sort of popularity you will learn some very hard and sharp lessons about the lengths spammers will go to in order to abuse your service.

First rule of designing anything: "if some cunt can make a buck by completely fucking over your system then that cunt will completely fuck over your system because that cunt is a cunt."

lolinder
1 replies
23h22m

You don't even have to be running a webmail service, the instant you use any service to send an email with even one user-controlled field (even something as innocuous as their name) you already have a problem.

arp242
0 replies
17h47m

Yeah, I meant "webmail" in the broadest possible sense. And it's even broader than that: anything that allows making anything public really: from forum comments to Instagram to WordPress sites to Wikipedia.

Remember that for about ten years there was a person who consistently and frequently inserted images of ceiling fans into random articles.

brandon272
2 replies
1d2h

They force SMS 2FA because it is a lot more frictionless to assume that your users have a phone number than to assume that they have a 2FA app installed on their phone and know how to manage those tools. It's also easier to support.

ryandrake
1 replies
1d1h

Ugghhh, "frictionless" as if we're talking about logging into Candy Crush here. Are the "Growth Hackers" infiltrating banking apps now? I don't want my bank software to be frictionless. I want it to be secure.

brandon272
0 replies
1d

The person I was replying to said "they just want your phone number", which to me implies that we're not talking about 2FA at the level of banking apps, as the bank already has your phone number, among plenty of other details. Most banking apps I have used also do not use SMS 2FA.

calfuris
0 replies
1d1h

I've seen it forced on non-public-facing systems where the company already has everyone's phone number, so that can't be the only reason.

aftbit
0 replies
1d3h

They often want your phone number for anti-fraud or anti-sybil reasons. If they have free accounts, requiring a phone number helps prevent you from creating a new account to evade a ban and makes it easier to link bad behavior across accounts.

didntcheck
18 replies
1d5h

And unfortunately almost every bank forces me to use them, because their apps refuse to run on my rooted phone. Nice security win there!

jdiez17
8 replies
1d5h

Hot take: rooted phones are inherently less secure. That does not include GrapheneOS btw, since you don't have root privileges on an official build of GrapheneOS.

hiatus
2 replies
1d4h

Hot take: rooted phones are inherently less secure.

My computer is rooted, making it inherently less secure than my phone, yet I have no trouble accessing my bank website. What threat is a bank protecting against by disallowing app usage on a rooted phone?

twelve40
0 replies
1d2h

great question! probably historical reasons:

* computers have always been "rootable", so the banks can't do anything about that

* phones work with "apps", which are viewed as more dangerous than websites. So they came up with the concept of app curation (monitoring large appstores for lookalikes and viruses), and by rooting/sideloading you are violating that model.

* Repackaging a legit app into a malicious lookalike is relatively easy on Android, but harder to distribute if you combat rooting/sideloading.

* if your phone is rooted the bank may be concerned that you could be more susceptible to installing dangerous things, including apps that intercept your 2fa.

You can argue whether these points have held up over time (or whether they make things more secure), but that seems to be why they do it. It costs them relatively little to try to combat rooting, but they're potentially liable for losses if people get phished/hacked, so...

throwaway290
0 replies
1d4h

What threat

The threat to the majority. Far fewer people own a computer than a phone, and those who do are much more tech-savvy.

lucianbr
1 replies
1d3h

There's always a root account, the only issue is who has access to it.

So... phones where a corporation has root are more secure than phones where the owner has root, you say? Secure for whom? For the user? Seems obviously wrong. It's more secure for someone else to have power over you?

Again, you're just a few words from "Freedom is slavery".

jdiez17
0 replies
1d

So... phones where a corporation has root are more secure than phones where the owner has root, you say?

You're putting words in my mouth that I explicitly rejected when I said "that does not include GrapheneOS". Just to prevent the follow up "well actually GrapheneOS is an organization": they don't have any kind of root access to GrapheneOS phones. The only thing they can do is push system updates, which you can (1) reject and (2) verify if they are the same updates being pushed to all users, to avoid targeted attacks.

Secure for whom? For the user? Seems obviously wrong. It's more secure for someone else to have power over you?

Yes, secure for the user. Sure, power users that very carefully review any system mods they install with root powers would have the same level of security as with a non-rooted phone. But most people won't read the source code of root apps/extensions they install.

It's easier to tempt mobile phone users to install "cosmetic improvement/customization whatevers" that happen to require elevated privileges, than desktop Linux users. It's well known that many Android apps bundle near-malware that slurps all data possible, and will ask for root privileges if that is detected.

The fact is that mobile phones tend to contain more sensitive data than desktop computers (and are thus significantly more secure by default than Linux/Windows computers). Contacts, private messages, photos, etc. It's a more valuable target, so more effort is put in developing malware for phones.

hedora
1 replies
1d4h

"Less secure" depends on your threat model.

I'm much less worried a hypothetical attack where I accidentally give sudo access to a malicious app than I am about the well-established ongoing attacks where Google violates the entire population's privacy, or the regular stream of malware that makes it into the official app store.

jampekka
0 replies
1d3h

Hotter take: if you don't have root, you've been pwned.

crazygringo
7 replies
1d2h

That is a security win.

On a rooted phone, you've made it possible for other apps to spy on and steal your banking information.

Bank apps not running on phones where security has been compromised seems entirely reasonable.

didntcheck
3 replies
1d1h

Only if I grant them root, which I'd only do to a very small number of open source apps

I instead have to use my desktop web browser, and desktop operating systems have a far worse security model than Android. No special permissions are generally needed to capture the screen, capture/inject keystrokes, or open .mozilla/whatever/cookies.sqlite

So my phone is still the significantly more secure environment. The fact that I have the ability to grant root does not make it "compromised"
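
To make that concrete: on a stock desktop Linux setup, any program running as my user can read the browser's cookie store outright, with no prompt or permission. A minimal sketch of that point, assuming a default Firefox profile location (purely to illustrate the permission model, not a how-to):

    import glob, os, sqlite3

    # Any process running as the logged-in user can open the cookie database directly;
    # there is no permission prompt and nothing comparable to Android's per-app isolation.
    for db in glob.glob(os.path.expanduser("~/.mozilla/firefox/*/cookies.sqlite")):
        con = sqlite3.connect(f"file:{db}?immutable=1", uri=True)  # read-only, works while Firefox runs
        for host, name in con.execute("SELECT host, name FROM moz_cookies LIMIT 5"):
            print(host, name)
        con.close()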

crazygringo
2 replies
1d1h

Only if I grant them root

But that's exactly the point. The bank doesn't know what you've granted root. It doesn't know if you're a security researcher, or somebody installing pirated apps with spyware.

The bank can't enforce that on desktop web browsers, but at least it can on mobile.

oneshtein
1 replies
23h9m

Nope, they cannot enforce that on mobile when I have root.

crazygringo
0 replies
22h44m

Then why did the root commenter say:

because their apps refuse to run on my rooted phone

S201
2 replies
23h39m

Bank apps not running on phones where security has been compromised seems entirely reasonable.

I have root access on my laptop and I log in to my bank's website just fine. Making apps not run on rooted phones is just perpetuating the cycle of forcing users to comply with the restrictions placed upon them by Apple and Google. Root access != less secure. It means control over the device you paid for and own.

solardev
0 replies
18h32m

...and you're probably less safe as a result. In the 90s and early 2000s, running as root (admin) was the Windows default for home computers, and that's why we had such a malware and spyware problem then. It wasn't until UAC limited user and app permissions on purpose and Windows Defender became standard that it began to get better.

Root access for you means you have control, sure. But it often does mean you're less safe too, depending on your OS's security model and what other apps can run as you. That's why limited sudo and other "root ish, but only in small doses" models were made. And that's assuming you know what you're doing.

For Jane Grandma, root of any sort means power she'll never need and a footgun to lose her life savings with. It's a good thing mobile phones protect ordinary users from themselves. Most people don't need root access any more than they need the ability to reprogram the ECU on their car.

Besides, on a rooted phone, I thought there were already ways to fool an app into thinking it's not rooted...? Or did they change that?

lmz
0 replies
23h20m

I don't think the root permission ban is for the website. In most cases it's about how your phone + the bank's app has become the new hardware token / key generator. Before smartphones I could log on to the bank's website but any transaction will have to be authenticated using a hardware token (presumed secure). That's moved into an app now.

diego_sandoval
0 replies
1d2h

At least you have an alternative.

In my country, almost all banks force the use of app 2FA without SMS as an alternative.

If I don't want to buy and carry an extra phone around, I'm limited to using the one bank that doesn't require it.

0xbadcafebee
12 replies
1d4h

I can't think of any reason why we should not make password managers mandatory for all web authentication today, with the password manager being the 2nd factor.

Your desktop, laptop, tablet, and phone can all share a password manager. They work offline and online. Passwords generated are unique, breaking password reuse attacks. Password managers support auto-filled TOTP codes per-login. They support passkeys. There's password managers built into browsers in addition to the 3rd party ones. There are personal, family, and enterprise options. They could be installed as a system service to isolate them from userland attacks. They support advanced functionality like SSH keys, git signing and biometrics.

If you're a stickler about having a completely independent factor from your desktop/phone/etc, password managers could be used with different profiles on different devices, and allow several easy ways to pass an auth token between devices (via sound, picture, bluetooth, network, etc), ensuring an independent device authenticates the login to avoid malware attacking the password manager.

We already have the tools to do something way more secure than SMS, and it's already on most of our devices/browsers. We just have to make it the preferred factor.
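
For reference, the TOTP codes a password manager auto-fills are nothing exotic: an HMAC-SHA1 over the current 30-second time step, per RFC 6238. A minimal sketch (the base32 secret below is a made-up example, not from any real account):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)   # current time step
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                                 # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # matches any authenticator app seeded with the same secret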

amluto
4 replies
1d4h

I can't think of any reason why we should not make password managers mandatory for all web authentication today, with the password manager being the 2nd factor.

A password manager is, in essentially every respect except interoperability, inferior to WebAuthn. Let’s not make an inferior solution mandatory when we already have a superior solution.

jampekka
3 replies
1d3h

Let’s not make an inferior solution mandatory when we already have a superior solution.

With a slight caveat that it doesn't work. At least not on Linux without some proprietary junk dongles or their emulators.

mixmastamyk
2 replies
1d2h

Huh, can you be more specific? I thought I was using this on Linux with bitwarden. Is a yubikey “junk?”

jampekka
1 replies
1d

Software/OS passkeys weren't supported on Linux, at least not well enough for GitHub, when I last tried. Per a web search, they still aren't.

Stuff that I could do without, like a yubikey, is junk in my books.

mixmastamyk
0 replies
23h15m

Bitwarden supports it, though it may be in beta. Passkeys are usually controlled by corporate-controlled devices, which is why I haven't used them yet. See:

https://news.ycombinator.com/item?id=39698502

Keeping the storage separate from the device (or not) may not be important to you, but hardware keys are useful to some for exactly that reason.

jampekka
2 replies
1d3h

I can't think of any reason why we should not make password managers mandatory for all web authentication today, with the password manager being the 2nd factor.

Basic usability? The security theatre is making computing more and more janky every year, with questionable benefits, and with no regard for the drop in efficiency.

For most accounts I don't care much if they are compromised. And have never been compromised even with a lot of "worst practices".

Would you agree also that MFA should be mandated for everybody's doors? Or to my bike?

warkdarrior
0 replies
1d1h

Would you agree also that MFA should be mandated for everybody's doors? Or to my bike?

Attacks in the digital world are simply more scalable than in real world. I can try to log into 1000 Gmail accounts in seconds, but it'll take me hours to try to open 1000 doors.

ted_dunning
0 replies
1d1h

My front door effectively has 2FA.

You have to HAVE the key and you have to KNOW exactly how to wiggle the key to get it to work.

UncleMeat
2 replies
1d3h

The tools aren't the hard part. The hard parts are adoption and recovery.

SMS has an extraordinary advantage in that the vast majority of people transparently have access to it. No need to download another app. No need to install anything. No need to buy a special usb device. It also has a recovery mechanism built in, as the carriers will all let you move your phone number to a new device. This, of course, comes with the high cost of sim-swapping attacks. But few companies will be happy with "customers just lose their accounts when they drop their phones in the toilet."

We'll see if the google/apple security key system takes off. That's probably the best bet we've got given the ubiquity of these ecosystems.

zelphirkalt
1 replies
10h56m

How I would loathe to rely on Google or Apple to be able to make payments or confirm other actions. Sure as hell they would call home about what actions I am performing, and associate that data with some Google account or Apple ID or so, which they will force me to have.

No thank you.

UncleMeat
0 replies
5h51m

That's fine. I don't think any individual is foolish for preferring to keep these companies out of the process.

But it is just undeniable at this point that any authentication system other than raw passwords must come from an already ubiquitous ecosystem that doesn't require people to download, install, or buy anything new. Hoping that yubikeys take off is fantasy.

rsync
0 replies
1d1h

I would much rather we just handed everybody an RSA token.

Dead simple… Works off-line… Requires no account or personal infra to use…

… And as a bonus I already have a nice workflow where a WebCam is pointed at my token sitting on my desk.

I kid.

Or do I … ?

throw0101d
11 replies
1d7h

If the choice between no 2FA and SMS, which is better?

ossobuco
7 replies
1d7h

As the linked post says itself, "2FA-SMS is Better Than Nothing"

hun3
2 replies
1d7h

But it's also the most expensive option for the provider, compared to 1FA and 2FA-OTP.

pilif
1 replies
1d6h

I think the conversion rate and support cost associated with 2FA-OTP are bad enough that SMS is still worth it, especially as a phone number also gives you good marketing reach and a reasonably unique identifier for a user.

If not, everybody would be using OTP already.

reginald78
0 replies
1d5h

That is what everyone dances around in these discussions. It doesn't matter if it is a good second factor because it is an excellent user tracking identifier and that is what they were really after. Twitter and facebook both lied about only using these numbers for security and then almost immediately put them to use for advertising purposes. We only know about it because they were big enough to sue, I'm sure every crappy site that gets the number sells it. As a bonus, it also allows them to dump a lot of the infrastructure and support problems onto some one other than themselves.

The biggest problem with SMS-2FA in my opinion is a lot of places are setup so it isn't even a second factor. I can often reset my password just through email so it just seems like throwing a threadbare blanket marked security over the top of a user tracking scam.

ajross
1 replies
1d5h

The linked article says that only at the very end, in the very last sentence, just so they can evade this kind of discussion. The takeaway any regular user (and the typical too-pedantic-for-their-own-good HN commenter) will come away with is clearly "Don't use SMS 2FA", and they will therefore make the wrong decision.

Use 2FA. Use 2FA. Use 2FA. Worry about the design decisions in your spare time.

JohnMakin
0 replies
1d3h

Exactly this. The concerns about SIM swapping are real but simply do not apply in 99.999999% of cases. It's an extremely targeted attack. Adoption rates of SMS are higher than other more secure methods like authenticator apps, and given the choice of no 2FA and 2FA SMS, you obviously should pick the latter and understand it isn't bulletproof. I find it difficult to come up with any argument otherwise.

I think there is this false idea that if SMS was not an option, people would gravitate to authenticators and other such solutions. I've provided technical support trying to get supposedly technical people to use these tools, and trust me, there are huge hurdles of adoption here. The amount of people that are unable to enter 6 digits into a prompt within 15 seconds is astounding.

Passwordless solutions are cool, and I have implemented them, but are extremely prone to footguns.

upofadown
0 replies
1d6h

I would argue that a 1FA unguessable password used once is just as good. Certainly better than the case where the provider offers account resets using just SMS, thus effectively having 1FA SMS.

account42
0 replies
1d3h

That really depends what else the company uses your number for now that you have given it to them for 2FA. Often enough it ends up being usable as a one factor for account "recovery".

UncleMeat
1 replies
1d3h

Two perspectives: the business and the user.

For a sophisticated user who can confidently use distinct and strong passwords for each service and protect those passwords, SMS-based 2FA offers minimal safety improvement.

For a business, they know that a significant number of their users don't do this. These users are exposed to credential stuffing attacks. SMS-based 2FA means you need to phish somebody (or otherwise obtain the code). That's an improvement for these users.

The only time where there is an active reduction in security is when SMS can be used as single factor. This is frustratingly common for password reset flows, which allows a sim-swap attack to fully compromise an account.

jrochkind1
0 replies
1d2h

I feel like you have two choices for password reset flows:

1. Insecure ones

2. Ones where many users needing recovery will get locked out with no ability to recover their accounts, guaranteed

8organicbits
0 replies
1d5h

We've seen companies do a lot of silly things with SMS. Facebook used 2FA SMS for ads [1]. Companies sometimes use your phone number from SMS 2FA as a single factor for password reset. I think this is debatable.

[1] https://news.sophos.com/en-us/2018/10/01/facebook-turn-off-s...

F30
11 replies
1d10h

"CCC researchers had live access to 2nd factor SMS of more than 200 affected companies - served conveniently by IdentifyMobile who logged this sensitive data online without access control."

slow_typist
10 replies
1d8h

Look at the list of customers, most of them should be able to build their own service.

Instead they bought API access without the least bit of due diligence, putting their customers and their reputation at risk.

Additionally, the merging of different customers' data by the processor is probably not GDPR-compliant (even if access control were in place).

lmz
5 replies
1d8h

most of them should be able to build their own service.

Isn't the hard part the connectivity bit, i.e. negotiating with the various telcos? I once saw a telco use a third-party SMS vendor for messaging their own customers for an app - because setting it up internally was too much of a hassle.

PinguTS
3 replies
1d6h

So you're saying that for Google, Amazon, Facebook, and Microsoft, which are among those customers, it is too hard to negotiate with the various telcos?

Tepix
1 replies
1d3h

It's not their core business, which is why they let SMS aggregators deal with it and merely switch inbetween those.

fatnoah
0 replies
22h54m

Yes, and there are multiple levels of aggregators. For example, in a past life, I built SMS APIs and back-ends, including ones used by smaller telecoms to enable their subscribers to send/receive SMS. (We were pretty small, and only accounted for something like 0.5% of US SMS traffic.)

We connected to multiple aggregators. It's been a few years, but the big players in the US (Verizon, AT&T, Sprint, T-Mobile) were split between different aggregators. It was a similar situation in Europe.

A big part of working with a new aggregator was a full review of security and privacy, and that became even more important as we began the process of being acquired by an F100 company.

I'm still trying to figure out why messages were stored in S3 buckets to begin with. That's an architecture choice that makes little sense to me, especially since the limited size of SMS makes them pretty space efficient.

lmz
0 replies
1d5h

Not in the US at least for those companies, but the world is a big place and this other comment https://news.ycombinator.com/item?id=40935323 mentioned places like Gambia and Burkina Faso... It just makes sense to outsource local delivery to companies that are better connected locally.

stavros
0 replies
1d7h

No, the hard part is having to secure all these little random services that I've now built. Why would I not just pay for someone whose job it was to worry about this instead?

sleepyhead
3 replies
1d7h

We at MakePlans were affected by this breach as we use Twilio. We are not using Twilio Verify (their 2FA API) but rather handle 2FA SMS ourselves in our app, using Twilio as one of our providers. So the CCC's framing of this as only 2FA SMS is incorrect: it was all SMS sent through this Twilio third-party gateway that was exposed, for a limited set of countries (France, Italy, Burkina Faso, Ivory Coast, and Gambia).

GDPR is not necessarily applicable here. An SMS gateway is most likely classified as a telecom carrier, and thus any local telco laws would be applicable rather than GDPR. That applies only to the transfer of the SMS though, so, for example, a customer GUI showing sent SMS would be outside that scope.

(And before someone tells us that SMS 2FA is insecure I would like to point out that we use this for verification purposes in our booking system when a customer makes a booking. So for end-customers, not for users. It is a chosen strategy for making verification easy as alternatives are too complex for many consumers. All users however authenticate with email and password, and have the option of adding TOTP 2FA).

slow_typist
2 replies
23h33m

I think 2FA via texts is better than no 2FA. But only if you do not make the texts world readable.

Apart from that, to me it seems justifiable to follow a risk based approach. Booking systems up to a certain value/amount, fine. Online Banking and health related services, thank you, no.

sleepyhead
1 replies
22h21m

It's not really 2FA even. More like a magic link (which is what we use for verification via email). The customer has no password, just verifies using a code via sms/email.

slow_typist
0 replies
2h25m

Passwordless, so to speak. Does it help with conversion rates?

brandon272
7 replies
1d2h

The perspectives and interests of NIST and the things that a service provider has to worry about with respect to their customer/user experience are not necessarily aligned.

Customer: "What do you mean two factor app? I thought the code was supposed to come to my phone?"

Support: "It did, but we no longer support SMS two factor authentication."

Customer: "But I had no problems when the code came to my phone."

Support: "Yes, but NIST recommends that we don't use SMS 2FA"

Customer: "What's NIST? I'm finding this very frustrating, I need to get into my account."

akira2501
4 replies
22h13m

Customer: "But I had no problems when the code came to my phone."

"Unfortunately, many of our other customers, and customers of other financial institutions were not correctly protected by the code alone.. and were still getting scammed or confused.. and losing _all_ their money."

Customer: "[...] I'm finding this very frustrating, I need to get into my account."

"That is understandable, but we take the security of your account and your personal information very seriously, and this requires us to make changes to maintain that security in the face of new threats and actors as they evolve."

brandon272
2 replies
15h29m

It is very much software-world-thinking to believe that dismissive Kafkaesque responses that just shut down the conversation without addressing the fundamental issues around usability and customer satisfaction will ameliorate the situation.

For a lot of service based businesses they see their customers face to face and it is imperative that the customers have a seamless experience. Imagine having a business where customers who can't sign into some online system you have are bringing in old Android phones and wanting help from your staff members on how to get 2FA set up on those devices and it is easy to understand why many such businesses settle on SMS based 2FA.

akira2501
1 replies
15h24m

"We face this. It's about 5% of users. Our margins are large enough we don't worry about this segment. They need our service more than we need them."

- Any Large Bank Anywhere

brandon272
0 replies
14h55m

Most large banks already largely offer non-SMS 2FA through their companion mobile apps. This is about pretty much every other service you have that does not have a dedicated mobile app and doesn't want to teach their users how to manage your 2FA codes.

Terretta
0 replies
20h37m

Stop threatening me. And what does Hollywood have to do with me logging in?

LordKeren
0 replies
1d

It’s all a gradual improvement over time though, as companies adopt better practices and customers become accustomed to them. Many, many more people are using TOTP than a decade ago.

Hello71
0 replies
1d

It would be most convenient to have no 2FA. Hell, skip the password too, then nobody will forget theirs. Security is tradeoffs, but NIST says "if you take security seriously, you should not use SMS 2FA".

TacticalCoder
6 replies
1d

Out of curiosity, I just tried with ChatGPT 4o... I gave it a screenshot of a legit banking website and asked it to describe it to me, to give me the exact URL in the screenshot, and to tell me whether it's legit or not.

It described the whole page to me, explaining that it was a login page for bank X in country Y. He compared the URL with the bank's name, etc.

Then I modified one letter in the URL, changing "https://online.banking.com" (just an example) to "https://online.banklng.com" and asked ChatGPT 4o again.

He said it was a phishing attempt.

So, basically, you can, today, already have a screenshot automatically analyzed and have a model tell you if it's seemingly legit or not.
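
For the URL part specifically you don't even need a model: a deterministic check against a short allow-list of the banks you actually use catches the one-letter-off trick. A minimal sketch of that idea (the domains are made-up examples, not real bank hosts):

    from difflib import SequenceMatcher
    from urllib.parse import urlparse

    KNOWN_BANK_HOSTS = {"online.banking.com"}  # hypothetical allow-list of your real bank login hosts

    def classify(url: str) -> str:
        host = (urlparse(url).hostname or "").lower()
        if host in KNOWN_BANK_HOSTS:
            return f"{host}: exact match with a known bank host"
        for good in KNOWN_BANK_HOSTS:
            # Very similar but not identical is the classic lookalike pattern (banking vs banklng).
            if SequenceMatcher(None, host, good).ratio() > 0.9:
                return f"{host}: WARNING, looks deceptively similar to {good}"
        return f"{host}: not a known bank host"

    print(classify("https://online.banking.com/login"))   # exact match
    print(classify("https://online.banklng.com/login"))   # flagged as a lookalike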

swatcoder
0 replies
1d

I know everybody's doing it because they don't know better, but it's a terrible idea to make the inductive leap from one successful sample to some abstract sense of what a ML model is suited for. Especially for anything important.

As a sibling comment noted, performance will almost certainly be sensitive to temperature (randomness), exact prompt phrasing, exact sequence of messages in a dialog, and the training-data frequency of both the site being analyzed and the phishing approach used.

One could conceivably train a specialized ML model, perhaps with an LLM component, to detect sophisticated phishing attempts, and I would assume this has even been done.

But using or relying on a generic "helpful chatbot" to do that reliably and sufficiently is a really bad idea. That's not what it's for, not what it's good at, and not something its vendor promises it will remain good at, even if it happens to be today.

shreddit
0 replies
1d

So what you are telling me is that windows “recall” feature could actually protect people from phishing attacks…

measuredincm
0 replies
19h57m

dumbass

mattigames
0 replies
1d

Did you just call ChatGPT "he"? Oh, that may get you in quite a lot of hot water these days!

compootr
0 replies
1d

That's called a hallucination. AI models are simply guessing what to say with differing sizes of word banks.

At its best, it may even "recognize" the top 90% of sites. But it's not a bulletproof solution, and shouldn't be trusted, since it can generate both false positives and false negatives.

My best operational security advice is not to click shit in your inbox and navigate directly to the hostname you trust to do sensitive actions

beepbooptheory
0 replies
1d

Did you ask it with the modified page in the same context?

averageRoyalty
5 replies
1d7h

Hardly an SMS issue, an issue with a vendor not properly securing a sensitive datastore.

TonyTrapp
4 replies
1d6h

It is an SMS issue in the sense that OTPs and hardware tokens don't require their rotating secrets to be written to some potentially publicly-readable datastore. This specific attack vector simply does not exist for those technologies.

commandersaki
2 replies
1d5h

I don't see why SMS would need to write to a store, public or not. One can implement SMS-2FA using TOTP for example, it's just that the TOTP secret is not shared with the recipient.
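
Rough sketch of what I mean: the code that gets texted is derived from a per-user secret and the clock, so the server verifies it by recomputing it rather than by storing every message. The function names and the 5-minute step are illustrative, not any particular provider's API:

    import hashlib, hmac, struct, time

    def code_at(secret: bytes, t: int, period: int = 300, digits: int = 6) -> str:
        d = hmac.new(secret, struct.pack(">Q", t // period), hashlib.sha1).digest()
        off = d[-1] & 0x0F
        return str((struct.unpack(">I", d[off:off + 4])[0] & 0x7FFFFFFF) % 10 ** digits).zfill(digits)

    def send_login_code(user_secret: bytes, send_sms) -> None:
        # Nothing is persisted per message: the code is a pure function of (secret, time).
        send_sms(f"Your login code is {code_at(user_secret, int(time.time()))}")

    def verify(user_secret: bytes, submitted: str, period: int = 300, window: int = 1) -> bool:
        now = int(time.time())
        # Also accept the previous time step to tolerate SMS delivery delay.
        return any(hmac.compare_digest(submitted, code_at(user_secret, now - i * period))
                   for i in range(window + 1))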

TonyTrapp
1 replies
23h21m

Yes, it is not a technical necessity to store these messages. But there is the option to do it (and some people are evidently doing it). The point is that for one-time passwords, it's not even an option, no matter how hard you try. You simply cannot make this class of mistake. Unless you try really, really hard to fuck up and, say, for some very weird reason, exfiltrate the one-time passwords generated on the user's device every few seconds.

warkdarrior
0 replies
19h7m

How does the bank verify the OTP generated on the user's device?

captrb
0 replies
1d3h

What if my OTP seed data is exported to a publicly-readable datastore? I could be tricked into exporting the QR codes from Google Authenticator, for example. Though I see that there are significantly better 2FA methods, it does seem like the biggest flaws with SMS 2FA are in the insecure implementations, not the actual concept.

weinzierl
4 replies
1d2h

Can someone explain to me how SIM swapping actually works?

All the articles and videos I found are like:

1. Attacker calls the phone company's support hotline, or alternatively his confidante there

2. ** MAGIC **

3. Attacker has access to SMS messages sent to the victim's number

I understand that some might be deliberately vague, but I don't want step-by-step instructions, just a high-level technical overview.

And to give another hint why this is so hard for me to understand: To the best of my knowledge, if I call my phone company with whatever scenario that I can imagine that involves my SIM, all they will do is send me a new SIM to my physical address.

toast0
1 replies
1d2h

If you have a never registered, not expired SIM for a carrier, the carrier can register it to an account given the IMSI. You can also do this with eSIM without needing a physical SIM.

So, step 1, convince the carrier representative. Step 2, give them the IMSI. Step 3, put the SIM in your phone and receive SMS.

If you do step 1 in a physical store, the representative will probably even give you a new SIM from their stack.

weinzierl
0 replies
1d

Thanks, this is the hint I needed.

zinekeller
0 replies
1d2h

Except for state-level attacks (in which case you're screwed anyway), in some countries the process tends to be lax (on-the-spot issuance of a replacement SIM without robust identity verification, or allowing a replacement SIM to be sent to any arbitrary address without verification). This also does not consider insider attacks, where people at the carrier can just re-issue any SIM for any number they please (and there are therefore people who are willing to issue illicit SIMs in exchange for money).

mnw21cam
0 replies
1d2h

And to give another hint why this is so hard for me to understand: To the best of my knowledge, if I call my phone company with whatever scenario that I can imagine that involves my SIM, all they will do is send me a new SIM to my physical address.

That's basically SIM-swapping. The only step you haven't described is getting the new SIM sent somewhere else, which probably isn't too hard a thing to achieve given sufficient corruption.

Ultimately, the phone company uses its information to work out where to send an SMS, and that information is an entry in a database - SMS to number X is routed to SIM card ID Y. If an inside job can change that database entry for a while, that's enough to attack SMS-2FA.

cpcallen
4 replies
1d3h

In the UK it seems that almost all online banking transactions are now verified by SMS. As far as I can tell this is required by law, and it replaced the previous bank card + card reader + PIN verification system, which was not only more secure but also did not depend on having a working mobile phone with signal.

I hope that this will in due course be recognised as a terrible mistake and rectified. Unfortunately my hope is only faint.

xnorswap
2 replies
1d3h

Which bank? I'm with LLoyds and transactions are verified via the app, not SMS.

howerj
1 replies
1d2h

Same with Natwest / Virgin. I do not think I have ever verified anything with SMS banking wise, sometimes you get alerts via SMS though.

funmi
0 replies
1d1h

Same with HSBC (globally actually, not just in the UK).

switch007
0 replies
11h9m

The first line is not true at all. SMS is an option, but many banks support app-based 2FA.

Agree about the card reader being useful for offline. But I never remembered the thing and was often stuck when travelling

JohnMakin
3 replies
1d3h

This causes far more harm than good - even this article admits SMS 2FA is better than nothing. For 99.99999% of use cases it is fine; SIM swapping is an extremely targeted attack. If you are the type of person who could be targeted by an attack like that, don't use SMS for anything important. Simple.

Tepix
1 replies
1d3h

But did you RTFA? SMS aggregators can also be hacked or can leak SMSs by accident.

JohnMakin
0 replies
1d3h

This would still be a targeted attack if exploited, and arguably much more difficult than sim swapping. And yes, I did RTFA, and my point still stands.

mixmastamyk
0 replies
1d1h

A ban would need to be combined with a requirement for something else.

omh
3 replies
1d3h

The article conflates two issues that have different security implications.

The "1-click login" links are a concern and just having access to the SMS would be enough to take over things like WhatsApp.

But 2FA codes seem notably less worrying. They are the second factor and require an attacker to have the password too. For these cases I'm much more relaxed about the use of SMS and the risks of interception.

pphysch
2 replies
1d3h

They are the second factor and require an attacker to have the password too.

For every leaked database of SMS messages there are 1000 leaked databases of account credentials

samspot
0 replies
1d2h

I think 999 of those databases are the same data set. I lost a password ten years ago from a blog breach and I get almost a monthly notification about it showing up again and again.

omh
0 replies
1d3h

Good point.

But what's the threat model here?

I didn't think of 2FA as being protection against password reuse. People should still avoid reusing passwords and change them if they know of a breach.

Are there really attackers who are picking up breach databases and then sim-swapping to get the 2FA as well?

thepasswordis
2 replies
1d1h

I’ve recently become pretty disillusioned with 2FA in general.

Google has recently started enforcing their own “click yes on an already-authorized mobile device” 2FA, which is very frustrating.

I have hardware 2FA keys that I keep in a safe. I deliberately do not keep them on me, and using them to re-auth is mentally an “event”.

This is not the case with my cell phone, which my kids play with, gets left on my dresser while the cleaners work, etc.

Really pushing me to run my own services again, but that obviously comes with its own challenges.

warkdarrior
1 replies
1d1h

Google lets you choose which authenticators to use (SMS, push to mobile, TOTP, etc). It sounds like you should disable push to mobile for your accounts.

thepasswordis
0 replies
23h13m

You cannot disable this anymore. You can add a hardware key, but cannot disable the mobile confirmation thing.

sleepyhead
2 replies
1d7h

Apparently the messages on the S3 bucket were updated every five minutes: https://www.zeit.de/digital/datenschutz/2024-07/it-sicherhei...

The CCC definition of this being only 2FA-SMS is incorrect though. It was not only Twilio Verify (2FA API) that was affected, it was all SMS sent through this vendor.

PinguTS
1 replies
1d6h

Where do you have the Twilio Verify reference from? It is nowhere mentioned.

sleepyhead
0 replies
1d6h

It is not, but CCC is implying that this provider was only used for 2FA. Sorry, I was getting a bit ahead of myself here; this was earlier exposed as a breach of Twilio's vendor (IdentifyMobile). Twilio offers an API for 2FA, Twilio Verify. I wanted to clarify that this breach was not only for 2FA (the Verify API in Twilio's case), but for all SMS sent through IdentifyMobile.

rsync
2 replies
1d2h

Random thought I’ve been having as we keep bringing this topic up these past few weeks…

How interesting or uninteresting would bi-modal 2FA be ?

That is: you receive a code by text and you enter the code by email…

I haven’t spent any time to work out whether this significantly changes the attack surface but… At first glance it does seem like you would need to own two different account types…

… So I guess a first question would be: does this exist anywhere? Has anyone ever seen this or done this?

warkdarrior
0 replies
1d1h

Bi-modal 2FA is already here: you receive a code by text and you enter the code in your web browser (or a proprietary app like a banking app).

Moving from web browser to email for entering the 2FA code means that you (the user) have to make sure to send email to the correct address, not one provided by the attacker.

hypercube33
0 replies
1d1h

How do you secure email then? This obviously won't work for login to that.

DanielHB
2 replies
1d3h

Sweden solved this problem years ago with BankID

https://en.wikipedia.org/wiki/BankID

It is amazing what a little cooperation between public and private institutions can achieve. It is the only way to log in and do 2FA for government services and most banks (some legacy systems are still supported by banks), and it works great.

It is incredible there is no system like this for every country, heck it is incredible that there isn't a system like this for the whole EU.

mixmastamyk
0 replies
1d2h

Is it true that it doesn’t support Linux as the wiki implies? I guess the card form could be used instead.

jampekka
0 replies
1d2h

The EU is introducing the Digital Wallet for this. I hope it will be nicer to use than the Finnish version of BankID. It would also be nicer to be less dependent on banks or other private rent-seeking institutions.

Not having too high hopes though.

https://ec.europa.eu/digital-building-blocks/sites/display/E...

unstatusthequo
0 replies
22h3m

I like that IdentifyMobile's website[0] isn't even protected with a valid HTTPS cert. Falls back to HTTP. Oh and it's WordPress. And last updated 2015. Guess that's all telling. Nice that so many important companies used this crappy provider for such things.

[0] http://www.identifymobile.com/

uconnectlol
0 replies
1d1h

Wow, the forced SMS 2FA bullshit that suddenly got astroturfed right on the day of the Snowden revelations is indeed bullshit. When will they offer an opt-out, or is this just the end of the web? 20 years ago I did not need or want anything more than a password (obviously cryptographic key auth would be better, but not if it's brought to you by X.509). And of course all the HNers who eat this shit up and defend it like little dogs are suddenly on the other side. Email verification is fucking dumb too, and of course now every email provider forces phone SMS shit.

tamimio
0 replies
1d

The rule of thumb is that you should always avoid any services that still rely on SMS or phone numbers as an ID or 2FA. They simply don’t care about your privacy or security, even if they advertise it. A prime example is Signal.

Unfortunately, for some other services, like banks or government agencies, you don’t have any option. You can only minimize the impact by using a unique password and username and keeping them updated.

skilled
0 replies
1d5h

Twilio said the data was accessible between May 10 and May 15, 2024[0].

I mean, even if we disregard the auth codes thing, which according to CCC were being generated on a static timer, if someone did get access to this bucket - they would have gotten away with a juicy list of phone numbers and names from some of the top companies, at the very least.

I'm not sure how hard it would be for an S3 scanner to guess "idmdatastore", so it is difficult to say if anyone else got in. Even if not, a live database storing live data without encryption or anything is crazy. I feel like IdentifyMobile will feel the wrath of this no matter what.

[0]: https://stackdiary.com/twilio-issues-an-alert-about-a-securi...

simoncunningham
0 replies
21h17m

Certain financial institutions in some regions mandate telephone-network-based 2FA for their customers' accounts, and in the event of an account compromise they attempt to pin the onus of liability on the customer. It is maddening that they won't give customers better options to secure themselves.

seoulmetro
0 replies
18h31m

That's because SMS verification isn't 2FA. It's faux 2FA. You don't possess your phone app or your phone number... it can be cloned and intercepted. A key you hold on your person is 2FA.

refurb
0 replies
23h59m

In Singapore, the banks have moved away from SMS entirely, even for notifications. Now they have to come through the app.

But for login you basically register a single phone, download a certificate to it and that becomes your second factor. If you login via web or another phone, you need to approve the login from that phone.

Of course if you lose the phone (or it's damaged) you need to go to the bank to fix it, but that seems like a reasonable approach.
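
At the protocol level that registration flow is roughly device-bound public-key auth rather than a shared code: the phone holds a private key and the bank stores only the public half, so approving a login means signing a fresh challenge. A minimal sketch of the general idea (not the banks' actual scheme), using Ed25519 from the Python cryptography package:

    from secrets import token_bytes
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the phone generates a key pair; the bank stores only the public key.
    device_key = Ed25519PrivateKey.generate()
    bank_stored_public_key = device_key.public_key()

    # Login from the web: the bank sends a random challenge to the registered phone...
    challenge = token_bytes(32)

    # ...the phone shows the login details, the user taps "approve", and the phone signs.
    signature = device_key.sign(challenge)

    # The bank checks the approval against the enrolled public key (raises if invalid).
    bank_stored_public_key.verify(signature, challenge)
    print("login approved by the registered device")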

efitz
0 replies
23h19m

Several financial institutions I work with require 2FA with SMS, and do not offer an option for HOTP/TOTP. FML.

bigmattystyles
0 replies
1d3h

It always feels useless when you get the second factor on the very device you are logging in from. I know it's not, because you still have to physically have the device, but instinctively I always think true 2FA should involve different devices.

ablob
0 replies
6h55m

Any push-based service would be vulnerable to this, wouldn't it? The medium doesn't matter if somewhere in the chain someone stores the message (in public).

RajBhai
0 replies
1d3h

How about the login service sends the code encrypted in the SMS, such that it can only be decrypted on the phone of the actual user? Still vulnerable to phishing attempts, but better than relying on the deficiencies of SMS technology.