I work in this space and have dealt with variants of this exact vulnerability 20+ times in the last 3-4 years across a wide range of login providers and SaaS companies. The blog post is correct, but IMO the core problem itself is too far gone to be fixable. Delegated auth across the internet is an absolute mess. I have personally spoken with plenty of Google and Microsoft engineers about this, so I can guarantee they are all already well aware of this class of problems, but changing the behavior now would just break too many existing services and decades-old corporate login implementations.
The "fix" at this point is simply – if you are using "Sign in with XYZ" for your site, do not trust whatever email address they send you. Never grant the user any special privileges based on the email domain, and always send a confirmation email from your own side before marking the address as verified in your database. All the major OAuth providers have updated their docs to make this explicit, as the post itself points out. In fact I'm surprised there even was a payout for this.
Doesn't this fail if the user registered an account (on google) with the plus sign address, and then signed into your service with that google account before getting the boot? Unless you're sending a verification email for every sign-in...
Yes, absolutely do this! This is what Slack does, and what we do at my current employer (defined.net). "Magic link" email + TOTP is pretty slick.
Agreed. But unfortunately it's also highly phishable.
Can you explain how you would phish a user with a magic link? Since the service is generating a one-time code, and sending it directly to the user's email inbox, I am not sure how an attacker would intercept the code.
The attack works by getting the user onto a page you control that looks like a slack page that says, "we need you to confirm your email". User enters their email and gets a legitimate email from slack. User enters the code on the original phishing page and the attacker gets a link that lets them log in as the user. I built this exact exploit for slack in a few hours. It was trivial.
I've never seen a foolproof way to mitigate this. Best you can do is big warnings in the email telling the user never to enter the code anywhere but slack.com. You can also do fancy stuff like comparing IP addresses to make sure they're from the same region but the attacker can also do fancy stuff like detect where your IP is from and use a VPN to get an IP in the same area.
Thanks - but this sounds like an email 2FA flow, not a magic link.
A magic link is a link the user clicks in their browser, landing them on the appropriate service, with the one-time code as part of the URL. The service consumes the token and provides the user with a (first-factor) authentication token.
In other words, the email doesn't display a code which they could go paste into the attacker's page. Though they may still need to perform a 2FA flow following the magic link flow (and this portion is still phishable!)
Your critique is definitely valid for most forms of 2FA (email, SMS, and TOTP.)
You are correct that this mitigates the security problems.
However, the method you're describing has fallen out of favor, in large part because mobile email apps often use a built-in browser that doesn't share cookies with the system browser. This creates several confusing UX problems. You also can't use a logged in device to log in a new device, unless you implement something like QR login which is also phishable.
Slack for example used to work the way you describe but now uses emailed codes for 1FA login.
The foolproof way is to not send codes as a 2FA (be it mail, SMS or whatever). There is always a risk that the user fails to verify where they're putting that code. Instead use something that verifies the domain without relying on the user, e.g. U2F or passkeys.
In that case the user needs to be fooled into sending the physical device or a passkey app backup to the attacker. That is much more suspicious, and requires a far more gullible victim than someone who enters a code after already entering their username and password.
If you know that the user opens emailed links in the same browser they use for login: for the 2FA step, set a cookie with some unique value on your login domain and send the user an email with a link. Opening the link only finishes the login and starts a valid session if that unique cookie is present. This makes things harder for an attacker, since they need to inject that cookie into the victim's browser, which means finding an XSS-style exploit. Of course, you then want to reduce the attack surface by putting the login function on a subdomain of its own.
And obviously this fails if the user is about to login e.g. on their personal computer and then tries to verify the session on their company phone. This can be good or bad, depending on the scenario.
TBF, I wish some companies would use even that basic code-by-whatever 2FA. I've seen cases with something like 5 different domains, all with various logins that customers and employees use. Want to phish them? Just register another domain that looks similar enough to the others and send some emails. But then there are still services limiting password length to something like 16 characters, so I think we will still have plenty of work...
Passkeys are cool, but not widely deployed yet. Also, there's currently no way to express "give X identity Y access on Z server", unless X identity already has a passkey set up with the server. This is a non starter for decentralized networks with lots of federated and self-hosted servers. This use case is trivial with email.
As with the sibling comment: what threat vector do you see the phishing risk coming from?
A race condition where the phishing email lands first, user clicks link to g00gle.com, gets a convincing message that they also need to present username and password?
See response to sibling
Thank you - as sibling also mentioned, what you're describing isn't a magic link but a standard TOTP/HOTP-style code delivered via email, which absolutely is phishable in the manner you described.
A magic link is a flow where you enter your email address and the service sends you an email containing a clickable hyperlink. The URI carries a cryptographically strong, short-lived nonce that serves as a proof-of-possession factor (for the email account) to authenticate the user.
See third cousin
I hate this experience. It's an absolute pain when emails are delayed and sign in fails, or just when some apps fail to persist state when I switch to my email client on mobile.
The idea is that the company's Zoom admin should always specify the exact list of users who are allowed to be in their Zoom account. The email domain should have no bearing. So if you sign in as bob@mycorp.com, and that is a valid corporate account with the right permissions, you are let through. If you try bob+foo@mycorp.com, it should always fail. The pattern of "oh they have a mycorp.com email so they are probably legit" was broken from the start.
This is thankfully less of an issue now since everyone is moving to SAML-based logins and SCIM provisioning.
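In code, the check being described is a one-line membership test against an admin-maintained roster. The ALLOWED set here is a made-up stand-in for directory/SCIM-provisioned data:

```python
# Sketch: membership is decided by the admin's explicit roster, never by the
# email domain. ALLOWED stands in for directory/SCIM-provisioned data.
ALLOWED = {"bob@mycorp.com", "alice@mycorp.com"}

def may_join(email: str) -> bool:
    # Exact match only: bob+foo@mycorp.com shares a domain with the roster
    # entries but is not on the roster, so it is rejected.
    return email.strip().lower() in ALLOWED
```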
This will help, but at the same time it ruins the whole idea of seamless registration of corporate users in an external service, which damages adoption and increases friction.
If a company has no directory of their employees, associated company email addresses, and employment status, they likely have much worse problems.
Most companies have these directories but in different forms, including Wiki pages. The idea of auto-enrollment is that it is based on a standard and widely adopted OAuth protocol.
Employee records as wiki pages? That must be a joke, right?
Why not? If your company has lots of employees, like tens of them, you may create a nice template for those pages.
Why not on a piece of paper then?..
It's hard to share and update.
Because it’s not suitable for machine-processing of employee data. This belongs in a database, Active Directory, or HR software. Do you also do payroll with wiki pages?
They sure have those directories in different forms out of sheer need but every time I’ve seen people have to consult wiki pages to get information that should be in a standardized directory it’s a shitshow that is borderline impossible to automate.
Hate to break it to you, but SAML is the same shit with a different coat of paint; the XML encryption/signature/encoding stuff it pulls in makes it just as much a tarpit for bugs and misconfiguration.
SCIM seems pretty decent though to explicitly state who is and isn't on the Guestlist.
What I’ve also seen is integrations with a different OIDC endpoint for company X. It’s still OIDC, but it’s not “sign in with Google”.
Isn't that your typical 2FA flow?
Why couldn't Google prevent signups using domains of Google Apps customers? That doesn't seem like it would break anything.
Because the collisions can already exist by the time someone signs up for Google Workspace. And what are you going to do, delete those accounts? A lot of those would be personal accounts on educational domains...
That's not a reason to not allow individual signups on @customer.domain AFTER customer.domain is a Google Workspace domain, which is the hole being discussed in the article.
FWIW Adobe actually lets businesses take ownership of individual accounts with the same domain... see https://www.adobe.com/legal/terms.html section 1.4
Google has an ownership-taking process: the user gets informed, and during their next login their address is changed to a gtempaccount.com address.
I used to have a Google account on my personal domain, then I registered for Google Workspace 4-5 years ago. Since then, I get this message when I log in:
I've been pressing "Do it later" for years. I'm still using this account for youtube, maps and other services.
I should probably use a secondary domain and use this address.
Some services prevent me from changing my email address when signing in with Google, and weird behaviour happens when they try to use my user%domain.com@gtempaccount.com address.
Google documents it here: https://support.google.com/accounts/troubleshooter/1699308?h...
I feel as though this is a consequence of organizations not really understanding how complex the space truly is. The way I've watched OAuth2 + OIDC get adopted in various companies was never from a security-first perspective; rather, it's always sold as a "feature": login with X, etc. Even when there are moves to make flows more secure - PKCE, for example - you end up playing a game of "whack-a-mole" with various platforms doing shitty things in terms of cookie sharing, redirect handling, and the like. The fundamentals of 3-legged OAuth2 are sound and there's tons of prior art (CAS comes to mind), but the OpenID Foundation should be tarred and feathered for the shitty way they marketed and sold OIDC.
OpenID Connect and all its extensions are so high in complexity and scope. The documents themselves are massive and written in a quite hard to understand form. I've implemented many protocols and RFCs so I feel I have some experience.
Because OpenID Connect and OAuth2 are so closely related, I worry that some of this overengineering is making its way back into new OAuth2 extensions.
I'm worried both will eventually collapse under their own weight, creating a market for a new, simpler contender and setting us back another 10 years as all this gets reinvented again.
My outside impression is that the OIDC folks are highly productive with really strong domain knowledge and experience, but they're not strong communicators or shepherds with a strong enough vision.
The sad thing is that this is the second thing with the OpenID name that's going down this path. The original OpenID concept was great but also collapsed due to their over-engineering.
Agree - you only need to look at things like the hybrid flows to see where things fall apart: why would you issue an id_token containing user information to a client that hasn't yet fully authenticated itself via a code-to-token exchange (passing its client_id + secret)? If you look at certain implementations, such as Auth0, you'll find that they actually POST that token back to the redirect_uri, since a) it's at least registered against the client, and b) it's not as easily captured. The spec says NOTHING about protecting this info, though.
This would render the "login with X" feature useless from a user POV.
Would it be useless? Wouldn't the verification email happen only once, after which you gain all the SSO benefits?
You still get the benefit of not remembering an extra password.
But in this particular case the attacker will actually receive a confirmation email, so what's the point?