
Google OAuth is broken (sort of)

paxys
39 replies
1d3h

I work in this space and have dealt with variants of this exact vulnerability 20+ times in the last 3-4 years across a wide range of login providers and SaaS companies. The blog post is correct, but IMO the core problem itself is too far gone to be fixable. Delegated auth across the internet is an absolute mess. I have personally spoken with plenty of Google and Microsoft engineers about this, so I can guarantee they are all already well aware of this class of problems, but changing the behavior now would just break too many existing services and decades-old corporate login implementations.

The "fix" at this point is simply – if you are using "Sign in with XYZ" for your site, do not trust whatever email address they send you. Never grant the user any special privileges based on the email domain, and always send a confirmation email from your own side before marking the address as verified in your database. All the major OAuth providers have updated their docs to make this explicit, as the post itself points out. In fact I'm surprised there even was a payout for this.

pcthrowaway
26 replies
1d2h

Never grant the user any special privileges based on the email domain, and always send a confirmation email from your own side before marking the address as verified in your database

Doesn't this fail if the user registered an account (on google) with the plus sign address, and then signed into your service with that google account before getting the boot? Unless you're sending a verification email for every sign-in...

andygeorge
12 replies
1d1h

Unless you're sending a verification email for every sign-in...

Yes, absolutely do this! This is what Slack does, and what we do at my current employer (defined.net). "Magic link" email + TOTP is pretty slick.

apitman
10 replies
1d1h

"Magic link" email + TOTP is pretty slick.

Agreed. But unfortunately it's also highly phishable.

johnmaguire
5 replies
21h39m

Can you explain how you would phish a user with a magic link? Since the service is generating a one-time code, and sending it directly to the user's email inbox, I am not sure how an attacker would intercept the code.

apitman
4 replies
20h58m

The attack works by getting the user onto a page you control that looks like a Slack page and says, "we need you to confirm your email". The user enters their email and gets a legitimate email from Slack. The user enters the code on the original phishing page, and the attacker gets a link that lets them log in as the user. I built this exact exploit for Slack in a few hours. It was trivial.

I've never seen a foolproof way to mitigate this. Best you can do is big warnings in the email telling the user never to enter the code anywhere but slack.com. You can also do fancy stuff like comparing IP addresses to make sure they're from the same region but the attacker can also do fancy stuff like detect where your IP is from and use a VPN to get an IP in the same area.

johnmaguire
1 replies
19h14m

Thanks - but this sounds like an email 2FA flow, not a magic link.

A magic link is a link a user clicks in their browser, that lands them on the appropriate service, where the one-time code is part of the URL. The service consumes the token and provides the user with a (first factor) authentication token.

In other words, the email doesn't display a code which they could go paste into the attacker's page. Though they may still need to perform a 2FA flow following the magic link flow (and this portion is still phishable!)

Your critique is definitely valid for most forms of 2FA (email, SMS, and TOTP.)
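The distinction matters because a true magic-link flow gives the user no copyable secret to paste into an attacker's page. A rough sketch of such a flow (store, URL, and TTL are hypothetical):

```python
import secrets
import time

MAGIC_TTL = 15 * 60  # links expire after 15 minutes
pending = {}         # token -> (email, issued_at)

def request_magic_link(email: str) -> str:
    """Generate a short-lived, single-use nonce and embed it in a URL
    that we email to the user. The user never sees a standalone code."""
    token = secrets.token_urlsafe(32)
    pending[token] = (email, time.time())
    return f"https://example.com/auth/magic?token={token}"  # hypothetical domain

def consume_magic_link(token: str):
    """Called when the link is opened; consumes the nonce exactly once
    and rejects expired tokens. Returns the authenticated email or None."""
    entry = pending.pop(token, None)
    if entry is None:
        return None
    email, issued_at = entry
    if time.time() - issued_at > MAGIC_TTL:
        return None
    return email  # the caller now creates a session for this address
```

Because the token only ever travels inside the URL and is consumed server-side, there is nothing for the user to retype on a phishing page.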

apitman
0 replies
18h8m

You are correct that this mitigates the security problems.

However, the method you're describing has fallen out of favor, in large part because mobile email apps often use a built-in browser that doesn't share cookies with the system browser. This creates several confusing UX problems. You also can't use a logged in device to log in a new device, unless you implement something like QR login which is also phishable.

Slack for example used to work the way you describe but now uses emailed codes for 1FA login.

archi42
1 replies
19h35m

The foolproof way is to not send codes as a 2FA factor (be it mail, SMS, or whatever). There is always a risk that the user fails to verify where they're putting that code. Instead use something that verifies the domain without relying on the user, e.g. U2F or passkeys.

In that case the user needs to be fooled into sending the physical device or passkey-app backup to the attacker. This is much more suspicious and requires a much worse fool than someone entering a code after they've already entered their user+password.

If you know that the user opens links sent via mail in the same browser they use for their login: for the 2FA step, set a cookie with some unique value on your login domain and send the user a mail with a link. Opening the link only finishes the login and starts a valid session if that unique cookie is present. This makes it harder for an attacker, since they would need to inject that cookie into the victim's browser, which means finding an XSS-style exploit. Of course you then want to reduce the attack surface by putting the login function on a subdomain of its own.

And obviously this fails if the user starts the login on, e.g., their personal computer and then tries to verify the session on their company phone. This can be good or bad, depending on the scenario.

TBF, I wish some companies would use even that basic code-by-whatever 2FA. I've seen cases with something like 5 different domains, all with various logins that customers and employees use. Want to phish them? Just register another domain that looks similar enough to the others and send some mails. But then there are still services limiting the password length to something like 16, so I think we will still have plenty of work...
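The cookie-binding idea above can be sketched roughly like this (the stores and helpers are hypothetical; a real implementation would set an HttpOnly cookie on a dedicated login subdomain):

```python
import secrets

pending = {}  # emailed token -> cookie value expected back

def start_login(set_cookie) -> str:
    """Bind the emailed link to the browser that initiated the login.
    set_cookie stands in for issuing Set-Cookie on the login domain."""
    cookie = secrets.token_urlsafe(16)
    set_cookie(cookie)
    mail_token = secrets.token_urlsafe(32)
    pending[mail_token] = cookie
    return mail_token  # embedded in the emailed link

def finish_login(mail_token: str, presented_cookie: str) -> bool:
    """Complete the session only if the link is opened in the same
    browser that holds the bound cookie. A phisher who relays the link
    to their own browser fails this check."""
    expected = pending.pop(mail_token, None)
    return expected is not None and secrets.compare_digest(expected, presented_cookie)
```

As the comment notes, this only works when the mail client and the login happen in the same cookie jar, which is exactly where the UX trade-off lives.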

apitman
0 replies
19h5m

Passkeys are cool, but not widely deployed yet. Also, there's currently no way to express "give X identity Y access on Z server", unless X identity already has a passkey set up with the server. This is a non starter for decentralized networks with lots of federated and self-hosted servers. This use case is trivial with email.

SgtBastard
3 replies
21h21m

As with sibling comment, what threat vector do you see phishing risk with?

A race condition where the phishing email lands first, user clicks link to g00gle.com, gets a convincing message that they also need to present username and password?

apitman
2 replies
20h54m

See response to sibling

SgtBastard
1 replies
18h13m

Thank you - as sibling also mentioned, what you're describing isn't a magic link but a standard TOTP/HOTP delivered via email, which absolutely is phishable in the manner you described.

Magic link is a process where you enter your email address and the service sends you an email that contains a clickable hyperlink that contains a cryptographically strong, short-lived nonce in the URI that is used as a proof-of-possession factor (the email account) to authenticate users.

apitman
0 replies
15h33m

See third cousin

Macha
0 replies
18h42m

I hate this experience. It's an absolute pain when emails are delayed and sign in fails, or just when some apps fail to persist state when I switch to my email client on mobile.

paxys
11 replies
1d2h

The idea is that the company's Zoom admin should always specify the exact list of users who are allowed to be in their Zoom account. The email domain should have no bearing. So if you sign in as bob@mycorp.com, and that is a valid corporate account with the right permissions, you are let through. If you try bob+foo@mycorp.com, it should always fail. The pattern of "oh they have a mycorp.com email so they are probably legit" was broken from the start.

This is thankfully less of an issue now since everyone is moving to SAML-based logins and SCIM provisioning.
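In other words, authorization should be an exact-match lookup against an admin-maintained roster, never a domain comparison. A tiny illustrative sketch (the provisioning set is hypothetical, standing in for SCIM-provisioned accounts):

```python
# Hypothetical provisioning table: the admin enumerates exact accounts;
# nothing is inferred from the email domain.
provisioned = {"bob@mycorp.com", "alice@mycorp.com"}

def may_join_workspace(email: str) -> bool:
    """Exact-match membership only. 'bob+foo@mycorp.com' is not
    provisioned, so it fails even though the domain matches."""
    return email in provisioned
```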

SergeAx
8 replies
1d

This will help, but at the same time it will ruin the whole idea of seamless registration of corporate users in an external service, damaging adoption and increasing friction.

layer8
7 replies
1d

If a company has no directory of their employees, associated company email addresses, and employment status, they likely have much worse problems.

SergeAx
6 replies
21h52m

Most companies have these directories but in different forms, including Wiki pages. The idea of auto-enrollment is that it is based on a standard and widely adopted OAuth protocol.

layer8
4 replies
21h20m

Employee records as wiki pages? That must be a joke, right?

SergeAx
3 replies
21h7m

Why not? If your company has lots of employees, like tens of them, you may create a nice template for those pages.

IlliOnato
1 replies
20h38m

Why not on a piece of paper then?..

SergeAx
0 replies
20h37m

It's hard to share and update.

layer8
0 replies
18h3m

Because it’s not suitable for machine-processing of employee data. This belongs in a database, Active Directory, or HR software. Do you also do payroll with wiki pages?

faeriechangling
0 replies
21h28m

They sure have those directories in different forms out of sheer need but every time I’ve seen people have to consult wiki pages to get information that should be in a standardized directory it’s a shitshow that is borderline impossible to automate.

tehbeard
0 replies
18h42m

This is thankfully less of an issue now since everyone is moving to SAML-based logins...

Hate to break it to you, but SAML is the same shit with a different coat of paint: the XML encryption/signature/encoding stuff it pulls in makes it just as much a tarpit for bugs and misconfiguration.

SCIM seems pretty decent though, as a way to explicitly state who is and isn't on the guest list.

klodolph
0 replies
1d1h

What I’ve also seen is integrations with a different OIDC endpoint for company X. It’s still OIDC, but it’s not “sign in with Google”.

malfist
0 replies
1d2h

Unless you're sending a verification email for every sign-in...

Isn't that your typical 2FA flow?

tln
4 replies
1d2h

Why couldn't Google prevent signups using domains of Google Apps customers? That doesn't seem like it would break anything.

asmor
3 replies
20h51m

because the collisions can already exist when someone signs up for google workspace. and what are you going to do, delete those accounts? a lot of those would be personal accounts on educational domains...

tln
2 replies
19h42m

That's not a reason to not allow individual signups on @customer.domain AFTER customer.domain is a Google Workspace domain, which is the hole being discussed in the article.

FWIW Adobe actually lets businesses take ownership of individual accounts with the same domain... see https://www.adobe.com/legal/terms.html section 1.4

meandmycode
1 replies
19h21m

Google has an ownership-taking process: the user gets informed, and during their next login their address changes to a gtemp account.

rnotaro
0 replies
2h22m

I used to have a google account with my personal domain then I registered to Google Workspace 4-5 years ago. Since then, I have this message when I log in :

Your account has been modified. The address user@domain.com is no longer available because an organization has reserved rickynotaro.com.

Why is this important now? Don't worry, your data is safe. To use it, you must create an account with a different email address. Your password and security settings will remain unchanged.

Account details: what type of account do you want to use?

An account with Gmail and a new Gmail address. Select this option to add Gmail to this account. Unfortunately, we cannot move your data into an account with an existing Gmail address.

An account using an email address belonging to you, not linked to Google (e.g. myname@yahoo.com). Choose this option if you want Google products except Gmail.

[Continue] [Do it later] Not sure what to do?

I've been pressing "Do it later" for years. I'm still using this account for youtube, maps and other services.

I should probably use a secondary domain and use this address.

Some services prevent me from changing my email address when signing in with Google, and weird behaviour happens when they try to use my user%domain.com@gtempaccount.com address.

Google documents it here: https://support.google.com/accounts/troubleshooter/1699308?h...

krooj
2 replies
1d1h

I feel as though this is a consequence of organizations not really understanding how complex the space truly is. The way I've watched OAuth2 + OIDC get adopted in various companies was never from a security-first perspective; rather, it's always sold as a "feature": login with x, etc. Even when there are moves to make flows more secure - PKCE, for example - you end up playing a game of "whack-a-mole" with various platforms doing shitty things in terms of cookie sharing, redirect handling, and the like. The fundamentals of 3-legged OAuth2 are sound and there's tons of prior art (CAS comes to mind), but the OpenID Foundation should be tarred and feathered for the shitty way they market and sold OIDC.

treve
1 replies
20h3m

OpenID Connect and all its extensions are so high in complexity and scope. The documents themselves are massive and written in quite a hard-to-understand form. I've implemented many protocols and RFCs, so I feel I have some experience here.

Because OpenID Connect and OAuth2 are so closely related, I worry that some of this overengineering is making its way back into new OAuth2 extensions.

I'm worried both will eventually collapse under their own weight, creating a market for a new, simpler challenger and setting us back another 10 years as all this gets reinvented again.

My outside impression is that the OIDC folks are highly productive with really strong domain knowledge and experience, but they're not strong communicators or shepherds with a strong enough vision.

The sad thing is that this is the second thing with the OpenID name that's going down this path. The original OpenID concept was great but also collapsed due to their over-engineering.

krooj
0 replies
13h21m

Agree - you only need to look at things like the hybrid flows to see where things fall apart: why would you issue an id_token that contains user information to a client which hasn't yet fully authenticated itself via a code-to-token exchange (passing its client_id + secret)? If you look at certain implementations, such as Auth0, you'll find that they actually POST that token back to the redirect_uri, since a) it's at least registered against the client, and b) it's not as easy to capture. The spec says NOTHING about protecting this info, though.

Racing0461
2 replies
23h41m

always send a confirmation email from your own side before marking the address as verified in your database

This would render the "login with X" feature useless from a user's point of view.

yardstick
0 replies
21h40m

Would it be useless? Wouldn't the verification email happen just once, after which you gain all the SSO benefits?

ses1984
0 replies
23h40m

You still get the benefit of not remembering an extra password.

SergeAx
0 replies
1d

But in this particular case the attacker will actually receive a confirmation email, so what's the point?

clintonb
34 replies
1d2h

Related: More service providers need to stop using email as the primary identifier (as Google’s docs recommend). When I changed my username on Google Apps, I spent a lot of time dealing with issues at Slack, Datadog, GoLinks, and others.

mattgreenrocks
29 replies
1d2h

What should providers be using?

I've always presumed email was the most stable global identifier for a user, but that assumption appears to be wrong.

apitman
5 replies
1d1h

I disagree with GP. I think email is generally a solid identifier, and would be curious to know why they needed to change theirs.

kagakuninja
4 replies
1d

Because Comcast subscribers lose their email address when they switch providers, and the address becomes available to other subscribers.

This happened to me: I cannot use my Comcast email with Google services, and some others. I need a separate email address for job hunting, because Google Calendar will assume the provided email address is linked to Google services, and the notifications will go to the other guy.

It is really fucking annoying.

apitman
2 replies
21h2m

This is solvable by using a more stable email provider, ideally with your own domain. And yes I know domains need to be much easier for the average person to use (and avoid accidentally losing). That's one reason I run a domain registrar, to try and make this more accessible.

Once someone has their own domain, it also opens up things such as hosting your own IdP (or paying a small monthly fee to have someone else host it for you) and sidestepping email entirely.

JohnFen
1 replies
20h20m

This is solvable by using a more stable email provider, ideally with your own domain.

Sure, but requiring ordinary people to do this is essentially a nonstarter. The whole point of SSO is to minimize user friction. Requiring a user to also set up a special email account with another service is a dramatic increase in friction, and I expect that a large percentage of users simply won't do it. Why would they?

apitman
0 replies
15h34m

The main problem would be solved by using a Gmail account, right?

rngname22
0 replies
1d

also known in the industry as Account Takeover

sirius87
4 replies
1d2h

Curious to hear answers to this too. People forget usernames, and that alone would lead to drop-off. People are also reluctant to give out phone numbers, which are a terrible identifier anyway and have the same issues as email.

rajasimon
3 replies
1d1h

Why can't the phone number be unique? I have a unique phone number and trust it as much as my email, unless I move out of my country.

JohnFen
1 replies
1d

Just like with email addresses, there are a nontrivial number of people who share the same phone number.

SoftTalker
0 replies
23h18m

Phone numbers are also recycled, probably much more often than email addresses.

kahnclusions
0 replies
18h50m

Another edge case is that phone numbers get transferred and reused.

I might verify the number, and then stuff happens in the real world while I haven’t logged into your service, and someone new registers with my number.

The number is assigned to my account, but the new account can prove they own the number. If you’ve made the number unique, then you need special handling here.

deathanatos
4 replies
1d

If you're logging in with OIDC (as is the case w/ the OP), the combination of the issuer and the `sub` claim identify the user (the "subject"). The relying party (the system wanting to authenticate the user) just treats sub as an opaque string (unique within that issuer). The `email` claim is just the "End-User's preferred e-mail address." (And … "The RP MUST NOT rely upon this value being unique" … the end user's email might change … for whatever reason the end-user, or their IDP, might prefer. Also, any other IDP might claim that email as the preferred email, potentially truthfully, too.)

The docs, however, seem to be discussing the notion of the relying party's "user" object. For that … use a UUID, an auto-incrementing int, some artificial key, totally up to you. Link that user object with the (iss, sub) tuple above. But you should consider¹ whether user can adjust their authentication method with whatever you're building: e.g., if I change OIDC providers … can I adjust in your RP what IDP my account is connected to? (Same as I might need to update an email, or a password in a more classic login system.)

(¹The answer is also not always "yes", either; I work with a system where the IDP is pretty much fixed, because it's a party we trust. All signins have to come from that specific IDP, because again, the trust relationship. But, on the open web where I'm just building Kittenstagram, you don't care whether the user is signing in with Hooli's IDP, Joja's, etc.)
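Put concretely, a relying party might key its user records on the (iss, sub) tuple like this (a minimal in-memory sketch; the store names are illustrative):

```python
# Internal user records keyed by an artificial ID; the (iss, sub) pair
# from the ID token links an external identity to that record.
users = {}       # internal_id -> profile dict
identities = {}  # (iss, sub) -> internal_id
next_id = 0

def get_or_create_user(iss, sub, email=None):
    """Look up by the stable (iss, sub) tuple; email is stored only as
    a mutable, informational attribute, never as the key."""
    global next_id
    key = (iss, sub)
    if key not in identities:
        next_id += 1
        identities[key] = next_id
        users[next_id] = {"email": email}
    elif email is not None:
        users[identities[key]]["email"] = email  # refresh preferred email
    return identities[key]
```

With this layout, the same subject keeps the same internal account when their preferred email changes, and an identical sub from a different issuer never collides.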

jbmsf
2 replies
20h43m

I get it, but this is also, frankly, terrible. I should not be required to store your identifiers in my system in order to log in users.

I've always felt that email+email_verified would make much more sense.

I don't actually care about the email address being a unique person, just that they have access to it.

clintonb
1 replies
18h22m

The email address is not guaranteed to be stable.

jbmsf
0 replies
14h39m

I get it, but you're throwing technical specifications at a product/human/application problem.

No one wants to build an application that has to invent its own id scheme or manage this complexity. That fact that the specs don't provide a solution here -- something like informing you when an email address is no longer valid (again, I get it, this is hard/impossible) -- means that the spec will always be in conflict with actual usage.

yardstick
0 replies
21h36m

This is how we do it: sub+issuer associated with an account in our system. The user is issued a username for our system; they enter that, then they are presented with the login options (e.g. password, IdP providers, etc.). This also forces the customer organisation to specify exactly who they want to have access (which, in an org with 10k+ employees of which only a few dozen need to log in to us, is a good thing).

Plus this approach allows multiple accounts each associated with the same IDP account. Useful if the user needs multiple accounts for whatever reason.

lykahb
3 replies
1d1h

The subject identifier, "sub".

merb
2 replies
1d

Microsoft basically has no useful sub (sub is only useful when it comes to app credentials for Microsoft). It has oid, but if you want to support n providers with just a login field and not a "sign in with X" button, then you have a problem. Or should the user know their internal id and insert it into the username field? Most of the time you use an email, and what Microsoft does in C# and their docs means that you connect that with the oid. And that's also why it's stupid that Microsoft and Google do not treat the email/preferred_username identifier that well. Because everybody changes the OIDC spec.

frenchyatwork
1 replies
21h50m

Is Microsoft's sub claim unstable?

I think you might have misunderstood the point. Miscellaneous claims like email/preferred_username shouldn't be used to identify 3rd-party logins. Apart from not necessarily being unique, they're also vulnerable to change. Changing your email shouldn't make you lose access to all your accounts. The point of the sub claim is that it's unique and stable.

merb
0 replies
21h0m

No, the sub is not unstable, it's just that the sub is unique per client_id.

Yeah, I know that. We basically do both. You create the account with the email/UPN, but we also save the oid, and then we use the oid for matching. If the email changes, we update it. If you started your account without the provider and then somebody configured domain + tenant id, we first match via UPN, and after the first login it will use the oid. The user still uses the UPN to start the flow, but the matching uses the oid. We are only dealing with B2B, though. And we have our own login site that of course needs a UPN as well, thus the UPN on the Microsoft side is the same as ours. If you change the UPN on the Microsoft side, you need to change the login UPN on our side as well. Another solution would've been to have a unique logon site; in that case it would be possible to go directly to the IdP, but it does not matter that much with login_hint.

kagakuninja
3 replies
1d

I have a Comcast email named after an old movie monster. Turns out that a former Comcast customer once had the same account and registered it with Google (and others). Not only can I not use that address with Google services, but creating Google Calendar events using my email will send the notification to the other guy (as a result, I created another email address for job interviews).

Every year I get several notifications that this guy has done something with his x-box, or registered a new device for something or other. It is absolutely nuts, and companies like Google refuse to let people like myself claim our own email addresses.

philote
2 replies
1d

If you own the Comcast email address, how is he registering anything with it? Wouldn't he need access to the email to verify his accounts?

kagakuninja
0 replies
21h2m

He used to own the account, now I own it. I've tried resetting the google account password and the like, there seems to be nothing I can do.

bobthecowboy
0 replies
23h43m

Because a shocking number of services don't actually require you to verify you control the email address. I'm talking about mainstream services like Spotify.

I'm in a similar situation, except in my case, the person providing the wrong email address never controlled it - it's firstnamelastname@gmail.com, which I signed up for when gmail was still invite only.

If I go to sign up for a service that someone else has signed up for with my email, I just do a password reset, and take control of the account. Either by transferring it to another email address that doesn't exist and then creating a new account, or if that doesn't work I just nuke the data.

crote
2 replies
1d2h

Just use an integer or GUID or something as primary key. It is still totally fine to use an email address as username, of course - just keep a separate email-to-user mapping and don't use the email itself as primary key.

Treat the email address like a name field: it's probably not going to change, but don't make it impossible to do so when someone wants to.
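A minimal sketch of that layout, using SQLite for illustration (the table and column names are hypothetical):

```python
import sqlite3

# Surrogate integer primary key; the email lives in its own column with
# a UNIQUE constraint, so it stays usable as a login name but can be
# changed in place without touching anything that references the user.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,      -- stable surrogate key
        email TEXT NOT NULL UNIQUE      -- login name, mutable
    );
    CREATE TABLE posts (
        user_id INTEGER REFERENCES users(id)  -- FKs point at id, not email
    );
""")

db.execute("INSERT INTO users (email) VALUES (?)", ("old@example.com",))
uid = db.execute(
    "SELECT id FROM users WHERE email = ?", ("old@example.com",)
).fetchone()[0]

# Changing the address touches one row; every foreign key stays valid.
db.execute("UPDATE users SET email = ? WHERE id = ?", ("new@example.com", uid))
```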

andygeorge
1 replies
1d1h

Just use an integer or GUID or something as primary key

We're not talking random apps and services, we're talking about the big providers that are commonly used for SSO, where "just change ur primary key" is wildly impractical at best, and more likely impossible at their scale. That ship, as it were, has already sailed.

saurik
0 replies
1d1h

Those providers already don't use email addresses as their primary key: their login APIs all allow you to get access to an underlying ID of the user (I say "an" as, in the case of Facebook at least, they no longer give you the global one but map you to an application-specific one, to prevent 3rd party apps doing correlation).

smeyer
0 replies
22h44m

I changed my last name when I got married, and at the time I had a work email address that looked like oldlastname@company.com on the company's Google Workspace. Given that oldlastname was no longer my name, I changed my email address to newlastname@company.com.

This worked fine in Google services and some of the many work applications using Google for auth, but some of them were using the email address as a global identifier in a way that broke down when I changed my name. The services that migrated successfully were using a more stable identifier that persisted despite the address change.

JohnFen
0 replies
1d

I've always presumed email was the most stable global identifier for a user, but that assumption appears to be wrong.

It might be the most stable, but that doesn't mean you can assume that it's at all stable.

Also, email addresses can't be assumed to uniquely identify a single person -- lots of people share email accounts with others.

jbmsf
3 replies
20h29m

I can't agree. This guidance just pushes responsibility down onto application developers and I would expect most of them to either do nothing or implement guidance inconsistently.

The right thing here is to offer APIs that fit the needs of the applications that will use them with as little extra responsibility as possible.

In this case, I'd have hoped that Google would set email_verified to false so that applications (or downstream IDPs) would know that they had to do extra verification.

clintonb
2 replies
17h23m

The OIDC spec does provide a stable identifier, and Google implements it properly. My Google Workspace address will always be verified, so your solution doesn’t apply. Changing my email address at Google itself won’t unverify it. Also, you’re still relying on developers to read the docs and properly implement the spec.

jbmsf
1 replies
14h37m

I think you missed the point. No application developer wants to use "sub" as the identifier; they want to use "email" or "phone" because these a) are actual ways to message a human and b) do not require a deep understanding of any technical spec to do the intuitively obvious thing.

I am not saying that my solution works today. I am saying that is a completely natural thing to want and the fact that it doesn't work or that we're even having this discussion is failure of the people who designed and implemented these specs.

clintonb
0 replies
9h31m

I don’t see it as a failure of the spec, but developers failing to read said spec. By the way, I’m a developer who does want a stable ID for users authenticating via third-parties. The fact is that email addresses and phone numbers can change, and should not be considered stable identifiers. If folks want to extract that information from an ID token, they can; but, don’t use them as a primary key.

No deeper understanding required.

cedws
26 replies
1d3h

Is it just me that feels $1337 is an insult? FAANG pays way too low bounties for this kind of stuff. This kind of info would be much more valuable on the black market.

thrdbndndn
5 replies
1d3h

From a practical perspective, they probably should "match" what the black market values these exploits at, and I surely wish they would give much higher bounties in general (they can certainly afford to!), but I don't think they are ethically obligated to (so it's not an insult in my view).

Selling these exploits/vulnerabilities on the black market is not only immoral but also highly illegal, so the "value" there is inflated by those "risk" factors. You can't really expect the same from the affected company itself.

It's like finding a lost item and demanding a large sum from the owner when you return it, because "I could get much more if I chose to just sell it on the street."

pcthrowaway
3 replies
1d2h

Turning these exploits/vulnerabilities to black market is not only immoral but also highly illegal

I assumed 'black market' here means irresponsible disclosure, for which there are many sites operating legally (Zerodium being a prime example).

Who are the customers? Theoretically nation-state actors, but do we really know? Either way, you're selling the vulnerability to a private party. To my knowledge, selling knowledge of an exploit to almost anyone is legal (unless it could be classified treason or a threat to national security or something).

As is publishing the security research after responsibly disclosing (as the blog author did here), though we've had to fight pretty hard to get to the point where warning people of threats to their digital safety (often because companies are too lazy to protect their users) is generally understood to be legal.

thrdbndndn
1 replies
1d2h

I'm not a legal expert, but is it necessary for an act to pose a threat to "national" security for it to be considered illegal in places like the United States?

In my country, we have a law known as "The Crime of Destroying Computer Information Systems." This law makes it a criminal offense to intentionally harm computer systems in a way that could compromise them (which is somewhat vague in its definition, I'd admit). This includes leaking private information from these systems, and it applies even if the affected systems belong to private entities. And if you sell exploits to a third party and are later caught, you will be considered an accomplice and there are precedents for this.

progmetaldev
0 replies
22h49m

The United States has similar laws in place. There have even been cases where people were convicted for responsible disclosure, since they had to circumvent the system to determine that there was indeed an exploit. It's not as common as it used to be, but there are plenty of small financial firms that would still go after someone reporting an exploit.

tptacek
0 replies
1d2h

Zerodium isn't going to pay you $133.70 for this.

aliceryhl
0 replies
1d1h

If you pay too much in bounties, you risk having your own red-team employees leave so that they can report bugs externally and get paid much more via bounties.

jjice
4 replies
1d3h

Nope, I'm with you. Based on the quick blurb about what the vuln was, $1337 is an absolute steal for Google. Paying for a team or outside pentesters to attempt to find this would be _way_ more expensive.

twisteriffic
1 replies
1d3h

Especially when Microsoft paid out about 75k for essentially the same issue.

agwa
0 replies
1d2h

Did Microsoft pay the entire $75k? The people who found that issue reported it to multiple stakeholders, and their blog post[1] merely says they were awarded $75k in total. I assume the bulk of the bounties were paid by the service providers who failed to heed the warning in Microsoft's documentation.

Also, the Microsoft issue was far worse as it could be exploited by anyone; the Google issue requires a rogue employee or a misconfigured email ticketing system.

[1] https://www.descope.com/blog/post/noauth

nonethewiser
0 replies
1d3h

It’s essentially 0 as far as they’re concerned.

Cthulhu_
0 replies
1d3h

Paying for a team or outside pentesters to attempt to find this would be _way_ more expensive.

But doesn't Google have teams of internal pentesters already? You could hire dozens of external companies and they might not find it.

This system is a "no cure, no pay" approach. I do think they should have paid the reporter a lot more though.

lightedman
3 replies
1d3h

"Is it just me that feels $1337 is an insult?"

Y0U 4R3N7 31173 3N0U6H 70 C47CH 7H3 R3F3R3NC3

poizan42
1 replies
1d3h

They could have given them $313373 or at least $31337 instead.

pests
0 replies
23h52m

A third of a million? You must be sky high.

esafak
0 replies
1d3h

So you're saying it's a joke on multiple levels.

agwa
3 replies
1d3h

To be clear, the information shows a rogue employee how to create accounts in third-party apps (Slack, Zoom, etc.) that won't be automatically deleted when the employee is terminated. I'd love to hear why you think this information would be "much more valuable" than $1337 on the black market as that is not obvious to me.

Also, if anyone should be paying bounties, it's the third-party apps, since they're the ones which are vulnerable. I'm impressed Google is paying a bounty just for pointing out a footgun. I would probably not have bothered reporting this to Google if I had found it; $1337 would be more of a pleasant surprise to me than an "insult".

dpedu
1 replies
21h32m

Because these non-Gmail Google accounts aren’t actually a member of the Google organization, they won’t show up in any administrator settings, or user Google lists.

That's why. This bug allows an attacker to retain access to various accounts attached to an already-compromised company or employee of the company. Not only that, but the retention is completely invisible to the account administrators.

Needing the same level of access that an employee has in order to utilize it doesn't make it less valuable. There are plenty of valuable bugs that can only be utilized from specific positions. Consider how many hacks have happened because an employee's devices or accounts were compromised, rather than some server system that no one individual owns. The recent Okta hack happened that way.

agwa
0 replies
20h54m

The rogue accounts would show up in the administrative settings in the third-party apps, and they would stick out like a sore thumb because they'd have weird email addresses. So they're not completely invisible, albeit not visible from one central place.

Needing the same level of access that an employee has in order to utilize it doesn't make it less valuable.

The only way that would be true is if compromising an employee account has no cost, which is obviously not the case. Thus, attackers would prefer to purchase a vulnerability that doesn't require also compromising an employee account.

I trust tptacek is correct that Zerodium wouldn't even pay $133.70 for this: https://news.ycombinator.com/item?id=38722395

paxys
0 replies
1d2h

In fact I'd argue that Google paying a bug bounty for something that is well-defined and documented behavior and will never be "fixed" actually undermines the program.

sp332
1 replies
1d3h

Google hasn't fixed it, so it seems they really don't value this info.

dpedu
0 replies
21h30m

Per the article, Google fixed it, but only for google.com accounts.

vultour
0 replies
1d2h

Buy why should Google pay them at all? One of the first screenshots of their documentation says you shouldn't trust the email claim, so they're obviously aware of this issue. The problem is third parties using Google's OAuth incorrectly. If anything, Slack/Zoom/etc should be paying.

tokai
0 replies
1d3h

Could at least have been $31337.

toasted-subs
0 replies
23h56m

Eh, depends on whether the person is financially stable. The tongue-in-cheek number may stand out stronger on a resume.

rahkiin
0 replies
1d3h

Might this be because to be actually vulnerable a company needs to have the ticketing-like system in a sort-of unsafe setup?

dtx1
0 replies
1d3h

I'd certainly try to sell the exploit to someone else instead.

rwmj
16 replies
1d3h

My main takeaway from this is that web authentication is still a horrible mess.

c0pium
12 replies
1d2h

…because people don’t read the docs and instead just assume that it works how they think it should.

asylteltine
8 replies
1d2h

If so many people are making the same mistakes, it’s your fault, not the users.

c0pium
6 replies
1d1h

If people don’t read, it’s their fault. Reading the docs is not a big ask.

asylteltine
4 replies
1d1h

Typical engineer spotted.

pests
2 replies
1d1h

How dare we read documents before we put technology into use.

Like asking a bridge engineer to know the spec of his bolts.

asylteltine
1 replies
22h5m

And if every purchaser of said bolts implemented them incorrectly, is it more likely that your specs, docs, or bolt design are faulty? Or do you just think no it’s everyone else who is stupid?

c0pium
0 replies
19h13m

Except in this case the vast majority of people have read the docs and then gone on to implement them correctly.

I’m sorry that you wrote buggy code because you didn’t read. But it’s the height of arrogance to assume that because you did, everyone else surely must have as well.

scottyah
0 replies
1d

I don't think that's engineer so much as lazy bureaucrat in power. It's the mentality Douglas Adams made fun of with the "Your planet is being destroyed, you had plenty of time to read the posted notice".

apitman
0 replies
1d1h

Documentation (including code comments) are vital and important, but it's far better to bake the proper constraints into the code/specs so that it's hard to make mistakes. Then the docs are less necessary, shorter, and easier to understand. I think this is what GP is getting at.

wnevets
0 replies
1d1h

Sounds like a classic footgun to me.

toasted-subs
0 replies
23h57m

This is why login is a horrid mess. Because if it's too easy then people who don't know what they are doing set up websites.

physicsguy
0 replies
1d1h

Half of it I think is because people take "basic auth" offered by web framework, and then try to retrofit OAuth/OIDC/SSO on top of it.

Too
0 replies
23h11m

Have you seen the oauth docs? I can’t imagine anyone having read and understood them fully, unless you dedicate your life to it.

Dalewyn
2 replies
1d

When some people ask why most of us sane and practical folks still use and demand simple password authentication, it's because passwords fucking work.

lesuorac
0 replies
1d

I'm still firmly in the mutual TLS camp. Nothing is easier than never having to type in a password, and good luck cracking TLS.

JohnFen
0 replies
1d

So much this. OAuth and their ilk are, in my opinion, not trustable and suffer from real usability issues.

apitman
14 replies
1d3h

I remember being surprised to learn that Microsoft would send Email claims that were not created or validated by Microsoft, and that the email claim in general was not considered reliable.

This was counter-intuitive to me, because I had thought the entire purpose of OIDC was to establish reliable identity via a 3rd party like Microsoft.

That might have been the original intent, but I find it very useful that OIDC can be more flexible. For example, I run a free login provider[0], and it works by validating an identity with a 4th party identity provider (IdP) either with upstream OIDC or direct email, and creating a privacy screen between the app and that IdP (ie so Google can't track every app you're logging into). The fact that you can bring your own email to Google means you can get the security and UX of Google OIDC with the privacy of email + password, with the huge caveat that now you have to trust LastLogin instead of Google. But we're working on protocols to reduce that dependence.

Google’s documentation in fact warned against using Email as an identifier

I completely disagree with Google on this. Email is the only truly federated identity that people actually use. Until we have something better widely deployed (and there are some promising alternatives in the works), I believe email addresses should be treated as identities.

[0]: https://lastlogin.io

caseysoftware
6 replies
1d

I believe email addresses should be treated as identities.

No, for two reasons:

Outside of the western world, phones are more common than computers and easier UIs in general so a phone number is more likely to be their identity.

In addition, that means you're completely handing off your identity to your email provider. Considering many - looking at you google - are faceless organizations that can and will shut down your access without notice or appeal, you could lose everything.

Background: I launched Okta's OAuth and OIDC products, put together LinkedIn's courses on the same, and doing it again at Pangea Cyber.

bastawhiz
3 replies
1d

Outside of the western world, phones are more common than computers and easier UIs in general so a phone number is more likely to be their identity.

When I worked at Stripe, we found that far more people lost access to phone numbers than email addresses. And the reason is simple: if you can't pay your phone bill, you lose your phone number.

While going through support tickets to tabulate which auth issues we should focus on, I came across one person who had a Stripe balance that they needed to feed their kid. But they couldn't log in because they couldn't pay their phone bill and had lost their number. The very fine support folks got the situation resolved with other identity checks, but it was a huge wakeup call.

You simply _cannot_ use an identifier that requires ongoing payment for identity purposes. You and I are probably privileged enough to never have to worry about this, but everyone who falls below the lower middle class is entirely vulnerable to losing _everything_ this way.

you're completely handing off your identity to your email provider. Considering many - looking at you google - are faceless organizations that can and will shut down your access without notice or appeal, you could lose everything.

Versus handing off your phone number to organizations that routinely get socially engineered to transfer phone numbers. This is such a common attack that my mom knows about it. Ironically, the facelessness of most email providers also protects you from having your identity yoinked out from under you by one of their staff: I don't personally know a single person who's had their email turned off as a result of social engineering.

mtaba
0 replies
23h36m

Plenty of other reasons to "lose" a phone number. Especially temporarily. Accounts deemed inactive. Locked devices, in some cases.

What makes the situation intolerable is the proliferation of Google-inspired "customer service" designed to prevent any prospect of useful contact with paying customers. Kafka-esque nightmares are currently an everyday hazard.

lostmsu
0 replies
13h39m

Another common issue is moving countries which often changes your phone number.

caseysoftware
0 replies
23h28m

I'm not advocating for phone number, simply saying that "assuming/forcing email is insufficient"

Giving people the option between phone, email, or whatever is a better approach so they can plan accordingly.

apitman
0 replies
15h24m

Unless you're using PKC, you're completely handing off your identity to someone. The question is what's your threat model for having your identity taken from you. For me I've decided I trust DNS, because if it fails we probably have bigger problems. So email on a custom domain is as strong an identity as I require. I see a Gmail account as good enough for most people, but not for me personally.

Ideally I would like to see people hosting their own IdP servers from their laptops at home over something like ngrok but e2ee, but we have a ways to go for that.

Aside: given your background at Okta and ngrok, we have a lot of overlapping work. I'm curious of your thoughts on my LastLogin.io project?

Jnr
0 replies
23h25m

Sadly more and more countries require mobile service providers to check identity of the user before providing a phone number. Not everyone wants to be easily identifiable IRL. Meanwhile, unique e-mail addresses are available by anyone without proof of identity.

And the goal more often is to identify a unique user, not have a user that can be traced back to real life.

k8svet
2 replies
1d

I think it would be really great if you could compare/contrast LastLogin with Dex. Especially given they're both Go. Just curious if you evaluated dex, etc.

Also toyed with... Portier (nee Mozilla Persona). But ultimately dealing with this email normalization seemed like a losing/hard battle. And I mostly accepted that I'll likely die before a good solution is pioneered here and have refocus attention elsewhere. :/

apitman
1 replies
22h12m
k8svet
0 replies
21h11m

Hats off, what a README.

throwawaaarrgh
1 replies
1d2h

I believe email addresses should be treated as identities.

Because nobody ever shares an email address, and email is super secure

We can wait forever or we can begin building the solution

apitman
0 replies
1d2h

Sharing email identities is a reasonable way to give group access to a resource.

And solutions are being built. The mission of LastLogin is to accelerate this.

strken
1 replies
19h15m

There's an important difference between email as an identifier in your own system, versus email as an identifier for a linked Google account. I agree with you that email works okay as an identity within a system you control, but agree with Google that it's a terrible way of linking with an external system: ephemeral, potentially tied to multiple accounts (!), and pointless when you've got an actual ID to use instead.

apitman
0 replies
15h18m

That's fair. I'm coming at this from the perspective that using Google SSO is great for UX, but a bad idea for privacy and vendor lock-in reasons.

I built LastLogin.io to provide the SSO UX without sacrificing privacy and to eventually move towards better protocols for user-controlled identity

dividuum
7 replies
1d3h

Why is it even possible to create a new Google account with an email like 'user+suffix@domain' if 'user@domain' is already handled by google's mail servers and thus applies the plus-routing rules? Even in the non-exploity case that seems like a great way to create confusing mail setups.

Racing0461
3 replies
23h37m

user+suffix@domain

It's even worse than that. At least the +XYZ is specified in the email RFC. Google has further decided that periods in the name also map to the canonical email, i.e. hi.my.name@gmail.com is equal to himyname@gmail.com and routes all emails to the second.
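As a rough illustration of those aliasing rules, here is a sketch of Gmail-style canonicalization (case-folding, stripping a +suffix, dropping dots in the local part). This is an illustrative helper for consumer gmail.com addresses, not an official API, and `canonicalize_gmail` is a hypothetical name:

```python
def canonicalize_gmail(address: str) -> str:
    """Collapse a Gmail-style alias to its canonical mailbox:
    case-fold, strip any +suffix routing tag, and drop dots
    in the local part (Gmail ignores them)."""
    local, _, domain = address.lower().partition("@")
    local = local.split("+", 1)[0]   # "himyname+promo" -> "himyname"
    local = local.replace(".", "")   # "hi.my.name" -> "himyname"
    return f"{local}@{domain}"

# Both aliases collapse to the same canonical mailbox:
print(canonicalize_gmail("Hi.My.Name+promo@gmail.com"))  # himyname@gmail.com
```

Two addresses that canonicalize to the same mailbox are the same Google inbox, even though they are distinct strings to any third party comparing email claims byte-for-byte.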

paulddraper
0 replies
19h39m

Google (and most email providers) also treat the user portion as case-insensitive.

hudell
0 replies
22h47m

I could swear I at least once received an email that was sent to myname.mydomain@gmail.com or something similar in my myname@mydomain email. It's been several years but I remember thinking that was fucked up and looking into the full email to see if there was any other explanation for me receiving it, which I did not find.

Too
0 replies
23h16m

Another fun one is upper case vs lower case. I’ve been bitten by systems that are case sensitive, while the rest of the email world mostly is not.

paxys
0 replies
1d3h

A domain can freely move between mail servers. Google has specific handling for a+b@domain.com, other servers might not. At the end of the day they are two unique email addresses, and that's how they should be treated across the internet.

hypeatei
0 replies
1d3h

I think this aliasing feature is too complex for its own good. Especially at Google's scale.

asylteltine
0 replies
1d2h

Because of how old and legacy Google's authorization system is. A "Google account" is just a string.

theK
5 replies
1d3h

* August 7th - The issue was triaged

* October 5th - Google paid $1337 for the issue

Is it just me or does it seem a bit odd that payout after triage took almost two full months? Initially I was positively surprised that they came up with a triage verdict within 2-3 days but what's the deal with the payout coming so late?

rwmj
2 replies
1d3h

It's pretty normal for large companies to take ages to pay up. The real problem here is this major bug only elicited a token $1337 payment.

toasted-subs
0 replies
23h50m

Yeah, definitely should have had a higher payout.

theK
0 replies
22h7m

The real problem here is this major bug only elicited a token $1337 payment.

Agreed.

j0hnyl
1 replies
1d3h

Not sure about Google VRP, but I've gotten multiple payouts from Chrome over the years and I believe there's a schedule. The rewards panel meets every x weeks in order to award payouts on qualifying reports. Almost no bug bounty programs pay upon triage by the way, they pay after resolution.

imroot
0 replies
1d

I run a bug bounty program and I pay upon successful triage: while our engineering teams do have security SLAs, it's not fair for whoever reported the vulnerability to have to wait on our (sometimes broken) processes in order to be paid.

superkuh
2 replies
1d2h

A microcosm. There is no such thing as OAuth2. OAuth2 is just a way for megacorps to implement their own arbitrary/proprietary auth systems. It is a toolkit for making auth systems, not an auth system. So we end up in a world where oauth2 was supposed to be a standard, but instead every megacorp has their own incompatible implementation. And sure, people will dev for the megacorp use cases... but that just means the "standard" is now whatever google does/etc.

And they all have their own little bugs like this. We should go back to oauth1. It was a real standard. Not a toolkit for making standards.

paulddraper
0 replies
19h37m

Um...okay?

Is cross-provider compatibility related to this article about the security of Google OAuth2?

kevindamm
0 replies
1d

I wouldn't really call this a bug, more like an unfortunate side effect of combining these particular components: domain names that can change ownership, BYO email (as backup email & email provenance), the liberal allowance of plus-aliases (which I'm sure someone somewhere is claiming a business need or they would have killed it long ago), and service implementers not reading the documentation (or largely copying solutions from a video or example with cut corners for brevity/simplicity, likely to facilitate its easy consumption).

If I were designing a circuit with a few PCB components and needed to introduce resistors and transistors as appropriate for the voltage and current needs of the device .. would you expect me to read the data sheet or just guess it from a simpler example and run with it? In a lot of cases the circuit would still work, or it would after a few bench tests and a bit of probing. But maybe it wouldn't be as efficient and a component would short out leading to low MTF and sad customers. Worst case scenario maybe combusting batteries and real harm. Now ask yourself, is it really the fault of the PCB modules' manufacturers that the device fails prematurely? Or is the device manufacturer the one responsible for reading the data sheet?

I don't typically hold all software to such rigorous expectations but when it deals with authentication and authorization I would expect service owners to be thorough.

TFA even says that the issue doesn't exist if the docs are followed. Alphabet did at least acknowledge there's a weakness there by granting the bounty, maybe they'll provide some controls for company administrators to allowlist/rejectlist plus-aliases or nonexistent roles, or maybe restrict the migration of Apps-affiliated emails to non-org claims? (My guess is they're measuring the impact of this, or prioritizing the measurement of impact, where priority is low because it is a problem with clients that assume email claims are more authoritative and permanent than they actually are).

I suppose the definition of "bug" depends a lot on the definition of "expected" and who's expecting, but I would assert it is not a deviation from intended behavior, at least, and not unexpected to those who grokked the docs.

rowls66
2 replies
1d3h

Is this really a Google OAuth issue, or more failure my many service providers to properly verify the OAuth token assertions before allowing access? Seems to me the latter.

mikea1
0 replies
1d

I believe OAuth is working as expected. It provides valid authentication/identity for email addresses because "user@domain" and "user+wildcard@domain" are still validated as email addresses "owned" by the user.

The issue is with the Google org website: admins cannot revoke credentials for accounts/emails they cannot see.

Because these non-Gmail Google accounts aren’t actually a member of the Google organization, they won’t show up in any administrator settings, or user Google lists.
kr0bat
0 replies
1d2h

It sounds like the issue is that these service providers are obeying Google's aliasing rules, but also ignoring the fact that you shouldn't be using email as a primary identifier [1]? It's funny: if they had adhered to the spec more closely they'd be fine; but even if they had adhered less and treated aliases as distinct emails, these platforms would at least be more secure.

[1] https://developers.google.com/identity/openid-connect/openid...

CaptainOfCoit
2 replies
1d3h

Today I’m publicizing a Google OAuth vulnerability that allows employees at companies to retain indefinite access to applications like Slack and Zoom, after they’re off-boarded and removed from their company’s Google organization. The vulnerability is easy for a non-technical audience to understand and exploit.

October 5th- Google paid $1337 for the issue

Is that a joke? Does Google really value security so low?

bawolff
1 replies
1d3h

If i read the post right, the behaviour in question was already mentioned in the docs before they reported this. I'm more surprised they got any money instead of a "its a feature not a bug" response.

chuckadams
0 replies
23h57m

Exactly. They paid for a detailed example they can point to of why one should follow the docs. Lot cheaper than a tech writer.

wizofaus
1 replies
1d

Anyone else unable to access what I assume are supposed to be links for the centred bits of text throughout that article? I had a hard time understanding parts of it...

wizofaus
0 replies
9h22m

Turns out, once I got to read it on desktop machine, they weren't links at all but captions for images that weren't showing for me on mobile.

physicsguy
1 replies
1d1h

The OIDC spec tells you that you must not use e-mail as a unique identifier. You must use the 'iss' and 'sub' fields as the username in your application.

Why? Well, for a start, it's obvious that user e-mail addresses can be re-used. If you've got a contractor working for Business A and Business B, both who create a user account in their authentication service for them, then as a SaaS platform, you can't match their e-mail address to a single B2B customer.

Secondly, there's the really obvious thing that e-mail addresses change. Businesses get bought, change name, go through mergers, etc. etc., and people's names change too (marriage, divorce, because they feel like it).

I found implementing SSO to be really challenging for a start-up. Getting it correct is hard, and you need to have a good understanding of the general concepts and OIDC and OAuth2 before trying to put it into use. Auth0 have a good book. If you don't understand this, then you'll probably end up doing something like implementing password grant auth everywhere and leave your application insecure.
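A minimal sketch of what that rule looks like in practice: derive the account key from the `iss` and `sub` claims and treat the email claim as display-only. The helper name and the claim values are illustrative, not from any particular library:

```python
def stable_user_key(claims: dict) -> str:
    """Build a stable account key from an OIDC ID token's claims.
    Per the OIDC spec, only the (iss, sub) pair is guaranteed
    unique and stable; the email claim can change or be reassigned."""
    return f"{claims['iss']}|{claims['sub']}"

claims = {
    "iss": "https://accounts.google.com",
    "sub": "10769150350006150715",     # opaque, provider-assigned ID
    "email": "jsmith@example.com",     # display only, never a lookup key
}
print(stable_user_key(claims))  # https://accounts.google.com|10769150350006150715
```

With this scheme a user who changes their email address, or a contractor whose address exists under two different customers, still maps to exactly one account per identity provider.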

k8svet
0 replies
1d

Sometimes reading articles like these is a good way to alleviate any accumulating imposter syndrome. "Oh, I'm interfacing with a third party system for something that represents an abstract actor. It better have a stable, non-stringy, reliable identifier." And yes, the spec is very clear about this, beyond just basic considerations of building a remotely robust system.

egamirorrim
1 replies
21h51m

What am I missing here? Outside of the support system/zendesk and unattended old domain methods identified I can't make a new Google account for whatever@mydomain.com without being asked to verify it - so what's the real likelihood of abuse?

JeremyNT
0 replies
21h34m

The idea is that you do this in advance, at a time when you have legitimate access, then you later lose that access.

So say you have a egamirorrim@mydomain.com google account legitimately. You can use an alias like egamirorrim+woopsie@mydomain.com to create a new google account with a verified email address, resulting in "log in with google" google sending an email claim egamirorrim+woopsie@mydomain.com.

Then, later, mydomain.com fires you. You can no longer log in with the real egamirorrim@mydomain.com associated account, as it was disabled by an administrator. However you can still log into the new google account, egamirorrim+woopsie@mydomain.com , since it's not associated with your organization.

The thing is, afaict this then only becomes a problem if the provider is doing authz based exclusively off of the email claim. I've used OIDC in the past and you are not supposed to grant access to resources based on parsing text in the email address claim!

I can understand why the blog post author found this counterintuitive, but as they note the docs even warn against doing this.

The blog post goes on to make this statement:

Most of the service providers I tested did not use HD, they used the email claim.

... OK, well what are they "using" it for? Does this trick actually work on any real world services? If so, I would like for them to be named and shamed.

Even if you (erroneously) assume this value is unique and immutable, that alone doesn't necessarily grant access to anything in and of itself.
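One way a service provider could avoid the trap described above is to key Workspace membership off the `hd` claim rather than substring-matching the email. A sketch, assuming (as the blog suggests) that the rogue plus-alias account is not a Workspace member and therefore carries no `hd` claim; `belongs_to_org` is a hypothetical helper and the claims dicts reuse the example addresses from this thread:

```python
def belongs_to_org(claims: dict, org_domain: str) -> bool:
    """Decide org membership from the hd (hosted domain) claim,
    never by parsing the email string. A plus-alias account whose
    email merely *looks* corporate has no hd claim and is rejected."""
    return claims.get("hd") == org_domain

# Rogue plus-alias account: corporate-looking email, but no hd claim.
rogue = {"email": "egamirorrim+woopsie@mydomain.com", "email_verified": True}
# Legitimate Workspace account: hd claim asserted by Google.
legit = {"email": "egamirorrim@mydomain.com", "hd": "mydomain.com"}

print(belongs_to_org(rogue, "mydomain.com"))  # False
print(belongs_to_org(legit, "mydomain.com"))  # True
```

This check still has to sit behind full ID token signature and `aud`/`iss` validation; it only replaces the "ends with @mydomain.com" shortcut that the blog post found most providers using.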

VoodooJuJu
1 replies
22h8m

I'm struggling to understand how to reproduce this.

I have a Google Workspace organization: org.com

I create a new user: sneed@org.com

Where/when/how is the sneed+alias@org.com created? How is the user sneed@org.com doing this if they don't have administrative access to organization management?

uxp8u61q
0 replies
21h37m

Just try it, google will merrily redirect all emails to a+b@org.com to a@org.com. To answer your questions:

Where/when/how is the sneed+alias@org.com created?

Where: in gmail's alias list

When: at account creation

How: gmail/gw redirect everything of the form abc+xyz@domain to abc@domain

How is the user sneed@org.com doing this if they don't have administrative access to organization management?

The user isn't doing anything, it's a "feature" of google mail.

SergeAx
1 replies
1d

Not that I am being vain (though I actually am), but how is it that my post of the same link 4 days ago got just one upvote? :)

https://news.ycombinator.com/item?id=38670644

JohnFen
0 replies
1d

Whether or not a link gets any attention here depends on a lot of things aside from the link itself. For instance, if it happens to hit the front page at the same time as something else that is drawing everyone's attention then it may simply go unnoticed.

Also, although this isn't the case here, if it's just a link without a description or some sort of commentary explaining why the link is of interest, it may not get the traction you'd expect.

theteapot
0 replies
20h3m

What's the actual vulnerability? Steps to reproduce?

sam0x17
0 replies
1d

October 5th- Google paid $1337 for the issue

love the tongue-in-cheek "leet" amount of 1337

ok123456
0 replies
23h58m

This is a feature, not a bug. Anyway, what is to stop someone who owns a domain from using actual forwarding addresses for this sort of ban evasion?

merb
0 replies
1d2h

Well the best solution is basically to allow the creation of the account but keep it deactivated so that a human needs to check it. That at least works for things like GitLab or other services where an organisation signs up. The problem with the hd claim is actually not one, since you need to validate your domain, and if you're a SaaS provider that is B2B only, that's OK. Microsoft is even worse though, where you need a different claim than email, depending on what you are doing (UPN).

intrasight
0 replies
23h38m

I use OAuth2 all the time. And I don't understand the conflation of email and OAuth2 being discussed in that article. With Google's OAuth2, I can get the user's email - but so what? I have no need for it and I never use it.

"Because these non-Gmail Google accounts aren’t actually a member of the Google organization, they won’t show up in any administrator settings, or user Google lists."

I don't understand that statement either. They do show up. Now of course the org could choose to not do anything to manage the access of those users - which is common enough. I made a tool used by some of my larger clients to a) get reports of users and their permissions (available via Google's APIs) and b) batch delete those user permissions.

LelouBil
0 replies
1d3h

Following this flow, you can create a Google account using a support ticket email address, potentially view the contents of the ticket to finish the account creation, and start using the support email address to Oauth into stuff.

That could impact lots of small companies.

FergusArgyll
0 replies
23h53m

I may or may not have used a version of this to sign up for multiple free trials, for multiple products.

sign up johndoe@gmail.com then next time johndoe+1@gmail.com then +2 ad infinitum

AtNightWeCode
0 replies
20h47m

Using emails like this is usually found during pen tests. Not Google's fault, I would say, even though I think OAuth is overly complicated. This is along the lines of sending secrets in the token. Tokens are signed, not encrypted.