
Microsoft Chose Profit over Security, Whistleblower Says

minisooftwin
32 replies
3h30m

“If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security,” the company’s CEO, Satya Nadella, told employees.

Satya's model of making security a priority at Microsoft:

- Cram ads into every nook and cranny of Windows. Left, right, centre, back, front, everywhere. What else is an operating system for?

- Install a recorder which records everything you do. For the benefit of users of course - you know, what if a user missed an ad and wants to go back and see what they missed.

- Send a mail to your employees and tell them "Do security". Mission accomplished - Microsoft is now the most secure platform.

VyseofArcadia
18 replies
3h19m

The Microsoft bribes scandal broke not too long after I had to take the "hey don't do bribes" training at Microsoft.

That event really drove home for me the fact that all of the trainings, emails, processes, etc. are mostly plausible deniability. There are people who care about security at MS. I know, I've met them, but for the most part all of this exists so that Satya can plausibly say in court or in front of congress, "well we told them to do security better. This is clearly the fault of product teams or individual contributors, not Microsoft policy and incentives."

montjoy
14 replies
2h49m

I dunno, that’s a pretty cynical take. Isn’t it just as plausible that they became aware of the bribes internally and were trying to curtail them when the scandal broke out? Or maybe the “don’t do bribes” training actually worked enough for someone to whistleblow even if official internal channels failed? Those who are doing wrong often try to stymie others from making positive changes out of fear, greed, etc.

Edit: I just want to add that there are things to be cynical about - I’m not completely naive. If it’s your legal department heading up the training then you can be pretty sure that there was a cause for it.

blowski
5 replies
2h31m

Yes, massive companies are a nest of conflicting priorities. The sales team wants to do whatever it takes to win the deal, and the legal team wants everyone to behave ethically at all times. The board wants to be shocked(!) when it turns out those goals are in conflict, with the ethical side sometimes losing out, to remove any personal risk to themselves.

mistrial9
4 replies
1h47m

legal team wants everyone to behave ethically at all times

do you really believe that? compliance under scrutiny, more like it

johnnyanmac
3 replies
1h23m

The best job is sitting around and doing nothing. So ideally yes.

But sure, ethically speaking when things get heated they will exploit every loophole they can find to avoid liability. So, lawful evil?

sophacles
1 replies
58m

The best job is sitting around and doing nothing.

That sounds like a terrible job.

johnnyanmac
0 replies
55m

Well you can take it as literally or figuratively as you wish. Depends on the person.

mmcdermott
0 replies
14m

Most corporate law guidance is about risk mitigation, not about ethics. Less activity generally translates to less risk.

You can see a similar phenomenon with security professionals. True, the only secure computer is one disconnected from the Internet, turned off, put in a Faraday cage, on the moon, under armed guard - but that's not useful.

creaghpatr
3 replies
1h59m

Probably neither; "don't do bribes" training is standard onboarding procedure at any Fortune 500 company. Just ironic timing from the OP's POV.

ein0p
1 replies
1h15m

Not just onboarding. Most, if not all, large companies waste at least an hour of their employees' time on this per year, while themselves bribing politicians in DC.

VyseofArcadia
0 replies
56m

It was, in fact, a story arc in an at the time recent-ish season of SBC[0].

[0] Microsoft's yearly training that is done in the form of a TV drama about MS employees facing ethical dilemmas

tialaramex
0 replies
42m

But this is exactly why it's standard procedure. I worked for a huge Credit Reference Agency and it was very obvious that this is ass covering.

Sarah and Bob in the New York office of Huge Corp must take the training so that the CEO can swear all his employees know not to bribe people. In the event that Manuel, who is given $100,000 per week of company money to bribe the locals in Melonistan so that they don't interfere with Huge Corp's operations, is actually brought before the government and forced to spill the beans, the CEO will insist they had no idea and some Huge Corp minion gets sacrificed. Manuel will be replaced, and Melonistan will be quietly assured that his replacement will provide make-up money ASAP.

In the arms business this is even worse, because there it's secretly government policy to bribe people, even though it's also illegal. So then sometimes, even if you can prove there was a crime, the government will say "We'll take that evidence, thank you very much" and poof, the crime disappears; if you make too much fuss you'll be made to disappear too.

lupusreal
1 replies
2h6m

That doesn't seem plausible, because you can't stop bribery by telling people that bribery is against the rules. Everybody already knows that.

If they became aware of bribery and genuinely wanted to stop it, the way is to publicly punish the culprits as harshly as they can, to demonstrate to others that enforcement of the rules can happen.

ein0p
0 replies
1h11m

Yes and no. You might not even realize that what you did constitutes giving or receiving a bribe. What cracks me up though is that all large US megacorps give tens of millions of dollars in thinly veiled bribes to officials each year, as they browbeat their employees into not accepting a god damn fruit basket from a thankful client.

giobox
0 replies
47m

Maybe. However such training is essentially considered mandatory compliance at any publicly traded company once you reach a certain size, especially if you sell to the government, and IMO probably not related to any specific event they became aware of.

I've had to do the same mandatory anti-bribery (of public officials) training annually at US companies a fraction the size of Microsoft. The anti-bribe training is so common at large companies in the US that there are companies selling ready-made, one-size-fits-all training videos specifically on this topic, which are then usually the thing the employee has to sit through annually.

In my experience, different cultures have different feelings on the moral failings of bribes. Some of my colleagues grew up in countries where it is a common business practice, it probably makes sense for large orgs with global employee base to have to establish some kind of baseline for acceptable business practices. Similarly, I know several people who came to study computer science in the US and tried to bribe police officers upon being pulled over for speeding, simply because it's how you handle the matter where they grew up.

ClumsyPilot
0 replies
25m

dunno, that’s a pretty cynical take

Just days ago a major US corporation was found guilty of hiring death squads in Colombia. Literally to murder people.

Why do we have this common illusion that corporations will not stoop to the dirtiest crimes they can get away with?

https://www.bbc.com/news/articles/c6pprpd3x96o

tptacek
0 replies
56m

Microsoft has for over two decades been one of the largest and most sophisticated employers of security talent in the industry, and for a run of about 8 years probably singlehandedly created the market for vulnerability research by contracting out to vulnerability research vendors.

Leadership at Microsoft is different today than when the process of Microsoft's security maturation took place, but I'll note that through that whole time nerd message boards relentlessly accused them of being performative and naive about security.

pjmlp
0 replies
2h43m

Yes, hence why I take all those company values trainings as Bull******.

doe_eyes
0 replies
1h40m

Eh. For the most part, the trainings can be taken at face value. Even if the management's dealings with governments and partners are questionable, no company wants random employees accepting personal kickbacks from vendors.

There's a liability avoidance component to trainings, but mostly for non-business misconduct. For example, for sexual harassment, the company will say they tried everything they could to explain to employees that this is not OK, and the perpetrator alone should be financially liable for what happened. That defense is a lot less useful in business dealings where the company benefits, though.

tombert
9 replies
2h20m

I have no broad evidence of this, but I suspect that the more beginner-friendly Linuxes are guilty of a lot of the sins that you laid out here. I seem to remember some controversy with Canonical recording your searches when hitting the super key, and Ubuntu having Amazon ads built in by default.

People who love to geek out about computers can of course install Arch or Gentoo or NixOS Minimal and then audit the packages that they're installing to see that there's no obvious security violations, but it's unrealistic to think that most non-software-engineer people are going to do that.

I really don't know how to fix this problem; there will always be an incentive for Microsoft (and every other company) to plaster as many ads as they think that can get away with, as well as collecting as much data as possible. I don't know that I would support regulation on this, but I don't know what else could be done.

lupusreal
2 replies
2h3m

Debian is a perfectly reasonable choice for casual Linux users. Ubuntu's supposed usability improvements over Debian are greatly exaggerated. It's mostly just marketing.

tombert
1 replies
1h56m

Fair enough. I haven't used Debian in quite a while (I think since 2009 or so?), so I can't speak to current stuff, but I do remember it being pretty hard to install then. I'm sure they have refined it considerably since then, and of course I am fifteen years more experienced now than I was.

Personally it's hard for me to go back after I accepted the dogma of NixOS, but maybe if I manage to talk my parents into using Linux I'll install Debian for them.

1oooqooq
0 replies
55m

Install Arch. Not even kidding.

Make a "shutdown" button on the desktop that locks everything and does a full upgrade.

Any issue is solved with "try tomorrow after a reboot." You'd be surprised how fast fixes arrive at rolling distros.

abrouwers
2 replies
1h58m

I mean, if you have no evidence of this, why even post such an (incorrect) conspiracy theory comment?

tombert
1 replies
1h54m

Well the Amazon ads in Ubuntu absolutely did happen, as well as the searches with the super key. [1]

I'll admit it's maybe a bit of an extrapolation to assume that they're as bad as Microsoft, which is why I disclosed that I didn't have a ton of evidence for this.

[1] https://www.gnu.org/philosophy/ubuntu-spyware.en.html I realize that GNU is sort of conspiratorial in its own right, but at least one entity seemed to agree that there's problems with it.

bregma
0 replies
8m

Well, here are the facts (I was an insider at the time, and this is my testimony).

Searches were anonymized and sent through Canonical servers to provide extended search result sets. This was configurable and could be disabled. Canonical of course had your IP address so they could reply, just like any and every HTTP server does. Your search query was not stored anywhere or aggregated, and it was not associated back to the originating IP address except to reply. Your privacy was respected and protected at all times.

The Amazon search did appear as a plugin in an early prerelease. It was never shipped in a released Ubuntu.

The goal was to make things as easy as possible, even for the technically averse (who were still commonplace a decade ago), while still respecting and protecting your privacy.

Of course, no matter what you do, someone is going to scream for everyone to come witness the oppression inherent in the system. We did it anyway with the expectation of baseless knee-jerk outcry and we were not disappointed.

abdullahkhalids
1 replies
1h4m

It's not surprising: when a Linux distribution was taken over by a capitalistic firm, it decided to forgo good values and instead prioritized profit over everything else.

I really don't know how to fix this problem

Stop using software made by companies that do bad things. Improve the software that doesn't.

nicce
0 replies
58m

Stop using software made by companies that do bad things. Improve the software that doesn't

Or stop buying their stock... but that is a difficult thing to embrace. As we know, these companies are very profitable.

nicce
0 replies
1h22m

I seem to remember some controversy with Canonical recording your searches when hitting the super key, and Ubuntu having Amazon ads built in by default.

It also went the other way around with Microsoft. If you deployed an Ubuntu VM in Azure, they contacted you on LinkedIn to offer commercial support.

Not joking: https://www.theregister.com/2021/02/11/microsoft_azure_ubunt...

akira2501
0 replies
18m

you know, what if a user missed an ad and wants to go back and see what they missed

I have meetings with adtech guys and this gets pitched every time. Along with "a way to save ads so you can watch them again at home later!" And "Alexa-enabled ads that you can talk to!"

_heimdall
0 replies
2h27m

To be fair to Satya, every leader should be judged on what they do, not what they say. This isn't a Microsoft or Satya problem; pick any large corporation and you'll find examples of this behavior everywhere.

Words in an email hold absolutely no weight. When leaders choose to trade security for something else, that's all employees need to know.

HumblyTossed
0 replies
1h50m

Say one thing, do another.

everdrive
27 replies
4h57m

I'm no defender of Microsoft, but I don't know if I could point to any company which does not put profit over security.

diggan
10 replies
4h53m

I guess the issue arises when they say security is the top priority (and has been for two decades), yet all their actions point towards it not being so.

Bill Gates in 2002: "So now, when we face a choice between adding features and resolving security issues, we need to choose security."

https://www.wired.com/2002/01/bill-gates-trustworthy-computi...

Satya Nadella in 2024: "If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security."

https://www.theverge.com/24148033/satya-nadella-microsoft-se...

sealeck
3 replies
3h50m

Turns out businesses have a stated preference for "nice things for the customer/society" but a revealed preference for money.

mook
1 replies
3h0m

Would that be securities fraud, because they're lying to investors?

(Going by Matt Levine's "everything is securities fraud" logic here to see if that might actually change behavior…)

thiagoharry
0 replies
2h21m

Investors are very happy with profit-over-security choices. Moreover, decisions that maximize short-term profitability are also not bad for them if they expect to sell their shares before the consequences hit. A company that does not place profit above other things is not a good company in which to invest money and watch it grow. A company will invest in security only as long as it increases profitability; doing otherwise means not maximizing profits and losing investors. If you are a "security company", that just means you need security to sell the product and stay profitable. Other companies will have other tradeoffs in choosing how much to invest in security to maximize profitability.

nox101
0 replies
3h20m

Then the laws need to change so that bad security costs companies money.

ziddoap
1 replies
4h14m

Profit is an implicitly assumed first priority for basically every business, otherwise the business wouldn't be around.

I don't know of any company that has profit in their slogan, or in the core values statement, etc.

formerly_proven
0 replies
3h34m

I don’t put “breathe” at the top of my TODO list, either.

ls65536
1 replies
4h35m

Obviously, nobody is going to outright admit they put profits above security; indeed, they will often state the opposite. But their closely-held beliefs will shine through when it comes time to make decisions and the outcomes of those decisions are exposed to their customers and to the public.

lesuorac
0 replies
3h47m

Does Bill or Satya write code anymore? It could very well be that they consider security the top priority but it's a moot point because they're so removed from operations.

Although I suspect you're effectively right: they either don't have it as a top priority, or think they do but have a revealed preference that they don't. For example, an engineer who does rigorous security testing and finds nothing, and launches one project, gets promoted less often than an engineer who launches two projects and doesn't do rigorous security testing.

outside1234
0 replies
3h46m

Unless you care about your review and promotion, in which case do features.

_heimdall
0 replies
2h21m

Related to the GPs point, do you know of any company that publicly admits that they chose profit above all else?

aaomidi
3 replies
4h5m

Let's Encrypt

Google Trust Services

Disclaimer: I've worked in both of these :)

bdcravens
1 replies
3h44m

What products do those two companies sell?

1oooqooq
0 replies
34m

They sell market protection. To Google.

It makes crawlers much more expensive, makes everyone depend on their CDNs, etc.

meandmycode
0 replies
3h42m

Any company of sufficient size will fail to incentivise the things they claim at the top. Unfortunately the impacts of decisions (especially during austerity) are poorly understood, so even the best-intentioned will fail once they reach a certain size.

isodev
2 replies
3h32m

Isn’t there a point when a company becomes so big and so impactful to multiple layers of our life, that it should be impossible for them to continue focusing on profit alone?

I’m not talking about regulation per se, but holding humans in charge of such corps more accountable.

ossobuco
0 replies
3h10m

I don't think it's going to happen unless we decide to nationalize private services that are vital to people.

Why don't we have a public maps system, or a content sharing platform? Services like google maps/search or youtube by now are part of the infrastructure of our society.

Just as roads, railways, and energy production are publicly owned in many countries, the same should happen for digital services. In much of Europe railways are publicly built and maintained while the trains are privately owned.

1oooqooq
0 replies
38m

today that means "too big to fail". in wall st it's called "jackpot"

drpossum
1 replies
4h39m

I genuinely think Proton as a company would prefer to cease to exist rather than offer insecure products. In fact there's a lot of offerings I would use (and pay more for) and they could make but choose not to (like a calendar that is not over an airtight protocol and could integrate with my regular calendar clients).

1oooqooq
0 replies
39m

Counterpoint: NordVPN.

From day one everyone knew they were FSB puppets, and people are still giving them money.

sqeaky
0 replies
3h35m

If no company can make security the priority then maybe no company can be trusted with OS development.

noqc
0 replies
3h27m

Microsoft possesses, to put it lightly, a number of government contracts. I think this puts them in a bit of a pickle.

micromacrofoot
0 replies
2h24m

In other words, when faced with an existential threat...

* go bankrupt because we can't be secure

* be less secure and stay in business

...guess which one will almost always win.

Microsoft of course, as a multi-trillion-dollar company has no such threat and there's no reasonable excuse for this.

ls65536
0 replies
4h43m

I'm sure there are some companies that realise security (or rather the critical lack of some important aspect of it) can impact profits, but that depends a lot on who their customers are too. Ultimately, if the customers who pay for a vendor's products and services don't value it, then the vendors won't value it either, short of any regulatory or legal requirements that might compel them otherwise. However, given that many large organizations (including governments) are Microsoft customers, it's strange to see in this case. Maybe there's a kind of "it can't happen to us" or "nobody will find out about it" arrogance going on, but they must now be seeing that the reputational damage is likely to have negative impacts, including hurting future profits, down the road.

haliskerbas
0 replies
3h51m

Agreed it’s deliver value for shareholders >>>>>>>>> everything else

dfxm12
0 replies
3h51m

This isn't about Microsoft, per se. This is about the fact that there's no risk for companies who do, even if they're bidding for government work. Hopefully whistleblowers making these things public will lead to the public putting pressure on their elected officials to actually make some regulations with teeth in this area. I'm not holding my breath, but it is something I consider in the voting booth.

LightHugger
0 replies
4h6m

They are rare, but Mullvad comes to mind immediately. They have made several decisions that directly impacted their bottom line (no recurring subscriptions where they need to keep the customer's credit card on file) to the benefit of their customer's security.

1vuio0pswjnm7
15 replies
6h50m

Sounds like the same Microsoft culture as has always been. Like a cult. It can do no wrong. The conversation with Microsoft businesspeople at conferences was always the same: Microsoft has no deficiencies, there is nothing it isn't working on and it has a solution for every possible problem. Other sources of software do not exist. There is only Microsoft. Total illusion put forth by delusional employees. The outside world can be ignored because life in the cult is good.

freedomben
4 replies
4h58m

This comment sounds hyperbolic, but it really isn't. It's really bad. This has been my experience with Microsoft employees also.

In my experience, what makes for bad software is PM and engineering hubris. You definitely need some vision and confidence as just following user feedback is a recipe for terrible software as well. The key is to find the right balance and straddle that line.

If it's been long enough for insiders to tell the story of Windows Phone and the eventual cancellation, I'd be fascinated to hear the story of that (from inception to death) and how that went internally given the culture.

tracker1
2 replies
3h43m

Just wanted to say that I thought the Windows Phone (the last version of such) was relatively nice. It had a decent developer experience, but it was pretty much an also-ran and didn't have enough market share to overcome the mindshare of first-party apps. When so many first-party apps were iOS-first and Android-later, throwing a third option into the mix just missed the mark more often than not.

I was already in the Android ecosystem and far less cynical at that point about Google.

Ylpertnodi
1 replies
3h14m

I didn't get a Windows phone because i don't trust Microsoft. A friend had one and it was really ok, but no way for me.

tracker1
0 replies
3h2m

In retrospect, I don't trust Apple or Google either...

fingerlocks
0 replies
1h54m

Can confirm, it is 100% hubris based on my limited time of working at Microsoft.

There is pervasive NIH syndrome, re-inventing the wheel, and massive amounts of over engineering and unnecessary abstraction caused by chasing the endless "But what if...?" dragon.

This behavior is justified, and critics are silenced, by the "But we're an enterprise company!" cop-out

Drakim
3 replies
5h7m

Don't worry, Microsoft has big ambitions and huge plans about how to truly present themselves as more safety oriented in the future.

NekkoDroid
2 replies
4h59m

AI Safety. Code is run through AI to check for vulnerabilities. Files are analyzed by AI to ensure they aren't malware. Every instruction is run through AI to ensure nothing malicious is happening (mostly enforcing DRM :). Every pixel is output by AI to ensure you see nothing not intended for your precious eyes.

freedomben
1 replies
4h50m

You joke, but (the DRM part at least) is the future I fear is coming. It could hit us from so many angles (not forgetting Chrome's Web Environment Integrity and Apple's Private Access Tokens), and with all the money and power behind it (big tech plus big copyright), and the complete apathy of the average user towards this, it seems inevitable.

NekkoDroid
0 replies
4h34m

The DRM part wasn't really part of the joke, just a sad truth that is being worked on more and more that sadly fit into the joke.

eganist
2 replies
5h3m

If memory over the last two decades serves, this is a relatively recent degradation.

Microsoft's security reputation prior to the recent (5ish years?) failures was largely built up on top of the work stemming from the Trustworthy Computing memo.

https://www.wired.com/2002/01/bill-gates-trustworthy-computi...

diggan
1 replies
4h58m

Bill Gates in 2002: "So now, when we face a choice between adding features and resolving security issues, we need to choose security."

Satya Nadella in 2024: "If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security."

Microsoft in 2024: Run this software on your computer so we can take a screenshot of everything you do and index it, and we promise security is still, and has always been, the priority. And yes, we do store the data unencrypted on your disk, why are you asking?

NekkoDroid
0 replies
4h55m

And yes, we do store data unencrypted on your disk, why are you asking?

But don't worry, you need to be an administrator to open the file. What? your average person daily drives an administrator account? How should we have known that???

surfingdino
0 replies
3h45m

I'll give you a 4-letter word... Zune /s

magicalhippo
0 replies
2h26m

What Microsoft can provide are lots of nice stickers saying they conform to this or that security standard, making security folks in IT departments all warm and fuzzy.

At least that's how it appears from our POV, selling B2B applications. They don't seem to care that much about actualities as long as the security checklist passes.

diggan
0 replies
4h55m

To be honest, that sounds like every company that suffers from delusions of grandeur and wants to conquer the planet, one way or another.

What you're saying is equally true for Apple, Google, Amazon and most other public companies today. You're never gonna get "Use this Microsoft product" as an answer from Apple support/engineer even if that product would solve your particular problem better.

ThinkBeat
12 replies
3h46m

This whole article seems a bit odd to me. What is "the product"?

Presumably this is not related to earlier problems with SolarWinds.

Did MS screw up. Yes.

However, all things have bugs.

I takes one person finding one bug and exploiting it. and there are enormous resources going into finding one, and I am certain that this is the only one.

I am sure the NSA is sitting on a pile of them.

Whereas the developers have to think about everything that can happen and protect against it.

Does this make Microsoft different from its competitors?

I think Microsoft's strategy is somewhat similar to Linus's:

Security patches are often not part of new releases, due to the burden of establishing what the consequences of bigger changes would be, and the fact that security people don't do sane things.

(But you can of course pull them and make them part of an in-house distro.)

https://lkml.iu.edu/hypermail/linux/kernel/1711.2/01357.html

amaccuish
3 replies
2h20m

Because as far as I can tell, there was no "vulnerability" here, it's just how the product works. Stealing an OAuth key is just as bad. Stealing a domain's krbtgt key is just as bad.

Businesses want that when they log in to a computer, they are SSO'ed into all their apps. That's how AD FS works: you authenticate to it using Kerberos and it issues you a SAML token. Here they apparently stole the key used to sign the SAML token so they could generate their own.

Unless there was some vulnerability that exposed the key publicly, I fail to see how this particular incident is Microsoft's fault.
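
To make the mechanics concrete, here is a minimal sketch of why the token-signing key is the crown jewel: whoever holds it can mint an assertion for any user, and the relying party cannot tell a forgery from the real thing. (This is a simplification, not how AD FS is implemented: real AD FS signs XML assertions with RSA per XML-DSig; here an HMAC and JSON stand in for that, and all key and account names are invented.)

```python
import base64, hashlib, hmac, json, time

def sign_assertion(signing_key: bytes, subject: str, lifetime_s: int = 3600) -> str:
    """Mint a SAML-style assertion: a signed claim that `subject` is authenticated.
    The IdP's signing key is the only thing that makes the claim trustworthy."""
    body = base64.urlsafe_b64encode(
        json.dumps({"sub": subject, "exp": int(time.time()) + lifetime_s}).encode())
    sig = base64.urlsafe_b64encode(hmac.new(signing_key, body, hashlib.sha256).digest())
    return (body + b"." + sig).decode()

def verify_assertion(signing_key: bytes, token: str) -> dict:
    """What the relying party (the cloud service) does: check the signature only.
    It has no way to know whether the IdP actually authenticated the user."""
    body, sig = token.encode().rsplit(b".", 1)
    expected = base64.urlsafe_b64encode(hmac.new(signing_key, body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))

# An attacker who exfiltrates the key can impersonate *any* account, MFA included:
stolen_key = b"exfiltrated-adfs-token-signing-key"
forged = sign_assertion(stolen_key, "global-admin@victim.example")
print(verify_assertion(stolen_key, forged)["sub"])  # global-admin@victim.example
```

Which is the point made above: this isn't a bug in the verification logic. The protocol works exactly as designed; the design just concentrates all trust in one key.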

Thorrez
1 replies
1h22m

Stealing an OAuth key is just as bad

What is an "OAuth key"? Do you mean an OAuth token? No, Golden SAML is worse than stealing an OAuth token, because an OAuth token is valid for 1 user, but Golden SAML can be used to impersonate any user. Also, OAuth tokens expire, but Golden SAML doesn't expire (although if you steal an OAuth refresh token, that won't expire).

I fail to see how in this particular incident its Microsoft's fault.

Andrew Harris wanted to warn customers about the weakness, and tell them they can prevent the weakness by disabling seamless SSO. Other Microsoft people said no, that would alert hackers to the attack, we want to keep the attack secret, and it also would jeopardize our contracts by making the default setting sound insecure. Then Golden SAML was published publicly, so that first reason was no longer valid, but Microsoft still wouldn't tell customers they could prevent the attack by disabling seamless SSO. Then Solarwinds happened, and Microsoft finally advised customers to disable seamless SSO.

what-the-grump
0 replies
39m

I think there is too much confusion in the details of the actual attack.

You have to steal the private key for the SAML signing certificate for an app. The correct answer would be to scope any token to only have access to what the app has access to; the second layer, which is documented in their 2020 article, is to require MFA on admin actions; and the third layer is to disconnect Azure admin accounts from on-prem admin accounts, preventing this type of attack.

But disabling SSO altogether is a non-starter for most businesses. What are we going to do tomorrow, spend months recreating 100,000 accounts in various applications? No.

We decrypt SSL traffic in our company. If someone steals the private key, they can now read the entire stream, including your bank account details. Should we stop decrypting SSL traffic because someone might leak the key? The answer from the infosec community has been that it's worth the risk.
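
The token-scoping layer mentioned above can be sketched as a relying-party check (all application names and claim fields here are hypothetical, for illustration only): even a validly signed token is rejected unless its audience and scopes match what this particular app is allowed to accept.

```python
# Hypothetical relying-party authorization check: a layered defense in which
# a validly *signed* token is still refused unless its audience and scopes
# match what this particular application is allowed to accept.
ALLOWED_AUDIENCE = "https://mail.victim.example"  # invented app identifier
ALLOWED_SCOPES = {"mail.read"}                    # the most this app may grant

def authorize(claims: dict) -> bool:
    """Signature validity alone is not enough; check audience and scopes too."""
    if claims.get("aud") != ALLOWED_AUDIENCE:
        return False  # token was minted for a different application
    if not set(claims.get("scp", [])).issubset(ALLOWED_SCOPES):
        return False  # token asks for more than this app is allowed to grant
    return True

# A forged token scoped to another app, or over-scoped, is refused even if
# its signature checks out:
print(authorize({"aud": ALLOWED_AUDIENCE, "scp": ["mail.read"]}))                # True
print(authorize({"aud": ALLOWED_AUDIENCE, "scp": ["directory.write"]}))          # False
print(authorize({"aud": "https://graph.victim.example", "scp": ["mail.read"]}))  # False
```

With scoping like this, a stolen signing key compromises one app's resources rather than the whole tenant, which is the point of the layered mitigations described above.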

temac
0 replies
1h10m

This ignores defense in depth, weaknesses, and security architecture. When you ignore those, you cannot pretend, as MS did, that you have a good enough security stance. Fixing discovered vulns alone is merely table stakes; it gets you maybe half a point out of ten. The rest, or at least the five points you need before you can claim you care about security, requires more than fixing known vulns or waiting for a world-scale incident to "respond" to. You have to prevent issues.

drewda
2 replies
3h19m

You might want to read the actual article.

My understanding is that it was a two-part exploit:

1) The Solarwinds product was hacked to allow backdoor access to organizations' on-prem networks.

2) The hackers then took advantage of the "Golden SAML" vulnerability in Microsoft's Active Directory Federation Service (AD FS) to leapfrog via "seamless SSO" from the on-prem network into the organization's cloud resources hosted by Microsoft.

The article is all about how various Microsoft leaders and staff did not fix #2, because many said it would never be an actual issue exposed to the world.

This is extra damning because Microsoft is selling components at the core of both governments' on-prem and cloud systems, so if they don't take security extra seriously, their systems can present passive vulnerabilities.

tbrownaw
1 replies
3h9m

You might want to read the actual article.

ProPublica articles in general are structured in a way that makes them a PITA to extract actual useful information from.

BeetleB
0 replies
2h42m

It's in the article's headline.

And at the risk of annoying everyone, a GPT summary:

This article investigates how Microsoft, in pursuit of profit and market dominance, overlooked significant security vulnerabilities that left the U.S. government and other entities exposed to cyberattacks by Russian hackers. The whistleblower, Andrew Harris, a former Microsoft cybersecurity specialist, discovered a serious flaw in a Microsoft application used for cloud-based program access. Despite Harris's persistent warnings over several years, Microsoft delayed addressing the flaw, prioritizing business interests, particularly securing a lucrative deal with the federal government for cloud computing services.

The security loophole was within Active Directory Federation Services (AD FS), which if exploited, would allow attackers to impersonate legitimate users and access sensitive data without detection. Microsoft's decision to deprioritize this issue, despite internal and external warnings, eventually led to the significant SolarWinds cyberattack, affecting numerous federal agencies and demonstrating the consequences of the security oversight.

Microsoft's response to these accusations has been to emphasize its commitment to security, stating that they take all security issues seriously and review them thoroughly. However, ProPublica’s investigation reveals a culture within Microsoft that sometimes places business growth and competitiveness over immediate security concerns, reflecting broader issues within the tech industry related to balancing profit-making with customer security.

The article sheds light on internal conflicts, the company's handling of security vulnerabilities, and the broader implications of such practices for national security and customer trust. It also highlights the challenges faced by whistleblowers and cybersecurity professionals in advocating for swift action on security issues within large corporations driven by profit motives and competitive pressures.

shkkmo
0 replies
3h22m

Microsoft had a known, high-consequence security flaw that they did not acknowledge or fix; they had evidence indicating it had already been exploited, and they knew they had limited to no ability to monitor for exploitation. This choice led directly to the SolarWinds hack, which happened in 2019, was discovered in late 2020, and was acknowledged by the USG in early 2021.

Many companies make bad choices around security for profit; however, the factors I listed above make this extremely egregious.

I would seriously question any use of Microsoft products in any security-conscious organization after this reveal. I also hope that anyone negatively affected by SolarWinds sues Microsoft for knowing about the vulnerability for years without fixing or disclosing it.

psychoslave
0 replies
3h6m

What is "the product" ?

Human attention sink where you can throw ads and other propaganda, what else?

pgraf
0 replies
3h29m

It is true that nothing is 100% secure. Sitting on a major security vulnerability internally with a motivated employee pushing to fix it and doing nothing for business reasons is not negligence, but malice. People in the chain of command need to be held accountable for this.

latexr
0 replies
3h27m

However, all things have bugs.

There are bugs and there are critical flaws you’ve been warned about. This is the latter.

The fact that this was known by Microsoft but not fixed is the story.

SuchAnonMuchWow
0 replies
3h24m

Harris said he pleaded with the company for several years to address the flaw in the product, a ProPublica investigation has found. But at every turn, Microsoft dismissed his warnings, telling him they would work on a long-term alternative — leaving cloud services around the globe vulnerable to attack in the meantime.

That is not a screw-up, that is a deliberate decision.

spydum
11 replies
3h23m

As per usual, executive platitudes around "security first" don't matter.

If you pay and promote people for features, and don't reward security culture, people are not dumb: they and the management layers will optimize for that.

I don't know how to design incentives to solve for this, but this is always going to be the way it is.

olivierduval
8 replies
3h10m

I think that it could be "security as a feature"

Usually, a feature is included in a product if marketing shows that it will grow the business by more than the cost of the feature. Maybe we can try the same idea?

"We identified this vulnerability. It will impact X% of our customers and Y% will leave (plus reputation damage), so we will lose BIGNUMBER $. However, we can fix it for SMALLNUMBER $ in Z days. Decision?"

nicce
2 replies
2h28m

Real security cannot be a feature.

Your complete system design and other features should be based on the idea of ”security first”, if you really want to build secure systems.

hulitu
1 replies
2h19m

Your complete system design and other features should be based on the idea of ”security first”, if you really want to build secure systems.

One can argue that the most secure system is the one turned off and not used. And i am not talking about devices with builtin batteries.

nicce
0 replies
1h31m

One can always argue that, but fundamentally security is about limiting the system's use to its purpose and eliminating all unwanted scenarios.

If you need to use the system, you cannot turn it off or stop using it.

Sohcahtoa82
2 replies
2h3m

Security shouldn't be seen as a feature, it should be the default.

Advertising something as "secure" SHOULD be seen as silly as advertising it as "doesn't crash". But we're not ready for that, I guess.

makeitdouble
0 replies
1h9m

It's absolutely hard, but you need to advertise and promote security for it to stay relevant, internally and externally. The moment it becomes the "default" I think the only way is downward.

The marketing dept should do something for that, that's their job. If Apple can tout privacy as a feature, Microsoft can find a way to have security as a shiny feature on their keynote, with internal projects rewarded for increasing security by x% etc.

johnnyanmac
0 replies
1h15m

With the increasing number of breaches over the years, it is 100% a feature. I see it as insurance: ideally nothing happens, but if/when something happens the company should be ready to compensate for damages.

mewpmewp2
0 replies
2h16m

And where do you take those numbers from?

Also identification is one thing, but good security should mean the vulnerability didn't occur in the first place.

Then you also need to get budget for identifying vulnerabilities.

After that you need budget to research how costly the vulnerability could be.

But before getting those budgets you need budget again to propose all of that and data to prove its value.

Unless you use your own time to do all of that or accidentally stumble upon something.

I think the only realistic way to get any sort of budget is if a deep enough incident actually happens. And this will only last maybe for a year until most of the decisionmakers have been rotated with new ones wanting to only deliver again.

imglorp
0 replies
39m

They did that in FTA:

In the months and years following the SolarWinds attack, Microsoft took a number of actions to mitigate the SAML risk. One of them was a way to efficiently detect fallout from such a hack. The advancement, however, was available only as part of a paid add-on product known as Sentinel.

So you sell me a submarine with screen doors, avoid fixing it for years, cripple internal processes that would fix it, and then you want to charge me for a water alarm? That's chutzpah.

tjpnz
0 replies
1h29m

Managers are already held accountable for their teams when they underperform. The same should also apply for their security blunders.

jrm4
0 replies
2h52m

I do.

It's law, regulation and liability.

Until heads roll, until someone is punished, likely nothing will happen.

pgraf
8 replies
3h19m

Imagine a major bridge built by a contractor. An internal safety inspector repeatedly warned his supervisors of structural deficiencies that could lead to the collapse of the bridge. Furthermore, over time, two external sources publicly warned about the issue, but the company downplayed its importance. Finally, the bridge collapses. It becomes evident that the company did nothing about the issue because it didn't want to lose contracts selling more flawed bridges. The public would justifiably go nuts, and there would be legal consequences for everyone involved.

What is different in our industry that companies (and managers) get away with such malice?

magicalhippo
2 replies
2h31m

Here in Norway a bridge built with known structural deficiencies did in fact collapse[1], and basically nothing has happened except tax payers get to pay even more for a new bridge.

Unless enough lives are lost, people generally don't care that much it seems.

[1]: https://www.nrk.no/innlandet/statens-vegvesen-legg-fram-rapp...

nicce
0 replies
2h24m

basically nothing has happened

Maybe they proudly stated knowing the risks, and while unfortunate, risks became reality. And then everything is fine.

_heimdall
0 replies
2h23m

I'm not sure if this would line up with the Dunbar number or something similar, but it sure seems reasonable that societies and centralized power should never grow beyond the scale where people stop caring.

If the public is expected to keep government and corporations in check but the public doesn't care, it can only end poorly.

natsucks
1 replies
2h37m

I don't understand how this doesn't destroy a company. They willfully ignored a serious risk, and it had major national security implications.

dfedbeef
0 replies
57m

Have you tried to use Google customer support

red_admiral
0 replies
3h12m

Wasn't there something a bit like that with the Morandi bridge that collapsed in Italy?

(There was definitely something like that with the Mottarone cable car, which had been running for years with the safety catch disabled. When the tow rope snapped, with no catch, the cabin rushed down and killed everyone on board.)

johnnyanmac
0 replies
1h10m

Boeing in a nutshell.

What is different in our industry that companies (and managers) get away with such malice?

Software isn't immediately life-threatening. That's why it's all the wild west outside of medical and aerospace. While it sucks to have PII leaked to the internet, you do have time to at least take action, compared to a door on an airplane coming off.

delfinom
0 replies
2h23m

What is different in our industry that companies (and managers) get away with such malice?

Lack of professional licensure that binds you to state regulation with jail time as one of the stated punishments besides financial liability.

Heh, the government could start effecting change by mandating licensure and sign-offs by licensed individuals when contracting for software products sold to the government.

JohnMakin
8 replies
2h36m

The misaligned incentives between security and profit, especially in public companies, is not really a fixable problem without a massive cultural shift. I'm not sure at this point what could even trigger one.

I've always dabbled in cybersecurity, taking on the hat in various roles over the years but have refused to go full time into it due to what I have personally seen in the industry - an overwhelming focus on compliance rather than actual good security practices, and the compliance standards are either very lacking or poorly enforced.

graemep
2 replies
2h24m

This is exactly it. There is no incentive to prioritise security. It is not visible to customers, except in terms of compliance, most likely a check-list approach.

I think it needs a massive cultural shift, but from customers. If customers were willing to evaluate security (consumers cannot, but enterprise can) properly, demand binding assurances, and make buying choices accordingly industry would respond.

Of course MS is too strongly entrenched in the desktop market for this to be completely effective.

wyldberry
0 replies
1h27m

When I first left offensive security consulting and joined an internal defensive team, a wise ex-agency person said to me "In product development, the first things to often get axed are security, and performance. They are invisible to the user, until they aren't, and rarely do failures in those areas end a company."

Granted this was prior to ransomware really blowing up, but even that itself is a different threat model that doesn't mean your product has to be good at security.

hulitu
0 replies
2h21m

If customers were willing to evaluate security (consumers cannot, but enterprise can)

Where i work, IT is outsourced and decision to buy most of the SW is made by managers who have no idea about computers.

pimlottc
1 replies
22m

In the profit-center view, everything is either a cost center or a profit center. And it is nearly impossible to get anyone to truly care about a "cost center".

nicce
0 replies
17m

What if the company provides only cybersecurity-related services? Could it be that, in that case, everything is on the profit side?

mihaaly
0 replies
23m

I may be off, but to me, as an affected outsider (user), the continuing insistence on using passwords after decades (yes, several decades) of problems and proven vulnerability, then "mitigating" by putting a second line of "defense" on the very fragile and non-transparent smartphone infrastructure instead of doing real reforms, is a sign of not giving a faint fack.

Sohcahtoa82
0 replies
2h9m

an overwhelming focus on compliance rather than actual good security practices

I'm an application security engineer. I find that it depends widely on the company. You're right that compliance is purely a checklist and doesn't actually do much for security. At best, it slows down a determined internal attacker; e.g., a developer can't install a back door, since code reviews are enforced by SCM before merging is allowed. But all the ISO 27001 and SOC 2 audits in the world won't prevent trivial attacks like SQL injection.

So the actual security depends on how much buy-in the AppSec team can get from project management. I've had companies where I point out an obviously exploitable flaw that can easily cause DoS, and with some determination could yield RCE, and I get radio silence. At others, I point out a flaw where I say "It's incredibly unlikely to be exploitable, and attempts to exploit would require millions of requests that would raise alarms, but if someone is determined enough..." and project management immediately assigned the ticket, and it was fixed within a week.

I can tell you one thing that's not doing any favors: overly zealous penetration testers who feel like they need to report SOMETHING, so they invent something that's not an issue. For example, in one app I worked on, after logging in, the browser would make an API call to get information about the current user, including their role. The pentester used Burp Suite to alter the response, changing the role to "admin", and sure enough, the web page would show the user role as "admin", so the pentester reported this as a privilege escalation. They clearly didn't go on to the next step of trying to do something as admin, though, because if they had, they'd have seen the backend still enforces proper RBAC. Changing that role to "admin" essentially just made all the disabled buttons/functionality in the web app light up, but trying to do anything would throw 403 Forbidden.
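The pentest anecdote above is worth spelling out, because it trips up a lot of people. A minimal sketch (hypothetical endpoints and roles, not the app in question) of why tampering with the client-visible role isn't privilege escalation when authorization is enforced server-side:

```python
# The server's source of truth for roles; the client never gets a vote.
ROLE_DB = {"alice": "user", "bob": "admin"}

def whoami(username: str) -> dict:
    """The informational call the pentester tampered with; its only
    job is to drive what the UI renders."""
    return {"user": username, "role": ROLE_DB.get(username, "none")}

def delete_account(requesting_user: str, target: str) -> int:
    """Admin-only action. Re-checks the role on every request,
    ignoring whatever role the client claimed to have."""
    if ROLE_DB.get(requesting_user) != "admin":
        return 403  # Forbidden, no matter what the UI showed
    return 200

# Burp-style tampering changes only the client's copy of the response:
profile = whoami("alice")
profile["role"] = "admin"              # admin buttons now light up...
print(delete_account("alice", "bob"))  # 403: ...but the backend says no
print(delete_account("bob", "alice"))  # 200: a real admin succeeds
```

A finding is only privilege escalation if a state-changing request actually succeeds with the forged role, not if the UI merely renders differently.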

But I digress...

The misaligned incentives between security and profit, especially in public companies, is not really a fixable problem without a massive cultural shift.

The EU seems to have figured it out, but the USA is a hypercapitalist hell-hole. It's such a shame that the population is mostly convinced that any regulation is bad and an attack on freedom. I roll my eyes at the Libertarians that claim that the Free Market(tm) will punish bad actors while the worst actors are rising to the top. Bad acting is profitable.

1oooqooq
0 replies
51m

it was like that in the 90s too.

until people like Cult of the Dead Cow started to both sell the solutions and give out the tools to exploit everyone not implementing them.

today, things like the DMCA actually protect the maliciously incompetent, and businesses that don't take advantage of that are fools.

execveat
7 replies
3h5m

I work in infosec, and this sounds like a communication failure on the whistleblower's part.

Contrary to what many people believe, profits should be prioritized over security for most companies; that's only natural (after all, security measures typically don't generate any profits themselves). The key is finding the right balance for this tradeoff.

Business leaders are the ones that are responsible for figuring out the acceptable risk level. They already deal with that every day, so it's nonsensical to claim they aren't capable of understanding risk. InfoSec's role for the most part is being a good translator, by identifying the technical issues (vulnerabilities, threats, missing best practices) that go beyond the acceptable risk profile and to present these findings to the business stakeholders, using the language they understand.

Either the guy wasn't convincing enough, or he failed to figure out the things business cares about & present the identified risk in these terms.

jmuguy
2 replies
2h30m

This is framing the story as a simple interaction (or interactions) between Harris and business leaders at Microsoft. It wasn't. Microsoft has a team responsible for translating between security researchers like Harris and its product teams/leadership. That team dismissed Harris because its priority was to ignore or downplay issues brought to it. Harris went around them and was still ignored. It seems like he tried everything short of calling the press directly to get someone to pay attention. Even after the issue was made public by other security researchers, MS did nothing.

What happened here was a systematic failure on MS' part to address a fundamental flaw in one of the most critical pieces of security infrastructure at the entire company.

Companies like MS (and everyone else, it seems) need to get out of this Jack Welch mindset that the only thing that matters is the shareholders. MS acts as the gatekeeper for the most valuable organizations and governments on the planet. Their profits have to take a backseat to this type of thing, or they shouldn't be allowed to sell their products to critical organizations and governments.

execveat
1 replies
2h20m

I might be misunderstanding, but from Andrew's Linkedin it looks like he wasn't a security researcher at MS, he was actually the person responsible for translating between security researchers and the upper management:

> Evangelize security services, practices, products, both internally and externally.

> Leading technical conversations around strategy, policy and processes with FINSEC and DoD/IC executive staff.

Thorrez
0 replies
1h10m

he was actually the person responsible for translating between security researchers and the upper management:

According to the article, the group in charge of taking input from security researchers and deciding which vulnerabilities need to be addressed was Microsoft Security Response Center (MSRC), and Andrew Harris wasn't a member of it.

cplat
1 replies
2h43m

During my Master's, security was one of the subjects I took. It started with an equation relating risk (how much you'd lose if something bad happened), the probability of that risk, and the cost of mitigating it. The instruction being: one tries to find a mitigation that costs less than the exploitation of the risk. And note here that "cost" does not refer just to money; it could be computational cost, energy consumed, etc.
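The equation described above is the textbook annualized-loss-expectancy calculation. A minimal sketch, with made-up numbers purely for illustration:

```python
# ALE (annualized loss expectancy) = ARO (annual rate of occurrence)
# x SLE (single loss expectancy). All figures below are hypothetical.

def ale(aro: float, sle: float) -> float:
    """Expected yearly loss from a risk."""
    return aro * sle

def worth_mitigating(aro: float, sle: float, mitigation_cost: float,
                     residual_aro: float = 0.0) -> bool:
    """Mitigate when the reduction in expected loss exceeds the
    yearly cost of the mitigation."""
    saved = ale(aro, sle) - ale(residual_aro, sle)
    return saved > mitigation_cost

# A breach costing $2M, expected once every 10 years (ARO = 0.1):
print(ale(0.1, 2_000_000))                        # 200000.0 per year
print(worth_mitigating(0.1, 2_000_000, 50_000))   # True: fix for $50k/yr
print(worth_mitigating(0.1, 2_000_000, 500_000))  # False on these numbers
```

As the replies note, this 1:1 framing breaks down against well-funded attackers, whose willingness to spend isn't bounded by the defender's budget.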

execveat
0 replies
2h26m

For MS-sized entities, the risk calculation is way more complicated. The 1:1 between cost of mitigation and cost of exploitation only applies to opportunistic attacks, really. At the level where APTs get involved, the data/access might be so valuable that they'd gladly outspend the blue team's budget by a factor of 10-100.

civilized
1 replies
2h46m

Why not go even further? Why not say that the whistleblower was wrong and Microsoft business leadership was right? Maybe their profits from ignoring this issue have been fantastic, and the externalities from e.g. mass theft of national security secrets are not Microsoft's problem.

execveat
0 replies
2h37m

Well, because as a security person I can only evaluate his actions from the point of security. Evaluating actions of MS business leadership is beyond my expertise.

I highly doubt that senior leadership would willingly accept this kind of liability. But you need to put it into the right terms for them to understand. Politics plays an important role at that level as well. There are ways of putting additional pressure on the C-suite, such as making sure certain keywords are used in writing, triggering input from legal, or forcing stakeholders to formally sign off on a presented risk.

Without inside knowledge, it's impossible to figure out what went wrong here, so I'm not assigning blame to the whistleblower, just commenting that far too often techies fail to communicate risks effectively.

xyst
1 replies
3h15m

“If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security,”

corporate morality is a Potemkin village. It's all about the profit and appeasing the shareholder, baby!

is anybody honestly surprised at this point? The abbreviation of "M$" is well deserved despite small OSS contributions and attempts to PR their way out of previous history (ie, United States v. Microsoft Corp. [2001])

MattSteelblade
1 replies
41m

So... Golden SAML isn't a vulnerability; as the CyberArk article quoted in the post reiterates, it's a type of attack that requires completely compromising the box before use. Unless I am misunderstanding something, I don't see any particular flaw, per se. As Microsoft (mocked in the article) would say, it's not crossing a security boundary. SSO will ALWAYS have this particular tradeoff: if your SSO infrastructure is compromised, everything that uses it is at risk of being compromised.

spdgg
0 replies
15m

Sounds like the vulnerability was one within AD FS that exposed the private key, making Golden SAML possible.

ChrisMarshallNY
1 replies
2h4m

I think that when companies sell to the government, there is so much money to be made, and such a huge PR boost, that they are incentivized to cover up the naughty bits (a certain airframe manufacturer, comes to mind).

It can mean anything from concealing slightly embarrassing stuff, to massive, systemic, deliberate, fraud; sometimes, the whole spectrum, over time.

It often seems to encourage a basic corrosion of Integrity and Ethics, at a fundamental cultural level.

When leaders say "Make Security|Quality a priority," but don't actually incentivize it, they set the stage.

For example, routinely (as in what is done every day) rewarding or punishing, based on monetary targets, vs. punishing one or two low-level people, every now and then (when caught), says it all. They are serious about money, and not serious at all, about Security|Quality.

If you want to meet a goal, you need to incentivize it. Carrots work better than sticks. Sales people get a lot of stress, and can get fired easily, but they can also make a great deal of money, if they succeed. Security people don't get fired, if they succeed, and get fired, if they don't. Often, the result of good work is ... nothing ... No breaches, no disasters, no drama. Hard to measure, as well. How to quantify an absence?

Sales: Lots of carrot, and the same stick as everyone else gets. Easy to measure, too.

Security: No carrot. All stick. The stick can be a really big stick, too; with nails driven through it.

I'm really not sure what the answer is, but it's cultural, and culture is always the most difficult thing to change.

slashtom
0 replies
54m

I think this is sort of it but I don't think it's the carrot that's the problem here. I believe it's the process and yeah ultimately the culture.

I don't think you want sales concerned about security; their focus should be on growth and only growth. The problem is that if you don't give jurisdiction and power to the other side to actually say "no, this security fix goes in before work is done on that new feature," then you have an imbalanced system.

If the project manager who is incentivized toward growth is the decision-maker for deciding what is prioritized, well of course naturally you'll have the PM choosing growth over security.

Process needs fixing, give more agency and jurisdiction to the other side to effect change. It's not like security doesn't see what the issues are, it's just the fixes are not prioritized and the culture and process isn't balanced between both.

thiagoharry
0 replies
2h29m

Like any other private company? All choices are made to maximize profit; even when they spend resources on security, it is to maximize profit.

stratigos
0 replies
1h50m

This comes off like a study being published that shows tobacco is harmful to the lungs.

say_it_as_it_is
0 replies
1h25m

Business decisions involve profits against everything, not just security. Delaying shipments to make a product more secure can affect revenue targets.

photochemsyn
0 replies
1h47m

Yes, it's called investment capitalism: as long as the consequences of the actions one demanded are never felt by oneself, due to the limited liability of the financiers and shareholders, such behavior will never change.

The solutions are well known - the corporate death penalty is a good one, which dissolves the legal and financial structures of the company (the real assets such as factories are unharmed by this, and may simply be sold to a new more reliable set of financiers and shareholders, or may be nationalized and managed by the state, or may be handed over to the workers who run the place to see if they can form an employee-owned company or not, etc.).

This isn't such a radical viewpoint, even many venture capitalists agree that this is the right way to go, e.g. on the airlines:

https://www.cnbc.com/video/2020/04/13/government-should-let-...

outside1234
0 replies
3h45m

Unless that other priority is laying people off. Then the layoffs are more important.

nonrandomstring
0 replies
1h40m

Surely, a whistleblower is someone who reveals a truth that nobody knows?

mikeegg1
0 replies
18m

Will the whistle blower end up the same way as Boeing whistle blowers? "See something; say something."

mihaaly
0 replies
38m

I observed they chose profit over usability and user needs as well (the list is toooo long, I'll spare all of us the full pour here; let's just say I am contemplating getting a completely different job where I do not have to run circles around the way Windows is corrupted), so this fits into the big picture after all.

hypeatei
0 replies
4h3m

Execs should absolutely be held responsible, but the human factor is always there. Many times people will take the easy route and get worn down by security practices or roadblocks.

I think it's too easy to say "alright, focus on security" and then expect it to trickle down and figure itself out.

hooverd
0 replies
2h48m

Security first, but security from whom?

fredgrott
0 replies
3h7m

Oh, let's parse fonts in kernel space rather than user space. What could ever go wrong? This is not new; it's a major feature of how MS works.

ckozlowski
0 replies
1h58m

There's a pretty big caveat in this story which I feel is being looked over:

"Disabling seamless SSO would have widespread and unique consequences for government employees, who relied on physical “smart cards” to log onto their devices. Required by federal rules, the cards generated random passwords each time employees signed on. Due to the configuration of the underlying technology, though, removing seamless SSO would mean users could not access the cloud through their smart cards. To access services or data on the cloud, they would have to sign in a second time and would not be able to use the mandated smart cards."

The U.S. Government (USG) is one of MSFT's largest (if not the largest) customers. The user base is enormous, and the AD footprint equally so. I have experience working in this space; user and role management is a nightmare, with compromised credentials, locked-out accounts, and the like. Given the nature of their work, it's a constant target.

The USG has been attempting to move everyone to smart card auth to help mitigate some of these issues. Removing passwords and turning everyone to two-factor auth would greatly reduce their attack surface. They've been pursuing this for years.

So along comes this guy, and as part of his fix, he says to just tell all of their customers to turn this off.

I don't dispute the danger of the original SAML flaw. But I think Harris is unfairly judging the rest of MSFT's reaction here. He's asking them to turn off two-factor auth across entire agencies. I might as well hand an attacker a set of credentials, because that's how little effort and time they would need to phish a set off someone.

To reiterate, the flaw in AD FS was bad and needed immediate attention. But the short-term mitigation Harris proposed would drastically hurt their security and open tons of customers to attacks of the very sort they were trying to prevent. This story is spun as another instance of a company not caring about security, but I see a "whistleblower" who had a very narrow view of his customers' overall security posture and threw a fit when this was pointed out to him.

"To access services or data on the cloud, they would have to sign in a second time and would not be able to use the mandated smart cards.

Harris said Morowczynski rejected his idea, saying it wasn’t a viable option."

I would fully expect most government agency Info Sec Systems Managers (ISSMs) to say the same.

cellu
0 replies
3h55m

surprisedpikachu.jpg

SebFender
0 replies
1h44m

We needed an article to make sure this was clear.