This might be my personal experience, but I've never encountered a QA team that actually writes the tests for engineering.
I have only had QA teams that wrote "test plans" and executed them manually, and in rarer cases, via automated browser / device tests. I consider these types of tests to be valuable, but less so than "unit tests" or "integration tests".
With this model, I have found that the engineering team ends up being the QA team in practice, while the actual QA team often only finds bugs that aren't really bugs, which just creates noise and takes away more value than they provide.
I would love to learn about QA team models that work. Manual tests are great, but they only go so far in my experience.
I'm not trying to knock on QA folks, I'm just sharing my experience.
From what I've seen, the value in QA is product familiarity. Good QA'ers know more about how the product actually works than anybody else. More than PM's, more than sales, and more than most dev teams. They have a holistic knowledge of the entire user-facing system and can tell you exactly what to expect when any button gets pushed. Bad QA'ers are indeed a source of noise. But so are bad devs, sysadmins, T1/2 support, etc.
Not disagreeing with this, but there's one thing they won't always be aware of. They won't always know what code a dev touched underneath the hood and what they might need to recheck (short of a full regression test every single time) to verify everything is still working.
I know that the component I adjusted for this feature might have also affected the component over in spots X, Y, and Z, because I looked at that code and probably did a code search or a 'find references' check at some point to see where else it's getting called. I also usually retest those other places as well (not every dev does, though; I've met some devs who think it's a waste of time and money for them to test anything and that it's entirely QA's job).
A good QA person might also intuit other places that could be affected if it's a visible component that looks the same (but either I haven't worked with many good QA people or that intuition is pretty rare; I'm guessing it's the latter, because I believe I have worked with people who were good at QA). Because of that, I do my best to be proactive and say, "oh, by the way, this code might have affected these other places, please include those in your tests".
It doesn't necessarily matter what code was changed: a change in Module A can cause a bug in Module B that hasn't been touched in a year. A QA test plan should cover the surface area of the product as used by consumers, whoever they might be. Knowing that some module had fixes can inform the test plan or the focus areas when the test schedule is constrained, but only testing changes is the road to tears.
Test plans never account for everything, at least in my experience, especially edge cases. And it's rare that I've seen any QA team do a full regression test of the entire site. There's only been a few times where I've seen it authorized, and that's usually after a major refactoring or rewrite.
I'm not in QA, I write code, so I usually defer to whatever they decide for these things; these are just observations from what I've seen.
I just try to make sure I test my code enough that there isn't anything terribly broken when I check it in, and the fixes I need to make tend to be relatively minor (with a few exceptions in my past).
Also, I'm not necessarily talking about basic functionality here. I'm currently working for a client that's very picky about the look and feel, so if a few pixels of padding get adjusted somewhere noticeable, or a font color or size gets tweaked in one place and it affects something else, there could be complaints. And a test plan is not likely to catch that, at least not on any projects I've worked on.
This is a good point, but there are some QA who do review code (source: me - I started my career in QA and transitioned to dev). When making a test plan, an important factor is risk assessment. If QA has a hunch, or better, when the dev lead flags complex changes, the test plan should be created and then the code diffs reviewed to assess whether or not the plan needs revising. For example, maybe the QA env doesn’t have a full replica of prod, but a query is introduced that could be impacted if one of the joining tables is huge (like in prod). So maybe we’d adjust the plan to run some benchmarks on a similar-scale environment.
I’m definitely biased since I started in QA and loved it. To me, good QA is a cross-section of many of the things people have mentioned - technical, product, ops, security - with a healthy dash of liking to break things. However, the reality is that the trend has been to split that responsibility among people in each of those roles and get rid of QA. That works great if the people in each of those job functions have the bandwidth to take on that QA work (they’ll all have a much deeper knowledge of their respective domains). But you’ll lose coverage if any one of those people doesn’t have time to dedicate to proper QA.
(I’ll also completely acknowledge that it’s rare to have a few, let alone a full team, of QA people who can do that.)
Not really. As QA I always reviewed the checkins since yesterday before opening up the daily build. Between the bug comments and the patch comments, even if the patch itself is a bit Greek to me, I can tell what was going on enough to be a better tester of that area.
This is a great model, until those people so familiar with the business needs end up... doing business things instead. It's really hard to keep people like that in a QA role once the business recognizes their value. Kind of the same problem with QA automation people - once they become really good at test automation, they are effectively software developers, and want to go there.
I have never once heard of QA folks ending up in project or product management too often; they almost always have the opposite problem of not being able to escape the QA org despite many years. Most companies are extremely resistant to people moving tracks, especially from a “lower status” org like QA or CS. It’s the exception, not the rule.
I think that's a compensation problem more than anything else. I've known some QA folks who enjoyed QA and would have stayed in that role if they could have justified the massive differential in comp between QA and SWE or product development. If we valued QA and compensated it at the same level we do those other roles then there would be a lot less difficulty retaining good QA folks.
Agreed! I did have some good experiences at my last job with the QA team, but it was definitely a unique model. They were really a "Customer Success" team: a mix of QA, sales, and customer support.
These "Customer Support" reps, when functioning as QA, knew the product better than product or eng, exactly how you're describing. I did enjoy that model, but they also did not write tests for us. They primarily executed manual test plans, after deploys, in production. They did provide more value than creating noise, but the engineering team still was QA, at least from an automated test standpoint.
We had no dedicated QA, but would consistently poach "Customer Success" team members for critical QA work for the exact reasons you listed. Worked quite well for us.
Especially for complex products that are based on users chaining many building blocks together to create something useful, devs generally have no visibility into how users work and how to test.
To your point, the QA team is the customer's advocate. As you say, they know the product, from the customer's perspective, better than anyone else in the development organization.
Where I've seen QA teams most effective is providing more function than "just" QA. I've seen them used for 2nd tier support. I've seen them used to support sales engineers. I've also seen QA teams that take their manual test plans and automate their execution (think Selenium or UiPath) and have seen those automations included in dev pipelines.
Finally, the QA team are the masters and caretakers of your test environment(s) and all the different types of accounts you need for testing; they should have knowledge of all the different browsers and OSes your customers are using, and so forth.
That's a lot for the dev team to take on.
That also means they test from a different perspective than the dev does. If I get a requirement, my build is based on my understanding of that requirement, and so is my testing.
A separate QA person coming at it from the customer's perspective will do a test that's much more likely to reflect reality.
I completely agree with that. It really comes down to having the right skills as a QA person. If you don't know how the product is used and only click on some buttons, you will never reach the states in the software that real users reach, and therefore you will also not be able to reproduce them.
Frameworks like Playwright can record user actions as code, and you can replay them in a test.
So you can make your QA teams create plenty of tests if you give them the right tools.
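For anyone who hasn't seen it, the recorded tests come back as ordinary code. Here's a minimal sketch of roughly what Playwright's codegen (npx playwright codegen) hands back, where the URL, labels, and flow are made-up placeholders rather than anything from a real product:

    // Recorded-style Playwright test; the flow below is a hypothetical
    // placeholder, not taken from a real app.
    import { test, expect } from '@playwright/test';

    test('user can log in and reach the dashboard', async ({ page }) => {
      await page.goto('https://example.com/login');
      await page.getByLabel('Email').fill('qa@example.com');
      await page.getByLabel('Password').fill('hunter2');
      await page.getByRole('button', { name: 'Sign in' }).click();
      // Assert on something user-visible rather than implementation details.
      await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
    });

A QA person can record a flow by clicking through the app and then hand-tune the selectors and assertions, which is usually where a test goes from brittle to maintainable.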
In my experience such tests are brittle as hell.
You're not wrong, but a good, well-resourced QA org can both help develop more flexible tests and help fix brittle tests when they do break. The idea of brittle tests that break often being a blocker is predicated on practices like running every type of test on every commit, which exist to deal with a lack of QA effort in the first place.
Maybe recorded integration tests are run on every release instead of every commit? Maybe the QA team uses them less to pass/fail work and more to easily note which parts of the product have changed and need attention for the next release? There's lots of possibilities.
That would limit the frequency of releases.
The engineering team are usually great at writing tests that test their code; a good QA can test alongside them to find cases they've missed and issues that automated code tests can't find. The QA person doesn't have to spend time checking that the app basically works; they can be confident in that and spend their time testing for other 'qualities'. But yes, I've known QA teams that will only find bugs that no one cares about or that are never likely to happen - often because they are not trained on the product well enough to dig deep.
It seems so obvious to me that your typical engineer, who spent hours / days / whatever working on a feature, is never going to test the edge cases that they didn’t conceive of during implementation. And if they didn’t think of it, I bet they’re not handling it correctly.
Sometimes that’ll get caught in code review, if your reviewer is thinking about the implementation.
I’ve worked in payroll and finance software. I don’t like it when users are the ones finding the bugs for us.
I started off as a dev, wanted to change to being a tester/QA but was told by the CEO that "the customers are better at finding bugs than we are so just give the app a quick look over and ship it out" - I left soon after that.
in the classic model, most QA orgs were a useless appendage. partially by construction, but largely because QA gets squeezed out when dev is late (when does that happen?). they aren't folded in early, so they twiddle their thumbs doing 'test infrastructure' and 'test plans', until they finally get a code drop and a 48 hr schedule to sign off, which they are under extreme pressure to do.
but every once in a while you ran across a QA organization that actually had a deep understanding of the problem domain, and actually helped drive development. right there alongside dev the entire way. not only did they improve quality, but they actually saved everyone time.
Not sure why this was downvoted, that second paragraph is right on the money.
Saying "useless appendage" sounds to me like it's the QA team that's the problem, when what you're really saying is that it's the organization and process that pushed QA teams into irrelevance. I agree with your assessment overall, and those issues were one of the driving forces behind companies dispensing with QA and putting it all on the developers.
Unit tests are great when you provide data that the methods expect and that is sane. The trouble starts when users get in front of the UI and submit data that you never even thought about testing with your unit tests.
To me, unit tests are great to ensure the code doesn't have silly syntax errors and returns results as expected on the happy path of coding. I would never consider that QA no matter how much you randomize the unit test's input.
Humans pushing buttons, selecting items, hovering their mouse over an element, doing all sorts of things that have no real reason but get done anyway, will almost always wreck your perfect little unit tests. Why do you think we have session playback now? Because no matter what a dev does to recreate an issue, it's never the exact same thing the user did. And there's always that one little "WTF, does that matter?" type of thing the user did without even knowing they were doing anything.
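To make the happy-path point concrete, here's a hypothetical sketch (assuming a Jest-style runner; the function and inputs are invented for illustration):

    // Hypothetical happy-path trap: the test only feeds the function the
    // kind of input the dev had in mind.
    import { test, expect } from '@jest/globals';

    function parseQuantity(input: string): number {
      return parseInt(input, 10);
    }

    test('parses a quantity', () => {
      expect(parseQuantity('3')).toBe(3); // passes, and coverage looks fine
    });

    // Real users type " 3 items", "3,000", or "three" into the same field.
    // parseInt returns 3, 3, and NaN for those, and none of them ever
    // shows up in the happy-path test above.

The unit test is still worth having; it just can't stand in for someone actually poking at the UI the way a user would.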
A good QA team is worth their weight in $someHighValueMineral. I worked with one person who was just special in his ability to find bugs. He was savant-like. He could catch things that ultimately made me look better, as the final released thing was rock solid. Even after other QA team members gave a thumbs up, he could still find something. There were days where I hated it, but it was always a better product because of his efforts.
Unit tests are used to test functions that have only defined inputs, and whose outputs depend only on those inputs.
You can extract a lot of business logic into those kinds of functions. There's a whole art in writing "unit testable code". Those unit tests have value.
What's left is the pile of code and scenarios that need to be tested in other ways. But part of the art is in shrinking down that pile as much as possible.
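As a sketch of what that extraction can look like (the domain, names, and rules here are invented, not from any particular product):

    // The pricing rule is a pure function of its inputs, so it can be unit
    // tested exhaustively without stubbing a UI, database, or system clock.
    interface Order {
      subtotalCents: number;
      isFirstPurchase: boolean;
    }

    export function discountCents(order: Order, today: Date): number {
      if (order.isFirstPurchase) {
        return Math.round(order.subtotalCents * 0.10);
      }
      // Hypothetical rule: 5% off on weekends.
      const isWeekend = today.getDay() === 0 || today.getDay() === 6;
      return isWeekend ? Math.round(order.subtotalCents * 0.05) : 0;
    }

The thin layer that actually touches HTTP, the database, and the clock stays small, and that's the part that ends up in the "other ways" pile.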
The type of testing QA should be doing is different from the type of testing that devs should be doing. One doesn't substitute for the other.
I remember Steve Maguire saying this in Writing Solid Code (that they're both necessary, and both types of testing complement the other). He criticized Microsoft employees who relied on QA to find their bugs. He compared QA testing to a psychologist sitting down with a person and judging whether the person is insane after a conversation. The programmer can test from the inside out, whereas QA has to treat the program like a black box, with outputs and effects resulting from certain inputs.
A good QA person is to a software developer as a good editor is to a writer. Both take a look at your hard work and critique it ruthlessly. Annoying as hell when it's happening, but in my experience well worth it because the end result is much higher quality.
I might just be too old, but I remember when QA people didn't typically write tests, they manually tested your code and did all those weird things you were really hoping users wouldn't do. They found issues and bugs that would be hard to universally catch with tests.
Now we foist QA on the user.
Working with younger devs I find that the very concept of QA is something that is increasingly foreign to them. It's astounding how often I've seen bugs get to prod and ask "how did it work when you play around with it locally?" only to get strange looks: it passed the type checker, why not ship it?
Programmer efficiency these days is measured in PRs/minute, so introducing bugs is not only not a problem, but great because it means you have another PR you can push in a few days once someone else notices it in prod! QA would have ruined this.
This drives me crazy. It's a cheap way of saying we're ok shipping crap. In the past, I've been part of some QA audits where the developers claimed their customer support log sufficed as their test plan. This wasn't safety-critical software, but it did involve what I would consider medium risk (e.g., regulatory compliance). The fact that they openly admit they are okay shipping bad products in that environment just doesn't make sense to me.
yeah my experience is basically the same. usually if a place has qa at all, it's one person in a team who doesn't have an adequate environment or data set to test with, and they effectively end up just watching the developer qa their own work, and i end up screaming into a pillow every time i see "Tester: hi" pop up on my screen.
the one exception to this was when i was qa (never again) and i made sure we only ever did automated tests. unfortunately management was nonexistent, devs made zero effort to work with us, and naturally we were soon replaced by a cheap offshore indian team who couldn't tell you the difference between a computer and a fridge anyway.
i think a lot of it just stems from companies not caring about qa, not knowing who to hire, and not knowing what they want the people they hire to achieve. "qa" is just like "agile", where nobody can be bothered to actually learn anything about it, so they make something up and then pat themselves on the back for having it.
Microsoft used to test this way, at least in the team I worked with. SDEs still wrote unit tests. But SDETs wrote a lot of automated tests and whatever random test tools that ended up being needed. The idea was to free up more SDE time to focus on the actual product.
I think that era is over after the great SDET layoffs of 2014/2015? Now I guess some SDE teams are tasked with this kind of dev work.
I think the thing missing from a lot of these conversations is what problem domain you're working in. The more "technical" your problem domain is the more valuable automated testing will be over manual. For almost anything based on user experience and especially mass-market customer-facing products, human QA is far more necessary.
In either case, the optimal operating model is that QA is embedded in your product team. They participate in writing tickets, in setting test criteria, and in understanding the value of the work being done. "Finding bugs" is a low value task that anyone can do. Checking for product correctness requires a lot more insight and intuition. Automated test writing can really go either direction, but typically I'd expect engineers to write unit tests and QA to write e2e tests, and only as much or as little as actually saves time and can satisfactorily indicate success or failure of a user journey.
At least from my knowledge of the gaming world, there are QA devs who do find the issues and fix them if they have the ability to do so, point out code that should be taken a look at, and all of that. I find it extremely valuable to have another set of eyes in the code with a much more focused perspective, sometimes different from the dev's.
I've only ever had an official QA team in one job, at a Fortune 1000. When I started we didn't have anyone yet, but eventually they hired a mid-manager from India and brought him over (as in relocated his whole family). He then brought on a QA person he had worked with previously.
I did not work well with the mid-manager, who was both my new boss and the QA person's (not too relevant here). However, I do give him credit for the person he hired.
That QA person, a young Indian woman with some experience, was actually phenomenal at her job, catching many mistakes of ours both in the frontend and in the APIs.
She not only did a bunch of manual testing (and thus discovered many user-facing edge cases the devs missed), she wrote all the test cases (exhaustively documented them in Excel, etc. for the higher-ups), AND the unit tests in Jest, AND all the end-to-end tests with Playwright. It drastically improved our coverage and added way more polish to our frontend than we otherwise would've had.
Did she know everything? No, there was some stuff she wasn't yet familiar with (namely DOM/CSS selectors and XPath), and it took some back-and-forth to figure out a system of test IDs that worked well enough for everyone. She also wasn't super fluent with the many nuances of Javascript (but really, who is). There was also a bit of a language barrier (not bad, but noticeable). Overall, though, I thought she was incredible at her job, very bright, and ridiculously hard-working. I would often stay a little late, but she would usually be there for hours after the end of the day. She had to juggle the technical dev/test tasks, the cultural barriers, managing both up and across (as in producing useless test case reports in Excel for the higher-ups, even though she was also writing the actual tests in code), dealing with complex inter-team dynamics, etc.
I would work with her again any day, and if I were in management, I'd have promoted the heck out of her, trained her in whatever systems/languages she was interested in learning, or at least given her a raise if she wanted to stay in QA. To my knowledge the company didn't have a defined promotion system though, so for as long as I was there, she remained QA :( I think it was still better than the opportunities she would've had in India, but man, she deserved so much more... if she had the opportunities I did as an American man, she'd probably be a CTO by now.
I have a few times. But the only common thing in the QA industry is that every company does it differently and thinks they're doing it the "normal way".
I share the same experience: the QA team writes test plans, not “code level” tests.
That said, those test plans are gold. They form the definition of the product’s behavior better than any Google Doc, Integration Test, or rotating PM ever could.
Anecdotally, from my time on Windows Vista I remember an old-school tester who didn't write any code, just clicked on stuff. From what I could tell, in terms of finding serious bugs he was probably more valuable than any of the SDETs who did write code. His ability to find UI bugs was just amazing (partly due to familiarizing himself with the feature specs, I think, and partly due to some mysterious natural ability).
We have SDETs for that. And they do a great job. But QA is where polish happens. When you get good QA people, the ones who know the app better than the developers, better than the users, who anticipate how the users will use the product, those people should be paid their weight in gold.
That's only your personal experience because our QE team at Red Hat spend a very large amount of their time coding new tests or automating existing ones. They use this framework: https://avocado-vt.readthedocs.io/en/latest/Introduction.htm...
This is because there is no formal way to run a QA org; people get hired and are told to “figure it out”. Then, as other posters said, the other orgs ignore the QA org because they have no understanding of the need. What you’re describing is a leadership problem, not a QA usefulness problem.
I was surprised by the opposite of this (after entering my first real job at Google, after startup founder => seller.)
People wrote off QA completely unless it meant they didn't have to write tests, but that didn't track from my (rather naive) perspective, because tests are _always_ part of coding.
From that perspective, it seemed QA should A) manage go/no-go and manual testing of releases, B) keep the CI green and tasks assigned for red (bonus points if they had capacity to try fixing red), and C) make longer-term infra investments, e.g. what can we do to migrate manual testing to integration testing, and what can we do to make integration testing not finicky in the age of mobile.
I really enjoyed this article because it also indicates the slippery slope I saw there: we had a product that had a _60% success rate_ on setup. And the product cost $200+ to buy. In retrospect, the TL was into status games, not technical stuff, and when I made several breakthroughs that allowed us to automate testing of setup, they pulled me aside to warn me that I should avoid getting into it because people don't care.
It didn't compute to me back then, because leadership _incessantly_ talked about this being a #1 or #2 problem in quarterly team meetings.
But they were right. All that happened was my TL got mad because I kept going with it, my skip manager smiled and got a bottle of wine to celebrate with, I got shuffled off to work with QA for next 18 months, and no one ever really mentioned it again.
Hello. I am QA that writes tests for engineering. Technically, my title is a Software Development Engineer in Test (SDET). Not only do I write "test plans", I work on the test framework, infrastructure and the automation of those test plans.
Every company is different in how they implement the QA function, whether it be left to the customer, developers, customer support, manual-only QA, or SDETs. It really comes down to how much leadership values quality, or how leadership perceives QA.
If a company has a QA team, I think the most success comes when QA get involved early in the process. If it is a good QA team, they should be finding bugs before any code is written. The later they are involved, the later you find bugs (whether the bugs are just "noise" or not) and then the tighter they get squeezed between "code complete" and release. I think that the QA team should have automation skills so more time is spent on new test cases instead of re-executing manual test cases.
Anyways, from my vantage point, the article really hits hard. QA are sometimes treated as second class citizens and left out of many discussions that can give them the context to actually do their job well. And it gets worse as the good ones leave for development or product management. So the downward spiral is real.
Weird, I have had the opposite experience: most shit slips through the cracks of automated testing, and manual testing by an experienced QA is 10x more effective.
I've seen good QA teams who own and develop common infrastructure, and can pursue testing initiatives that just don't fit with engineering teams. When developing a new feature, the team developing it will write new tests to cover the functionality of the feature, and will own any failures in those tests moving forward. But while they're doing that, the QA team is operating a database of test failures, performance metrics, etc. that can provide insight into trends, hot spots needing more attention, and so on. They're improving the test harnesses and test frameworks so it's easier for the engineering teams to develop new tests quickly and robustly.

While the engineering team probably owns all of the unit tests and some integration tests, a dedicated QA team focuses on end-to-end tests, and tests that more accurately recreate real world scenarios. Sometimes there are features that are hard to test well because of non-deterministic behavior, lots of externalities, etc., and I think QA should be seen as an engineering specialty - sometimes they should collaborate with the feature teams to help them do that part of their job better and teach them new testing techniques that may be appropriate but perhaps aren't obvious or common.
I would also second another comment that pointed out that good QA folks often know the real surface area of the product better than anyone. And good QA folks also need to be seen as good QA folks. If you have a corporate culture that treats QA folks like secondary or lesser engineers, that will quickly be a self-fulfilling prophecy. The good ones will leave all the ones who fit your stereotype behind by transitioning into dev roles or finding a new team.
The Q&A teams I've seen worked the way you describe initially except they were valuable.
They weren't there for engineering, they were there for product quality. Their expertise was that they knew what the product was supposed to do and made sure it did it. Things like "unit tests" help development, but they don't make sure the product satisfies client requirements.
If engineering is really on top of it, they learn from QA and QA seems to have nothing to do. But don't let that situation fool you into thinking they are "just creating noise and taking away more value than they provide"
The last two organizations I worked for had full QA teams with people who wrote the tests, not just test plans. The devs sometimes provided features to facilitate it, but the QA teams were the ones that constructed the tests, ran them, and decided if the software was ready to be released. Some things had manual tests, but a large percentage was fully automated.
I work with the regulated drug development industry, and believe there is a useful and important distinction between Quality Control (QC) and Quality Assurance (QA). I wonder if perhaps this distinction would be useful to software quality too.
QC are the processes that ensure a quality product: things like tests, monitoring, metrology, audit trails, etc. No one person or team is responsible for these, rather they are processes that exist throughout.
QA is a role that ensures these and other quality-related processes are in place and operating correctly. An independent, top level view if possible. They may do this through testing, record reviews, regular inspections and audits, document and procedure reviews, analyzing metrics.
Yes, they will probably test here and there to make sure everything is in order, but this should be higher level - testing against specifications, acceptability and regulatory, perhaps some exploratory testing, etc.
Critically they should not be the QC process itself: rather they should be making sure the QC process is doing its job. QA's value is not in catching that one rare bug (though they might), but in long term quality, stability, and consistency.
In my company the engineering team mostly writes unit tests. Then there was a weekly manual QA exercise where the oncall engineer followed a checklist with our actual mobile app on an actual phone before it went to the store. When this started to take almost the entire day, we hired a contract workforce for it. The contract workforce is in the process of automating those tests, but the most important ones still get human eyes on.
FWIW, I have seen that same model have some success, provided management is willing to stand up for QA. When QA isn't actively writing tests, they can still provide some balance against human biases that tend toward following the easiest path. In these cases, QA provides an objective viewpoint and a backstop to cost and schedule pressures that might lead to bad decisions. This might be most valuable for safety-critical code, but I suppose it can still apply at various levels of risk.
I've seen where this has gone poorly as QA was slowly eroded. It became easier and easier to justify shoddy testing practices. Low-probability events don't come around often by their very nature, and that can create complacency. I've seen some aerospace applications have close calls related to shortcomings in QA integration; in those cases, luck saved the day, not good development practices.
I think that complicates conversations like this. I’ve seen QA people ranging from the utterly incompetent, to people who knew the product and users better than anyone else, to people writing code and tackling gnarly performance or correctness issues.
If your company hires the low end of that scale, any approach is going to have problems because your company has management problems. It’s very easy to take a lesson like “QA is an outdated concept” because that’s often easier than acknowledging the broken social system.