
Maybe getting rid of your QA team was bad

wmichelin
51 replies
22h19m

This might be my personal experience, but I've never encountered a QA team that actually writes the tests for engineering.

I have only had QA teams that wrote "test plans" and executed them manually, and in rarer cases, via automated browser / device tests. I consider these types of tests to be valuable, but less so than "unit tests" or "integration tests".

With this model, I have found that the engineering team ends up being the QA team in practice, and then the actual QA team often only finds bugs that aren't really bugs, just creating noise and taking away more value than they provide.

I would love to learn about QA team models that work. Manual tests are great, but they only go so far in my experience.

I'm not trying to knock on QA folks, I'm just sharing my experience.

yabones
13 replies
22h7m

From what I've seen, the value in QA is product familiarity. Good QA'ers know more about how the product actually works than anybody else. More than PMs, more than sales, and more than most dev teams. They have a holistic knowledge of the entire user-facing system and can tell you exactly what to expect when any button gets pushed. Bad QA'ers are indeed a source of noise. But so are bad devs, sysadmins, T1/2 support, etc.

cableshaft
4 replies
21h55m

> Good QA'ers know more about how the product actually works than anybody else. More than PM's, more than sales, and more than most dev teams.

Not disagreeing with this, but there's one thing they won't always be aware of. They won't always know what code a dev touched underneath the hood and what they might need to recheck (short of a full regression test every single time) to verify everything is still working.

I know that the component I adjusted for this feature might have also affected things over in spots X, Y, and Z, because I looked at that code, and probably did a code search or a 'find references' check at some point to see where else it's getting called. I also usually retest those other places as well (not every dev does, though. I've met some devs that think it's a waste of time and money for them to test anything and that it's entirely QA's job).

A good QA person might also intuit other places that might be affected if it's a visible component that looks the same (but either I haven't worked with many good QA people or that intuition is pretty rare; I'm guessing it's the latter, because I believe I have worked with people who were good at QA). Because of that, I do my best to be proactive and go "oh by the way, this code might have affected these other places, please include those in your tests".

giantrobot
1 replies
21h22m

> Not disagreeing with this, but there's one thing they won't always be aware of. They won't always know what code a dev touched underneath the hood and what they might need to recheck (short of a full regression test every single time) to verify everything is still working.

It doesn't necessarily matter what code was changed: a change in Module A can cause a bug in Module B that hasn't been changed in a year. A QA test plan should cover the surface area of the product as used by consumers, whoever they might be. While knowing that some module had fixes can inform the test plan or focus areas when the test schedule is constrained, only testing changes is the road to tears.

cableshaft
0 replies
20h55m

Test plans never account for everything, at least in my experience, especially edge cases. And it's rare that I've seen any QA team do a full regression test of the entire site. There have only been a few times where I've seen it authorized, and that's usually after a major refactoring or rewrite.

I'm not in QA, I write code, so I usually defer to whatever they decide for these things; these are just observations from what I've seen.

I just try to make sure I test my code enough that there isn't anything terribly broken when I check it in, and the fixes I need to make tend to be relatively minor (with a few exceptions in my past).

Also, I'm not necessarily talking about basic functionality here. I'm currently working for a client that's very picky about the look and feel, so if a few pixels of padding get adjusted somewhere noticeable, or a font color or size gets tweaked in one place and it affects something else, there could be complaints. And a test plan is not likely to catch that, at least not on any project I've worked on.

hysan
0 replies
21h30m

> They won't always know what code a dev touched underneath the hood and what they might need to recheck (short of a full regression test every single time) to verify everything is still working.

This is a good point, but there are some QA that do review code (source: me - started career in QA and transitioned to dev). When making a test plan, an important factor is risk assessment. If QA has a hunch, or better, when the dev lead flags complex changes, the test plan should be created and then the code diffs reviewed to assess whether or not the plan needs revising. For example, maybe the QA env doesn’t have a full replica of prod, but a query is introduced that could be impacted if one of the joining tables is huge (like in prod). So maybe we’d adjust the plan to run some benchmarks on an environment of similar scale.

I’m definitely biased since I started in QA and loved it. To me, good QA is a cross section of many of the things people have mentioned - technical, product, ops, security - with a healthy dash of liking to break things. However, the reality is that the trend has been to split that responsibility among people in each of those roles and get rid of QA. That works great if the people in each of those job functions have the bandwidth to take on that QA work (they’ll all have a much deeper knowledge of their respective domains). But you’ll lose coverage if any one of those people doesn’t have time to dedicate to proper QA.

(I’ll also completely acknowledge that it’s rare to have a few, let alone a full team, of QA people who can do that.)

asadotzler
0 replies
20h16m

> They won't always know what code a dev touched underneath the hood and what they might need to recheck (short of a full regression test every single time) to verify everything is still working.

Not really. As QA I always reviewed the checkins since yesterday before opening up the daily build. Between the bug comments and the patch comments, even if the patch itself is a bit Greek to me, I can tell what was going on enough to be a better tester of that area.

bbarn
2 replies
21h45m

This is a great model, until those people so familiar with the business needs end up... doing business things instead. It's really hard to keep people like that in a QA role once the business recognizes their value. It's kind of the same problem with QA automation people - once they become really good at test automation, they are effectively software developers, and want to go there.

taurath
0 replies
20h10m

I have never once heard of it being a problem that QA folks end up in project or product management too often; they almost always have the opposite problem of not being able to escape the QA org despite many years. Most companies are extremely resistant to people moving tracks, especially from a “lower status” org like QA or CS. It’s the exception, not the rule.

importantbrian
0 replies
21h35m

I think that's a compensation problem more than anything else. I've known some QA folks who enjoyed QA and would have stayed in that role if they could have justified the massive differential in comp between QA and SWE or product development. If we valued QA and compensated it at the same level we do those other roles then there would be a lot less difficulty retaining good QA folks.

wmichelin
1 replies
22h4m

Agreed! I did have some good experiences at my last job with the QA team, but it was definitely a unique model. They were really a "Customer Success" team; it was a mix of QA, sales, and customer support.

These "Customer Support" reps, when functioning as QA, knew the product better than product or eng, exactly how you're describing. I did enjoy that model, but they also did not write tests for us. They primarily executed manual test plans, after deploys, in production. They did provide more value than noise, but the engineering team was still QA, at least from an automated test standpoint.

mbb70
0 replies
21h25m

We had no dedicated QA, but would consistently poach "Customer Success" team members for critical QA work, for the exact reasons you listed. Worked quite well for us.

Especially for complex products that are based on users chaining many building blocks together to create something useful, devs generally have no visibility into how users work and how to test.

taylodl
1 replies
21h40m

To your point, the QA team is the customer's advocate. As you say, they know the product, from the customer's perspective, better than anyone else in the development organization.

Where I've seen QA teams most effective is providing more function than "just" QA. I've seen them used for 2nd tier support. I've seen them used to support sales engineers. I've also seen QA teams that take their manual test plans and automate their execution (think Selenium or UiPath) and have seen those automations included in dev pipelines.

Finally, the QA team are the masters and caretakers of your test environment(s) and all the different types of accounts you need for testing; they should have knowledge of all the different browsers and OSes your customers are using, and so forth.

That's a lot for the dev team to take on.

kgermino
0 replies
21h27m

That also means they test from a different perspective than the dev does. If I get a requirement, my build is based on my understanding of that requirement, and so is my testing.

A separate QA person coming at it from the customer's perspective will do a test that's much more likely to reflect reality.

hasoleju
0 replies
22h1m

I completely agree with that. It really comes down to having the right skills as a QA person. If you don't know how the product is used and only click on some buttons, you will never reach the states in the software that real users reach, and therefore you will also not be able to reproduce them.

BiteCode_dev
3 replies
22h4m

Frameworks like Playwright can record user actions as code, and you can replay them in a test.

So you can make your QA teams create plenty of tests if you give them the right tools.
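For instance, Playwright's "playwright codegen" command records a person clicking through the app and emits a test you can replay. A rough sketch of what the Python output looks like after light cleanup (the URL, selectors, and final assertion here are invented placeholders, not from any real app):

    # Roughly what `playwright codegen` emits after a QA person clicks
    # through a login flow; the assertion is added by hand afterwards.
    from playwright.sync_api import sync_playwright

    def test_recorded_login_flow():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("https://example.com/login")    # placeholder URL
            page.fill("#username", "qa-user")         # placeholder selectors
            page.fill("#password", "not-a-real-secret")
            page.click("text=Sign in")
            assert "Dashboard" in page.title()        # the human-added check
            browser.close()

The recorder gets QA most of the way there; the remaining work is naming the test and deciding what to assert.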

chopin
2 replies
21h59m

In my experience such tests are brittle as hell.

Osmose
1 replies
21h47m

You're not wrong, but a good, well-resourced QA org can both help write more flexible tests and help fix brittle tests when they do break. The idea of brittle, frequently breaking tests being a blocker is predicated on practices, like running every type of test on every commit, that exist to deal with a lack of QA effort in the first place.

Maybe recorded integration tests are run on every release instead of every commit? Maybe the QA team uses them less to pass/fail work and more to easily note which parts of the product have changed and need attention for the next release? There's lots of possibilities.

Kinrany
0 replies
10h46m

> Maybe recorded integration tests are run on every release instead of every commit?

That would limit the frequency of releases.

philk10
2 replies
22h8m

The engineering team are usually great at writing tests that test their code; a good QA can test alongside them to find cases they've missed and issues that automated code tests can't find. The QA person doesn't have to spend time checking that the app basically works; they can be confident in that and spend their time testing for other 'qualities'. But yes, I've known QA teams that will only find bugs that no one cares about or that are never likely to happen - often because they are not trained on the product well enough to dig deep.

e28eta
1 replies
21h52m

It seems so obvious to me that your typical engineer, who spent hours / days / whatever working on a feature, is never going to test the edge cases that they didn’t conceive of during implementation. And if they didn’t think of it, I bet they’re not handling it correctly.

Sometimes that’ll get caught in code review, if your reviewer is thinking about the implementation.

I’ve worked in payroll and finance software. I don’t like it when users are the ones finding the bugs for us.

philk10
0 replies
21h15m

I started off as a dev, wanted to change to being a tester/QA but was told by the CEO that "the customers are better at finding bugs than we are so just give the app a quick look over and ship it out" - I left soon after that.

convolvatron
2 replies
22h14m

in the classic model, most QA orgs were a useless appendage. partially by construction, but largely because QA gets squeezed out when dev is late (when does that happen?). they aren't folded in early, so they twiddle their thumbs doing 'test infrastructure' and 'test plans', until they finally get a code drop and a 48 hr schedule to sign off, which they are under extreme pressure to do.

but every once in a while you ran across a QA organization that actually had a deep understanding of the problem domain, and actually helped drive development. right there alongside dev the entire way. not only did they improve quality, but they actually saved everyone time.

lambic
0 replies
22h3m

Not sure why this was downvoted, that second paragraph is right on the money.

cratermoon
0 replies
22h3m

Saying "useless appendage" sounds to me like it's the QA team that's the problem, when what you're really saying is that it's the organization and process that pushed QA teams into irrelevance. I agree with your assessment overall, and those issues were one of the driving forces behind companies dispensing with QA and putting it all on the developers.

dylan604
1 replies
21h16m

Unit tests are great when you provide data that the methods expect and that is sane. It's not until users get in front of the UI that they submit data you never even thought about testing with your unit tests.

To me, unit tests are great to ensure the code doesn't have silly syntax errors and returns results as expected on the happy path of coding. I would never consider that QA no matter how much you randomize the unit test's input.

Humans pushing buttons, selecting items, hovering their mouse over an element, doing all sorts of things that have no real reason but are being done anyway, will almost always wreck your perfect little unit tests. Why do you think we have session playback now? Because no matter what a dev does to recreate an issue, it's never the exact same thing the user did. And there's always that one little "WTF does that matter" type of thing the user did without even knowing they were doing anything.

A good QA team are worth their weight in $someHighValueMineral. I worked with one person who was just special in his ability to find bugs. He was savant-like. He could catch things that ultimately made me look better, as the final released thing was rock solid. Even after other QA team members gave a thumbs up, he could still find something. There were days where I hated it, but it was always a better product because of his efforts.

tunesmith
0 replies
17h59m

Unit tests are used to test functions that have only defined inputs, and whose outputs depend only on those inputs.

You can extract a lot of business logic into those kinds of functions. There's a whole art in writing "unit testable code". Those unit tests have value.

What's left is the pile of code and scenarios that need to be tested in other ways. But part of the art is in shrinking down that pile as much as possible.
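A minimal sketch of that extraction (the domain and names are invented for illustration): push the I/O to the edges and keep the decision in a pure function that a unit test can pin down exactly.

    from datetime import date

    # Pure function: the output depends only on the inputs, so no
    # database, clock, or network is needed to test it.
    def late_fee(due: date, returned: date, per_day: float, cap: float) -> float:
        days_late = max((returned - due).days, 0)
        return min(days_late * per_day, cap)

    def test_late_fee():
        assert late_fee(date(2024, 1, 1), date(2024, 1, 1), 0.5, 10.0) == 0.0   # on time
        assert late_fee(date(2024, 1, 1), date(2024, 1, 5), 0.5, 10.0) == 2.0   # 4 days late
        assert late_fee(date(2024, 1, 1), date(2024, 3, 1), 0.5, 10.0) == 10.0  # capped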

JohnFen
1 replies
21h35m

The type of testing QA should be doing is different from the type of testing that devs should be doing. One doesn't substitute for the other.

trealira
0 replies
21h23m

I remember Steve Maguire saying this in Writing Solid Code (that they're both necessary, and both types of testing complement the other). He criticized Microsoft employees who relied on QA to find their bugs. He compared QA testing to a psychologist sitting down with a person and judging whether the person is insane after a conversation. The programmer can test from the inside out, whereas QA has to treat the program like a black box, with outputs and effects resulting from certain inputs.

IKantRead
1 replies
21h37m

A good QA person is to a software developer as a good editor is to a writer. Both take a look at your hard work and critique it ruthlessly. Annoying as hell when it's happening, but in my experience well worth it because the end result is much higher quality.

I might just be too old, but I remember when QA people didn't typically write tests, they manually tested your code and did all those weird things you were really hoping users wouldn't do. They found issues and bugs that would be hard to universally catch with tests.

Now we foist QA on the user.

Working with younger devs I find that the very concept of QA is increasingly foreign to them. It's astounding how often I've seen bugs get to prod, asked "how did it work when you played around with it locally?" and gotten only strange looks: it passed the type checker, why not ship it?

Programmer efficiency these days is measured in PRs/minute, so introducing bugs is not only not a problem, but great because it means you have another PR you can push in a few days once someone else notices it in prod! QA would have ruined this.

bumby
0 replies
20h37m

> Now we foist QA on the user.

This drives me crazy. It's a cheap way of saying we're ok shipping crap. In the past, I've been part of some QA audits where the developers claimed their customer support log sufficed as their test plan. This wasn't safety-critical software, but it did involve what I would consider medium risk (e.g., regulatory compliance). The fact that they openly admitted they were okay shipping bad products in that environment just doesn't make sense to me.

yungporko
0 replies
21h47m

yeah my experience is basically the same, usually if a place has qa at all, it's one person in a team who doesn't have an adequate environment or data set to test with, and they effectively end up just watching the developer qa their own work, and i end up screaming into a pillow every time i see "Tester: hi" pop up on my screen.

the one exception to this was when i was qa (never again) and i made sure we only ever did automated tests. unfortunately management was nonexistent, devs made zero effort to work with us, and naturally we were soon replaced by a cheap offshore indian team who couldn't tell you the difference between a computer and a fridge anyway.

i think a lot of it just stems from companies not caring about qa, not knowing who to hire, and not knowing what they want the people they hire to achieve. "qa" is just like "agile", where nobody can be bothered to actually learn anything about it, so they make something up and then pat themselves on the back for having it.

wombat-man
0 replies
21h32m

Microsoft used to test this way, at least in the team I worked with. SDEs still wrote unit tests. But SDETs wrote a lot of automated tests and whatever random test tools that ended up being needed. The idea was to free up more SDE time to focus on the actual product.

I think that era is over after the great SDET layoffs of 2014/2015? Now I guess some SDE teams are tasked with this kind of dev work.

tootie
0 replies
21h42m

I think the thing missing from a lot of these conversations is what problem domain you're working in. The more "technical" your problem domain is the more valuable automated testing will be over manual. For almost anything based on user experience and especially mass-market customer-facing products, human QA is far more necessary.

In either case, the optimal operating model is that QA is embedded in your product team. They participate in writing tickets, in setting test criteria, and in understanding the value of the work being done. "Finding bugs" is a low-value task that anyone can do. Checking for product correctness requires a lot more insight and intuition. Automated test writing can really go either direction, but typically I'd expect engineers to write unit tests and QA to write e2e tests, and only as much or as little as actually saves time and can satisfactorily indicate success or failure of a user journey.

spookie
0 replies
21h47m

At least from my knowledge of the gaming world, there are QA devs who do find the issues and fix them if they have the ability to do so, point out code that should be taken a look at, and all of that. I find it extremely valuable to have another set of eyes in the code with a much more focused perspective, sometimes different from the dev's.

solardev
0 replies
21h26m

I've only ever had an official QA team in one job, at a Fortune 1000. When I started we didn't have anyone yet, but eventually they hired a mid-manager from India and brought him over (as in relocated his whole family). He then brought on a QA person he had worked with previously.

I did not work well with the mid-manager, who was both my new boss and the QA person's (not too relevant here). However, I do give him credit for the person he hired.

That QA person, a young Indian woman with some experience, was actually phenomenal at her job, catching many mistakes of ours both in the frontend and in the APIs.

She not only did a bunch of manual testing (and thus discovered many user-facing edge cases the devs missed), she wrote all the test cases (exhaustively documented them in Excel, etc. for the higher-ups), AND the unit tests in Jest, AND all the end-to-end tests with Playwright. It drastically improved our coverage and added way more polish to our frontend than we otherwise would've had.

Did she know everything? No, there was some stuff she wasn't yet familiar with (namely DOM/CSS selectors and XPath), and it took some back-and-forth to figure out a system of test IDs that worked well enough for everyone. She also wasn't super fluent with the many nuances of Javascript (but really, who is). There was also a bit of a language barrier (not bad, but noticeable). Overall, though, I thought she was incredible at her job, very bright, and ridiculously hard-working. I would often stay a little late, but she would usually be there for hours after the end of the day. She had to juggle the technical dev/test tasks, the cultural barriers, managing both up and across (as in producing useless test case reports in Excel for the higher-ups, even though she was also writing the actual tests in code), dealing with complex inter-team dynamics, etc.

I would work with her again any day, and if I were in management, I'd have promoted the heck out of her, trained her in whatever systems/languages she was interested in learning, or at least given her a raise if she wanted to stay in QA. To my knowledge the company didn't have a defined promotion system though, so for as long as I was there, she remained QA :( I think it was still better than the opportunities she would've had in India, but man, she deserved so much more... if she had the opportunities I did as an American man, she'd probably be a CTO by now.

shados
0 replies
22h2m

> but I've never encountered a QA team that actually writes the tests for engineering.

I have a few times. But the only common thing in the QA industry is that every company does it differently and thinks they're doing it the "normal way".

sghiassy
0 replies
21h40m

I share the same experience: the QA team writes test plans, not "code level" tests.

That said, those test plans are gold. They form the definition of the product’s behavior better than any Google Doc, Integration Test, or rotating PM ever could.

senderista
0 replies
21h32m

Anecdotally, from my time on Windows Vista I remember an old-school tester who didn't write any code, just clicked on stuff. From what I could tell, in terms of finding serious bugs he was probably more valuable than any of the SDETs who did write code. His ability to find UI bugs was just amazing (partly due to familiarizing himself with the feature specs, I think, and partly due to some mysterious natural ability).

scruple
0 replies
12h21m

We have SDETs for that. And they do a great job. But QA is where polish happens. When you get good QA people, who know the app better than the developers, better than the users, who anticipate how the users will use the product? These people should be paid their weight in gold.

rwmj
0 replies
21h23m

That's only your personal experience because our QE team at Red Hat spend a very large amount of their time coding new tests or automating existing ones. They use this framework: https://avocado-vt.readthedocs.io/en/latest/Introduction.htm...

righthand
0 replies
22h9m

This is because there is no formal way to run a QA org, people get hired and are told to “figure it out”. Then as other posters said the other orgs ignore the QA org because they have no understanding of the need. What you’re describing is a leadership problem, not a QA usefulness problem.

refulgentis
0 replies
21h57m

I was surprised by the opposite of this (after entering my first real job at Google, after startup founder => seller.)

People wrote off QA completely unless it meant they didn't have to write tests, but that didn't track from my (rather naive) perspective, in which tests are _always_ part of coding.

From that perspective, it seemed QA should A) manage go/nogo and manual testing of releases B) keep the CI green and tasks assigned for red (bonus points if they had capacity to try fixing red) C) longer term infra investments, ex. what can we do to migrate manual testing to integration testing, what can we do to make integration testing not-finicky in the age of mobile

I really enjoyed this article because it also indicates the slippery slide I saw there: we had a product that had a _60% success rate_ on setup. And the product cost $200+ to buy. In retrospect, the TL was into status games, not technical stuff, and when I made several breakthroughs that allowed us to automate testing of setup, they pulled me aside to warn me that I should avoid getting into it because people don't care.

It didn't compute to me back then, because leadership _incessantly_ talked about this being a #1 or #2 problem in quarterly team meetings.

But they were right. All that happened was my TL got mad because I kept going with it, my skip manager smiled and got a bottle of wine to celebrate with, I got shuffled off to work with QA for next 18 months, and no one ever really mentioned it again.

me_smith
0 replies
21h23m

Hello. I am QA that writes tests for engineering. Technically, my title is a Software Development Engineer in Test (SDET). Not only do I write "test plans", I work on the test framework, infrastructure and the automation of those test plans.

Every company is different in how it implements the QA function, whether it be left to the customer, developers, customer support, manual-only QA, or SDETs. It really comes down to how much leadership values quality, or how leadership perceives QA.

If a company has a QA team, I think the most success comes when QA get involved early in the process. If it is a good QA team, they should be finding bugs before any code is written. The later they are involved, the later you find bugs (whether the bugs are just "noise" or not) and then the tighter they get squeezed between "code complete" and release. I think that the QA team should have automation skills so more time is spent on new test cases instead of re-executing manual test cases.

Anyways, from my vantage point, the article really hits hard. QA are sometimes treated as second class citizens and left out of many discussions that can give them the context to actually do their job well. And it gets worse as the good ones leave for development or product management. So the downward spiral is real.

marcelr
0 replies
21h42m

Weird, I have had the opposite experience that most shit slips through the cracks of automated testing & manual testing by an experienced QA is 10x more effective.

macksd
0 replies
21h29m

I've seen good QA teams who own and develop common infrastructure, and can pursue testing initiatives that just don't fit with engineering teams. When developing a new feature, the team developing it will write new tests to cover the functionality of the feature, and will own any failures in those tests moving forward. But while they're doing that, the QA team is operating a database of test failures, performance metrics, etc. that can provide insight into trends, hot spots needing more attention, etc. They're improving the test harnesses and test frameworks so it's easier for the engineering teams to develop new tests quickly and robustly. While the engineering team probably owns all of the unit tests and some integration tests - a dedicated QA team focuses on end-to-end tests, and tests that more accurately recreate real world scenarios. Sometimes there are features that are hard to test well because of non-deterministic behavior, lots of externalities, etc., and I think QA should be seen as an engineering specialty - sometimes they should collaborate with the feature teams to help them do that part of their job better and teach them new testing techniques that may be appropriate that perhaps aren't obvious or common.

I would also second another comment that pointed out that good QA folks often know the real surface area of the product better than anyone. And good QA folks also need to be seen as good QA folks. If you have a corporate culture that treats QA folks like secondary or lesser engineers, that will quickly be a self-fulfilling prophecy. The good ones will leave all the ones who fit your stereotype behind by transitioning into dev roles or finding a new team.

joe_the_user
0 replies
21h15m

The QA teams I've seen initially worked the way you describe, except they were valuable.

They weren't there for engineering, they were there for product quality. Their expertise was that they knew what the product was supposed to do and made sure it did it. Things like "unit tests" help development, but they don't make sure the product satisfies client requirements.

If engineering is really on top of it, they learn from QA and QA seems to have nothing to do. But don't let that situation fool you into thinking they are "just creating noise and taking away more value than they provide"

g051051
0 replies
21h27m

The last two organizations I worked for had full QA teams with people who wrote the tests, not just test plans. The devs sometimes provided features to facilitate it, but the QA teams were the ones that constructed the tests, ran them, and decided if the software was ready to be released. Some things had manual tests, but a large percentage was fully automated.

earth_walker
0 replies
20h44m

I work with the regulated drug development industry, and believe there is a useful and important distinction between Quality Control (QC) and Quality Assurance (QA). I wonder if perhaps this distinction would be useful to software quality too.

QC are the processes that ensure a quality product: things like tests, monitoring, metrology, audit trails, etc. No one person or team is responsible for these, rather they are processes that exist throughout.

QA is a role that ensures these and other quality-related processes are in place and operating correctly. An independent, top level view if possible. They may do this through testing, record reviews, regular inspections and audits, document and procedure reviews, analyzing metrics.

Yes, they will probably test here and there to make sure everything is in order, but this should be higher level - testing against specifications, acceptability and regulatory, perhaps some exploratory testing, etc.

Critically they should not be the QC process itself: rather they should be making sure the QC process is doing its job. QA's value is not in catching that one rare bug (though they might), but in long term quality, stability, and consistency.

closeparen
0 replies
22h0m

In my company the engineering team mostly writes unit tests. Then there was a weekly manual QA exercise where the oncall engineer followed a checklist with our actual mobile app on an actual phone before it went to the store. When this started to take almost the entire day, we hired a contract workforce for it. The contract workforce is in the process of automating those tests, but the most important ones still get human eyes on.

bumby
0 replies
20h45m

FWIW, I have seen that same model have some success, provided management is willing to stand up for QA. When QA isn't actively writing tests, they can still provide some balance against human biases that tend toward following the easiest path. In these cases, QA provides an objective viewpoint and a backstop to cost and schedule pressures that might lead to bad decisions. This might be most valuable for safety-critical code, but I suppose it can still apply at various levels of risk.

I've seen where this has gone poorly as QA was slowly eroded. It became easier and easier to justify shoddy testing practices. Low-probability events don't come around often by their very nature, and that can create complacency. I've seen some aerospace applications have close calls related to shortcomings in QA integration; in those cases, luck saved the day, not good development practices.

acdha
0 replies
20h16m

I think the range of QA people complicates conversations like this. I've seen everyone from the utterly incompetent, to people who knew the product and users better than anyone else, to people writing code and tackling gnarly performance or correctness issues.

If your company hires the low end of that scale, any approach is going to have problems because your company has management problems. It’s very easy to take a lesson like “QA is an outdated concept” because that’s often easier than acknowledging the broken social system.

Zelphyr
39 replies
21h45m

I worked at two companies 15-20 years ago that invested in top-tier QA teams. They were worth their weight in gold. The products were world class because the QA team were fantastic at finding bugs we developers didn't think of looking for because we were too close to the problem. We are too used to looking at the happy path.

One key attribute to both companies is that it was dictated from on high that the QA team had final say whether the release went to production or not.

These days companies think having the developers write automated tests and spend an inordinate amount of time worrying over code coverage is better. I can't count how many products I've seen with 100% code coverage that objectively, quantifiably don't work.

I'm not saying automated testing is bad. I'm saying, just as the author does, that doing away with human QA testers is.

PH95VuimJjqBqy
12 replies
20h57m

The QA culture has to be there, not just dictates that the QA has final say.

I've seen companies where that's true and it was still trash because the QA were mostly low-paid contract workers who only did exactly what they were told and no more.

onlyrealcuzzo
10 replies
20h32m

Worked at a company where QA had the final say - and that was by far the most toxic / worst environment I have ever been in.

QA also REFUSED to let developers write automation tests, also REFUSED to let us run them ourselves.

What a nightmare.

YMMV, but just having the final say is not a silver bullet for sure.

mikestew
7 replies
20h26m

That’s not because “QA had the final say”, it’s because your QA team were ass clowns. Any QA team that discourages dev from writing or running tests needs to be burnt to ground and rebuilt.

ponector
6 replies
20h17m

Unless a dev is switching to writing tests 100% of the time and becoming a tester, it is highly inadvisable to let developers write tests.

That QA was not a clown, they've seen some shit...

Would you let QA write features in your production code?

ahtihn
2 replies
19h53m

Test code is just code. If you can write test code you can write production code. If you can write production code you can write tests.

If your concern is that devs don't have the right mindset for testing, you can have them collaborate with a QA specialist to define the test cases and review the test implementation.

ponector
0 replies
19h36m

In theory yes.

In practice, devs will not write good tests for their features, and QA will be kept away from committing to production code.

Btw, if it is just code, why can't developers implement features without bugs? There would be no need for QA in such an ideal world.

PH95VuimJjqBqy
0 replies
4h34m

better yet, developers write their own tests, QA write their own tests.

QA is a safety net, but they are not the first line of defense.

NegativeK
1 replies
16h42m

I spent a long time in QA.

Devs that don't test their own code are usually wasting the QA team's time with garbage. It also tends to cultivate (or is a symptom of) an environment where groups are throwing projects over the wall, so to speak, without tightly integrating QA into the process. This wastes significant amounts of time.

PH95VuimJjqBqy
0 replies
4h35m

that's exactly what was being described: moat building. It's political rather than effective.

mdavidn
0 replies
18h5m

Developers need to write tests so they understand how to structure the application in a manner that can be tested. The test engineer brings a fresh set of eyes and different expectations about how the application should work. That's valuable too.

VHRanger
1 replies
20h22m

You see that with devops often. When they are incentivized to block stuff instead of enabling the product, they become the roadblock team.

It's largely why Google has SREs instead of devops people.

mlrtime
0 replies
4h12m

SREs, through error budgets, can also block stuff. They just use a different tool and metric.

Also, the SREs should be knowledgeable enough to work on the fix, not just block.

Aachen
0 replies
3h42m

this is the internet, where saying a thing doesn't make it true.

have at your claims good sir.

ponector
5 replies
20h25m

I've seen developers writing tests with no assertions, or with assertTrue(true). Always green, with 100% coverage!
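A contrived sketch of the pattern: the function gets executed, so every line counts as covered, but nothing about the result is ever checked.

    def apply_discount(price, percent):
        return price - price * percent / 100

    def test_apply_discount():
        apply_discount(100, 10)  # return value silently discarded
        assert True              # the assertTrue(true) move: always green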

The same people have been asking: why should I write tests if I can write new features?

And then one senior QA comes and destroys everything.

Once I found that if I pressed F5 50 times within a minute, the backend would go OutOfMemory while spinning requests to the database.

feoren
2 replies
19h42m

> I've seen developers writing tests with no assertions

This can be OK if the code executing without throwing exceptions is itself testing something - for instance, if you have a lot of assertions written directly into the code as pre- or post-conditions. But I'm guessing that wasn't the case here.
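For concreteness, the pattern being described looks something like this (a sketch with invented names): the invariants live inside the function, so a test that merely runs it without an AssertionError is checking something real.

    def allocate(total: int, weights: list[float]) -> list[int]:
        assert total >= 0 and weights, "precondition"
        shares = [int(total * w / sum(weights)) for w in weights]
        shares[0] += total - sum(shares)  # hand rounding remainder to the first share
        assert sum(shares) == total, "postcondition: nothing lost to rounding"
        return shares

    def test_allocate_executes_cleanly():
        # No assertion here on purpose: the pre/postconditions inside
        # allocate() are what this test exercises.
        allocate(100, [0.5, 0.3, 0.2])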

WrongAssumption
1 replies
16h29m

Why would it be OK not to test that the assertions get triggered when the conditions are bad? How would you verify that the assertions are correct with just a happy path? If you run code with pre/post assertions and then remove all the assertions, the same test will continue to pass.

feoren
0 replies
20m

Agreed: such tests are not sufficient to test the pre- and post-conditions themselves, and are definitely happy-path focused. But such tests may be sufficient to test the happy-path code itself (which the in-code checks are also testing). It would be ridiculous to have an entire test suite with no assertions for these reasons, but it's not ridiculous (in isolation) to see some unit tests with no assertions. In theory -- and this is not advice: theory and practice are different -- but in theory, if you've absolutely air-tightly tested the "happy path", then you've basically proven no other paths occur (short of cosmic rays).

But you could actually make this same argument about unit tests with assertions: do you have any test that those assertions are written correctly? Do you unit test your unit tests? Pre- and post-conditions living "in situ" are the equivalent thing to "control" and test assertions (respectively) living in your unit tests. Would you be more comfortable if the in-situ checks were simply cut/pasted into the unit test? I wouldn't be!

kcb
1 replies
20h3m

That's why the dev team's tests and the QA team's tests are not mutually exclusive.

ponector
0 replies
19h25m

Testing is a continuous, multilayered, multistage process.

Testing should start before the first developer writes the first line of code for the project. Architecture blueprints and the set of requirements should be tested as early as possible. But that is not what happens in real life, only in books. In real life, the PM will bring in a cheap contractor from India one month before the target release date.

mikestew
5 replies
20h24m

> I can't count how many products I've seen with 100% code coverage that objectively, quantifiably don't work.

That’s because code coverage doesn’t find the bugs that result from code you didn’t write, but should have. Code coverage is but one measure, and to treat it as the measure is folly.

(But, yes, I have heard a test manager at a large software company we’ve all heard of declare that test team was done because 100% coverage.)

thaumasiotes
4 replies
13h32m

It's more because code coverage isn't measured correctly. What you want is branch coverage.

If you have something like this:

    if condition1:
        do_something1()
    if condition2:
        do_something2()
    if condition3:
        do_something3()

There are 8 possible paths for the code to follow here, but you can cover 100% of lines of code in one test. If your code coverage measure tells you that testing the case where conditions 1, 2, and 3 are all true achieves 100% coverage, your measure is worthless. Covering this code cannot be done in fewer than eight tests.
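Enumerating the paths mechanically is cheap once you see them; a sketch with pytest parametrization (the stand-in function mirrors the snippet above):

    import itertools
    import pytest

    def run_feature(c1, c2, c3):
        # Stand-in for the three independent branches above.
        steps = []
        if c1:
            steps.append("something1")
        if c2:
            steps.append("something2")
        if c3:
            steps.append("something3")
        return steps

    # itertools.product([True, False], repeat=3) yields all 8 paths.
    @pytest.mark.parametrize("c1, c2, c3", itertools.product([True, False], repeat=3))
    def test_every_path(c1, c2, c3):
        expected = [s for s, on in [("something1", c1), ("something2", c2),
                                    ("something3", c3)] if on]
        assert run_feature(c1, c2, c3) == expected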

alternatex
3 replies
12h37m

I think most code coverage analysis tools are smart enough to consider flow control statements. At least when it comes to C# ones.

fho
1 replies
12h29m

They do, but that's not the point GP made. An example of how this could fail: the first branch sets up something that branch two uses. Two tests are written, one that takes branches 1 and 2 and one that takes neither. That's 100% coverage, and even the non-happy path was tested.

But in reality, if those branches have any interaction, you would need to write a test case for every combination of branches being run and not being run - eight in all.

thaumasiotes
0 replies
7h16m

Well, again, eight is a lower bound. If the software uses any values beyond the bare minimum (here) of three separate booleans, you're likely to need more.

Take this simplified code:

    if condition1:
        do_something()

You need one test for code coverage and two for branch coverage, but if there's a bug in the code as written you'll find that you actually need more. Assume that we want to run do_something() when, and only when, condition2 is true.

Now we have four cases:

- condition1 && condition2: do_something() runs, which is correct.

- condition1 && !condition2: do_something() runs, which is incorrect.

- !condition1 && !condition2: do_something() does not run, which is correct.

- !condition1 && condition2: do_something() does not run, which is incorrect.

If you happened to write tests for the first and third of those options, your tests will make it look like your code works, and your (official) coverage metrics will look comprehensive. But because your coverage metrics (in reality) are actually terrible, your code is secretly not working.

The big problem we've just run into is that the number of test cases we need depends on the number of bugs in our code. If the goal of writing tests is to prove that we don't have bugs, this is a terrible result; it means our goal is fundamentally impossible to achieve. We don't know whether we have bugs, so we don't know how many tests we need.

thaumasiotes
0 replies
11h32m

Why do you think that? It isn't true.

supportengineer
4 replies
21h38m

I've seen QA/QE greatness and it was similar to how you describe. A different chain of command for deciding if releases are certified for production. Different incentive structures as well.

Not to mention, at one recent employer, the QE team wrote an enormous amount of code to perform their tests - it was more LOC than the modules being tested/certified.

eitally
1 replies
20h57m

I had a team like that once. It was glorious. And ultimately, I'm convinced it led to overall faster development cycles because the baseline code quality & documentation was so much better than it would have been without such a great QA manager. The QA team, of course, was also technical -- mostly with SWE backgrounds -- and they were primarily colo'd in the same office as the dev team. I still remember the epiphany everyone had one planning cycle when it was mutually understood that by generally agreeing to use TDD, the QA team could participate actively in the real engineering planning and product development process.

... Then I left and my CIO let go the onshore QA team in favor of near term cost savings. Code quality went way down and within a year or two several apps needed to be entirely rewritten. Everything slowed down and people started pointing fingers, and before you knew it, it was time for "cloud native rearchitecting/reengineering" which required an SI to come in with "specialists".

MichaelZuo
0 replies
6h30m

I've heard this enough times that I'm becoming convinced there's a lot of 'reinventing the wheel' busywork going on in the industry.

And not even superior wheels, lower quality, more fragile wheels by clumsier wheel designers.

munificent
0 replies
19h16m

> It was more LOC than the modules being tested/certified.

So much code. I hope they had a QAQA team to test all that.

esafak
0 replies
20h49m

So how was QA incentivized?

alkonaut
2 replies
20h59m

I love my really thorough QA’s. Yes it’s an antipattern to let me as a dev lean too much on them catching what I won’t. But where I dread even running the code for a minute, they enjoy it. They take pride in figuring out edge cases far beyond any spec. They are definitely worth their weight in gold. It lets developers have confidence when changing things in the same sense a good type system does. For some classes of very interactive apps (e.g games) having unit and integration tests just doesn’t cover the parameter space.

ponector
1 replies
20h11m

People here are talking about skillful QA being worth their weight in gold.

Unfortunately the people in the industry who have actual power in planning budgets don't think so. The article is right: QA engineers are now viewed as janitors - no one respects them; better to outsource to a cheap location.

bluGill
0 replies
18h13m

Depends on the industry. Some care about quality; they respect QA.

hedora
1 replies
15h46m

The best approach I’ve seen is to do all of the above.

- Have QA run pessimal versions of real use cases. Trying to sell a word processor to lawyers? Format the entire US legal code and a bajillion contracts in it, then duplicate it 10x and start filing usability/performance bugs.

- Have the engineers test everything with randomly generated workloads before committing. Run those tests nightly, and fix all the crashes / failures.

- Have Product Management (remember them?) work with marketing and sales to figure out what absolutely has to ship, and when.

Make sure it only takes one of the above three groups to stop ship, and also to stop non-essential development tasks.

Arainach
0 replies
9h31m

The devil is in the details of "essential development tasks".

An uncountable number of products have died or devolved because "we don't have time to do it that way, put in the quick fix"

fizx
1 replies
21h2m

How often did you release?

SkyPuncher
0 replies
19h13m

That's my question as well. Bug free code is not the goal. Valuable product is.

turok
0 replies
7h1m

We try to do this, QA owns deployment to production and has the final say if a feature needs a re-write. The almost adversarial incentives of dev and QA are why it works. Dev wants to close ticket, QA wants the feature to work, Product wants to tick a box, all collaborate so that the closed ticket delivers a working feature and not just one of those three things.

bluGill
0 replies
18h16m

Automated tests take manual testing from 60% of the time it takes to develop quality software down to 50%. Valuable, but not a silver bullet that replaces manual tests.

pjsg
38 replies
21h3m

At the start of my career (late 70s), I worked at IBM (Hursley Park) in Product Assurance (their version of QA). We wrote code and built hardware to test the product that was our area of responsibility (it was a word processing system). We wrote test cases that our code would drive against the system under test. Any issues we would describe in general terms to the development team -- we didn't want them to figure out our testcases -- we wanted them to fix the bugs. Of course, this meant that we would find (say) three bugs in linewrapping of hyphenated words and the use of backspace to delete characters, and then the development team would fix four bugs in that area but only two of the actual bugs that we had found. This meant that you could use fancy statistics to estimate the actual number of bugs left.

When I've worked for organizations without QA teams, I introduce the concept of "sniff tests". This is a short (typically 1 hour) test session where anybody in the company / department is encouraged to come and bash on the new feature. The feature is supposed to be complete, but it always turns out that the edge cases just don't work. I've been in these test sessions where we have generated 100 bug tickets in an hour (many are duplicates). I like putting "" into every field and pressing submit. I like trying to just use the keyboard to navigate the UI. I run my system with larger fonts by default. I sometimes run my browser at 110% zoom. It used to be surprising how often these simple tests would lead to problems. I'm not surprised any more!
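Probes like these are mechanical enough to script as well; a rough sketch of the empty-field-and-keyboard pass using Playwright's Python API (the URL, selectors, and the crude "did it blow up" check are all placeholders):

    # A scripted version of the "put '' into every field and submit" probe.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        page.goto("https://example.com/new-feature")   # placeholder URL
        for field in page.query_selector_all("input, textarea"):
            field.fill("")                # blank out every field
        page.keyboard.press("Tab")        # keyboard-only navigation
        page.keyboard.press("Enter")      # submit without touching the mouse
        # Crude check that the app didn't fall over on empty input.
        assert "500" not in page.title() and "Traceback" not in page.content()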

wrs
16 replies
20h43m

At Microsoft back in the day, we called those “bug bashes”, and my startup inherited the idea. We encouraged the whole company to take an afternoon off to participate, and gave out awards for highest impact bug, most interesting bug, etc.

hornban
14 replies
20h31m

This is a bit of an aside, but I have a question that I'd like to ask the wider community here. How can you do a proper bug-bash when also dealing with Scrum metrics that result in a race for new features without any regard for quality? I've tried to do this with my teams several times, but ultimately we're always coming down to the end of the sprint with too much to do to implement features, and so anybody that "takes time off" to do bug bashing looks bad because ultimately they complete fewer story points than others that don't.

Is the secret that it only works if the entire company does it, like you suggest?

And yes, I completely realize that Scrum is terrible. I'm just trying to work within a system.

Shaanie
6 replies
19h35m

That's not a problem with Scrum, it's a problem with your team. If you're doing a bug bash every sprint, then your velocity is already including the time spent on bug bashes. If it's not in every sprint, you can reduce the forecast for sprints where you do them to account for it (similar to what you do when someone is off etc).

If you're competing within the team to complete as many story points as possible that's pretty weird. Is someone using story points as a metric of anything other than forecasting?

JohnFen
4 replies
19h11m

> Is someone using story points as a metric of anything other than forecasting?

Very nearly every company I've worked at that uses Scrum uses story points, velocity, etc., as a means of measuring how good you or your team are. Forecasting is a secondary purpose.

mb7733
1 replies
17h44m

Don't teams assign points to their own tickets? So how could one compare the points between teams?

jSully24
0 replies
16h11m

Yes. But many senior leaders just see a number, so it must also be a metric you can use for measurement. They do not understand its real use.

I picture a construction company counting the total inches / centimeters each employee measured every day, then at the end of the year firing the bottom 20% of employees by total units measured in the last 12 months.

zaphirplane
0 replies
11h2m

Sounds easy to game. Click a button to foo the bar is <pinky> one million points

bolobo
0 replies
7h3m

Ahh the good old "Waterfall Scrum"

pierat
0 replies
19h0m

That's not a problem with Scrum, it's a problem with your team.

I've seen that justification time and again, and it feels disingenuous every time it's said. (Feels like a corollary to No True Scotsman.)

I've also seen scrum used regularly, and everywhere I've seen it has been broken in some fashion. Enough anecdata tells me that indeed Scrum, as stated, is inherently broken.

ultrasaurus
0 replies
18h52m

Ah the classic: How do I improve quality in an org 'without any regard for quality'? :)

But assuming that everyone cares about quality (I know, a big leap), what has worked for me is tagging stories as bugs/regressions/customer-found-this and reporting on time spent. If you're spending too much time fixing bugs, then you need to do something about it. New bugs in newly written code are faster to fix, so you should be able to show that bug bashes make that number go down quarter over quarter, which contributes to velocity going up.

Alternately (and not scrum specific) I've had success connecting a CSM/support liaison to every team. Doesn't give you a full bug bash, but even one outside person click testing for 20m here and there gets you much of the benefit (and their incentives align more closely with QA).

m4rtink
0 replies
20h22m

Seems like another data point suggesting sprints don't make sense in real-world projects?

duderific
0 replies
19h15m

I'm kind of in the same boat re story points and Scrum metrics, but sometimes we can get management buy-in to create a ticket to do this sort of thing, if it's seen as high value for the business.

debatem1
0 replies
20h25m

The team with the lowest bug bash participation this week is the victim, er, host of next week's bug bash.

bolobo
0 replies
7h8m

Why are you putting the blame on Scrum if you don't even implement it? I did Scrum at a previous company and it worked fine. Nobody looked at the story points except the devs during planning. We had an honest discussion with the product owner every time and did find the time to address tech debt.

It wasn't perfect, but it worked well.

Granted, it required very specific management, devs with the right mindset, and constraints on the kinds of projects that could be done (anything customer-facing with a tight deadline was off, for instance; we used it for the internal infra). So I don't see how you would build a plane at Boeing with Scrum, for instance. Or anything that requires very tight coupling of many teams (or hardware).

But for us (60 devs in a company of 200), SaaS, it worked great.

bigbillheck
0 replies
20h13m

Only assign points based on an (n-1)-day sprint instead of an n-day one.

JCharante
0 replies
19h27m

> because ultimately they complete fewer story points than others that don't do it?

Solution: don't measure story points

jSully24
0 replies
16h18m

We do a similar thing but call it a bug hunt.

Not only do we uncover bugs, it's a great way to get the whole company learning about the new things coming, and for the product team to get unfiltered feedback.

pavel_lishin
5 replies
20h47m

> When I've worked for organizations without QA teams, I introduce the concept of "sniff tests". This is a short (typically 1 hour) test session where anybody in the company / department is encouraged to come and bash on the new feature.

We call those bug-bashes where we work, and they're also typically very productive in terms of defects discovered!

It's especially useful since during development of small features, it's usually just us programmers testing stuff out, which may not actually reflect how the end users will use our software.

steveBK123
3 replies
20h44m

A good QA person is basically a personification of all the edge cases of your actual production users. Our good QA person knew how human users used our app better than the dev or product team. It was generally a competition between QA & L2 support as to who actually understood the app best.

The problem with devs testing their own & other devs' code is that we test what we expect to work, in the way we expect the user to use it. This completely misses all sorts of implementation errors and edge cases.

Of course the dev tests the happy path they coded.. that's what they thought users would do, and what they thought users wanted! Doesn't mean devs were right, and frequently they are not..

fatnoah
1 replies
3h19m

It was generally a competition between QA & L2 support as to who actually understood the app best.

So true!

steveBK123
0 replies
3h15m

And to clarify, specifically because those who haven't experienced it don't understand it...

The uses of your app as intended by the authoring developers never match the uses of your app out in the wild in the hands of human users.

Over time, power users develop workflows that may go unknown by dev/product/management and are only well understood by QA / L2 support.

The older the app, the greater the divergence.

justinator
0 replies
20h26m

This dude gets it.

danny_taco
0 replies
19h59m

Maybe we work in the same company. I'd like to add that usually the engineer responsible for the feature being bug-bashed is also responsible of refining the document where everyone writes the bugs they find since a lot are duplicates, existing bugs, or not bugs at all. The output is then translated into Jira to be tackled before (or after) a release, depending on the severity of the bugs found.

JonChesterfield
3 replies
19h36m

This meant that you could use fancy statistics to estimate the actual number of bugs left.

That's very clever. Precise test case in QA plus vague description given to dev. Haven't seen it before, thank you for sharing that insight.

bluGill
1 replies
18h22m

If there is a precise test case, automate it. The real value of manual tests is when they explore to find variations you didn't think of. Your manual tests should be of the form "explore X".

JonChesterfield
0 replies
1h51m

Yes, automated tests are better than manual. The subtle point I'd missed is to consider not giving the dev team your automated test to improve the odds they fix the general pattern as opposed to your point instance.

The statistical side to it is really interesting too, still thinking about that.

dredmorbius
0 replies
16h29m

Generally, "The German Tank Problem":

<https://en.wikipedia.org/wiki/German_tank_problem>

There are similar methods used in estimating wildlife populations, usually based on catch-release (with banding or tagging of birds or terrestrial wildlife) or repeat-observation (as with whales, whose fluke patterns are distinctive).
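
For a concrete feel, here is a toy Python sketch of both estimators applied to bug counting; all figures are invented for illustration:

  # Capture-recapture: testers A and B hunt bugs independently,
  # finding n1 and n2 bugs, with `overlap` reported by both.
  def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
      if overlap == 0:
          raise ValueError("no overlap; the estimate is unbounded")
      return n1 * n2 / overlap

  # German tank: estimate N from the largest of k "serial numbers"
  # sampled without replacement (minimum-variance unbiased estimator).
  def german_tank(max_serial: int, k: int) -> float:
      return max_serial * (1 + 1 / k) - 1

  # A files 40 bugs, B files 30, and 20 reports are duplicates of each
  # other: roughly 60 bugs exist in total. Since 40 + 30 - 20 = 50 are
  # known, about 10 remain unfound.
  print(lincoln_petersen(40, 30, 20))   # 60.0
  print(german_tank(60, 4))             # 74.0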

krisoft
2 replies
19h13m

I've been in these test sessions where we have generated 100 bug tickets in an hour.

Is that like… useful to anyone? Especially if they are duplicates. It feels to me that 10 different bugs is enough to demonstrate that the feature is really bad; after that you are just kinda bouncing the rubble.

notpachet
0 replies
14h12m

As noted in the post, one characteristic of a healthy QA environment at your work is how effectively you triage bugs. That includes detecting duplicates. One big QA smell for me is opening a team's bug backlog and realizing that there's a shitload of dupes in there, because it means that no one really looked at them in detail.

NegativeK
0 replies
16h53m

It's "free" QA. And presumably you'll do it again later until it's better.

jxramos
2 replies
20h58m

That's a very interesting strategy, hiding the source of the automated tests from the developers. I can see that shifting the focus away from just disabling the test or catering to the test, etc. I'll have to think about this one; there are some rich thoughts to meditate on here.

ansible
1 replies
19h36m

It is an interesting approach I hadn't heard of before. For complex systems though, often reproducing the bug reliably is a large part of the problem. So giving the developers the maximum information is necessary.

Any time a "fix" is implemented, someone needs to be asking the right questions. Can this type of problem occur in other features / programs? What truly is the root cause, and how has that been addressed?

semireg
0 replies
14h23m

Wow, less IS more. Hear me out.

How do we measure “more” information? Is it specificity or density?

Because here, assuming they can reproduce, keeping the information fuzzy can make the problem space feel larger. This forces a larger mental map.

jrockway
2 replies
19h57m

I've always been impressed by hardware QA test teams I've worked with. On Google Fiber, they had an elaborate lab with every possible piece of consumer electronics equipment in there, and would evaluate every release against a (controlled) unfriendly RF environment. ("In version 1.2.3.4, the download from this MacBook Pro while the microwave was running was 123.4Mbps, but in version 1.2.4.5, it's 96.8Mbps." We actually had a lot of complexity beyond this that they tested, like bandsteering, roaming, etc.) I was always extremely impressed because they came up with test cases I wouldn't have thought of, and the feedback to the development team was always valuable to act on. If they're finding this issue, we get pages of charts and graphs and an invite to the lab. If a customer finds this issue, it just eats away at our customer satisfaction while we guess what could possibly have changed. Best to find the issue in QA or development.

As for software engineers handling QA, I'm very much in favor of development teams doing as much as possible. I often see tests bolted on to the very end of projects, which isn't going to lead to good tests. I think that software engineers are missing good training on what to be suspicious of, and what best practices are. There are tons of books written on things like "how to write baby's first test", but honestly, as an industry, we're past that. We need resources on what you should look out for while reviewing designs, what you should look out for while reviewing code, what should trigger alarm bells in your head while you're writing code.

I'm always surprised how I'll write some code that's weird, say to myself "this is weird", and then immediately write a test to watch it change from failing to passing. Like times when you're iterating over something where normally the exit condition is "i < max", but this one time, it's different, it actually has to be "i <= max". I get paranoid and write a lot of tests to check my work. Building that paranoia is key.
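
For concreteness, a minimal sketch of that reflex in Python; the function and its edge cases are hypothetical stand-ins:

  # An inclusive-feeling calculation is a classic off-by-one trap, so
  # it earns tests at the exact edges the moment it feels "weird".
  def pages_needed(items: int, per_page: int) -> int:
      return (items + per_page - 1) // per_page   # ceiling division

  def test_pages_needed_boundaries():
      assert pages_needed(0, 10) == 0    # empty input
      assert pages_needed(10, 10) == 1   # lands exactly on the boundary
      assert pages_needed(11, 10) == 2   # one item past the boundary

  test_pages_needed_boundaries()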

I like putting "" into every field and pressing submit.

Going deeper into the training aspect, something I find very useful are fuzz tests. I have written a bunch of them and they have always found a few easy-to-fix but very-annoying-to-users bugs. I would never make a policy like "every PR must include a fuzz test", but I think it would be valuable to tell new hires how to write them, and why they might help find bugs. No need to have a human come up with weird inputs when your idle CI supercomputer can do it every night! (Of course, building that infrastructure is a pain. I run them on my workstation when I remember and it interests me. Great system.)
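
A stdlib-only sketch of such a fuzz loop (the target parser is hypothetical; a real setup would reach for a coverage-guided fuzzer or a property-testing library):

  import random
  import string

  def parse_quantity(text: str) -> int:
      """Hypothetical target: parse '12x'-style quantity strings."""
      return int(text.rstrip("x"))

  def fuzz(iterations: int = 10_000, seed: int = 0) -> None:
      rng = random.Random(seed)   # fixed seed makes failures replayable
      for _ in range(iterations):
          s = "".join(rng.choice(string.printable)
                      for _ in range(rng.randint(0, 12)))
          try:
              parse_quantity(s)
          except ValueError:
              pass                  # the documented failure mode is fine
          except Exception as exc:  # anything else is a bug to file
              print(f"bug: input {s!r} raised {exc!r}")

  fuzz()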

At the end of the day, I'm somewhat disappointed in the standards that people set for software. To me, if I make something for you and it blows up in your hands... I feel really shitty. So I try to avoid that in the software world by trying to break things as I make them, and ensure that if you're going to spend time using something, you don't have a bad experience. I think it's rare, and it shouldn't be; it should be something the organization values from the top to the bottom. I suppose the market doesn't incentivize quality as much as it should, and as a result, organizations don't value it as much as they should. But wouldn't it be nice to be the one software company that just makes good stuff that always works and doesn't require you to have 2-week calls with the support team? I'd buy it. And I like making it. But I'm just a weirdo, I guess.

ryan-duve
1 replies
9h14m

Going deeper into the training aspect, something I find very useful are fuzz tests.

Could you share some details of fuzz tests that you've found useful? I tend to work with backend systems and am trying to figure out whether they will still be useful in addition to unit and integration tests.

avidiax
0 replies
6h27m

Fuzz tests are most useful if they are run continuously/concurrently with development. Doing it that way, a change or decision that causes a fuzz test failure hasn't been built upon yet. Imagine building an earthquake resistant house on a shaker table vs. putting the 100% completed house on the same shaker table.

Doing fuzz testing at the end leads to a lot of low priority but high cost bugs being filed (and many low-cost bugs as well).

The utility of it is quite clear for security bugs. It requires low effort to find lots of crashes or errors that might be exploitable. For development in general, it tends to identify small errors or faulty architectural or synchronization decisions very early, while they are still easy to repair.

lulznews
0 replies
9h40m

Any issues we would describe in general terms to the development team -- we didn't want them to figure out our testcases

I’m sorry but this is just lol. Did the devs return the favor by creating bugs and seeing if your team could find them?

kristjansson
0 replies
14h44m

At $website, we used to call that “swarm”. All features had to go through swarm before being released, and all product managers were made to participate in swarm.

Its demise was widely celebrated.

godelski
20 replies
22h3m

The main problem with QA teams is the same problem with IT teams or even management. If they are doing their jobs well they appear to be doing nothing.

This often creates a situation where people need to "justify" their jobs. Usually this happens due to an over-reliance on metrics (see Goodhart's Law) rather than understanding what the metrics are proxying and what the actual purpose of the job is. A bad QA team is one that is overly nitpicky, looking to ensure they have something to say. A good QA team simultaneously checks for quality and trains employees to produce higher quality.

I do feel like there is a lack of training going on in the workforce. We had the "95%-ile isn't that good"[0] post on the front page not long ago, and it is literally saying "It's easy to get to the top 5% of performers in any field because most performers don't actively train or have active feedback." It's like the difference between being on an amateur sports team vs a professional one. Shouldn't businesses be operating like the latter? Constantly training? This should make hiring be viewed differently too, as in "can we turn this person into a top performer" rather than "are they already", because the latter isn't as meaningful as it appears when your environment is vastly different from the one where success was demonstrated.

[0] https://news.ycombinator.com/item?id=38560345

JohnMakin
9 replies
21h15m

This often creates a situation where people need to "justify" their jobs.

On DevOps teams I see this constantly. Usually the best-compensated or most senior "Ops" guy, or whatever they're called at the company, spends a lot of his time extinguishing fires that were entirely of his own creation or incompetence, which makes it look like he's "doing something." You automate away the majority of the toil there and this person doesn't have a job, yet this pattern is so insanely common. There's little incentive to do it right when doing it right means management thinks you sit there all day and do nothing.

tstrimple
6 replies
20h44m

Which is one reason DevOps teams don't make sense. DevOps is a skill developers need to have. It needs to be embedded within the development team, not made the responsibility of some other team that focuses only on "DevOps" work. You create the build and deployment pipelines and move on to other project work. If you give someone a role and say their job is to do "DevOps", they will HAVE to invent things to do, because that's such a small part of a project and once implemented doesn't need a ton of maintenance.

ponector
5 replies
19h51m

It is not about DevOps. In every organization, if you do your work well and have no fuckups, you will not be recognized and promoted. But if you are a hero-firefighter, managers will love you and help with promotion.

Because visibility is key! Key to everything in corporate life. If your work is not visible to managers, you are doing nothing.

tstrimple
1 replies
17h1m

Of course visibility is key. What other kind of option is there?

“We have no idea what you did this year and no one has any idea of the value you’re providing, but what the hell. Here’s a huge bonus!”

That doesn’t seem a little insane to you?

godelski
0 replies
14h3m

What other kind of option is there?

Placing value based on the work assigned? Obviously insanely easier said than done. But I think we should embrace the fuzziness a bit. Your manager should have a very good idea of this. If they don't, then they're the ones that should be let go because this is a significant aspect of their job. (may be learned through indirect methods)

That doesn’t seem a little insane to you?

Honestly, a lot of business practices and economics sound insane to me. So do a lot of the alternative suggestions (especially the latter), since they tend not to address the underlying issues but act as bandaids. The ones that don't sound insane are often boring and very reasonable, but I think we've established that that's considered undesirable when you're evaluated by visibility. I think this is probably a significant part of the negative feedback loop.

JohnMakin
1 replies
16h55m

The fortunate thing is there are measurable metrics you can hit and improve upon as an X-Ops team, but implementing those takes buy-in from the org or enough freedom to go rogue and do it on your own.

godelski
0 replies
14h9m

The fortunate thing is there are measurable metrics you can hit and improve upon as an X-Ops team

Question(s):

- What are the metrics?

- How aligned are the metrics with the actual goal?

- What isn't covered by the metrics?

- How is what's not covered by metrics evaluated?

- Can what's not currently covered by metrics theoretically be covered by some potentially unknown metric? (best guess)

godelski
0 replies
14h12m

It is not [just] DevOps

Definitely agree. In a sister comment[0] I mention a concept I've been calling "Goodhart's Hell" for a more general term. But I think what you are specifically mentioning is sometimes laughably called "Loud Laboring"[1]. I think this all falls under the broader umbrella of metric hacking and thus Goodhart's Law.

Idk why, but it really does seem like metric hacking is extremely pervasive in our modern society, and can be found nearly everywhere. What upsets me the most is that it too is found in the sciences. There also appears to be a strong correlation between the popularity (or hype) of a field and metric hacking.

[0] https://news.ycombinator.com/item?id=38647582

[1] https://news.ycombinator.com/item?id=37147707 || https://www.cnbc.com/2023/08/09/forget-quiet-quitting-loud-l...

slv77
0 replies
20h29m

Sometimes people want to be firefighters to protect the people they serve, but others love how being in a crisis makes them feel alive. The latter are why arson investigators look first at firefighters when doing an arson investigation. Some of them need the fires and will start them just to fight them.

There are a lot of these types that gravitate to crisis management roles like DevOps.

godelski
0 replies
20h46m

That's a fantastic example of what I'm trying to describe. It's kinda like thinking hours worked are directly proportional to widgets produced. There certainly are jobs and situations where this relationship holds (you can't sell widgets if you aren't manning the store, or produce turn-the-crank widgets if the crank isn't being turned). But modern-world widgets don't work that way and are more abstract. Sometimes fewer hours create more widgets, sometimes the reverse. Widget production is now stochastic, especially in fields where creativity and brain power are required. (Using widgets for generalization -- in the economic sense -- insert any appropriate product, or ask and I'll clarify.)

philk10
5 replies
21h58m

When I work with new devs I can often trip them up with basic tests of double-clicking, leading spaces, or using the back button on Android. They then learn these, and from then on those issues don't appear (well, OK, it might take a couple of times of a ticket being rejected because of these, but they do quickly learn all my tricks). I don't get measured on bugs found, so there's no pressure on me to find stupid bugs just to boost my figures.

godelski
2 replies
20h43m

I don't get measured on bugs found so there's no pressure on me to find stupid bugs just to boost my figures.

Sounds like the right incentive structure. If you don't mind, how are you judged? Do you feel like the system you're in is creating the appropriate incentives and actually being effective? Certainly this example is but I'd like to know more details from an expert so I can update my understanding.

philk10
1 replies
20h11m

The system is really effective. I wanted to work at a place where the cliche "everyone cares about quality" is actually true, and I found it - devs test, designers test, I test, and the customer has the chance to test the latest build every 2 weeks so that we can check that our quality checks are aligning with theirs. It gets to be a game of 'can the devs get it past my checks' and 'can I find new ways to trip them up', which builds up confidence in each other's skill levels.

godelski
0 replies
18h47m

That's cool to hear. I also like the periodic table on your team's site haha. Looks like great philosophy

hotpotamus
1 replies
21h38m

I had a buddy who liked to do that kind of thing. I think his favorite trick was just to enter nothing in a form and hit enter to see what happens. It's probably his favorite because it deleted the database on some little thing I wrote at one point and we got a good laugh over it and I got a good lesson out of it.

philk10
0 replies
21h12m

Yep, I start off by entering nothing, then just spaces, then special characters and then finally get to entering some typical data
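
That probing order translates almost directly into a parametrized test; here is a toy Python sketch, with a hypothetical handler standing in for the real form:

  import pytest

  def submit_name(name: str) -> str:
      """Toy handler: reject blank-ish names, accept the rest."""
      return "rejected" if not name.strip() else "accepted"

  EDGE_CASES = [
      "",                  # nothing
      "   ",               # just spaces
      "'; DROP TABLE --",  # special characters
      "Jane Doe",          # finally, some typical data
  ]

  @pytest.mark.parametrize("value", EDGE_CASES)
  def test_submit_never_crashes(value):
      # The exact outcome matters less than every edge case having a
      # deliberate, defined behaviour instead of a crash.
      assert submit_name(value) in ("accepted", "rejected")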

mattgreenrocks
2 replies
21h43m

Shouldn't businesses be operating like the latter? Constantly training?

There's a rampant cultural mind-virus that argues that the 95th percentile somehow takes tons of work (rather than a lack of unforced mistakes), so everyone just writes it off. It's on full display at this very site. Just look at any post involving software quality, and read a bunch of comments suggesting widespread apathy from engineers.

Obviously every situation is different, but people seem to be pretty okay with relinquishing agency on these things and just going along with whatever local maxima their org operates in. It's not totally their fault, but they're not blameless either.

godelski
1 replies
20h52m

Yeah, it is weird that people believe work and quality scale linearly, considering how well known Pareto/power-law distributions are. These distributions are extremely prolific too. I mean, we even codify that sentiment in the 80/20 rule, or say that 20% of time is writing code and 80% is debugging it. What's interesting is this effect is scalable. You see it when comparing countries by population, but the same distribution (general shape) exists when looking at populations of states/regions/cities (zooming in for higher resolution) for any given country, or even down to the street level.

Obviously every situation is different, but people seem to be pretty okay with relinquishing agency on these things and just going along with whatever local maxima their org operates in. It's not totally their fault, but they're not blameless either.

I agree here, to the letter. I don't blame low-level employees for maximizing their local optima. But there are two main areas (among many) that just baffle me. The first is when this is top-down: when a CEO and board are hyper-focused on benchmarks rather than on evaluation, unable to distinguish the two (benchmarks and metrics are guides, not answers). The other is when highly trained and educated people actively ignore the situation: over-relying on metrics, refusing to acknowledge that metrics are proxies, and neglecting the nuances they were explicitly trained to look for, the very things that meaningfully distinguish them from less experienced people. I've been trying to coin the term Goodhart's Hell to describe this more general phenomenon because I think it is a fairly apt and concise description. The phenomenon seems prolific, but I agree that the blame weighs more heavily on those issuing orders. Just as a soldier is not blameless for participating in a war crime, the ones issuing the orders will receive harsher critique due to the imbalance of power/knowledge.

Ironically, I think we need to embrace the chaos a bit more. But that is rather in recognizing that ambiguity and uncertainty are inescapable, not in abandoning any form of metric altogether. I think modern society has gotten so good at measuring that we often forget that our tools are imprecise, whereas previously the imprecision was so apparent that it was difficult to ignore. One could call this laziness, but considering it's systematic, I'm not sure that's the right word.

pseudalopex
0 replies
13h24m

Over-relying on metrics, refusing to acknowledge that metrics are proxies, and neglecting the nuances they were explicitly trained to look for. I've been trying to coin the term Goodhart's Hell to describe this more general phenomenon because I think it is a fairly apt and concise description.

McNamara fallacy or quantitative fallacy.[1]

[1] https://en.wikipedia.org/wiki/McNamara_fallacy

bsder
0 replies
20h12m

Shouldn't businesses be operating like the latter? Constantly training?

"But then they'll leave for somewhere else for more money." </sarcasm>

Literally every company I have worked for. Meaningful training was always an uphill battle.

However, good training also requires someone good and broad in your technical ladder. They may not be the most up-to-date, but they need to be able to sniff out bullshit and call it out.

FAANG is no exception, either.

amlozano
13 replies
22h9m

This point is brought up in the article but I think it is at the real heart of the issue.

QA is almost always seen as a 'cost center' by the business and upper management. I have a hypothesis that you never ought to work in a department that is seen as a 'cost center'. The bonuses, the recognition, and the respect always go to the money makers. The cost center is the first place to get more work with fewer hands, get blamed for failures, and ultimately be fired when the business needs to slim down. I think the same thing applies to IT.

This spiral is why QA will always be a harder career than taking similar skills and being a developer. It self-reinforces: the best people get fed up and switch out as soon as they can.

serial_dev
4 replies
20h52m

Even as a developer (mobile app developer) I feel like one has to be careful not to work on "cost center" things.

Accessibility, observability, good logging, testing infrastructure improvements, CI/CD tweaks, stability, and fixing linting and analyzer issues are all important, but you will be rewarded for shipping features fast.

This year I spent too much time on the former because I felt like that's what the team and app needed, because nobody on the team prioritized these issues, and now I'll be sweating at the end-of-year performance reviews.

Now knowing this, I understand why the others didn't want to work on these items, so next year, I'll be wiser, and I'll focus on shipping features that get me the most visibility.

Sorry for the bugs in the app, but I need a job to pay my mortgage.

tstrimple
3 replies
20h21m

The purpose behind all those things you were pursuing (apart from accessibility) should have been to increase the rate at which the team is able to ship features. If your work on these items over the course of a year hasn't demonstrably improved delivery speed, then what value did it actually bring? If it has improved delivery speed and you can show evidence for that, why would you be nervous going into a review?

hj3939393d
2 replies
18h31m

demonstrably improved delivery speed

Thinking this can be reduced to a single metric is the blight of modern software (and business in general, I think). Mapping an individual change to improved delivery speed is in the vast majority of cases an impossible task and any decent developer knows this. It's management+ that wants simple easy metrics since they lack the deep understanding required to do their job well. Software development is - despite management's hopes - not line work. It's much more akin to R&D. The line work gets eaten up by AWS.

tstrimple
1 replies
17h9m

If you can’t prove your contributions are worthwhile, you’re not going to get recognition. People putting out fires get recognition because they are solving visible and urgent problems. If your work reduces the chances of those fires, it should be measurable otherwise what is the point? Do you honestly believe someone who has spent a year “improving processes” but cannot measure the impact of that year of work deserves glowing praise?

And it’s not that managers don’t understand developers and the work they do. It’s that a lot of developers don’t engage at all with what a business actually does. They are working at IKEA while trying to convince management to use the nice dovetail joints instead of that garbage dowel based assembly. Not only do dovetails look better, they are substantially stronger! All very true statements from a craftsman woodworker. But a complete failure to understand the business and how and why it operates as it does and the value they are expected to provide within it.

carbotaniuman
0 replies
6h33m

It's wild that in this thread we're still falling victim to the McNamara fallacy.

https://en.wikipedia.org/wiki/McNamara_fallacy

robofanatic
3 replies
21h5m

QA is almost always seen as a 'cost center' by the business and upper management

Well everything involved in making a product is seen as a cost, that includes the entire development team - QA, Developers, Devops, PM ....

rubidium
2 replies
20h44m

No. That’s not actually how most orgs break it down. R&D, marketing, and sales are “bringing in new business”, so they are profit centers. This means their budget grows with revenue. Manufacturing, QA, IT, and service are cost centers, so they get squeezed year-over-year even if revenue is flat.

jdlshore
0 replies
14h19m

It depends on the org. In my company, which is a SaaS-like company, all of product and engineering is a cost center, despite creating the product the company sells. It’s just the way they do their accounting.

itsdrewmiller
0 replies
14h18m

What software companies are not classifying QA as R&D?

perlgeek
0 replies
20h43m

I have a hypothesis that you never ought to work in a department that is seen as a 'cost center'

That's why I don't work in an IT department of a traditional business.

dclowd9901
0 replies
13h2m

I work on our frontend platform team and my perf work, CI design and process engineering definitely have a harder time getting recognition and promotions than folks who ship things that translate to dollars in the bank.

I don’t care though. I enjoy making things better and more robust. It makes my soul feel better. I’ll leave fucking things up to the cynics.

bumby
0 replies
20h52m

I agree with the 'cost center' sentiment, but I'll try to add some nuance from my experience.

1) Some organizations have come to really value what QA/QC brings to the table. From my experience, this seems to be more visible in manufacturing than software. I speculate this is because software is more abstract by its very nature and waste is harder to track.

2) The really good QAs are those who really believe in its mission, rather than those who are looking for the path of least resistance.

Both of those underscore the value lies in organizations and individuals who really buy-in to the QA ethos. There are lots of examples of both who are simply going through the motions.

NegativeK
0 replies
16h37m

I've worked in cost center groups almost all my life, and I wouldn't trade it for anything.

I'm fine with bonuses, etc. going to other groups. I'm paid well as a tech worker, and many of those jobs would make me absolutely miserable -- assuming I turned out to actually be good at them.

amtamt
9 replies
21h45m

The most conscientious employees in your organization are the most bitter. They see the quality issues, they often address them, and they get no recognition for doing so. When they speak up about quality concerns, they get treated like mouthbreathers who want to slow down. They watch the “move fast and break things” crowd get rewarded time after time, while they run around angrily cleaning up their messes. To these folks, it feels like giving a damn is a huge career liability in your organization. Because it is.

This is the bitter truth no one wants to acknowledge.

DBAs and infra are in the same boat as QA. The pendulum will swing back before too long, I hope.

sporedro
1 replies
18h19m

While QA and testing are important, I'm not sure you can convince the people only concerned about profits… I think the "release it fast and patch it later" concept is here to stay due to the internet being so accessible. Why bother spending tons of money and time when the users will just report the bugs and you can release updates over the internet for them to download? Ever since physical copies of video games and software were replaced mainly by downloads, it seems like patching is cheaper. Of course this leads to horrendous security issues, bad user experience, etc., but who cares as long as the guy on top is maximizing profits.

antupis
0 replies
8h4m

“Release it fast and patch it later” means you are just testing different things, e.g. is this product viable in the market? QA has its place, but you should not cram it into every corner of software development; at the same time, not everything should be "release fast and break things".

jabroni_salad
1 replies
19h23m

My idea of the pendulum swinging back is 'you build it you run it', personally. Don't like the oncall pager? don't make it ring.

hinkley
0 replies
19h6m

The hard part for most people is learning to tell the devil in a necktie to fuck off every time they try to sweet talk you into volunteering for that kind of pain. Also with no recognition or compensation.

All it takes is one person on my team to defect and support an untenable amount of tech debt, and everyone on my team has to pay for it.

bradleyjg
1 replies
16h37m

DBAs and Infra, are in the same boat as QAs. Pendulam will swing back in not so long time frame i hope.

Ultimately it’s up to the customers. Will they walk because of bugs and outages or stay because of shiny new features?

truculent
0 replies
12h18m

I’m not so sure. I think software trends in the last 10-15 years or so have been driven heavily by a small number of influential players (FAANG, mega-VCs).

An influx of dumb money like that can shape and distort the market and overwhelm the feedback loops that would otherwise give consumers influence.

Perhaps now that rates are up, the equilibrium changes, but I think it’s still easy to overestimate the number of “first movers” in an industry and the power of tacit or unconscious collusion.

mlrtime
0 replies
4h7m

This comment requires some context and nuance.

I've worked with conscientious engineers. Sometimes they are right, but their delivery mechanism is broken. Sometimes they are just in the wrong place. If we're building a PoC SaaS product, it really doesn't need the quality of an avionics microcontroller. All these trade-offs come with risk and cost, and good engineers need to know the difference.

esafak
0 replies
21h8m

No, just jump ship and let the damn company fail. Fail faster, haha! This is how we have nice things; when bad companies are not propped up.

deniscepko2
0 replies
10h56m

Yeah, this hit home so hard. I had to leave the startup I loved working at because they started doing all this release-fast crap. And somehow people think releasing fintech stuff fast and untested is fine.

mschuster91
4 replies
22h23m

Another part is that there is barely any training for QA people. Even your average CS course will only scratch the surface of the topic, usually with some prof droning on about Java unit tests on some really old version of Java and a testing framework just as old.

There are no "Software Quality Assurance" academic degrees, there's barely any research into testing methodologies, there's barely any commercial engagement in the space aside from test run environments (aka, selling shovels to gold diggers), and let's face truth, also in tooling. And everything but software QA is an even worse state, with "training" usually consisting of a few weeks of "learning on the job".

Basic stuff like "how does one even write a test plan", "how does one keep track of bugs", "how to determine what needs to be tested in what way (unit, integration, e2e)" is at best cargo-culted in the organization, at worst everyone is left to reinvent the wheel themselves, and you end up with 13 different "testing" jobs in a manually clicked-together Jenkins server, for one project.

Defect Investigation: Reproduction, or “repro”, is a critical part of managing bugs. In order to expedite fixes, somebody has to do the legwork to translate “I tried to buy a movie ticket and it didn’t work” into “character encoding issues broke the purchase flow for a customer with a non-English character in their name”.

And this would normally not be the job of a QA person, that's 1st level support's job, but outsourcing to Indian body shops or outright AI chatbots is cheaper than hiring competent support staff.

That also ties into another aspect I found lacking in the article: users are expected to be your testers for free, aka you sell bananaware. No matter if it's AAA games, computer OSes, phones, even cars...

wmichelin
1 replies
22h18m

To play devil's advocate here, I did not _really_ get training for my software engineering role. I got a little bit from 1-2 college courses, but the vast majority of my role I had to pick up on the job or on my own.

I can tell you, I definitely didn't get training for QA tasks, but here I am doing them anyways. It's just work that needs to be done.

mschuster91
0 replies
22h12m

To play devil's advocate here, I did not _really_ get training for my software engineering role

Yeah, and that is my point. It would be way better for the entire field of QA if there were at least a commonly agreed base framework of concepts and ways to do and especially to name things, if only because the lack of standardization wrecks one's ability to even get testers and makes onboarding them to your "in house standards" a very costly endeavour.

tester756
0 replies
19h1m

Another part is that there is barely any training for QA people

What training is in your opinion needed?

Swizec
0 replies
22h17m

There are no "Software Quality Assurance" academic degrees, there's barely any research into testing methodologies,

There's a lot of this actually. Entire communities of people working on software quality assurance. Practitioners in this space call their field "resilience engineering".

The field likes to talk a lot about system design. Especially in the intersection of humans and machines. Stuff like "How do you set up a system (the org) such that production bugs are less likely to make it all the way to users"

temuze
3 replies
21h41m

I strongly disagree.

I worked at a company with a world-class QA team. They were amazing and I can't say enough nice things about them. They were comprehensive and professional and amazing human beings. They had great attention to detail and they catalogued a huge spreadsheet of manual things to test. Engineers loved working with them.

However -- the end result was that engineers got lazy. They were throwing code over to QA while barely testing it themselves. They were entirely reliant on manual QA, so every build bounced back and forth several times before release. Sometimes, we had feature branches being tested for months, creating HUGE merge conflicts.

Of course, management noticed this was inefficient, so they formed another team dedicated to automated QA. But their coverage was always tiny, and they didn't have resources to cover every release, so everyone wanted to continue using manual QA for CYA purposes.

When I started my own company, I hired some of my old engineering coworkers. I decided to not hire QA at all, which was controversial because we _loved_ our old QA team. However, the end result was that we were much faster.

1. It forced us to invest heavily on automation (parallelizing the bejesus out of everything, so it runs in <15min), making us much faster

2. Engineers had a _lot_ of motivation to test things well themselves because there was no CYA culture. They couldn't throw things over a wall and wash their hands of any accountability.

We also didn't have a lack of end-to-end tests, as the author alludes to. Almost all of our tests were functional / integration tests that run on top of a docker-compose setup that simulated production pretty well. After all, are unit tests where you mock every data source helpful at all? We invested a lot of time in making realistic fixtures.
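
Roughly what one of those tests looks like, assuming a compose stack that publishes the API on localhost:8080 (the endpoint and payload here are invented):

  import requests

  BASE = "http://localhost:8080"   # served by the docker-compose stack

  def test_order_flow_against_real_services():
      # Realistic fixture: shaped like production traffic, awkward
      # bits included (non-ASCII names, zero-quantity lines).
      payload = {
          "customer": {"name": "Zoë Åström"},
          "lines": [{"sku": "ABC-1", "qty": 0}],
      }
      resp = requests.post(f"{BASE}/orders", json=payload, timeout=5)
      # No mocks: the HTTP contract is asserted against the real API.
      assert resp.status_code in (201, 422)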

Sure, we released some small bugs. But we never had huge, show stopping bugs because engineers acted as owners, carefully testing the worst-case scenarios themselves.

The only downside was that we were slower to catch subtle bugs not caught by Sentry, things like UX transition weirdness. But that was mostly okay.

Now, there is still a use case for manual QA -- it's a question of risk tolerance. However, most applications don't fit that category.

mixmastamyk
1 replies
21h8m

False dichotomy. Poor dev practice is not fixed by elimination of QA, but rather fixed by improving dev practice. The “five why’s” can help.

itsdrewmiller
0 replies
14h8m

If you're working without a net, you're going to be more careful. And 5 whys is not a particularly great overall practice. https://qualitysafety.bmj.com/content/26/8/671

jdlshore
0 replies
14h14m

This phenomenon is very well described by Elisabeth Hendrickson’s “Better Testing, Worse Quality” article.

robotnikman
3 replies
22h24m

Microsoft is probably the one case where this sticks out the most, at least for me anyways. Noticeably more bugs in updates since they dropped their QA team, in Windows as well as cloud products.

EvanAnderson
2 replies
22h15m

The revenue keeps rolling in so, clearly, they made the right business decision... >sigh<

I use a lot of MSFT software and services in the "day job". I wish there was some kind of consequence to them for their declining quality.

shiroiuma
0 replies
14h16m

The revenue keeps rolling in so, clearly, they made the right business decision... >sigh<

This is exactly the point I was going to make. Their stock price is doing great! So obviously they've done the right thing for their position in the market: the market has rewarded them for not wasting money on QA and just letting users suffer with the bugs.

I wish there was some kind of consequence to them for their declining quality.

If people keep insisting on throwing money at them no matter how bad their software is, then there's no reason for them to improve their quality.

danesparza
0 replies
21h59m

Almost like you should be able to file a 'bug report' or something. Maybe they should build a team to make sure their code quality is up to snuff...

natbennett
3 replies
21h28m

Strongly disagree with the literal premise of this post. The idea of having a separate team with the mandate to “ensure quality” was always ridiculous. Occasionally it was helpful by accident but it was always ridiculous. “Quality” isn’t something you can bake in afterwards by testing a bunch.

Getting rid of everyone with testing expertise, and treating testing expertise as inherently less valuable than implementation expertise? Sure, you could convince me that was a bad idea.

spinningD20
2 replies
21h24m

Doing every quality activity "after the fact" I agree is the issue. That's the root of the problem you're seeing, not that there was a separate quality team.

natbennett
1 replies
18h33m

It’s not the “separate” part that I think is ridiculous. It’s the fact that the team is named “quality assurance.” It relies on a metaphor from manufacturing that’s entirely inappropriate for software.

If you want to call it “Testing and Exploration” you’d get no argument from me. (Though I do think you’ll find that team is hard to staff.)

spinningD20
0 replies
14h28m

I'd call it something like "Risk analysis, identification and mitigation group"

guhcampos
3 replies
19h53m

“ To these folks, it feels like giving a damn is a huge career liability in your organization. Because it is.”

And it’s easy to see why.

Software Quality, Code Maintainability, Good Design. These things only matter if you are planning to work at that company for a long time. If you're planning to stay a couple years then hop to the next company, the most optimal path is to rise fast by doing high-visibility work, then use your meteoric rise as resume material to get a higher-paying job. Rinse and repeat. If that project is going to break or become unmaintainable in a couple years, who cares? You're not going to be there.

Recognize the pattern? Startups work the same. It’s the “growth mindset” imprinted everywhere. If this product becomes unmaintainable in 5 years, who cares? I will have exited and cashed in.

I don’t judge people who do that exactly because it’s the practice the companies themselves use. I don’t like it, I actually hate it, but I understand people are just playing by the rules.

The fun part is watching managers and executives complaining about employee turnover, lack of company engagement, quiet quitting, like this isn’t them tasting their own poison.

BoxFour
1 replies
19h44m

Startups work the same. If this product becomes unmaintainable in 5 years, who cares?

This is a reasonable stance for a startup to take. The majority of startups likely won't last five years, because most startups fail.

Being alive in five years with technical debt is a good problem for most startups to have, because that means they managed to make it five years.

guhcampos
0 replies
15h57m

Exactly. That’s why I say I don’t like it, but understand it. I enjoy the fast-paced and highly creative environments of startups more than the politics and bureaucracy of corporations, but the short-term vision bothers me a lot. The result is I choose to work for midsized companies, or established startups. I kind of specialized in working on the growing pains of companies hiring their first hundred engineers.

There’s a lot to say about startup culture and the growth mindset, but I don’t consider it necessarily evil. It exists, lots of the products we use and love would be impossible to build without it. It can be extremely harmful, though. It burns out people, it leads to excessive risk taking, it favors aggressive, invasive marketing, it rewards reckless management - yet it works.

It isn’t good or evil, like most everything in the world. It’s just… there.

hinkley
0 replies
19h3m

We've been through snake oil and pyramid schemes and now we have settled on depeche mode.

blastbking
3 replies
21h52m

Agree with the sentiment of this article, but the disturbing AI-generated images after every paragraph were definitely not necessary. Do people actually need to see these?

jawns
1 replies
21h48m

It did strike me as ironic that the article is about ruthlessly automating to avoid paying QA engineers, and it uses AI to avoid paying illustrators.

willsmith72
0 replies
21h35m

not saying this is you, but i get so tired of feedback about ai-generated images along the lines of "you're taking money away from local artists"

it's not one or the other. in my experience it's a decision of "no images" vs "ai images".

in this case, probably "no images" would've been better for the reading experience. but there was never any illustrator getting paid

Night_Thastus
0 replies
20h58m

I'd rather have stock or existing images, or none at all.

Every time I looked at one of those AI images my brain just kept seeing all the little weird parts that didn't make sense. Like a brain itch.

oaththrowaway
2 replies
22h9m

I had a boss at Yahoo who gave our QA to another team because "Facebook doesn't use QA, we shouldn't either". I can't remember if it was Facebook or MS, but he was willing to buy all of us a book talking about how amazing it was.

Long story short, it wasn't. It was like taking away a crutch. Of course we could have been more diligent about testing before having QA validate it, but it slowed development down so much trying to learn all the things we never thought to test that QA did automatically.

robocat
1 replies
19h55m

An article about Facebook's reason for no QA with some of the mitigations:

https://blog.southparkcommons.com/move-fast-or-die/

A bit recent to have affected Yahoo - but it sells a good story.

  We would celebrate the first time someone broke something.

  Let anyone touch any part of the codebase and get in there to fix a bug or build a feature. Yes, this can cause bugs. Yes, that is an acceptable tradeoff. We had a shared mythology about the times that the site went down and we all rallied together to quickly bring it back up.

Sounds like hell: running as close to the edge of the cliff as you can. Presumably totally ignoring thousands of papercuts of slightly broken functionality. Optimising to produce an infinite number of shallow bugs.

quadrifoliate
0 replies
14h7m

The difference is that bugs in the social network parts of Facebook (the ones where you see your friends and family's pictures and posts) are not directly making money for Facebook. The only real stuff that matters for the money is all the tracking.

I bet the people responsible for Facebook Ads Manager are a lot less enthusiastic about "move fast and break things", although I'd be interested to hear an opposing viewpoint from anyone here who's worked for that group.

nlavezzo
2 replies
20h14m

When we built FoundationDB, we had a maniacal focus on quality and testing. So much so that we built it in a language we invented, called Flow, that allowed us to deterministically simulate arbitrary sized FDB clusters and subject them to crazy conditions, then flag each system property violation and be able to perfectly reproduce the test run that triggered the violation.

We got to a point where the default was that all of our tens of thousands of test runs each night would flash green if no new code was introduced. Tests that flashed red were almost always due to recent code additions, and therefore easily identified and fixed. It let our team develop knowing that any bugs they introduced would be quickly caught, and this translated to being able to confidently take on crazy new projects - like re-writing our transaction processing system post-launch and getting a 10x speed increase out of it.
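
The core trick, stripped down to a toy Python sketch (nothing like Flow's actual machinery): one seed drives both the workload and the injected faults, so any failing run replays exactly:

  import random

  def run_simulation(seed: int) -> bool:
      """One fully deterministic run: the seed fixes every event,
      including the injected faults."""
      rng = random.Random(seed)
      ledger = 0
      for _ in range(1_000):
          amount = rng.randint(-5, 5)
          if rng.random() < 0.01:
              continue          # injected crash: whole transfer dropped
          ledger += amount      # deposit...
          ledger -= amount      # ...always paired with its withdrawal
      return ledger == 0        # property: paired transfers net to zero

  for seed in range(10_000):    # the nightly sweep
      if not run_simulation(seed):
          print(f"violation; replay exactly with run_simulation({seed})")
          break
  else:
      print("all green")        # a healthy night

This toy stays green; move the injected crash between the two halves of the transfer and the sweep immediately prints a seed that reproduces the violation.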

In the end our focus on quality led to velocity - they weren't mutually exclusive at all. We don't think this is an isolated phenomenon, which led us to our newest project - but that's a story for another time.

JonChesterfield
1 replies
19h25m

nlavezzo
0 replies
17h46m

Yep!

hasoleju
2 replies
22h4m

I completely agree with the sentiment of this article: It is a big problem that being a "software tester" is not at all as prestigious as being a software engineer. Having someone who really understands how the users interact with the software and systematically covers all behavior in test cases is very valuable.

I experienced both worlds: I worked in an organization where 4 QA engineers tested each release that was built by 6 software engineers. Now I'm in a situation where 0 QA engineers test the work of 8 software engineers. In the second case the software engineers actually do all the testing, but not that systematically because it's not their job to optimize the testing process.

Having someone with the capabilities of a software engineer whose daily work is uncovering defects and verifying functionality is important. Paying someone who owns the testing process is more than justified commercially. The problem is: you don't find those people, for various reasons. Therefore you are stuck with making the software engineers do the testing.

But there is hope. There is a new standard released for my industry that requires organizations to have a QA department that is independent of the software engineering department. If they don't have that, they are not allowed to roll out their software product as compliant with the standard. Maybe this will help reintroduce the QA Engineer as an important and prestigious role.

zabzonk
1 replies
21h52m

what industry is that?

hasoleju
0 replies
21h19m

Mechanical engineering and shopfloor software for factory automation in Europe. There is a new IT security standard released that also includes requirements for the software.

w10-1
1 replies
20h15m

For QA to be respected and protected, it has to identify what it's responsible for.

Luckily, that's easy: the "fault model", all the ways things can break. That tends to be a lot more complex than the operating model, the domain model, or the business model.

Once all the potential issues and associated costs for all the fault models are enumerated, then QA can happily offer to any other organization the responsibility for each one, and see who steps up to take it on.

In many cases, it can be done more cheaply in design, engineering, or automation; it's usually easier to prevent a problem than to capture, triage, debug, fix, and re-deploy.

Organizations commonly make the mistake of being oblivious to the fault models and failing to allocate responsibility. That's possible because most failures are rare, and the link from consequences back to cause is often unclear. The responsibility allocation devolves to blame, and blame to "who touched this last"? But catastrophic feedback is a terrible way to learn, and chronic irritants are among the best ways to lose customers and staff.

bumby
0 replies
20h9m

I agree with you, but have the hunch that many PMs don't.

it's usually easier to prevent a problem than capture, triage, debug, fix, and re-deploy.

It really depends on the risk of the fault. To a PM under schedule pressure, the higher risk may be to break schedule in order to redesign to mitigate the fault. As you said, many failures are low probability, so PMs are used to rolling the dice and getting away with it. Often they've moved on before those failures rear their ugly heads.

An organization really needs the processes that establish guardrails against these biases. Establishing requirements to use the tools to define the fault model can go a long way, although I've seen people get away with turning a blind eye to those requirements as well. You also need to mate it with strong accountability.

ptmcc
1 replies
22h12m

I stepping-stoned through QA on my way into development, now a decade or so ago, and this part stands out as especially true in my experience:

This created a self-reinforcing spiral, in which anyone “good enough at coding” or fed up with being treated poorly would leave QA. Similarly, others would assume anyone in QA wasn’t “good enough” to exit the discipline. No one recommends the field to new grads. Eventually, the whole thing seemed like it wasn’t worth it any more. Our divorce with QA was a cold one — companies just said “we’re no longer going to have that function, figure it out.”

I've worked with a handful of talented career software QA people in the past. The sanity they can bring to a complex system is amazing, but it seems like a shrinking niche. The writing was on the wall for me that I needed and wanted to get into fulltime dev as QA got increasingly squeezed, mistreated, and misunderstood. At so many companies QA roles went into a death spiral and haven't recovered.

Now, as the author points out, a lot of younger engineers have never worked with a QA team in their career. Or maybe have worked with a crappy QA team because its been devalued so much. So many people have never seen good QA that no one advocates for it.

reactordev
0 replies
22h3m

anyone in QA wasn’t “good enough”

This is why. Engineers have some of the most inflated egos, and they set an extremely high bar for being "part of the club". Sometimes that's corporate policy (hire better than you) and sometimes it's just toxicity (I am better than you) - all without realizing that the most valuable skills they could learn are soft skills. I'm open to finding anyone willing to code, whether it's from QA, sales, Gerry's nephew, a recent CS grad, a designer turned coder, or that business analyst that taught themselves Python/Pandas.

A good QA team is sorely missed. A bad QA team turns the whole notion of QA teams sour. Just the same for development teams :D

I think devs are first line of defense. Unit tests etc. QA is second line (should we release?), feature testing, regression, UX continuity, etc. There’s value in it if you can afford it.

mynameisnoone
1 replies
12h49m

It's not so much a Shakespearean "to have a QA team or not ..."; it's a question of responsibility and accountability.

SWEs must write unit tests because they know the code best. Dumping this responsibility onto QA is slow, evil, and wrong, like doing quality control only instead of QA+QC.

QA teams must have the authority to ensure complicated code gets unit testing code coverage.

QA teams should provide partial and full-up integration tools and testing with the support of SWEs.

QA teams must have stop-the-assembly-line authority to ensure quality and testing requirements are met.

QA teams (or tools teams that support multiple QA teams) must make testing faster and more efficient.

There ought to be a suite of smoke tests such that untested code cannot be committed to the "production" branch, whatever that looks like, except in rare extreme emergencies.

All production-ready commits should be squashed, signed-off by another SWE, and have a test plan.

Test plans should be auto-generated, wherever possible.

Tests, combined with the test infrastructure, should be able to auto-bisect/auto-blame breakage.

Which tests must run and pass to accept a production proposed diff should be auto-minimized to those that touch particular area(s) and their precise-as-possible-but-still-correct dependencies.
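
A toy sketch of that minimization, with a hand-written dependency map; a real system would derive the map from the build graph or coverage data:

  # Map source areas to the tests that must run when they change.
  DEPS = {
      "billing/": {"tests/test_billing.py", "tests/test_invoices.py"},
      "auth/":    {"tests/test_auth.py"},
      "common/":  {"tests/test_auth.py", "tests/test_billing.py",
                   "tests/test_invoices.py"},  # shared code fans out wide
  }

  def tests_for(changed_files: list[str]) -> set[str]:
      selected: set[str] = set()
      for path in changed_files:
          for prefix, tests in DEPS.items():
              if path.startswith(prefix):
                  selected |= tests
      return selected

  # A diff touching only auth/ runs one test file, not the whole suite.
  print(sorted(tests_for(["auth/session.py"])))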

Other areas that must not be neglected and shoveled onto SWEs: product management, UAT, UX, and operations.

xtracto
0 replies
7h7m

I am a CTO and have always tried to have the QA team be part of the Product Office (CPO). It tends to keep us honest, and ultimately it puts the decision between churning out new features and building crap in the same team/person.

Now, don't get me wrong, my teams always do unit and integration testing, along with automation of those two. Devs are responsible for the quality of their work. But ultimately it is the product team, with input from their QA team, who decide whether a new feature is ready for release as-is or needs more polish.

harshalizee
1 replies
22h12m

This is a symptom of a larger issue in tech where C/E-suites are trying really hard to turn engineers into fungible cogs that can be swapped in and out across different parts of the system and still have everything work perfectly.

righthand
0 replies
22h4m

Yep, I see this a lot: people want swappable engineers, and no one is able to understand that if you have an engineer working on the frontend most of the time, they will not be acquainted with backend work. Nor is there a need or logical way to keep everyone working across the stack just to keep them in this swappable state. Each time you change a person's job they need retraining and orientation. Code is code, but a drop-down menu is not a database insert.

evilantnie
1 replies
21h54m

QA has always been about risk management. There are multiple ways to manage risk, and some of those ways can be more cost-effective for a business. As software shifted towards SaaS offerings, deployments (and rollbacks) became quicker, and customer feedback loops also got lightning fast. Teams can manage the risk of a bug more efficiently by optimizing for mean-time-to-recovery. This muscle is not one that QA teams are particularly optimized for, thus their effectiveness in this new model was reduced. I've found that holding on to the QA function in this environment can severely dilute the ownership of quality as a requirement from engineers.

QA is still extremely valuable in any software that has long deployment lead times. Mobile apps, On-Prem solutions, anything that cannot be deployed or rolled back within minutes can benefit from a dedicated QA team that can manage the risk appropriately.

spinningD20
0 replies
21h44m

There are so, so many instances where "rolling back" is just not a feasible solution. Working for a SaaS company with mobile/web/API surfaces, huge DBs, migrations, and payroll uses in the product, rolling back is and should always be a LAST RESORT. In 99% of cases, something significant enough to warrant a rollback usually results in a "hot patch" workflow instead, because rolling back carries its own risks.

QA has always been about risk management.

100%.

QA should be about identifying risk: the likelihood of failure and the impact of failure on the user, the client, and the company. The earlier this is done in the various processes, the better. ("Shift left", though I've seen a ton of differences in how people describe it; generally QA should start getting involved in the design phase.)

Another example from my own first-hand experience:

A company I worked for made a product that plugged into machines that were manufacturing parts, and based on your parameters it would tell you whether or not the part was "good" or "bad".

When I asked the company's leadership, as well as the majority of the engineering group, "what is the biggest risk with this product?", they all said "if the product locks up!". Upon further discussion, I drew out a much larger, more insidious risk: "what if our product tells the client that the part is 'good' when it is not?"

In this example, the part could be involved in a medical device that keeps someone alive.

You're not going to be able to roll that back.

cloths
1 replies
22h18m

I can't agree more.

Focus: There is real value in having people at your company whose focus is on the quality of your end product. Quality might be "everybody's job"…but it should also be "somebody's job". Indeed, since each person naturally has just one focus, having a dedicated person focus on QA is important.

Another practice, or buzzword (or what used to be a buzzword :) ), is Exploratory Testing, which can pretty much only be conducted by dedicated QA.

philk10
0 replies
22h3m

That's pretty much my role - I don't write test cases, I'll explore the system and try to find issues that the devs have missed. Then they learn from what they missed so I have to explore more to find other types of issues.

bgribble
1 replies
21h39m

I was lucky enough to work on a small eng team with one full-time dedicated QA person. They are one of the very few coworkers from my long career whom I have really tried hard to poach away from whatever they were doing after our shared workplace went bust.

Yes, part of the job was to write and run manual test suites, and to sign off as the DRI that a certain version had had all the automated and manual tests pass before release.

But their main value was in the completely vague mandate "get in there and try to break it." Having someone who knows the system and can really dig into the weak spots to find problems that devs will try to handwave away ("just one report of the problem? probably just some flaky browser extension") is so valuable.

In my current job, I have tried for 5+ years to get leadership to agree to a FT QA function. No dice. "Developers should test their own code." Yeah and humans should stop polluting the ocean and using fossil fuels, how's that going?

ncphil
0 replies
21h10m

"Developers should test their own code" is emblematic of a juvenile mindset in people who regularly fire up their "reality distortion field" to avoid the effort of educating themselves on their own operations (and that helps them deny responsibility when things go South). As W. Edwards Deming, bane of all "gut instinct" executives, once wrote, "The consumer is the most important part of the production line. Quality should be aimed at the needs of the consumer, present and future." The lack of a dedicated quality team shows a lack of respect for your customers. You know, the people you need to buy your products or services (unless you're intent on living off VC loans until you have to pull the ripcord on your golden parachute).

ThalesX
1 replies
21h25m

I am currently working with a startup that spends a lot of time building tests that need to be refactored every sprint, because it's early stage and the assumptions change. I am shocked at the amount of developer-hours spent on tests that end up disabled or deleted, instead of just hiring 1-2 super cheap manual testers who go through the flows day in and day out.

For me it's a no-brainer: if I were CEO/CTO, until product-market fit is achieved and exponential growth is visible, I'd just outsource QA and that's that.

spinningD20
0 replies
21h18m

When outsourced, you either A) rely on someone in your org to tell them what to test and what the workflows are, i.e. use them as a warm body/monkey to click on things for you (this is what most people see QA as, which is silly), or B) rely on the outsourced QA to know your product and know what is important or what all of the edge cases are.

If your product is non-trivial in size or scope, i.e. it is not a cookie-cutter solution, then the testing of your product will also be non-trivial if you want it to work and have a good reputation (including during those all-important live demos, PoCs, etc).

QA does not mean "click on things, go through the happy path, and everything is fine" - not saying you are implying that, but gosh, the number of companies that think it's child's play is crazy.

Ensorceled
1 replies
21h35m

I hired a great QA Developer a few months ago. They are building out integration, performance and end to end tests; finding bugs, speeding up releases and generally making things better.

I get asked, every week, if they are contributing and what are they contributing.

It's exhausting, so I can't imagine what it feels like to actually BE in QA.

warkdarrior
0 replies
19h51m

# of bugs found per week should be a sufficient metric of productivity.

teunispeters
0 replies
19h13m

I really like working with good QA teams... ... maybe because I've never worked with a bad team. As a developer, familiarizing myself with QA processes has never been a mistake, either.

somewhereoutth
0 replies
21h50m

Systems used by humans have to be tested by humans. Those testers can either be your customers, or your QA team - as your dev and sales teams will be busy doing other things.

sharts
0 replies
13h2m

The problem with QA is nobody outside of QA understands QA. They think QC is QA so they automate their QC "testing" and assume that is QA. It's not. Moreover they never listen to QA nor respect them as people.

QA are usually the best people to know what's what when the rubber meets the road. They know where the bodies are buried. They often understand and have a pulse on customers and usage better than product managers. They know the ins-and-outs of how various features interact and how a product as a whole works better than silo'd developers.

Historically they were effectively the only "product owners" who could certify a release before it went out. They would coordinate with the right people to ensure all technical and non-technical deliverables and dependencies were met before releases. They were also the closest approximation of power users.

They often maintained test infrastructures on which deployments could be tested. In fact, they were the origins of automation and DevOps as a "thing", because they were the ones who saw all the friction points daily, giving rise to CI/CD. Oftentimes nobody listened because everyone was concerned with features.

QA has always been about investing resources within an organization to improve it -- in effect, building things inside it to make it better. Now that we have gotten the message to some degree and optimized some manual pain points, we kick to the curb those who got us there, without any regard for the value they provided -- and instead decide to push to prod and test in production, pissing off customers even more.

If you ask yourself -- would you feel comfortable driving a car that was built and tested with automation alone? What about a space shuttle or an airplane? Or medications? Or would you prefer that human test drivers or test pilots put those products through their paces before signing off on them? -- it might drive the point home better.

But then again, the software industry has devolved since tech-bro + VC culture pretty much ate it while chasing those sweet $$$.

rychco
0 replies
20h8m

Does Medium automatically insert these AI generated images into every article now, or is that just the popular thing to do?

righthand
0 replies
22h12m

QA Engineers are some of the best debuggers too. They have their hands in the pipeline, src and test directories, and often work with all aspects of developing and deploying the application.

When I was a QA lead I often ran into software engineers who couldn't be bothered to read a pipeline error message (and would complain daily in Slack). When it came to optimizing the pipeline, they would ignore the base problems and pretend the issues stemming from them were magical and not understood, wasting days guessing at a solution.

The disrespect a QA engineer sees is not exaggerated in this article. Since most companies with QA orgs do not have a rigorous interviewing process like their Engineering orgs, QA engineers are seen as lesser. The only SWEs I've met who respect them are people who worked in QA themselves. The disrespect is so rampant that I myself switched back to the Engineering org (I tried using my seniority as a principal engineer, and even shifted into management to make changes, but this failed because Engineering could not see past its own hubris and leadership will not help you). My previous company, before I was laid off, hired a new CTO who claimed we could just automate away QA needs but had no examples of what she was talking about. That is the level of respect poured down from the top about building good software.

physicsguy
0 replies
9h48m

Having worked in both environments, I personally think the biggest issue is companies missing the triage function: people who can turn a customer-reported bug into a step-by-step guide for reproducing it.

When I worked in the simulation space we used to get models sent in by customers where a convergence problem or crash would occur after 12 hours of running on a 128-core machine. Those were impossible for a developer to work with, since debug mode made the runtime even longer, so we needed someone to identify the cause of the problem and distill it down to a much smaller model where the bug could be replicated. The QA team there were really application engineers and subject-matter experts, and they were absolutely invaluable.

pavel_lishin
0 replies
20h49m

Speaking of QA, using AI to generate comic-book-style illustrations is great until one of your heroes has 6 and 8 digits per hand.

At least with the comic style, you could plausibly say that that's canon to her character.

notpachet
0 replies
14h18m

I like this post, and agree with almost all of the written content. But man, those AI generated images are cancer.

notnmeyer
0 replies
21h37m

i never felt that devops and qa were at odds the way this article suggests. in my experience nobody wants to, or knows how to, run QA correctly, so the org shoots itself in the foot and does one of two things:

1. hire a contractor who just has no idea about anything.

2. hire someone and place them outside the engineering org (on the support team as a "support engineer" seems pretty popular) where they have little to no interaction with either engineering _or_ customers, and expect them to work miracles.

l72
0 replies
21h29m

At our small tech company, QA is elevated to a whole different level. The QA lead(s) are involved in all product planning meetings and develop the requirements with the product team. Our QA lead has a PhD, and two have master's degrees! They know how the application is supposed to work better than most of the developers and play a big role throughout development. In my opinion (as the person who leads the developers), this is how it should work. They aren't some separate team we chuck stuff over to at the end of the day.

karmakaze
0 replies
21h45m

Having a QA team is like having an Ops team with stuff being 'thrown over the wall' to the downstream.

There are two kinds of tests. Regression testing should be automated, written, and maintained by devs. New feature or change testing should be done by those who defined the features, namely Product people. In the best case it's an iterative and collaborative process, where things can be evaluated in dev/test environments, staging environments, or production for beta-flag-enabled test users.

itqwertz
0 replies
21h58m

I noticed this trend a couple of years ago; they called it Shift-Left.

Basically, it was a way to get a developer to do more testing during development before handing over a feature to QA. This sets up the imagined possibility of firing all QA staff and having developers write perfect code that never needs to be tested thoroughly. Looks great on paper...

At a previous company, they started firing all of the manual QA devs and replacing them with offshore QA people who could do automated test development (Cypress tests). The only problem was that those fired QA team members had significant business domain knowledge that was never transferred or written down. The result was a dumpster fire every week, with the usual offshore nightmares and panicked responses.

Make no mistake about this, it's just a cost-cutting measure with an impact that is rarely felt immediately by management. I've worked with stellar manual QA people and our effort resulted in bulletproof software that handled edge cases, legacy customers, API versions, etc. Without these people, your software eventually becomes IT-IS-WHAT-IT-IS and your position is always threatened due to the lack of quality controls.

hintymad
0 replies
17h38m

Maybe it's hard to find enough technically strong engineers with a passion for testing. Microsoft used to advocate how technically challenging it is to be an SDET - a software engineer in test. IBM used to tell its test teams that testing engineers know the big picture, understand the products, and have to do awesome technical work. Unfortunately, in reality most engineers would rather develop products directly than become an expert in test.

gyudin
0 replies
22h17m

Getting rid of QA teams is frequently a slow-ticking bomb, imho, because some issues might not even break functionality. You can mess up some tracking/analytics, and managers will make wrong decisions based on incorrect data. But personally I feel like within a few years everything might change a lot. Machines will 100% be better at coding, maintenance, and testing things.

gluteart
0 replies
1h53m

I joined my current company just months before they started to cut down the small QA team we had then. QA automation was supposed to be the answer. 1.5 years on: product quality has dropped, automation for the client-side application barely exists, and even the parts it does cover are prone to bugs.

I tried to draw attention to the fact that at least some manual QA is needed, but even after an obvious failure (some people lost their jobs), managers are adamant. Automation, 'special bug-hunting projects', 'we should concentrate on code quality' lectures, all-hands testing - anything instead of the very obvious solution of bringing the QA team back. Development time is up, regressions are frequent, and communication has become harder.

The only QA person who still works at the company (now in a different role) has become invaluable, because he is one of the very few people who deeply understands the product as a whole and knows all the services we work with.

I can't think of another example of such an obvious mistake, with such an obvious solution, being ignored so relentlessly.

fatnoah
0 replies
22h25m

In making the case for building up a QA org at my current startup, I repeat the mantra that QA is both a skillset and a mindset. Automated tests can tell us a lot, but skilled QA testers are amazing at finding the edge cases that break things and at providing human feedback about what looks and feels good for users.

dclowd9901
0 replies
13h11m

A couple things I miss from not having dedicated QA:

1) someone who deeply understands how the product should work

2) someone who’s good at writing performant and maintainable tests

bigEnotation
0 replies
4h44m

>> slowest part of software delivery is testing

In my experience the slowest part has been marking a feature as done. I loved working at places with QA. I could assign tickets to QA once the PR was up.

Now I gotta build in that I’ll be bumping PRs for review for approximately 30-50% of the time I’m working on a feature.

annoyingnoob
0 replies
22h10m

I remember a time before agile and devops. Seems like QA has always been looked down on, and always considered a bottleneck.

amaterasu
0 replies
21h17m

Ignoring the common trope that developers are bad testers (I am, but not all devs are), QA presence allows teams to move faster by reducing the developer test burden to automated regression, and developer acceptance testing only. Good QA can often assist with those tasks too, further improving team velocity. Also, moving tasks to people who specialise in them is not usually a poor decision.

The best way I've found to sell QA to management (especially sales/marketing/non-technical management), is to redefine them as marketing. QA output is as much about product and brand reputation management as finding bugs. IMO, nothing alienates customers faster than bugs, and bad experiences result in poor reputation. Marketing and sales people can usually assign value to passive marketing efforts, and recognise things that are damaging to retention and future sales.

alanjay
0 replies
22h23m

But Microsoft fired their entire QA department, and their software is rock solid. Right! Right?

TheBlight
0 replies
16h17m

QA teams mostly mattered back when shipping user-facing software was somewhat of an ordeal and generally used physical media. I really liked working with good QA teams. They would often find issues that would surprise me and occasionally lead to the most interesting diagnosis/debugging sessions I can recall. It was nice handing them a patched version and seeing them get to close the bug. It was a fun little process in that way.

RugnirViking
0 replies
16h47m

It makes a huge difference. It kinda feels bad to have your work dissected, but it feels so great to have institutional permission to slow down and make your work good.

RedShift1
0 replies
21h42m

Now just to convince msft of this fact because their shit's been breaking left and right and all over the place.

KaiserPro
0 replies
20h2m

So for me, the QA team is the best source of product information in the entire engineering team. If they also do customer triage, then probably in the entire company.

They should know the product inside out, moreover, they know all the annoying bits that are unsexy and not actively developed.

Yes, they find bugs and junk, but they also know how your product should be used, and the quickest/easiest way to use it. Those are often two different paths.

Bring your QA in on the product cycle; ask them about the stuff that pisses them off the most.

They also should be the masters of writing clear and precise instructions, something devs and product owners could learn from.

ChrisMarshallNY
0 replies
16h1m

Testing has always been a huge deal with me. I tend to work alone, so I usually have to test my own stuff[0]. I’m pretty brutal. I learned from the best. I really feel that the level of testing I went through, for most of my career, would have a lot of modern devs, curled up in a fetal position, under their desks, whimpering.

We are currently in a “phase 2” test, of the project we’ve worked on, for the last year or so. It has shown us some issues (nothing major, though). Phase 1 testing showed us some nasties.

I had to force the team to do all this testing. They would have happily released before phase 1. I don’t think it would have ended well.

[0] https://littlegreenviper.com/miscellany/testing-harness-vs-u...