
You are never taught how to build quality software

KeplerBoy
119 replies
1d

It will be necessary to deliver software without bugs, on time.

Seems like a pretty bad premise to start an article on quality software. If you believe you can ship bug free code, it's time to switch careers.

codegeek
29 replies
22h25m

Yep. If you have written production-grade software at real companies, you know that the moment you make a new commit (even a one-line change), you have to be ready to accept that it could break something. Yes, you can do your unit tests, integration tests, User Acceptance Tests and what not. But every code change = a new possible bug that you may not be able to catch until it hits a customer.

Whenever I hear a developer say "I never ship buggy code", I am always careful to dig in more and understand what they mean by that.

Gare
26 replies
21h59m

How about a formal proof? :)

I jest, but that should be the gold standard for anything life-critical and good to have for mission-critical software. Alas, we're not there yet.

akhosravian
10 replies
21h36m

I’m not a CS academic or a mathematician, but don’t Gödel’s incompleteness theorems preclude a formal proof of correctness?

Verdex
5 replies
21h13m

No.

Gödel means that we can't have an algorithmic box that we put a program into and out comes a true/false statement about halting.

Nothing is stopping you from writing the proof manually for each program that you want to prove properties for.

ALSO, you can write sub-Turing-complete programs. Those are allowed to have automated halting proofs (see Idris et al).
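For a concrete sense of what an automated totality check looks like, here is a minimal sketch in Lean 4, a proof assistant in the same family as Idris (the `double` function is just an invented example). The definition is accepted only because the compiler can see that the recursion is structurally decreasing, which is exactly the kind of termination argument that cannot exist for arbitrary Turing-complete programs.

```lean
-- A total function: Lean accepts this only because the recursive call is on a
-- structurally smaller argument, so termination is checked automatically.
def double : Nat → Nat
  | 0     => 0
  | n + 1 => double n + 2

#eval double 21  -- 42
```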

kevindamm
4 replies
20h45m

What you're talking about is actually the Church-Turing thesis and the halting problem.

While, yes, computability and provability are very closely related, it's important to get attribution correct.

More details on what Gödel's Incompleteness Theorem really said are in a sibling comment so I won't repeat them here.

Verdex
3 replies
19h59m

it's important to get attribution correct.

Really? Says who?

Or perhaps you'll prove it from first principles. Although if it turns out to be difficult, that's okay. Somebody mentioned something about systems being either complete or consistent but never both. Some things can be true but not provably so. Can't quite remember who it was though.

kevindamm
2 replies
19h41m

Fair enough, I was being annoyingly pedantic.

[I believe that] it's important to get attribution correct.

Verdex
1 replies
19h22m

To be fair, annoyingly pedantic is the best kind of pedantic.

- Futurama (kind of)

kevindamm
0 replies
6h36m

sounds technically correct

Gödel really was a rather unique mind, and the story of his death is kind of sad... but I wonder if it takes such a severe kind of paranoia to look for how math can break itself, especially during that time when all the greatest mathematicians were in pursuit of formalizing a complete and consistent mathematics.

topaz0
2 replies
21h20m

No. There are plenty of things that can be proved, it's just that there exist true statements that cannot be proved.

kevindamm
1 replies
21h5m

That's closer, but still not quite right.

There are well-formed statements that can be constructed which assert that their own Gödel number represents a non-provable theorem.

Therefore, you must either accept that such a statement and its negation are both provable (leading to an inconsistent system), or not accept it, in which case there are true statements that cannot be proved within the system.

Furthermore, this can be constructed in anything with basic arithmetic and induction over first-order logic (Gödel's original paper discussed how broadly it could be applied, to basically every logical system).
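For reference, the usual textbook shape of the construction (a sketch, not the commenter's exact formulation): via the diagonal lemma one builds a sentence G that talks about its own provability in the theory T, and the dilemma above falls out of it.

```latex
% Via the diagonal lemma, construct G asserting its own unprovability in theory T:
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
% If T proves G, it also proves Prov_T(⌜G⌝), contradicting G: T is inconsistent.
% If T is consistent, G is unprovable, and so (reading off the equivalence) true but unprovable.
```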

kevindamm
0 replies
20h50m

The important thing to note is that it doesn't have anything to do with truth or truth-values of propositions. It breaks the fundamental operation of the provability of a statement.

And, since many proofs are done by assuming a statement's negation and trying to derive a contradiction, having a known contradiction in the set of provable statements effectively allows any statement to be proven. Keeping the contradiction is not actually an option.
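The "any statement can be proven" step is the principle of explosion; a one-line sketch of the usual derivation, with P the contradictory statement and Q arbitrary:

```latex
% Ex falso quodlibet: from P and ¬P, any Q follows.
P \vdash P \lor Q \quad\text{(disjunction introduction)} \qquad
P \lor Q,\ \neg P \vdash Q \quad\text{(disjunctive syllogism)}
```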

Veserv
0 replies
21h20m

No. It merely prevents you from confirming every arbitrarily complex proof. Incompleteness is more like: I give you a convoluted mess of spaghetti code and claim it computes prime numbers and I demand you try to prove me wrong.

cochne
5 replies
21h36m

I never really got how proofs are supposed to solve this issue. I think that would just move the bugs from the code into the proof definition. Your code may do what the proof says, but how do you know what the proof says is what you actually want to happen?

MaxBarraclough
3 replies
20h40m

A formal spec isn't just ordinary source code by another name; it's at a quite different level of abstraction, and (hopefully) it will be proven that its invariants always hold. (This is a separate step from proving that the model corresponds to the ultimate deliverable of the formal development process, be that source code or binary.)

Bugs in the formal spec aren't impossible, but use of formal methods doesn't prevent you from doing acceptance testing as well. In practice, there's a whole methodology at work, not just blind trust in the formal spec.

Software developed using formal methods is generally assured to be free of runtime errors at the level of the target language (divide-by-zero, dereferencing NULL, out-of-bounds array access, etc). This is a pretty significant advantage, and applies even if there's a bug in the spec.

Disclaimer: I'm very much not an expert.
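A rough illustration, in Python, of the kind of obligations such tools discharge statically (SPARK and Frama-C are examples of real tools that do this; the function and its precondition below are invented for illustration). The point is that the checks stop being runtime checks and become theorems proved once, for all inputs.

```python
def safe_mean(xs: list) -> float:
    # Precondition a verifier would force every caller to establish:
    assert len(xs) > 0, "precondition: xs must be non-empty"
    total = 0.0
    for i in range(len(xs)):   # obligation: i is always in bounds (holds by construction)
        total += xs[i]
    return total / len(xs)     # obligation: divisor is non-zero (follows from the precondition)
```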

Interesting reading:

* An interesting case-study, albeit from a non-impartial source [PDF] https://www.adacore.com/uploads/downloads/Tokeneer_Report.pd...

* An introduction to the Event-B formal modelling method [PDF] https://www.southampton.ac.uk/~tsh2n14/publications/chapters...

xmprt
2 replies
20h21m

I think the reason that formal proofs haven't really caught on is because it's just adding more complexity and stuff to maintain. The list of things that need to be maintained just keeps growing: code, tests, deployment tooling, configs, environments, etc. And now add a formal proof onto that. If the user changes their requirements then the proof needs to change. A lot of code changes will probably necessitate a proof change as well. And it doesn't even eliminate bugs because the formal proof could include a bug too. I suppose it could help in trivial cases like sanity checking that a value isn't null or that a lock is only held by a single thread but it seems like a lot of those checks are already integrated in build tooling in one way or another.

valand
0 replies
4h11m

You're mixing up a development problem with a computational problem.

If you can't use formal proof where it is supposed to be necessary just because the user can't be arsed to wait, then the software project was simply not well conceived.

MaxBarraclough
0 replies
19h58m

more complexity and stuff to maintain

Yes, with the current state of the art, adopting formal methods means adopting a radically different approach to software development. For 'rapid application development' work, it isn't going to be a good choice. It's only a real consideration if you're serious about developing ultra-low-defect software (to use a term from the AdaCore folks).

it doesn't even eliminate bugs because the formal proof could include a bug too

This is rather dismissive. Formal methods have been successfully used in various life-critical software systems, such as medical equipment and avionics.

As I said above, formal methods can eliminate all 'runtime errors' (like out-of-bounds array access), and there's a lot of power in formally guaranteeing that the model's invariants are never broken.

I suppose it could help in trivial cases like sanity checking that a value isn't null or that a lock is only held by a single thread

No, this doesn't accurately reflect how formal methods work. I suggest taking a look at the PDFs I linked above. For one thing, formal modelling is not done using a programming language.

dgacmu
0 replies
21h5m

Not really. Imagine the proof says: "in this protocol, when there are more than 0 participants, exactly one participant holds the lock at any time"

It might be wrong, but it's pretty easy to inspect and has a much higher chance of being right than your code does.

You then use proof refinement to eventually link this very high level statement down to the code implementing it.

That's the vision, at least, and it's sometimes possible to achieve it. See, for example, Ironfleet: https://www.microsoft.com/en-us/research/publication/ironfle...
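A toy version of that kind of claim can even be checked by brute force. Here is a minimal sketch in Python that enumerates every reachable state of a tiny lock protocol and asserts the mutual-exclusion invariant; it is only meant to show the flavor of "state an invariant, check it over all behaviors". IronFleet itself uses machine-checked refinement proofs (in Dafny), not enumeration, and all names below are invented.

```python
PARTICIPANTS = (0, 1, 2)

def successors(holders):
    """States reachable in one step: acquire the free lock, or release one you hold."""
    out = []
    for p in PARTICIPANTS:
        if not holders:
            out.append(frozenset({p}))   # acquire succeeds only when the lock is free
        if p in holders:
            out.append(holders - {p})    # only the current holder may release
    return out

def check_invariant():
    """Enumerate all reachable states and assert that at most one participant holds the lock."""
    seen, frontier = set(), [frozenset()]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        assert len(state) <= 1, f"invariant violated: {set(state)} all hold the lock"
        frontier.extend(successors(state))
    return len(seen)

print(check_invariant(), "reachable states checked; at most one holder in each")
```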

AnimalMuppet
3 replies
21h16m

Formal proof of what? That it has no bugs? Ha!

You can formally prove that it doesn't have certain kinds of bugs. And that's good! But it also is an enormous amount of work. And so, even for life-critical software, the vast majority is not formally proven, because we want more software than we can afford to formally prove.

makapuf
0 replies
21h5m

Yeah, if you can have a formally proven compiler from slides, poorly written user stories and clarification phone calls to x86_64 binary then alright.

Verdex
0 replies
21h9m

This is an interesting point that I think a lot of programmers can miss.

Proving that the program has no bugs is akin to proving that the program won't make you feel sad. Like ... I'm not sure we have the math.

One of the more important jobs of the software engineer is to look deep into your customer's dreams and determine how those dreams will ultimately make your customer sad unless there's some sort of intervention before you finish the implementation.

BoiledCabbage
0 replies
19h28m

Exactly, it's fundamentally impossible. Formal proofs can help with parts of the process, but they can't guarantee no bugs in the product. These are the steps of building software, and the transitions between them; it's fundamentally a game of telephone, with errors at each step along the way.

What actually would solve the customer's problem -> What the customer thinks they want -> What they communicate that they want -> What the requirements collector hears -> What the requirements collector documents -> How the implementor interprets the requirements -> What the implementor designs/plans -> What the implementor implements.

Formal proofs can help with the last 3 steps. But again, that's assuming the implementor can formalize every requirement they interpreted. And that's impossible as well: there will always be implicit assumptions about the running environment, performance, scale, and the behavior of dependent processes/APIs.

It helps with a small set of possible problems. If those problems are mission-critical then absolutely tackle them, but there will never be a situation where it can help with the first 5 steps of the problem, or with the implicit items in the 6th step above.

roenxi
0 replies
21h24m

As always, the branding of formal methods sucks. As other commentators point out, it isn't technically possible to provide a formal proof that software is correct. And that is fine, because formal software methods don't do that.

But right from the outset the approach is doomed to fail because its proponents write like they don't know what they are talking about and think they can write bug-free software.

It really should be "write software with a formal spec". Once people start talking about "proof", in practice it sounds dishonest. It isn't possible to prove software correct in an absolute sense, and the focus really needs to be on the spec.

radicalcentrist
0 replies
19h46m

To quote Donald Knuth, "Beware of bugs in the above code; I have only proved it correct, not tried it."

kaba0
0 replies
10h37m

Well, if your product is at hello-world complexity, you might make it bug-free just by yourself, simply through chance.

Formal proving doesn’t really scale much further, definitely not to “enterprise” product scale.

bluGill
0 replies
21h8m

Even formally proved code can have bugs. The obvious case is when your requirement is wrong. I don't work with formal proofs (I want to, I just don't know how), but I'm given to understand they have other real-world limits that mean they can still sometimes have bugs.

wvenable
1 replies
20h45m

It's always amazing when I get a bug report from a product that's been running bug free in production for years with minimal changes but some user did some combination of things that had never been done and it blows up.

Usually it's something extremely simple to fix too.

codegeek
0 replies
19h40m

This happens a lot more than one may think, especially with products that have a lot of features. Some features are used sparingly, and the moment a customer uses one of them a bit more in depth, boom. Something is broken.

datadeft
25 replies
22h24m

Spicy take on engineering. Why do we accept this for software when we do not accept the same in other engineering domains?

ska
8 replies
22h5m

There are a few aspects. One is that we don't understand the fundamentals of software as well as the underpinnings of other engineering disciplines.

More importantly though, for the most part we choose not to do engineering. By which I mean this - we know how to do this better, and we apply those techniques in areas where the consequences of failure are high. Aerospace, medical devices, etc.

It differs a bit industry to industry, but overall the lessons are the same. On the whole it a) looks a lot more like "typical" engineering than most software development and b) it is more expensive and slower.

Overall, we seem to have collectively decided we are fine with flakier software that delivers new and more complex things faster, except where errors tend to kill people or expensive machines without intending to.

The other contributing thing is it's typically vastly cheaper to fix software errors after the fact than, say, bridges.

KeplerBoy
4 replies
20h26m

One is that we don't understand the fundamentals of software as well as the underpinnings of other engineering disciplines.

That sounds like an awfully bold claim. I have the feeling we understand software a lot better than we understand mechanical engineering (and by extension material sciences) or fluid dynamics. By a big margin.

I worked with finite element software and with CFD solvers; you wouldn't believe how hard it is to simulate proper airflow over a simple airfoil and get the same results as in the wind tunnel.

ska
3 replies
20h5m

That sounds like an awfully bold claim.

To the contrary, it's nearly canonical. Most of the problems pointed out in the 70s (mythical man month) have still not been resolved, 50 years later.

you wouldn't believe how hard it

Oh, I'd believe it (I've designed and built similar things, and had colleagues in CFD).

But you are definitely cherry picking here. The problem with CFD is we don't understand the fluid dynamics part very well; turbulence is a big unsolved problem still, though we have been generating better techniques. This is so true that in an undergraduate physics degree, there is usually a point where they say something like: "now that you think you know how lots of things work, let's introduce turbulence"

But a lot of mechanical engineering and the underlying physics and materials science is actually pretty well understood, to the degree that we can be much more predictive about the trade-offs than is typically possible in software. Same goes for electrical, civil, and chem. Each of them has areas of fuzziness, but also a pretty solid core.

imetatroll
1 replies
19h0m

The article is about delivering a complete, working project "on time". I have a neighbor whose home is being renovated and it is already 2x the time the contractor originally quoted.

Of course it is easier for a developer to walk away from something incomplete than it is for an architect and the contractors involved in a physical project, but still, I hardly think that there is really much difference in terms of timelines.

ska
0 replies
18h45m

FWIW in my experience delays in e.g. home renos (or for that matter larger scale projects) are mostly for reasons unrelated to the engineering. In software projects, it's probably the #1 reason (i.e. we didn't know how to do it when we started).

Software is still absolutely king for number of large scale projects that just never ship, or ship but never work.

kaba0
0 replies
10h18m

To the contrary, it's nearly canonical. Most of the problems pointed out in the 70s (mythical man month) have still not been resolved, 50 years later.

Even with all of those applied, we wouldn’t be magically better. Complexity is simply unbounded. It’s almost impossible to reason about parallel code with shared mutable state.

omginternets
2 replies
21h55m

The modern car contains within it a perfect example of the dichotomy:

1. The ECU ("hard" engineering)

2. The infotainment system ("soft" engineering)

Now, an interesting thing I have noticed is that "soft" software engineering pays more. Often substantially more.

ska
1 replies
21h52m

I think your salary observation is more of a firmware vs. hardware thing, rather than "soft" vs "hard" engineering.

Further to that, it's often informative to figure out what makes a company money. The highest paid software development roles tend to be doing things that are closer to revenue, on average. If you are a software developer at a hardware company (or an insurance company, or whatever), you aren't that close. Even worse if you are viewed as a cost center.

johnnyanmac
0 replies
21h14m

Further to that, it's often informative to figure out what makes a company money. The highest paid software development roles tend to be doing things that are closer to revenue, on average.

Yeah. Who are those trillion-dollar businesses, and what do they rely on?

- Apple: Probably the better example here since they focus a lot on user-facing value. But I'm sure they have their own deals, B2B market in certain industries, R&D, and ads to take into account

- Microsoft: a dominant software house in nearly every aspect of the industry. But I wager most of their money comes not from users but from other businesses. Virtually every other company uses Windows and Word, and those that don't may still use Azure for servers.

- Alphabet: ads. Need I say more? Users aren't the audience, they are the selling point to other companies.

- Amazon: a big user facing market, but again similar to Microsoft. The real money is b2b servers.

- Nvidia: Again, user facing products but the real selling point is to companies that need their hardware. In this case, a good 80% of general computing manufacturers.

- Meta: Ads and selling user data once again

- Tesla: CEO politics aside, it's probably the 2nd best example. Split between a user-facing product that disrupted an industry and becoming a standard for fuel in the industry they disrupted. There are also some tangential products that shouldn't be underestimated, but overall a lot of value seems to come from serving the user.

General lesson here is that B2B and ads are the real money makers. If you're one level removed, that financial value drops immensely (but not necessarily to infeasible levels, far from it).

Gud
5 replies
22h16m

Trust me when I say this: even "other" engineering domains have to do patches.

The difference is that software can be used before it is fully ready, and it makes sense to do so. No one can really use a 90% finished power plant, but software at 95% done is still usually "good enough".

omginternets
1 replies
21h54m

e.g. product recalls?

Gud
0 replies
21h46m

I install high voltage switchgear on site. A common problem is all the changes that have been added during the design stage: circuits that have been removed or altered, work that has kind of mostly been done to the schemes by the overworked secondary engineer. Sometimes the schemes have been changed after all the wiring is completed and shipped to site, making it my pain in the ass when it's time to do the commissioning.

The end result is never 100% perfect, but somewhere in between "not too bad" and "good enough".

daotoad
1 replies
21h53m

I think you're 90% there. There is also the cost to apply a patch.

If you want to patch a bridge, it's gonna cost you. Even if you only need to close down a single lane of traffic for a few hours you are looking at massive expenses for traffic control, coordination with transportation agencies, etc.

For most software it's pretty inexpensive to ship updates. If you're a SaaS company regular updates are just part of your business model. So the software is never actually done. We just keep patching and patching.

In some contexts, it is much more expensive to push out updates. For example, in the 00s, I worked on a project that had weather sensors installed in remote locations in various countries, and the only way to get new software to them was via dial-up. And we were lucky that that was even an option. Making international long-distance calls to upload software patches over a 9600 baud connection is expensive. So we tested our code religiously before even considering an update, and we only pushed out the most direly needed patches.

Working on SaaS these days and the approach is "roll forward through bugs". It just makes more economic sense with the cost structures in this business.

Gud
0 replies
21h42m

Indeed. We calculate that a $1 fix in the factory costs $100 to fix on site.

rocqua
0 replies
13h6m

Thanks for this insight! It has pretty strong explanatory power. It also explains why rushed development can stall. It explains 'move fast and break things'.

There's even an added factor of learning more about what is really needed by putting a 95% done product into use.

Heck, it explains (stretching it here) SpaceX's success with an iterative approach to rocket design.

benwilson-512
1 replies
22h14m

My wife works as an acoustical consultant at a global construction firm. The things you hear about factories, offices, and even hospitals are wild. Don't get me wrong, the construction world works very hard to avoid issues, but I think we in software tend to hold other engineering disciplines up on a pedestal that doesn't quite match the messiness of reality.

jimt1234
0 replies
21h41m

Thanks for saying this. I think we in software engineering tend to think too binary: either the product is perfect (100% bug-free) or it's shit. There's always room for improvement, but compared to other engineering, overall, I think we're doing pretty good. As an example similar to your wife's, my friend used to work for one of the major car manufacturers doing almost the exact same job as Edward Norton's character in Fight Club. The cars had "bugs", they knew about it, but they didn't publicly acknowledge it until they were forced to.

rocqua
0 replies
13h13m

I mean, bridges collapse. That hasn't meant we gave up on engineering bridges. Point being, we have some risk tolerance, even for civil engineering.

Now we don't accept an engineer saying, "this bridge will probably collapse without warning", which we do accept with software. So there is a difference.

rockemsockem
0 replies
21h19m

We accept this in all fields of engineering. Everything is "good enough" and that seems to work reasonably well. You should remember this next time you hear about car recalls, maintenance work on bridges, or when some component in your laptop flakes out.

ponector
0 replies
20h46m

Imagine the same approach in other domains:

A team is flying the airplane and, at the same time, rebuilding it into a zeppelin, testing the new engines in flight.

Or construction. Let's build an apartment block, but in a few apartments we will test new materials, a new layout, etc. Once the walls of the first apartments are up, we will let people live there. We will build however we can, according to the napkin plan. In the end we will move all the tenants in and stress-test the strength of the structure. Or one day people return home and their apartment has a totally different design and layout, because someone from the HOA decided so to get a promotion.

pixl97
0 replies
22h6m

when we do not accept the same in other engineering domains?

No, you just complain that your taxes are being used to build expensive roads and bridges. Or you think airplanes are far too expensive. Or that new cars are insanely expensive.

There are cost trade-offs. In general, better quality means more expense.

Also, in software there is not an excess of software engineers relative to the demand for software. So SWEs can get paid a lot to go build crappy software.

kaba0
0 replies
10h25m

Because complexity is boundless and in software it has no cost.

Building a house has a restrictive initial budget for complexity: you don't have enough in that budget for rotating floors, or an elevator that is catapulted to the correct floor, etc. These would cost a huge amount in both engineering time and implementation time. Less complexity is easier to analyze.

In the case of software, complexity has negligible cost relative to physical systems. You can increase it ad infinitum, but proving the whole stack (hardware, OS, userspace software) correct is likely impossible even with the whole of mathematics, in certain cases.

digging
0 replies
21h13m

In addition to the other answers, there is the perennial and depressing one: Software bugs haven't killed enough people in a suitably visible/dramatic way to be regulated that heavily.

alexchamberlain
0 replies
22h17m

Most engineering domains expect failure; the fail-safes, checklists, etc. prevent it from causing real damage.

BeefyMcGhee
0 replies
21h47m

Because other engineering domains are "actual" engineering domains. They didn't just co-opt the word to have fancier sounding job titles.

fidotron
19 replies
23h50m

It is sad that people on here would believe this, and that for whole platforms it is actually true; however, it absolutely is not universally true, and the proof is all around us.

sanderjd
18 replies
22h23m

What proof is all around us?

fidotron
17 replies
21h53m

The amount of software in everyday objects which runs without exhibiting bugs, to such a degree that we do not notice most of it even exists.

sanderjd
14 replies
21h48m

Yes, but that software is not bug-free. The claim was not "it's impossible to make software that does not exhibit bugs to a casually noticeable degree".

People who know how the sausage is made will always know of a bunch of bugs that haven't been fixed exactly because they aren't impactful enough to be worth the effort required to fix them.

fidotron
13 replies
21h32m

If it works within specs it is bug free. It doesn’t matter how it is made if it works within specs, which is one of the real unfortunate truths of software.

The other is that working out the correct specification is far harder than coding is.

For example, it is trivial to write a bug-free program that multiplies an integer between 3 and 45 by two.
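A sketch of that example in Python; the input domain is so small that the entire spec can be checked exhaustively, which is exactly why calling it bug-free is uncontroversial.

```python
def double_in_range(n: int) -> int:
    # Spec: defined only for integers from 3 to 45 inclusive; result is twice the input.
    if not isinstance(n, int) or not 3 <= n <= 45:
        raise ValueError("input must be an integer between 3 and 45")
    return n * 2

# The whole input domain fits in one exhaustive check.
assert all(double_in_range(n) == 2 * n for n in range(3, 46))
```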

digging
5 replies
21h12m

If it works within specs it is bug free.

No, it's functional. If it has bugs, it's not bug-free. By definition.

fidotron
3 replies
20h56m

What would it mean to be bug free then?

To quote a former marketing guy “it should work out what the user intends to do and do it”?

digging
1 replies
20h38m

To have no bugs, which is extremely unlikely for a program of any real complexity. Having bugs, and being functional, are fairly self-explanatory and independent of each other. No need to try to conflate them.

Not sure what your quote is supposed to mean. That's a textbook example of someone who doesn't understand software at all making laughable requests of their engineers.

fidotron
0 replies
20h15m

To be bug free we must be able to define what a bug is. So, what is a bug?

The reason for that quote: from what you have said, a bug would be anything you didn't expect, whether or not it is consistent with the specification, as that merely affects whether we classify it as functional (a classification I profoundly disagree with, obviously). It is simply a negative rephrasing of what the marketing guy said, and laughable in the same way.

ponector
0 replies
20h25m

Bug free software means developers would not disclose any information about present bugs in the software they ship to customers.

Really bug-free commercial software does not exist. And can't exist. There are always bugs which are known but will not be fixed.

tonyarkles
0 replies
20h18m

Not to get too meta here but… what is your definition of a bug? One plausible definition is “system deviates from its specification”.

bluGill
3 replies
21h0m

Most devices work within the spec 99.9% of the time, but for that last 0.1% they are outside the spec. The exact % is different for different projects of course, but the idea is still there: no software operates according to spec 100% of the time.

fidotron
2 replies
20h35m

It does though. My example of adding two ints within a known finite range would operate to spec 100% of the time.

You would have to introduce things like tolerance to hardware failure, but that is outside the spec of the software as stated.

sanderjd
1 replies
55m

Yes, but that's not real software.

sanderjd
1 replies
2h17m

As you alluded to, in practice no spec fully specifies a truly bug-free implementation. If you want to consider bugs that are within the specification as being bugs in the spec rather than bugs in the implementation, fine, but in my view that is a distinction without a difference.

(Personally, I think code is itself more analogous to the specification artifacts of other professions - eg. blueprints - and the process of creating the machine code of what is analogous to construction / manufacturing something to those specs.)

And even having said that, even the vast majority of "bug free" software that nearly always appears to be operating "within spec" will have corner cases that are expressed in very rare situations.

But none of this is an argument for nihilism about quality! It is just not the right expectation going into a career in software that you'll be able to make things that are truly perfect. I have seen many people struggle with that expectation mismatch and get lost down rabbit holes of analysis paralysis and overengineering because of it.

fidotron
0 replies
2m

in practice no specs fully specify a truly bug free implementation.

Except for ones that do, obviously.

The key reason to make the distinction is that the fuzzy business of translating intention into specification needs to be fully accepted as an ongoing negotiation process of defining exactly what the specification is, and integrated into repeated deterministic verification that that is what has been delivered. Failing to do that is mainly a great way for certain software management structures to manipulate people by ensuring everything is negotiable all the time, and it has the side effect that no one can even say whether something is a bug or not. (And this pattern is very clear in the discussion in this thread: there is a definite unwillingness to define what a bug is.)

IME the process of automated fuzzing radically improves all-round quality simply because it shakes out so many of the implicit assumptions and forces you to specify the exact expected results. The simple truth is most people are too lazy and/or lack the discipline needed to do it.
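A minimal sketch of what that looks like in practice, using Python's Hypothesis library as one example of such a tool (the encode/decode round-trip property is invented for illustration). The value is less in the random inputs themselves than in being forced to write down what "correct" means for every input.

```python
from hypothesis import given, strategies as st

def encode(xs):
    return ",".join(str(x) for x in xs)

def decode(s):
    return [int(part) for part in s.split(",")] if s else []

# Property: decoding an encoded list gives back the original, for any list of ints.
@given(st.lists(st.integers()))
def test_roundtrip(xs):
    assert decode(encode(xs)) == xs
```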

ponector
0 replies
20h28m

You know, there can be bugs in the spec. And you can have software that is written according to spec but still has bugs.

When should testing start? BEFORE the first line of code is written.

ponector
0 replies
20h34m

I encounter bugs everywhere, all the time. The list goes on and on.

The microwave throws random errors from time to time.

The robot vacuum freezes.

The parking meter malfunctions.

The public transport ticket machine doesn't want to give me a ticket.

Online banking fails to make a transfer because I use the UI in another language.

Mobile banking fails to make a transfer because I use a non-native currency.

The car has issues as well: an incorrect fuel amount is injected by the computer.

Web pages have tons of bugs; many are barely usable.

avgDev
0 replies
20h58m

There are people PUTTING out fires nonstop in many apps we all use.

I have been writing code for almost a decade now, and still make errors. I don't believe anyone is capable of producing bug free software.

I have also seen plenty of bugs in apps and games. I don't think I have ever witnessed a major game patch that was bug free.

FalconSensei
14 replies
1d

If you believe you can ship bug free code, it's time to switch careers.

Unfortunately, you are correct. Shipping on time and shipping bug-free are inversely proportional, and in a world where it's usually hard to argue with PMs for more time for better testing or for paying down tech debt... it's just a reality.

colinmorelli
8 replies
1d

An infinite amount of time would not necessarily yield zero bugs.

But more importantly, once you've fixed the "show-stopping bugs," putting the software in front of customers is probably the best next step, as even if it's bug-free, that doesn't mean it solves the problem well.

reactordev
5 replies
1d

There is no such thing as zero bugs. There is only a marker in time for a suite of tests that show no bugs. That doesn't mean larvae aren't living under the wood. You can't control all the bits (unless you built your own hardware/software stack).

colinmorelli
3 replies
1d

I think we're saying the same thing? That was my point. You're never going to achieve zero bugs no matter how much time you give yourself. Focus on getting the critical path right and creating a good experience, and then get it to customers for feedback on where to go next.

[The above does not necessarily apply in highly regulated industries or where lives are on the line]

reactordev
0 replies
23h5m

Yup, I also like how you call out "get it in front of customers" as a step in the whole chain. Often sorely missed. Sometimes a bug to you is a feature to them (gasp!)... so either make it a first-class thing or train them on the correct path (while you fix the "bug").

ElectricalUnion
0 replies
23h31m

I would say that also applies in highly regulated industries or where lives are on the line.

In those, you're of course expected to do safety work and testing up to the limit of the "value of a statistical life" within the expected project impacts, but it still has time and budget limits.

BobaFloutist
0 replies
23h26m

I like to think of "zero bugs" as the asymptote. As you spend more time, you discover increasingly fewer (and less significant) bugs per unit of time. POSSIBLY at the limit of infinite time you hit 0 bugs, but even if you could, would it be worth it? Doubtful.

I can think of far better ways to spend infinite time.

nonethewiser
0 replies
21h53m

there is no such thing as zero bugs.

Ok, I think we've gone too far. There absolutely is such a thing as 0 bugs, and sometimes code changes don't have bugs. That is not to say it can be guaranteed.

quickthrower2
0 replies
18h3m

We need to define "bug", but if a bug is anything a customer (internal or external) is not happy with, that passes triage, and that you can't throw back in their face, then zero bugs would be impossible even with infinite time.

FalconSensei
0 replies
18h58m

An infinite amount of time would not necessarily yield zero bugs.

Never said that, just that a quick turnaround for deliveries will usually mean more bugs, and having some extra time usually means fewer bugs.

nradov
2 replies
1d

That's only true up to a point. There are some quality assurance and control activities that are essentially "free" in that they actually allow for shipping faster by preventing rework. But that requires a high level of team maturity and process discipline, so some teams are simply incapable of doing it. And even in ideal circumstances it's impossible to ship defect free software (plus the endless discussions over whether particular issues are bugs or enhancement requests).

johnnyanmac
0 replies
21h27m

Yeah, it's a spectrum. Clearly no one is expecting an app to be truly bug-free if the underlying compiler itself has bugs. But how often do users truly run into compiler-level bugs?

I think when the author says "bug free", it's from the user perspective, where bugs either require going out of your way to trigger or are so esoteric that it's impossible to hit them without the user knowing the code inside out. Games are definitely an industry where code quality has always dipped to the point where users can easily hit issues in normal use, and it only gets worse as games get more complex. That's where it gets truly intolerable.

FalconSensei
0 replies
18h56m

There are tools that help, but you still need time to integrate those tools, learn how to use them, etc. If you are doing unit and integration tests, you need time to not only write those, but also actually plan your tests, and learn how to write tests. Which... needs time

d0gsg0w00f
1 replies
19h25m

Like the age old builder trope.

"Cheap. Fast. Good. Pick two."

NegativeK
0 replies
18h44m

That's the optimistic viewpoint.

The pessimistic viewpoint is that you get to pick up to one.

chopsuey5540
9 replies
1d

You might be correct today, but that's a pretty sad state of affairs. Don't you think we can do better? Most other engineering domains can deliver projects without bugs, with various definitions of "bug" of course.

KeplerBoy
5 replies
23h57m

I'm not sure about that. Which engineering domain do you have in mind?

Maybe show-stopping bugs are somewhat unique to software engineering, but all somewhat-complex products are flawed to some extent imho.

It might be an unergonomic handle on a frying pan, furniture that visibly warps under the slightest load (looking at you, IKEA shelving) or the lettering coming off the frequently used buttons on a coffee machine.

bee_rider
3 replies
22h16m

But there do exist shelves that don’t warp, when used within some reasonable bounds.

I’d also quibble about the buttons on the coffee machine. They might be properly designed, just subject to the normal wear-and-tear that is inevitable in the real world. This is not a defect, physical devices have finite lifespans.

As far as computers go… if we got to the point where the main thing that killed our programs was the hard drives falling apart and capacitors drying out, that would be quite impressive and I think everyone would be a little bit less critical of the field.

kaba0
1 replies
9h26m

A shelf is a dumb, primitive, static object though. Even a simple hello world goes through a huge number of lines of code before it is displayed on a screen, ANY one of which being faulty could result in a bug visible to the end user. And most of that is not even controlled by the programmer: they might call into libc, which calls into the OS, which calls into drawing/font-rendering libraries, which call into video card drivers that "call" into the screen's firmware.

And this is almost the simplest possible program.

bee_rider
0 replies
3h33m

I think “hello world” is not really the simplest program in this context, in the sense that printing, as you note, involves touching all that complicated OS stuff. In terms of, like, actual logic complexity implemented by the programmer compared to mess carried along by the stack, it is really bad.

But I mean, I basically agree that the ecosystem is too complicated.

saled
0 replies
21h47m

Formally verified, bug free software exists. It just costs a LOT to produce, and typically isn't worth it, except for things like cryptographic libraries and life or death systems.

As the discipline has evolved, the high-integrity tools are slowly being incorporated into typical languages and IDEs to improve quality more cheaply in general. Compare C++ to Rust, for example: whole classes of bugs are impossible (or much harder to introduce) in Rust.

ponector
0 replies
20h17m

There are many examples of catastrophic bugs in real life.

New bridges collapse, dams overflow, planes crash, vaccines kill, food kills, towers and skyscrapers lean, ships capsize - catastrophic flaws are everywhere.

sanderjd
0 replies
22h22m

Their stuff has bugs too.

imiric
0 replies
20h43m

We can certainly do better, but it takes a _lot_ of time, effort, care and discipline; something most teams don't have, and most projects can't afford.

Bugs arise from the inherent complexity introduced by writing code, and our inability to foresee all the logical paths a machine can take. If we're disciplined, we write more code to test the scenarios we can think of, which is an extremely arduous process that, even with the most thorough testing practices (e.g. SQLite), still can't produce failproof software. This is partly because, while we can control our own software to a certain degree, we have no control over the inputs it receives and all of their combinations, nor over the environment it runs in, which is also built by other humans and has its own set of bugs. The fact that modern computing works at all is nothing short of remarkable.

But I'm optimistic about AI doing much better. Not the general pattern matching models we use today, though these are still helpful with chore tasks, as a reference tool, and will continue to improve in ways that help us write less bugs, with less effort. But eventually, AI will be able to evaluate all possible branches of execution, and arrive at the solution with the least probability of failing. Once it also controls the environment the software runs in and its inputs, it will be able to modify all of these variables to produce the desired outcome. There won't be a large demand for human-written software once this happens. We might even ban software by humans from being used in critical environments, just like we'll ban humans from driving cars on public roads. We'll probably find the lower quality and bugs amusing and charming, so there will be some demand for this type of software, but it will be written by hobbyists and enjoyed by a niche audience.

ElectricalUnion
0 replies
23h22m

To be an engineer is to know the expected system requirements and build a product that is extremely optimized for the system requirements.

There's a saying that I think fits very well here: "Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands."

You don't want a bridge that costs 50 years and quadrillions of dollars to build; you want a cheap bridge, safe for the next 50 years, done in 2 years.

I would not call the resulting bridge "bug free", of course.

jasondigitized
5 replies
23h49m

It's perfectly acceptable to let bugs escape into production if the "cost" of fixing that bug is higher than the "cost" to the user experience / job to be done. A bug that takes a week to fix and will only be encountered by a small number of users in a small number of obscure scenarios may not need to be fixed.

billti
2 replies
22h7m

I think a common error is taking this view in isolation on each bug.

Fact is, if you ship enough 'low probability' bugs in your product, your probabilities still add up to a point where many customers are going to hit several of them.

I've used plenty of products that suffer from 'death by a thousands cuts'. Are the bugs I hit "ship blockers"? No. Do I hit enough of them that the product sucks and I don't want to use it? Absolutely.

pixl97
0 replies
22h4m

Software is commonly built on non-fungible components and monopolies.

Right, you don't want to use Microsoft Word, or SalesForce, or Apple vs Android, or X Whatever. It's highly unlikely you'll have a choice if you use it though.

owlbite
0 replies
17h28m

Very much this, and low risk bugs compound at scale.

If you're in a very large FAANG-type company, and say you have 1000 components that each ship 1 bug each day that has a 0.1% chance of breaking something critical, that translates to a less than 50% chance you ship a working OS on any given day. And that may mean the entire company's productivity is impacted for the day, depending on how broken it is.
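The arithmetic behind that claim, using the numbers given above:

```latex
P(\text{nothing critical breaks today}) = (1 - 0.001)^{1000} \approx e^{-1} \approx 0.37
```

So roughly a one-in-three chance of a clean day, comfortably "less than 50%".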

AlotOfReading
1 replies
23h37m

This presupposes that you know what/where bugs will be found and how they'll impact future users. In my experience knowing either of these is very rare at the point where you're "building quality".

xkekjrktllss
0 replies
22h55m

how they'll impact future users

Most people in this thread understand that users' interests only matter insofar as they impact business profit.

I just think you're having a different conversation.

Dalewyn
4 replies
23h32m

A saying that I once heard and appreciate goes like this:

"A programmer who releases buggy software and fixes them is better than a programmer who always releases perfect software in one shot, because the latter doesn't know how to fix bugs."

Perhaps similar to the saying that a good driver will miss a turn, but a bad driver never misses one.

MaxBarraclough
3 replies
21h50m

That's backward. A successful software development methodology will tend to catch bugs early in the development pipeline.

The "doesn't know how to fix bugs" idea seems pretty silly.

Dalewyn
2 replies
19h10m

I think you misunderstand, I'm talking about a programmer who makes perfect, bug-free code in one shot. There are no bugs to catch and fix, because this "perfect" programmer never writes buggy code.

The moral of the sayings is: that "perfect" programmer is actually a bad programmer, because he wouldn't know how to fix bugs, by virtue of never needing to deal with them.

To reuse the driver analogy, the driver who never misses a turn is a bad driver because he doesn't know what to do when he does miss a turn.

MaxBarraclough
1 replies
8h22m

I don't see that I misunderstood anything.

If a software developer consistently delivers high-quality software on time and on budget, that means they're good at their job, pretty much by definition. It would make no sense to infer they're bad at fixing bugs.

It would make sense to infer instead that they're good at catching and fixing bugs prior to release, which is what we want from a software development process.

the driver who never misses a turn is a bad driver because he doesn't know what to do when he does miss a turn

Missing a turn during a driving test will never improve your odds of passing.

The driver who never misses a turn presumably has excellent awareness and will be well equipped to deal with a mistake should they make one. They also probably got that way by missing plenty of turns when they were less experienced.

Dalewyn
0 replies
3h28m

Yeah, you're misunderstanding.

What we are discussing isn't a real programmer we might actually find. No, we are talking about a hypothetical "perfect" programmer. This "perfect" programmer never wrote a bug in his entire life, right from the moment he was born; he never had a "when they were less experienced" phase.

Obviously, that means this "perfect" programmer also never debugged anything. For all the perfect code he writes, that makes him worse than a programmer who writes buggy code but also knows how to go about debugging them.

rocqua
1 replies
13h18m

I think with formal analysis, whole bug classes can be eliminated. Add to that a rigorous programming style, and 'bug-free' code is within reach. There will remain bugs that make it through, but they will be rare and will need a chain of mistakes.

Ways of coding to this kind of standard currently exist. But they are stupid. They involve things like no dynamic memory allocation, only fixed-length for-loops, and other very strict rules. These are used in aerospace, where bugs are rather costly and rare.
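A rough illustration of the fixed-bound-loop discipline, sketched in Python rather than the C these rule sets (e.g. the JPL "Power of Ten" guidelines) are written for; the bisection routine and the bound below are invented for illustration.

```python
MAX_ITERS = 200  # hard upper bound chosen up front, not discovered at runtime

def find_root(f, lo, hi, eps=1e-9):
    """Bisection with a statically bounded loop instead of 'while not converged'."""
    for _ in range(MAX_ITERS):
        mid = (lo + hi) / 2.0
        if hi - lo < eps:
            return mid
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    # Hitting the bound is treated as a defect to investigate, never as something to retry.
    raise RuntimeError("iteration bound exceeded")
```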

kaba0
0 replies
10h30m

What seems to yield much better results is to have the program built by two separate teams to a standard. Then both programs can be run simultaneously, checking each other's output; I believe something like this is actually used in the aerospace industry.
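A minimal sketch of that pattern (often called N-version programming) in Python; the two square-root implementations and the tolerance are invented for illustration, and the only point is the cross-check at the end.

```python
def sqrt_newton(x):
    # Implementation A: Newton's method (assumes x >= 0).
    guess = x if x > 1.0 else 1.0
    for _ in range(60):
        guess = 0.5 * (guess + x / guess)
    return guess

def sqrt_bisect(x):
    # Implementation B: bisection on [0, max(x, 1)] (assumes x >= 0).
    lo, hi = 0.0, max(x, 1.0)
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if mid * mid < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def checked_sqrt(x, tol=1e-6):
    # Run both independently written versions and fail safe if they disagree.
    a, b = sqrt_newton(x), sqrt_bisect(x)
    if abs(a - b) > tol:
        raise RuntimeError(f"redundant implementations disagree: {a} vs {b}")
    return a

print(checked_sqrt(2.0))  # ~1.4142135
```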

sublimefire
0 replies
6h8m

Reminds me of a story about an engineer who was participating in a meeting with managers and partners. The manager was speaking of his team and how they would deliver the software. Then he asked the engineer to give assurance that the software would be bug-free. To this the engineer responded by saying he could not guarantee there would be no bugs. The manager went nuts and started screaming.

Engineers cannot be responsible for the entire vertical stack and the components which were built by others. If somebody claims it is bug-free then they don't have enough experience. Pretty much anything can fail; we just need to test as many cases as possible with a variety of available tools to reduce the chances of bugs.

reactordev
0 replies
1d

This kind of wisdom only comes from experience, I think. Either that or higher-order thinking. Like the article says, most of the time testing/TDD/QA is bolted on after the fact. Or a big push at the end with "QA Sprints" (are you sprinting or are you examining? what exactly is a QA sprint? I know what it is).

Once you get beyond "I wrote a function" and "I tested a function" and even still "I tested a function that was called by a function over the wire", you will come to a realization that no matter how edgy your edge cases, no matter how thorough your QA, there will always - ALWAYS be 0-day "undefined behavior" in certain configurations. On certain hardware. On certain kernels. It's an assurance that I guarantee that I'm almost positive it's bug free, since it passed tests, it passed human eyes, and it passed review - fingers crossed.

fivre
0 replies
21h50m

it will be necessary to deliver software without bugs that could have reasonably been avoided in time

I've had this sentiment thrown at me too often by peak move-fast-and-break-things types. It's too often a cudgel to dispense with all QA in favor of more new feature development. Shipping shit that has the same pattern of flaws you've encountered in the past, when you've been shown ways to catch them early but couldn't be bothered, isn't accepting that you can't catch everything; it's creating a negative externality.

You usually can make it someone else's problem and abscond with the profits regardless, but that doesn't mean you should.

OnlyMortal
0 replies
1d

Correct. As I like to tell my team, if I’ve typed something I’ve caused a bug. It’s all about risk.

I assume I’m not infallible.

Writing some unit tests, C++ and mocking in my case, gives both the team and myself some confidence that I've not made things worse.

I’m the most experienced dev in the department.

HeyLaughingBoy
0 replies
20h38m

Nothing is bug free.

Not buildings, not bridges, not cars, not airplanes, not software. There are mistakes in every field of engineering and the best we can hope for is to minimize them as much as possible.

Engineering is knowing (among other things) how to categorize the potential mistakes, develop procedures to reduce the chance of them being made and in the case that some slip through (and they will), estimate their impact and decide when you're "good enough."

Software engineering is no different.

titzer
109 replies
21h51m

We do teach these things; they are just not core CS topics, but rather covered in other areas, relegated to electives like a software engineering course. At CMU we have an entire Master's program for software engineering and an entire PhD program (in my department). We teach exactly the kinds of things the blog post is about, and more. Software Engineering is a whole field, a whole discipline.

I get that this is a blog post and needfully short, but yes, there are courses that teach these skills. There's a big disconnect between CS and SE in general, but it's not as bad as "no one teaches how to build quality software". We do work on this.

hardwaregeek
31 replies
21h17m

There's a massive gap between taught at CMU and taught at most universities. And even if it is taught, it's usually outdated or focused on very literal stuff like how to write web applications. I'd have killed for a class that actually focuses on implementation, on teamwork, on building complicated systems.

heelix
18 replies
19h5m

I've wished that students would be required to hand their semester long project to the person next to them each week.

steveBK123
10 replies
16h2m

Almost every CS course I took went the other way and had strict cheating policies that essentially made any group work verboten. There was 1 group project in 1 course I took in 4 years.

My spouse on the other hand took an explicitly IT program and they had group projects, engaging with real world users, building real solutions, etc.

sdiupIGPWEfh
8 replies
14h35m

strict cheating policies that essentially made any group work verboten

If I had to guess, some polytechnic school or another?

With some classes even forbidding discussing work with other students, where each assignment required a signed (digitally or otherwise) affidavit listing everyone you consulted, acknowledging that if you actually listed anyone, you were admitting to violating the academic honesty policies and if you didn't list anyone yet had spoken with others you were of course also violating the academic honesty policies.

Where only consulting the professors or TAs was allowed, the TAs were never around, and the professors refused to help because if they gave you any hints, it would apparently give away the answer, which would be unfair to the other students.

virgilp
2 replies
12h14m

I taught a course "in a previous life" and while I wasn't anything close to as strict as you say here, I can tell you the flip side: students would copy, modify superficially (some, less superficially) and then claim "it's my work, we just talked, that's why the similarities!" (with some even having the nerve to say, "it's not copied, look, plagiarism detection software says similarity is less than x%!"). Perhaps I was wrong, but I really wanted the students who took the course to put in the work themselves. Just choose a different course/course set, you _knew_ this one was going to be hard!

So yeah, the guideline was that we'd be the eventual judges of what was "too similar", and if you're concerned, just don't discuss implementation details with anyone. I realize it prevents honest collaboration, and that's bad too... but sometimes it's a "damned if you do, damned if you don't" kind of situation.

toolslive
0 replies
10h18m

What we did when correcting the homework is compare the signature of the assembly output (not manually of course). You can move functions around, rename them, change the names of variables, .... but the signature of the instructions remains the same.

We caught 2 guys; of course we didn't know who copied from whom, but we quickly found out by challenging each of them with a fizz-buzz kind of question.
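A minimal sketch of that kind of comparison in Python (the commenter's exact scheme isn't given; this one hashes the per-chunk sequence of instruction mnemonics, which survives renaming identifiers and reordering functions, and it assumes GAS-style listings where '#' starts a comment and labels end with ':').

```python
import hashlib

def function_signatures(asm_text):
    """Return one hash per label-delimited chunk, built from instruction mnemonics only."""
    sigs, current = set(), []

    def flush():
        if current:
            sigs.add(hashlib.sha256(" ".join(current).encode()).hexdigest())
            current.clear()

    for raw in asm_text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and surrounding whitespace
        if not line:
            continue
        if line.endswith(":"):                # a label starts a new chunk
            flush()
            continue
        current.append(line.split()[0])       # keep the mnemonic, ignore operands
    flush()
    return sigs

# Two submissions look suspicious when their signature sets largely overlap.
```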

steveBK123
0 replies
3h59m

Sure, but given how we all work in the real world.. there should be courses that are explicitly "group project" based.

denton-scratch
2 replies
9h2m

I had students that copied one another's work; in fact I had few students that didn't copy. It made it impossible to mark their projects correctly, so I asked my more experienced colleagues.

The best advice I got was to explain the difference between collaboration (strongly encouraged) and plagiarism (re-take the course if you're lucky). Forbidding collaboration is a disastrous policy, so an instructor shouldn't have a problem with giving the same mark to each member of a group of collaborators. You just have to test that each individual can explain what they've submitted.

steveBK123
1 replies
4h0m

My school auto-graded everything (file drop your code to a server by midnight & get a score back). I don't recall a single instance of TA/professor written or verbal feedback on code.

denton-scratch
0 replies
40m

Yuh. I guess that's "modern times". I taught in the late eighties, and auto-grading wasn't a thing.

FWIW, I was a "temporary visiting lecturer", i.e. contract fill-in help. I had to write the syllabus (literally), the course plan, and the hand-outs. I also had to write all the tests and the exams; and I had to mark the exams, and then sit on the exam board through the summer holidays. The pretty girls would flutter their eyelashes at me, and ask for leniency. I suspect they took their cue from my louche colleague, who was evidently happy to take sexual advantage of his students in exchange for better marks.

[Edit] I liked this colleague; but I would not have introduced him to my wife or my daughter.

skeeter2020
0 replies
3h24m

I've taken course work at 6+ different universities, and in my experience different groups of international students have very different perspectives on what is cheating vs. collaboration. I think it's likely attributed to the western ideal of the individual vs. the collective.

sdiupIGPWEfh
0 replies
3h1m

Also, for whatever my purely anecdotal experience is worth, I'd say the general quality of a professor's teaching was negatively correlated with how hard-ass they were about anti-collaboration policies. That also held true for a couple classes I'd dropped and retaken with a different prof.

One might think I'm just rating easier professors higher, but no, there were definitely enough sub-par teachers with lax policies and forgiving grading who failed to impart what their course was supposed to cover. There were also tough professors I learned a lot from. It's just that I can't recall a single decent instructor among those obsessed with anti-collaboration policies.

WWLink
0 replies
12h28m

That's crazy! My uni had senior design projects, and labs. Shoot, my community college had those, too. Most of our classes had lab projects, and sometimes the lab projects were group projects. We were encouraged to help each other out. I can't imagine a class where collaboration was totally against the rules.

And I mean, that went with everything. In my EE classes we did lots of collaboration. We had study groups, homework groups, etc. It was a lot of fun. I'm sad to hear there are places where they straight up make that against the rules.

My uni also had engineering ethics classes - in my major it was mandatory to take 2 of them. I think it makes sense and should be more common for software engineers. A lot of software is used to operate planes, cars, medical equipment, and nowadays also help make decisions that can have life-threatening consequences.

accurrent
3 replies
13h37m

The university I went to had this. We had to maintain an application that was built by our seniors and then hand that off to the next batch.

skeeter2020
1 replies
3h22m

This is a really good idea, but I would hate to have to do it... unless I was the senior who got to do the original greenfield development!

robinson7d
0 replies
2h36m

Worst of all: the first seniors get to refactor the professor’s code.

akoboldfrying
0 replies
8h4m

I'm impressed by the closeness to reality, but curious how they could assess performance consistently?

usrusr
0 replies
5h59m

Not each week, but at my university, they had enough focus on the science part that most semester-long solo projects were semester-long and solo only in name. Students were assigned some component or incremental stage of some larger project spanning years and multiple PhDs, not some assignment for assignment's sake. That certainly did not make them perfect learning environments, because students were left on their own in how to organize cooperation (postgrads running those macro projects might be considered POs, but not technical leads, and were just as clueless as the rest), but the "pass on to the next each week" approach would have exactly the same gaps.

fsckboy
0 replies
17h10m

that would be a cool semester-long project assignment: everybody has to plan/architect their own software project and then work on implementing one, but you don't work on your own project; that one you just manage.

ReactiveJelly
0 replies
11h45m

Me too. All throughout my education, group projects and speeches were both these scary, uncommon things that you could afford to fuck up, and I skated by on individual talent.

Now I'm an adult, several years into my career, wishing that the schools had assigned more low-stakes group projects instead. It's a muscle that was never exercised.

ricw
5 replies
19h26m

That was a dedicated software engineering course I took at university. Teams of 5. Had to put the soft eng theory into practice. And it's not CMU.

calvinmorrison
3 replies
18h34m

Do they funnel soon-to-be grads into stressful Zoom calls where product managers handwave an entire legacy stack still somehow running on ColdFusion and want a rebrand with 'AI' starting Jan 1??

ekidd
2 replies
17h13m

No, but our professor assigned teams at random, gave us a buggy spec, and then changed the spec with one week to go during finals week. (This last part appears to have been planned; they did it every year.)

This was a surprisingly effective course, if sadistic.

ungamedplayer
0 replies
12h3m

Talk about preparing graduates for the real world!

calvinmorrison
0 replies
15h19m

It certainly trains the programmer to not box themselves in with assumptions. What's one more for loop?

bcrosby95
0 replies
17h41m

I had something like this too - it was required for my CS degree. Our class split up into teams of 5, but the whole class of 30 was working on a single project. It was a semester-long project and the teams also had to integrate with each other to build the final solution.

sasaf5
3 replies
20h42m

+1 for teamwork... I wish there were an established field of study for teamwork in software. If you are impressed with 10x devs, imagine 10x teams!

habnds
2 replies
17h47m

I don't believe this personally, but I bet that many people with MBAs would tell you that this was what their MBA was about.

eastbound
0 replies
6h34m

In software engineering schools in France, both in my personal experience and that of other Bac+5 people I've hired, the entire Bac+4 year was dedicated to group projects. When you have 8 group projects in parallel at any time, it does teach teamwork, management, quality control, all the good stuff.

After the first failed project, everyone agrees that they need a boss without assigned duties. Then in 5th year, we’d have one or two class-wide projects (20-30 people).

This was complemented by joining event-organizing charities on campus, where you'd work 3-5 hrs a week to put on a hundred-thousand-to-a-million-dollar event (NGOs on campus, or concerts, or a sports meeting, to each their preference, but it was the culture). And it wasn't always a good time together; it's not flowers all the way down.

I’m only surprised some school might consider engineering without group effort. On the other hand, I deeply deeply regret spending this much time in charities and not enough dating, as I have never been recognized for charity service, and family was important to me.

cqqxo4zV46cp
0 replies
17h23m

Yeah. A fair bit of this is just "people working in teams" stuff that people who buy into 'developer exceptionalism' will tell you is sacred software developer knowledge. It isn't.

Software engineering isn't just about teamwork, and not all software development-related teamwork skill is generalisable to other industries, but it's far from uncommon for there to be some trendy blog post laying out the sorts of things that, yes, an MBA program will teach someone. Which would be fine, if not for the fact that these same people will scoff at "clueless MBAs".

skeeter2020
0 replies
3h31m

This just isn't realistic in a general CS undergrad though. Students don't have the time or foundations to build a realistically complicated system, and the institutions don't have the skills or currency to teach them how. You complain about content being out of date, but the foundations of quality software are by and large not technical and don't go out of style. The implementations sure do.

What you're asking for is where vocational (2-year) programs really shine, and one of the few places where I've seen bootcamp graduates have an advantage. Unfortunately both then lack some really valuable underpinnings, for example relational algebra. It seems that once again there is no silver bullet for replacing time and experience.

jeffreygoesto
0 replies
12h3m

And then there was "Twenty dirty tricks to train software engineers" which I found even more suitable as preparation for working in the industry.

__loam
16 replies
21h27m

I took one of these kinds of classes in my master's program this year. They were totally obsessed with UML. It would be nice if these classes could move beyond dogma that is decades old.

titzer
5 replies
21h14m

CMU constantly reevaluates its MSE program with input from many different angles. I've participated here and I think we're trying hard to balance important foundational knowledge with practical skills of the day. I don't think we over-emphasize UML or any one particular silver bullet in our program.

ska
4 replies
20h57m

To a first approximation, software developers don't have masters degrees. If you are thinking about changing how an industry does its work, focusing on graduate courses seems counterproductive.

HeyLaughingBoy
3 replies
20h43m

I disagree. I have a Master's in Software Engineering and the way to change things is for those with the formal education to try and spread good practices as much as possible in the workplace. Sometimes the main benefit is just knowing that good practices exist so you can seek them out.

The biggest impact I've had at the places I've worked has been about procedures and methodology, not how to use UML or draw a dataflow diagram.

- Have a process around software releases. What it is matters less than that it's repeatable.

- Review your designs, review your code, don't do things in isolation.

- Have a known location for documents and project information.

- Be consistent, don't do every project completely differently.

- Get data before you try to fix the problem.

- Learn from your mistakes and adjust your processes to that learning.

- And many more things that sound like common sense (and they are) but you'd be amazed at how even in 2023 many companies are developing software in complete chaos, with no discernible process.

ska
2 replies
20h28m

What I'm saying is that if your goal is to introduce more engineering rigor and your plan is for the tiny percentage of graduate school graduates to percolate these ideas through the industry, it's probably a bad plan and likely to fail.

This was a thread about why software developers don't do engineering like other disciplines. One partial answer is that those other disciplines take it much more seriously at the undergraduate level, at least on average.

Probably the more compelling answer is that the industry doesn't really want them to, for the most part.

but you'd be amazed at how even in 2023

I really wouldn't.

sheepshear
1 replies
19h18m

Probably the more compelling answer is that the industry doesn't really want them to, for the most part.

This is it. Everyone is making money hand over fist despite not doing it. You might want it, but you don't need it.

HeyLaughingBoy
0 replies
6m

Sad but true.

petsfed
5 replies
20h34m

What would be better? Change tools every 3-5 years like the industry does, so by the time any given instructor actually has a grasp on a particular tool or paradigm, it's already obsolete (or at least fallen out of fashion) too?

I'm no fan of UML, but the exercise is to teach students how to plan, how to express that plan, and how to reason about other people's plans. The students will certainly draw a lot of flow diagrams in their careers, and will almost certainly deal with fussy micromanagers who demand their flow diagrams adhere to some arbitrary schema that has only limited impact on the actual quality of their work or documentation.

UML is complete, at least.

failbuffer
1 replies
19h52m

A whiteboard is complete. Every other way of diagramming software is deficient. Change my mind. ;-)

master-lincoln
0 replies
9h7m

A whiteboard is just a medium to draw on. UML is a standard that says how to express certain ideas as symbols that can be drawn.

It's not clear to me what your argument is. Is it that you should use whiteboards to draw UML instead of special UML software? If so, be prepared to take much longer to draw the diagram.

Or do you mean UML is deficient compared to free drawing of symbols on a whiteboard without a standard? If so, be prepared that nobody will completely understand your diagram without explanation.

__loam
1 replies
15h38m

I haven't seen a UML diagram once in 7 years of working. The approach presented in the book "A Philosophy of Software Design" is much better than the outdated bullshit from the 90s.

rvdginste
0 replies
4h11m

I never got the hate against UML. To me, UML is just a language to communicate about several aspects of a technical design, and to visualize a technical design to make it easier to reason about it.

I did not read the book "A Philosophy of Software Design", but I just scanned the table of contents, and it is not clear to me how "A Philosophy of Software Design" conflicts with using UML.

Are you telling me that in those 7 years of working, you never once used a class diagram? Or a database diagram? Or an activity diagram, deployment diagram, sequence diagram, state diagram?

I find that hard to believe... how do you explain your design to other people in the team? Or do you mean that you do make that kind of diagrams, but just use your own conventions instead of following the UML standard?

Personally, I often use UML diagrams when I am working on a technical design and I use those diagrams to discuss the design with the team. On almost every project I create a class diagram for all entities in the system. I rarely make a database diagram because that often is a direct translation of the class diagram. For some of the more complex flows in the system, I create an activity diagram. For stuff that goes through state transitions, I create state diagrams. In my case, this really helps me to reason about the design and improve the design. And all those diagrams are also very useful to explain the whole software system to people who are new on the project.

That does not mean that my diagrams are always strictly UML-compliant or that I make a diagram for every little piece in the software system. The goal is not to have UML-compliant diagrams, but to efficiently communicate the technical design to other people, and UML is nice for this because it is a standard.

Kamq
0 replies
19h48m

Change tools every 3-5 years like the industry does, so by the time any given instructor actually has a grasp on a particular tool or paradigm, it's already obsolete (or at least fallen out of fashion) too?

I mean, yeah. Seeing that wave happen over the course of their college career would probably be better prep for a career than most CS classes.

crysin
2 replies
21h20m

Got my MS in SE at DePaul University in Chicago. Wrote more UMLs for those classes than I’ve done in 10 years of actual software development.

mrweasel
1 replies
4h45m

While I'm not suggesting that UML is necessarily the solution (I hope it's not), the observation that so few developers touch anything that even looks like UML is a good indication that a lot of software is in fact NOT designed; it's just sort of cobbled together from a few sketches on a whiteboard.

rvdginste
0 replies
4h1m

I hope that people say they hate UML and then just make UML (class, database, activity, ...) diagrams according to their own conventions, but I am afraid you are right and that a lot of software is just "cobbled together"...

hinkley
0 replies
20h54m

Wait. This year?

I haven’t touched UML for ten years.

bakul
12 replies
21h24m

Do SE classes teach debugging skills? I hope they do. So many times I have seen people try random things rather than follow a systematic approach.

hinkley
5 replies
20h57m

I worked with programmers around my junior year and some of them were in classes I was in. I thought they were all playing one-upsmanship when I heard how little time they were spending on homework. 90 minutes, sometimes an hour.

I was a lot faster than my roommate, and after I turned in my homework I’d help him debug (not solve) his. Then I was helping other people. They really did not get debugging. Definitely felt like a missing class. But it helped me out with mentoring later on. When giving people the answer can get you expelled, you have to get pretty good at asking leading questions.

Then I got a real job, and within a semester I was down below 2 hours. We just needed more practice, and lots of it.

SamuelAdams
3 replies
18h40m

This is why internships and real-world experience are so important. A course is 3 in-class hours a week over 12-14 weeks, typically. After homework and assignments it is ultimately maybe 40-80 hours of content.

Which means you learn more in one month on a normal, 40-hour-workweek job than you do in an entire semester of one course.

astura
1 replies
17h43m

A course is 3 in class hours a week over 12-14 weeks typically. After homework and assignments it is ultimately maybe 40-80 hours of content.

Huh? I was spending 20+ hours a week on assignments alone in upper level software engineering classes.

Also, internships were required.

hinkley
0 replies
16h29m

Were you the sort of person who responsibly worked a little bit on the assignments over the course of the week/two weeks, or did you carve out an evening to try to get the whole thing done in one or two sittings?

My group did the latter. I think based on what we know now about interruptions, we were likely getting more done per minute than the responsible kids.

Including reading, we might have been doing 15 hours a week sustained, across 2-3 core classes.

But these were the sort of people who got their homework done so they could go back to the ACM office to work on their computer game, or work out how to squeeze a program we all wanted to use into our meager disk space quota.

Anything more than a B was chasing academia over practical knowledge. B- to C+ was optimal.

cqqxo4zV46cp
0 replies
17h17m

Not all hours are created equal. This is on the verge of saying “I took 1,000 breaths on my run, so if I do that again, it’s like going for a run.” Just because you’re measuring something, it doesn’t mean that you’re measuring the right thing. You’re just cargo-culting the “formal education is useless” meme.

gpcz
0 replies
16h4m

I believe that software-related college degrees are mainly there to get the horrible first few tens of thousands of lines of code out of people before they go into industry.

GuB-42
2 replies
6h28m

But debugging is about "trying out random things". You can call it a Monte-Carlo tree search if you want to sound smart.

And I don't feel it is something worth teaching in universities, because it is 90% experience, and for me the point of universities is not to replace experience, just to give students enough so that they are not completely clueless at their first job; the rest will come naturally.

What universities can teach you are the tools you can use for debugging (debuggers, logging, static and dynamic analyzers, etc.), the different classes of bugs (memory errors, the heap, the stack, injection, race conditions, etc.), and testing (branch and line coverage, mocking, fuzzing, etc.). How to apply the techniques is down to experience.

In fact, what I find most disappointing is not junior programmers struggling with debugging; this is normal, you need experience to debug efficiently and juniors don't have enough yet. The problem is when seniors are missing entire classes of tools and techniques, as in, they don't even know they exist.

usrusr
0 replies
4h23m

I suspect that GP was talking about some notetaking tactics to systematically narrow things down while throwing educated guesses against the wall. Because so much of debugging is running in circles and trying the same thing again. No amount of notetaking can completely remove that, because mistakes in the observation are just as much an error candidate as the code you observe, but I'm convinced that some "almost formalization" routine could help a lot.

Good points on the tool side. While "debugger driven development" is rightfully considered an anti-pattern, the tool-shaming that sometimes emerges from that consideration is a huge mistake.

bakul
0 replies
1h23m

The "Monte-Carlo tree search" space is usually far too large for this to work well!

It is true that initially you may not know where the bug is but you have to collect more evidence if possible, see if you can always cause the bug to manifest itself by some tests, explore it further, the goal being to form a hypothesis as to what the cause may be. Then you test the hypothesis, and if the test fails then you form another hypothesis. If the test succeeds, you refine the hypothesis until you find what is going wrong.

Such hypotheses are not formed randomly. You learn more about what may be the problem by varying external conditions or reading the code or single stepping or setting breakpoints and examining program state, by adding printfs etc. You can also use any help the compiler gives you, or use techniques like binary search through commits to narrow down the amount of code you have to explore. The goal is to form a mental model of the program fragment around where the code might be so that you can reason about how things are going wrong.
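
The "binary search through commits" step in particular can be made almost mechanical. A rough Python sketch (the commit list and the build_and_test predicate are placeholders; in practice "git bisect run" does the same job):

    # Find the first bad commit in an ordered history.
    # Assumes commits are ordered oldest-to-newest, commits[0] is known good,
    # commits[-1] is known bad, and build_and_test(commit) returns True when
    # the bug reproduces at that commit.
    def first_bad_commit(commits, build_and_test):
        good, bad = 0, len(commits) - 1
        while bad - good > 1:
            mid = (good + bad) // 2
            if build_and_test(commits[mid]):  # bug reproduces -> move the "bad" end
                bad = mid
            else:
                good = mid
        return commits[bad]

This only narrows down where the defect crept in; forming and testing a hypothesis about why it is wrong still proceeds as described above.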

Another thing to note is that you make the smallest possible change to test a hypothesis, at least if the bug is timing- or concurrency-related. Some changes may change timing sufficiently that the bug hides. If the symptom disappears, it doesn't mean you solved the problem -- you must understand why, and understand whether the symptom merely disappeared or the bug actually got fixed. In one case, as I fixed secondary bugs, the system stayed up longer and longer. But these are like accessories to the real murderer. You have to stay on the trail until you nail the real killer!

Another way of looking at this: a crime has been committed and since you don't know who the culprit is or where you may find evidence, you disturb the crime scene as little as possible, and restore things in case you have to move something.

But this is not usually what happens. People change things around without clear thinking -- change some code just because they don't like it or they think it can be improved or simplified -- and the symptom disappears and they declare success. Or they form a hypothesis, assume it is right and proceed to make the "fix" and if that doesn't work, they make another similar leap of logic. Or they fix a secondary issue, not the root cause so that the same bug will manifest again in a different place.

zogrodea
0 replies
7h5m

What do you mean by people trying random things? I think that approach (if I understand the term correctly) is more or less what debugging is as a form of scientific investigation.

If you observe a car mechanic trying to find the problem with a car, he would go like: "is this pin faulty? No. Is the combustion engine faulty? No. Are the pedals faulty? Yes." where the mechanic starts with some assumptions and disproves them by testing each of those assumptions until (hopefully) the mechanic finds the cause of the fault and is able to fix it. Similar types of investigations are important to how natural science is done.

So it would be helpful if you can clarify your intended meaning a bit more. Maybe I or someone else would learn from it.

trealira
0 replies
20h39m

I'm not sure if software engineering classes in particular do, but at my university, they teach C++ in the second required course, and they teach you about using GDB and Valgrind on Linux there. They don't explicitly teach you about systematically debugging, though, beyond knowing how to use those two programs.

manicennui
0 replies
17h44m

Trying random things seems to be how a large number of professional software engineers do their jobs. Stack Overflow and now CodeGPT seem to contribute to this.

foobiekr
10 replies
16h47m

For years a friend tried to get his department (where he worked as a lecturer) to add a software engineering course where the basic syllabus was to (1) receive a TOI from the last semester and the code base, (2) implement some new features in the code base, (3) deploy and operate the code for a while, and (4) produce a TOI for the next semester.

The code base was for a simple service that basically provided virus/malware scanning and included the malware scanner and signatures (this ensured there would never be an end to the work - there's always more signatures to add, more features, etc.)

I thought this was a fantastic idea and it's a pity he never convinced them. That was more than fifteen years ago, and in his plan it would have just run forever.

AlphaWeaver
4 replies
15h23m

At my university (US, state school) we had a software engineering course exactly like this. It was great in theory, but in practice the experience was rushed, the codebase was poor quality (layers upon layers of nothing features with varying code quality), and the background knowledge was completely ignored. The application we had to work on was a Tomcat Java web application with an Angular frontend, when neither of those technologies was taught in any other class (including electives).

This approach to education can work, but I think simulating/mocking portions of this would have been more helpful (it could've been a teacher/TA-managed codebase we started with, rather than the monstrosity passed between generations of inexperienced students).

Mtinie
1 replies
13h24m

Am I understanding correctly that your concern was that the course is too close to reality to be useful?

skeeter2020
0 replies
3h20m

You needed to partner with the business school to get a future MBA to convince the faculty (executives) the biggest return (profitability) was a total re-write!

azemetre
0 replies
14h56m

I think an example like yours, where things go wrong, is the most realistic exposure to programming you can give someone.

Learning why things are bad, and what it's like to experience them, offers a new level of appreciation and better ways to argue for why certain things should be done.

adastra22
0 replies
11h13m

That sounds like a very realistic classroom experience. I think you missed the point?

mwcremer
1 replies
16h38m

TOI?

dmux
0 replies
16h18m

I’m guessing Transfer of Information.

steveBK123
0 replies
16h4m

I went to a STEM school and exactly zero of the professors had ever been in industry, or at least not in the last 30 years. The only guy with some experience was an underpaid lecturer. He was also the only good lecturer.

A lot of professors just want to do research and mentor students onto the PhD track to self-replicate. My mandated faculty advisor was basically like "go to the career center" when I asked about, you know, getting a job of some sort with my degree.

So yes, it is a real problem. CMU may stand out by actually having courses in the space, but it is not the norm by any means.

rjzzleep
0 replies
16h30m

The thing is, for academics, "quality software" sometimes isn't actually quality software.

My experience has been that people whose first jobs were in companies with quality software, or whose job included reading through other people's quality software, learn to write good software; the others learn whatever they saw in the environments they worked in.

maxwelljoslyn
0 replies
16h45m

That sounds like an excellent way to do practice that directly mirrors real-world SWE, while still cutting it down to an appropriate size for a pedagogical environment. What a good idea.

fudged71
8 replies
19h3m

I chose software engineering. 3 years into the program the head of the department made a speech at an event to the effect of "Software hasn't changed in the last 10 years". It instantly devalued the entire program for me.

anacrolix
4 replies
18h57m

I have news for you... He's not wrong. The porcelain is different, but the same methodologies and processes are in place. The biggest changes recently are distributed (mostly ignored) version control, which is 20 years old, and continuous integration/delivery (probably also around 20 years old, but only catching on in the last 10-15 years).

Computer science has changed more, there are lots of developments in the last 5-10 years.

Solvency
1 replies
3h35m

So you're not a JS developer. Got it.

nick__m
0 replies
3h3m

You're talking about the porcelain of software engineering; the parent is talking about the core of the profession...

bombcar
0 replies
17h59m

The biggest change I’ve seen in 20 years is that things like DVCS and actual good tool chains for languages people actually use are available and in use.

_a_a_a_
0 replies
17h36m

there are lots of developments in the last 5-10 years

So tell us what these are so I/we can learn from you

davidgay
0 replies
15h50m

"Software hasn't changed in the last 10 years". It instantly devalued the entire program for me.

As opposed to maths, physics, philosophy, civil engineering, classical studies which have gone through complete revolutions in their topics, problems and study methods in the last 10 years?

cqqxo4zV46cp
0 replies
17h20m

Hah. This is classic know-it-all CS/SE student hubris. They were almost certainly right.

corethree
0 replies
16h20m

I know where you come from and I know where the people who are responding to you come from too.

Software has changed in the last 10 years, but a lot of it has changed superficially. A green software engineer most likely won't be able to tell the difference between a superficial change and a fundamental change.

It has a lot to do with the topic of this thread. "Quality software" is a loaded term. There's no formal definition, everyone has their own opinion on it, and even then these people with "opinions" can't directly pinpoint what it is. So the whole industry just builds abstraction after abstraction without knowing whether the current abstraction is actually closer to "quality" than the previous one. It all starts out with someone feeling annoyed, then they decide to make a new thingie or library, and then they find out that this new thing has new annoyances, and the whole thing moves in a great flat circle.

That's the story of the entire industry: endless horizontal progress without ever knowing if we're getting better. A lot of the time we've gotten worse.

That being said there have been fundamental changes. Machine learning. This change is fundamental. But most people aren't referring to that here.

hinkley
5 replies
21h15m

If my friends hadn’t had such vividly bad experiences with the compiler class, I might not have taken the distributed computing class that was one of the other options to fulfill that category.

It’s not the most defining class of my undergrad years, but it was pretty damned close.

The fact that most people designing systems don’t know this material inspires a mix of anger and existential dread.

titzer
3 replies
21h12m

If you're specifically referring to CMU's compilers course, feel free to follow up with me offline.

ungamedplayer
1 replies
12h2m

Why does this feel like the professor trying to either fix the problem or give detention?

mikeyouse
0 replies
11h7m

Lol this is who asked whether it was at CMU - https://s3d.cmu.edu/people/core-faculty/titzer-ben.html HN is such a great community.

hinkley
0 replies
21h11m

No sorry, I was speaking in the general case not about CMU. This was ages ago and elsewhere. It was clear the prof got caught flat footed.

It was practically an elective and these days I hope it’s required.

jaybrendansmith
0 replies
4h9m

After seeing the same mistakes made over and over again I can't help but agree. This is how one builds enterprise software now, and it is poorly understood by most developers, although that is starting to change. If I were designing a college curriculum, required courses would include all of the normal CS stuff but also software engineering, distributed computing, design patterns, web application fundamentals, compiler design, databases, DevOps/Cloud, testing fundamentals, UX design, security basics, data science, IT project/process management, essentials of SOA, product mgt and requirements. Of course, it's been so long since I went to college, none of these things existed back in the day, so perhaps modern curriculum has all of these now!

AnimalMuppet
5 replies
21h19m

Here's the thing, though: Of a CS graduating class, 90% of them will work as software engineers, not as computer scientists. (All numbers made up, but I think they're about right.)

We don't need these things to be electives. We don't need them to be a master's program. We need an undergraduate software engineering program, and we need 90% of the people in CS to switch to that program instead.

titzer
2 replies
21h15m

I agree with you! It's hard to change curricula because there are so many competing interests. CS is an evolving field and things like machine learning have burst onto the stage, clamoring for attention. There is also an age-old debate about whether CS departments are trade schools, math departments, or science. Personally I think software engineering skills are paramount for 90% of graduates. How do we fit this into a full curriculum? What gets the axe? Unclear.

BoiledCabbage
1 replies
19h38m

There is also an age-old debate about whether CS departments are trade schools, math departments, or science. Personally I think software engineering skills are paramount for 90% of graduates.

The question as well is: are Chemical Engineering, Mechanical Engineering, Materials Engineering trade schools?

I think it's a key call out as CS touches on so many things.

There are arguments for it being math, science, engineering, a trade school or a combination of the above.

And then if you separate them out completely you end up with people doing CS being out of touch with what happens in the real world and vice versa.

I think in the end you probably need to have a single overall degree with a specialization (Programming as Engineering, Programming as Math, or Programming as Computer Science) with lots of overlap in the core.

And then you can still have a bootcamp-style trade school as well.

Now, all of that said, that still doesn't solve the CS equivalent of "Math for business majors", or the equivalent of "Programming for Scientists", or the like, which is already a really important case to offer: where you major in Bio/Chem/Other but being able to apply programming to your day job is important. Although that probably sits closer to the Applied Software category that you might find in business school, like using spreadsheets, basic command lines, intro to databases, or Python.

But to your point, how rarely software engineering is being taught is a huge problem. Even if only 30% of degree holders took classes in it, it would be huge in helping spread best practices.

stephendause
0 replies
19h4m

I don't really think that software engineering is a trade per se. I think it is a much more creative activity that requires a lot more baseline knowledge. I think an automotive engineer is to a mechanic as a software engineer is to an IT administrator. There is still a fair amount of creativity and knowledge required for being a mechanic or IT admin, but I don't think it's nearly the same amount.

Software engineering is interesting, though, because it does not require as much formal education as many other engineering fields to get a job. I think this is in part because it is very economically valuable (in the US at least) and because the only tool you need is a computer with an Internet connection.

With all of that said, I think SWE is probably closer to a trade than other engineering disciplines, but not by all that much.

manicennui
1 replies
17h42m

A large number of CS programs are already glorified Java training courses.

whstl
0 replies
15h15m

Exactly what I see. I know a few cases of CTOs influencing the curriculum of CS courses, pushing it to be more “market ready”. Which was effectively turning it into a 4 year bootcamp.

Which is a big shame, because it’s very common to see developers who would have benefited from a proper CS curriculum, and have to learn it later in life.

3abiton
3 replies
19h52m

What does research in this field look like?

titzer
0 replies
18h30m

Testing, fuzzing, and verification are all pretty hot SE topics now, at least in my department at CMU. All of those have some element of program analysis, both static and dynamic, so there's potentially deep PL stuff, which is core CS. There's aspects to SE that are not just that, of course, which are studying social processes, management styles and practices, software architectures, design patterns, API design, etc.

stephendause
0 replies
19h13m

So I have not done research in any software engineering field and have not read all that much either. One example that comes to mind from a software engineering course I took is research around mutation-based testing. In that form of testing you generate random variants of the program under test by doing things like deleting a statement, adding a statement, changing a less-than sign to a greater-than sign, etc. Then you check that at least one of your tests fails for each variant. If none does, you either add a test or mark that variant as being effectively the same program (I forget what the term for it is). At any rate, I think there is still research being done on this topic, for example how to avoid generating so many functionally identical variants in the first place. Software testing in general is a big part of software engineering, and I think there is still a fair amount of research that could be done about it.
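
A toy Python sketch of the mechanism (the function and tests here are made up; real mutation-testing tools such as mutmut for Python or PIT for Java do this far more systematically):

    import inspect
    import re

    def price_with_discount(total):
        # Code under test: 10% off for orders over 100.
        return total * 0.9 if total > 100 else total

    def test_suite(fn):
        # Returns True when every assertion holds for the supplied implementation.
        return fn(200) == 180 and fn(50) == 50

    # One crude mutation: flip the first ">" into ">=" and re-run the tests.
    source = inspect.getsource(price_with_discount)
    mutated_source = re.sub(r">(?!=)", ">=", source, count=1)
    scope = {}
    exec(mutated_source, scope)           # defines the mutant implementation
    mutant = scope["price_with_discount"]

    # The suite above still passes for this mutant, so it "survived": either add a
    # boundary test (fn(100) == 100 would kill it) or mark the mutant as equivalent.
    print("mutant survived" if test_suite(mutant) else "mutant killed")

The research angle is then about doing this at scale: choosing which mutations are worth generating and recognizing the equivalent ones cheaply.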

In my opinion, the intersection of cognitive psychology and software engineering is also ripe for a lot of research. I feel like we as software engineers have a lot of intuitions about how to be productive, but I would love to see various things that could indicate productivity measured in a controlled lab setting.

gmadsen
0 replies
18h2m

I would assume something like this? https://dspace.mit.edu/handle/1721.1/79551

rockwotj
1 replies
4h35m

Rose Hulman is an undergraduate teaching school that also has a distinction between a Software Engineering Degree and a Computer Science Degree. The Software Engineering degree takes a few less math courses and instead takes classes on QA/Testing, Project Management, and Formal Methods

skeeter2020
0 replies
3h16m

SE is often taught out of the engineering school while CS comes out of the faculty of science, which in my experience makes a huge difference.

maxFlow
1 replies
18h2m

Thanks for sharing, I'm reviewing the curriculum for the program [0] and it looks great.

Do you know if any of these courses have been open-sourced? It would be great to have access to part of the material.

[0]: https://www.ece.cmu.edu/academics/ms-se/index.html.

zogrodea
0 replies
6h52m

Not the person you replied to, but I am aware that a functional programming course by CMU has had lectures (not exercises or content) freely available as YouTube recordings. https://brandonspark.github.io/150/

valand
0 replies
4h29m

It was a mandatory class too in the univ I attended.

What was not taught were QA techniques, the variety of tests, and the design-pattern enforcement that would reduce QA costs tenfold.

tdeck
0 replies
13h58m

It seems to vary a lot from school to school. At my university (2010-2014) we were writing unit tests from the first CS class. I can't say we got much instruction on how to structure more complex applications in order to make them testable, however. Things like dependency injection and non-trivial mocking have to be learned on the job, which is a real shame. Even within the industry skills and approaches to designing code for tests feel very heterogeneous.

speedgoose
0 replies
21h27m

Yes, I learned these things as a computer science student in an engineering school. It wasn’t perfect but a good introduction.

sam0x17
0 replies
21h21m

Worth mentioning that SE isn't even a thing, for the most part, at non-STEM-specific schools and/or outside very large colleges/universities.

prendino
0 replies
8h51m

We were taught these things at the Bachelor's program in CS I went to in Sweden. At my first job I then slipped on a banana peel into a de facto tech lead position within a year, and I don't think it was due to my inherent greatness but rather that I was taught software engineering and the other colleagues at my level had not.

Ironically, the software engineering courses were the only ones I disliked while a student. An entire course in design patterns where strict adherence to UML was enforced felt a bit archaic. We had a course in software QA which mostly consisted of learning TDD and the standard tooling in the Java ecosystem, with some cursory walkthroughs of other types of QA like static analysis, fuzzing and performance testing. At the time it felt so boring, I liked to actually build stuff! A couple of years later I joined a team with very competent, CS-educated developers tasked to set up a workflow and all the tooling for testing software with high security requirements. They were dumbfounded when I knew what all the stuff they were tasked to do was!

podviaznikov
0 replies
15h19m

100% this. My master's program was in software engineering, not CS. 15 years ago, in Ukraine.

We have that. Maybe not enough. But it's definitely not a new thing.

bhk
0 replies
14h57m

How many CS or CE grads from CMU are actually exposed to all of these topics?

Surely, "it is taught", but to whom and how widely?

randomdata
22 replies
1d

Is there any human activity where quality is an attribute successfully taught? In my experience, being able to produce something of quality is gained only through practice, practice, practice.

marcosdumay
14 replies
1d

Is there any human activity where quality is an attribute successfully taught?

Every industrial practice.

On the other hand, the title just means that programming is not an industrial practice. Which should be obvious to anybody who looked, but some people insist on not seeing it.

addicted
9 replies
1d

Yeah and you can see other disciplines like Aviation where there are so many incredible processes to ensure learning and constant improvement.

Syntonicles
6 replies
22h47m

Will you please elaborate?

WJW
5 replies
21h5m

Aviation in particular has a very strong culture around (government-mandated) checklists and post-crash investigations. This has both pros and cons. The pro is that every airline learns from the mistakes made by every other airline, and over time the system becomes really quite safe indeed. The cons are that it is quite expensive and time-consuming.

Imagine if every software company was obliged by law to:

- Every single release has to have been signed off by someone who got their "software release engineer" certification at the software equivalent of the FAA.

- This engineer is required by law to not sign off unless every box on a 534 item checklist has been manually verified.

- Any time an unplanned downtime happens at any company, a government team comes in to investigate the root cause and add points nr 535 through 567 to the checklist to make sure it never happens again.

If such a system was mandated for software companies, most of the common bugs would very rapidly become a thing of the past. Development velocity would also fall through the floor though, and most startups would probably die overnight. Only companies that could support the overhead of such a heavyweight process would be viable, and the barrier to entry would massively increase.

bluGill
3 replies
20h30m

I wish someone would create that 500-line checklist. I've seen attempts, but they tend to be either not actionable ("is the software high quality?" - meaningless) or made of metrics that are just gamed ("is test code coverage > 80%?").

mdaniel
1 replies
15h45m

or of metrics that are just gamed (is test code coverage > 80%?)

The rebuttal to your implied Goodhart's Law <https://en.wikipedia.org/wiki/Goodhart%27s_law> that was offered by my manager was "tension metrics" <https://en.wikiversity.org/wiki/IT_Service_Management/Contin...>

If I understand his theory correctly, in your case there would be a competing metric to the "test coverage" one that said for any changeset, a test cannot itself change by more than 20% in the same changeset as non-test code. So you can change the code such that it still passes the existing tests, or you can change the test to adapt to new requirements, but you cannot rewrite the tests to match your newly changed code
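
A rough Python sketch of what such a rule could look like as a check (everything here - the file-name heuristic, the 20% threshold, the inputs - is hypothetical):

    # Hypothetical "tension metric": block a changeset in which test code churns
    # heavily while non-test code is also being changed.
    # churn_by_file maps each touched file to its added+deleted line count
    # (e.g. derived from git diff --numstat); total_test_lines is the size of
    # the test suite before the change.
    def allows_changeset(churn_by_file, total_test_lines, max_test_churn=0.20):
        test_churn = sum(n for path, n in churn_by_file.items() if "test" in path)
        code_churn = sum(n for path, n in churn_by_file.items() if "test" not in path)
        if code_churn == 0:
            return True  # pure test changes are always allowed
        return test_churn <= max_test_churn * total_test_lines

The point is only that the two metrics pull against each other: you can change code against the existing tests, or update tests deliberately, but not quietly rewrite the tests to fit freshly changed code.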

I'm acutely aware this is a terrible example, the devil's in the details, and (in my experience) each company's metrics are designed to drive down their own organizational risk <https://en.wikipedia.org/wiki/Conway%27s_law>, combined with "you're always fighting the last war" :-D

marcosdumay
0 replies
3h42m

Hum, now you are proposing a process checklist that can't ever be completely checked off, by design.

The entire thing is terrible in principle. You won't find a good example, because that's not how you use a process checklist.

MaxBarraclough
0 replies
20h12m

Not quite what you're asking for, but the Joint Strike Fighter C++ Coding Standards document is freely available. [0] It's 141 pages.

It's specific to the complex and unsafe C++ language though, rather than addressing broader software development methodology.

[0] [PDF] https://www.stroustrup.com/JSF-AV-rules.pdf

JumpinJack_Cash
0 replies
19h40m

> Aviation in particular has a very strong culture around (government mandated) checklists and post-crash investigations

That's the reason why aviation can only shine when it becomes a private means of transportation, and I don't mean $70M private jets but $150k light helicopters.

When a critical mass is hit, accidents will become no more traumatic to the collective psyche than car accidents. The lighter the aircraft the better, because it would look exactly like a car crash as opposed to leaving a huge burning hole in the ground.

piloto_ciego
0 replies
1d

Ha, I just said this. As I transition to being a dev/engineer person… I find the lack frustrating at times.

marcosdumay
0 replies
23h57m

For pilots, there are many filters to ensure people that failed to learn can't take many responsibilities. They ensure the pilots study and train, but there isn't any theory making sure the pilots learn and get the best safety practices. (In fact, if you are comparing with CS teaching, pilot teaching will give you a heart attack.)

For engineers, the situation is very similar to software. There are many tools for enforcing quality, but there's no structure for teaching the engineers, and no, there isn't a widely accepted theory for how to teach design quality either.

The one place where people are consistently taught how to build quality is in manufacturing.

randomdata
2 replies
3h33m

What industrial practice isn't guided by carefully designed machines that remove the human variability?

marcosdumay
1 replies
2h1m

You probably meant "that remove some human variability".

Machine handling requires a lot of training and a lot of well thought out procedures.

randomdata
0 replies
1h3m

And, most importantly, practice, practice, practice.

rockemsockem
0 replies
21h16m

Citation needed. The key to the industrial revolution was trivializing the human work so as to take as many human errors out as possible and to systematize everything. I wouldn't call that type of process "teaching quality".

piloto_ciego
3 replies
1d

Absolutely.

I cannot fly professionally anymore due to health, but this is something we are taught in aviation and something I too find lacking from tech so far.

Like, you’re taught the standards as part of learning to fly, but as time goes on, you’re told to narrow your tolerance of what is acceptable. So if you are learning how to do steep turns, for instance, the altitude standard is +- 100’. You’re taught, “that’s the minimum, you should be trying for 50’” and then 20’, then the absolute best performance would be where you do your turn, the needle doesn’t move, and as you roll out on a calm day you fly through your wake. But the goal is “better, always better, what can I do better?” And flying is not graded on the overall performance, if you don’t do satisfactory everywhere you fail. Culturally satisfactory, isn’t, it’s the starting point.

That encourages a much more collaborative model I feel like. I’ve only worked one or two flying jobs that were not collaborative. In the outside world it sometimes feels the opposite. In flying, you truly want everyone to succeed and do well, and the company does. Even the guys I hated that I flew with, I didn’t want them to fail. If they failed, I was partially responsible for that.

It wasn’t always perfect, and I worked with some legendary assholes while I was flying, but truly, they supported me and I supported them, and if I screwed up (or they screwed up) the culture required that we found a way to minimize the future potential screwups.

You’re actually trained on what quality means in a wide variety of contexts too, and flight operations quality assurance (FOQA) is a big part of many airlines. In smaller bush operations where I worked, it is significantly more informal, but we truly had a “no fault” culture in nearly all the places I worked. It’s not perfect, but that’s the point, “ok how can we make this better?”

If someone had an idea for how to do something better, there may have been friction, but that was rare if it actually was better, and as soon as you could show how adoption was fast even at the less standardized operations I worked at.

Not saying it’s all unicorns and rainbows, but I feel like quality, and decision making, and “doing the right thing” were an integral part of the culture of aviation. “The weather is too bad and I cannot do this safely” grounds the flight, you don’t blast off into the shit (at reputable operators) to get the job done anymore (it’s not 1995), and it feels like this new industry is the opposite.

The entire concept of a “minimum viable product” is somewhat illustrative of the problem. It shouldn’t be the “minimum viable” it should be the “minimum quality product we’re willing to accept as a starting point.” But that doesn’t roll off the tongue in the same way.

We shouldn’t be striving for anything that’s the “minimum.” The minimum is only the beginning.

randomdata
2 replies
23h56m

> But the goal is “better, always better, what can I do better?”

Is that not the case in software? The incentive to improve may not be quite as strong as in aviation (crashing software isn't quite the same as crashing airplanes), but it is still pretty strong. Life is very miserable in software when quality isn't present.

zerkten
1 replies
23h13m

What happens when you work under a group of people who are satisfied at stage one of project X? You know you can iterate to get two stages further, but they want you to work on projects Y and Z. This is a very common situation where you, or even the whole development team, has very little control.

Of course, management should be supportive of quality improvements, but their reality ranges from being under genuine pressure to deliver projects X and Y at that stage of quality, through to simply not understanding or caring about quality.

My own experience is that individual programmers have vastly different ideas of what quality is, based on their experience and education. You can be struggling to get a team to improve and then you hire a somewhat normal individual with a very different background who makes a sizeable impact on quality and the culture around it in the team. I'm thinking specifically of someone who joined from aerospace, but I've seen it with finance backgrounds. I think the background matters less than the perspective and the ability to hold people accountable (including yourself).

randomdata
0 replies
2h32m

> What happens when you work under a group of people who are satisfied at stage one of project X?

No doubt the same as when the members of your garage band are happy to stay in the garage while you have your sights set on the main stage. You either suck it up and live in the misery, or you get better on your own time and leverage those improvements in the quality of your performance to move into a better position where quality is valued.

randomdrake
0 replies
1d

The arts. The further you go in instruction, the more it becomes about the little differences and quality. Practice always helps, but quality is definitely taught and learned by many as well.

ordersofmag
0 replies
1d

Good teaching largely consists of setting the learner up in situations where they can practice effectively. To pick just one example, many people are taught to improve the quality of their writing. This largely consists of giving guidance on what writing to attempt and (more importantly) guidance on how to reflect on the quality of the writing you've just done so you can improve.

globular-toast
0 replies
9h54m

Practice only matters if you try to produce quality. If you just practice producing crap you'll only get good at producing crap. But I suppose if someone doesn't care about quality (and these people do exist) all bets are off really.

johngossman
21 replies
1d

There are Computer Engineering programs and a few universities that really emphasize internships and hands on practice. But at many universities, the CS department came out of the Math department and is focused on theory. Chemistry isn't Chemical Engineering either. I think that's okay. University isn't just a trade school--the idea behind almost any degree is to train the mind and demonstrate an ability to master complex material.

bluGill
18 replies
1d

What society needs is a mix of trade school and traditional university. If a university is not providing both, it is failing everyone. (Except the straw-man rich kid who will inherit a lot of money but not be expected to also inherit/run a company or pass the money on to their kids - this is something that happens in storybooks but doesn't seem to happen in the real world, where the rich give their kids lots of advantages but eventually expect them to take over and run the family business.)

A pure university education, without considering whether the degree is useful in the real world, is a disservice to education. However, a pure trade school education that teaches how to do something without understanding is not useful either (I don't think any trade school is that pure: they tell you to ignore hard stuff but generally give you deep understanding of some important things).

duped
17 replies
22h49m

If a university is not providing both they are failing everyone.

Why?

A pure university education without considering is this degree useful in the real world is a disservice to education.

I think this line of thinking is a much bigger disservice to higher education. It was very tiresome as an undergraduate to be surrounded by people that thought this way - and detrimental to everyone's education.

"I'll never use this knowledge" is the single worst thing you can say as a student, and it needs to be beaten out of undergrads' heads. Not encouraged.

sneed_chucker
7 replies
22h43m

I agree with you in principle, but it's very easy to have this attitude when the education isn't obscenely expensive.

Which is why the "I'm never going to use this, what a waste of time" feeling among American undergrad students is so common. If you fix the affordability problem and bring it back to where is was in the mid 70s (inflation adjusted) I think things would be a lot better.

duped
5 replies
22h23m

My point is that higher education isn't job training and doesn't pretend to be, and people who think it is or should be are the ones that need education the most, because they don't seem to get it.

sanderjd
3 replies
22h5m

and doesn't pretend to be

I'm not sure about this part... A very common pattern in my conversations with working class friends and family from my parents' generation is: "we were told that if we sent our kids to college, they'd have better lives than we did, but instead we all just ended up with more debt than we could handle".

It's tricky! If you tell teenagers and their parents the truth - this purely academic program will not train you for any job besides pure academia, which, while it can be a fantastic career, is a super risky hits business in which only a few will truly succeed - then that's only going to sound like a reasonable risk to take for wealthy families. But then you've badly limited your pool of academic researchers to this extremely small and honestly often not as promising set of rich kids.

Maybe one solution (which is not workable in the real world) would be: any academic program that does not have a viable "job training" component should only accept students on academic scholarship, regardless of their own means. If some neutral party thinks they are promising enough in that field to pay their way, they get to go for free, otherwise they don't get to go at all. The programs that do graduate people with directly marketable job skills could keep working the current mercenary way.

The reason this wouldn't work in reality is that the wealthy would still just game the scholarships in some way. Alas.

duped
1 replies
21h35m

I'm not sure about this part...

If you want to know what a university will teach your kids, ask them. They'll even tell you without asking them - it was pretty obvious to me as a dumb high school kid on campus visits what the emphasis of one program or another was going to be.

sanderjd
0 replies
19h56m

What I'm saying is: universities are incentivized to mislead people (including themselves!) about this.

If you are a working class family with a kid who is very talented at math, and you go sit down with the counselors and ask them: If my child studies pure theoretical math, will that open them up to a life full of possibilities? they will say "yes, it absolutely will". But that's not true. It might be true, but it's a big risk. It's a risk a wealthy family can very easily absorb. But if this child from this working class family takes on this risk using student debt, it might go poorly. They might very well be good at pure math but not be good enough to go into academia. Then they might be unsure what else they can do with that degree, unable to get their foot in the door at the kinds of employers where just a general proof-of-being-smart degree is enough. And now they have debt and uncertainty about what to do.

It also might work out great! But it's a risk. And I know a number of people who feel they ended up on the wrong side of that risk.

bluGill
0 replies
21h47m

There is a big difference in value between different degrees in the real world. Yet the costs are similar. What someone studies is very important and universities do not do a good job of telling people that.

There is nothing wrong with art/music/history. If you are interested, by all means take a lot of courses in them. You can learn a lot of valuable skills, which is why good universities require a diverse background of "generals" that these (and many more) fit into. However, they grant far more degrees in these subjects than are needed (even physics gets more degrees than the world needs - but most people with a physics degree can pivot to something else well paying).

Clubber
0 replies
22h7m

My point is that higher education isn't job training and doesn't pretend to be, and people who think it is or should be are the ones who need education the most, because they don't seem to get it.

That was true 50 years ago, but employers turned it into job training. My father in law retired a well off businessman with a History degree from Yale he got in the 50s. You know what a History degree from Yale qualifies you for today? Teaching History and maybe writing some books. The degree didn't change and Yale didn't change.

hiAndrewQuinn
0 replies
21h48m

No, I don't think that's it. I think it is simply that you have to put an awful lot of people through the explore part of the learning loop, to get a handful who will reach the exploit part of the loop, for any given subject.

99% of what we all learn in college is a waste of time for us. But we all have a unique 1% that is vital to who we become. Over time I expect that 1% to become 0.1%, then 0.01%, and for that vitality to become ever more concentrated in that sliver.

bluGill
6 replies
21h52m

Because like it or not, most people are going to university to get a better job. Companies like university-educated people because they learn deep thinking. However, they often come out lacking important skills that are needed.

Sure there are a few going to university just for the fun of it. However most are expecting a job. Thus universities should train and emphasize thinking in more specific areas.

"I'll never use this knowledge" is the single worst thing you can say as a student, and it needs to be beaten out of undergrads' heads. Not encouraged.

This is tricky. I agree undergrads say this all the time when they are wrong and don't know it. They have no clue what they will use and what they won't. This is something universities should figure out so they can push people away from things they won't use. OTOH, a lot of what they are really teaching isn't the specific skill, but how to research and analyze data to find complex answers - it doesn't matter if you look at data from art or from science, what you are really learning is how to think, and the specific knowledge gained isn't important or the point (I think this is the point you were trying to make?).

mschuster91
2 replies
21h38m

Companies like university educated people because they learn deep thinking.

No. Companies love hiring higher-ed graduates because it removes a lot of cost and risk for them:

- hiring only people with degrees weeds out everyone unable to cope with a high-stress environment, for whatever reason - crucially, also including people who would normally be protected by ADA or its equivalent provisions in Europe.

- it weeds out people in relationships or with (young) children, which makes them easier to exploit and reduces the amount of unexpected time-off due to whatever bug is currently sweeping through kindergarten/school/whatever. Sure, eventually they will get into relationships and have children as they age, but looking at the age people start to have kids these days [0], that's a solid 5-10 years you can squeeze them for overtime.

- it saves companies a ridiculous amount of training. The old "tradespeople apprenticeship" way is very cost-intensive as you have to train them on virtually anything, not just stuff relevant to the job, e.g. using computers and common office software. Instead, the cost is picked up either by the taxpayer (in Europe) or by the students themselves in the form of college debt. The latter used to be reserved for high-paying jobs such as pilots who have to "work off" their training cost but got compensated really well, nowadays it's standard practice.

- it keeps employee diversity relatively homogeneous. There is a clear bias towards white and Asian ethnicity in the US for higher ed [1], and among top-earning jobs, males still utterly dominate [2].

- related to the above, it also weeds out people from lower economic classes, although at least that trend has been seriously diminishing over the last decades [3].

[0] https://www.nytimes.com/interactive/2018/08/04/upshot/up-bir...

[1] https://hechingerreport.org/proof-points-new-higher-ed-data-...

[2] https://www.bankrate.com/loans/student-loans/top-paying-coll...

[3] https://www.pewresearch.org/social-trends/2019/05/22/a-risin...

bluGill
1 replies
20h49m

and among top-earning jobs, males still utterly dominate

And thus you destroyed your whole point: females dominate universities these days.

mschuster91
0 replies
18h29m

Depends on the degree program. In STEM, women are still the minority by far [1], especially in IT.

[1] https://www.stemwomen.com/women-in-stem-percentages-of-women...

duped
2 replies
21h41m

However they often come out lacking important skills that are needed.

Companies that offer the jobs are the ones that need to offer the job training.

(I think this is the point you were trying to make?)

Not really, it's that university education is kind of meta/self serving (the goal is not to train X number of students to do Y jobs, it's to give every student at the institution what that institution defines to be an education).

But like you said, the fact this produces better workers is a second-order effect. It's not the goal of most institutions. But not all institutions; some define "well educated" to have lots of industry practicum, and if you want that, go study at those institutions.

My main point is that it's not a "disservice" to eschew practicum or industry training as an educational institution.

bluGill
1 replies
20h50m

What society needs is the second-order effect though. I don't care about education for the sake of education; I care about what education can do for me/society. Now some of what most institutions define as a good education is good for society (the ability to think is very useful), but I don't value/support education because of arbitrary definitions that an institution might come up with. I value/support education because people who have education tend to show specific abilities in society that I want more people to have. The more universities are in line with that and try to produce it, the more I value/support them. (Note that I didn't formally define what those things are - this is a tricky topic that I'm sure to get wrong if I tried!)

When institutions allow students to take degrees that society finds less valuable (art, music...) they are doing society a disservice by not producing what society needs. Now if the student is wealthy (not rich) enough to afford the price then I don't care: I don't need to impose my values on anyone else. However, most people in a university are not that wealthy (most are young), and so if the degree granted isn't valuable to society, the university robbed that student.

johnnyanmac
0 replies
19h27m

When institutions allow student to take degrees that society finds less valuable (art,music...) they are doing society a disservice by not producing what society needs.

1. what's wrong with a student pursuing their own personal goals? A person doesn't need to produce for society's sake.

2. despite that sentiment you hold, it's clear many people do value art and music. Maybe not in its pure form, but those artists do in fact fuel industries worth billions. Clearly "society" values something that requires such skills and thinking.

johnnyanmac
0 replies
21h8m

"I'll never use this knowledge" is the single worst thing you can say as a student, and it needs to be beaten out of undergrads' heads.

Everyone will think differently. I've never truly been research-minded, and there were quite a few odd classes that felt like a waste of my money (something to consider as education gets more expensive). But I do agree that there should be a space to foster researchers, and especially one to round out a student overall, even if that space is more niche. I just don't think that everyone needs to go far into debt for that experience if they just want job training.

So I too desire a more explicit divide than "research university vs. industry university" and wish there were some better trade schools focused on software (not 6-month boot-camps; think of a condensed university program without the requirement of electives and maybe fewer supporting classes). But no one seems to be pushing for this much.

iamthepieman
0 replies
21h33m

Strongly disagree with this. If a class (at any level) is strictly teaching "the subject" then that is a very good issue to raise by a student or anyone else. Great teachers don't just teach the subject though, they teach the skills necessary to engage with the subject and then apply them to said subject.

Unfortunately many programs are not designed this way and learning the appropriate skills is left as an exercise to the student usually in a sink or swim approach. So some students come out with the meta skills that a university education is touted for and others do not.

I do agree that "I'll never use this knowledge" can be a miserable attitude to have or engage with - especially when it's just a proxy for "I'm not interested in learning, just in getting good grades" but the idea itself is valid.

timeagain
1 replies
1d

Yeah but at those internships you aren’t taught how to build quality software, just how to ship a SPA that connects to an API in 15 weeks (or you’re not hired).

It is a good peek into the professional software world though!

bluGill
0 replies
20h45m

Before you can write quality software you need to be able to write large software. Most interns I see are learning how to work on non-trivial programs as this is their first chance to see something non-trivial. Then they get a "real job" and are shoved into extremely large programs.

Writing a thousand lines of bug free code isn't that hard, so the need for QA practices won't be apparent. Then you get the ten thousand line intern project and discover things are not always that easy. Then we throw you into a multi-million line project and good luck: you need a lot of that QA stuff to make it work.

ysofunny
17 replies
1d

you cannot be taught what nobody knows how to do

or maybe, them who know how to do this are just unable to spread this knowledge... something about how they think their private secret codes are the source of their wealth

when in fact, it's merely the scheme by which they maintain an advantageous capacity to extract energy from them seeking to learn how to build quality software

Verdex
15 replies
1d

This is what I came here for.

There's an obvious comprehensibility complexity to code, apparent to anyone who has spent almost any time whatsoever trying to make something happen in software. However, we've got zero academics or theory around it.

Just 'best practices' (ie a thing other people are known to do so if things go wrong we can deflect blame).

And code smells (ie the code makes your tummy feel bad. yay objective measures).

And dogma (ie "only ONE return point per function" or "TDD or criminal neglect charges").

Sure, please do something for QA because it'll be better than nothing. But we're probably a few decades away from actual theoretical underpinnings that would allow us to make objective tradeoffs and measurements.

pjmlp
7 replies
1d

There is plenty of academic work on it, as real engineers - those who studied Software Engineering or Informatics Engineering, instead of holding fake engineering titles from bootcamps - should be aware.

Usually available as optional lectures during the degree, or later as Msc and PhD subjects.

Verdex
5 replies
1d

I'm all ears.

Although so far I've only bumped into cyclomatic complexity (with some studies showing it has worse predictive power than plain lines of code) and lines of code.
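
(For reference, cyclomatic complexity is E - N + 2P over the control-flow graph, which for a single function is just the number of decision points plus one. A made-up C function, counted by hand, shows how little that number says about readability:)

    /* Hypothetical function: the loop condition and the two ifs are the
       decision points, so cyclomatic complexity = 3 + 1 = 4. The number
       says nothing about whether the function is pleasant to read. */
    int clamp_sum(const int *xs, int n, int max)
    {
        int sum = 0;
        for (int i = 0; i < n; i++) {   /* decision 1 */
            if (xs[i] < 0)              /* decision 2 */
                continue;
            sum += xs[i];
        }
        if (sum > max)                  /* decision 3 */
            sum = max;
        return sum;
    }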

pjmlp
4 replies
23h39m
Verdex
3 replies
23h15m

I don't know. I was hoping for something like: "We know inheritance is bad because when we convert the typical example over to this special graph it forms a non-compact metric space" Or something like that.

Even though I find cyclomatic complexity uncompelling, it at the very least can slurp up code and return a value. Nicely objective, just not particularly useful or insightful as to whether or not things are easy to understand.

The provided link looks suspiciously like they're going to talk about the difference between system, integration, and unit tests. The importance of bug trackers. And linters / theorem provers maybe.

I don't think these are bad things, but it's kind of a statistical approach to software quality. The software is bad because the bug chart looks bad. Okay, maybe, but maybe you just have really inexperienced people working on the project. Technically, the business doesn't need to know the difference, but I would like to.

pjmlp
2 replies
22h51m

If you want numbers and research-like content, that is available as well.

"Measuring Complexity of Object Oriented Programs"

https://link.springer.com/chapter/10.1007/978-3-540-69848-7_...

Verdex
1 replies
22h30m

This is much more interesting.

I don't suppose you know where I can get their list of references without hitting a paywall? Specifically [16] and [24].

EDIT: [For anyone following along]

The linked paper is Measuring Complexity of Object Oriented Programs. Although, the paper isn't free. They reference several other papers which they assert talk about OO complexity metrics as well as procedural cognitive complexity, but unfortunately the references aren't included in the preview.

Apparently, there's also a list of Weyuker's 9 Properties which look easier to find information on. But these look like meta properties about what properties a complexity measurement system would need to have [interesting, but they don't really seem to comment on whether or not such measurement is even possible].

It looks like a lot of this research is coming out of Turkey, and has been maybe floating around since the early 2000s.

EDIT EDIT: References are included at the bottom of the preview.

EDIT EDIT EDIT: Kind of interesting, but I'm not sure this is going to yield anything different than cyclomatic complexity. Like, is this still an area of active research or did it all go by the wayside back in the early 2000s when it showed up? The fact that all the papers are showing up from Turkey makes me concerned it was a momentary fad and the reason it didn't spread to other countries was because it doesn't accomplish anything. Although, I suppose it could be a best kept secret of Turkey.

Renamed programs are defined to have identical complexity, which is pretty intuitively untrue, so I've got my concerns.

EDIT ^ 4: Doesn't seem to be able to take data complexity into account. So if you're dividing by input, some inputs are going to cause division by zero, etc. You might be able to jury rig it to handle the complexity of exceptions, but it looks like it can mostly handle static code. I'm not sure if it's really going to handle dynamically calling code that throws very well. I also don't think it handles complexity from mutable shared references.

Nice try, but unless there's a bunch of compelling research that no actually this is useful, I'm not sure this is going to cut it. And at the moment the only research I'm finding is more or less just defining functions that qualify as a cognitive measure under the Weyuker principles. I'm not seeing anyone even pointing it at existing code to see if it matches intuition or experience. Happy to be found wrong here, though.

pjmlp
0 replies
7h2m

Naturally if one is searching for perfection on this matter, most papers are far from providing it.

The main point still stands: "You are never taught how to build quality software" is wrong, and plenty of engineering degrees do teach about it.

lainga
0 replies
1d

Sure, I've run into a couple. Here's a chart of Defects per KLOC. Great.

readthenotes1
2 replies
1d

You forgot the dogma of only one entry point per function, back from the day when you could do both.

(One exit point is a pet peeve of mine since it often makes the code a lot harder to read and think about vs exit asap)

randomdata
1 replies
1d

You may still not buy into it, but note that single exit was established for languages like C where an early exit can make it difficult to ensure that all resources are freed. It isn't meant for every language – and, indeed, languages that are not bound by such constraints usually promote multiple exit because of the reasons you bring up.
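
A small sketch of that constraint in C (hypothetical function, error handling trimmed): every early return would have to repeat the cleanup, which is why the single-exit / goto-cleanup shape became the convention there, while languages with destructors or garbage collection don't need it:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical loader: with early returns, free(buf) and fclose(f)
       would have to be repeated at every exit, so the C convention
       funnels everything through one labelled exit instead. */
    int load_file(const char *path, size_t cap)
    {
        int rc = -1;
        char *buf = NULL;
        FILE *f = fopen(path, "rb");

        if (f == NULL)
            goto out;
        buf = malloc(cap);
        if (buf == NULL)
            goto out;
        if (fread(buf, 1, cap, f) == 0)
            goto out;
        rc = 0;               /* success */
    out:                      /* the single exit: cleanup runs exactly once */
        free(buf);            /* free(NULL) is a no-op */
        if (f != NULL)
            fclose(f);
        return rc;
    }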

Spivak
0 replies
22h24m

And even that is wrong, single entrance/exit was originally because you had subroutines designed to be goto'd into at different points for different behavior and would goto different points outside the subroutine as the exit.

There are pretty much no languages left today where it's even possible to violate this principle without really trying. It's not about having a single return; it's about all functions starting at the top and return statements always taking you back to the same place in the code.

RandyRanderson
1 replies
1d

I wish more ppl felt this way. What a compliment it is to oneself when I hear ppl saying "write clean code" as if they know its address and had dinner with clean code just last night.

I was thinking there should be some metric around d(code)/dt. That is, as the software is used, 'bad' code will tend to change a lot but add no functionality. 'Good' code will change little even when it's used more.

AlotOfReading
0 replies
23h48m

d(code)/dt isn't a very good metric though. Think of the Linux kernel. Drivers get some of the least maintenance work and are broadly the lowest-quality part of the kernel. arch/ is busier than drivers/, but anything you find in the parts being touched is also significantly higher quality.

satisfice
0 replies
20h11m

The scientific groundwork for excellent testing, anyway, has already been done-- but not in the realm of computer science. This is because computer scientists are singularly ill equipped to study what computer scientists do. In other words, if you want to understand testing, you have to watch testers at work, and that is social science research. CS does not take social science seriously.

An example of such research done well can be found in Exploring Science, by Klahr. The author and his colleagues look very closely at how people interact with a system and experiment with it, leading to wonderful insights about testing processes. I've incorporated those lessons into my classes on software testing, for instance.

aschearer
0 replies
20h3m

I found these thought provoking:

The Power of 10: Rules for Developing Safety-Critical Code[1]

Assessing the Relationship between Software Assertions and Code Quality: An Empirical Investigation[2]

Cyclomatic Complexity and Basis Path Testing Study[3]

[1]: https://web.eecs.umich.edu/~imarkov/10rules.pdf

[2]: https://www.microsoft.com/en-us/research/wp-content/uploads/...

[3]: https://ntrs.nasa.gov/api/citations/20205011566/downloads/20...

drewcoo
0 replies
21h2m

you cannot be taught what nobody knows how to do

It's worse than that. No one can agree what "quality" means.

Mostly, the word is used as a weapon.

The pointy end of the weapon is what management pokes you with whenever anything unexpected happens. Typically they do this instead of making sure that problems do not happen (a.k.a. "management").

The weapon's handle is sometimes flourished by process gatekeepers who insist on slowing everything down and asserting their self-worth. This is not good for throughput, anyone else's mood, or eventually even for the gatekeepers.

People usually refuse to talk about quality in terms of actual business metrics because if anything unexpected happens that's not covered by the metrics, there will be fault-finding. And the fingers pointed for wrong metrics are typically pointed at middle management.

t43562
13 replies
20h27m

The reason I find it easier to work with people who have a degree in Computer Science is that I don't have to convince them of the need for good algorithms and not to try to implement parsers or cryptography by hand.

When it comes to software engineering I feel there is no qualification where you can feel that the gross naivety about quality and working in teams (and with other teams) has been similarly ironed out.

Instead you get people hardened in sin from their repeated experience of writing one horrible bit of untested code quickly and then leaving before the shit truly hit the fan :-)

One's management is very impressed with those kinds of people and very contemptuous of those left behind who have to painstakingly clean up the mess.

ch4s3
4 replies
19h50m

Hum, interesting perspective. I did 95% of a masters in CS before leaving to do a startup, and while I can see the value of parser generators, there are a LOT of times when it is appropriate, useful, and more performant to write your own simple parser. It's often, in my opinion, the right thing to do first for simple cases. Clearly you should test it and consider functional requirements, but a stringy protocol with clear delimiters is usually dead simple to parse, and depending on your use case you can focus on a subset. If you're writing a language... my advice might be different.
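
For instance (a made-up record format, just to illustrate the "clear delimiters" case), a simple delimiter-separated protocol can be parsed with a handful of lines of plain C and no generator:

    #include <stdio.h>
    #include <string.h>

    /* Parse a made-up "key=value;key=value" record in place. */
    static void parse_record(char *record)
    {
        for (char *field = strtok(record, ";"); field != NULL;
             field = strtok(NULL, ";")) {
            char *eq = strchr(field, '=');
            if (eq == NULL)
                continue;               /* skip malformed fields */
            *eq = '\0';
            printf("key=%s value=%s\n", field, eq + 1);
        }
    }

    int main(void)
    {
        char record[] = "name=alice;age=30;city=oslo";
        parse_record(record);
        return 0;
    }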

valenterry
1 replies
13h56m

I've never once in my career had a case where using a parser generator wasn't better. Given that it's an in-language parser generator and not some meta-language monstrosity like ANTLR.

Maybe when writing your own programming language, your own complicated data format, or a low-level communication protocol while requiring extreme performance. But that seems to be super rare, at least in my area of profession.

ch4s3
0 replies
1h19m

I’ve had a very different experience. I can think of three occasions where I was able compare a hand written parser with something built using a parser generator or similar tool. In two of those cases, the hand written code was far easier to read and modify. This kind of code can also be easier to test.

Parser generators aren’t a free lunch and they vary considerably in quality.

chiefalchemist
1 replies
16h58m

My impression is the gist is: when you think like an engineer, your focus is on problem solving and using the appropriate tool(s) to do that. On the other hand, a typical developer's instinct is to code, code, code; at least based on my experience.

ch4s3
0 replies
1h22m

I don’t know if you’re trying to be rude, but this comes across as quite disrespectful.

Parser tools are indeed very powerful, but those abstractions carry tradeoffs that any “engineer” will consider.

pkkm
3 replies
16h53m

Cryptography yes, but are you sure about parsers? As far as I can tell, there's some kind of U-curve there. Beginners code them by hand, intermediate-level programmers and intermediate-scope projects use parser generators, and people maintaining the most sophisticated parsers prefer to code them by hand too. For example, GCC used to have a bison parser, but they switched to a hand-coded recursive descent one because that let them produce more helpful error messages. Clang uses recursive descent too.

mdaniel
2 replies
16h18m

I offer, again, my JetBrains GrammarKit counterpoint from the last time that assertion came up <https://news.ycombinator.com/item?id=38192427>

>>

I consider the JetBrains parsing system to be world class and they seem to hand-write very few (instead building on this system: https://github.com/JetBrains/Grammar-Kit#readme )

- https://github.com/JetBrains/intellij-community/blob/idea/23... (the parser I'll concede, as they do seem to be hand-rolling that part)

- https://github.com/JetBrains/intellij-community/blob/idea/23... (same for its parser)

- https://github.com/JetBrains/intellij-community/blob/idea/23... and https://github.com/JetBrains/intellij-community/blob/idea/23...

- https://github.com/JetBrains/intellij-plugins/blob/idea/233.... and https://github.com/JetBrains/intellij-plugins/blob/idea/233....

Xeamek
1 replies
7h49m

To be fair though, JetBrains' use case is fairly unique, as they basically want to implement parsing for as many languages as possible, all while doing it in a very structured and consistent way, with many other parts of their infrastructure depending on that parsing API. I think it's fair to say that those requirements are outside the norm.

mdaniel
0 replies
1h52m

I think that's a fine observation, but I'll also add that since their cases are almost always consumed in an editor context, they need them to be performant as well as have strong support for error recovery, since (in my mental model) the editor spends 90% of its time in a bad state. If I understand tree-sitter correctly, those are some of its goals, too, for the same reason

CipherThrowaway
1 replies
10h43m

The reason I find it easier to work with people who have a degree in Computer Science is that I don't have to convince them of the need for good algorithms and not to try to implement parsers or cryptography by hand.

Cryptography and parsers simply do not belong in the same sentence. There is never a time when it is appropriate to write your own cryptography. OTOH, most large compiler and interpreter projects have handwritten parsers, and many of them have handwritten lexers too.

Writing a parser can be simple enough to fit into a take-home assignment, and hand-written parser code ends up looking pretty similar to an LL grammar anyway. Parsing is also the easiest part of writing compiler or language tooling, so if a hand-written parser is too high a bar for the team then the entire project might be questionable.
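
A minimal illustration of that point, for a toy grammar (expr -> term ('+' term)*, term -> number ('*' number)*): each nonterminal becomes one function, so the code mirrors the LL(1) grammar it implements:

    #include <ctype.h>
    #include <stdio.h>

    static const char *p;                 /* cursor into the input */

    /* number -> [0-9]+ */
    static long number(void)
    {
        long v = 0;
        while (isdigit((unsigned char)*p))
            v = v * 10 + (*p++ - '0');
        return v;
    }

    /* term -> number ('*' number)* */
    static long term(void)
    {
        long v = number();
        while (*p == '*') { p++; v *= number(); }
        return v;
    }

    /* expr -> term ('+' term)* */
    static long expr(void)
    {
        long v = term();
        while (*p == '+') { p++; v += term(); }
        return v;
    }

    int main(void)
    {
        p = "2+3*4";
        printf("%ld\n", expr());          /* prints 14 */
        return 0;
    }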

I'm not saying never use a parser generator, but I would personally prefer to work on a project with a well tested hand-written parser than a project using a parser generator. Especially if it complicates the build process with extra tooling, or is something really dated like Bison or ANTLR.

kaba0
0 replies
10h42m

What’s dated about ANTLR?

quickthrower2
0 replies
18h6m

It is a culture thing. Try to avoid cowboy shops. The thing is the general standard seems higher than 20 years ago. Source control, unit testing and CI/CD are not controversial any more for example.

hyperthesis
0 replies
19h31m

Pushback on parsers. It's very difficult to provide useful diagnostic error messages with yacc/bison. So most languages end up with a hand-written recursive descent parser.

The only exception I personally know of is jq (uses bison). So it's difficult to produce helpful syntax error messages in the implementation of jq.

slily
13 replies
1d

I was actually taught how to build quality software (which is not limited to "having no bugs") in college, but I do not have the time or resources to apply this knowledge consistently in a corporate setting because of the pressure to deliver.

groestl
11 replies
1d

Because frankly, too much quality is not necessary, in many many cases. To know when you should or should not emphasize quality over quantity and speed, to meet a certain financial objective, is actually harder than writing quality software in the first place, I think.

Xeamek
7 replies
23h51m

This is false. It's just that the costs of low-quality code are much less obvious and harder to measure than the dev time. But the amount of bad code just piles on itself over and over, and we end up in a world where hardware becomes incrementally faster while software becomes slower and buggier. I mean, in the strict sense of the word an individual company will not pay those costs, but on a societal scale, how much time (and thus money) is wasted daily by all the people waiting 30 seconds for Windows Explorer to load? If your app has millions of users, literally every additional second your app wastes multiplies out to enormous numbers.

It's akin to pollution, really: an individual company making 'dirty' things won't see the consequences. But scale this mindset out and suddenly we wake up in a world where trillions of dollars are spent to counteract those effects.

groestl
6 replies
23h11m

This is false.

I wonder where you get the confidence to make such a strong statement, which is clearly not warranted. I want to challenge you to broaden your view a bit: not a lot of software is like Windows Explorer. Not a lot of software is performance critical. A lot of software can do with bugs, with many many bugs, to be honest. A lot of code is run fewer than 100 times before it's retired. Also, not a lot of the software written has many users, or enough users to support maintaining it. "Pollution" often affects just the author of the software themselves. Software is just a means to an end, and the end decides how much effort was warranted in the first place.

Xeamek
5 replies
22h41m

Not a lot of software is performance critical.

Not being performance critical doesn't mean it is justified to disrespect users by wasting their time.

A lot of code is run fewer than 100 times, before it's retired. Also, not a lot of software written has many users.

Obviously we aren't talking about some simple automation scripts here.

"Pollution" often affects just the author of the software themself.

You are misunderstanding the pollution analogy. I'm not talking about polluting the codebase with code smell.

I am talking about the costs of low quality being non-obvious and only revealing themselves at a global scale.

groestl
4 replies
22h34m

Obviously we aren't talking about some simple automation scripts here.

This is moving the goalpost, and it also ignores the fact that software exists on a spectrum from "simple automation script" to "messaging app used by millions". It seems you have a very narrow view of what software is, or what it is used for, and the constraints that apply when building it.

Xeamek
3 replies
22h28m

This is not moving a goalpost. Running a program fewer than 100 times total, across all its users, is just very little for anything that could be considered commercial. That really isn't a controversial statement. So I am simply excluding this category as an extremum.

groestl
2 replies
21h51m

Running a program fewer than 100 times total, across all its users, is just very little for anything that could be considered commercial

Running that kind of software for the central bank here. So kind of disputing your statement.

So I am simply excluding this category as an extremum.

Which ignores the long tail. Great approach.

Xeamek
1 replies
21h35m

What software are you running that gets fewer than 100 uses before it gets retired?

Great approach

Unironically better than trying to make prescriptions as broad and general as possible, because those usually are too generic to carry any actual value.

groestl
0 replies
21h5m

Yearly reports. They can be buggy, can be off by millions, due to rounding errors. They can crash. They can run for days. Nobody cares enough to rewrite them, because regulation will change before that effort amortizes.

Also note that I wrote "code" originally, because there can be programs which are run very often, but certain code paths are not, so my statement applies even for some parts of popular software.

The image I think would be valuable for you to consider is a curve where 20% of code gets 80% of all executions, and 80% of code gets the rest. It makes sense to put a lot of effort into writing the top 20%, but on any given day it is very likely you'll be working on the lower 80%.

slily
2 replies
1d

I agree in principle, but in my experience quality is not nearly prioritized highly enough. There is not enough understanding of quality attributes beyond the most visible ones like compute performance and stability (i.e. lack of bugs). And even for those I work on projects where people complain about lack of proper test coverage constantly, but it is impossible to dedicate time to improve that.

groestl
0 replies
1d

in my experience quality is not nearly prioritized highly enough

I agree with you on that one. IMHO it's because of that difficulty to know when to optimize for quality, but also because of sheer incompetence.

Xeamek
0 replies
23h47m

I'm pretty sure even those basic two, performance and stability, are extremely undervalued when you objectively look at how fast modern hardware is and yet how easy it is to find slowness simply in using devices in day-to-day environments.

teknopaul
0 replies
1d

Me too, Manchester uni was good.

I have written non-trivial systems deployed in 20 different sites that have never had a bug ever.

My best are usually second systems: focus on simplicity and standardisation, and resist scope creep.

100% coverage with unit tests. 100% coverage with integration tests.

I've written many things with zero bugs after delivery.

(and others that were never-ending quality nightmares)

If I am under pressure to deliver I get strict with TDD, since no time for bug fixing.

benreesman
11 replies
17h32m

You learn this at quality shops. 10-15 years ago roughly FAANG.

Today? TailScale and stuff like that.

You can just not have a bunch of pointless microservices and Docker in your runc and layer upon layer of JSON de/re-serialization, and not unit test merely to get coverage while ignoring quickcheck and hypothesis and fuzzing.
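
As a rough sketch of the quickcheck/hypothesis idea (hand-rolled here rather than using any particular library): instead of a single example-based assertion chosen for coverage, generate lots of random inputs and check properties that must hold for all of them:

    #include <assert.h>
    #include <stdlib.h>

    /* Function under test: clamp v into [lo, hi]. */
    static int clamp(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main(void)
    {
        srand(42);                          /* fixed seed, reproducible run */
        for (int i = 0; i < 100000; i++) {
            int lo = rand() % 2000 - 1000;
            int hi = lo + rand() % 2000;    /* guarantees lo <= hi */
            int v  = rand() % 10000 - 5000;
            int r  = clamp(v, lo, hi);
            /* Properties that must hold for every input, not one
               hand-picked case. */
            assert(r >= lo && r <= hi);
            assert(v < lo || v > hi || r == v);
        }
        return 0;
    }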

You can use stacked diffs and run oncall loops out of the team authoring the code and all of it.

You can minimize dynamic linking and all the other forms of unforced dependency error.

You can understand and play towards the runtimes of all the managed languages in your stack. You can insist that a halfway decent verbal grammar for the languages is “readability”.

It gets shouted down over and over but it’s public knowledge how to ship quality software.

ttymck
4 replies
17h29m

I think the prevailing problem is that quality software often doesn't outperform shit software in terms of revenue, right?

foobiekr
1 replies
16h51m

It's not that, it's that quality doesn't matter and mostly neither does technology.

If you are doing network switches or GPUs or server CPUs or whatever, yeah, technology matters. If you are building pretty much any SaaS, MCCA, etc. the tech is literally irrelevant and mostly the more "new" tech you use the worse off you are.

Quality also only matters in some contexts and those are even rarer than the above.

Timing and solving a problem is all that matters in terms of revenue. As long as you aren't too bad, it's fine. The only real five-nines requirement for SaaS is that 99.999% of them would be fine with one or two nines.

valenterry
0 replies
10h37m

That's not true at all. Very often I've seen features being delayed or completely disregarded due to technology decisions.

It then looks like it was a "business reason" in the end, but it's not so easy to just distinguish what the root cause really was.

benreesman
1 replies
17h25m

CloudFlare and TailScale and that crowd don’t seem to be chancing it?

bottled_poe
0 replies
13h20m

They are outliers. Most startups focus on feature velocity and sales first, quality is an afterthought, regardless of what they might say. Also, if your project is open source, quality has a lot more weight towards uptake.

CyberDildonics
4 replies
17h19m

roughly FAANG

What does roughly FAANG mean?

Today? TailScale and stuff like that.

The VPN?

You might have some ideas about quality software, but this comment is incomprehensible.

anacrolix
2 replies
15h40m

Someone's butthurt about never working for a company with a recognisable name?

CyberDildonics
1 replies
15h35m

How did you get to that? I can't even figure out what the comment I replied to is talking about. It isn't made up of full sentences and real words.

slvng
0 replies
11h7m

Not well written but understood the same.

phendrenad2
0 replies
15h31m

OP is saying that 10-15 years ago FAANG companies (and a few more) were the only ones writing quality software. Now, FAANG doesn't care anymore but there are new unicorns in the making that do care - TailScale being one of them (debatable)

pkkm
0 replies
17h8m

Any advice on how to find these quality shops and this quality knowledge? Any particular things to look for, or particular books/courses you recommend?

withinboredom
10 replies
22h32m

First, define quality software. I'll wait.

ragnot
5 replies
22h25m

I'll take a stab. Quality software is software that is testable, able to adapt to new features and is architected to match the current organizational structure of the software team so that communication and dependencies don't have an impedance mismatch.

withinboredom
2 replies
22h14m

Now the hard questions:

- why is testable software higher quality? Does it add value to the software? I'd venture that untestable software has the same value as (if not more than) testable software (due to time to market). You can write software that is 'obviously correct' and "high quality" at the same time, without any tests.

- Why does software that can adapt to new features increase the quality? If that is the case, we must argue that WordPress is extremely high-quality software. Or SAP.

- How does architecture influence quality? If that is the case, then there isn't any need for different architectural styles since there should be "one true style" that has the best quality software.

speedgoose
0 replies
21h33m

Testable software usually has a better quality because you can automate some parts of the quality assurance.

Sacrificing quality assurance to favour other aspects is common, but the quality usually suffers.

A company favouring time to market over testability is likely to release buggy software. They can get away with it.

Adaptability is a common quality, but you can find counterexamples. WordPress and SAP are successful software that may not check all the quality boxes.

Some architectures are for sure worse than others, and there isn’t one good architecture for all kinds of problems.

ragnot
0 replies
21h27m

Appreciate the questions!

why is testable software higher quality? Does it add value to the software? I'd venture that untestable software has the same value (if not more) than testable software (due to time-to-market). You can write software that is 'obviously correct' and "high quality" at the same time, without any tests.

Note I said testable software, not software with tests (there is a difference!)...I'd agree that software with tests (which is by definition testable software) has a huge developer cost to it that may not always be in the best interest of the company (like you said, time to market might be important). But in my experience, writing code in a way that can be tested later is only marginally more costly (time-wise) than writing code that isn't. A good example of this is writing modules that communicate with message passing and have state machines over direct function calls. The former has a slightly higher cost for dev time, but you can always retro-fit tests to it once you've achieved market penetration. You can't always do that with direct function calls.
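
A minimal sketch of what that shape can look like (the state and message names are made up): the module's behaviour is a pure transition function over messages, so tests can be retrofitted later just by feeding it message sequences, with no surrounding system attached:

    #include <assert.h>

    typedef enum { IDLE, CONNECTING, ONLINE } state_t;
    typedef enum { MSG_CONNECT, MSG_ACK, MSG_DISCONNECT } msg_t;

    /* Pure transition function: current state + message -> next state.
       It touches no sockets or globals, so it is trivially testable. */
    static state_t step(state_t s, msg_t m)
    {
        switch (s) {
        case IDLE:       return m == MSG_CONNECT ? CONNECTING : IDLE;
        case CONNECTING: return m == MSG_ACK ? ONLINE
                              : m == MSG_DISCONNECT ? IDLE : CONNECTING;
        case ONLINE:     return m == MSG_DISCONNECT ? IDLE : ONLINE;
        }
        return s;
    }

    int main(void)
    {
        state_t s = IDLE;
        s = step(s, MSG_CONNECT);    assert(s == CONNECTING);
        s = step(s, MSG_ACK);        assert(s == ONLINE);
        s = step(s, MSG_DISCONNECT); assert(s == IDLE);
        return 0;
    }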

Why does software that can adapt to new features increase the quality? If that is the case, we must argue that WordPress is extremely high-quality software. Or SAP.

This is a good point that you bring up. I think what we are getting at ultimately is that quality and value are distinct entities. Software can have high value without being high quality. In my mind, being able to provide the business with new value-producing functionality without causing a spike in bug reports is my (admittedly vague) standard.

How does architecture influence quality? If that is the case, then there isn't any need for different architectural styles since there should be "one true style" that has the best quality software.

Architecture has to match how the software teams communicate with each other. Like actually communicate, not how the org chart is made (see Conway's Law). So my point is then that if there are two separate teams, your code should communicate between two "modules" that have an interface between them. Just like real life. It would be silly to implement a micro service architecture here. That's why Amazon's SOA design works for them: it matches how teams are organized.

NBJack
1 replies
21h37m

Good start, but too broad and open for interpretation.

- Who gets to define testability?

- I want to add a coffee maker to my crash test dummy; is the lack of room for the filter and water tank a sign of a bad design? Or not flexible enough for my feature?

- (cue meme) "You guys have organizational structure?"

- Who gets to claim the impedance mismatch? What are those consequences? Wait, where are the dependencies defined again outside of the software?

ragnot
0 replies
21h16m

- Who gets to define testability?

I do (just kidding!)... Testability is the ability to add testing at a later point. There is no hard definition of this, but if you can't test at least 75% of your public-facing functions then I'd say you don't have testability. Remember, testability means you can have a tighter feedback loop, which means that you don't have to test in production or in the physical world. This means you get where you want to go faster.

- I want to add a coffee maker to my crash test dummy; is the lack of room for the filter and water tank a sign of a bad design? Or not flexible enough for my feature?

I know you are joking, but imagine for a second that your business did in fact invent a brand new way to test crashes and that coffee makers were the key to breaking into that market. If the dummy can't accommodate that then...yes! It is a bad design, even if it was previously a good design.

- (cue meme) "You guys have organizational structure?"

Remember: there always is an organizational structure, with or without a formal hierarchy. You want to match your software to the real one.

- Who gets to claim the impedence mismatch? What are those consequences? Wait, where are the dependencies defined again outside of the software?

There are no "the company blew up" consequences with this type of failure mode. Instead you get a lot of "knock on" effects: high turnover, developer frustration, long time to complete basic features and high bug re-introduction rates. This is because software is inherently a human endeavor: you need to match how it is written to how requirements and features are communicated.

speedgoose
0 replies
21h41m
shrimp_emoji
0 replies
22h30m

Written in Rust

poisonborz
0 replies
21h29m

My entry: one that is easy to refactor regardless of code size.

If this is given, every other metric, like features, bugs, and performance, is just a linear function of development resources (maybe except documentation, but that is kind of an externality).

ahoka
0 replies
20h27m

Easy! Software that fulfills its requirements the cheapest.

But how you define the requirements is the real question (and problem)…

jupp0r
4 replies
23h13m

1. The premise that college teaches you how to build software in industry is a pretty wild claim.

2. Is this article from the 90s, when we shipped software on CDs or floppy disks? In today's world, where the concept of a "release" is often blurred by continuous delivery pipelines (and this is considered a good practice), having a quality assurance department manually ensuring that no bugs are in a release seems downright archaic.

ponector
1 replies
20h14m

And then you remember stories like the one about Boeing, and how their 737 Max was tested by cheap contractors from India.

There are tons of software without continuous delivery.

jupp0r
0 replies
15h39m

Absolutely, and industries like medical devices and aviation have extremely strict regulations and procedures regarding the testing of software. The article not mentioning any of those made me conclude that the author is referring to regular software.

bluGill
1 replies
20h43m

Not everyone is writing a webapp where you can roll out upgrades anytime CI passes, or a phone app that you can update every week. Some of us work on code that will be shipped in a device that is not easy to upgrade.

jupp0r
0 replies
15h35m

Not all software that is frequently updated is a web app. Ask Tesla, Apple, Sonos, Garmin, ...

Anything connected to a network could be released frequently if people wanted to. Not everything is connected to a network though.

tanepiper
3 replies
20h46m

Maybe a spicy take (FWIW I'm also self-taught)

Software engineering quality isn't something you teach, it's something you learn.

withinboredom
0 replies
20h36m

Usually, the hard way.

I've worked on so many poor-quality projects. I've had so many heated conversations trying to teach people that there are no such things as rules in SE and it's important to _know when to bend/break the rules_. Rigorously following rules doesn't mean high quality, automatically. Some things simply don't exist in the little world you're building.

Translations, for example, usually live outside of any layered cake you are making. If you write a string that will be shown to a user deep down in your code, it needs to be marked for translation and translated. If you try to layer it ... gods help anyone who comes along to change that switch statement.

There are lots of other cross-cutting concerns that people try to put into a box because of "rules" that don't belong in a box. That's usually where your quality starts to plummet.

ponector
0 replies
20h11m

It is the same as learning about the backup and restore process. And then you learn the hard way that you should have been testing whether your backups can actually be used to restore data.

bob1029
0 replies
20h12m

I think developing high quality software is more art than engineering.

The most useful pieces of code typically have to deal with some of the most ridiculous constraints. Determining a way to integrate these such that the UX doesn't suffer is the real talent.

The only pedagogical model that makes sense for me anymore is the apprenticeship. You can definitely learn it on your own, but it's a hell of a lot faster to observe someone with 20k+ hours XP and skip some of the initial lava pits. Perhaps some of those lessons are important, but learning and knowledge are not linear things.

sanderjd
3 replies
21h45m

I have been, at various points in time, taught how to build quality software. I think the large majority of people I work with have been taught about this as well. So I'm not sure who the "you are never taught ..." is referring to here. Should it perhaps instead be "I was never taught ..."?

bluGill
2 replies
20h33m

Were you really taught that, or were you taught cargo-cult things that don't make for quality software? I've had some of each in my past.

This is a big area that I wish researchers would focus on. What really does make for long-term high-quality software? What are the actual trade-offs? How do we mitigate them? What are the limits of TDD - is it good? Detroit or London mocks - when to use each? When is formal verification worth the time and effort? There are a lot of places where we as an industry have ideas but are in heated debates without knowing how to find the truth.

sanderjd
0 replies
20h3m

No, I have been, at various times, taught about a number of different techniques that are used in the endeavor to build quality software, along with lots of discussion about the tradeoffs between them and when they may be more or less appropriate.

It certainly is not something with a single easy answer, and I certainly agree that it remains a fruitful thing to continue researching, but that doesn't mean that there is nothing to be taught about it. There is lots to be taught about this, and lots that is taught about it, to lots of people.

ponector
0 replies
20h5m

My colleague, a senior full-stack developer with a CS master's degree, has been asking: "why should I write tests if I can write new features?" And that is what managers often think, because you can present new features to the business but can't do the same with tests. They have no immediate value.

aidenn0
3 replies
23h18m

I wonder how much of the lack of QA is rational. That is to say, for most projects does shipping with lots of somewhat hard-to-find bugs actually hurt the bottom line?

For some classes of bugs it can (e.g. if the software is so bad as to open you to a class-action lawsuit; in B2B software, bugs that put you in breach of contract), but for many classes of consumer software, it's not clear to me that shipping software that works better is rewarded. Picking not-too-buggy software ahead of time is hard, people are slow to switch (even when the people encountering the bugs are the people selecting the software, which is often not the case), and people are good at subconsciously avoiding triggering bugs.

nikhil896
2 replies
23h10m

It starts mattering more for consumer software when you reach mass scale. Somewhat hard-to-find bugs at the scale of hundreds of millions of users (like a social media company), turn into bugs faced by hundreds of thousands of users.

But at that scale (in my experience), QA is up front and center and is typically a core pillar of engineering orgs.

aidenn0
1 replies
20h21m

It starts mattering more for consumer software when you reach mass scale. Somewhat hard-to-find bugs at the scale of hundreds of millions of users (like a social media company), turn into bugs faced by hundreds of thousands of users.

From a cynical point of view, if those hundreds of thousands of users will use your product despite the bugs, does it matter?

mdaniel
0 replies
15h36m

People are only as loyal as their opportunities; if the competition is mostly the same as your product but has either fewer bugs or bugs in a less painful flow, buh-bye

I'm super cognizant that the cited example of "social media company" is its own special ball of wax, due to the network effect, and I wish to holy hell I knew how to fix that bug :-)

johnnyanmac
2 replies
21h31m

Neglecting QA is a shame because 90%+ of all students work in a company context after they finish their degrees. It will be necessary to deliver software without bugs in time.

once again bringing up the hot debate of "are colleges job prep or research institutions"? Many students these days will proceed to grab a job, but is that something a university should strive to be?

I wish at the bare minimum there were more proper apprenticeships for companies that want some specific type of software engineer instead of 3 month vibe checks as it is right now. Or bootcamps made more or less to study to the test instead of what actually brings value. But I guess no one is really chomping at the bit to change the status quo

bluGill
1 replies
20h38m

is that something a university should strive to be?

Universities taking public money (including students' government grants/loans) should strive to make society better. Part of that is getting kids into the good jobs that society needs done.

There are a few "retired" people taking classes that they are paying for on their own just for fun. If those people think they are getting value despite taking subjects society doesn't value I'm fine with that. The majority of students though are young people that society is trying to invest in to make a better future, and so the more universities prepare those kids to make a better future the better.

maximus-decimus
0 replies
9h57m

People making roads and building powerplants also take government money. Do you expect them to prepare people for software dev jobs too?

Just because companies decided to start requiring degrees because they're too cheap to train their own staff, IMO they shouldn't get to divert universities from their original mission, which is education and research. After all, those are also important to society, and if universities don't do them, who will?

darth_aardvark
2 replies
21h25m

Unrelated to the article, I could immediately identify the image used as definitely AI-generated. But I can't identify any reason why. It's a normal picture of a stone brick wall. Yet I'm 100% sure it's AI.

No shame to the author for their choice; replacing stock images with generated ones is a great use case. It's spooky to me that we've been so quickly trained to identify this subconsciously.

realprimoh
0 replies
20h53m

AI-generated images have an unnatural "smoothness" to them whereas reality is more imperfect. At least that's my hypothesis.

dvfjsdhgfv
0 replies
21h14m

True. I think in this case it's because of the texture of the bricks. It looks like they were wrapped in cloth or something. This seems a common texture in many AI-generated images.

ck45
2 replies
20h49m

This has been previously handled in the excellent book "The Problem with Software - Why Smart Engineers Write Bad Code" by Adam Barr (https://mitpress.mit.edu/9780262038515/the-problem-with-soft...)

withinboredom
1 replies
20h32m

I literally can't figure out how to buy the book from that site. I clicked ebook, then amazon, and the link is dead. You should probably just share a working link.

andrewl
0 replies
18h58m

The Problem with Software - Why Smart Engineers Write Bad Code by Adam Barr

https://www.amazon.com/dp/026203851X

whalesalad
1 replies
16h32m

I don’t think it can be taught, at least not in a classroom. Apprenticeship and learning from wise mentors in real world environments is the best way to learn.

mdaniel
0 replies
15h41m

In my opinion, there's a tiny bit of nuance: it can for sure be taught, but to be internalized requires experience, likely being on the wrong end of something sharp (pager, company going out of business, or you yourself experiencing a bug that otherwise only a customer would see)

Kind of related to https://english.stackexchange.com/questions/226886/origin-of...

throwaway_08932
1 replies
1d

You may not be taught to build it, but everyone sure seems to know how to argue it.

0xdeadbeefbabe
0 replies
1d

Even if this teaching quality thing existed, everyone would still argue about it.

teknopaul
1 replies
1d

Author needs to change shop

Where I work it is at least 50% QA effort - 50% by budget - and devs do well over 50% automated testing as part of what we call development.

Anything less and projects take longer, because bugs found later cost so much more to fix.

bluGill
0 replies
1d

When I first started in the 1990s, I was told testing was 60% of the product budget and traditional code writing 20%. The remaining 20% was architecture and other design work. We didn't have unit test frameworks, but we did spend a lot of time writing throwaway test fixtures (which great developers were already generalizing into test frameworks, kicking off the unit test revolution).

tbm57
1 replies
21h49m

To be realistic, it's important to not over-engineer QA measures with a big upfront investment. We shouldn't block progress of the project as a whole and neither will we get the necessary buy-in from all stakeholders with this approach.

I suspect that a lot of the bad code that is out there exists because teams are constantly in crunch time where it is important to get certain features out by a deadline.

From that perspective, this statement is kind of a contradiction. If every minute is vital to finishing a feature on time, then the act of writing tests will always block progress of the project

Basically, I just wish it were more innately profitable to write quality software

bradley13
0 replies
21h14m

If every minute is vital to finishing a feature on time, then the act of writing tests will always block progress of the project

So you will deliver that feature, it will fail for your customers, and you will be in panic-mode trying to fix it on live systems. Seriously, that's going to happen.

Finishing a feature must always include at least some minimal time for testing. That testing needs to be done by someone not involved in the development. Developers sometimes misunderstand requirements (or think they "know better") and they will test what they implemented, not what the requirements actually say.

makach
1 replies
19h21m

What the hell? Yes it is taught. How it is interpreted afterwards is very dependent on the team or community you surround yourself with. You get all the tools from schooling; how we use them is entirely up to us.

makach
0 replies
19h19m

Obviously the entire article is an opinion piece and flame bait and I fell for it.

eternityforest
1 replies
20h18m

I completely understand university teaching algorithms over real dev stuff.

Algorithms are hard and take serious work; they're basically math. University seems like the ideal place to learn A* search or quicksort. I can't imagine figuring that out without a lot of effort and time spent just focusing on that.

What I don't understand is why programmers themselves focus on algorithms over stuff like vague heuristics to guess whether this JS framework will have you tearing your hair out in a month.

That kind of thing isn't really hard and doesn't require any heavy discipline and focus; it's just something you pick up when you've tried a few different systems and notice stuff like "Oh look, that linter caught this bug for me" and "Hey look, there's not much autoformat support for Mako, maybe I'll just use Jinja2 like everyone else".

Instead they tell us to do stuff like code katas and endlessly polish our coding and algorithm skills rather than practice working on big stuff and managing complexity.

Maybe it's because algorithms are the hard part, and being really good at that is needed to work on the real cutting edge stuff.

ivix
0 replies
18h20m

I suspect, as with most things in education, they focus on small self-contained problems because they're easier to teach and easier to grade. These toy problems end up being all students know, and therefore are what employers select on.

eftychis
1 replies
23h34m

As the sibling comment by `@wellpast` noted, but to extend it with my point of view:

We can roughly say we can have three out of four: (high) quality, (low) time, (low) communication complexity, and (low) money. (Time is the dependent variable here.)

People are trying to apply factory processes and structures to a team sport, an engineering discipline. You do not teach or build a basketball team by breaking down each attack phase into steps and checkmarks.

You try to minimize communication and make the team work as one. It is a team- and individual-building exercise, not a process-building exercise. You make a plan, and follow Moltke the Elder's conclusion:

"no plan of operations extends with any certainty beyond the first contact with the main hostile force."

(Or paraphrased as you have heard: No plan survives contact with the enemy.)

All (types of) Engineers know this. But software engineering is "special." And it is not a "move fast and break things issue." That is part of all engineering or team playing too.

It is the type of business mentality where, because a plan did not go exactly as expected, we need to add more process. Whatever that process may be. Because "if I as a manager add a process, then when the next plan fails I am covered, and I will blame the individuals."

Process has a place to ensure things happen in a legal and moral framework, and to minimize adverse circumstances -- e.g. accidentally betting all the hedge fund's money when running tests.

Process is used differently in most startups and corporations, and not with the team in mind.

swader999
0 replies
23h27m

The construction metaphor is a bad analogy. The compiler does the construction, dev teams do iterative design, ideally with frequent feedback and adjustment.

Do you ever yell at a traditional architect and ask them when it's going to be done? It's always when the client is happy or makes their mind up about it. A lot of dev is like this.

continuational
1 replies
23h56m

The process of building quality applications starts long before any code is written.

You need to understand the domain, you have to design something that solves an actual problem or delivers tangible value to someone, you need a holistic approach to user experience.

withinboredom
0 replies
22h34m

Even if you understand the domain, you can build the wrong thing, or simply not know what you don't know, and thus can't even learn it.

I've seen this too many times to count.

codenesium
1 replies
20h21m

I ask how you build quality software in interviews. A lot of people are caught completely flat-footed by it. They have no idea.

ponector
0 replies
19h58m

What if they ask you: and you? Are you actually building high quality software?

What are your quality gates? How many open bugs are there in production? How often do you close bugs with resolution "won't fix" (because of budget issues)? How often do you have production incidents? What is your testing strategy? How are you testing your requirements before assigning them to the developer for implementation? Do you hire external professionals for security testing?

Communitivity
1 replies
20h20m

There is a corollary. You need to learn how to build quality software. You also need to know what level of quality vs completeness tradeoff you need to make.

Imagine I have a deadline of Jan 15th to demo features x, y, and z to senior leadership (who then will make funding decisions that could impact my project), and I get to Jan 15th with x, y, and not z - or worse, none of them working fully. But the code quality is high, there are no TODOs hanging around, no extra development focused logging, no duplication of code, no leaky abstractions.

That is a 100% fail in leadership's eyes, unless you have a really good story on why z didn't get shipped (and code quality is NOT that).

All those things I listed will have to be addressed at some point, and the longer they go without being addressed the harder they will be to address. But if you need to do that to meet deadlines, then you do it.

Of course, if you are in a place where leadership allows you to work from a backlog and demonstrations and features to demo are scheduled not arbitrarily based on leadership's schedule/interest, but on the features that are newly shipped since the last demo, then you are in luck.

At the end of the day the important thing to remember is that you are not being paid to build software. You are being paid to provide a solution to your customer's problem. Other than CTOs and some forward thinking leaders, they don't care about the software. They care about whether the problem is solved, did it cost me more or less than expected in labor and materiel, and is it compliant with necessary laws/regulations.

beebmam
0 replies
20h18m

I think that "quality software" is not well defined, so until it's well defined, we're not talking about the same thing.

ChuckMcM
1 replies
20h33m

From the article -- At some point, I realized that I wasn't using the right arguments either. Explaining that the software will be 'more stable' or 'make maintenance much easier' is not palpable for someone who doesn't work in the codebase themselves. We need to speak about money. As developers, we need to speak about the cost of not doing QA. This is the language of business and managers in general.

The more general way of saying this is, "If you need someone's approval to do something, explain it as a good idea for them." It took me a bit to learn this in engineering: you would think "but it will be correct!" would be an unassailable argument, but as the author notes, the person who has to sign off on it may not care if it is correct, even if you passionately do care. This works for everything at the job, not just fixing software correctly.

sublinear
0 replies
19h4m

I've found the only true way out of this hellish situation is to work at a place mature enough for the leadership to already understand all this.

If you have to explain why quality matters, they're at least as ignorant as you if not more. They deserve their fate. The upshot is you'll probably also get paid way better and more quickly develop a better sense of how business is supposed to be done at a mature company.

Of course you also have to deliver on your promises that all the time you're spending will improve things and isn't just some amateur quixotic itch.

ChrisMarshallNY
1 replies
6h24m

I really liked the umbrella demo :)

Here's my take on testing[0].

I write Quality software. It's a matter of personal satisfaction. Many folks can't really tell the difference between relatively good software, and very high Quality software, but the cost difference can be quite large. We don't get rich, writing top-shelf software.

The issue is, in my opinion, that we don't even produce much "relatively good" software, these days.

I won't get into it. It doesn't win me any friends. I was fortunate to find a company that shared my values. Many of their processes would absolutely horrify a majority of folks on this site, and it was, quite frankly, often maddening.

But you can't argue with results. They produced extremely expensive kit for over 100 years, and it is regularly used on the ISS.

[0] https://littlegreenviper.com/miscellany/testing-harness-vs-u...

tmshu1
0 replies
4h35m

Thank you for sharing. I just read through several of your blog posts and especially resonate with your “evolutionary design”. The idea of integration tests/test harness first over unit tests makes a lot of sense to me too. As a one person team myself, the “art” of creating quality software products, at speed, is revealing itself and is quite fascinating.

It’s not everyday that devs like me get to learn from someone with as much experience as you have. Thanks again for sharing your knowledge!

xav0989
0 replies
6h6m

My university offered both a CS degree and a Software Engineering degree. While both open the door to tech jobs once in industry, the contents varied in the upper years. The software engineering one started with common engineering courses, then common CS classes, and finished with project lifecycle classes such as QA & requirements gathering. In contrast, the CS program started with a mix of core engineering and science classes, progressed to the common CS classes, then went further into advanced CS and algorithms classes.

winddude
0 replies
22h7m

I've certainly been frustrated at others for shitty software. We're definitely taught what bad code is.

wellpast
0 replies
1d

'If we don't do it now, development efforts (and therefore also costs) will be up 15% in 4 months.'

Yeah you won't get to a point where you'll have a valid-enough metric to make this point.

I was at a startup once. The two founders said "don't write unit tests". I wasn't going to argue with them. I understood what they really meant. We've been too slow, we need to ship as fast as possible. I shipped fast and I shipped quality (ie low defects and outages). I wrote unit tests. They didn't need to know. They just needed the outcome.

The elephant in the room in all of these conversations is that you can walk into any software development shop and they just don't know how to ship both fast and at quality. No matter how much an organization or team tries to revisit/refactor their dev process year-to-year, they're still shipping too slow and at mediocre quality.

The truth is there isn't a magic formula. It's an individual craft that gets you there. It's a team sport. And the context is different enough everywhere you go that, yeah, sure, some lightweight processes might abstract across the universe, but there's very little bang for those bucks. Far beyond any other facet of things, you really just have to have a good team with experience and the right value-system/value-delivery-focused wisdom.

warkanlock
0 replies
1d

Making good (!perfect) software is a function of three constraints: knowledge, economic resources, and time.

You can mix those three together and produce a desired output, but don't expect perfection; perfect software only appears when all three variables tend to infinity.

vaxman
0 replies
11h51m

QA/Integrations/SQM and how to avoid rejection were among the skills taught hands-on AFTER university by the great masters -- the ones who were later unceremoniously forced into other occupations (auto leasing, banner print shops, etc.) and into cracking open their 401Ks to make their house and family support payments when the dot-com bubble crashed the NASDAQ and related capex for six miserable years.

For example, at 16, my QA manager (whose husband had a Nobel Prize) and my tech lead (who wrote malloc/free for Unix and was learning VMS from me as fast as I could learn C from him) were totally opposite forces, separated by HR and different directors. QA didn’t code anything but batch jobs/shell scripts and contracted back to Engineering for any apps they wanted (or simply required them to be in the release package). If you wanted to keep your job, you didn’t submit programs with performance/functional deficiencies and definitely not logic issues as compared to the SPEC that all parties signed at the outset of the project (put that in yoh Kan Ban, Man rofl)

Ok another arrogant post from older former child prodigy! Happy Holidays!

valand
0 replies
4h18m

A software business is made when someone needs an automated computation and buys a program from someone else.

The buyer has a specific requirement, which is distilled into a specification. The specification is implemented. Implementation that doesn't match with the specification is a bug.

Now in order to verify the implementation's correctness relative to the specification there must be a QA.

The idea that people forget to add QA to the software development process is wild, because it means people are forgetting how to conduct business.

tobiasSoftware
0 replies
23h31m

Honestly I think the root problem is that universities have a degree in computer science, whereas what most people want is to learn to build computer software.

The two overlap most of the time in subtle ways where the science gives an important foundation, such as learning Big O notation and low-level memory concepts where exposure helps. I've personally seen this with a smart coworker who didn't go through university and is great at programming, but I'll catch him on certain topics, such as not knowing what sets and maps were, or sleeping for a second instead of properly waiting on an event.

However, the differences between computer science and building software are problematic. Watching my wife go through university, she's had to struggle with insanely hard tasks that will not help her at all with software, such as learning Assembly and building circuits. The latest example is the class where she's learning functional programming, which is not actually teaching it to her. Instead, they combined it with how to build a programming language, so instead of getting toy problems to teach the language, she has to take complex code she doesn't understand well, which generates an entirely different programming language, and do things like change the associativity of the generated language. In the end, she feels like she's learned nothing in that class, despite it being her first experience with functional programming.

On the flip side are the things that are necessary for software that aren't taught in university, like QA. For me personally, back when I was in university a decade ago, I never learned about version control and thought it was just for backups. Similarly, I never learned databases or web, as the required classes were instead focused on low-level concepts such as Assembly and hardware. My wife is at least learning these things, but even then they often seem taught badly. For example, when they tried to teach her QA, instead of hardcoded unit tests, they made her feed in random inputs and check that the output was correct. Of course, checking the output can only be done by rewriting all of your code in the testing files, and if there's a bug in your code it'll just get copied, so that kind of defeats the purpose. Even when the assignments are relevant there is often no teaching behind them. For example, her first ever web code was a project where they told her to hook up 6 different technologies that they had not gone over in class, with only the line "I hope you've learned some of these technologies already".

sinoue
0 replies
19h4m

The competitive CS schools where many of the students needed good sample code projects have an innate sense of building quality software. Usually because they know it'll be showcased, but also because they haven't gotten lazy with shortcuts or been overly managed to spend time developing instead of fixing. I thought it was funny the article referenced the famous umbrella monster. Here is the longer clip: https://coub.com/view/284lib

sharts
0 replies
13h35m

The easiest way to tell if you're about to join a bad team or company is to look at their QA process.

If it's slim to non-existent, you'll be working more hours with more on-call "incidents" and no life.

satisfice
0 replies
20h18m

Almost no university teaches software testing in a competent way. For some years, Cem Kaner and I (he, a professor at Florida Tech; me, a proud high school dropout) ran a small annual conference called the Workshop on Teaching Software Testing. We specifically invited Computer Science professors to come and grapple with the problem. Thus, I personally witnessed and engaged with the sorry state of practice with teaching people about testing.

Cem developed a great testing class for Florida Tech, later commercialized under the name BBST, that drew in contributions from prominent industry voices. Otherwise, I don't know of any other University that has put the effort in. An exception may be what Tanja Vos is doing at Open University.

The problem I have with CMU is that they are the source of the Capability Maturity Model, which is simply incompetent work, justified by no science whatsoever, which tries to apply Taylorist management theory to software engineering. It never caught on despite all the marketing they've done-- it's only popular in military circles. I shudder to think what they are telling students about testing.

You can disagree with me, of course-- but in doing so you have to admit there is at least a schism in the industry among various schools of thought about what testing is and should be. My part of the field considers testing to belong to social science, not computer science.

revskill
0 replies
1d

Yes, knowing how to build quality software is the bread and butter of a software engineer, so why share it?

What you read on the internet is mostly ... bullshit.

readyplayernull
0 replies
18h53m

I work in QA automation and also develop my own projects. There is more to quality assurance than QA. When you spend days creating something and then throw it away to make it better, that IS quality assurance! That deleted code, that deleted architecture, that deleted design is the cost of quality. A totally unappreciated aspect of QA.

rcbdev
0 replies
20h6m

In Austria these topics are usually researched at universities of applied sciences (Fachhochschule) rather than universities (Universität). Some technical universities here attempt to combine these disciplines, with moderate success.

For example, the largest technical UAS in Austria offers Computer Science as an undergrad degree but only does research on Software Engineering at a graduate level.

pushedx
0 replies
13h45m

I went to a University with required Co-Op experience in industry. There was a great feedback cycle of students coming back to classes and writing tests and design documents for assignments that didn't even require them. Reading articles like this makes me really grateful for that.

prosaic-hacker
0 replies
22h12m

The whole article has a straw-man feel to it. It is not the senior developer's responsibility to create a proper QA policy for the project. It is good for the lead to know how to implement good QA processes.

HOWEVER, the real culprit are the MANAGERS.

After 50+ years of skanky software development policy aimed at lowballing costs, it's time to blame the right people. No amount of cajoling or "business/budget"-speak manipulation is going to fix a fundamental flaw in how managers at that level are trained and behave.

We have to stop being apologists for mistakes not of our making. If using QA techniques would increase the managers' bonuses, then we would see them being used. If all it does is make better software, then this post could be rewritten in 50 years and would still be relevant.

phkahler
0 replies
20h38m

It doesn't help that early CS courses cover stupid examples like a recursive implementation of Fibonacci when the obvious solution is a loop. Tail call optimization is another thing that should be reconsidered.

Plain imperative structured code is usually the cleanest and easiest to understand. Introducing abstraction too early I think confuses people and encourages complexity where it's not needed.
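Roughly, the contrast (a quick illustrative sketch in TypeScript, not from the article or any course material):

    // Naive recursion: nice for teaching induction, but O(2^n) calls and deep
    // stacks -- and it isn't even tail-recursive without a rewrite.
    function fibRecursive(n: number): number {
      if (n < 2) return n;
      return fibRecursive(n - 1) + fibRecursive(n - 2);
    }

    // The obvious loop: O(n) time, constant space, no stack to blow.
    function fibLoop(n: number): number {
      let prev = 0;
      let curr = 1;
      for (let i = 0; i < n; i++) {
        [prev, curr] = [curr, prev + curr];
      }
      return prev;
    }

The recursive form is the one most intro courses grade; the loop is the one you'd actually ship.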

phendrenad2
0 replies
15h21m

Excuse me, I was taught how to build quality software, but I can't do it because the MBAs who run the company haven't been taught how to build quality software, and the board of directors who hired upper management were not taught how to write quality software. Most places I have worked the opinions of software developers were thoroughly ignored at best.

oopsthrowpass
0 replies
22h25m

Yeah, the dimensions discussed in this article are somewhat advanced stuff and come from experience. DRY is a relatively basic concept and easy to grasp, and unfortunately mid-level engineers do really dangerous stuff in the name of DRY; horrible abstractions get created that break down and create a horrible mess when the requirements change.

In my experience it is generally wise to avoid abstractions and copy/paste things a couple of times; once the code base matures, good abstractions will be more obvious. Even then it's good to think about future changes: will these 2 things want to evolve separately in the future? If the answer is YES, then maybe coupling them is not a great idea. I think there was a really good Kent Beck talk about coupling vs cohesion somewhere.

Another thing to think about is breaking things: if changes are local to one single endpoint, then any changes there can only break that endpoint, and the edge cases and scenarios to consider are only relevant to that endpoint. When changes to a core abstraction are required, hundreds of use cases/edge cases need to be considered. Why are we creating so many core abstractions in our systems in the name of DRY?

I've also found that the more moving parts you add, the harder a system becomes to learn; the S in SOLID is probably to blame for that. The only thing the single responsibility principle is really useful for is unit tests (easier to mock), but it makes things many times harder to understand. If the actual functionality is not local to the file, things become ungreppable via code search, and understanding the entire system requires an IDE and jumping around to each and every beautiful lpad() implementation, trying to piece together what is happening one 3-line function at a time.

Then there is also layering to consider: if 2 pieces of code look somewhat similar but belong to different layers (for example, the controller and the DAO layer), then care must also be taken not to make an abstraction that couples them together, or to couple 2 unrelated modules that could otherwise have their own life cycles.

These are just some of the aspects I think about when creating abstractions, but somehow I see engineers focus too much on DRY. Maybe they got burned badly at some point in their career by forgetting to change something in 2 places?
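A toy sketch of that trade-off (illustrative names only, nothing from a real codebase):

    // Premature DRY: one "shared" validator that grows flags as requirements diverge.
    function validateName(name: string, opts: { allowEmpty?: boolean; maxLen?: number } = {}): boolean {
      if (!opts.allowEmpty && name.trim() === "") return false;
      return name.length <= (opts.maxLen ?? 50);
    }

    // Duplicated-for-now versions: each endpoint owns its own rule and can change it
    // without auditing every other caller of a core abstraction.
    function validateUserName(name: string): boolean {
      return name.trim() !== "" && name.length <= 30;
    }

    function validateProductName(name: string): boolean {
      // Products may legitimately be unnamed until published.
      return name.length <= 120;
    }

The duplicated versions look "wetter", but a change to the product rule can't break user sign-up.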

oglop
0 replies
3h46m

My favorite lines in here are people saying “we teach this, just get a masters”

Whoooosh

neocodesoftware
0 replies
4h38m

Sources?

References?

It sounds great - I am sure this has been studied. Anyone have any studies that have been done?

natbennett
0 replies
21h54m

Berkeley has a class that teaches TDD and other XP practices. (Or at least used to.) Pivotal used to recruit a lot of new grads from Berkeley for that reason.

IME, “QA” doesn’t really correlate with quality software, nor is there really a time vs. quality trade off. Bad software is often also delivered poorly, and high quality software can be delivered quickly.

mushufasa
0 replies
19h21m

I've found people with a computational physics background have a much better approach to QA. In that field "sense checking" calculation output is a part of the core methodology.

m0llusk
0 replies
20h33m

This is a side issue, but it seems to me like this general problem is what is making strict typing so popular. If there were tests and documentation in place and parameters were being checked then strict typing wouldn't have much benefit because types are just one aspect of making sure the whole system is working as intended. Instead in many cases development is chaos with little if any documentation and testing and strict typing is great because it is the only tie holding the bundle of sticks together.

jongjong
0 replies
17h32m

Something which should be taught is the importance of coming up with the simplest abstractions possible and sticking to the business domain. IMO, if you can't explain an abstraction to a non-technical person, then you shouldn't invent it.

jbs769
0 replies
20h2m

As someone who never studied computer science formally, I find this perspective fascinating - for me, programming is the tool to solve the problem. It often feels like folks study the tool. And the problem is important too.

heurist
0 replies
22h5m

Half the places I have worked had no desire for quality software..

hardwaregeek
0 replies
21h15m

I wrote a similar article: https://uptointerpretation.com/posts/art-school-for-programm.... I'd love to see schools that put programming first and foremost, that taught it with the respect that it deserves. There's this idea that programming is not worthy of a course of study, that programmers need to learn computer science and not programming. I disagree. It's a discipline unto itself and learning computer science to learn programming is really ineffective.

gorgoiler
0 replies
10h57m

This is a bit more abstract than the article but in my experience the best software comes from people with taste, rigour, reasoning, a good vocabulary, strong principles, and the ability to write clear and concise English (or whatever human language is used in their team.)

When you truly understand the software you are writing then, and only then, can you communicate it logically in code for the computer to execute and, much more importantly, code for a person to read. Well written and well understood code means it’s very obvious what you are doing. Later, when the code has a bug or needs to be rewritten, then it will at least be clear what you were trying to do so that it can be fixed or extended in some way.

So then the question is how do we train people to have these skills? In school, science experiments are a good way to teach logical reasoning and communication — here is what I thought would happen, here is what I did, here is what happened, and here is what it means. Math teaches you how to reason abstractly and, again, prove your point with logic. It’s a slightly different beast in that it’s harder for a math experiment to go wrong. It’s also harder to come up with and overcome novel scenarios in the lab with math, so in all it complements science well. And of course reading and writing English build your ability to express your thoughts with words and sentences. Many other high school subjects combine these in various measures — history for example is data gathering, fuzzy logical deduction, and reasoning in written language.

The bottom line is that quality software starts by working with well educated people and conversely all the most abhorrent heaps of over coupled illegible nonsense I’ve seen has come from people who, to be blunt, just ain’t that smart or well rounded, intellectually.

It’s a principle I carry over to hiring: smart and well educated wins out over pure-smarts.

gjvc
0 replies
15h15m

you can't teach software engineering before you have taught people programming. this suggests that programming should be a first requirement for a BS degree and "software engineering" should be MS and onwards.

elif
0 replies
1d

I dunno about this authors CS program but at GT we had significant coursework related to all aspects of SDLC (including unit and acceptance testing), business case value, etc.

efields
0 replies
17h38m

Basic project management triangle stuff. We’ve been told by the biggest names in tech that first you move fast and money is practically bottomless. You get fast and expensive but quality suffers. The end.

dudul
0 replies
19h46m

What uni Teaches You > blablabla

So I'm gonna need to see some data here. How many universities has this guy surveyed to make a claim like that?

My university did teach me design patterns, architecture, and all that. My professors did give me projects where requirements would change midway and the software had to adapt without being completely rewritten. They did teach me to write unit tests, to use a build tool to generate docs, reports, etc.

Was I battle tested for my first job? No, but it really wasn't as bad as what's described here.

drivers99
0 replies
23h48m

In addition, at least in my studies, there was a semester about project management approaches and scrum. All of which is great, but QA is missing completely.

Just an anecdote, but we had a "Software Development" class like this in CS (I took it in the '90s), and even though it followed a waterfall development model[0] and we used Gantt charts, QA (testing) was a big part of it and 1 of our 4 team members (or maybe 2 of the 4 working together) was primarily responsible for it. (I wrote the parser and made the diagrams/documentation for the parser.)

The description (in an old catalog[1]) is:

Software specification, design, testing, maintenance, documentation; informal proof methods; team implementation of a large project.

Turns out I didn't need to look up the old catalog because the description is exactly the same still! Except it's CS 371 now, and the longer "Learning Outcomes" for the course has some newer stuff (agile and version control) but otherwise is all the same things I learned at the time.

[0] https://en.wikipedia.org/wiki/Waterfall_model

[1] https://nmsu.contentdm.oclc.org/digital/collection/catalogs/... (C S 372 in the lower right corner)

dep_b
0 replies
1h1m

I did a bachelor's in Software Engineering, which is a bit more practical than Computer Science, and while it really had quite a bunch of interesting practical subjects like communication, agile, waterfall, etcetera, there was absolutely nothing about QA.

dclowd9901
0 replies
12h25m

I didn’t learn how to build quality _anything_ until I dove into woodworking. Order, organization, process, care, details. You can _work_ with mistakes, but you can’t undo them.

I highly recommend anyone wanting to learn how to do something with care, try to pick up a skill that requires you to do it right the first time.

cognomano
0 replies
13h9m

We need to speak about money. As developers, we need to speak about the cost of not doing QA.

Yep!!

btown
0 replies
23h47m

While not a panacea, visual/snapshot testing tools like Cypress + Percy that perform clicks and take screenshots can be tremendously helpful to ensure that building in a programmatic QA plan is a predictable part of the scope of what is coded.

And the good thing about snapshots is that they provide a visual way to communicate a preview to stakeholders of what may change, both for the currently-being-developed workflow as well as to prevent visual regressions elsewhere - so they're inherently more easily justifiable than "we're spending time on testing that you'll never see."

The article is correct that treating QA as the last phase of the project, and a disposable phase at that, is a recipe for disaster. But if you make it an ongoing part of a project as individual stories are burned down, and rebrand it as part of a modern development workflow, then it's an entirely different conversation.
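A minimal sketch of what such a spec can look like (hypothetical page and selectors; it assumes @percy/cypress is imported in the Cypress support file so cy.percySnapshot() is available):

    // cypress/e2e/checkout.cy.ts -- illustrative only
    describe('checkout page', () => {
      it('shows a confirmation after paying', () => {
        cy.visit('/checkout');                          // drive the real UI
        cy.get('[data-test=pay-button]').click();       // the click the snapshot will reflect
        cy.get('[data-test=confirmation]').should('be.visible');
        cy.percySnapshot('checkout - confirmation');    // visual diff against the Percy baseline
      });
    });

The snapshot name becomes the artifact stakeholders review, which is what makes the "preview of what may change" conversation concrete.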

bsuvc
0 replies
21h33m

Huh. I was taught this in Software Engineering (2 semesters) in my CS degree, which was a required "capstone", so to speak, and focused on how to build real world software.

Some of it can't be taught though, any more than a craftsman or artist can teach a novice how to produce what they do with the same result.

The processes and steps can be taught, but only through experience can some things be internalized.

austin-cheney
0 replies
20h15m

Like with all things quality is proven through practice and measures, which means quality can only be guessed at before a product is built.

Like in all things here is how you do it:

1. Build a first version that works and accomplishes all initial business requirements.

2. Build test automation or inspection criteria.

3. Reflect upon what you built and watch how it’s actually used in production.

4. Measure it. Measure performance. Count user steps. Count development steps. Determine real cost of ownership.

5. Take stuff out without breaking business requirements.

6. Repeat steps 2-6.

That is how you build quality whether it’s software, manufacturing, labor, management, whatever.

In my own software I devised a new model to solve for this that I call Single Source of Truth. It’s like DRY but hyper aggressive and based upon empathy instead of micro-artifacts.

askonomm
0 replies
4h52m

The problem is that while experienced software engineers probably do know how, and would even like to, management never approves enough time for doing so, and so here we are.

amelius
0 replies
20h42m

What they also don't tell you is that quality software can turn into a nightmare if you let some manager add a bunch of random requirements to it, freely, after the software has already been built.

adriangrigore
0 replies
20h19m

Why get out of the hamster wheel?

RevEng
0 replies
17h57m

I've been a software developer for 15 years. I started at a start-up company with many others who were fairly junior. We knew we had to take QA seriously, but didn't know how to do it. It took many years to come up with standards, tooling, and so on, but it was totally worth it. Everything in this article rings true from my experience.

Dudester230602
0 replies
1d

Indeed:

"I want to start programming, what language should I learn?"

"One of these two most popular languages, where you will not even know about basic errors until you run the code!"