I imagine the main reason he could do that is that he had a very clearly defined set of requirements. Most of what makes software buggy and slow today is that no one actually knows what they're building, and even when they do, it keeps changing because "agile". Give a dev a clear API and a well-defined set of criteria, and most will write code that works very well.
This 100%. The issue is that much of the time, project planning is done terribly. The stakeholders have no idea what they want, and will demand extra features/sweeping changes seemingly at random. Lots of edge cases are needed for things that probably shouldn't be done with this tool, but are being done anyway. The designers often create designs that are at odds with the features needed, how the existing system works, what said stakeholders want, etc. The various teams developing parts of the system don't communicate well and build things in silos. Poorly thought out 'agile' processes end up trying to cram everything into a certain timeframe based on some questionably decided points score. Etc.
For the most part, it's those types of issues that lead to a system not working as intended, or being buggy as hell. If you can get a good setup going where the requirements are clear and don't change and the people on the project communicate well and have full control over the process then things will work out fine. That's seemingly what happened in the source article.
I see so much dissonance in this attitude. It's right here in your comment:
And yet, you want to put together a project plan up front based on them telling you exactly what to build without assuming there will be sweeping changes? That makes no sense!
Building iteratively recognizes the reality that people aren't sure what they want or need, rather than peevishly ignoring it. The goal is to figure out what would be valuable in a short period of time. If it isn't valuable at all, that's fine; it was not a huge investment and can just be thrown away. If it is valuable but not quite right, or not as useful in its current form as it could be in a different form, and the change needed is large or fundamental, that's still fine; even sweeping changes are no big deal on top of something small. And if it's already valuable just the way it is, that's great; move on to the next iteration.
This seems to make a lot of people uneasy, I guess because it provides no formula for answering the question of where things will stand in a year, or even in six months. But it honestly recognizes that nobody knows, rather than setting the expectation that where things will stand is exactly where the project plan says they will, with everyone inevitably getting mad that reality didn't match the plan. Or worse, ending up exactly where the plan said you'd be, except the thing the plan called for was the wrong thing to build and the whole effort was a useless waste of time.
Or engage with stakeholders early in the process and work with them to actually elicit something resembling a stable set of preferences. Maybe you need to even iterate through a couple rounds of prototypes to figure out what they actually want.
I never understood why software engineers had such a high tolerance for this shit. Is it because changes are technically possible at any point?
For example, if I order flowers for a wedding, I need to order at least a few months in advance depending on the time of year, because flowers are a seasonal agricultural product. And I can't change my order once it's locked in because they can't go back in time and plant more flowers.
We should treat software engineering more like that. There's no reason we should allow product people to abuse the flexibility of software. You can still be agile, but you don't have to be at the beck and call of people whose preferences change with the direction of the wind.
I think this is right, except that the expectation should be that ideally you keep that iteration going indefinitely, instead of saying, "ok, now that we've done these couple rounds of prototypes, we definitely know everything about what your preferences are!".
That's more likely to be true after a couple rounds of prototyping, which is good, but it's still unlikely to be true.
Yes, it's because it would be a lot better if you could immediately switch the order. Wedding planning, and many other things, would be much better if everything didn't require locking in decisions months in advance.
Why advocate for a poorer experience when it's possible to achieve a better one? Sure, if you want to charge less for a process that asks for all requirements up front with no changes allowed later, because that's less valuable, and a higher rate for an iterative process that responds swiftly to changes in direction, because that's more valuable, then that would make sense.
But it's clearly possible to make changes to software without a bunch of lead time - you don't have to wait for seeds to grow or send a manuscript to the printer or blueprints to a manufacturer - so why would we artificially mimic those worse experiences?
You claim that "people not knowing what they need or want" is some objective reality, but I don't think it is. It seems to me that it is a symptom of top leadership's unwillingness to commit to a clear goal.
I also think you can do iterative development while having a clear long-term objective (requirements) in mind. I would even argue it helps a lot.
I suggest looking up "What made Apollo a success" (https://ntrs.nasa.gov/citations/19720005243), they explain it quite well.
Well, actually it was the comment I replied to that said "stakeholders don't know what they want".
But I do agree with that commenter that it is usually the case. People often seem to ascribe that to a failing of a person or group of people - "top leadership" in your comment - but I think it's essentially the same "failing" as predicting the future incorrectly. Of course lots of effort is put into forecasting, and effort put into requirements gathering is similarly valuable. But in both cases, investing in flexibility is a useful hedge against the likelihood that your original prediction was wrong.
Personally, I don't think this is "iterative" in the same sense of the word. I recognize that it is still iterative and that there probably isn't a better word to use. But just chopping up a long list of static requirements into smaller chunks and doing them in some order is what project planners have done since time immemorial, and is not the same conceptual idea as setting a vision and discovering detailed requirements toward that vision a small chunk at a time. It's that second approach that is the sense of "iterative" I was using.
I certainly don't think it's the only way to do things, and I don't think it's a great fit for every project, but I wish more people were actually bought into the leap of faith required to let go of detailed up-front top-down planning and work in small iterations.
The issue stems more from how stakeholders and management don't know what they want, but they still expect tight and concrete deadlines. Either you have iterative development with space to explore ideas and solutions which will take longer, but might result in a cleaner, more robust design, or you focus on deliverables and deadlines with clearly defined specifications. You can't have it both ways.
Oh I agree. My criticism here is for the widespread belief that people know what they want, and that certainly applies to management and other stakeholders. It's why I like working with management that has come up through the ranks of IC software development, because they have naturally internalized this lesson that they probably don't actually know exactly what they want up front, and are more likely to support the more humble path of small iterations. But of course you work on the project you have with the stakeholders you have!
Well I guess maybe my wording wasn't ideal. Sometimes it's 'what they want changes significantly through the development process due to outside factors'. Having time to iterate is good. Being able to experiment and figure out what's valuable is good. Being given a strict deadline, then halfway through being expected to chuck out nearly everything because the client brought the demo to the rest of the board and the latter didn't like it/chewed them out for it is perhaps not so great.
Yes, but I'd argue that that bad thing is downstream of the implicit expectation that people do know, and especially that they should know, what they want up front. In this example, the client thought they knew what the board wanted, but they didn't; and you thought you knew what the client wanted, but you couldn't possibly, because what they wanted was a thing the board would like, and they clearly didn't know what that was.
It's a sticky wicket because of course time in front of the board is precious so you don't want to constantly run things by them and make them micro-manage, but I think the board would have probably chewed them out less if they had brought a tiny MVP (or even just a proof of concept) that hadn't required significant investment, and asked "here's what we have with almost no investment, here's our plan for the next small step, what do you think of this direction?".
If you think "agile" is the reason requirements change...I don't know what to tell you. Requirements will always change, full stop. It's like a force of nature, there is no universe in which everyone just "knows" what to build up front and has a fully spec'd out API that you can go off into a cave and implement. Real life never works that way, and software engineering is not the right profession for you if you need that.
But that doesn't mean you have to stop what you're doing and go chase those requirements.
Why did the requirements change? Is it mandatory that the change happen right now? What research was done to support the requirements change? Was the original requirement bad in the first place? Was insufficient alignment and understanding achieved at the start of the project? Do you actually talk to the stakeholders at all, or does your PM just forward you their emails and expect you to do whatever is in there?
There's a lot of room between "design everything up-front and never change anything" and "allow requirements to change arbitrarily".
The arbiter of the changing requirements is the one paying for the work.
Sort of. I've only done a small amount of consulting, but when I did, change requests from the business side of things were the start of a conversation, not direct unchangeable instructions. I would never knowingly choose to work in an environment where the latter was the norm.
Agile is the reason requirements are allowed to change. Non-agile is not changing the requirements, and continuing to build the wrong thing.
But it's not that binary. "Agile" is the reason requirements are allowed to change constantly. At worst, it can be like trying to steer down the freeway by slamming the steering wheel from one extreme to the other. And pure waterfall is also blatantly unworkable.
The real question is, at what rate do you allow changes to be made to the specification/requirements? How much dampening do you apply? And maybe under that, there's another question: How fast can you respond to the real world, and still maintain a coherent direction? The faster the better, but don't try to respond faster than you can maintain coherence.
Changing requirements is like fast food: people don't eat it because they need the calories; they eat it because it's hard not to.
And agile doesn't make you respond to change quicker; it makes it slower, since work is done in two-week sprints. Normally a team could adapt the moment new information comes up; strict adherence to scrum-style agile would push that to the next sprint.
Agile does make scheduling new changes effortless, though, encouraging new changes to be made all the time. But it doesn't make the team react quickly to those changes, nor does it remove the total cost of a change. I don't think that's a good thing to encourage: in such a system, no wonder people get used to changing things all the time, so nobody really knows what things are supposed to be.
Agile means you can respond to a change in "what's most important" more quickly. You can't respond to everything more quickly, though, because you can't do everything at once.
It depends on the industry. What you say about not knowing the requirements (or at least 95% of them at the start) isn't true in many industries/situations. They know exactly what they want, and I have worked at those places. I have also worked in agile environments, and they are usually organized chaos right up until release. I just code, and I don't care if I have to erase things; I get paid as much to erase as I do to create. I figure it's all part of the process. I do my best for 8-9 hours, go home, and let it stay at work; it's the only way I've stayed sane doing the thing I love for many years. I'll do a "push" for a couple here and there, but I take it as a sign of trouble if it happens often at a place I work. So yeah, real life does work that way sometimes; some projects do have 95% of the spec at the beginning, just probably not in Silicon Valley.
That, plus a thousand special cases that are quite often absolutely meaningless to implement in code, but still have to be done.
What do I mean by that? I work for big corporations mostly, and quite often there is an average case that makes up about 90-99% of all incoming work.
Then there is a myriad of special cases some of which happen every third year on a blood moon, if an eclipse is happening at the same time and the witches chant in the woods...
Instead of managing these unicorn-cases by hand, they have to be implemented in code and lead to bugs and a ton more code to review and maintain.
Speaking of maintenance... oh, don't get me started on that one, it's a sore spot!
If you work at a big enough company, that 1% edge case could affect millions of people, so it is definitely worth the brain cycles to consider.
Doesn't that also mean it scales up the negative consequences just as much? More code = more maintenance, risk, bugs, resource usage etc.
Sometimes these things are worth it because the manual process is error prone and perhaps frustrating/stressful (these things can't be measured as easily). But sometimes they are not examined critically at all.
How often? How many? How important? Are there simpler solutions?
Kind of depends on the receiving end of the questions in my experience. Some people are happy if you push back and keep things simple, others have problems with that.
I wonder how much scale affects these basic principles. I would assume that with scale you already have a lot of problems that push even harder towards not implementing every single thing.
Here's the catch: as you scale up, more and more users could get caught in the edge case. Remember why we build systems: for end users to use. It is our job as engineers/technologists to solve these cases for them.
This is where trade-offs in engineering and product are important.
Is the edge case safety-critical, or does it pose a safety risk? If so, it should definitely be considered and handled.
Does the edge case block a critical user flow (e.g. the purchase/checkout step)? If so, it should probably be considered.
Does the edge case result in some non-critical piece of UX having the wrong padding in some lesser trodden user flow? Possibly acceptable.
TBF, the current dev team might be aware of the edge case and how to deal with it right now; the poor schmuck three years down the line, when the blood-moon eclipse witch chant happens, probably won't.
In particular, these edge cases tend to be more complex, require more knowledge, and leave less margin for error than the standard errors that happen 90% of the time. Leaving them as a gift for the future maintainers is a special kind of dick move.
By whom exactly? I mean, I agree that it's shitty (I've been such a maintainer already), but who is responsible (the whole machinery or a specific role?) and how can we change it?
I personally try to add meaningful comments to code that may seem "strange" - code that, had someone else written it, would make me ask "Why so complicated?" That way I hope to improve the situation a bit for future maintainers.
I was thinking at the team/org level. Edge cases are left out after some discussion on how much impact they have and how they'll have to be handled, so there is a conscious choice not to deal with cases that are rare but will happen and require intervention (cases that can just be ignored or worked around outside the system are a different thing, and unknown/unplanned edge cases are just bugs).
If there is specific finger pointing needed IMHO it would land either on the manager or the product owner for miscalculating the impact of that decision.
PS: To your point on "weird" code, imagine code that just asserts some conditions, with an "if these conditions are true, call the XXXX team for help" message on it. That's how I'd represent an unhandled edge case.
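A minimal sketch of what that could look like (the `Invoice` shape and the triggering condition are hypothetical, and "XXXX team" is kept as the placeholder from above):

```typescript
interface Invoice {
  currency: string;
  isRetroactive: boolean;
}

function settleInvoice(invoice: Invoice): void {
  // Known-but-deliberately-unhandled edge case: fail fast and loudly,
  // with an escalation path, instead of guessing at the right behavior.
  if (invoice.isRetroactive && invoice.currency !== "EUR") {
    throw new Error(
      "Unhandled edge case: retroactive non-EUR invoice. " +
        "If you hit this, call the XXXX team for help before retrying."
    );
  }
  // ... the normal path that covers the vast majority of incoming work ...
}
```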
You don't even need the moon and witches. Some datetimes get serialized to epoch time around the third week of October, and here it comes.
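A small sketch of that trap, using Europe/Berlin, where DST ends on the last Sunday of October (2025-10-26 that year): one local wall-clock reading corresponds to two different epoch values, so a naive local-time round trip silently loses an hour.

```typescript
const fmt = (ms: number) =>
  new Date(ms).toLocaleString("de-DE", { timeZone: "Europe/Berlin" });

// DST ends at 03:00 CEST -> 02:00 CET, so 02:30 local time happens twice.
const beforeSwitch = Date.UTC(2025, 9, 26, 0, 30); // 02:30 CEST (UTC+2)
const afterSwitch = Date.UTC(2025, 9, 26, 1, 30);  // 02:30 CET  (UTC+1)

console.log(fmt(beforeSwitch));                    // 26.10.2025, 02:30:00
console.log(fmt(afterSwitch));                     // 26.10.2025, 02:30:00 again
console.log((afterSwitch - beforeSwitch) / 1000);  // 3600: one wall clock, two epochs
```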
The reality that most people don't want to hear is that he could do this because he himself was the domain expert.
Gathering requirements isn't just making lists and tickets in swimlanes. It's actually learning the domain you are building for. Sure, you can still build for something you don't really understand, but it'll be garbage. In fact, writing this garbage is OK if you accept that it's just a drafting process for learning the domain challenges and potential solutions. There are no shortcuts to quality.
My advice for stakeholders is to make sure you have access to the person(s) building your application and see if they understand your domain (you can do this without constantly pestering them!). Beware bullshitters using buzzwords.
My advice for designers/programmers is to make sure your stakeholders and domain experts are keen and engaged; otherwise getting requirements and understanding out of them is like pulling teeth. Sometimes a stakeholder doesn't even want to solve the problem. They may have wanted to go in a different direction entirely, or maybe there are politics involved that have them miffed. Nothing sinks a project like lazy or uninterested stakeholders.
When I entered the engineering workforce in the late 00s I felt like I was on a team of experts, myself excluded since I was fresh out of school. We were building some complex systems but we had a bunch of people that really knew what they were doing.
Fast forward to today: I'm in a different part of the industry, and we're building things that aren't quite as complex, but they take longer to finish and the experience is very frustrating. I feel like I'm working in teams where many people don't really know what they're doing, but we keep getting pushed for higher story-point "velocity" and whatnot. Quality is suffering, tech debt is piling up. This whole thing feels broken.
Being close to a domain expert, or being one yourself, isn't enough. Most domain experts don't know the domain well enough to automate solutions for it; you need a much better understanding to program something than to do it yourself. So most domain experts will come up with very bad rules for the programmers to implement, and those rules are buggy even when implemented correctly, since the domain experts didn't think of everything.
I can vouch for this. At the other end of the spectrum, programmers getting thick with their domain experts is like having clairvoyance. I can sometimes spot problems before my PM does!
I'm building plain old boring desktop software. The bugs all (ok not all, but most) come from edge cases with users doing "dumb" (totally unexpected) stuff.
If it weren't for users my software would be perfect ;)
It is even more fun for folks writing window managers, they have literal edge cases to deal with xD
You have to expect that users will click everything, at any time, while holding any key, if only by accident. If your UI cannot handle that, your states were not clearly defined.
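One way to read "clearly defined states" (a sketch; the upload scenario and names are purely illustrative): enumerate the UI states explicitly, so a click in the wrong state is a defined no-op rather than whatever happens to happen.

```typescript
// Every state the widget can be in, spelled out. A stray click can only
// move between these; there is no undefined in-between to corrupt.
type UploadState =
  | { kind: "idle" }
  | { kind: "uploading"; progress: number }
  | { kind: "done"; url: string };

let state: UploadState = { kind: "idle" };

function onUploadClick(): void {
  // Hammering the button mid-upload (or with any key held) does nothing:
  // the transition is only defined from "idle".
  if (state.kind !== "idle") return;
  state = { kind: "uploading", progress: 0 };
  // ... start the upload; on completion set state = { kind: "done", url }
}
```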
Clear requirements and a singular task - it doesn't sound like he also had to balance a load of other projects, meetings, rituals, etc. He was able to do deep work, as the hype books call it.
I'm probably romanticising it, but it was a time where software engineers and their skills, time and attention span were still respected.
Note that Multics had a particularly good work environment: https://multicians.org/devproc.html
In particular, management was tech-savvy and everyone liked and respected each other. Most were also talented.
We learned to code this way in school, exams gave us a printed copy of the API docs and as much paper as we wanted. Any compiler errors or warnings were point deductions, so people eventually got good at writing code that compiled on the first try.
But in actual professional coding practice over 10+ years, I can count on one hand the occasions where I had clean enough reqs and enough uninterrupted time to design software like that, and still have enough fingers left over to hold a pencil.
If you've got that - why do you need the developer? Genuine question. If the technology requirements are that well defined, why on earth do I need to pay engineer-level salaries for such basic work? As you said, and I agree:
At that point of having clear criteria, there are only 2 tasks left:
As a business owner, I care about 1 to make bank, and as an astute business owner, I care about 2 to keep making bank. But but but performance! durability! ... <many more of the -ilities of software that an experienced engineer will claim to just take in their stride...>
As a customer of this software development process, these are just the next cycle of requirements to feed into the machine - hey, take this thing you did, make it faster while obeying rule #2 above.
It would not surprise me if ChatGPT were able to cover 80%+ of a problem that well defined.
I totally get the attraction to focus on the fun bit, but it's not how the business world is designed to work in most cases.
For what it's worth, I think you nailed the actual value opportunity for a strong dev:
If you can nail that, you're always gonna be valuable. We talk about programming and coding but actually, this is more of the role. The coding part of the puzzle is easy in comparison.
As for:
If your understanding of why requirements are in constant flux amounts to pinning it on "agile", you're not even on the field yet, never mind winning the game.
You do realize that good specifications and requirements do not necessarily mean trivial and easy implementation?
Also, indeed, not all dev work is equally costly / hard / basic. It’s a crazy thought, but maybe that’s partially the reason why different devs can have different pay grades. :O
Yup, the basic HTML spec, for example, is pretty well defined by now. That really does not mean it is easy to implement as a working browser.
I wish we had the ability to push back on nonsense like this, even when just making a web page: "ok, make this whole div clickable, but inside this div put another button, and when you click the button something else happens."
Or at least spend the time to tell us why we are doing this nonsense and maybe I can come up with a less asinine solution.
If you put a stopPropagation in your click handlers by default, that problem goes away. :)
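A sketch of that, assuming markup along the lines of `<div class="card">...<button>...</button></div>`:

```typescript
const card = document.querySelector<HTMLDivElement>(".card")!;
const button = card.querySelector<HTMLButtonElement>("button")!;

card.addEventListener("click", () => {
  console.log("whole div clicked: open the detail view");
});

button.addEventListener("click", (event) => {
  event.stopPropagation(); // keep the click from bubbling up to the card
  console.log("inner button clicked: something else happens");
});
```

Whether it should be the *default* is debatable, though: stopping propagation everywhere also swallows clicks that delegated handlers or analytics listeners higher up the tree might legitimately care about.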
Agile is precisely the response to the fact that requirements keep changing, not the cause of it.
Sure, but the fact that it's taken into account sends the message that change is free now, and that you can change anything anytime without consequences.
Depending on the definition of "dev", they can be the ones who define the requirements and API.
People always seem to be scoffing when they say stuff like this, like it's a bad thing. But I don't get that attitude at all. I don't want to work on things where the whole shape of the project is known up front. To me, that feels like doing art with a stencil. I love the process of seeking the most valuable next step to take while we feel our way through a dark room with an ambiguous destination.
Now, the (common) version of this that I don't like is when there is no way to know whether a sequence of steps ended up in a better place than where it began. So feedback is very important to me, but there are lots of ways to get it: quantitative measures like growth in usage or revenue, and qualitative measures like surveys, or, for things like internal platforms, seeing the roadmaps of other teams be unblocked or accelerated by your work.
But I find significantly less joy in being told "we need this exact specific thing, please go build it and report back", and I have rarely seen it be the case that what they needed was actually that exact specific thing.
Nope, he had a well documented set of APIs to program against. In one instance he didn't, they had a crash.
He did "invent" internal and external APIs for his service, just like developers do today: that's the part requiring developers to be like "engineers".
I fully agree. Yet I'm in the same boat.
Started as a Windows line-of-business software developer almost 20 years ago. At first we got clear specifications, with UI mockups and a description of what each and every button should do. When there were questions, these specs were updated, and we implemented and tested and fixed until our boss was happy.
Over the years we got more and more customers, yet less and less time for tests and fixes. So we switched to "agile" and dropped the specs; instead we wrote quick notes about what must (roughly) be done. At first everyone was happy. But now we have a huge number of features that no one dev knows. You have to ask and search around until you find all the loose specs.
Now I'm managing such a team of developers and have the same issue. I don't have the time to write a clean specification, let alone discuss it with the actual customer, who anyway doesn't really understand all the implications. So the devs start coding by adding more if and else blocks to an already bloated code base.
I miss the days when we would start with a class diagram, or just a quick drawing of how the components would interact and be testable.
Someone still needs to do this work, and sometimes that's the developer. Isn't that what he was doing in this anecdote?
Also because many devs do not understand multiple execution layers: from React devs who have never heard of JavaScript's native loops, to Spring devs who have no idea what SQL looks like, to C devs who have never used a memory heap tool, to Node devs who have never seen the performance impact of IO, to... You need an all-seeing eye to make things performant. Those are scarce. Probably one in 20 devs, max.
If you don’t understand a problem well enough to write it down, you don’t understand it well enough to solve it.
If you don’t understand a solution well enough to write it down, you don’t understand it well enough to implement it. (Much less get someone else to implement it.)
An overwhelming number of engineers I have worked with seem to think any kind of written design document is busy work, and not actually a part of the process of making something worth using. My experience shows it is not that at all: design docs are a tool to externalize your own thinking, and reflect on its quality without the overhead of keeping it in your mind. That’s their first and foremost purpose. To explain your rationale to others is a secondary objective.
I’m not talking about lengthy requirements documents or specifications, either. I mean a two or three page white paper outlining a problem or a proposed solution in plain English. This is something you can bang out in a half hour or hour if you’re diligent.
Many of the folks I work with never even seem to bother.
If you think there were good old days where all software was well specified, you are wearing very rose-tinted glasses.
We are inventing the field as we go, and have been for decades.
Also, it's one of those fields where the expert in the craft must build a system for something he is not an expert in.
Again and again.
And most projects are custom.
Agile has nothing to do with it.
It's the nature of IT right now.