It Can Be Done (2003)

onion2k
51 replies
1d10h

I imagine the main reason he could do that is because he had a very clearly defined set of requirements. Most of what makes software buggy and slow today is that no one actually knows what they're building, and even if they do it keeps changing because "agile". Give a dev a clear API and a well defined set of criteria, and most will write code that works very well.

CM30
9 replies
1d8h

This 100%. The issue is that much of the time, project planning is done terribly. The stakeholders have no idea what they want, and will demand extra features/sweeping changes seemingly at random. Lots of edge cases are needed for things that probably shouldn't be done with this tool, but are being done anyway. The designers often create designs that are at odds with the features needed, how the existing system works, what said stakeholders want, etc. The various teams developing parts of the system don't communicate well and build things in silos. Poorly thought out 'agile' processes end up trying to cram everything into a certain timeframe based on some questionably decided points score. Etc.

For the most part, it's those types of issues that lead to a system not working as intended, or being buggy as hell. If you can get a good setup going where the requirements are clear and don't change and the people on the project communicate well and have full control over the process then things will work out fine. That's seemingly what happened in the source article.

sanderjd
8 replies
1d5h

I see so much dissonance in this attitude. It's right here in your comment:

The stakeholders have no idea what they want

And yet, you want to put together a project plan up front based on them telling you exactly what to build without assuming there will be sweeping changes? That makes no sense!

Building iteratively recognizes this reality that people aren't sure what they want or need, rather than peevishly ignoring it. The goal is to figure out what would be valuable in a short period of time; if it isn't valuable at all, that's fine, it was not a huge investment and can just be thrown away; if it is valuable but not quite right or not as useful in its current form as it could be in a different form, but the change needed is large or fundamental, that's still fine, even sweeping changes are no big deal on top of something small; and if it's already valuable just the way it is, that's great, move on to the next iteration.

This seems to make a lot of people uneasy, I guess because it provides no formula for answering the question of where things will stand in a year, or even in six months. But it honestly recognizes that nobody knows, rather than setting the expectation that where things will stand is exactly where the project plan says they will, with everyone inevitably getting mad that reality didn't match that plan, or worse, being exactly where the plan said you'd be, except the thing the plan called for was the wrong thing to build and the whole thing was a useless waste of time.

nerdponx
1 replies
1d2h

Or engage with stakeholders early in the process and work with them to actually elicit something resembling a stable set of preferences. Maybe you need to even iterate through a couple rounds of prototypes to figure out what they actually want.

I never understood why software engineers had such a high tolerance for this shit. Is it because changes are technically possible at any point?

For example, if I order flowers for a wedding, I need to order at least a few months in advance depending on the time of year, because flowers are a seasonal agricultural product. And I can't change my order once it's locked in because they can't go back in time and plant more flowers.

We should treat software engineering more like that. There's no reason we should allow product people to abuse the flexibility of software. You can still be agile, but you don't have to be at the beck and call of people whose preferences change with the direction of the wind.

sanderjd
0 replies
5h21m

Or engage with stakeholders early in the process and work with them to actually elicit something resembling a stable set of preferences. Maybe you need to even iterate through a couple rounds of prototypes to figure out what they actually want.

What I think is that this is right, except the expectation should just be that ideally you keep that iteration going indefinitely, instead of saying, "ok, now that we've done these couple rounds of prototypes, we now definitely know everything about what your preferences are!".

That's more likely to be true after a couple rounds of prototype iteration, which is good, but it's still unlikely to be true.

For example, if I order flowers for a wedding, I need to order at least a few months in advance depending on the time of year, because flowers are a seasonal agricultural product. And I can't change my order once it's locked in because they can't go back in time and plant more flowers.

Yes, it's because it would be a lot better if you could immediately switch the order. Wedding planning, and many other things, would be much better if everything didn't require locking in decisions months in advance.

Why advocate for a poorer experience when it's possible to achieve a better one? Sure, if you want to charge less for a process that asks for all requirements up front with no changes allowed later, because that's less valuable, and a higher rate for an iterative process that responds swiftly to changes in direction, because that's more valuable, then that would make sense.

But it's clearly possible to make changes to software without a bunch of lead time - you don't have to wait for seeds to grow or send a manuscript to the printer or blueprints to a manufacturer - so why would we artificially mimic those worse experiences?

js8
1 replies
1d2h

You claim that "people not knowing what they need or want" is some objective reality, but I don't think it is. It seems to me that it is a symptom of top leadership's unwillingness to commit to a clear goal.

I also think you can do iterative development while keeping a clear long-term objective (requirements) in mind. I would even argue it helps a lot.

I suggest looking up "What made Apollo a success" (https://ntrs.nasa.gov/citations/19720005243), they explain it quite well.

sanderjd
0 replies
5h31m

Well, actually it was the comment I replied to that said "stakeholders don't know what they want".

I do agree with that commenter that it is usually the case. But people often seem to ascribe that to a failing of a person or group of people - "top leadership" in your comment - whereas I think it's essentially the same "failing" as predicting the future incorrectly. Of course lots of effort is put into forecasting as well, and effort put into requirements gathering is similarly valuable. But in both cases, investing in flexibility is a useful hedge against the likelihood that your original prediction was wrong.

I also think you can do iterative development while keeping a clear long-term objective (requirements) in mind. I would even argue it helps a lot.

Personally, I don't think this is "iterative" in the same sense of the word. I recognize that it is still iterative and that there probably isn't a better word to use. But just chopping up a long list of static requirements into smaller chunks and doing them in some order is what project planners have done since time immemorial, and is not the same conceptual idea as setting a vision and discovering detailed requirements toward that vision a small chunk at a time. It's that second approach that is the sense of "iterative" I was using.

I certainly don't think it's the only way to do things, and I don't think it's a great fit for every project, but I wish more people were actually bought into the leap of faith required to let go of detailed up-front top-down planning and work in small iterations.

Sakos
1 replies
1d4h

The issue stems more from how stakeholders and management don't know what they want, but they still expect tight and concrete deadlines. Either you have iterative development with space to explore ideas and solutions which will take longer, but might result in a cleaner, more robust design, or you focus on deliverables and deadlines with clearly defined specifications. You can't have it both ways.

sanderjd
0 replies
5h24m

Oh I agree. My criticism here is for the widespread belief that people know what they want, and that certainly applies to management and other stakeholders. It's why I like working with management that has come up through the ranks of IC software development, because they have naturally internalized this lesson that they probably don't actually know exactly what they want up front, and are more likely to support the more humble path of small iterations. But of course you work on the project you have with the stakeholders you have!

CM30
1 replies
23h52m

Well I guess maybe my wording wasn't ideal. Sometimes it's 'what they want changes significantly through the development process due to outside factors'. Having time to iterate is good. Being able to experiment and figure out what's valuable is good. Being given a strict deadline, then halfway through being expected to chuck out nearly everything because the client brought the demo to the rest of the board and the latter didn't like it/chewed them out for it is perhaps not so great.

sanderjd
0 replies
5h27m

Yes, but I'd argue that that bad thing is downstream of the implicit expectation that people do, and especially that they should, know what they want up front. In this example, the client thought they knew what the board wanted, but they didn't, and you thought you knew what the client wanted, but you couldn't possibly, because what they wanted was a thing the board would like, and they clearly didn't know what that was.

It's a sticky wicket because of course time in front of the board is precious so you don't want to constantly run things by them and make them micro-manage, but I think the board would have probably chewed them out less if they had brought a tiny MVP (or even just a proof of concept) that hadn't required significant investment, and asked "here's what we have with almost no investment, here's our plan for the next small step, what do you think of this direction?".

phailhaus
7 replies
1d4h

even if they do it keeps changing because "agile".

If you think "agile" is the reason requirements change...I don't know what to tell you. Requirements will always change, full stop. It's like a force of nature, there is no universe in which everyone just "knows" what to build up front and has a fully spec'd out API that you can go off into a cave and implement. Real life never works that way, and software engineering is not the right profession for you if you need that.

nerdponx
2 replies
23h12m

Requirements will always change, full stop

But that doesn't mean you have to stop what you're doing and go chase those requirements.

Why did the requirements change? Is it mandatory that the change happen right now? What research was done to support the requirements change? Was the original requirement bad in the first place? Was insufficient alignment and understanding achieved at the start of the project? Do you actually talk to the stakeholders at all, or does your PM just forward you their emails and expect you to do whatever is in there?

There's a lot of room between "design everything up-front and never change anything" and "allow requirements to change arbitrarily".

mrkeen
1 replies
22h30m

There's a lot of room between "design everything up-front and never change anything" and "allow requirements to change arbitrarily".

The arbiter of the changing requirements is the one paying for the work.

nerdponx
0 replies
21h59m

Sort of. I've only done a small amount of consulting, but when I did, change requests from the business side of things were the start of a conversation, not direct unchangeable instructions. I would never knowingly choose to work in an environment where the latter was the norm.

AnimalMuppet
2 replies
1d3h

Agile is the reason requirements are allowed to change. Non-agile is not changing the requirements, and continuing to build the wrong thing.

But it's not that binary. "Agile" is the reason requirements are allowed to change constantly. At worst, it can be like trying to steer down the freeway by slamming the steering wheel from one extreme to the other. And pure waterfall is also blatantly unworkable.

The real question is, at what rate do you allow changes to be made to the specification/requirements? How much dampening do you apply? And maybe under that, there's another question: How fast can you respond to the real world, and still maintain a coherent direction? The faster the better, but don't try to respond faster than you can maintain coherence.

Jensson
1 replies
22h46m

Changing requirements is like fast food: people don't eat it because they need the calories; they eat it because it is hard not to.

And agile doesn't make you respond to change quicker; it makes it slower, since it is done in two-week sprints. Normally a team could adapt the moment new information comes up; strict adherence to scrum agile would push that to the next sprint.

Agile does make scheduling new changes effortless though, encouraging new changes to be made all the time, but it doesn't make the team react quickly to those changes, nor does it remove the total cost of a change. I don't think that is a good thing to encourage; in such a system, no wonder people get used to changing things all the time, so nobody really knows what things are supposed to be.

AnimalMuppet
0 replies
21h4m

Agile means you can respond to a change in "what's most important" more quickly. You can't respond to everything more quickly, though, because you can't do everything at once.

EasyMark
0 replies
1d2h

It depends on the industry. What you say about not knowing requirements (or at least 95% of them at the start) isn't true in many industries/situations. They know exactly what they want. And I have worked at those places. I have also worked in agile environments, and they are usually organized chaos right up until release. I just code, and I don't care if I have to erase things; I get paid as much to erase as I do to create. I figure it's all part of the process. I do my best for 8-9 hours, go home, and let it stay at work; it's the only way I've stayed sane doing the thing I love for many years. I'll do a "push" for a couple here and there, but I take it as a sign of trouble if it happens often wherever I'm working. So yeah, real life does work that way sometimes; some projects do have 95% of the spec at the beginning, just probably not in Silicon Valley.

OtomotO
7 replies
1d10h

That plus a thousand special cases, that are quite often absolutely meaningless to implement in code, but still have to be done.

What do I mean by that: I work for big corporations mostly and quite often there is the average case that is about 90%-99% of all incoming work.

Then there is a myriad of special cases some of which happen every third year on a blood moon, if an eclipse is happening at the same time and the witches chant in the woods...

Instead of managing these unicorn-cases by hand, they have to be implemented in code and lead to bugs and a ton more code to review and maintain.

Speaking of maintenance... oh, don't get me started on that one, it's a sore spot!

nudgeee
2 replies
1d6h

If you work at a big enough company, that 1% edge case could affect millions of people, so it is definitely worth the brain cycles to consider.

dgb23
1 replies
1d6h

Doesn't that also mean it scales up the negative consequences just as much? More code = more maintenance, risk, bugs, resource usage etc.

Sometimes these things are worth it because the manual process is error prone and perhaps frustrating/stressful (these things can't be measured as easily). But sometimes they are not examined critically at all.

How often? How many? How important? Are there simpler solutions?

Kind of depends on the receiving end of the questions in my experience. Some people are happy if you push back and keep things simple, others have problems with that.

I wonder how much scale affects these basic principles. I would assume that with scale you already have a lot of problems that push even harder towards not implementing every single thing.

nudgeee
0 replies
1d5h

Here's the catch: as you scale up, more and more users could get caught in the edge case. Remember why we build systems — for end users to use. It is our job as engineers/technologists to solve these cases.

This is where trade-offs in engineering and product are important.

Is the edge case safety critical or poses a safety risk? If so, it definitely should be considered and handled.

Does the edge case block a critical user flow (eg. purchase/checkout step)? If so, it should probably be considered.

Does the edge case result in some non-critical piece of UX having the wrong padding in some lesser trodden user flow? Possibly acceptable.

makeitdouble
2 replies
1d8h

TBF, the current dev team might be aware of the edge case and how to deal with it right now; the poor schmuck three years down the line, when the blood-moon eclipse witch chant happens, probably won't.

In particular, these edge cases tend to be more complex, require more knowledge, and have less margin for error than the standard errors happening 90% of the time. Leaving them as a gift for the future maintainers is a special kind of dick move.

OtomotO
1 replies
1d8h

Leaving them as a gift for the future maintainers is a special kind of dick move.

by whom exactly? I mean, I agree that it's shitty (I was such a maintainer already), but who is responsible (the whole machinery or a specific role?) and how can we change it?

I personally try to add meaningful comments to code that may seem "strange" - code where, if I read it and it had been written by someone else, I would ask myself "Why so complicated?" That way I hope to improve the situation a bit for future maintainers.

makeitdouble
0 replies
1d5h

I was thinking at the team/org level. Edge cases are left out after some discussion of how much impact they have and how they'll have to be handled, so there is a conscious choice not to deal with cases that are rare but will happen and will require intervention (cases that can just be ignored or worked around outside the system are a different thing, and unknown/unplanned edge cases are just bugs).

If there is specific finger pointing needed IMHO it would land either on the manager or the product owner for miscalculating the impact of that decision.

PS: To your point on "weird" code, imagine code that just asserts some conditions, with an "if these conditions are true, call the XXXX team for help" message on it. That's how I'm representing an unhandled edge case.
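
For illustration, here's a minimal sketch of that idea (my example, not from the thread; the function names and the XXXX placeholder are hypothetical), in TypeScript:

    // An explicitly unhandled edge case that fails loudly with escalation
    // instructions instead of being silently wrong.
    function assertHandled(condition: boolean, message: string): asserts condition {
      if (!condition) {
        throw new Error(`Unhandled edge case: ${message}. Call the XXXX team for help.`);
      }
    }

    function settleInvoice(amountCents: number, currency: string): void {
      // The common case (the 90%-99% of incoming work mentioned upthread)
      // is handled in code; the rare case is deliberately left manual.
      assertHandled(currency === "USD", `multi-currency settlement (${currency}) is handled manually`);
      console.log(`settled ${amountCents} cents`);
    }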

Muromec
0 replies
1d7h

Don't need the moon and witches even. Some datetimes get serialized to epoch time around the third week of October, and here it comes.
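
As a concrete illustration of that October pitfall, a minimal sketch (my example, assuming a timezone that sets clocks back on the last Sunday of October, e.g. Europe/Berlin in 2024):

    // Naive "add one day" via epoch arithmetic.
    const before = new Date(2024, 9, 26, 12, 0, 0); // Oct 26, 12:00 local time
    const DAY_MS = 24 * 60 * 60 * 1000;
    const after = new Date(before.getTime() + DAY_MS);
    // Oct 27 has 25 local hours here, so this prints 11:00, not 12:00; any
    // code equating "one day" with 86,400,000 ms drifts at this boundary.
    console.log(after.toString());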

rapind
3 replies
1d4h

The reality that most people don't want to hear is that he could do this because he himself was the domain expert.

Gathering requirements isn't just making lists and tickets in swimlanes. It's actually learning the domain you are building for. Sure, you can still build something you don't really understand, but it'll be garbage. In fact, writing this garbage is OK if you accept that it's just a drafting process for learning the domain challenges and potential solutions. There are no shortcuts to quality.

My advice for stakeholders is to make sure you have access to the person(s) building your application and see if they understand your domain (you can do this without constantly pestering them!). Beware bullshitters using buzzwords.

My advice for designers / programmers is to make sure your stakeholders & domain experts are keen and engaged, otherwise getting requirements and understanding is like pulling teeth. Sometimes a stakeholder doesn't even want to solve the problem. They may have wanted to go in a different direction entirely, or maybe there are politics involved that have them miffed. Nothing sinks a project like lazy or uninterested stakeholders.

chrsw
0 replies
23h6m

When I entered the engineering workforce in the late 00s I felt like I was on a team of experts, myself excluded since I was fresh out of school. We were building some complex systems but we had a bunch of people that really knew what they were doing.

Fast forward to today, I'm in a different part of the industry, we're building things that aren't quite as complex but they take longer to finish and the experience is very frustrating. I feel like I'm working in teams where many people don't really know what they're doing, but we keep getting pushed for higher story point "velocity" and whatnot. Quality is suffering, tech debt is piling up. This whole thing feels broken.

Jensson
0 replies
22h48m

Being close to a domain expert, or being one yourself, isn't enough. Most domain experts don't know the domain well enough to automate solutions for it; you need a much better understanding to program something than to do it yourself. So most domain experts will come up with very bad rules for the programmers to implement, and those rules are buggy even when implemented correctly, since the domain experts didn't think of everything.

JamesLeonis
0 replies
1d2h

My advice for designers / programmers is to make sure your stakeholders & domain experts are keen and engaged, otherwise getting requirements and understanding is like pulling teeth.

I can vouch for this. At the other end of the spectrum, programmers getting thick with their domain experts is like having clairvoyance. I can sometimes spot problems before my PM does!

kybernetyk
2 replies
1d9h

I'm building plain old boring desktop software. The bugs all (ok not all, but most) come from edge cases with users doing "dumb" (totally unexpected) stuff.

If it weren't for users my software would be perfect ;)

shaan7
0 replies
1d9h

It is even more fun for folks writing window managers, they have literal edge cases to deal with xD

lukan
0 replies
1d8h

You have to expect users will click everything, at any time, while holding any key, if only by accident. If your UI cannot handle it, your states were not clearly defined.
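
As a minimal sketch of what "clearly defined states" can look like (my example, not the commenter's): model the UI as an explicit union, so a stray double-click becomes a no-op instead of a bug.

    // A fetch button users may hammer: the union makes illegal transitions
    // impossible to express, and repeated clicks simply do nothing.
    type UiState =
      | { kind: "idle" }
      | { kind: "loading" }
      | { kind: "done"; data: string };

    let state: UiState = { kind: "idle" };

    function onClick(): void {
      if (state.kind !== "idle") return; // a second click can't start a second request
      state = { kind: "loading" };
      fakeFetch().then((data) => { state = { kind: "done", data }; });
    }

    function fakeFetch(): Promise<string> {
      return new Promise((resolve) => setTimeout(() => resolve("payload"), 100));
    }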

Cthulhu_
2 replies
1d10h

Clear requirements and a singular task; it doesn't sound like he also had to balance a load of other projects, meetings, rituals, etc. He was able to do deep work, as the hype books call it.

I'm probably romanticising it, but it was a time where software engineers and their skills, time and attention span were still respected.

lieks
0 replies
1d7h

Note that Multics had a particularly good work environment: https://multicians.org/devproc.html

In particular, management was tech-savvy and everyone liked and respected each other. Most were also talented.

creshal
0 replies
1d10h

We learned to code this way in school, exams gave us a printed copy of the API docs and as much paper as we wanted. Any compiler errors or warnings were point deductions, so people eventually got good at writing code that compiled on the first try.

But in actual professional coding practice over 10+ years, I can count on one hand the occasions where I had clean enough reqs and enough uninterrupted time to design software like that, and still have enough fingers left over to hold a pencil.

CraigJPerry
2 replies
1d8h

> Give a dev a clear API and a well defined set of criteria

If you've got that - why do you need the developer? Genuine question. If the technology requirements are that well defined, why on earth do I need to pay engineer-level salaries for such basic work? As you said, and I agree:

> Give a dev a clear API and a well defined set of criteria, and most will write code that works very well

At that point of having clear criteria, there are only 2 tasks left:

    1. write some code that satisfies this outcome
    2. write it in a way such that it can be cheaply changed in future
As a business owner, I care about 1 to make bank, and as an astute business owner, I care about 2 to keep making bank.

But but but performance! durability! ... <many more of the -ilities of software that an experienced engineer will claim to just take in their stride...>

As a customer of this software development process, these are just the next cycle of requirements to feed into the machine - hey, take this thing you did, make it faster while obeying rule #2 above.

It would not surprise me if ChatGPT were able to cover 80%+ of a problem that well defined.

I totally get the attraction to focus on the fun bit, but it's not how the business world is designed to work in most cases.

For what it's worth I think you nailed the actual value opportunity for a strong dev:

> no one actually knows what they're building

If you can nail that, you're always gonna be valuable. We talk about programming and coding but actually, this is more of the role. The coding part of the puzzle is easy in comparison.

As for:

> and even if they do it keeps changing because "agile"

If your understanding of why requirements are in constant flux amounts to pinning it on "agile", you're not even on the field yet never mind winning the game.

anon22981
1 replies
1d8h

You do realize that good specifications and requirements do not necessarily mean trivial and easy implementation?

Also, indeed, not all dev work is equally costly / hard / basic. It’s a crazy thought, but maybe that’s partially the reason why different devs can have different pay grades. :O

lukan
0 replies
1d7h

Yup, the basic HTML spec, for example, is pretty well defined by now. That really does not mean it is easy to implement as a working browser.

pooper
1 replies
1d9h

I wish we had the ability to push back on nonsense. Like, even when making a web page: ok, make this whole div clickable, but inside this div put another button that does something else when you click it.

Or at least spend the time to tell us why we are doing this nonsense and maybe I can come up with a less asinine solution.

onion2k
0 replies
1d4h

If you put a stopPropagation in your click handlers by default, that problem goes away. :)
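
A minimal sketch of the pattern (my example; the element IDs are hypothetical):

    // Outer clickable card with an inner button that does its own thing.
    const card = document.querySelector<HTMLDivElement>("#card")!;
    const button = document.querySelector<HTMLButtonElement>("#card button")!;

    card.addEventListener("click", () => {
      console.log("navigate to the card's detail page");
    });

    button.addEventListener("click", (event) => {
      event.stopPropagation(); // keep the click from bubbling up to the card handler
      console.log("do the button's own action instead");
    });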

himinlomax
1 replies
1d7h

and even if they do it keeps changing because "agile".

Agile is precisely the response to the fact that requirements keep changing, not the cause of it.

makapuf
0 replies
1d7h

Sure, but the fact that it's taken into account sends the message that change is free now, and that you can change anything anytime without consequences.

temac
0 replies
1d10h

Depending on the definition of "dev", they can be the ones who define the requirements and API.

sanderjd
0 replies
1d5h

People always seem to be scoffing when they say stuff like this, like it's a bad thing. But I don't get that attitude at all. I don't want to work on things where the whole shape of the project is known up front. To me, that feels like doing art with a stencil. I love the process of seeking the most valuable next step to take while we feel our way through a dark room with an ambiguous destination.

Now, the (common) version of this that I don't like is when there is no way to know whether a sequence of steps ended up in a better place than where it began. So feedback is very important to me, but there are lots of ways to get it: there are quantitative measures like growth in usage or revenue, and there are qualitative measures like surveys, or, for things like internal platforms, seeing the roadmaps of other teams be unblocked or accelerated by your work.

But I find significantly less joy in being told "we need this exact specific thing, please go build it and report back", and I have rarely seen it be the case that what they needed was actually that exact specific thing.

necovek
0 replies
1d10h

Nope, he had a well documented set of APIs to program against. In the one instance where he didn't, they had a crash.

He did "invent" internal and external APIs for his service, just like developers do today: that's the part requiring developers to be like "engineers".

m_st
0 replies
1d5h

I fully agree. Yet I'm in the same boat.

Started as a Windows line-of-business software developer almost 20 years ago. At first we got clear specifications with UI mockups and a description of what each and every button should do. When there were questions, these specs were updated, and we implemented and tested and fixed until our boss was happy.

Over the years we got more and more customers, yet less and less time for tests and fixes. So we switched to "agile" and dropped the specs, instead writing quick notes about what must (roughly) be done. At first all were happy. But now we have a huge number of features that no one dev knows. You have to ask and search around until you find all the loose specs.

Now I'm managing such a team of developers and have the same issue. I don't have the time to write a clean specification, let alone discuss it with the actual customer, who anyway doesn't really understand all the implications. So they start coding by adding more if and else blocks to an already bloated code base.

I miss the days when we would start with a class diagram, or just some quick drawing of how the components would interact and be testable.

itsoktocry
0 replies
1d7h

Give a dev a clear API and a well defined set of criteria, and most will write code that works very well

Someone still needs to do this work, and sometimes that's the developer. Isn't that what he was doing in this anecdote?

holoduke
0 replies
1d4h

Also because many devs do not understand the multiple execution layers: from React devs who have never heard of native JavaScript loops, to Spring devs who have no idea what the SQL looks like, to C devs who have never used a memory heap tool, to Node devs who have never seen the impact of I/O performance, to... You need an all-seeing eye to make things performant. Those are scarce. Probably one in 20 devs, max.

cushychicken
0 replies
1d3h

If you don’t understand a problem well enough to write it down, you don’t understand it well enough to solve it.

If you don’t understand a solution well enough to write it down, you don’t understand it well enough to implement it. (Much less get someone else to implement it.)

An overwhelming number of engineers I have worked with seem to think any kind of written design document is busy work, and not actually a part of the process of making something worth using. My experience shows it is not that at all: design docs are a tool to externalize your own thinking, and reflect on its quality without the overhead of keeping it in your mind. That’s their first and foremost purpose. To explain your rationale to others is a secondary objective.

I'm not talking about lengthy requirements documents or specifications, either. I mean a two or three page white paper outlining a problem or a proposed solution in plain English. This is something you can bang out in half an hour or an hour if you're diligent.

Many of the folks I work with never even seem to bother.

BiteCode_dev
0 replies
1d8h

If you think there were good old days where all software was well specified, you are wearing very rose-tinted glasses.

We are inventing the field as we go. We have been for decades.

Also, it's one of those fields where an expert in the craft must build a system for a domain he is not an expert in.

Again and again.

And most projects are custom.

Agile has nothing to do with it.

It's the nature of IT right now.

kandelt
14 replies
1d10h

Every time something impressive is posted, a real feat, the comments mostly pick on it. Please stop.

kybernetyk
6 replies
1d9h

Modern devs are frustrated. They don't do anything of importance or significance anymore. They're not the guy designing a virtual memory manager for a novel OS. They're just some cog in a machine converting SQL queries into HTML to show ads to kids and old people.

One can only become cynical and bitter.

kome
3 replies
1d8h

spot on. plus using a lot of js because they don't know html and css (or just very superficially). :-)

antihipocrat
2 replies
1d8h

JS is necessary to maintain state; can this be done with CSS and HTML alone?

Also, I'd say that manipulation of CSS classes and HTML elements with JS gives the developer quite a strong understanding of them.
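
For what that looks like in practice, a minimal sketch (my example; the element IDs are hypothetical): the single bit of state lives in the DOM as a class name, and CSS rules keyed on it do the presentation work.

    const menu = document.querySelector<HTMLElement>("#menu")!;
    const toggle = document.querySelector<HTMLButtonElement>("#menu-toggle")!;

    toggle.addEventListener("click", () => {
      const isOpen = menu.classList.toggle("open"); // state stored as a class
      toggle.setAttribute("aria-expanded", String(isOpen));
    });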

p_l
0 replies
1d6h

You can get away with much less state with minimal JS like HTMX

mcosta
0 replies
1d6h

Once again the confusion between "HTML document" and "Web App" bites. A NYT article does not need client-side state.

elzbardico
1 replies
1d7h

Since the beginning, most developers in history wrote payroll and accounting software. Now they build cat photo sharing apps. If anything, I consider this an improvement.

p_l
0 replies
1d6h

Payroll, Accounts Receivable, etc. feel much more dignified than figuring out how to deliver more brain-damaging ads.

Also, the whole extreme programming style was prototyped around a hilariously big (for reasons called "USA", not bad code) payroll system for Chrysler.

LAC-Tech
3 replies
1d7h

But you don't understand, I once had to replace chart.js 3 with chart.js 4 and the documentation didn't cover all of the changes. This man would have collapsed into a heap if he'd had to face the challenges I've faced.

Perz1val
2 replies
1d7h

Like, unironically, the things we deal with would be the equivalent of somebody coming in, dumping a bucket of mud on his pretty diagrams, and saying "integrate with that".

LAC-Tech
1 replies
1d7h

I think we should just admit people working on major components of file systems, from scratch, are probably smarter than us.

kragen
0 replies
1d6h

maybe they were just more skilled or working in a better working environment

i mean they might just be smarter, but that's far from the only explanation. they could be working harder, or have better mentorship, or not have to deal with productivity sinks you do, etc.

sanderjd
0 replies
1d4h

I understand this attitude, but FWIW, when I got here 5 hours after your post, all the top comments were very positive and not picking on it at all. And that's how it usually is when I see one of these comments like yours. It takes a little time for the sieve of user moderation to bubble the better comments up to the top.

necovek
0 replies
1d6h

Every time someone posts a counter argument, people pick on it. Please stop.

Oh really, don't: that's what the discussion should be about.

You are certainly right to a point, but a better way to redirect positively is to ask: how can we come to a state where similar feats are possible today in a company setting?

Etheryte
0 replies
1d8h

Every time comments are posted, a real interactive discussion, other commenters pick on it. Please stop.

musiciangames
10 replies
1d10h

I worked with a defector from the Soviet Union. He told me the reason Soviet programmers were so good was that they had very limited access to computers. If you had to program using pencil and paper, you wanted to make it work first time. (Coincidentally we were working for Honeywell).

viraptor
5 replies
1d10h

Alternatively: only people who were into it enough to handle doing things on paper and with limited access to computers kept working afterwards.

So: was it a good way to teach or a good way to filter out less motivated people?

littlestymaar
2 replies
1d10h

It cannot be the only effect, since this pen-and-paper effect manifests itself in other places: when you look at old industrial systems' plans (a nuclear power plant, in the example I have in mind), you're very surprised to see how small the number of revisions is. Because it was very tedious to re-do an entire plan from scratch by hand, people used to be extremely cautious when designing them. Today, with CAD software, revisions are cheap, and producing half-baked plans that need later correction is the norm.

Maybe the new way of doing things is still more efficient in the end, but maybe not.

ew6082
0 replies
1d7h

I am an Engineer and have done things both ways throughout my career. What is easy to miss in digital are the overall holistic design and the way it all fits together, because you are always zoomed in on a particular piece of the puzzle and rarely see the whole thing on screen in a meaningful way. There is just something about seeing a full design laid out on paper, spread out on the table, that lets many little things jump out at you.

If it's important print it out, in large format.

Ringz
0 replies
1d7h

A solution to this problem could be that the engineer has to pay for each revision out of his own pocket. Or better yet: everyone involved in the project has a budget for revisions and change requests. I'm just kidding. Or maybe not.

schnitzelstoat
0 replies
1d6h

It's only relatively recently (like since the 00's) that CS has become this sexy field due to sky-high salaries (at least in the USA).

Before then filtering out less motivated people wasn't as much of an issue, as they had little incentive to study CS in the first place.

Megranium
0 replies
1d10h

I guess it was at a time when computers weren't as ubiquitous as they are today, and an average non-technical person might never have touched one.

In those days, designing on paper was probably just the default mode of working ...

treprinum
0 replies
1d8h

He must have been one of the privileged few with access to pencil and paper then.

m463
0 replies
1d1h

I learned on computers where cards and printouts were the I/O system.

I remember a number of years where the edit->compile->print loop was much better with listings than terminals, simply because you couldn't see/navigate/edit your code as efficiently. It eventually improved with better/visual editors, larger terminals and faster compiling and running.

An analogy might be early firearms, where things like wet gunpowder, unreliable flintlocks, and pushing everything down the barrel made for a slow transition from the longbow.

lieks
0 replies
1d7h

I can attest to this. Once, a college teacher gave us a programming exercise, and then went on to suggest how we might go about solving it.

While he was talking, I started outlining a solution on a piece of paper. At school, I couldn't use a computer during class, so I'd gotten used to programming (and debugging!) on paper.

When I arrived at home, I typed it in and tried it out. Could I possibly reproduce that Multics story I'd read about?

It didn't compile the first time, but after correcting a variable name, it worked perfectly.

Arisaka1
0 replies
1d3h

When we learned COBOL in high school we did it with pen and paper, mostly because the school labs didn't have enough PCs for every student, and we ended up working 3-4 students per computer.

countWSS
7 replies
1d10h

The software at that time was just much smaller. The size of most projects today, anything above a toy program, is in megabytes, and would be impossible to "write down" as a single file.

tgv
3 replies
1d9h

OTOH, there were practically no examples to draw from. There was no virtual memory class in their OS-101 course. These people pioneered OS design.

adrianN
2 replies
1d8h

I often find it easier not to introduce bugs while I pioneer something than when I integrate with a mountain of existing stuff.

subtra3t
0 replies
1d8h

Of course, because then you can label bugs as unintended features :)

Ringz
0 replies
1d7h

The current JavaScript dilemma.

kragen
0 replies
1d6h

i was surprised to find that my natural-language notes from my electronics notebook from the last month total a quarter megabyte already. about 37000 words of markdown

a small fraction of that is urls for electronic parts, but nearly all (90%, say) is just english text. another 10% is stuff i quoted from other people (web pages, manufacturer app notes, books, etc.) it is a single file and will probably continue to be a single file forever, at least for the rest of the year. if i continue at the same pace, that will be 12 megabytes. there is nothing impossible about that at all

(if you're interested, git clone http://canonical.org/~kragen/sw/leatherdrink.git. it's currently almost 200 megs because i checked in the avr toolchain. there are also photos and videos and schematics and stuff)

in a programming language the growth would be fewer bytes per month, but perhaps by a factor of five

coldtea
0 replies
1d6h

The software at that time was just much smaller.

Do go check the code: https://multicians.org/vtoc_man.html

This is both bigger and more complex than most stuff people work on today (which is glorified glue code for bullshit CRUD/REST/etc). It might be smaller than the overall LoC of their project, but it's more than the units they deal with (and this is part of an overall OS codebase anyway).

This is a whole "manager, to manage file description information. It had to transport the file information between disk and memory, manage a shared memory buffer pool, and manage space on disk for the information."

Dare most current programmers to write such a thing, with the requirements and semantics it has in his version, today, in their language of choice.

Most would be lost at even contemplating that. Much less write it in paper and type it in and have it work.

LAC-Tech
0 replies
1d7h

I'm sorry but the line of business software most of us work on is not more impressive than writing a major component of a file system from scratch.

raverbashing
5 replies
1d10h

This is a bit of a specific idiosyncrasy

I just can't plan things on paper. Sure, I can try to plan them, but then I see I have no idea how things can/should work, or how to best arrange things, unless I'm actually typing code.

Also the planning on paper usually looks nice but doesn't consider all the real-world imperfections, so your beautiful planning becomes "ugly" quickly

Megranium
2 replies
1d10h

It might also have to do with how programming was taught to a certain generation? At least from my own experience, when I was at university OOP and UML were all the rage, so we had to specify through diagrams in fancy diagram editors, then generate the code from that, and then still re-write everything by hand because it never really worked out quite that well.

I never touched UML again after university ...

kunley
0 replies
1d9h

UML was an abomination, and it's a relief it's completely dead.

bux93
0 replies
1d8h

Well, the code generation was a mistake, but drawing diagrams is explicitly what the guy in this story did:

He started by sitting at his desk and drawing a lot of diagrams. I was the project coordinator, so I used to drop in on him and ask how things were going. "Still designing," he'd say. He wanted the diagrams to look beautiful and symmetrical as well as capturing all the state information.

jpgvm
0 replies
1d10h

My experience is that it's a skill like any other, I was bad at it, I wanted to be better at it so I did it more often and now I'm good at it.

Being able to anticipate and find nice design level general solutions for hairy real-world problems is where systems design becomes most rewarding.

Cthulhu_
0 replies
1d9h

I think the modern-day equivalent would be test-driven development; in both cases (unless I don't get it) is that you make sure you understand the problem and write verifications before you start on a solution.

Of course, this doesn't work for every problem / task.

lqet
4 replies
1d6h

Here is the code written by André Bensoussan: https://multicians.org/vtoc_man.html

smikhanov
2 replies
1d2h

So the answer to "how did he do it?" is obvious -- it simply wasn't too complex of a task (by modern standards).

gwern
0 replies
23h30m

I don't know about that: is it 'not complex' or did he succeed at the design? Implementing core components of a file system in a few pages sounds quite complex to me - I'm getting a bit scared just reading that comment explaining why the race condition is harmless.

abecedarius
0 replies
1d2h

I wouldn't measure difficulty of a design problem by the complexity of a solution. "This is so much simpler than I expected" is some of the highest praise.

(About this particular problem I have no idea.)

coldtea
0 replies
1d6h

Hmm, have never seen much of PL/1.

Control flow aside, the syntax looks surprisingly clean.

xeyownt
3 replies
1d10h

How did André do this, with no tool but a pencil?

Using his brain. Intensively ;-)

Reminds me of a story that happened to me. I was making some heavy changes to a library, then started running the tests. The tests were failing, and without looking at the log I knew immediately where the bug was. So I fixed that, launched the tests, and again same story. And again. The 4th time, the cause was not immediate, so I was considering looking at the log when I noticed I was not actually running my program at all, but executing something else!

So to find a bug, you just need to ask yourself where it will fail, and fix that ;-)

euroderf
2 replies
1d10h

pebkac ?

virtualbluesky
0 replies
1d9h

Problem Exists Between Keyboard And Chair

defrost
0 replies
1d10h

Programmer Exists,

Bug Kausing Almost Certain.

frozenwind
3 replies
1d9h

As I sat down at my desk to start the work day and finish a dreaded task, my mind of course immediately drifted away and felt the urge to check some news on reddit or HW. I opened HW and this is the first title.

"It can be done".

Thanks for the motivation!

P.S. Of course, I also read the article :).

siwatanejo
2 replies
1d9h

s/HW/HN/?

frozenwind
1 replies
1d8h

HN.... I have an app on my phone that's called "Hews". It's morning and everything is confusing :(

namtab00
0 replies
1d8h

There's a "Hews 2"..

Also "Hacki" and "HACK". All similar (of course), but all in some way lacking..

i_am_proteus
2 replies
1d7h

There's a good comment from account jrd259, who worked with Andre, in the previous HN thread about this article (https://news.ycombinator.com/item?id=18415231). It relates the importance of a private work space with large desks and no notifications.

sssilver
1 replies
1d

This comment really deserves more attention.

Multicomp
0 replies
1d

Agreed, here's the quoted copy/paste for those scrolling by:

Former Multician here. Andre was super-smart, but it is perhaps also relevant that even the most junior developers (as I was then) had quiet private space to work with large desks. All design was done offline. Multics terminals were like type-writers (although video did show up near the end), hence no stream of popups. The environment both allowed for and demanded focus and concentration. This is no longer so.

agumonkey
2 replies
1d10h

didn't know about andre bensoussan, but here's a pic of him with louis pouzin (of cyclade and RUNCOM fame) https://multicians.org/shell.html

that said, i always love stories of people taking time to find aesthetics and quality with pen and paper

rokkitmensch
1 replies
1d10h

I don't even smoke, and think that photo is amazing.

agumonkey
0 replies
1d9h

There's a clint eastwood western style to it I guess

paganel
1 replies
1d7h

Multics operating system at Honeywell in Cambridge

Still surprising how much entrenched the computer industry was (and probably still is) within the military complex.

I'm curious if there is a list somewhere with big, consequential IT/computer programming stuff that has NOT had any direct connection with the military.

p_l
0 replies
1d6h

Scams and ad-pushing? Persistent Invigilation by ad-serving companies also surprisingly started out in commercial space, not military, from what I know.

andai
1 replies
1d

A friend told me that when she was in school, they'd hand in their programming assignments on paper, and a week later they'd get the output. She said week-long "compilation times" make you learn to double check your work!

deely3
0 replies
22h5m

I'm not sure that I like this approach. Documentation very rarely covers 100% of the nuances. There will always be a "test and see" part of learning, in my opinion.

For me it's like being allowed to use paper for drawing, but only 10 minutes a day. Or a piano, but only every second Monday for a few hours.

Timwi
1 replies
1d6h

This is reminding me of two times in my life that I was programming on paper.

The first time I was maybe 10–12 years old. We were visiting my grandparents, who had no computer (and there were no smartphones). They did, however, have a (mechanical) typewriter that had the ability to type in both black and red with a switch. Back then my main hobby was Turbo Pascal, so I used the typewriter to write a Pascal program that I would later type up into the PC and run when we got back home. We spent a week at my grandparents, so I had plenty time to think it through, debug it by hand, and re-type sections that were faulty.

The second time relates to an esoteric programming language I invented called Ziim (https://esolangs.org/wiki/Ziim), and more specifically, the binary addition function you see on that page. It's huge and complicated... and it had a bug, which I knew because I ran it in an interpreter, but I had no idea where the bug was and how to fix it. — Around that time, I had a long bus ride coming up; sitting in a coach for like 6 hours or so. That was a perfect opportunity to debug this. I transferred the Ziim addition function to square-ruled paper using a pencil, and then, on the bus, I executed it by hand, step by step. I found the bug and was able to fix it. It required redoing the entire function layout.

I guess the moral of the story is that restricting your own ability to do things the "easy" way can sometimes lead to well-thought-out code. Despite that, I don't do this regularly at all. I will just as soon write half-arsed snippets and iterate, grab a debugger to step through code, etc. Maybe I should rethink that?

donkeybeer
0 replies
1d4h

It works because normally you do most of the thinking in your head, so on the few occasions you are doing deep thinking using paper and pen, you have the discipline and care to only write down what is important and keep the rest in your head. If you do everything on paper, it starts being not much different from doing everything directly on the computer, because you will be writing everything down and you will lose that abstraction and care.

ChrisMarshallNY
1 replies
1d6h

Well, when I started, it was the waning days of "big iron."

In Ye Olde Days, "programmers" were usually little more than data entry clerks (often women). The people who wrote the software would be in offices, filled with smoke, writing the programs on paper.

Compute time was expensive and precious. If you got a bug during your run, you wouldn't get a chance to fix it, until you could schedule another data entry session, and another CPU run.

It encouraged a "measure twice, cut once" approach. Since most software, in those days, was fairly humble compared to what's de rigueur these days, it was easier to do. Also, software usually had extremely restricted I/O. UI was not even a term, and peripheral connection was a big deal.

These days, I often "throw stuff at the wall, and see what sticks," when writing software. It's easier for me to write some half-baked crap, and debug it in the IDE.

My software development tends to be "iterative." I write about that, here: https://littlegreenviper.com/miscellany/evolutionary-design-...

stavros
0 replies
1d2h

This is exactly it. People are lamenting the bygone days of having to think a lot about your software before you wrote it, but the truth is that, today, the same programmer would have been able to iterate his way to the same program in perhaps half the time.

I remember an assignment in my university course, where I had to write a (simple, admittedly) OS kernel. I didn't know C very well, and I didn't know what kernels did beyond the theory, but we had to write a few hundred lines of C code to manage tasks. I knew that there would be absolutely no way I could debug this program if it didn't work, as it was all parallel code and a bug would mean mysterious race conditions.

I reasoned about the system as a whole, and I spent a few days writing small, self-contained functions that I thought a lot about. I then compiled and ran it, and, after the obligatory "missing semicolon" compilation errors, it worked first try.

virtualritz
0 replies
1d5h

My first exposure to programming was GWBASIC on the 8086 PC clone of my uncle. I was 12. My uncle lived over 500km away from my home.

I saw him once a year, when my family came together for xmas at his house.

In the 12 months in-between I wrote tons of BASIC code with pencil. And then typed it in at xmas. That lasted for two years, until I had saved enough pocket money to buy my first PC.

The experience was very similar. Stuff just ran & worked as expected.

If you write code by pencil, you just think a lot more before putting anything down on paper, as any corrections are a nightmare -- except for the last line. You can't just insert a code block, etc.

tanseydavid
0 replies
21h38m

How did he do it?

Simple. Agile/Scrum had not yet been conceived of.

szundi
0 replies
1d10h

Back then it was possible with the human brain.

shermantanktop
0 replies
1d

I love the idea of this but it is so, so far from the iterative approach I normally use.

When I have taken this approach - pencil and paper - I find that it accelerates the initial big decisions (e.g. "what will this module's responsibility be?") but sometimes causes rework when I discover that I was optimistic or unaware of how two areas of the code interact.

Perhaps I'm just not doing it thoroughly enough.

revskill
0 replies
1d7h

Given the input and the correct data structure, you can.

quickthrower2
0 replies
1d10h

As well as a bit of genius, the key here is being able to understand every layer of the stack from the chip upwards, which was probably more achievable then.

necovek
0 replies
1d10h

Well, the systems are way more interconnected today, especially over less stable interconnects (the internet!), with nothing really being done serially. That in itself makes programming multiple times harder. Add to that that devs are usually dealing with external, badly documented interfaces (eg. The Cloud, frameworks...), and it's impossible to know how something will work without trying it out (just like the error handling bug they hit: now multiply that by the 1000 or 10000 external API uses in a typical non-trivial program today).

But the other missing piece to this is how much time did this actually take (there is an implication "designing" took its sweet time)? Would the modern way of coding on a computer be faster with the same number of incidents?

dang
0 replies
1d10h

Related:

It Can Be Done (2003) - https://news.ycombinator.com/item?id=18415231 - Nov 2018 (18 comments)

alphazard
0 replies
1d3h

Certain kinds of systems really have to be designed ahead of time. The alternative is an incomplete system. You might never get anything working without a design.

Observability and trial-and-error tooling are a big deal. They enable the workflows that allow developers with a poor conception of the system to brute force their way to fitting requirements. The popular ethos around shipping the first thing that works, often, only works when you can see what you're doing, and the cost of bugs in production is low. That's software in 2024: mostly unimportant garbage.

For this kind of system at this point in time, a full design was the only way. The article hints at a touch of perfectionism (which probably didn't help the timeline), but it's not like he would have been more likely to guess and check his way to a working memory management system in less time with the sans-design development style popular today.

RugnirViking
0 replies
1d3h

Wow, that sounds like a dream work environment. Sounds like he had a good amount of time to get it right, a well defined but tough task, and near total freedom to architect it. Gotta admire everyone involved.

LAC-Tech
0 replies
1d7h

I would love a more systematic approach to this non-coding part of coding. I mean, I doodle little diagrams and write notes on scrap paper, but nothing so rigorous as what Andre was doing.

DeathArrow
0 replies
1d8h

How did André do this, with no tool but a pencil?

When I was 14 and went to high school, for programming classes I did most of my code on paper. I was a poor kid in a poor country, so not only did I not have a PC at home, but the few "computers" we had in the computer lab at school were ZX Spectrum clones which didn't run Turbo Pascal, the programming language we used at that time.

ChrisArchitect
0 replies
1d10h

(1994)

331c8c71
0 replies
1d10h

It can be done no doubt (with some training as any other skill). And probably not by everyone (in the same way as not everyone has the will, motivation and means to, say, completely master a musical instrument or certain sport).

The issue is that this way of working would be even less acceptable in 2024 than in 2003.

That being said, even now there are places where you can work the way you like, but the irony is those places won't be on the radar of, or of much interest to, your run-of-the-mill career developer.