Great article. I just want to comment on this quote from the article:
"Really good developers do 90% or more of the work before they ever touch the keyboard;"
While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time. So the amount of pure thinking you can do without writing anything at all is extremely limited.
My solution to this problem is to actually hit the keyboard almost immediately once I have one or more possible ways to go about a problem, without first fully developing them into a well specified design. And then, I try as many of those as I think necessary, by actually writing the code. With experience, I've found that many times, what I initially thought would be the best solution turned out to be much worse than what was initially a less promising one. Nothing makes problems more apparent than concrete, running code.
In other words, I think that rather than just thinking, you need to put your ideas to the test by actually materializing them into code. And only then can you truly appreciate all the consequences your ideas have on the final code.
This is not an original idea, of course; I think it's just another way of describing the idea of software prototyping, or the idea that you should "throw away" your first iteration.
In yet different words: writing code should actually be seen as part of the "thinking process".
I had the same thought as I read that line. I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.
But for the rest of us (especially myself), it seems to be more like an interplay between thinking of what to write, writing it, testing it, thinking some more, changing some minor or major parts of what we wrote, and so on, until it feels good enough.
In the end, it's a bit of an art, coming up with the final working version.
Well that explains why Git has such a god awful API. Maybe he should've done some prototyping too.
I'm going to take a stab here: you've never used cvs or svn. git, for all its warts, is quite literally a 10x improvement on those, which is what it was (mostly) competing with.
It's really hard to overstate how much of a sea change git was.
It's very rare that a new piece of software just completely supplants existing solutions as widely and quickly as git did in the version control space.
What do you mean by API? Linus's original git didn't have an API, just a bunch of low-level C commands ('plumbing'). The CLI ('porcelain') was originally just wrappers around the plumbing.
Those C functions are the API for git.
baseless conjecture
On the other hand, the hooks system of git is very good API design, imo.
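For anyone who hasn't poked at it: a hook is just an executable that git runs at a well-known path, e.g. .git/hooks/pre-commit, and a non-zero exit status aborts the commit. A minimal sketch of that contract (the "DO NOT COMMIT" check is just an invented example, not anything git ships with):

```python
#!/usr/bin/env python3
# .git/hooks/pre-commit -- git runs this before each commit;
# exiting non-zero aborts the commit (the check below is only illustrative)
import subprocess
import sys

# files staged for this commit
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for path in staged:
    try:
        with open(path, encoding="utf-8", errors="ignore") as f:
            if "DO NOT COMMIT" in f.read():
                print(f"refusing to commit {path}: contains 'DO NOT COMMIT'")
                sys.exit(1)
    except (FileNotFoundError, IsADirectoryError):
        pass  # e.g. the path was staged for deletion

sys.exit(0)
```

The nice part of the design is that the whole interface is "an executable and an exit code", so it works the same no matter what language the hook is written in.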
Yeah, could be... IIRC, he said he doesn't find version control and databases interesting. So he just did what had to be done, did it quickly and then delegated, so he could get back to more satisfying work.
I can relate to that.
I tend to see this as a sign that a design is still too complicated. Keep simplifying, which may include splitting into components that are separately easy to keep in your head.
This is really important for maintenance later on. If it's too complicated now to keep in your head, how will you ever have a chance to maintain it 3 years down the line? Or explain it to somebody else?
This is the only practical way (IMHO) to do a good job, but there can be an irreducibly complex kernel to a problem which manifests itself in the interactions between components even when each atomic component is simple.
I don't do it in my head. I do diagrams, then discuss them with other people until everyone is on the same page. It's amazing how convoluted "get data from the db, do something to it, send it back" can get, especially if there is a queue or multiple consumers in play, when it's actually the simplest thing in the world, which is why people get over-confident and write super-confusing code.
I tend to iterate.
I get a general idea, then start writing code; usually the "sticky" parts, where I anticipate the highest likelihood of trouble.
I've learned that I can't anticipate all the problems, and I really need to encounter them in practice.
This method often means that I need to throw out a lot of work.
I seldom write stuff down[0], until I know that I'm on the right track, which reduces what I call "Concrete Galoshes."[1]
[0] https://littlegreenviper.com/miscellany/evolutionary-design-...
[1] https://littlegreenviper.com/miscellany/concrete-galoshes/
I do the same, iterate. When I am happy with the code I imagine I've probably rewritten it roughly three times.
Now I could have spent that time "whiteboarding" and it's possible I would have come close to the same solution. But whiteboarding in my mind is still guessing, anticipating - coding is of course real.
I think that as you gain experience as a programmer you are able to intuit the right way to begin to code a problem, the iterating is still there but more incremental.
Yeah, the same. I rewrite code until I'm happy with it. When starting a new program, that can eat a lot of time, because I might need to spend weeks rewriting and tossing everything out until I feel I've got it good enough. I've tried to do it faster, but I just can't. The only way is to write working code and reflect on it.
My only optimization of this process is to use Java and not throw everything out, but keep refactoring instead. IDEA allows for very quick and safe refactoring cycles, so I can iterate on the overall architecture or any selected component.
I really envy people who can get it right the first time. I just can't, despite having 20 years of programming under my belt. And when time is tight and I need to accept an obviously bad design, that's what burns me out.
Nobody gets it right the first time.
Good design evolves from knowing the problem space.
Until you've explored it you don't know it.
I've seen some really good systems that have been built in one shot. They were all ground up rewrites of other very well known but fatally flawed systems.
And even then, within them, much of the architecture had to be reworked, or some other trade-off had to be made.
I think once you are an experienced programmer, beyond being able to break the target state down into chunks of tasks, you are able to intuit pitfalls/blockers within those chunks better than less experienced programmers.
An experienced programmer is also more cognizant of the importance of architectural decisions, striking the balance between keeping things simple vs adding abstractions, and between making things flexible vs YAGNI.
Once those important bits are taken care of, the rest of it is more or less personal style.
I don’t think this methodology works, unless we are very experienced.
I wanted to work that way, when I was younger, but the results were seldom good.
Good judgment comes from experience. Experience comes from bad judgment.
-Attributed to Nasrudin
Indeed, and this limitation specifically is why I dislike VIPER: the design pattern itself was taking up too many of my attention slots, leaving less available for the actual code.
(I think I completely failed to convince anyone that this was important).
are you talking about the modal text editor for emacs
I would bet he meant this:
https://www.techtarget.com/whatis/definition/VIPER
surely. thanks
Correct, that.
The most I've done with emacs (and vim) is following a google search result for "how do I exit …"
Mathematics was invented by the human mind to minimize waste and maximize work productivity, by allowing reality-mapping abstractions to take precedence over empirical falsification of propositions.
And what most people can't do, such as keeping in their heads absolutely all the concepts of a theoretical computer software application, is an indication that real programmers exist on a higher elevation where information technology is literally second nature to them. To put it bluntly and succinctly.
For computer software development to be part of thinking, a more intimate fusion between man and machine needs to happen. Instead of the position that a programmer is a separate and autonomous entity from his fungible software.
The best programmers simulate machines in their heads, basically.
Yes, but they still suck at it.
That's why people create procedures like prototyping, test-driven design, type-driven design, paper prototypes, API mocking, and so on.
The point is that there are no programmers who can simulate machines in their heads. These elite engineers only exist in theory. Because if they did exist, they would appear so alien and freakish to you that they would never be able to communicate their form of software development paradigms and patterns. This rare type of programmer only exists in the future, assuming we're on a timeline toward such a singularity. And we're not, except for some exceptions that cultivate a colony away from what is commonly called the tech industry.
EDIT: Unless you're saying that SOLID, VIPER, TDD, etc. are already alien invaders from a perfect world and only good and skilled humans can adhere to the rules with uncanny accuracy?
yeah, i just sketched out some code ideas on paper over a few days, checked and rechecked them to make sure they were right, and then after i wrote the code on the computer tonight, it was full of bugs that i took hours and hours to figure out anyway. debugging output, stepping through in the debugger, randomly banging on shit to see what would happen because i was out of ideas. i would have asked coworkers but i'm fresh out of those at the moment
i am not smart enough this week to debug a raytracer on paper before typing it in, if i ever was
things like hypothesis can make a computer very powerful for checking out ideas
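for example, a quick property-based check with hypothesis might look something like this (reflect_point is just a made-up stand-in for whatever you're actually testing; the library generates the inputs and hunts for a counterexample):

```python
# sketch of a hypothesis property test: reflecting a point twice across
# the same vertical axis should give back the original point
from hypothesis import given, strategies as st

def reflect_point(x, y, axis_x):
    # reflect (x, y) across the vertical line x = axis_x
    return (2 * axis_x - x, y)

@given(st.integers(), st.integers(), st.integers())
def test_reflecting_twice_is_identity(x, y, axis_x):
    assert reflect_point(*reflect_point(x, y, axis_x), axis_x) == (x, y)
```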
I've no coworkers either, and over time both I and my code suffer for it. Some say thinking is at its essence a social endeavor.
I should say rather, Thinking.
I agree this is how it often goes.
But this also makes it difficult to give accurate estimates, because you sometimes need to prototype 2, 3, or even more designs to work out the best option.
Unfortunately, most of the time leadership doesn't see things this way. For them, the tough work of thinking ends with architecture or another layer down. Then the engineers are responsible only for translating those designs into software just by typing away at a keyboard.
This leads to a mismatch in delivery expectations between leadership and developers.
If you know so little that you have to make 3 prototypes to understand your problem, do you think designing it by any other process will make it possible to give an accurate estimate?
In my opinion, you shouldn't need to prototype all of these options, but you will need to stress-test any points where you have uncertainty.
The prototype should provide you with cast-iron certainty that the final design can be implemented, to avoid wasting a huge amount of effort.
Right, I always thought this is what TDD is used for; very often I design my code in tests and let that kind of guide my implementation.
I kind of imagine what the end result should be in my head (given value A and B, these rows should be X and Y), then write the tests in what I _think_ would be a good api for my system and go from there.
The end result is that my code is testable by default and I get to go through multiple cycles of Red -> Green -> Refactor until I end up happy with it.
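To make that concrete, here's a rough sketch of how such a test-first start might look (filter_rows and its behaviour are just an invented example, not from any real project):

```python
import unittest

# the test was written first, against the API I _thought_ I wanted;
# filter_rows below is the minimal implementation that turned it green
def filter_rows(rows, status):
    return [r for r in rows if r["status"] == status]

class FilterRowsTest(unittest.TestCase):
    def test_returns_only_rows_with_matching_status(self):
        rows = [
            {"id": 1, "status": "open"},
            {"id": 2, "status": "closed"},
            {"id": 3, "status": "open"},
        ]
        open_ids = [r["id"] for r in filter_rows(rows, "open")]
        self.assertEqual([1, 3], open_ids)

if __name__ == "__main__":
    unittest.main()
```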
Does anyone else work like this?
TDD comes up with some really novel designs sometimes.
Like, I expect it should look one way, but after I'm done with a few TDD cycles I'm at a state where the design I expected is either hard to get to or unnecessary.
I think this is why some people don't like TDD much, sometimes you have to let go of your ideas, or if you're stuck to them, you need to go back much earlier and try again.
I kind of like this though, makes it kind of like you're following a choose your own adventure book.
Pen, paper, diagrams.
Xmind and draw.io
I don’t like that line at all.
Personally, I think good developers get characters onto the screen and update as needed.
One problem with so much upfront work is how missing even a tiny thing can blow it all up, and it is really easy to miss things.
This is what I do: right off the bat I write down a line or two about what I need to do.
Then I break that down into small and smaller steps.
Then I hack it together to make it work.
Then I refactor to make it not a mess.
I also find it critical to start writing immediately. Just my thoughts and research results. I also like to attempt to write code too early. I'll get blocked very quickly or realize what I'm writing won't work, and it brings the blockage to the forefront of my mind. If I don't try to write code there will be some competing theories in my mind and they won't be prioritized correctly.
The secret to designing entire applications in your head is to be intimately familiar with the underlying platform and gotchas of the technology you're using. And the only way to learn those is to spend a lot of time in hands-on coding and active study. It also implies that you're using the same technology stack over and over and over again instead of pushing yourself into new areas. There's nothing wrong with this; I actually prefer sticking to the same tech stack, so I can focus on the problem itself; but I would note that the kind of 'great developer' in view here is probably fairly one-dimensional with respect to the tools they use.
Agree with your point. I think “developers that do 90% of their thinking before they touch the keyboard are really good” is the actual correct inference.
Plenty of good developers use the code as a notepad / scratch space to shape ideas, and that can also get the job done.
I could not agree more; it's rare to write a program where you know all the dependencies and libraries you will use, and the overall effect on other parts of the program, by heart. So a gradual design process is best.
I would point out, though, that that part also touched on understanding requirements, which is often a very difficult process. We might have a technical requirement conjured up from a customer requirement by someone less knowledgeable about the inner workings, and the resolution of that technical requirement may not even come close to addressing the end-users' use case. So a lot of time also goes into understanding what it is that the end-users actually need.
I completely agree with you. This article is on the right track, but it completely ignores the importance of exploratory programming to guide that thinking process.
I'd like to second that, especially if combined with a process where a lot of code should get discarded before making it into the repository. Undoing and reconsidering initial ideas is crucial to any creative flow I've had.
I think first on a macro level, and use mind maps and diagrams to keep things linked and organised.
As I've grown older, the importance of architecture over micro decisions has become blindingly apparent.
The micro can be optimised. Macro level decisions are often permanent.
That quote is already a quote in the article. The article author himself writes:
So at least there the article agrees with you.
I'd take the phrase with a grain of salt. What's certainly true is that you can't just type your way to a solution.
Whether you plan before pen meets paper or noodle, jam and iterate is a matter of taste.
Same, and since we are on the topic of measuring developer productivity: usually my bad-but-kinda-working prototype is not only much faster to develop, it also has more lines of code, maximizing my measurable productivity!
I don't think the quote suggests that a programmer would mentally design a whole system before writing any code. As programmers, we are used to thinking of problems as steps needing resolution, and that's exactly the 90% there. When you're quickly prototyping to see what fits better as a solution to the problem you're facing, you must have already thought about what the requirements are, what the constraints are, and what would be a reasonable API given your use case. Poking around until you find a reasonable path forward means you have already defined which way is forward.
The way I read it is that only 10% of the work is writing out the actual implementation that you're sticking with. How you get there isn't as important. I.e., someone might want to take notes on paper and draw graphs, while others might want to type things out. That's all still planning.