My best bit of advice for any programmer at any level: "Don't make stuff more complicated than it has to be!"
Software is complicated. Large, feature-rich software is even more complicated. That's hard enough to manage as it is. The last thing you want to do is throw a million abstraction layers, frameworks, libraries, precompilers, transpilers, build steps, validation hooks, style checkers, etc. into the mix. Each of them makes your project more complex by a certain factor. And not only do they add up - they multiply!
Now, that doesn't mean that you need to build everything bare-bones without the help of any third-party software - just make sensible choices. Unfortunately, developers are way too prone to adding another shiny thing to the mix.
So here's how you decide if you really need something:
At first: You don't - continue as is.
Then: If the problem persists and the suggested solution keeps coming up, still refuse, but investigate the solution.
Finally: If the problem persists, the solution seems well suited to address it, and it keeps coming up - accept that you have the problem and adopt the solution.
This is so hard in practice.
I just had a junior dev rewrite some of my code so that a builder calls a constructor, which instantiates a builder factory, which builds a builder; then that second builder creates the object.
This whole system only builds one type of object. He thinks that his solution is better because it’s more extensible.
I can’t make him see why it’s bad.
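For concreteness, here's a hypothetical Python sketch of that kind of indirection (all class and method names are invented for illustration, not taken from the actual code):

```python
# Hypothetical reconstruction of the indirection described above:
# builder -> builder factory -> second builder -> the one object type.
class Widget:
    def __init__(self, name: str):
        self.name = name

class WidgetBuilder:
    def build(self, name: str) -> Widget:
        return Widget(name)

class WidgetBuilderFactory:
    def create_builder(self) -> WidgetBuilder:
        return WidgetBuilder()

class EntryBuilder:
    """The outer builder: it constructs a factory, which builds a builder,
    which finally builds the only kind of object the system ever creates."""
    def build(self, name: str) -> Widget:
        factory = WidgetBuilderFactory()
        builder = factory.create_builder()
        return builder.build(name)
```

Every layer is a pure pass-through: the entire four-class chain collapses to `Widget(name)`, which is exactly the "proxies to a single constructor" problem.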
Schools teach OOP as though adding new types of objects is the norm—like every type of software construct is actually a GUI widget in disguise and we're going to be adding new interoperable subclasses every other week, so we may as well get the infrastructure set up to make that easy.
In most real world applications, the only type of object that behaves that way is, well, GUI widgets. Nearly every other type of construct in a typical system will have at most one implementation at a time (possibly two during a gradual transition). Factories, builders, and the whole design pattern menagerie aren't particularly useful when for the bulk of a system's life they're all just proxies to a single constructor.
I don't know if there's a good way to teach this out of someone besides just letting them experience it, but that's the insight that he needs—different types of code have different maintenance characteristics, and the tools he's been given were developed for a very specific type of code and don't apply here.
Entities in game systems tend to behave like that too when the whole thing is under development - but then (a) arguably a monster chasing you around is just a special GUI widget with extra behaviour and hit points; and (b) when things get really complex it doesn't hurt to switch from OOP to ECS for games.
I don't have a lot of knowledge on ECS, are there any good articles out there that compare it to OOP?
Anything that argues composition versus inheritance.
This tutorial: http://web.archive.org/web/20120314005352/https://www.richar... although now defunct, makes a good start. Around halfway through there's a heading "Abandoning good object-oriented practice" which gets to the point you're asking about.
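To make the composition-versus-inheritance comparison concrete, here's a minimal, hypothetical ECS sketch in Python (real engines store components in contiguous arrays for cache efficiency, but the structure is the same): entities are just IDs, components are plain data, and systems are functions over whichever entities carry the right components.

```python
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

# Component stores, keyed by entity ID. No class hierarchy anywhere.
positions: dict[int, Position] = {}
velocities: dict[int, Velocity] = {}

def movement_system(dt: float) -> None:
    # Only entities that have BOTH a Position and a Velocity move.
    for eid, vel in velocities.items():
        pos = positions.get(eid)
        if pos is not None:
            pos.x += vel.dx * dt
            pos.y += vel.dy * dt

# Entity 1 is a monster (it moves); entity 2 is a wall (it doesn't).
positions[1] = Position(0.0, 0.0)
velocities[1] = Velocity(1.0, 0.0)
positions[2] = Position(5.0, 5.0)

movement_system(1.0)
```

Behavior comes from which components an entity happens to have, not from where its class sits in an inheritance tree.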
I recommend reading this book on Data Oriented Design[0], since the underlying reason ECS has been widely adopted is that, written correctly, an ECS system can take full advantage of modern hardware. Most ECS vs OOP blogs come at it from a perspective that I personally find flawed. OOP has a tendency to leave objects laid out poorly in memory, which prevents programmers from utilizing SIMD instructions[1].
[0] https://www.dataorienteddesign.com/dodbook/
[1] For instance, objects in OOP are typically bundled with other data related to the object. As an example, in OOP a Door object could have float X,Y,Z coordinates, an enum for the type of door it is, a bool for open or closed, etc. This leads to inefficient CPU cache usage when you are iterating through a list of doors and not using every field of the door object.
See this GDC talk for why this matters in the gaming world: https://www.gdcvault.com/play/1022248/SIMD-at-Insomniac-Game...
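As an illustration of footnote [1], here's the same Door data in both layouts, using only the Python standard library. (In Python the cache effect itself is muted; the point is the memory layout, which is what SIMD-friendly code in compiled languages relies on. The Door fields follow the footnote's example.)

```python
from array import array
from dataclasses import dataclass
from enum import Enum

class DoorKind(Enum):
    WOOD = 0
    METAL = 1

# Array-of-structs: each Door bundles every field together, so a scan
# over just the coordinates drags the kind/open fields along with it.
@dataclass
class Door:
    x: float
    y: float
    z: float
    kind: DoorKind
    is_open: bool

doors_aos = [Door(float(i), 0.0, 0.0, DoorKind.WOOD, False) for i in range(4)]

# Struct-of-arrays: each field lives in its own contiguous buffer, so a
# pass over the X coordinates touches only X-coordinate memory - the
# layout SIMD instructions want.
door_x = array('f', (float(i) for i in range(4)))
door_y = array('f', [0.0] * 4)
door_z = array('f', [0.0] * 4)
door_kind = array('b', [DoorKind.WOOD.value] * 4)
door_open = array('b', [0] * 4)

def max_x_soa() -> float:
    return max(door_x)  # reads one tightly packed buffer

def max_x_aos() -> float:
    return max(d.x for d in doors_aos)  # hops between scattered objects
```

Both functions compute the same answer; the difference is how much unrelated data the hardware has to pull through the cache along the way.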
At least where I went to school, the teaching staff in university CS departments seem to be overrun by guys who haven't actually worked in the software industry since the 90s when OOP was the hot new thing. My theory is they were there when they got to build a bunch of new systems with OOP, but didn't have to stick around for the nightmare of trying to maintain those codebases.
Where I live (Tenth largest city in Spain, so not that big) I'm pretty sure I can track misconceptions about OOP to whoever taught that class in the only university that had a CS degree around here some 20 years ago. I've seen like half a dozen teachers saying the same stuff explained in the same way and I'm sure none of them understand OOP at all. Only one was able to explain why getters and setters ought to be used besides saying "it's the standard" and to provide an actual, reasonable use case for it rather than the tired examples of "Yeah OOP is cool because since both cats and dogs eat and have 4 legs you can keep a lot of the code together rather than duplicate it".
Software development is reliant on self-learning, but still, some education can be outright damaging.
Experience will make him see this.
Experience did the same for me.
It just takes some time.
Not if he does the type of job hopping typical in silicon valley. There are plenty of programmers who only stay at a job ~2 years, and never have to maintain a program over a long period. You could imagine such a person saying 'this worked well at my last job' and just keep introducing that kind of pattern to new companies, eventually with '20 years of experience'.
Of course really it is 6 months of experience repeated 40 times.
Agreed. It's "fun" to start breaking your code down into small bits; more interchangeable "pieces" -- until you have to maintain it for 10 years (or worse yet, explain it to a new junior dev) and the extensibility you envisioned never came to fruition and now it's just a liability spread across 8 source files.
Make him maintain one that someone else wrote.
That or make him try to maintain his own code a year later when it may as well have been written by someone else. This is really the only way to properly understand the problem with making all those abstractions. It all makes sense when you have the whole structure in your head but it's very hard to build that mental model back up later.
Or... get them to maintain their own code 14 years in the future. Hard to do in practice, but once you've done it, you get a new perspective.
I got a call from a guy saying "hey, this broke". I had not touched this system in 13 years. I had to re-remember a bunch of stuff. Some of the code was still a joy to work with. The parts that were hard were... obvious, and even then, I remember cutting corners, not documenting stuff, and overcomplicating things "just in case" (which mostly never happened).
There's no amount of someone 'telling' me ahead of time that would have conveyed how much difficulty I was leaving for the next person (or even myself). It really seems like the only way to get this experience is with time. That doesn't mean you can't follow some best practices that turn out to be beneficial. But until you've experienced the downsides, you won't really be able to internalize the why regarding something being good or bad.
Every abstraction for extensible code implicitly assumes certain kinds of extensions and makes other kinds harder. Make him extend it in one of the harder directions :D
Good idea
It's hard in practice because the industry values delivering something over everything else. I don't know how many times I've seen decent architectures turn to complete crap because leaders felt obligated to put their stamp on it.
There’s another heuristic that he broke.
If you can easily read and understand some code, then it’s good code and you should leave it alone.
Premature generalization is the root of all evil nowadays.
We came up with a rule, “no single-use abstractions”, so any abstraction with a single caller gets absorbed by its caller. We eventually had to add an exception for MVC boundaries (the project is MVC-organized), or else we got some weird code organization (controllers consuming everything). It works pretty well though. In some cases we have some long methods, but it really killed the premature-abstraction “this will be useful in the future” discussions that eat a lot of time and energy. I think having a simple rule for everyone makes it easier to follow.
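A tiny hypothetical Python example of the rule in action (function names invented for illustration):

```python
# Before: a single-caller helper that only adds indirection.
def _normalize_name(name: str) -> str:
    return name.strip().lower()

def register_user_before(name: str) -> str:
    return _normalize_name(name)

# After: the helper is absorbed by its only caller. The function gets a
# little longer, but there is one less place to look when reading it.
def register_user_after(name: str) -> str:
    return name.strip().lower()
```

If a second caller ever appears, extracting the helper back out is a cheap, mechanical refactor; guessing at it in advance is the expensive part.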
Show him this discussion
I think an automatic code formatter actually makes one’s job easier, not more complicated.
That completely removes a whole slew of useless comments when people are reviewing code. It's such an amazing win that every single language should have its style guide published, along with a tool that enforces it without any options.
The key to style guides is not to have too many rules. Agree on the amount of spacing and where to put braces. Past that, it’s obsessing over form rather than function, which you cannot control anyway simply by virtue of having many people work together on the same codebase.
You should worry more about how things are named, the number of abstractions used, and whether the code has any comments explaining the writer’s intent.
This is broadly what reasonable people believe, but there are crazy people who WILL obsess over form and nitpick your whitespaces or other trivial bullshit if given a chance. An authoritative style guide shuts down many such detours.
It's not even just about obsessive nitpicking. There are people who, for example, prefer code with spaces around the parens for conditions and people who don't. If there's no enforced style guide and one from each camp end up touching the same code, you're likely to get a bunch of pointless noise in the diffs. Even if no one is going to hold up a diff arguing over it, it makes it harder to review changes.
Yes, but, on the other hand having a detailed style guide does make for short and simple debate during code review.
I think you can have as many rules as you want as long as there's something like black or rustfmt, just a script you run that auto-formats your code. You never have to worry about the rules because there is no configuration thus nothing to argue about. Sometimes your code gets formatted weird but who cares, just do your work.
This often leads to an extremely annoying codebase, because languages that try to enforce style guides without proper options just create inconsistency once any code in another language enters the codebase.
Just having an options file that is checked in with the code, and enforcing whatever is set in there, works much better. You still avoid all the useless discussions about formatting while also being able to choose sensible settings that are consistent with the surrounding technology.
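For example, with a formatter like Black the checked-in options file is just a section of pyproject.toml at the repo root; the values here are illustrative, not a recommendation:

```toml
# pyproject.toml, checked in with the code so every developer
# (and CI) formats with exactly the same settings.
[tool.black]
line-length = 100
target-version = ["py311"]
```

Everyone runs the same tool against the same file, so there is nothing left to argue about in review.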
Not sure I understand this. Do you mean tools like "go fmt" are going to try to format your Java code?
This is really simple:
* All our code must be linted / formatted
* All their code must be ignored.
Absolutely! I'm grateful that I don't have to worry about code formatting any more. But I remember that in one of my earliest jobs the company used style checkers as a pre-commit hook that rejected your commit if they found trailing whitespace. That was before code formatting was part of your IDE. (Especially for us front end devs who used Notepad++ rather than an IDE at the time.)
And notepad++ had no easy way of showing trailing whitespace. So every commit was a dance of commit -> read rejection log -> remove trailing whitespace -> commit again.
What's stopping you from configuring your editor to automatically strip trailing whitespace on save?
Code reviewers that made me type back in trailing whitespace the editor had stripped: "this change is on lines you are not modifying". Still makes my blood boil many years later.
Mostly the fact that this was 15 years ago :-)
I have memories of my postgrad school, where any deviation from the expected source code formatting led to a -1 on the mark. On the first few projects it was not rare to see students with negative marks well into two or three digits.
Things that don't matter much. A nice consistent style is good to have, but it isn't something worth worrying about that much.
I once worked on a project that would not even compile if a function had a param without a comment explaining the purpose of that param written with a specific format. Every comment was validated on compile time, you couldn't even comment code just to test something. Life was hell.
Not an intelligent quotation. Hardness can be defined as how long it takes to accomplish a task. Following this definition everything can be debugged. It just takes twice as long.
This definition isn't absolute and neither is the definition by Brian. The truth is much more complicated.
Your premise is false, time to completion and difficulty are not equivalent.
Believing that implies it’s more difficult to ride in an aircraft around the world than to legitimately beat Magnus Carlsen in a rapid chess game.
The reverse is frequently the case where running a marathon faster is more difficult than doing so slower.
It's as "false" as his premise of relating difficulty to intelligence. My point is the existence of an alternative statement that is equally and fuzzily true shows that this quote is not really that intelligent.
Reread my post. I literally said neither definition is absolutely true.
I understood what your post was saying, but they aren’t fuzzy equivalents. The basic premise is just false.
Further, the quote wasn’t suggesting equivalency. Rather, it proposes intelligence as one bound on debugging, which is clearly true, as you can’t get a flatworm to do it. When trying to debug really clever code, the easiest solution can be giving up and starting from scratch.
Wrong. A more complex program has more possible origins for the bug. So you need to make more hypotheses to check and verify the bug. Time and intelligence are both factors here. Sometimes one more than the other.
Clearly you don't think bugs are all solved in seconds and limited only based off of intelligence. A harder bug often needs more time to solve. This is common sense.
You're just sinking with the ship now.
This isn’t a discussion about all bugs but the class of bugs created from dealing with clever code. Very difficult bugs may be fairly quick to solve in comparison to simple bugs that require some long process to replicate. Time to solve really doesn’t map well to difficulty.
False, you clearly missed me stating that intelligence was “one” limitation, not the only limitation. Poor tooling can be a massive pain, among many other things. Again, though, this is talking about debugging a very specific kind of unnecessarily complex code.
It does. A bug solved in seconds is usually considered less difficult than one solved in weeks. It maps easily.
The quote was suggesting absolutism: one limitation. By showing the existence of an equivalent alternative I've shown the quote is not absolute. Therefore the quote is not intelligent. Therefore your statement is false and nonsensical.
Luck wildly impacts how long ‘difficult’ problems take to solve. I’ve solved bugs in seconds someone literally spent weeks and asked multiple people to help them solve and I’ve had the same thing happen to me.
Thus actual solve time and absolute difficulty are almost orthogonal.
Again no, it was saying one limitation becomes significant in a specific situation. Often these bugs may not actually take long to fix, but you’ll suffer when dealing with them. Even straightforward off-by-one errors can be annoying when you have to reason about really tricky bits of code.
That feeling where you spend an hour staring at an IDE with absolutely no clue what’s going on sucks even if it doesn’t take that long to actually fix the issue.
No, it just means luck is another factor. You have luck, intelligence and length of time. Time is correlated with probability, right? You can get lucky and guess right from the get-go.
Either way you introduced a third possibility here which goes further to illustrate that this quotation is inaccurate and not intelligent.
False. You are absolutely wrong. The statement was made without qualification to a specific situation. Therefore it is made in the context of the universal situation, meaning it is absolutist. Sinking with the ship again.
So? This doesn't have anything to do with the topic at hand. The topic at hand is the quotation is wrong. How you feel during debugging is off topic.
Yes, but (not trying to in any way discredit half of K&R here) it can be useful to write clever code – ideally for personal projects – to both see what the language is capable of, and to learn first-hand why this quote is evergreen.
Clever code should be wrapped up in a parser, and you should approach the problem from a meta perspective, as Mr. Kernighan did with his help in writing AWK.
Recursive descent, flex/bison, parser combinators… these are the tools to manage complexity. This is why lispers go so nuts once it fully dawns on them the power of manipulating the AST along with any other data structures in the program… it’s just a shame about all those parens!
We’re not writing machine code; most are not writing assembly, or even C or Rust. Most work in the memory-managed languages, and it seems our abstraction has stalled there, with endless language features applied to our current layer of abstraction. It’s like goto programming before better abstractions were discovered.
I think this is explained by the fact that most business logic is dry, and most abstractions are interesting (until you get bored of them). So you wrap your actual code in fun code to make the job bearable.
This is the likely cause - most software development is dreadfully boring. But at some point in a developer's career, they will have touched the monolith and everything is full of stars, and instead of saying "I'll just hook you up with Magento", they're like "I will build you an event-driven microservices architecture in a custom made C# framework" and disappear for six months while they work late nights.
(I wish I'd made it up. This was what a CTO at a previous project did. 30-odd people were waiting and churning on while he was unavailable because he had to indulge his own things. And once they were beyond the point of no return, both he and the manager that greenlit this project quit, but stayed on as independent contractors. I believe they were demoted or taken off the project and eventually gotten rid of when a new manager was found.)
This is a happier ending, could have been "he finished the project, it turned out to be bad but now everyone has to use it".
New life goal: become CTO.
I don't think that business logic is dry per se. The problem in my opinion rather is that in other parts of the software project, there is much more openness with respect to
- trying out new things in new ways
- making the code more elegant
- seeking abstractions
- ...
than in the business logic area.
Believe me: for the kind of business logic that I see at work, I could immediately see ways in which the (non-trivial) business logic could be made much more elegant by using clever mathematical ideas, but suggesting such ideas to other colleagues or the bosses is like talking to a brick wall.
It seems to me that if you can see elegant simplifications (or really any significant improvements) but are unable to implement them, you are either positioned too low in the hierarchy, or in the wrong organization.
I'll upgrade my original comment: I think the business code is the present, and the abstraction is the wrapping. It looks pretty on the surface, and you have to do a bit of unwrapping to see what's actually there!
True. On the other hand, don't try to make things simpler than they are.
https://en.wikipedia.org/wiki/Waterbed_theory
True. On the other hand, the reason things are not simple is probably that your requirements are dumb.
https://youtu.be/hhuaVsOAMFc?t=12
Seeing that that's attributed to Apocalypse 5, and Apocalypse 5 was >20 years ago makes me feel ... something not good.
I believe we have another name for this theory: leaky abstraction.
Know plenty of people who call this job security
It's worse if your team uses a house-built framework that's too complex for its own good. There's a whole team devoted to the care and feeding of a framework that could be replaced with source code templates or just a clear architecture document with a "cookbook" style set of accompanying documents. God forbid you try to use something else; you'll get inundated with "But that's the company standard!" style job protection complaints.
Good advice; it improves maintainability, which becomes even more important the larger a codebase becomes. https://grugbrain.dev/ talks a lot about avoiding complexity, and does so in a pretty entertaining writing style too.
Known as the KISS principle - Keep It Simple Stupid! Taught to me at university 20 years ago and still valid today.
Great advice! I wish more people had that insight nowadays.
I think good software like good algorithms is made from simple patterns that do complex things.
Don't solve simple problems with complicated solutions