Learning that some folks can produce so much value with crappy code.
I've seen entire teams burn so much money by overcomplicating projects. Bikeshedding about how to implement DDD, Hexagonal Architecture, design patterns, complex queues that would maybe one day be required if the company scaled 1000x, unnecessary eventual consistency that required so much machinery and so many man-hours to keep data integrity under control. Some of these projects were so far past their deadlines that they had to be cancelled.
And then I've seen one-man projects copy-pasting spaghetti code around like there's no tomorrow that had a working system within 1/10th of the budget.
Now I admire those who can just produce value without worrying too much about what's under the hood. Very important mindset for most startups. And a very humbling realization.
As a good friend of mine often says: We work in a field of people who envision themselves as artists, when all that is wanted are painters.
No one wants to be a "programmer."
They'd rather be called "engineers."
With none of the licensure, mandatory education, and so forth. But the world needs programmers and technicians for most of the work we have to do. I do comparatively little "engineering," and I can recall the few pieces of work I've done that would qualify for such a label. The majority of the work is programming and technician work. Nothing wrong with that.
Maybe not in the U.S. but in Europe, Software Engineering is often a field of engineering where one can get licensed.
e.g. in Austria many IT techs and devs are called (at the doctor's office or in formal settings) "Mr. Engineer" ("Herr Ingenieur") if they fulfill some formal criteria and get licensed. A further qualification is becoming a federally certified civil technician for IT (Ziviltechniker or Ingenieurkonsulent) - also something that public sector contracts sometimes mandate.
Software around the world is engineered - it just often isn't in California.
Here's a story for you: I was born and raised in Switzerland, but I live in Germany now. When I first moved here, I was working as a freelancer and needed to register as a sole entrepreneur (Einzelunternehmen, one of the legal entity forms here).
Now, there are several types of taxes you pay in Germany based on your income. One of them is the income tax; the other is called Gewerbesteuer (maybe best translated as "commercial tax") - and the Gewerbesteuer can be significant (something like 12% of your income). Many freelancers are exempt from the Gewerbesteuer, and so are software engineers. However, I learned that you only qualify as a software engineer if you have actually studied software engineering or a closely related field. I had studied business administration. I tried to explain to them that I have been working as a software developer for over 10 years, literally doing the same job as somebody who had studied software engineering - to no avail. They wouldn't exempt me from the Gewerbesteuer.
I'm not claiming that I'm an "engineer" or that my computer science fundamentals are as good as those of somebody who studied computer science, but the tax exemption shouldn't hinge on some bureaucratic distinction like what you studied a decade ago, but rather on what kind of job you're doing.
Welcome to Germany!
Bizarrely enough, I always had "Software Engineer" written on all my documents here in Germany: my old Residence Permit, tax documents, contracts, the speedy authorisation from Agentur für Arbeit when I moved here, etc. As far as BRD is concerned, I am a software engineer.
But I never really studied software engineering or anything related, only Electrical Engineering.
But nobody in the government knows that. I don't have my diploma papers with me and have no time or interest in going to my country of origin and asking my university for them.
It doesn't matter so much, because with a degree in electrical engineering you are allowed to call yourself an Engineer/Ingenieur, and that is the only thing that matters. It is an Ordnungswidrigkeit (administrative offence) to call yourself an engineer if you don't have at least a three-year degree from a university or university of applied sciences.
With a degree in computer science you are also allowed to call yourself an engineer in Germany, for anyone reading this and wondering.
Yeah, Germany is kind of a bureaucratic hellhole. And from what I know France is even worse.
As for formal education - I've seen dozens of people with an engineering diploma who can barely write FizzBuzz, so it is not the best indicator, and outside of big corps people mostly don't look at it when it comes to recruitment. Although that has kind of changed lately: as junior positions started evaporating, for candidates without experience the diploma is the main thing they check.
Yeah. I also never had problems getting a job or similar because of my non-CS degree.
Yet almost every major piece of software or idea started in California and almost nothing noteworthy in the space started in Austria. I think that tells us something about the real value of a piece of paper granting you the right to call yourself an "engineer".
IrfanView?
I'm sure there are many things which make this such an unfair comparison that it's not worth making. If there was a firehose of money spraying all over Austria as a result of it issuing the world's reserve currency, I'm sure the shoe would be on the other foot.
"Software around the world is engineered - it just often isn't in California."
Can you explain the difference between 'engineered' software, and the rest? I mean other than being created by someone with an engineering certificate or whatever.
I'm no defender of Degrees as a mark of quality
But, for me it implies that it gets planned and worked on formally like other engineering projects: lists of requirements, detailed proposals, design documents, BOMs, Gantt charts, the whole shebang
The alternative being the ultra lean SV style, where a project is lucky if even the public facing documentation is not at least N-months out of date
Having any documentation at all is pretty much a miracle.
Normally in startups it is very common to have the only available documentation be the code itself.
I live in a country where software engineers often have engineering degrees.
What that means in practice is that they study math and physics for the first 2-3 years of their degree, instead of computer science or software engineering.
Does that make them build better systems than Californian devs? Based on company revenues and salaries, I’d say no.
The majority of Silicon Valley devs are immigrants though; they are good because they are sourced and filtered from all over the world, not thanks to American education.
Many of the immigrant devs attended American institutions like Stanford, Georgia Tech, MIT, CMU, one of the UCs, etc.
A gatekeeping organisation you have to pay to enter the workforce isn't the same as something actually being "engineered".
Actually I call myself a developer. I'm not that good of a programmer, but quite resourceful as a developer.
Only reason I sometimes say I'm a programmer is because developer sounds like I work for an NGO
Yeah, developer is much more ambiguous. Could also be a real estate developer, or a business developer (fancy word for "sales").
I've come to understand the difference between programmer and software engineer, but what's the difference between (software) developer and programmer?
Developers build software whereas programmers merely flick the switches on an Altair 8800 /s
I call myself an engineer - but the way I understand it, I still have to know when to use a hammer, i.e. simply copy-paste code rather than build some complex solution.
The problem in software is that very frequently you don't have a real hammer. You have a hundred amalgamations of Swiss Army knives that do everything. Like, what is the hammer when you want to add some nice dynamic content to a website? Is it just JavaScript, or maybe React? Angular? Svelte? Or another example: you want to code a command-line app; what language is the hammer here? C++? Go? Bash? Python?
This analogy falls apart the moment you apply it to anything in software.
Well, the engineer's first move should be: "do we really need this dynamic content to achieve the overall goal, or can we do without?"
Most of the time, however, stakeholders will prefer that someone unleash the latest shiny tech stack (one that no one on the team has mastered and that is about to be dethroned by something even newer) to deliver a pixel-perfect reproduction of the mock-up, rather than question the relevance of the proposal.
YES! Couldn't agree more. Although my title has always been Software Engineer, I never introduce myself that way (I prefer Software Developer or just Programmer, or I say "I write code at XYZ").
I went to an engineering school. Actual engineers design bridges and machines and biochemical whatnot, and of course require licensing. I push code around.
I even try to subvert company culture and say things like "Well developers will need to update that API" instead of "engineers".
Civil engineers are one thing and large projects require licensed people to be involved. Electrical engineers can get a P.E. in the US and I know at least one such person, but it's not required AFAIK for any EE job. I'm not sure what you mean by "actual engineers", do civil engineers have a lock on that word?
But other branches of engineering are also about doing a lot of repetitive work, exactly what "programmers" do.
Engineers shouldn't compromise on things like safety, security, legislation, budget, properly documenting, properly communicating with others, etc. Same as anyone working professionally with code should, no matter the title.
But there's nothing that says that engineers should only be doing groundbreaking or interesting work.
The "artist" analogy works much better.
EDIT: Perhaps the main difference is what psychoslave mentions below – engineers are expected to question the relevance and necessity of requirements, and work together with business, rather than just doing as asked.
I prefer the term "developer". In most cases, software is a craft discipline, not engineering.
Lately I've come to understand the term engineer somewhat differently, thanks to this excellent video from the 'engineerguy': https://www.youtube.com/watch?v=_ivqWN4L3zU
He defines the Engineering Method as: 'Solving problems using rules of thumb that cause the best change in a poorly understood situation using available resources.'
In that sense a Software Engineer would not necessarily need to know all the complexities of the complete system to make something work as specified. And I would equate a 'Programmer' more with the Scientist or Mathematician in this case.
Funny how there are so many different ways to look at a title.
To extend that further, most professional painters think of themselves as sanders.
Why?
It’s usually more than 80% of the work. Painting something is really fast with modern sprayers but sanding off the old layer is still a lot of elbow grease, even when you have sand blasters.
This articulates, quite nicely, something I've been thinking about a lot recently.
Tell us more! :)
So we should put a bug in the room and if the painter whistles a tune while working, sanction them for slacking off?
It's the same in the creative industry, no? Teens fall into it practicing the techniques of artistic self-expression, wanting to make a career out of showing the world themselves with expressive finesse, but the alienable value in these skills lies in using them to create a consistent brand identity etc.
Some of the best are in fact also painters.
https://paulgraham.com/hackpaint.html
This is a false dichotomy. On one end, you have "overarchitects everything so much that the code is soon unmaintainable" and on the other end you have "architects the code so little that the code is soon unmaintainable".
Always write the simplest thing you can, but no simpler. Finding that line is where all the art is.
It's cliche, but I really do feel reading the Art of Unix Programming gave me a very good sense early on for how to walk this line carefully. Unix programs are high quality - but they're also, ideally, small, and written with an eye to compositionality with the rest of the ecosystem. The best architecture is in the middle, and often the best first draft is a simple series of pipes.
https://www.catb.org/~esr/writings/taoup/html/
(Honest question)
What is the difference between this and the microservice architecture that gets a lot of hate here?
I'm not the person you're replying to, but here's my take: even though the two look conceptually similar, Unix programs are just a lot simpler. All programs run on the same machine, they read their input, execute, produce output which is piped to the next program, and terminate. Want to change something? Change one of the programs (or your command line), run the command again, that's it.
Microservices are a lot more complicated. You'll need to manage images for each of the services, a fleet of servers on which the services will be deployed, an API for services to communicate together, etc. In many (most?) cases, a monolith architecture will be a lot simpler and work just fine. Once you reach the scale at which you'd actually benefit from a microservice architecture (most companies won't ever reach this scale), you can start hiring devops and other specialists to deal with the additional complexity.
What actually gets hate, I think, is not microservices themselves, but the fact that microservices are often used in contexts where they are completely unnecessary, purely because of big tech cargo culting.
We only think of Unix programs as simple because we have many more abstractions nowadays. But you should compare a Unix program with DOS programs (probably CP/M also, but I never wrote those myself) at the time: poking directly at the hardware, using segmented memory, dealing with interrupts. The idea that a program should be well behaved, should accept things as inputs, should push outputs, and should have a virtual address space are actually huge abstractions over what could just be a block of code run on a bare OS. I'm not saying that microservices are better than monoliths, just that Unix programs aren't as simple as we think they are in a world where we're managing servers like cattle and not like pets.
That's a great question. Some might say it's because of the network - that makes microservices messy and so on. But I don't think so: from what I remember of Plan 9 (the OS, successor to Unix), Rob Pike wanted to make it so that there is no difference between an object on the network and one on the local machine. In the Unix philosophy, things have the same interface, so it's easy to communicate. For microservices that would be the REST API, which is unique to networked things. I honestly see a direct link between these ideas. Unix here is projecting a much nicer, simpler image, but nonetheless they seem to overlap a lot. The result in both cases seems to be a hard-to-debug network of small utilities working together. The saving grace for Unix is that you are mostly using stable tools (like ls, cat) and everything is on your system, so you don't get to experience the pain of debugging 5 different half-working tools.
Everyone wants to make network objects the same as local objects. Nobody's ever succeeded.
Microservices provide encapsulation and an API interface, but are not composable the way Unix programs are when, e.g., called by a Unix shell on a Unix OS.
Either microservice A calls into microservice B or there's a glue service calling into both A and B. Either way there's some deep API knowledge and serious development needed.
Compare with an (admittedly trivial, but even just doing that is orders of magnitude less complex than web APIs) `ls -1 | grep foo | sed s/foo/bar/g`, exit codes, self-served (--help et al.) or pluggable (man) docs, other "things are file-ish" aspects, signals (however annoying or broken they can be), and whatnot. There's a whole largely consistent operating system, not just in the purely software sense but in the "how to operate the thing" sense, that enables composition. The closest thing in http land would be REST, maybe, and even that is not quite there.
Because the Unix programs all use pipes as their interface. When you simplify and standardize the "API", composition becomes easy. Microservices are more like functions or modules, each running as a separate process - if you use the same language for the services and the glue, you could just compile them all together and call it a program, right?
Microservices require Kafka (or a Kafka equivalent)
You can do something like 0mq, but still need something to coordinate configurations on where service-x is, like etcd.
Composability.
This does not mean "it's a small component in a dedicated pipeline". It means "this is a component that's useful in many pipelines".
In theory, only the network boundary. Which allows you to independently scale different parts of the system
In practice, a way of splitting work up between teams. Also it makes it easier to figure out who to blame if something breaks. Also a convenient unit for managerial politics
So because manager X "owns" microservice Y, it's going to stay around so that they have something to manage. Over time the architecture mirrors the organization
If somebody created a complex system of one hundred small Unix utilities that all talk to each other over pipes, I am sure it would get and deserve a lot of hate. Unix utilities are nice for doing very small, simple things, but there is a limit.
My take:
Unix utilities are stand alone and used as needed for specific tasks. They hardly change much, have no data persistence, usually no config persistence other than runtime params, and don't know about or call each other directly.
Microservices are moving parts in a complex evolving machine with a higher purpose that can and do affect each other.
The problem is that they are microservices in name only. Where a Unix utility is a few hundred or a thousand lines of C code in its entirety, a microservice will depend on a complex software stack with a web server, an interpreter, an interface with a database, and so on.
It's easy to forget this complexity, but it comes at a cost in terms of performance and, above all, technical debt. The microservice will probably have to be rewritten in 5 years' time because the software stack will be obsolete. Whereas some unix utilities are over 40 years old.
That's a really good book. Thanks for mentioning it
That's not a "false" dichotomy, that's an actual dichotomy: it's a real thing. Reading your comment, even tho you don't say it, I get the feeling you'd be with the rallying cry, "Bikeshedders Assemble!" hahaha! :)
It's a false dichotomy because it falsely implies there are only two options. Better than either are the other options which lie in between.
Oh, I see. That's a good point. But I don't think hu3's comment was suggesting there's only two options, just illustrating some possible margins to describe the landscape.
Maybe stavros was hallucinating that strawman reduction in there, is what I think. Like, you don't have to say it's a false dichotomy unless that's the only way you read it. The existence of something between the margins should be obvious. Anyway, haha! :)
Well, it almost always becomes an actual dichotomy soon, for sufficiently large values of "soon."
Yes, the compounding effects of previous architectural decisions, but not if you take a balanced path, guided by awareness of the two extremes. So it needn't. Hahaha! :)
I don't think so. Time and time again the client will insist on stuff like "the customer only needs a single email address/phone number" but you're going to pay for that one later if you do the simple thing and add an "email" column.
Same for addresses.
And a whole bunch of other stuff...you need to normalize the heck out of your DB early on, even if you don't need it now. The code you can mostly make a mess of, just choose a good tech stack.
Go down the simple path to start, and refactor to a more complex solution when it makes sense to do so. If experience tells you the client is definitely going to ask for it later, add a “break condition” that tells you when you need to upgrade. You can put entry points into the code - comments, interfaces - to make it easier to do the upgrade.
Normalise the DB from the get-go (it doesn't really require much effort), then charge for the fact that "actually, we have a customer who has 2 email addresses".
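A minimal sketch of what I mean, in Python/sqlite just to keep it self-contained (table and column names are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    # One row per customer; note there is no email column here.
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    # Emails get their own table, so "a customer with 2 email addresses"
    # is just a second row, not a schema migration.
    conn.execute("""
        CREATE TABLE customer_email (
            customer_id INTEGER NOT NULL REFERENCES customer(id),
            email TEXT NOT NULL,
            is_primary INTEGER NOT NULL DEFAULT 0
        )""")
    conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Alice')")
    conn.executemany(
        "INSERT INTO customer_email (customer_id, email, is_primary) VALUES (?, ?, ?)",
        [(1, "alice@example.com", 1), (1, "a.smith@example.org", 0)])
    print(conn.execute(
        "SELECT email FROM customer_email WHERE customer_id = 1").fetchall())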
In many, many cases this doesn't work, and it crashes and burns the whole project/startup company when the refactor becomes necessary.
Sometimes messing up your fundamental architecture means that you hit a refactoring your company won't survive (while your competition grabs all the customers who wanted that feature your architecture doesn't allow).
This is where experienced lead engineers earn their worth - they understand which parts cannot be fudged and fixed later and need to be there from the get go.
You'll only pay if the project survives long enough for that new requirement to actually surface, which often it won't.
IME, it's easier to fix the latter than the former, if only because there is a limit to how large the latter can possibly become.
Small projects are easier to fix than large projects.
As much as I love simplicity, optimising for peak simplicity isn't always a good use of your time.
Simple enough is often good enough.
I had a colleague who was old school and loved optimising; everything he reviewed came back with tiny changes that would save fractions of a ms. His nemesis was a guy who was arguably the best coder I have ever worked with. Things came to a head in one meeting when old school said that if we did things his way our code would run quicker, and the response was legendary: 'If we coded like you, it'd run quicker because most of the functionality would still be in the backlog.' I still think about that when it comes to optimisation.
'Optimising' for simplicity would often be a good idea, though.
Optimising for speed of execution only matters some times.
The steps in this witty quote help put things in perspective as to what anyone should do first when in doubt: "Make it work, make it correct, make it fast".
And the 'fast' can be optional.
But sometimes you already have something that works, is correct and fast, but you still want to simplify: for example, when understanding _why_ that code is correct is too annoyingly complicated to explain and understand.
With AMD having 128-core CPUs and a 192-core part coming soon... Depending on what you're doing, and how you're doing it, there's a LOT of raw horsepower you can throw at a problem. For example, a classic RDBMS (PostgreSQL, MS-SQL, etc.) over more complex No/NewSQL solutions.
When you have an O(n^3) algorithm with input size > 1000, you're still better off making it O(n^2) than throwing hundreds of cores at it. OTOH, if you're using C++ it can be easier to throw some OpenMP at the obvious places for a quick short-term win without using complicated algorithms.
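To make that concrete, here's a toy sketch (hypothetical code, Python instead of C++ for brevity): counting triples that sum to zero drops from O(n^3) to O(n^2) with a hash lookup, a bigger win than any realistic core count once n grows.

    import random

    def triples_brute(xs):
        # O(n^3): try every i < j < k.
        n = len(xs)
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   for k in range(j + 1, n) if xs[i] + xs[j] + xs[k] == 0)

    def triples_fast(xs):
        # O(n^2): fix i < j, look the third value up by hash.
        # Assumes distinct values, to keep the sketch short.
        index = {v: i for i, v in enumerate(xs)}
        n = len(xs)
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if index.get(-(xs[i] + xs[j]), -1) > j)

    xs = random.sample(range(-10_000, 10_000), 300)  # distinct by construction
    assert triples_brute(xs) == triples_fast(xs)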
It breaks the flow of the quote, but it really should be 'make it fast enough'
I deeply hate this modern attitude.
It's factually correct, due to hardware and compilers (née transpilers) offering so much headroom, but part of me cries when you compare modern hardware utilization to how optimized-to-the-gills late-generation PS2 games were.
Optimising for simplicity first is almost always the right thing to do. Even if it turns out to be too slow you then have a reference implementation to confirm correctness against.
In my experience it's rare to correctly identify the bottleneck in code while writing it.
Oh, I had situations in mind where I can quickly write a simple version.
But sometimes, when I really torture my brain, I can spend a few days doing mathematical proofs etc. to come up with an even simpler solution.
That extra effort is only sometimes necessary. (But can be lots of fun to develop.)
Depends on your definition of simplicity.
Some view simplicity more as minimizing lines of code, i.e., fewer moving parts.
I view simplicity more as increasing lines of code, the goal being to make the code very verbose. Sometimes this means more moving parts, but smaller "movement" at each step.
There are other views of simplicity as well.
Why? It is so easy, just think of the work being done and pick the big parts, those are the bottlenecks.
Only reason I can see anyone fail at that is that they don't know how long things take in their programming language, but that takes very little time to learn, and once learned you will know the bottlenecks before writing.
In so many cases the bottleneck is bad data structures used everywhere; that often costs you 100x in runtime and doesn't show up in a profiler because it is spread out all over the codebase. That is the real bottleneck that never gets fixed and is why programs are slow today. To fix it you have to learn to recognize the bottlenecks as you write the program, not rely on a profiler. Profilers help you find how long things take; they are really bad at helping you understand how to make the program fast.
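As a hedged illustration of "spread out all over the codebase" (invented numbers, toy code): membership tests against a list cost O(n) each, look harmless at any single call site, and dominate in aggregate. The one-line fix is a set.

    import time

    ids_list = list(range(100_000))
    ids_set = set(ids_list)
    # In a real codebase these lookups hide in hundreds of call sites;
    # here they are gathered into one loop so the cost is visible.
    lookups = [(i * 97) % 200_000 for i in range(1_000)]

    t0 = time.perf_counter()
    hits_list = sum(1 for x in lookups if x in ids_list)  # O(n) scan per lookup
    t1 = time.perf_counter()
    hits_set = sum(1 for x in lookups if x in ids_set)    # O(1) hash per lookup
    t2 = time.perf_counter()

    assert hits_list == hits_set
    print(f"list: {t1 - t0:.4f}s  set: {t2 - t1:.4f}s")  # typically orders of magnitude apart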
That's just a rude thing to say. If you all coded like him you wouldn't be having the discussion.
The issue is when you have people who do not code with efficiency in mind, and someone who does think about those things reviews the code.
Most efficiency gains are probably marginal and not that impactful. So you're probably OK ignoring it. And it's true that bringing such things up during code review and then going back and changing it will take more time.
But if people wrote the code with efficiency in mind to begin with they likely wouldn't be spending much more (if any) time while writing the code. Just have to use their brains a lil.
This is it. People could write the code with a lean footprint the first time around.
And then you get an in-memory SQL database that is used for cached settings with a text-based SQL query to retrieve every configuration setting (thousands of times during a login) and have a login that takes many seconds to run.
Literal example... replaced with a lock-free hashmap, and reduced to the blink of an eye in terms of time.
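Not the actual code, obviously, but here's the shape of the fix as a minimal Python sketch (a plain dict standing in for the lock-free map): load the settings once, then serve every lookup from memory instead of running a SQL query per key.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
    db.executemany("INSERT INTO settings VALUES (?, ?)",
                   [(f"feature.{i}", "on") for i in range(1_000)])

    # Before: one parsed and executed SQL statement per setting,
    # thousands of times during a single login.
    def get_setting_slow(key):
        row = db.execute("SELECT value FROM settings WHERE key = ?", (key,)).fetchone()
        return row[0]

    # After: read everything once at startup and serve lookups from memory.
    SETTINGS = dict(db.execute("SELECT key, value FROM settings"))

    def get_setting_fast(key):
        return SETTINGS[key]

    assert get_setting_slow("feature.42") == get_setting_fast("feature.42")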
Reminds me of a mantra I picked up from somewhere on the internet:
1. Make it work
2. Make it right
3. Make it fast
Once you realize this, you are actually invincible as a SWE.
The problem with optimisation is that you first need to decide what you’re trying to optimise for. Latency? Throughput? Overhead? Correctness? Time to market? Ease of ramping up new hires?
I don't remember where I saw this quote, but... "It's okay to half-ass something, when all you need is half an ass".
EDIT: Totally agree about the 'important mindset for startups'; I had a similar eye-opening experience working in a startup with 'cowboy' code that was actually quite good, but I had to unlearn a bit of stuff I read as a junior/mid-level developer.
It was code that was well-architected, had well-conceived data structures, and had business value, but every "professional" coder would deem it bad quality, since it had no unit tests, did some stuff unconventionally, and probably would have many linter warnings.
> since it had no unit test
But aren't unit tests the half-assed solution? They are a wonderful help if you want to deliver something fast, but if you had more time you'd probably do something else.
It doesn't get said enough that unit-tests are not the ultimate in good code. There's a lot of shitty code with unit-tests, and there's a lot of really good code with no tests.
That's true, but a codebase without unit tests is apt to be deemed poor quality by professionals due to the time factor. Being able to release fast is an important quality of business software. Professionals don't look for perfect artistry, they want to see painter's tape in the right places to get the job done efficiently.
Going to mule that over.
Sometimes, in the interest of efficiency, you only use half your horsepower.
Or, in this case, half your asspower.
I think the asspower unit of effort makes more sense than story points.
Well-architected + well-designed means you can go back later and fix the code if the project survives. I'm hitting this personally right now: the amount of code I can write is dwarfed by the number of features needed to get something like a working demo out the door. I can spend time and make everything perfect, or I can squint, imagine how I will implement fixes later, and focus on making sure that I don't have to rearchitect.
Ask yourself: Will the amount of work it takes to "do this right" increase over time? (And by how much?)
I drew some quick placeholder art for a game I'm working on for fun. One day I might sit down and draw some good art, but not today. When the day finally comes to finish the art, the difficulty of creating that art will be no greater than it is now. In fact, with improving art tools, it may be easier to do later.
On the other hand, if I'm not quite happy with my database schema and I want to change it, I can spend a day on it now, or I can spend--based on my experience--months on it in the future. In fact, there's a good chance that if I don't fix the database schema now, it will never be done.
Exactly.
It was Gary Bernhardt in this talk: https://m.youtube.com/watch?v=sCZJblyT_XM
More Gary's talks:
https://www.destroyallsoftware.com/talks
Most scenarios only warrant fractional assing. Also this philosophy provides the opportunity to say “ok guys, this time… we have to whole ass this one,” which is always fun.
Sometimes you might even have to bring two whole donkeys to bear on a project.
or "If it's worth doing, it's worth doing badly."
Meaning that e.g. if you really need a ride, then a noisy, beat up old car will do.
There's a discrepancy in that the top comments here say both "some people can produce a ton of value by not caring about code quality" and "I didn't care about code quality and got bit later on". And people discuss that incurring technical debt with bad code can sometimes be worth it and sometimes not.
The logical implication of technical debt having an interest rate (i.e. It costs more to fix something later than now) is that like money, features have time value. This is what makes it worth it to incur technical debt.
Thinking about the implied interest rate of technical debt makes it easier to rank what should be prioritized for a fix. High interest debt should be paid off first. But you might also focus on refactors that reduce the interest rate, such as by modularizing your code so it's easier to replace down the line.
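To make the metaphor concrete with invented numbers: if a shortcut saves 2 days now and the clean-up will cost 10 days a year from now, the implied annual "interest rate" is 400%, and debt like that goes to the top of the pay-off list.

    # Invented numbers, purely to make the metaphor concrete.
    saved_now = 2      # days saved by taking the shortcut today
    cost_later = 10    # days the clean-up will cost
    years = 1.0        # when we expect to pay it back
    implied_rate = (cost_later / saved_now) ** (1 / years) - 1
    print(f"implied annual interest rate: {implied_rate:.0%}")  # 400%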
I learned this lesson when working on very fast-paced projects with way fewer developer resources than needed. To ship a feature on time, shortcuts in quality have to be made, but you learn to make the shortcuts in a way that is easier to go back and clean up later.
I became very fond of #TODOs.
My manager gave me grief the other day for adding a TODO that I had no intention of ever doing. TODOs are like a get-out-of-jail-free card. Don't want to do something the reviewer is likely going to call out? Just add a TODO.
That's why some people insist on having a name with each TODO, and some even want a name and a date.
some TODOs are actually WOULDDOs
I still find them helpful when the change is outside the scope of the current task. I especially like the ones that include both a reference to a Jira ticket (or similar), and an explanation of why, or any gotchas:
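Something like this (ticket IDs and wording invented here, but this is the shape):

    # TODO(FOOD-1234): add support for non-dairy milk options here.
    # Why: product doesn't need non-dairy yet, so the model below only
    # covers dairy. Gotcha: pricing assumes dairy; see FOOD-1298
    # (currently canceled) for the proposed non-dairy pricing before
    # extending this.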
This is a lot of stuff to add for some things, so we might not want it everywhere, but it helps explain to a reader why we don't currently have it built (product doesn't need X yet), but it also has hooks to existing (planned/canceled) tickets and is easily greppable (e.g. "non-dairy" or the ticket name).
I agree with this analysis. Another important factor is: how likely is it that this code will actually be used and drive business value?
If very likely, then you should invest in making the code high quality. If unlikely, then you should half-ass the code strategically.
I've declared quality bankruptcy. Decisions are now driven by user needs. Did I half-ass that feature? Yes. Is anyone actually using it despite crying it's essential? No. Then it's not getting cleaned up. Are they not using it because it's shoddy? I guess we'll never know.
Btw, if you like the financial metaphor, then technical debt is a bit of a misnomer. 'Technical debt' behaves a lot more like equity you sold than like debt you issued.
In the sense that the 'technical debt' is only something to worry about in the future, if your project goes anywhere. But if no-one ever uses your product, you don't need to fix the 'technical debt'.
Thank you! I like the metaphors because they allow me to think about related variables. What metrics would I use to compare different technical equity options? Is there an implied valuation of my project based on the % of developer time I have to spend fixing an issue related to the feature value?
I think one of the most valuable lessons I have learned in software engineering is that you can write entire projects with the express plan of rewriting them if they actually gain traction. If I want to prototype something these days, I will often write code that, while not quite spaghetti, would definitely not pass a proper code review. It's actually kind of fun. Almost like a cheat day on a diet or something.
Unfortunately, that rewrite step often doesn't happen. I can't count the number of times a prototype that was meant to be thrown away was actually put into production because "it's cheaper and faster than rewriting."
"Phase 2 never happens"
I’m working on a prototype now but I deliberately made it run entirely in the browser (indexeddb) to avoid the problem that I might be asked to put it in production!
Did the business make money, though? I think that's the law of the jungle
Yeah, there is that. I guess this comes with the caveat that you have to have enough say in the project that you can mandate a rewrite.
"There is nothing more permanent than a temporary solution that works"
I have no idea who said that, but I use it a lot at work when people want to cut corners with the intent to fix it later.
Having heard a variation of this comment many times, I keep waiting for an “aha” moment, where I see the light and abandon my obsession with minimalism and clean code.
But at least in science roles it hasn’t happened yet. Rather, I keep seeing instances of bogus scientific conclusions which waste money for years before they are corrected.
Being systematic, reproducible, and thorough is difficult, but it’s the only way to do science.
But that's just the point, for most problems most people have, you don't have to be scientific. If your invariants vary and it breaks 5% of the time that's fine and nothing bad happens.
Literally the only thing I tend to worry about up front is deployment automation. I've worked in so many environments that don't have it, or have some byzantine manual deployment strategy that just gets irksome and difficult. I'm a big fan of containers, even for single-system deployments. If only because it always bites you when you are under the greatest time pressure otherwise.
Beyond that, my focus is on a solution to the problem/need at hand, and less about optimizations. You can avoid silly stuff, and keep things very simple from the start. Make stuff that's easy to replace, and you often won't ever need to do so.
Most software isn't about science, and isn't engineering... it's about solving real world problems, or creating digital/virtual solutions to what would otherwise be manual and labor-costly processes. You can shape a classic rdbms into many uses, it's less than ideal, but easy enough to understand. Very few jobs are concentrated on maximizing performance, or minimizing memory or compute overhead. Most development is line of business dev that gets deployed to massively overpowered machines or VMs.
There's so much waste in the world, that it is unbelievable.
However counterintuitively, I have stopped caring about waste, and have been more focused on the value. Waste you can always optimize later if you want to, value creation is the difficult part.
I think the other replies miss an important part of your comment:
Some of these projects were so far past their deadlines that they had to be cancelled
Speed really is important, a lot more often than devs like to acknowledge. When a company is small and fighting for its life to get revenue, let alone become profitable, the code debt is often worth it. Fixing the code debt itself doesn't need to be any more thorough than necessary, either.
This is something that I like about the metaphor of 'code debt', and which tends to go over people's heads: debt can be a perfectly fine instrument, and just like real debt, it can be a good thing when leveraged wisely. The issue is more when debt is treated as 'free money' and is used carelessly.
At the same time, companies often fail to invest into long-term goals, like maintainability, increasing test coverage, or even bettering the internal UI, even when it’s their core, business-critical product.
And the other part is just the sheer number of projects that can't deliver that "speed" after a year or two because shoddy, quick, and poor decisions were made around code quality. Once you find out that your startup chose the wrong architecture because "we need to do it fast" and it needs to stop pivoting for 6 months to unfsck itself, it's mostly too late.
Unfortunately, the people who create those monstrosities hide behind the same "keep it simple and quick" excuse as the people who know how to prioritize.
It's interesting how many people here ignore that scenario - it's surprisingly common. Is it because most developers jump ship at that point?
I wonder if you’ll still admire them when they leave and the burden of maintaining the mess they left behind falls on you.
I'm numb to it. It's just a fact of life because tenures are so short, quality is often very opinionated, and jobs are readily available; you usually get a raise from the next job too. You do what you can to stay employed until you can get the next job lined up.
It has become overwhelmingly obvious that the industry is never going to reach a state where technical debt and bad code is exceptional and high quality is the norm. If there is a maximum amount of pessimism I could have, it would be for this.
I had only one job so far where code quality was a primary objective. It was fantastic, but it was also extremely slow and expensive. Slower and more expensive even than what you're probably thinking right now. I was shocked.
Nothing to maintain because all the overengineered projects got canceled.
My way of planning and writing software shifted a good bit after I went from working at a mid-size tech company to working for myself. Suddenly I cared very much about how long it took to reach a functional product and very little about developing sophisticated architectures.
Same here. This is what happens when business/developer interests align.
Simple UX too. Early adopters are so much more forgiving of a boring but functional user interface than we want to admit. It doesn't have to look amazing out of the gate. It just needs to do something cool.
Plus a lot of our overcomplicated architectures on the frontend are because we shoot for a UI that competes with products that have been iterated on for 15 years now by teams of dozens or even hundreds.
I have to kind of ask: maybe it's the design committee that instead produces the spaghetti in that case? They are trying to commit themselves to decisions without much knowledge about the problem they are solving. I understand this is the reality of business and so on, but let's not imply that this is good software design. The basic operation of abstraction (as this vague magical thing which creates concepts) has to have something to abstract from. When we do abstraction in a bubble, not informed by the problem - like the design committee does - what we get is abstractions made from abstractions: foundationless nonsense. Maybe it's the spaghetti monster, the rogue coder, the guy who actually tries to solve the problem, who is the real designer.
This comment deserves more upvotes.
I can totally understand the move-fast-and-break-things mentality, but I'd like to stress it's equally important to pay back the tech debt you incur. I am working on a massive Spring monolith that's somewhat of a huge pile of spaghetti, and when higher management decided to enforce code quality in the pipeline, it became a living hell to work on.
I can't even add a logger without having to refactor the whole 2000 line class while writing a whole-ass unit test for it. It's been a full year and I still have to deal with this sort of crap on a weekly basis.
The most ironic part is that most of the devs who cooked the spaghetti are now either in upper management or on the same DevOps team that's demanding we clean the mess up.
I've come to the conclusion that you can't architect yourself out of this.
What you might do is write tests early, so at least you have integrated a test rig to build on if the system actually gets any use.
Most of the work on mature systems is like this, rewriting and refactoring stuff. If it is very concrete, non-abstract, it's generally easier to work with in this phase than if it was cleverly and abstractly architected. Even if it's a big spaghett.
So hard to resolve this in my head - unless it's understood there's a "detonate" button attached to the code, the thought that it might escape, or worse, get passed on or sold(!!), is chilling.
Just leave comments for the most embarrassing pieces.
// really not the way to do this, but it works
// ideally, we'd do xyz, but this is good enough
The lack of these design elements in a solo project does not define crappy code. More often, it's due to a codebase being modified by multiple contributors, each wanting to get their work done quickly without paying attention to the overall context or considering others' work.
100% agree.
JUST_DO_IT (add value, fast) <-----------------------------------> WHAT_THE_BOOK_SAYS (enterprise arch etc etc)
The BIG problem is the total misconception that the WHAT_THE_BOOK_SAYS approach guarantees quality.
In reality, the quality of execution is independent of the development approach - you can make a complete mess with either approach; however, the WHAT_THE_BOOK_SAYS approach WILL cost you a LOT more time and money to discover your team has messed it up, and WILL cost a lot more time and money to fix.
My experience is that, just as your product evolves and grows, your engineering strategy should evolve and grow. There's a good reason why Fortune 500 companies have enormous IT teams using enterprise cloud technology... and there's a good reason why successful tech startups do the exact opposite... but hopefully, the startup will become a Fortune 500 company.
I can relate. I am now on a small team of half a dozen (2 front, 2 back, 2 devops). We've been smashing it for the last year: we replaced a hodgepodge of systems with a configurable framework and a modular integration layer to third-party systems. As a reward for our work, management is bringing in a consultancy we are supposed to train to replace us. That will be a very expensive and inefficient exercise. How do I know? Our small team was created to deliver what the previous consultancy could not do in 5+ years. That experience taught me once again that management in large organisations have no idea what they want to build, how to build it, or how to maintain it, and that's where large consultancies come in: milk the client for all they can and leave a massive mess behind them.
Like with all things in life, one needs to find balance between the two extremes you posted.
There's overcomplicated, there's too basic, and then there's simple.
I think it's fine adding just enough complexity and abstraction to make something malleable and manageable in the future
I’d argue that by definition, code producing value is good code. It may not be the best, but it has to be at least good.
Almost everything else about code is subjective, but value is objective.
That's not bikeshedding, but the product of these types:
https://en.wikipedia.org/wiki/Architecture_astronaut
...bikeshedding is when you have that all-day, all-teams meeting where marketing and design and management argue about whether to put X content in the header or footer.
This is how all modern software should be made.
Optimizing for 1M concurrent users, paying down tech debt, refactoring, and testing are all things that engineers love to cite that will somehow make everything better. The customer won't feel a single one.
Spotify uses 42 different versions of jQuery (among a dozen other libraries) and it's working just fine (with >1M concurrent users).
Even electron! That crap is so heavy, its (lack of) performance is notable. But! Many billion-dollar companies built on its back.
I laughed when an exec at Evernote told me that every year they go to MIT to try to recruit the top 1% of cs grads. What a waste of talent.
The team that prides themselves on "we'll take a bit longer to get things right" usually winds up taking much, much longer... and still gets it wrong.
By that point, the fast team would be on their third iteration.
Similarly, some (non-tech) entrepreneurs can produce a basic MVP that generates money from the start, sometimes at an insane MRR. They use something like Google Sheets or Bubble, or pay a dev shop a fraction of what a corporate project costs to get it done.
As a struggling solo tech founder, I'm in awe.
You refactor and abstract later. A key principle absolutely no one observes. You have to let people turn out slop and then let them clean it up.
Instead it's combative code reviews where everything has to enter the code base perfect.
Makes me sick.
Ohh yeah.
A few years ago I worked on a system that had replaced a previous system that was like that: a bunch of microservices with multiple instances, communicating via message queues, all in the name of scalability and high availability.
The actual nonfunctional requirements? Handle between 0 and a few hundred requests per day, and it would be just fine if they got delayed a few days as well.
The best part was that the overengineered previous system actually had far more outages and delays caused by its complex deployment and data model than the simple single-instance monolith that replaced it.
So much this. But there is a fine line somewhere in there. I have seen, and admittedly worked on, projects where time constraints simply do not allow you to polish all the bits and pieces. That's perfectly reasonable during POC/early development stages and, if anything, I encourage it: writing hundreds of tests when all requirements change three times a day is incredibly counter-productive, slows you down, and eventually burns you out. It may happen that crappy is much better than shiny, polished, and overcomplicated, if the project itself is not going to scale any further than it already has. And once you get a more complete picture of what the end goals are, then you can go back and gradually start doing things "the right way".
But I've also been at the other extreme. Take my old job, for instance, which, despite my cognitive dissonance at the time, I hated to a large degree because of this: relatively early stage with ever-changing requirements, endless dependencies, and a brutally unstable development setup where deploying a single pod in minikube was a terrifying prospect because everything was hanging by a thread and rebuilding the cluster took hours. That was made even worse by dozens of forked open source projects patched left and right to fit the needs, lagging years behind the original projects, wild dependencies between repos, no real observability over what was going on, and version control catastrophically packed with auto-generated code (like 80% or more), made worse still by the fact that everything was git push --force so they didn't have to deal with conflicts. Imagine having to explain that this practice should be avoided in nearly all cases.
In a nutshell: imagine crappy code and infrastructure which pretends to be, and is sold as, enterprise-grade software. I guess cognitive dissonance was a common theme in the company, since everyone was under the impression that everything was perfect. Which couldn't have been further from the truth.
Exactly this - I have a colleague who just fell into "solution mode" and started whiteboarding an LLM framework he could fine-tune, with RAG and some 3rd-party vector database. It's six months later and there's still nothing working, nor even a deliverable schedule.
Compare this to another colleague, with an almost identical use case, who just downloaded an open-source LLM, wrote some glue code, and set it loose in production. It's not pretty but it (mostly) gets the job done.
As the old adage goes: "Perfect is the enemy of good".